**arXiv:** 2308.09596
**Title:** Disparity, Inequality, and Accuracy Tradeoffs in Graph Neural Networks for Node Classification
**Authors:** Arpit Merchant, Carlos Castillo
**Published:** 2023-08-18T14:45:28Z
**Link:** http://arxiv.org/abs/2308.09596v1
# Disparity, Inequality, and Accuracy Tradeoffs in Graph Neural Networks for Node Classification

###### Abstract.

Graph neural networks (GNNs) are increasingly used in critical human applications for predicting node labels in attributed graphs. Their ability to aggregate features from nodes' neighbors for accurate classification also has the capacity to exacerbate existing biases in data or to introduce new ones towards members from protected demographic groups. Thus, it is imperative to quantify how GNNs may be biased and to what extent their harmful effects may be mitigated. To this end, we propose two new GNN-agnostic interventions namely, (i) PFR-AX which decreases the separability between nodes in protected and non-protected groups, and (ii) PostProcess which updates model predictions based on a blackbox policy to minimize differences between error rates across demographic groups. Through a large set of experiments on four datasets, we frame the efficacies of our approaches (and three variants) in terms of their algorithmic fairness-accuracy tradeoff and benchmark our results against three strong baseline interventions on three state-of-the-art GNN models. Our results show that no single intervention offers a universally optimal tradeoff, but PFR-AX and PostProcess provide granular control and improve model confidence when correctly predicting positive outcomes for nodes in protected groups.

**Keywords:** Graph Neural Networks; Node Classification; Algorithmic Fairness

† This work was partially completed while Arpit Merchant was visiting Universitat Pompeu Fabra, Barcelona, Spain.
## 1. Introduction

Classification on attributed graphs involves inferring labels for nodes in the test set given a training set of labels along with attributes and adjacency information for all the nodes. To address this task, Graph Neural Networks (or GNNs, for short) have exploded in popularity since they effectively combine attributes and adjacency to build a unified node representation which can be used downstream as a feature vector (Han et al., 2017; Karim et al., 2017). GNNs have found applications in a variety of high-risk application domains (as defined, e.g., in the proposed AI Act for Europe of April 2022), including credit risk applications (Kal [...] unfairness (Bogor et al., 2016); GUIDE uses a group-equality informed individual fairness criterion (Srivastava et al., 2017)). Second, dataset properties, training criteria, hyperparameter tuning procedures, and sometimes even low-level elements of an implementation, such as linked libraries, are known to significantly influence the efficiency and effectiveness of GNNs on node classification (Krizhevsky et al., 2014; Krizhevsky et al., 2014).
Third, while algorithmic discrimination may be reduced at the expense of accuracy (Krizhevsky et al., 2014), specific improvements and trade-offs depend on application contexts (Krizhevsky et al., 2014) and need to be evaluated to understand what kinds of alternatives may offer improvements over current approaches. Our goal is to address these limitations by focusing on the following questions:

1. How do we meaningfully benchmark and analyze the trade-off between algorithmic fairness and accuracy of interventions on GNNs across different graphs?
2. Is there room for improving the fairness/accuracy tradeoff, and if so, how?

_Our Contributions._ We categorize interventions designed to reduce algorithmic discrimination in terms of their loci in the machine learning pipeline: (a) pre-processing, before training; (b) in-processing, during learning; and (c) post-processing, during inference. Using a standardized methodological setup, we seek to maximally preserve accuracy while improving algorithmic fairness. To this end, we introduce two new unsupervised (independent of ground-truth labels), model-agnostic (independent of the underlying GNN architecture) interventions: PFR-AX, which debiases data prior to training, and PostProcess, which debiases model outputs after training (before issuing final predictions). In PFR-AX, we first use the PFR method (Krizhevsky et al., 2014) to transform node attributes to better capture data-driven similarity for operationalizing individual fairness. Then, we construct a DeepWalk embedding (Krizhevsky et al., 2014) of the graph, compute its PFR transformation, and reconstruct a graph from the debiased embedding using a method we call EmbeddingReverser. To our knowledge, this is a novel application of a previously known method with suitable augmentations.
In PostProcess, we randomly select a small fraction \(\gamma\) of nodes from the minority demographic for whom the model has predicted a negative outcome and update the prediction to a positive outcome. This black-box policy aims to ensure that error rates of a model are similar across demographic groups. This is a simple and natural post-processing strategy which, to the best of our knowledge, has not been studied in the literature on GNNs. We conduct extensive experiments to evaluate the efficacies of interventions grouped by their aforementioned loci. To measure accuracy, we use _AUC-ROC_; to measure algorithmic fairness, we use _disparity_ and _inequality_ (cf. Section 3). We compare the accuracy-fairness tradeoff for PFR-AX and PostProcess (plus three additional variants) against three powerful baseline interventions (two for pre-training, one for in-training) on three widely used GNN models namely, GCN, GraphSAGE, and GIN (Gin et al., 2017). Our experiments are performed on two semi-synthetic and two real-world datasets with varying levels of edge homophily with respect to labels and sensitive attributes, which is a key driver of accuracy and algorithmic fairness in the studied scenarios. We design ablation studies to measure the effect of the components of PFR-AX and the sensitivity of PostProcess to the \(\gamma\) parameter. Finally, we analyze the impact of interventions on model confidence. Our main findings are summarized below:

* No single intervention offers a universally optimal tradeoff across models and datasets.
* PFR-AX and PostProcess provide granular control over the accuracy-fairness tradeoff compared to baselines. Further, they serve to improve model confidence in correctly predicting positive outcomes for nodes in protected groups.
* PFR-A and PFR-X, which debias only adjacency and only attributes respectively, offer steeper tradeoffs than PFR-AX, which debiases both.
* When imbalance between protected and non-protected groups and model bias are both large, small values of \(\gamma\) offer large benefits to PostProcess.

## 2. Related Work

Legal doctrines such as GDPR (in Europe), the Civil Rights Act (in the US), or IPC Section 153A (in India) restrict decision-making on the basis of protected characteristics such as nationality, gender, or caste (Krizhevsky et al., 2014). While _direct discrimination_, i.e., when an outcome directly depends on a protected characteristic, may be qualitatively reversed, addressing _indirect discrimination_, i.e., discrimination brought about by apparently neutral provisions, requires that we define concrete, quantifiable metrics in the case of machine learning (ML) that can then be optimized for (Krizhevsky et al., 2014). Numerous notions of algorithmic fairness have been proposed and studied (Krizhevsky et al., 2014). Two widely used definitions include the _separation criterion_, which requires that some of the ratios of correct/incorrect positive/negative outcomes across groups are equal, and the _independence criterion_, which states that outcomes should be completely independent of the protected characteristic (Bogor et al., 2016).

_Algorithmic Fairness-Accuracy Tradeoffs._ Including fairness constraints often results in classifiers having lower accuracy than those aimed solely at maximizing accuracy. Traditional ML literature (Krizhevsky et al., 2014; Krizhevsky et al., 2014) has extensively studied the inherent tension between technical definitions of fairness and accuracy: Corbett-Davies et al. (Corbett-Davies et al., 2016) theoretically analyze the cost of enforcing disparate impact on the efficacy of decision rules; Lipton et al. (Lipton et al., 2017) explore how correlations between sensitive and nonsensitive features induce within-class discrimination; Fish et al. (Fish et al., 2018) study the resilience of model performance to random bias in data.
In turn, characterizing these tradeoffs has influenced the design of mitigation strategies and the benchmarking of their utility. Algorithmic interventions such as reweighting training samples (Fish et al., 2018), regularizing training objectives to dissociate outcomes from protected attributes (Krizhevsky et al., 2014), and adversarially perturbing learned representations to remove sensitive information (Krizhevsky et al., 2014) are framed by their ability to reduce bias without significantly compromising accuracy.

_Algorithmic Fairness in GNNs._ The aforementioned approaches are not directly applicable to graph data due to the availability of adjacency information and the structural and linking bias it may contain. GNNs, given their message-passing architectures, are particularly susceptible to exacerbating this bias. This has prompted attention towards mitigation strategies for GNNs. For instance, at the pre-training phase, REDRESS (Beng et al., 2017) seeks to promote individual fairness for the ranking task; at the in-training phase, FairGNN (Beng et al., 2017) estimates missing protected attribute values for nodes using a GCN-estimator for adversarial debiasing, and GUIDE (Srivastava et al., 2017) proposes a novel GNN model for a new group-equality preserving individual fairness metric. We do not compare against these since they are designed for a different task than ours and operate in different settings altogether; FairGNN, in particular, exhibits a circular dependency in that it uses a vanilla GNN to estimate the sensitive attribute in order to overcome the limitations of a different GNN for classification. We refer the reader to Dai et al. (Dai et al., 2017) for a recent survey. More relevant to our task, EDITS (Dai et al., 2017) reduces attribute and structural bias using a Wasserstein metric, and so we use it as a baseline for comparison.
At the in-training phase, NIFTY (Beng et al., 2017) promotes a model-agnostic fair training framework for any GNN using Lipschitz-enhanced message-passing. However, an explicit fairness-accuracy tradeoff analysis is lacking from the literature which, along with methodological differences, makes it difficult to benchmark the comparative utilities of these approaches. Therefore, we include these as baselines. We frame our study in the context of such an analysis and design one pre-training and one post-training intervention that offer different, but useful, tradeoffs.

## 3. Problem Setup

_Graphs._ Let \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) be an unweighted, undirected graph where \(\mathcal{V}\) is a set of \(n\) nodes and \(\mathcal{E}\) is a set of \(m\) edges. Denote \(\mathbf{A}=[a_{uv}]\in\{0,1\}^{n\times n}\) as its binary adjacency matrix where each element \(a_{uv}\) indicates the presence or absence of an edge between nodes \(u\) and \(v\). Define \(\mathbf{D}=\operatorname{diag}\left(\delta_{1},\delta_{2},\ldots,\delta_{n}\right)\) to be a diagonal degree matrix where \(\delta_{u}=\sum_{v}a_{uv}\). Let each node \(u\) in \(\mathcal{G}\) be associated with one binary sensitive attribute variable \(s_{u}\) indicating membership in a protected demographic group, along with \(d-1\) additional real or integer-valued attributes. Together, in matrix form, we denote node attributes as \(\mathbf{X}\in\mathbb{R}^{n\times d}\). Lastly, \(\forall u\in\mathcal{V}\), its binary, ground-truth, categorical label is denoted \(\mathbf{y}_{u}\).

_Graph Neural Networks._ Typically, GNNs comprise multiple, stacked graph filtering and non-linear activation layers that leverage \(\mathbf{X}\) and \(\mathbf{A}\) to learn joint node representations (see, e.g., Kipf and Welling (Kipf and Welling, 2015)). Such a GNN with \(L\) layers captures the \(L\)-hop neighborhood information around nodes.
For each \(v\in\mathcal{V}\) and \(l\in[L]\), let \(\mathbf{h}_{v}^{(l)}\) denote the representation of node \(v\) at the \(l\)-th GNN layer. In general, \(\mathbf{h}_{v}^{(l)}\) is formulated via message-passing as follows: \[\mathbf{h}_{v}^{(l)}=\textsc{CB}^{(l)}\left(\mathbf{h}_{v}^{(l-1)},\textsc{AGG}^{(l-1)}\left(\left\{\mathbf{h}_{u}^{(l-1)}:u\in\mathcal{N}(v)\right\}\right)\right) \tag{1}\] where \(\mathcal{N}(v)\) is the neighborhood of \(v\), \(\mathbf{h}_{v}^{(l-1)}\) is the representation of \(v\) at the \((l-1)\)-th layer, AGG is an aggregation operator that accepts an arbitrary number of inputs, i.e., messages from neighbors, and CB is a function governing how nodes update their representations at the \(l\)-th layer. At the input layer, \(\mathbf{h}_{v}^{(0)}\) is simply the node attribute \(\mathbf{x}_{v}\in\mathbf{X}\) and \(\mathbf{h}_{v}^{(L)}\) is the final representation. Finally, applying the softmax activation function on \(\mathbf{h}_{v}^{(L)}\) and evaluating cross-entropy error over labeled examples, we can obtain predictions for unknown labels \(\hat{\mathbf{y}}_{v}\). In this paper, we use AUC-ROC and F1-scores (with logits thresholded at 0) to measure GNN accuracy.

_Algorithmic Fairness._ We measure the algorithmic fairness of a GNN model using two metrics. First, _Statistical Disparity_ (\(\Delta_{\text{SP}}\)), based on the _independence criterion_, captures the difference between the positive prediction rates for members of the protected and non-protected groups (Dai et al., 2017). Formally, for a set of predicted labels \(\hat{\mathbf{Y}}\): \[\Delta_{\text{SP}}=\left|\text{Pr}\left[\hat{\mathbf{Y}}=1|\mathbf{s}=1\right]-\text{Pr}\left[\hat{\mathbf{Y}}=1|\mathbf{s}=0\right]\right| \tag{2}\] Second, _Equality of Opportunity_ (\(\Delta_{\text{EO}}\)), which is one _separation criterion_, measures the similarity of the true positive rate of a model across groups (Dai et al., 2017).
Formally: \[\Delta_{\text{EO}}=\left|\text{Pr}\left[\hat{\mathbf{Y}}=1|\mathbf{s}=1,\mathbf{Y}=1\right]-\text{Pr}\left[\hat{\mathbf{Y}}=1|\mathbf{s}=0,\mathbf{Y}=1\right]\right| \tag{3}\] Equation (3) compares the probability of a sample with a positive ground-truth label being assigned a positive prediction across sensitive and non-sensitive groups. In the following sections, we refer to \(\Delta_{\text{SP}}\) as _disparity_ and \(\Delta_{\text{EO}}\) as _inequality_ to emphasize that lower values are better since they indicate similar rates. Having defined the various elements in our setting, we formally state our task below:

Problem 1 (Algorithmically Fair Node Classification).: _Given a graph \(\mathcal{G}\) as an adjacency matrix \(\mathbf{A}\), node features \(\mathbf{X}\) including sensitive attributes \(\mathbf{s}\), and labels \(\mathbf{Y}_{V}\) for a subset of nodes \(V\subset\mathcal{V}\), debias GNNs such that their predicted labels \(\hat{\mathbf{Y}}_{\mathcal{V}\setminus V}\) are maximally accurate while having low \(\Delta_{\text{SP}}\) and \(\Delta_{\text{EO}}\)._

## 4. Algorithms

In this section, we propose two algorithms for Problem 1: PFR-AX (pre-training) and PostProcess (post-training).

### PFR-AX

Our motivation for a data debiasing intervention arises from recent results showing that GNNs have a tendency to exacerbate homophily (Srivastava et al., 2017). Final node representations obtained from GNNs homogenize attributes via Laplacian smoothing based on adjacency. This has contributed to their success in terms of classification accuracy. However, it has also led to inconsistent results for nodes in the protected class when their membership status is enhanced in their representations due to message-passing (Dai et al., 2017; Dai et al., 2017), particularly in cases of high homophily. Lahoti et al.
(Lahoti et al., 2018) design PFR to transform attributes to learn new representations that retain as much of the original data as possible while mapping equally deserving individuals as closely as possible. The key benefit offered by PFR is that it obfuscates protected group membership by reducing their separability from points in the non-protected group. Therefore, we directly adapt PFR for graph data to debias attributes and adjacency. Algorithm 1 presents the pseudocode for PFR-AX.

_Debiasing Attributes._ In order to transform attributes \(\mathbf{X}\) using PFR, we build two matrices. The first, denoted by \(W^{X}\), is an adjacency matrix corresponding to a \(k\)-nearest neighbor graph over \(\mathbf{X}\) (not including \(\mathbf{s}\)) and is given as: \[W^{X}_{uv}=\begin{cases}\exp\left(\frac{-\|\mathbf{x}_{u}-\mathbf{x}_{v}\|^{2}}{t}\right),&\text{if }u\in N_{k}\left(v\right)\text{ or }v\in N_{k}\left(u\right)\\ 0,&\text{otherwise}\end{cases} \tag{4}\] where \(t\) is a scaling hyperparameter and \(N_{k}\left(v\right)\) is the set of \(k\) nearest neighbors of \(v\) in Euclidean space. We first normalize \(\mathbf{X}\) using Min-Max scaling to ensure that all attributes contribute equally and then compute \(W^{X}\) as per Equation 4. The second matrix, denoted by \(W^{F}\), is the adjacency matrix of a between-group quantile graph that ranks nodes within protected and non-protected groups separately based on certain pre-selected variables and connects similarly ranked nodes. In the original paper, Lahoti et al. (2019) use proprietary decile scores obtained from Northpointe for creating rankings. However, in the absence of such scores for our data, we use one directly relevant attribute for the task at hand. For instance, in the case of a credit risk application, we define rankings based on the loan amount requested.
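As an illustration, the per-group quantile ranking just described can be sketched as follows; the helper name `quantile_fairness_graph` and the rank-based quantile assignment are our assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def quantile_fairness_graph(score, s, n_quantiles):
    """Between-group quantile graph W^F: connect nodes from *different*
    protected groups whose ranking score falls in the same per-group
    quantile. Illustrative sketch (hypothetical helper)."""
    n = len(score)
    q = np.empty(n, dtype=int)
    for g in (0, 1):
        idx = np.where(s == g)[0]
        ranks = score[idx].argsort().argsort()        # within-group ranks 0..|g|-1
        q[idx] = ranks * n_quantiles // len(idx)      # quantile index of each node
    # edge iff same quantile and different sensitive attribute value
    return ((q[:, None] == q[None, :]) & (s[:, None] != s[None, :])).astype(float)
```

Increasing `n_quantiles` makes fewer cross-group pairs share a quantile, yielding a sparser graph.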
Formally, \(W^{F}\) is given as: \[W^{F}_{uv}=\begin{cases}1,&\text{if }u\in X^{p}_{s_{u}}\text{ and }v\in X^{p}_{s_{v}},\ s_{u}\neq s_{v}\\ 0,&\text{otherwise}\end{cases} \tag{5}\] where \(X^{p}_{s}\) denotes the subset of nodes with sensitive attribute value \(s\) whose scores lie in the \(p\)-th quantile. A higher number of quantiles leads to a sparser \(W^{F}\). Thus, \(W^{F}\) is a multipartite fairness graph that seeks to build connections between nodes with different sensitive attributes based on similarity of their characteristics even if they are not adjacent in the original graph. Finally, a new representation of \(\mathbf{X}\), denoted as \(\tilde{\mathbf{X}}\), is computed by solving the following problem (Krishnan and Krishnan, 2019): \[\begin{split}\text{minimize}_{\tilde{X}}&(1-\alpha)\sum_{u,v}^{n}\|\tilde{x}_{u}-\tilde{x}_{v}\|^{2}W^{X}_{uv}\\ &+\alpha\sum_{u,v}^{n}\|\tilde{x}_{u}-\tilde{x}_{v}\|^{2}W^{F}_{uv}\\ \text{s.t.}&\quad\tilde{X}^{\top}\tilde{X}=\mathbb{I}\end{split} \tag{6}\] where \(\alpha\) controls the influence of \(W^{X}\) and \(W^{F}\) on \(\tilde{\mathbf{X}}\).

_Debiasing Adjacency._ To reduce linking bias in \(\mathbf{A}\), we apply a three-step process. First, we compute an unsupervised node embedding of the graph using a popular matrix factorization approach named DeepWalk (Krishnan and Krishnan, 2019). Formally, this is computed as follows: \[\mathbf{U}=\log\left(\text{vol}\left(\mathcal{G}\right)\left(\frac{1}{C}\sum_{c=1}^{C}\left(\mathbf{D}^{-1}\mathbf{A}\right)^{c}\right)\mathbf{D}^{-1}\right)-\log b \tag{7}\] where \(\text{vol}\left(\mathcal{G}\right)=2m\) is the volume of the graph, \(C\) represents the length of the random walk, and \(b\) is a hyperparameter controlling the number of negative samples. Second, using the same aforementioned procedure for debiasing \(\mathbf{X}\), we apply PFR on \(\mathbf{U}\).
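For concreteness, a dense sketch of the matrix in Equation (7) is given below. Truncating the element-wise logarithm at zero entries is our assumption (the text does not specify how \(\log 0\) is handled), and factorizing the returned matrix (e.g., by SVD) would yield \(\mathbf{U}\).

```python
import numpy as np

def deepwalk_matrix(A, C=10, b=1.0):
    """Closed-form DeepWalk matrix from Eq. (7), as a dense sketch.
    The element-wise log is truncated at 1 to avoid log(0) -- an
    assumption, since the text does not specify this detail."""
    d = A.sum(axis=1)
    vol = d.sum()                              # vol(G): sum of degrees (= 2m)
    P = A / d[:, None]                         # transition matrix D^{-1} A
    S, Pc = np.zeros_like(A, dtype=float), np.eye(len(A))
    for _ in range(C):
        Pc = Pc @ P                            # (D^{-1} A)^c
        S += Pc
    M = vol * (S / C) * (1.0 / d)[None, :]     # ... times D^{-1} on the right
    return np.log(np.maximum(M, 1.0)) - np.log(b)
```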
Third, we design a new algorithm to invert this debiased embedding and reconstruct a graph with increased connectivity between nodes in protected and non-protected groups. This algorithm, which we refer to as EmbeddingReverser, proceeds as follows. We initialize an empty graph of \(n\) nodes and locate, for each node \(u\), its \(\delta_{u}\) closest neighbors in the embedding space, denoted \(N_{\delta_{u}}\left(u\right)\), where \(\delta_{u}\) is the degree of \(u\) in the original graph. Starting from the first node (say) \(v\), for every \(w\in N_{\delta_{v}}\left(v\right)\), we check if \(v\) is present in \(w\)'s \(\delta_{w}\) closest neighbors. If so, we add an edge between \(v\) and \(w\) and increment counters corresponding to the current degrees of \(v\) and \(w\). We also increment a global counter maintaining the number of edges added so far. If the current degree for any node (say) \(u\) reaches \(\delta_{u}\), we mark that node as completed and remove it from future consideration. This continues either for \(T_{\text{SC}}\) rounds, where each round iterates over all nodes, or until \(m\) edges have been added. Thus, we seek to maximally preserve the original degree distribution.

### PostProcess

_Model Predictions._ Let \(\mathcal{M}\) be a GNN model trained on a set of nodes \(V\subset\mathcal{V}\). Let \(V^{\prime}=\mathcal{V}\setminus V\) represent nodes in the test set and let \(\mathbf{s}_{V^{\prime}}\) be their sensitive attribute values. For any \(u\in V^{\prime}\), denote \(r\left(u\right)\in\mathbb{R}\) as the original output (logit) score capturing the uncalibrated confidence of \(\mathcal{M}\). In our binary classification setting, we threshold \(r\left(\cdot\right)\) at \(0\) and predict a positive outcome for \(u\), i.e., \(\hat{\mathbf{y}}_{u}=1\), if \(r\left(u\right)\geq 0\). Otherwise, we predict a negative outcome. Denote \(\hat{\mathbf{Y}}_{V^{\prime}}\) as the set of labels predicted by \(\mathcal{M}\).
_Do-No-Harm Policy._ Next, we present our model-agnostic post-training intervention called PostProcess which operates in an unsupervised fashion independent of ground-truth labels. Different from prior interventions, especially Wei et al. (2019), PostProcess seeks to relabel model predictions following a _do-no-harm policy_, in which protected nodes with a positive outcome are never relabeled to a negative outcome. We audit \(\hat{\mathbf{Y}}_{V^{\prime}}\) and \(\mathbf{s}_{V^{\prime}}\) to identify all the nodes in the test set belonging to the protected class (\(s=1\)) that have been assigned a negative outcome (\(\hat{y}=0\)). Denote this set as S1-Y0 (and so on for S1-Y1, etc.). For a fixed parameter \(\gamma\in[0,1]\), we randomly select a \(\gamma\) fraction of nodes from S1-Y0 and change their predicted label to a positive outcome, i.e., \(\hat{y}=1\). Then, we update \(\mathcal{M}\)'s scores for these nodes to a sufficiently large (uncalibrated) positive value. That is, we post-process \(\mathcal{M}\) to be confident about its new predicted labels. Predictions for all other nodes in the test set remain unchanged. Algorithm 2 describes the pseudocode. 
```
Input: Test set \(V^{\prime}\); Sensitive attribute values \(\mathbf{s}_{V^{\prime}}\); Model predictions \(\hat{\mathbf{Y}}_{V^{\prime}}\);
       Model output scores \(r(\cdot)\) for \(V^{\prime}\); Flip parameter \(\gamma\); Uncalibrated confidence MAX-SCORE
Output: Updated model predictions \(\hat{\mathbf{Y}}_{V^{\prime}}\); Updated model output scores \(r(\cdot)\)

S1-Y0 ← ∅
for u in V′ do
    if s_u = 1 and ŷ_u = 0 then
        S1-Y0 ← S1-Y0 ∪ {u}
    end if
end for
P ← Randomly select γ fraction of nodes from S1-Y0
for v in P do
    ŷ_v ← 1
    r(v) ← MAX-SCORE
end for
```

**Algorithm 2:** PostProcess

_Choice of \(\gamma\)._ Determining a useful value for \(\gamma\) depends on two factors: (i) the imbalance in the test set with respect to the number of nodes in the protected class, and (ii) the bias in \(\mathcal{M}\)'s predictions towards predicting negative outcomes. If imbalance and bias are large, small \(\gamma\) values may be sufficient to reduce disparity. If imbalance is low and bias is large, then large \(\gamma\) values may be required. Let \(\hat{n}_{\textsc{S1-Y1}}\) denote the number of nodes in S1-Y1, and similarly for S1-Y0, etc. Then, disparity (Equation 2) can be rewritten as: \[\Delta_{\textsc{SP}}=\left|\frac{\hat{n}_{\textsc{S1-Y1}}}{\hat{n}_{\textsc{S1-Y1}}+\hat{n}_{\textsc{S1-Y0}}}-\frac{\hat{n}_{\textsc{S0-Y1}}}{\hat{n}_{\textsc{S0-Y1}}+\hat{n}_{\textsc{S0-Y0}}}\right|\] Our do-no-harm policy reduces \(\hat{n}_{\textsc{S1-Y0}}\) and increases \(\hat{n}_{\textsc{S1-Y1}}\), while their sum \(\hat{n}_{\textsc{S1-Y1}}+\hat{n}_{\textsc{S1-Y0}}\) remains constant. Thus, the first term in the equation above increases while the second remains the same. If the first term is only slightly below the second, PostProcess may overshoot and increase disparity; if it is far below the second, PostProcess reduces disparity.
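This effect of \(\gamma\) on disparity can be checked numerically. In the sketch below, `disparity` implements Equation (2) and `post_process` the flip step of Algorithm 2; the group sizes and prediction rates are illustrative, not from our experiments.

```python
import numpy as np

def disparity(y_hat, s):
    """Statistical disparity (Eq. 2): gap in positive-prediction rates."""
    return abs(y_hat[s == 1].mean() - y_hat[s == 0].mean())

def post_process(y_hat, s, gamma, rng):
    """Do-no-harm step of Algorithm 2: flip a random gamma-fraction of
    protected-group negatives (S1-Y0) to positive predictions."""
    y_hat = y_hat.copy()
    s1y0 = np.where((s == 1) & (y_hat == 0))[0]
    flip = rng.choice(s1y0, size=int(gamma * len(s1y0)), replace=False)
    y_hat[flip] = 1
    return y_hat

rng = np.random.default_rng(0)
# illustrative biased predictions: the protected group (s=1) gets fewer positives
s = np.array([1] * 50 + [0] * 50)
y_hat = np.array([1] * 10 + [0] * 40 + [1] * 30 + [0] * 20)
before = disparity(y_hat, s)
after = disparity(post_process(y_hat, s, gamma=0.25, rng=rng), s)
```

Here the protected positive rate is far below the non-protected rate, so flipping reduces disparity; pushing \(\gamma\) further would eventually overshoot.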
If \(\hat{n}_{\textsc{S1-Y1}}\gg\hat{n}_{\textsc{S1-Y0}}\), then PostProcess will have marginal impact on disparity. The effect on \(\Delta_{\textsc{EO}}\) follows equivalently, but may not be correlated with \(\Delta_{\textsc{SP}}\). Note that the impact of \(\gamma\) on accuracy cannot be determined during this phase due to the unavailability of ground-truth labels. So, in Section 5.3, we empirically analyze the impact of \(\gamma\) on accuracy, averaged over \(T\) trials for smoothing.

## 5. Experiments

In this section, we describe the datasets and the methodology used in our experimental study and report our findings.

### Datasets

We evaluate our interventions on four publicly-available datasets ranging in size from 1K to 67K nodes. For consistency, we binarize sensitive attributes (\(s\)) and labels in each dataset: \(s=1\) indicates membership in the protected class and 0 indicates membership in the non-protected class. Similarly, label values set to 1 indicate a positive outcome and 0 indicate a negative outcome. Table 1 presents a summary of dataset statistics.

_Semi-Synthetic Data._ German (Geman, 2017) consists of clients of a German bank where the task is to predict whether a client has good or bad risk independent of their _gender_. Credit (McCarthy et al., 2017) comprises credit card users and the task is to predict whether a user will default on their payments. Here, _age_ is the sensitive attribute. Edges are constructed based on similarity between credit accounts (for German) and purchasing patterns (for Credit), following Agarwal et al. (Agarwal et al., 2017). We add an edge between two nodes if the similarity coefficient between their attribute vectors is larger than a pre-specified threshold. This threshold is set to 0.8 for German and 0.7 for Credit.
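A minimal sketch of this thresholded edge-construction rule is below; cosine similarity is our assumption for the unspecified similarity coefficient, and the helper name is illustrative.

```python
import numpy as np

def similarity_edges(X, threshold):
    """Connect nodes whose attribute vectors have cosine similarity above
    `threshold` (e.g., 0.8 for German, 0.7 for Credit). Sketch; assumes
    cosine similarity and nonzero attribute rows."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    S = Xn @ Xn.T                          # pairwise cosine similarities
    A = (S > threshold).astype(int)
    np.fill_diagonal(A, 0)                 # no self-loops
    return A
```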
_Real-world Data._ In Penn94 (Penn, 2007), nodes are Facebook users, edges indicate friendship, and the task is to predict the graduation year (Krishnan et al., 2017) independent of _gender_ (sensitive attribute). Pokec-z (Agarwal et al., 2017) is a social network of users from Slovakia where edges denote friendship, _region_ is a sensitive attribute, and labels indicate professions.

| **Dataset** | Nodes \(\lvert\mathcal{V}\rvert\) | Edges \(\lvert\mathcal{E}\rvert\) | \(s\) | \(l\) | \(h_{s}\) | \(h_{l}\) |
| --- | --- | --- | --- | --- | --- | --- |
| German | 1K | 21K | Gender | Good Risk | 0.81 | 0.60 |
| Credit | 30K | 1.42M | Age | No Default | 0.96 | 0.74 |
| Penn94 | 41K | 1.36M | Gender | Year | 0.52 | 0.78 |
| Pokec-z | 67K | 617K | Region | Profession | 0.95 | 0.74 |

Table 1. Dataset statistics: number of nodes (\(\lvert\mathcal{V}\rvert\)), number of edges (\(\lvert\mathcal{E}\rvert\)), sensitive attribute \(s\), label \(l\), sensitive attribute homophily (\(h_{s}\)), label homophily (\(h_{l}\)).

### Methodology

_Processing Datasets._ Agarwal et al. (Agarwal et al., 2017) and Dong et al. (Dong et al., 2017) utilize a non-standardized method for creating dataset splits that does not include all nodes. Following convention, we create new stratified random splits such that the label imbalance in the original data is reflected in each of the training, validation, and test sets. For German, Credit, and Pokec-z, we use 60% of the dataset for training, 20% for validation, and the remaining 20% for testing. For Penn94, we use only 20% each for training and validation because we find that this is sufficient for GNNs, with the remaining 60% used for testing. Additionally, we adapt the datasets for use by PFR as described previously (cf. Section 4.1). For computing between-group quantile graphs, we choose Loan Amount, Maximum Bill Amount Over
Last 6 Months, Spoken Language, and F6 as ranking variables for German, Credit, Pokec-z, and Penn94, respectively.

_Interventions._ Each intervention in our study is benchmarked against the performance of three vanilla GNNs namely, GCN, GraphSAGE, and GIN, referred to as Original. We construct PFR-AX to debias \(\mathbf{X}\) and \(\mathbf{A}\) as per Section 4.1. For ablation, we consider two variants: (i) PFR-X, which applies PFR only on \(\mathbf{X}\), and (ii) PFR-A, which applies PFR only on a DeepWalk embedding and reconstructs a graph using EmbeddingReverser. We vary \(\gamma\) from 0.1 (10%) to 0.4 (40%) in increments of 0.1. For each \(\gamma\), we use the same hyperparameters that returned the maximum accuracy for vanilla GNNs and post-process their predictions as per Algorithm 2. For each seed and \(\gamma\), we randomly select a \(\gamma\) fraction of nodes from the protected class with a predicted negative outcome and smooth over 20 trials. We define heavy and light versions of PostProcess, namely (i) PostProcess+ and (ii) PostProcess-, in terms of \(\gamma\): PostProcess+ is defined at that value of \(\gamma\) where disparity is lowest compared to Original, and PostProcess- is set halfway between the disparity of Original and PostProcess+. We compare these with three baselines: (i) Unaware (which naively deletes the sensitive attribute column from \(\mathbf{X}\)), (ii) EDITS (Chen et al., 2018), and (iii) NIFTY (Chen et al., 2018). Previous studies do not consider Unaware, which is a competitive baseline according to our results (see below).

_Training._ We set \(k=128\) dimensions for DeepWalk. Depending on the dataset and interventions, we allow models to train for 500, 1000, 1500, or 2000 epochs. As per convention, we report results for each model/intervention obtained after \(T\) epochs and averaged over 5 runs.
Our training protocol differs from previous studies such as NIFTY, which train for (say) \(T\) epochs and report results for the model instance with the best validation score across those \(T\) epochs. This, combined with our stratified splits, is a key factor behind the materially different scores we observe compared to those reported by the original authors. To ensure fair comparison, we tune hyperparameters for each intervention and model via a combination of manual grid search and Bayesian optimization using WandB (Chen et al., 2018). The goal of this hyperparameter optimization is to find the setting that results in a model with a maximal AUC-ROC score while aiming for lower disparity and inequality scores than Original. _Implementation._ We implement our models and interventions in Python 3.7. We use SNAP's C++ implementation for DeepWalk. EDITS4 and NIFTY5 are adapted from their original implementations. Our experiments were conducted on a Linux machine with 32 cores, 100 GB RAM, and a V100 GPU. Our code is available at [https://github.com/arpidm/gnn_accuracy_fairness_tradeoff](https://github.com/arpidm/gnn_accuracy_fairness_tradeoff). Footnote 4: [https://github.com/yushundong/EDITS](https://github.com/yushundong/EDITS) (retrieved April 2022) Footnote 5: [https://github.com/chirng126/nify](https://github.com/chirng126/nify) (retrieved April 2022)

Since higher values of AUC-ROC and lower values of \(\Delta_{\text{SP}}\) and \(\Delta_{\text{EO}}\) are better, the optimal position is towards the bottom right in each plot (cf. RQ2). For ease of presentation, we defer full results for GIN and all interventions to Table 2 in Appendix A. Across datasets, GraphSAGE and GIN are more accurate than GCN, but GraphSAGE displays higher disparity and inequality while GIN displays lower values of both. PFR-AX and PostProcess- offer better tradeoffs than the other baselines for German and Credit across models.
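For reference, the two fairness axes in these comparisons are standard group gaps; the following is a hedged sketch of how the statistical parity difference (\(\Delta_{\text{SP}}\)) and the equal opportunity difference (\(\Delta_{\text{EO}}\)) are commonly computed, with our own function name and toy inputs rather than the paper's code:

```python
import numpy as np

def disparity_and_inequality(y_true, y_pred, protected):
    """Group-fairness gaps for binary predictions.

    Delta_SP: |P(y_hat=1 | protected) - P(y_hat=1 | non-protected)|
    Delta_EO: |TPR(protected) - TPR(non-protected)|
    """
    prot = protected.astype(bool)
    # Statistical parity: difference in positive prediction rates.
    d_sp = abs(y_pred[prot].mean() - y_pred[~prot].mean())
    # Equal opportunity: difference in true positive rates.
    tpr = lambda mask: y_pred[mask & (y_true == 1)].mean()
    d_eo = abs(tpr(prot) - tpr(~prot))
    return d_sp, d_eo

y_true = np.array([1, 0, 1, 0, 1, 1])
y_pred = np.array([1, 0, 0, 0, 1, 1])
protected = np.array([1, 1, 1, 0, 0, 0])
d_sp, d_eo = disparity_and_inequality(y_true, y_pred, protected)
```

Both gaps lie in \([0, 1]\), with 0 indicating no measured difference between the two groups.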
For PFR-AX and PostProcess-, this translates to up to 70% and 80% lower disparity than Original on German, at less than a 5% and 1% decrease in accuracy, respectively. In comparison, NIFTY offers 60% lower disparity (2.18% vs. 5.16% on German) at a 4.22% reduction in AUC-ROC. The lack of correlation between decreases in disparity and inequality may be explained in part by the impossibility theorem showing that these two criteria cannot be optimized simultaneously (Chouldechova, 2018). In Penn94 and Pokec-z, PFR-A and PFR-X are more effective than PFR-AX (cf. Table 2). We caution against PostProcess on these datasets: choosing nodes at random can have the unintended consequence of maintaining accuracy without promoting fairness. Unaware proves effective across models and is particularly well-suited to Pokec-z. EDITS proves a heavy intervention, causing large reductions in accuracy for relatively small reductions in disparity. _Sensitivity to \(\gamma\)._ Figure 2 plots the tradeoff between AUC (X-axis), disparity (left Y-axis, red points), and inequality (right Y-axis, purple points) for GCN, GraphSAGE, and GIN on Credit as a function of \(\gamma\). Due to the large label imbalance in Credit and the small number of nodes with negative predicted outcomes in the protected class, varying \(\gamma\) by 1% translates to changing predictions for only 7 nodes. PostProcess thus offers granular control. As \(\gamma\) increases, AUC-ROC decreases while \(\Delta_{\text{SP}}\) first reduces and then increases again. This inflection point indicates that the post-processing policy is overcorrecting in favour of the protected class, resulting in disparity towards the non-protected class. Conversely, such improvements are absent on Pokec-z since vanilla GNNs themselves are inherently less biased there. _Runtime._ Figure 3 depicts the total computation time in seconds (on log-scale) for each intervention on the four datasets for GCN, GraphSAGE, and GIN. We observe similar trends for all three GNN models.
As expected, the larger the dataset, the higher the runtime. Updating a model's predictions at inference time is inexpensive, and the resulting overhead for PostProcess is thus negligible. The running time for PFR-AX increases significantly with dataset size. The key bottlenecks are the eigenvalue decompositions of sparse, symmetric matrices in PFR, which require \(\mathcal{O}\left(n^{3}\right)\) time, and the construction of DeepWalk embeddings. For instance, in the case of Pokec-z, PFR required (on average) 47 minutes in our tests while EmbeddingReverser and GNN training each required less than 5 minutes. For comparison, NIFTY required approximately 22 minutes while EDITS did not complete due to memory overflow. Figure 3. Runtime in seconds (log-scale) of various interventions on GCN, GraphSAGE, and GIN for German, Credit, Penn94, and Pokec-z, increasing with dataset size. PostProcess is fastest because updating model inference is inexpensive. Figure 2. AUC-ROC as a function of Disparity (red) and Inequality (purple) for varying levels of the \(\gamma\) parameter of PostProcess on the Credit dataset. Higher values of \(\gamma\) are depicted by larger marker shapes and darker colors and indicate heavier interventions. As \(\gamma\) increases, AUC-ROC always decreases and Inequality increases. Disparity first decreases up to an inflection point and then increases, indicating an over-correction towards the protected class. _Model Confidence._ In Figure 4, we display the impact of fairness interventions on a model's confidence in its predictions, i.e., uncalibrated density (Y-axis), against its logit scores (X-axis) on the Credit dataset. The plots in the top, middle, and bottom rows correspond to GCN, GraphSAGE, and GIN, respectively. Larger positive values imply higher confidence in predicting a positive outcome and larger negative values imply higher confidence in a negative outcome prediction.
While there isn't a universally desired outcome, an intermediate goal for an intervention may be to ensure that a model is equally confident about _correctly_ predicting both positive and negative labels. Blue regions show the normalized density of logit values for nodes in the protected class with a positive ground-truth label (S1-Y1) and green regions show the same for nodes in the protected class with a negative ground-truth label. The dashed colored lines indicate density values for these groups of nodes under the Original model. PostProcess and Unaware induce small changes to the GNN's outputs while EDITS is significantly disruptive. PFR-AX nudges the original model's output for nodes in S1-Y1 away from 0, making it more confident about its positive (correct) predictions, while NIFTY achieves the reverse.

## 6. Conclusion

We presented two interventions that intrinsically differ from existing methods: PFR-AX debiases data prior to training to connect similar nodes across protected and non-protected groups while seeking to preserve existing degree distributions; PostProcess updates model predictions to reduce differences in error rates across protected user groups. We frame our study in the context of the tension between disparity, inequality, and accuracy, quantify the scope for improvements, and show that our approaches offer intuitive control over this tradeoff. Given their model-agnostic nature, we motivate future analysis combining multiple interventions at different loci in the learning pipeline.

###### Acknowledgements.

This work has been partially supported by: Department of Research and Universities of the Government of Catalonia (SGR 00930), EU-funded projects "SoBigData++" (grant agreement 871042), "FINDHR" (grant agreement 101070212) and MCIN/AEI/10.13039/501100011033 under the María de Maeztu Units of Excellence Programme (CEX2021-001195-M). We also thank the reviewers for their useful comments. Figure 4.
Density of logit scores of GCN (first row), GraphSAGE (second row), and GIN (third row) after applying different algorithmic fairness interventions for users in the protected class in the Credit dataset. The vertical dashed (black) line depicts the threshold used for label prediction (positive scores indicate positive outcomes). The colored dashed curves indicate the density of output scores of Original. PFR-AX and PostProcess- improve model confidence (density) for correctly predicting a positive label for users in the protected class.

## Appendix A Additional Experimental Results
2305.18467
Geometric Graph Filters and Neural Networks: Limit Properties and Discriminability Trade-offs
This paper studies the relationship between a graph neural network (GNN) and a manifold neural network (MNN) when the graph is constructed from a set of points sampled from the manifold, thus encoding geometric information. We consider convolutional MNNs and GNNs where the manifold and the graph convolutions are respectively defined in terms of the Laplace-Beltrami operator and the graph Laplacian. Using the appropriate kernels, we analyze both dense and moderately sparse graphs. We prove non-asymptotic error bounds showing that convolutional filters and neural networks on these graphs converge to convolutional filters and neural networks on the continuous manifold. As a byproduct of this analysis, we observe an important trade-off between the discriminability of graph filters and their ability to approximate the desired behavior of manifold filters. We then discuss how this trade-off is ameliorated in neural networks due to the frequency mixing property of nonlinearities. We further derive a transferability corollary for geometric graphs sampled from the same manifold. We validate our results numerically on a navigation control problem and a point cloud classification task.
Zhiyang Wang, Luana Ruiz, Alejandro Ribeiro
2023-05-29T08:27:17Z
http://arxiv.org/abs/2305.18467v2
# Geometric Graph Filters and Neural Networks: Limit Properties and Discriminability Trade-offs ###### Abstract This paper studies the relationship between a graph neural network (GNN) and a manifold neural network (MNN) when the graph is constructed from a set of points sampled from the manifold, thus encoding geometric information. We consider convolutional MNNs and GNNs where the manifold and the graph convolutions are respectively defined in terms of the Laplace-Beltrami operator and the graph Laplacian. Using the appropriate kernels, we analyze both dense and moderately sparse graphs. We prove non-asymptotic error bounds showing that convolutional filters and neural networks on these graphs converge to convolutional filters and neural networks on the continuous manifold. As a byproduct of this analysis, we observe an important trade-off between the discriminability of graph filters and their ability to approximate the desired behavior of manifold filters. We then discuss how this trade-off is ameliorated in neural networks due to the frequency mixing property of nonlinearities. We further derive a transferability corollary for geometric graphs sampled from the same manifold. We validate our results numerically on a navigation control problem and a point cloud classification task. Graph Neural Networks, Manifold Filters, Manifold Neural Networks, Convergence Analysis, Discriminability ## I Introduction Geometric data, or data supported in non-Euclidean domains, is the object of much interest in modern information processing. It arises in a number of applications, including protein function prediction [3, 4], robot path planning [5, 6], 3D shape analysis [7, 8, 9] and wireless resource allocation [10, 11].
Graph convolutional filters [12, 13] and graph neural networks (GNNs) [14, 15], along with manifold convolutional filters [16] and manifold neural networks (MNNs) [17, 18, 19], are the standard tools for invariant information processing on these domains when they are discrete and continuous respectively. The convolution operation is implemented through information diffusion over the geometric structure, thus enabling invariant and stable representations [20, 21, 22, 23] and feature sharing. The cascading neural network architecture interleaves convolutions and nonlinearities, further expanding the model's expressiveness. Although there is a clear parallel between graphs and manifolds--the former can be seen as discretizations of the latter--, manifolds are infinite-dimensional continuous latent spaces which can only be accessed by discrete point sampling [7, 24, 25, 26]. In general, we have access to a set of sampling points from the manifold, and build a graph model to approximate the underlying continuous manifold while attempting to retain the local and global geometric structure [7, 11, 27]. GNNs have been shown to do well at processing information over the manifold both experimentally and theoretically [25, 26, 16]. Of particular note, conditions that guarantee asymptotic convergence of graph filters and GNNs to manifold filters and MNNs are known [16]. Asymptotic convergence is a minimal guarantee that can be enriched with non-asymptotic approximation error bounds. These bounds are unknown and they are the focus of this paper. These non-asymptotic approximation error bounds relating graph filters and GNNs to manifold filters and MNNs are important because they inform the practical design on graphs of information processing architectures that we want to deploy on manifolds. 
In addition, explicit finite-sample error bounds often reveal details about the convergence regime (e.g., rates of convergence and discriminability trade-offs) that are not revealed by their asymptotic counterparts. For example, the non-asymptotic convergence analysis of GNNs on graphs sampled from a graphon (also referred to as a _transferability_ analysis) gives a more precise characterization of the discriminability-convergence tradeoff that arises in these GNNs [29], which is not elucidated by the corresponding asymptotic convergence result [30]. **Contributions.** In this paper, we prove and analyze a non-asymptotic approximation error bound for GNNs on graphs sampled from a manifold, thus closing the gap between GNNs and MNNs with an explicit numerical relationship. We start by importing the definition of the manifold filter as a convolutional operation where the diffusions are exponentials of the Laplace-Beltrami (LB) operator \(\mathcal{L}\) of the manifold \(\mathcal{M}\subset\mathbb{R}^{N}\)[16]. Given a set of discrete sampling points from the manifold, we describe how to construct both dense and relatively sparse geometric graphs that approximate the underlying manifold in both the spatial and the spectral domains. Next, we import the concept of Frequency Difference Threshold (FDT) filters (Definition 3) [17] to overcome the challenge of dimensionality associated with the infinite-dimensional spectrum of the LB operator. We show that manifold filters exhibit a trade-off between their discriminability and their ability to be approximated by graph filters, which can be observed in the approximation error bounds for geometric graph filters in Theorems 1 and 2. The same analysis is conducted for GNNs by incorporating nonlinearities (Theorem 3), but for GNNs we hypothesize that the trade-off is alleviated, i.e., that discriminability can be recovered, through the addition of these nonlinear operations (Section V).
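The graph construction step referred to above can be made concrete with a small sketch: sample points from a manifold (here the unit circle), weight pairs of points with a Gaussian kernel, and form the graph Laplacian on which graph convolutional filters are polynomials. The bandwidth choice and the omission of the normalization constants required for convergence to the LB operator are our simplifications, not the paper's exact construction:

```python
import numpy as np

# Sample n points from a manifold (here the unit circle in R^2).
n = 200
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, n)
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)

# Dense geometric graph: Gaussian kernel weights on pairwise distances.
# (Convergence to the Laplace-Beltrami operator requires kernel- and
# sample-size-dependent normalizations omitted in this sketch.)
eps = 0.1
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / (4.0 * eps))
np.fill_diagonal(W, 0.0)

# Sparser variant: drop edges beyond a connectivity radius.
W_sparse = np.where(d2 <= eps, W, 0.0)

# Graph Laplacian L = D - W discretizes the manifold operator.
L = np.diag(W.sum(axis=1)) - W

# A graph convolutional filter is a polynomial in L applied to a signal.
x = X[:, 0]                 # node signal: first coordinate of each sample
h = [0.5, 0.3, 0.2]         # illustrative filter taps
y = sum(hk * np.linalg.matrix_power(L, k) @ x for k, hk in enumerate(h))
```

Replacing the polynomial with learned taps and interleaving pointwise nonlinearities yields the GNNs whose approximation of the corresponding MNNs is bounded in the theorems.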
In other words, geometric GNNs can be both discriminative and approximative of the underlying MNNs, which we verify empirically through numerical experiments (Section VI). Finally,
2302.01191
Noncommutative $C^*$-algebra Net: Learning Neural Networks with Powerful Product Structure in $C^*$-algebra
We propose a new generalization of neural network parameter spaces with noncommutative $C^*$-algebra, which possesses a rich noncommutative structure of products. We show that this noncommutative structure induces powerful effects in learning neural networks. Our framework has a wide range of applications, such as learning multiple related neural networks simultaneously with interactions and learning equivariant features with respect to group actions. Numerical experiments illustrate the validity of our framework and its potential power.
Ryuichiro Hataya, Yuka Hashimoto
2023-01-26T14:35:37Z
http://arxiv.org/abs/2302.01191v2
Noncommutative \(C^{*}\)-algebra Net: Learning Neural Networks with Powerful Product Structure in \(C^{*}\)-algebra ###### Abstract We propose a new generalization of neural networks with noncommutative \(C^{*}\)-algebra. An important feature of \(C^{*}\)-algebras is their noncommutative structure of products, but the existing \(C^{*}\)-algebra net frameworks have only considered commutative \(C^{*}\)-algebras. We show that this noncommutative structure of \(C^{*}\)-algebras induces powerful effects in learning neural networks. Our framework has a wide range of applications, such as learning multiple related neural networks simultaneously with interactions and learning invariant features with respect to group actions. We also show the validity of our framework numerically, which illustrates its potential power. ## 1 Introduction Generalization of the parameter space of neural networks from real numbers to others has attracted researchers for decades. Although using real-valued parameters is standard and straightforward for real-valued data, it may be more suitable to adopt parameters of complex numbers (Hirose, 1992; Nishikawa et al., 2005; Amin et al., 2008; Yadav et al., 2005; Trabelsi et al., 2018; Lee et al., 2022) or quaternion numbers Nitta (1995); Arena et al. (1997); Zhu et al. (2018); Gaudet and Maida (2018) for data in signal processing, computer vision, and robotics domains, among others. Clifford-algebra, the generalization of these numbers, allows more flexible geometrical processing of data, and thus is applied to neural networks Pearson and Bisset (1994); Buchholz (2005); Buchholz and Sommer (2008) to handle rich geometric relationships in data (Rivera-Rovelo et al., 2010; Zang et al., 2022; Brandstetter et al., 2022). Different from these approaches focusing on the geometric perspective of parameter values, another direction of generalization is to use function-valued parameters Rossi and Conan-Guez (2005); Thind et al. 
(2022), which broadens the applications of neural networks to functional data. Hashimoto et al. (2022) proposed generalizing neural network parameters to \(C^{*}\)-algebra, which is called \(C^{*}\)-algebra net. They showed that multiple related neural networks can be continuously combined into a single \(C^{*}\)-algebra net. For example, networks for the same task with different training datasets or different initial parameters can be combined continuously, which enables the construction of infinitely many networks and efficient learning using shared information among them. Such interaction among networks is also applicable to learning from related tasks, such as ensemble learning (Dong et al., 2020; Ganaie et al., 2022) and multitask learning (Zhang and Yang, 2022). However, because the product structure in the \(C^{*}\)-algebra that Hashimoto et al. (2021) focused on is commutative, they needed specially designed loss functions to induce the interaction. In this paper, we propose a new generalization of neural networks with noncommutative \(C^{*}\)-algebra, which also generalizes the framework of Hashimoto et al. (2022). \(C^{*}\)-algebra is a generalization of the space of complex numbers Murphy (1990); Hashimoto et al. (2021). Typical examples of \(C^{*}\)-algebras include the space of diagonal matrices and the space of (not necessarily diagonal) matrices. An important feature of \(C^{*}\)-algebra is the product structure. Analogous to the case of complex numbers, for any two elements \(a\) and \(b\) in a \(C^{*}\)-algebra, we can calculate the product \(a\cdot b\). However, unlike the case of complex numbers, the product is not always commutative, i.e., \(a\cdot b=b\cdot a\) is not always satisfied. For example, whereas the product of two diagonal matrices commutes, the product of two nondiagonal matrices does not commute. 
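This difference is easy to verify numerically; a tiny sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Diagonal matrices commute: the product acts independently on each
# diagonal entry.
a = np.diag(rng.standard_normal(3))
b = np.diag(rng.standard_normal(3))
assert np.allclose(a @ b, b @ a)

# Generic (nondiagonal) matrices do not commute: each entry of the
# product mixes contributions from the other entries.
c = rng.standard_normal((3, 3))
e = rng.standard_normal((3, 3))
assert not np.allclose(c @ e, e @ c)
```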
In the case of diagonal matrices, the product is just described by the product of each diagonal element, and there are no interactions with the other elements. On the other hand, the product of two nondiagonal matrices is described by the sum of products between different elements of the matrices, which induces interactions with the other elements. For neural networks, the interactions induced by noncommutative \(C^{*}\)-algebras correspond to interactions among multiple neural networks. Because the interactions are encoded in the structure of the network, we do not need to design loss functions to make the networks interact. Instead, the required interactions are implicitly and automatically learned through the noncommutative product structure in \(C^{*}\)-algebras. Therefore, our framework, which takes advantage of noncommutative \(C^{*}\)-algebras, enables us to go beyond existing frameworks of neural networks. Our framework of the noncommutative \(C^{*}\)-algebra net is general and has a wide range of applications, not limited to the above case of matrices. For example, by setting the \(C^{*}\)-algebra as a group \(C^{*}\)-algebra, we can construct a group equivariant neural network. In this case, we can naturally generalize real-valued parameters of neural networks to those inducing group convolution. We can also set the \(C^{*}\)-algebra as the \(C^{*}\)-algebra of bounded linear operators on a function space, which can be applied to analyzing functional data. Our main contributions are summarized as follows: * We generalize the commutative \(C^{*}\)-algebra net proposed by Hashimoto et al. [2022] to noncommutative \(C^{*}\)-algebras, which enables us to take advantage of the noncommutative product structure in the \(C^{*}\)-algebra when learning neural networks. * We show a wide range of applications of our framework, including inducing interactions among networks and learning invariant features with respect to group actions.
* We empirically illustrate the validity of noncommutative \(C^{*}\)-algebra nets, including interactions among neural networks. We emphasize that \(C^{*}\)-algebra is a powerful tool for neural networks, and our work provides a lot of important perspectives about its application. ## 2 Background In this section, we review the mathematical background of \(C^{*}\)-algebra required for this paper and the existing \(C^{*}\)-algebra net. For more theoretical details of the \(C^{*}\)-algebra, see, for example, Murphy [1990]. ### \(C^{*}\)-algebra \(C^{*}\)-algebra is a generalization of the space of complex values. It has structures of the product, involution \({}^{*}\), and norm. **Definition 1** (\(C^{*}\)-algebra): _A set \(\mathcal{A}\) is called a \(C^{*}\)-algebra if it satisfies the following conditions: 1. \(\mathcal{A}\) is an algebra over \(\mathbb{C}\) and equipped with a bijection \((\cdot)^{*}:\mathcal{A}\rightarrow\mathcal{A}\) that satisfies the following conditions for \(\alpha,\beta\in\mathbb{C}\) and \(c,d\in\mathcal{A}\):_ * \((\alpha c+\beta d)^{*}=\overline{\alpha}c^{*}+\overline{\beta}d^{*}\)_,_ * \((cd)^{*}=d^{*}c^{*}\)_,_ * \((c^{*})^{*}=c\)_._ 2. \(\mathcal{A}\) _is a normed space with_ \(\|\cdot\|\)_, and for_ \(c,d\in\mathcal{A}\)_,_ \(\|cd\|\leq\|c\|\,\|d\|\) _holds. In addition,_ \(\mathcal{A}\) _is complete with respect to_ \(\|\cdot\|\)_._ 3. _For_ \(c\in\mathcal{A}\)_,_ \(\|c^{*}c\|=\|c\|^{2}\) _holds._ The product structure in \(C^{*}\)-algebras can be both commutative and noncommutative. **Example 1** (Commutative \(C^{*}\)-algebra): _Let \(\mathcal{A}\) be the space of continuous functions on a compact Hausdorff space \(\mathcal{Z}\). 
We can regard \(\mathcal{A}\) as a \(C^{*}\)-algebra by setting_ * _Product: Pointwise product of two functions, i.e., for_ \(a_{1},a_{2}\in\mathcal{A}\)_,_ \(a_{1}a_{2}(z)=a_{1}(z)a_{2}(z)\)_._ * _Involution: Pointwise complex conjugate, i.e., for_ \(a\in\mathcal{A}\)_,_ \(a^{*}(z)=\overline{a(z)}\)_._ * _Norm: Sup norm, i.e., for_ \(a\in\mathcal{A}\)_,_ \(\|a\|=\sup_{z\in\mathcal{Z}}|a(z)|\)_._ _In this case, the product in \(\mathcal{A}\) is commutative._ **Example 2** (Noncommutative \(C^{*}\)-algebra): _Let \(\mathcal{A}\) be the space of bounded linear operators on a Hilbert space \(\mathcal{H}\), which is denoted by \(\mathcal{B}(\mathcal{H})\). We can regard \(\mathcal{A}\) as a \(C^{*}\)-algebra by setting_ * _Product: Composition of two operators,_ * _Involution: Adjoint of an operator,_ * _Norm: Operator norm of an operator, i.e., for_ \(a\in\mathcal{A}\)_,_ \(\|a\|=\sup_{v\in\mathcal{H},\|v\|_{\mathcal{H}}=1}\|av\|_{\mathcal{H}}\)_. Here,_ \(\|\cdot\|_{\mathcal{H}}\) _is the norm in_ \(\mathcal{H}\)_. In this case, the product in_ \(\mathcal{A}\) _is noncommutative. Note that if_ \(\mathcal{H}\) _is a_ \(d\)_-dimensional space for a finite natural number_ \(d\)_, then elements in_ \(\mathcal{A}\) _are_ \(d\) _by_ \(d\) _matrices._ **Example 3** (Group \(C^{*}\)-algebra): _The group \(C^{*}\)-algebra on a group \(G\), which is denoted as \(C^{*}(G)\), is the set of maps from \(G\) to \(\mathbb{C}\) equipped with the following product, involution, and norm:_ * _Product:_ \((a\cdot b)(g)=\int_{G}a(h)b(h^{-1}g)\mathrm{d}\lambda(h)\) _for_ \(g\in G\)_,_ * _Involution:_ \(a^{*}(g)=\Delta(g^{-1})\overline{a(g^{-1})}\) _for_ \(g\in G\)_,_ * _Norm:_ \(\|a\|=\sup_{[\pi]\in G}\|\pi(a)\|\)_,_ _where \(\Delta(g)\) is a positive number satisfying \(\lambda(Eg)=\Delta(g)\lambda(E)\) for the Haar measure \(\lambda\) on \(G\). In addition, \(\hat{G}\) is the set of equivalence classes of irreducible unitary representations of \(G\). 
Note that if \(G\) is discrete, then \(\lambda\) is the counting measure on \(G\). In this paper, we focus mainly on the product structure of \(C^{*}(G)\). For details of the Haar measure and representations of groups, see Kirillov (1976). If \(G=\mathbb{Z}/p\mathbb{Z}\), then \(C^{*}(G)\) is \(C^{*}\)-isomorphic to the \(C^{*}\)-algebra of circulant matrices (Hashimoto et al., 2023). Note also that if \(G\) is noncommutative, then \(C^{*}(G)\) can also be noncommutative._ ### \(C^{*}\)-algebra net Hashimoto et al. (2022) proposed generalizing real-valued neural network parameters to commutative \(C^{*}\)-algebra-valued ones. Here, we briefly review the existing (commutative) \(C^{*}\)-algebra net. Let \(\mathcal{A}=C(\mathcal{Z})\), the commutative \(C^{*}\)-algebra of continuous functions on a compact Hausdorff space \(\mathcal{Z}\). Let \(H\) be the depth of the network and \(N_{0},\ldots,N_{H}\) be the width of each layer. For \(i=1,\ldots,H\), set \(W_{i}:\mathcal{A}^{N_{i-1}}\to\mathcal{A}^{N_{i}}\) as an Affine transformation defined with an \(N_{i}\times N_{i-1}\)\(\mathcal{A}\)-valued matrix and an \(\mathcal{A}\)-valued bias vector in \(\mathcal{A}^{N_{i}}\). In addition, set a nonlinear activation function \(\sigma_{i}:\mathcal{A}^{N_{i}}\to\mathcal{A}^{N_{i}}\). The commutative \(C^{*}\)-algebra net \(f:\mathcal{A}^{N_{0}}\to\mathcal{A}^{N_{H}}\) is defined as \[f=\sigma_{H}\circ W_{H}\circ\cdots\circ\sigma_{1}\circ W_{1}. \tag{1}\] By generalizing neural network parameters to functions, we can combine multiple standard (real-valued) neural networks continuously, which enables them to learn efficiently. We show an example of commutative \(C^{*}\)-nets below. To simplify the notation, we focus on the case where the network does not have biases. However, the same arguments are valid for the case where the network has biases. 
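To make the definition concrete, the following is a toy sketch (our own code, not the authors') of a single \(\mathcal{A}\)-valued affine map for \(\mathcal{A}=\mathbb{C}^{d\times d}\) restricted to real entries, including a check that keeping only diagonal entries decouples the layer into \(d\) independent scalar sub-models, as in the commutative case:

```python
import numpy as np

def cstar_affine(W, x):
    """A-valued affine map for A = d x d matrices: W has shape
    (N_out, N_in, d, d), x has shape (N_in, d, d); entries multiply
    as d x d matrices rather than as scalars."""
    return np.einsum('oiab,ibc->oac', W, x)

d, n_in, n_out = 2, 3, 4
rng = np.random.default_rng(0)
W = rng.standard_normal((n_out, n_in, d, d))
x = rng.standard_normal((n_in, d, d))
y = cstar_affine(W, x)          # shape (n_out, d, d)

# Diagonal entries only: the layer decouples into d independent
# scalar sub-models, one per diagonal position.
Wdiag = W * np.eye(d)           # zero out off-diagonal parts of each entry
xdiag = x * np.eye(d)
ydiag = cstar_affine(Wdiag, xdiag)
for j in range(d):
    assert np.allclose(ydiag[:, j, j], Wdiag[:, :, j, j] @ xdiag[:, j, j])
```

With nondiagonal entries, the same `einsum` mixes contributions across diagonal positions, which is precisely the interaction among sub-models discussed in the next section.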
#### 2.2.1 The case of diagonal matrices If \(\mathcal{Z}\) is a finite set, then \(\mathcal{A}=\{a\in\mathbb{C}^{d\times d}\;\mid\;a\text{ is a diagonal matrix}\}\). The \(C^{*}\)-algebra net \(f\) on \(\mathcal{A}\) corresponds to \(d\) separate real or complex-valued sub-models. Indeed, denote by \(x^{j}\) the vector composed of the \(j\)th diagonal elements of \(x\in\mathcal{A}^{N}\), which is defined as the vector in \(\mathbb{C}^{N}\) whose \(k\)th element is the \(j\)th diagonal element of the \(\mathcal{A}\)-valued \(k\)th element of \(x\). Assume the activation function \(\sigma_{i}:\mathcal{A}^{N}\to\mathcal{A}^{N}\) is defined as \(\sigma_{i}(x)^{j}=\tilde{\sigma}_{i}(x^{j})\) for some \(\tilde{\sigma}_{i}:\mathbb{C}^{N}\to\mathbb{C}^{N}\). Since the \(j\)th diagonal element of \(a_{1}a_{2}\) for \(a_{1},a_{2}\in\mathcal{A}\) is the product of the \(j\)th elements of \(a_{1}\) and \(a_{2}\), we have \[f(x)^{j}=\tilde{\sigma}_{H}\circ W_{H}^{j}\circ\cdots\circ\tilde{\sigma}_{1} \circ W_{1}^{j}, \tag{2}\] where \(W_{i}^{j}\in\mathbb{C}^{N_{i}\times N_{i-1}}\) is the matrix whose \((k,l)\)-entry is the \(j\)th diagonal of the \((k,l)\)-entry of \(W_{i}\in\mathcal{A}^{N_{i}\times N_{i-1}}\). Figure 1 (a) schematically shows the \(C^{*}\)-algebra net over diagonal matrices. Figure 1: Difference between commutative and noncommutative \(C^{*}\)-algebra nets. ## 3 Noncommutative \(C^{*}\)-algebra Net Although the existing \(C^{*}\)-algebra net provides a framework for applying \(C^{*}\)-algebra to neural networks, it focuses on commutative \(C^{*}\)-algebras, whose product structure is simple. Therefore, we generalize the existing commutative \(C^{*}\)-algebra net to noncommutative \(C^{*}\)-algebra. Since the product structures in noncommutative \(C^{*}\)-algebras are more complicated than those in commutative \(C^{*}\)-algebras, they enable neural networks to learn features of data more efficiently.
For example, if we focus on the \(C^{*}\)-algebra of matrices, then the neural network parameters describe interactions between multiple real-valued sub-models (see Section 3.1.1). Let \(\mathcal{A}\) be a general \(C^{*}\)-algebra and consider the network \(f\) in the same form as Equation (1). We emphasize that in our framework, the choice of \(\mathcal{A}\) is not restricted to a commutative \(C^{*}\)-algebra. We list examples of \(\mathcal{A}\) and their validity for learning neural networks below. ### Examples of \(C^{*}\)-algebras for neural networks As mentioned in the previous section, we focus on the case where the network does not have biases for simplification in this subsection. #### 3.1.1 Nondiagonal matrices Let \(\mathcal{A}=\mathbb{C}^{d\times d}\). Note that \(\mathcal{A}\) is a noncommutative \(C^{*}\)-algebra. In this case, unlike the network (2), the \(j\)th diagonal element of \(a_{1}a_{2}a_{3}\) for \(a_{1},a_{2},a_{3}\in\mathcal{A}\) depends not only on the \(j\)th diagonal element of \(a_{2}\), but also the other diagonal elements of \(a_{2}\). Thus, \(f(x)^{j}\) depends not only on the sub-model corresponding to \(j\)th diagonal element discussed in Section 2.2.1, but also on other sub-models. The nondiagonal elements in \(\mathcal{A}\) induce interactions between \(d\) real or complex-valued sub-models. In practice, to regard the nondiagonal elements as factors of interactions, their values should be small compared to the diagonal elements. We will see the effect of the nondiagonal elements in \(\mathcal{A}\) numerically in Section 4.1. Figure 1 (b) schematically shows the \(C^{*}\)-algebra net over nondiagonal matrices. #### 3.1.2 Block diagonal matrices Let \(\mathcal{A}=\{a\in\mathbb{C}^{d\times d}\,\mid\,a=\mathrm{diag}(\mathbf{a}_{ 1},\ldots,\mathbf{a}_{m}),\;\mathbf{a}_{i}\in\mathbb{C}^{d_{i}\times d_{i}}\}\). 
The product of two block diagonal matrices \(a=\mathrm{diag}(\mathbf{a}_{1},\ldots,\mathbf{a}_{m})\) and \(b=\mathrm{diag}(\mathbf{b}_{1},\ldots,\mathbf{b}_{m})\) can be written as \[ab=\mathrm{diag}(\mathbf{a}_{1}\mathbf{b}_{1},\ldots,\mathbf{a}_{m}\mathbf{b} _{m}).\] In a similar manner to Section 2.2.1, we denote by \(\mathbf{x}^{j}\) the \(N\) by \(d_{j}\) matrix composed of the \(j\)th diagonal blocks of \(x\in\mathcal{A}^{N}\). Assume the activation function \(\sigma_{i}:\mathcal{A}^{N}\to\mathcal{A}^{N}\) is defined as \(\sigma_{i}(x)=\mathrm{diag}(\tilde{\boldsymbol{\sigma}}_{i}^{1}(\mathbf{x}^{1} ),\ldots,\tilde{\boldsymbol{\sigma}}_{i}^{m}(\mathbf{x}^{m}))\) for some \(\tilde{\boldsymbol{\sigma}}_{i,j}:\mathbb{C}^{N\times d_{j}}\to\mathbb{C}^{N \times d_{j}}\). Then, we have \[\mathbf{f}(\mathbf{x})^{j}=\tilde{\boldsymbol{\sigma}}_{H}^{j}\circ\mathbf{W} _{H}^{j}\circ\cdots\circ\tilde{\boldsymbol{\sigma}}_{1}^{j}\circ\mathbf{W}_{1} ^{j}, \tag{3}\] where \(\mathbf{W}_{i}^{j}\in(\mathbb{C}^{d_{j}\times d_{j}})^{N_{i}\times N_{i-1}}\) is the block matrix whose \((k,l)\)-entry is the \(j\)th block diagonal of the \((k,l)\)-entry of \(W_{i}\in\mathcal{A}^{N_{i}\times N_{i-1}}\). In this case, we have \(m\) groups of sub-models, each of which is composed of interacting \(d_{j}\) sub-models mentioned in Section 3.1.1. Indeed, the block diagonal case generalizes the diagonal and nondiagonal cases stated in Sections 2.2.1 and 3.1.1. If \(d_{j}=1\) for all \(j=1,\ldots,m\), then the network (3) is reduced to the network (2) with diagonal matrices. If \(m=1\) and \(d_{1}=d\), then the network (3) is reduced to the network with \(d\) by \(d\) nondiagonal matrices. #### 3.1.3 Circulant matrices Let \(\mathcal{A}=\{a\in\mathbb{C}^{d\times d}\,\mid\,a\text{ is a circulant matrix}\}\). 
Here, a circulant matrix \(a\) is the matrix represented as \[a=\begin{bmatrix}a_{1}&a_{d}&\cdots&a_{2}\\ a_{2}&a_{1}&\cdots&a_{3}\\ &\ddots&\ddots&\\ a_{d}&a_{d-1}&\cdots&a_{1}\end{bmatrix}\] for \(a_{1},\ldots,a_{d}\in\mathbb{C}\). Note that in this case, \(\mathcal{A}\) is commutative. Circulant matrices are diagonalized by the discrete Fourier matrix as follows [10]. We denote by \(F\) the discrete Fourier transform matrix, whose \((i,j)\)-entry is \(\omega^{(i-1)(j-1)}/\sqrt{d}\), where \(\omega=\mathrm{e}^{2\pi\sqrt{-1}/d}\). **Lemma 1**: _Any circulant matrix \(a\) is decomposed as \(a=F\Lambda_{a}F^{*}\), where_ \[\Lambda_{a}=\mathrm{diag}\,\bigg(\sum_{i=1}^{d}a_{i}\omega^{(i-1)\cdot 0},\ldots,\sum_{i=1}^{d}a_{i}\omega^{(i-1)(d-1)}\bigg).\] Since \(ab=F\Lambda_{a}\Lambda_{b}F^{*}\) for \(a,b\in\mathcal{A}\), the product of \(a\) and \(b\) corresponds to the multiplication of each Fourier component of \(a\) and \(b\). Assume the activation function \(\sigma_{i}:\mathcal{A}^{N}\to\mathcal{A}^{N}\) is defined such that \((F^{*}\sigma_{i}(x)F)^{j}\) equals \(\hat{\boldsymbol{\sigma}}_{i}((F^{*}xF)^{j})\) for some \(\hat{\boldsymbol{\sigma}}_{i}:\mathbb{C}^{N}\to\mathbb{C}^{N}\). Then, we obtain the network \[(F^{*}f(x)F)^{j}=\hat{\boldsymbol{\sigma}}_{H}\circ\hat{W}_{H}^{j}\circ\cdots\circ\hat{\boldsymbol{\sigma}}_{1}\circ\hat{W}_{1}^{j}((F^{*}xF)^{j}), \tag{4}\] where \(\hat{W}_{i}^{j}\in\mathbb{C}^{N_{i}\times N_{i-1}}\) is the matrix whose \((k,l)\)-entry is \((F^{*}w_{i,k,l}F)^{j}\), where \(w_{i,k,l}\) is the \((k,l)\)-entry of \(W_{i}\in\mathcal{A}^{N_{i}\times N_{i-1}}\). The \(j\)th sub-model of the network (4) corresponds to the network of the \(j\)th Fourier component. **Remark 1**: _The \(j\)th sub-model of the network (4) does not interact with the sub-models of the other Fourier components. This fact corresponds to the fact that \(\mathcal{A}\) is commutative in this case. 
Analogous to the case in Section 3.1.1, if we set \(\mathcal{A}\) as noncirculant matrices, then we obtain interactions between sub-models corresponding to different Fourier components._

#### 3.1.4 Group \(C^{*}\)-algebra on a symmetric group

Let \(G\) be the symmetric group on the set \(\{1,\ldots,d\}\) and let \(\mathcal{A}=C^{*}(G)\). Note that since \(G\) is noncommutative, \(C^{*}(G)\) is also noncommutative. Then, the output \(f(x)\in\mathcal{A}^{N_{H}}\) is a \(\mathbb{C}^{N_{H}}\)-valued map on \(G\). Using the product structure introduced in Example 3, we can construct a network that takes the permutation of data into account. Indeed, an element \(w\in\mathcal{A}\) of a weight matrix \(W\in\mathcal{A}^{N_{i-1}\times N_{i}}\) is a function on \(G\). Thus, \(w(g)\) describes the weight corresponding to the permutation \(g\in G\). Since the product of \(x\in C^{*}(G)\) and \(w\) is defined as \(wx(g)=\sum_{h\in G}w(h)x(h^{-1}g)\), by applying \(W\), all the weights corresponding to the permutations affect the input. For example, let \(z\in\mathbb{R}^{d}\) and set \(x\in C^{*}(G)\) as \(x(g)=g\cdot z\), where \(g\cdot z\) is the action of \(g\) on \(z\), i.e., the permutation of \(z\) with respect to \(g\). Then, we can input all the patterns of permutations of \(z\) simultaneously, and by virtue of the product structure in \(C^{*}(G)\), the network is trained with interactions among these permutations. Regarding the output, if the network is trained so that the outputs \(y\) become constant functions on \(G\), i.e., \(y(g)=c\) for some constant \(c\), then it means that \(c\) is invariant with respect to \(g\), i.e., invariant with respect to the permutation. We will numerically investigate the application of the group \(C^{*}\)-algebra net to permutation-invariant problems in Section 4.2.
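To make the product in \(C^{*}(G)\) concrete, the following plain-Python sketch (an illustration of ours, not the implementation used in the experiments) computes the convolution \((wx)(g)=\sum_{h\in G}w(h)x(h^{-1}g)\) on \(S_{3}\) and checks that right-translating the input right-translates the output:

```python
from itertools import permutations

# Elements of the symmetric group S_3, each a tuple (images of 0, 1, 2).
G = list(permutations(range(3)))

def compose(g, h):
    # (g ∘ h)(i) = g(h(i))
    return tuple(g[h[i]] for i in range(3))

def inverse(g):
    inv = [0] * 3
    for i, gi in enumerate(g):
        inv[gi] = i
    return tuple(inv)

def conv(w, x):
    # Product in the group C*-algebra: (w x)(g) = sum_h w(h) x(h^{-1} g).
    return {g: sum(w[h] * x[compose(inverse(h), g)] for h in G) for g in G}

# Two scalar-valued functions on G (a weight and an input).
w = {g: float(i + 1) for i, g in enumerate(G)}
x = {g: float((i * 7) % 5) for i, g in enumerate(G)}
y = conv(w, x)

# Equivariance: right-translating the input right-translates the output.
h0 = G[3]
x_shift = {g: x[compose(g, h0)] for g in G}
y_shift = conv(w, x_shift)
assert all(abs(y_shift[g] - y[compose(g, h0)]) < 1e-12 for g in G)
```

Because the product is a convolution over \(G\), this equivariance is exactly the feature discussed for group equivariant networks.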
**Remark 2**: _If the activation function \(\sigma\) is defined as \(\sigma(x)(g)=\sigma(x(g))\), i.e., applied elementwise to \(x\), then the network \(f\) is permutation equivariant. That is, if the input \(x(g)\) is replaced by \(x(gh)\) for some \(h\in G\), then the output \(f(x)(g)\) is replaced by \(f(x)(gh)\). This is because the product in \(C^{*}(G)\) is defined as a convolution. This feature of the convolution has been studied for group equivariant neural networks (Lenssen et al., 2018; Cohen et al., 2019; Sannai et al., 2021; Sonoda et al., 2022). The above setting of the \(C^{*}\)-algebra net provides us with a design of group equivariant networks from the perspective of \(C^{*}\)-algebra._

#### 3.1.5 Bounded linear operators on a Hilbert space

For functional data, we can also set \(\mathcal{A}\) as an infinite-dimensional space. The use of infinite-dimensional \(C^{*}\)-algebras for analyzing functional data has been proposed (Hashimoto et al., 2021). We can also adopt this idea for neural networks. Let \(\mathcal{A}=\mathcal{B}(L^{2}(\Omega))\) for a measure space \(\Omega\). Set \(\mathcal{A}_{0}=\{a\in\mathcal{A}\mid a\text{ is a multiplication operator}\}\). Here, a multiplication operator \(a\) is a linear operator defined as \(av=v\cdot u\) for some \(u\in L^{\infty}(\Omega)\). The space \(\mathcal{A}_{0}\) is a generalization of the space of diagonal matrices to the infinite-dimensional setting. If we restrict the elements of weight matrices to \(\mathcal{A}_{0}\), then we obtain infinitely many sub-models without interactions. Since outputs are in \(\mathcal{A}_{0}^{N_{H}}\), we can obtain functional data as outputs. Similar to the case of matrices (see Section 3.1.1), by setting the elements of weight matrices as elements in \(\mathcal{A}\), we can take advantage of interactions among infinitely many sub-models.
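As a finite sanity check of the picture in Section 3.1.5 (our own toy illustration), one can discretize \(L^{2}(\Omega)\) on a grid, under which a multiplication operator becomes elementwise multiplication, the analogue of a diagonal matrix; the commutativity of \(\mathcal{A}_{0}\), in contrast to a generic bounded operator, is then directly visible:

```python
# Functions in L^2(Ω) discretized on 4 grid points become length-4 vectors;
# a multiplication operator a_u v = u · v becomes elementwise multiplication,
# the infinite-dimensional analogue of a diagonal matrix.
u1 = [1.0, 2.0, 3.0, 4.0]
u2 = [0.5, -1.0, 2.0, 0.0]
v = [1.0, 1.0, 2.0, 3.0]

def mult_op(u, v):
    return [ui * vi for ui, vi in zip(u, v)]

# Multiplication operators commute: A_0 is a commutative subalgebra.
lhs = mult_op(u1, mult_op(u2, v))
rhs = mult_op(u2, mult_op(u1, v))
assert lhs == rhs

# A generic bounded operator (here a matrix swapping two grid points) mixes
# components and does not commute with multiplication operators.
A = [[0.0, 1.0, 0.0, 0.0],
     [1.0, 0.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.0],
     [0.0, 0.0, 0.0, 1.0]]

def mat_op(A, v):
    return [sum(a * vj for a, vj in zip(row, v)) for row in A]

assert mat_op(A, mult_op(u1, v)) != mult_op(u1, mat_op(A, v))
```

Restricting weights to the "diagonal" operators hence yields decoupled sub-models, while generic operators in \(\mathcal{B}(L^{2}(\Omega))\) mix them.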
### Approximation of functions with interactions by \(C^{*}\)-algebra net

We now examine what kinds of functions the \(C^{*}\)-algebra net can approximate. We focus on the case of \(\mathcal{A}=\mathbb{C}^{d\times d}\). Consider a shallow network \(f:\mathcal{A}^{N_{0}}\to\mathcal{A}\) defined as \(f(x)=W_{2}^{*}\sigma(W_{1}x+b)\), where \(W_{1}\in\mathcal{A}^{N_{1}\times N_{0}}\), \(W_{2}\in\mathcal{A}^{N_{1}}\), and \(b\in\mathcal{A}^{N_{1}}\). Let \(\tilde{f}:\mathcal{A}^{N_{0}}\to\mathcal{A}\) be a function of the form \(\tilde{f}(x)=[\sum_{j=1}^{d}f_{kj}(x^{l})]_{kl}\), where \(f_{kj}:\mathbb{C}^{N_{0}d}\to\mathbb{R}\). Here, we abuse the notation and denote by \(x^{l}\in\mathbb{C}^{N_{0}d}\) the \(l\)th column of \(x\) regarded as an \(N_{0}d\) by \(d\) matrix. Assume \(f_{kj}\) is represented as \[f_{kj}(x)=\int_{\mathbb{R}}\int_{\mathbb{R}^{N_{0}d}}T_{kj}(w,b)\sigma(w^{*}x+b)\mathrm{d}w\,\mathrm{d}b \tag{5}\] for some \(T_{kj}:\mathbb{R}^{N_{0}d}\times\mathbb{R}\to\mathbb{R}\). By the theory of the ridgelet transform, such a \(T_{kj}\) exists in most realistic settings [10, 11]. For example, if \(f_{kj}\) and its Fourier transform are in \(L^{1}(\mathbb{R}^{N_{0}d})\) and \(\sigma\) is the ReLU function, then \(f_{kj}\) has a representation of the form of Equation (5). We discretize Equation (5) by replacing the Lebesgue measures with \(\sum_{i=1}^{N_{1}}\delta_{w_{ij}}\) and \(\sum_{i=1}^{N_{1}}\delta_{b_{ij}}\), where \(\delta_{w}\) is the Dirac measure centered at \(w\).
Then, the \((k,l)\)-entry of \(\tilde{f}(x)\) is written as \[\sum_{j=1}^{d}\sum_{i=1}^{N_{1}}T_{kj}(w_{ij},b_{ij})\sigma(w_{ij}^{*}x^{l}+b_{ij}).\] Setting the \(i\)th element of \(W_{2}\in\mathcal{A}^{N_{1}}\) as \([T_{kj}(w_{ij},b_{ij})]_{kj}\), the \((i,m)\)-entry of \(W_{1}\in\mathcal{A}^{N_{1}\times N_{0}}\) as \([(w_{i,j})_{md+l}]_{jl}\), and the \(i\)th element of \(b\in\mathcal{A}^{N_{1}}\) as \([b_{ij}]_{jl}\), we obtain \[\sum_{j=1}^{d}\sum_{i=1}^{N_{1}}T_{kj}(w_{ij},b_{ij})\sigma(w_{ij}^{*}x^{l}+b_{ij})=(W_{2}^{k})^{*}\sigma(W_{1}x^{l}+b^{l}),\] which is the \((k,l)\)-entry of \(f(x)\). **Remark 3**: _As we discussed in Sections 2.2.1 and 3.1.1, a \(C^{*}\)-algebra net over matrices can be regarded as \(d\) interacting sub-models. The above argument shows that the \(l\)th columns of \(f(x)\) and \(\tilde{f}(x)\) depend only on \(x^{l}\). Thus, in this case, if we input data \(x^{l}\) corresponding to the \(l\)th sub-model, then the output is obtained as the \(l\)th column of the \(\mathcal{A}\)-valued output \(f(x)\). On the other hand, the weight matrices \(W_{1}\) and \(W_{2}\) and the bias \(b\) are used in common when providing the outputs for every sub-model, i.e., \(W_{1}\), \(W_{2}\), and \(b\) are learned using data corresponding to all the sub-models. Therefore, \(W_{1}\), \(W_{2}\), and \(b\) induce interactions among the sub-models._

## 4 Experiments

In this section, we numerically demonstrate the abilities of noncommutative \(C^{*}\)-algebra nets using nondiagonal \(C^{*}\)-algebra nets over matrices and group \(C^{*}\)-algebra nets. We use \(C^{*}\)-algebra-valued multilayer perceptrons (MLPs) to simplify the experiments. However, they can be naturally extended to other neural networks, such as convolutional neural networks. The models were implemented with JAX [Bradbury et al., 2018]. Experiments were conducted on an AMD EPYC 7543 CPU and an NVIDIA A100 GPU with CUDA 11.7. See Section 6.1 for additional information on experiments.
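Before turning to the experiments, a minimal sketch of a single \(\mathcal{A}\)-valued linear layer with \(\mathcal{A}=\mathbb{C}^{d\times d}\) may be helpful; the dimensions and values below are our own toy choices, not those of the experiments. Each entry of the weight matrix is itself a \(d\times d\) matrix, and the layer acts by \(y_{k}=\sum_{m}W_{km}x_{m}\):

```python
# A = C^{d×d}; a layer W ∈ A^{N1×N0} acts on x ∈ A^{N0} by y_k = Σ_m W[k][m] x[m],
# where every entry is itself a d×d matrix (plain Python, tiny dimensions).
d, N0, N1 = 2, 3, 2

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(d)) for j in range(d)]
            for i in range(d)]

def matadd(a, b):
    return [[a[i][j] + b[i][j] for j in range(d)] for i in range(d)]

def layer(W, x):
    out = []
    for k in range(N1):
        acc = [[0.0] * d for _ in range(d)]
        for m in range(N0):
            acc = matadd(acc, matmul(W[k][m], x[m]))
        out.append(acc)
    return out

def diag(*v):
    return [[v[i] if i == j else 0.0 for j in range(d)] for i in range(d)]

# Diagonal weights (the commutative case): the layer decouples into d sub-models.
W = [[diag(1.0, 2.0) for _ in range(N0)] for _ in range(N1)]
x = [diag(float(m + 1), 1.0) for m in range(N0)]
y = layer(W, x)
assert y[0] == diag(6.0, 6.0)  # each sub-model evolves independently

# One off-diagonal weight entry makes the sub-models interact:
W[0][0] = [[1.0, 0.5], [0.0, 2.0]]
y2 = layer(W, x)
assert y2[0][0][1] != 0.0
```

With diagonal weights the two sub-models never mix, matching the commutative baseline; a single nondiagonal weight entry immediately couples them.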
### \(C^{*}\)-algebra nets over matrices

In a noncommutative \(C^{*}\)-algebra net over matrices consisting of nondiagonal-matrix parameters, each sub-model is expected to interact with others and thus improve performance compared with its commutative counterpart consisting of diagonal matrices. We demonstrate the effectiveness of such interaction using image classification and neural implicit representation (NIR) tasks. See Section 3.1.1 for the notations. When training the \(j\)th sub-model (\(j=1,2,\ldots,d\)), an original \(N_{0}\)-dimensional input data point \(\mathbf{x}=[\mathbf{x}_{1},\ldots,\mathbf{x}_{N_{0}}]\in\mathbb{R}^{N_{0}}\) is converted to its corresponding representation \(x\in\mathcal{A}^{N_{0}}=\mathbb{R}^{N_{0}\times d\times d}\) such that \(x_{i,j,j}=\mathbf{x}_{i}\) for \(i=1,2,\ldots,N_{0}\) and \(0\) otherwise. The loss between the \(N_{H}\)-dimensional output \(y\in\mathcal{A}^{N_{H}}\) of a \(C^{*}\)-algebra net and the target \(t\in\mathcal{A}^{N_{H}}\) is computed as \(\ell(y_{:,j,j},t_{:,j,j})+\frac{1}{2}\sum_{k,(l\neq j)}(y_{k,j,l}^{2}+y_{k,l,j}^{2})\), where \(\ell\) is a certain loss function; we use the mean squared error (MSE) for image classification and the Huber loss for NIR. The second and third terms suppress the nondiagonal elements of the outputs toward \(0\). In both examples, we use leaky-ReLU as an activation function and apply it only to the diagonal elements of pre-activations.

#### 4.1.1 Image classification

We conduct image classification experiments using MNIST [LeCun et al., 1998], Kuzushiji-MNIST [Clanuwat et al., 2018], and Fashion-MNIST [Xiao et al., 2017], which are composed of 10-class \(28\times 28\) gray-scale images. Each sub-model is trained on a mutually exclusive subset sampled from the original training data and then evaluated on the entire test data. Each subset is sampled to be balanced, i.e., each class has the same number of training samples.
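The diagonal embedding of inputs and the off-diagonal penalty in the loss described above can be sketched as follows (plain Python, our own notation and toy dimensions; the actual experiments use JAX):

```python
# Embed an N_0-dimensional input into A^{N_0} with A = R^{d×d} by writing it on
# the (j, j) diagonal, and penalize off-diagonal leakage of the output.
d, N_H = 3, 2
j = 1

def embed(vec, j):
    # x[i] is a d×d matrix whose (j, j) entry is vec[i]; all other entries are 0.
    return [[[vec[i] if (r == j and c == j) else 0.0 for c in range(d)]
             for r in range(d)] for i in range(len(vec))]

def loss(y, t, j):
    # MSE on the j-th diagonal plus quadratic suppression of the j-th row/column.
    mse = sum((y[k][j][j] - t[k][j][j]) ** 2 for k in range(N_H)) / N_H
    pen = 0.5 * sum(y[k][j][l] ** 2 + y[k][l][j] ** 2
                    for k in range(N_H) for l in range(d) if l != j)
    return mse + pen

t = embed([1.0, 0.0], j)
y = [[[0.0] * d for _ in range(d)] for _ in range(N_H)]
y[0][j][j] = 1.0   # correct diagonal prediction
y[0][j][0] = 0.2   # off-diagonal leakage, penalized by the second term
# loss = 0 (diagonal matches) + 0.5 * 0.2**2 = 0.02
```

The penalty drives the nondiagonal output elements toward zero while the usual loss is applied only to the sub-model's own diagonal.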
As a baseline, we use a commutative \(C^{*}\)-algebra net over diagonal matrices, which consists of the same sub-models that, however, cannot interact with one another. Both noncommutative and commutative models share hyperparameters: the number of layers was set to \(4\), the hidden size was set to \(128\), and the models were trained for \(30\) epochs. Table 1 shows the average test accuracy over sub-models. As can be seen, the noncommutative \(C^{*}\)-algebra net consistently outperforms its commutative counterpart, and the gap is particularly significant when the number of sub-models is \(40\). Note that when the number of sub-models is \(40\), the training dataset for each sub-model is \(40\) times smaller than the original one, and thus the commutative \(C^{*}\)-algebra net fails to learn. Nevertheless, the noncommutative \(C^{*}\)-algebra net mostly retains its performance. These results suggest that sub-models share knowledge through interaction. Additionally, Table 2 illustrates that interaction with related tasks improves performance. Specifically, we prepare five sub-models per dataset for each of MNIST, Kuzushiji-MNIST, and Fashion-MNIST, and train a total of 15 sub-models simultaneously. In addition to the commutative \(C^{*}\)-algebra net, where sub-models have no interaction, and the noncommutative \(C^{*}\)-algebra net, where each sub-model can interact with any other sub-model, we use a block-diagonal noncommutative \(C^{*}\)-algebra net (see Section 3.1.2), where each sub-model can only interact with other sub-models trained on the same dataset. Table 2 shows that the fully noncommutative \(C^{*}\)-algebra net surpasses the block-diagonal one on Kuzushiji-MNIST and Fashion-MNIST, implying that not only intra-task interaction but also inter-task interaction contributes to the performance gain. Note that each dataset is subsampled so that every class has the same number of samples, so the values in Tables 1 and 2 are not directly comparable.
#### 4.1.2 Neural implicit representation

In the next experiment, we use a \(C^{*}\)-algebra net over matrices to learn implicit representations of 2D images that map each pixel coordinate to its RGB colors (Sitzmann et al., 2020; Xie et al., 2022). Specifically, an input coordinate in \([0,1]^{2}\) is transformed into random Fourier features in \([-1,1]^{320}\) and then converted into its \(C^{*}\)-algebraic representation over matrices as an input to a \(C^{*}\)-algebra net over matrices. Similar to the image classification task, we compare noncommutative NIRs with commutative NIRs, setting the number of layers to 6 and the hidden dimension to 256.

\begin{table} \begin{tabular}{l c c c} \hline \hline Dataset & \# sub-models & Commutative & Noncommutative \\ & & \(C^{*}\)-algebra net & \(C^{*}\)-algebra net \\ & & (baseline) & \\ \hline \multirow{4}{*}{MNIST} & 5 & \(0.963\pm 0.003\) & \(0.970\pm 0.003\) \\ & 10 & \(0.937\pm 0.004\) & \(0.956\pm 0.002\) \\ & 20 & \(0.817\pm 0.018\) & \(0.932\pm 0.003\) \\ & 40 & \(0.107\pm 0.008\) & \(0.858\pm 0.004\) \\ \hline \multirow{4}{*}{Kuzushiji-MNIST} & 5 & \(0.839\pm 0.002\) & \(0.858\pm 0.003\) \\ & 10 & \(0.770\pm 0.006\) & \(0.813\pm 0.006\) \\ \cline{1-1} & 20 & \(0.577\pm 0.024\) & \(0.746\pm 0.008\) \\ \cline{1-1} & 40 & \(0.101\pm 0.004\) & \(0.577\pm 0.010\) \\ \hline \multirow{4}{*}{Fashion-MNIST} & 5 & \(0.861\pm 0.002\) & \(0.868\pm 0.002\) \\ \cline{1-1} & 10 & \(0.837\pm 0.002\) & \(0.852\pm 0.004\) \\ \cline{1-1} & 20 & \(0.740\pm 0.007\) & \(0.829\pm 0.004\) \\ \cline{1-1} & 40 & \(0.103\pm 0.010\) & \(0.782\pm 0.005\) \\ \hline \hline \end{tabular} \end{table} Table 1: Average test accuracy over sub-models of commutative and noncommutative \(C^{*}\)-algebra nets over matrices on test datasets. Interactions between sub-models that the noncommutative \(C^{*}\)-algebra net introduces improve performance significantly when the number of sub-models is 40.
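For illustration, a coordinate encoding of the kind described above can be sketched as follows; the Gaussian frequency scale and the use of 160 sine/cosine pairs to reach 320 features are our assumptions, not the paper's exact settings:

```python
import math
import random

# Map a pixel coordinate (u, v) ∈ [0, 1]^2 to random Fourier features in
# [-1, 1]^320: 160 sine/cosine pairs with Gaussian random frequencies.
random.seed(0)
n_pairs, scale = 160, 10.0
B = [(random.gauss(0.0, scale), random.gauss(0.0, scale)) for _ in range(n_pairs)]

def fourier_features(u, v):
    feats = []
    for b0, b1 in B:
        phase = 2.0 * math.pi * (b0 * u + b1 * v)
        feats.append(math.sin(phase))
        feats.append(math.cos(phase))
    return feats

z = fourier_features(0.25, 0.75)
assert len(z) == 320 and all(-1.0 <= f <= 1.0 for f in z)
```

The resulting feature vector is then placed on the diagonal of the matrix representation, exactly as in the image classification setup.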
These NIRs learn \(128\times 128\)-pixel images of ukiyo-e pictures from The Metropolitan Museum of Art1 and photographs of cats from the AFHQ dataset [Choi et al., 2020]. Footnote 1: [https://www.metmuseum.org/art/the-collection](https://www.metmuseum.org/art/the-collection) Figure 2 (top) shows the curves of the average PSNR (Peak Signal-to-Noise Ratio) of sub-models corresponding to the image below. Both commutative and noncommutative \(C^{*}\)-algebra nets consist of five sub-models trained on five ukiyo-e pictures (see also Figure 6). The PSNR (a quality measure) of the noncommutative NIR grows faster, and correspondingly, it learns the details of ground truth images faster than its commutative version (Figure 2 bottom). Noticeably, the noncommutative representations reproduce colors even at the early stage of learning, although the commutative ones remain monochrome after 500 iterations of training. Along with the similar trends observed in the pictures of cats (Figure 3), these results further emphasize the effectiveness of the interaction. Longer-term results are presented in Figure 7. This NIR for 2D images can be extended to represent 3D models. Figure 4 shows synthesized views of 3D implicit representations using the same \(C^{*}\)-algebra MLPs trained on three 3D chairs from the ShapeNet dataset [Chang et al., 2015]. The presented poses are unseen during training. Again, the noncommutative NIR reconstructs the chair models with less noisy artifacts, indicating that interaction helps efficient learning. See Sections 6.1 and 6.2 for details and results.

### Group \(C^{*}\)-algebra nets

As another experimental example of \(C^{*}\)-algebra nets, we showcase group \(C^{*}\)-algebra nets, which we introduced in Section 3.1.4. Group \(C^{*}\)-algebra nets take functions on a symmetric group as input and return functions on the group as output.
\begin{table} \begin{tabular}{l c c c} \hline \hline Dataset & Commutative & Block-diagonal & Noncommutative \\ & \(C^{*}\)-algebra net & noncommutative & \(C^{*}\)-algebra net \\ & & \(C^{*}\)-algebra net & \\ \hline MNIST & & & \\ K-MNIST & & & \\ F-MNIST & & & \\ \hline \hline \end{tabular} \end{table} Table 2: Average test accuracy over five sub-models simultaneously trained on the three datasets. The (fully) noncommutative \(C^{*}\)-algebra net outperforms the block-diagonal noncommutative \(C^{*}\)-algebra net on Kuzushiji-MNIST (K-MNIST) and Fashion-MNIST (F-MNIST), indicating that the interaction can leverage related tasks. Refer to Section 3.1.4 for notations. A group \(C^{*}\)-algebra net is trained on data \(\{(x,y)\in\mathcal{A}^{N_{0}}\times\mathcal{A}^{N_{H}}\}\), where \(x\) and \(y\) are \(N_{0}\)- and \(N_{H}\)-dimensional vector-valued functions. Practically, such functions may be represented as tensors, e.g., \(x\in\mathbb{C}^{N_{0}\times\#G}\), where \(\#G\) is the size of \(G\). Using the product between functions explained in Section 3.1.4 and element-wise addition, a linear layer, and consequently an MLP, on \(\mathcal{A}\) can be constructed. Following the \(C^{*}\)-algebra nets over matrices, we use leaky ReLU for activations. One of the simplest tasks for the group \(C^{*}\)-algebra nets is to learn permutation-invariant representations, e.g., predicting the sum of given \(d\) digits. In this case, \(x\) is a function that outputs permutations of features of \(d\) digits, and \(y\) is a constant function that returns their sum for all \(g\in G\). In this experiment, we use 32-dimensional feature vectors of MNIST digits extracted by a pre-trained CNN. Digits are selected so that their sum is less than 10 to simplify the problem, and the model is trained to classify the sum of given digits using cross-entropy loss.
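A sketch (our own construction, with toy feature dimensions) of how such a permutation input can be assembled: for \(d\) digit feature vectors \(z_{1},\ldots,z_{d}\), the input \(x(g)\) stacks the features permuted by \(g\), for every \(g\in S_{d}\), so that all orderings enter the network simultaneously:

```python
from itertools import permutations

# d digit feature vectors z_0, ..., z_{d-1}; the input x has one column per
# g ∈ S_d, holding the concatenation z_{g(0)}, ..., z_{g(d-1)}.
d, feat_dim = 3, 4
G = list(permutations(range(d)))
z = [[float(k * d + i) for i in range(feat_dim)] for k in range(d)]

def build_input(z):
    cols = []
    for g in G:
        col = []
        for k in g:
            col.extend(z[k])
        cols.append(col)
    # transpose to shape (d * feat_dim) × #G
    return [[cols[c][r] for c in range(len(G))] for r in range(d * feat_dim)]

x = build_input(z)
assert len(x) == d * feat_dim and len(x[0]) == len(G) == 6
# The digits' sum is the same for every permutation, so every column carries
# the same total; a trained net should thus output a constant function on G.
assert all(sum(x[r][c] for r in range(d * feat_dim)) == 54.0 for c in range(6))
```

Since the target is identical across all permutations, a network whose outputs become constant functions on \(G\) has learned a permutation-invariant predictor.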
We set the number of layers to 4 and the hidden dimension to 32. For comparison, we prepare a permutation-invariant DeepSet model (Zaheer et al., 2017), which uses sum pooling for permutation invariance and contains the same number of floating-point parameters as the group \(C^{*}\)-algebra net.

Figure 2: Average PSNR of implicit representations of the image below (top) and reconstructions of the ground truth image at every 100 iterations (bottom). The noncommutative \(C^{*}\)-algebra net learns the geometry and colors of the image faster than its commutative counterpart.

Table 3 displays the results of the task with various training dataset sizes when \(d=3\). What stands out in the table is that the group \(C^{*}\)-algebra net consistently outperforms the DeepSet model by large margins, especially when the number of training data is limited. Additionally, as can be found in Figure 5, the group \(C^{*}\)-algebra net converges much faster than the DeepSet model. These results suggest that the inductive biases introduced by the product structure in the group \(C^{*}\)-algebra net are effective.

## 5 Conclusion and Discussion

In this paper, we have generalized the space of neural network parameters to noncommutative \(C^{*}\)-algebras. Their rich product structures bring powerful properties to neural networks. For example, a \(C^{*}\)-algebra net over nondiagonal matrices enables its sub-models to interact, and a group \(C^{*}\)-algebra net learns permutation-equivariant features. We have empirically demonstrated the validity of these properties in various tasks: image classification, neural implicit representation, and a sum-of-digits task. Although Section 4 experimentally showed that noncommutative \(C^{*}\)-algebra nets outperformed the baselines, practical concerns about noncommutative \(C^{*}\)-algebra nets may arise from their computational complexity.
Figure 3: Ground truth images and their implicit representations of commutative and noncommutative \(C^{*}\)-algebra nets after 500 iterations of training. The noncommutative \(C^{*}\)-algebra net reproduces colors more faithfully.

In particular, the \(C^{*}\)-algebra net over matrices used in the experiments requires \(O(d^{2})\) space complexity for the number of sub-models \(d\), which limits the possible number of sub-models and their interactions. This complexity could be alleviated by, for example, parameter sharing or introducing structures to the nondiagonal elements, by analogy with self-attention and its efficient variants. The design of such structures may be data-dependent, and we leave it for future research. Another important and interesting research direction is the application of infinite-dimensional \(C^{*}\)-algebras. In this paper, we focused mainly on finite-dimensional \(C^{*}\)-algebras. We showed that the product structure in \(C^{*}\)-algebras is a powerful tool for neural networks, enabling, for example, learning with interactions and group equivariance (or invariance) even in the finite-dimensional case. Infinite-dimensional \(C^{*}\)-algebras allow us to analyze functional data.

Figure 4: Synthesized views of 3D implicit representations of commutative and noncommutative \(C^{*}\)-algebra nets after 5000 iterations of training. The noncommutative \(C^{*}\)-algebra net can produce finer details. Note that the commutative \(C^{*}\)-algebra net could not synthesize the chair on the left.

Practical applications of our framework to functional data with infinite-dimensional
\(C^{*}\)-algebras are also our future work. Our framework with noncommutative \(C^{*}\)-algebras has a wide range of applications, and we believe that our framework opens up a new approach to learning neural networks.

\begin{table} \begin{tabular}{c c c} \hline \hline Dataset size & DeepSet & Group \(C^{*}\)-algebra net \\ \hline 1k & \(0.413\pm 0.031\) & \(0.777\pm 0.009\) \\ 5k & \(0.807\pm 0.031\) & \(0.921\pm 0.002\) \\ 10k & \(0.878\pm 0.009\) & \(0.944\pm 0.005\) \\ 50k & \(0.904\pm 0.007\) & \(0.971\pm 0.001\) \\ \hline \hline \end{tabular} \end{table} Table 3: Average test accuracy of a DeepSet model and a group \(C^{*}\)-algebra net on test data of the sum-of-digits task after 100 epochs of training. The group \(C^{*}\)-algebra net can learn from fewer data.

## Acknowledgement

We would like to thank Dr. Tomohiro Hayase for a helpful and constructive discussion about the application of \(C^{*}\)-algebras to neural networks.

## 6 Supplemental Material

### Implementation details

We implemented \(C^{*}\)-algebra nets using JAX [Bradbury et al., 2018] with equinox [Kidger and Garcia, 2021] and optax [Babuschkin et al., 2020]. Throughout the experiments, we used the Adam optimizer [Kingma and Ba, 2015] with a learning rate of \(1.0\times 10^{-4}\), except for the 3D NIR experiment, where Adam's initial learning rate was set to \(1.0\times 10^{-3}\). We set the batch size to 32, except for the 2D NIR, where each batch consisted of all pixels, and the 3D NIR, where a batch size of 4 was used. The implementation of the 3D neural implicit representation (Section 4.1.2) is based on a simple NeRF-like model and its renderer in Tancik et al. [2021]. For training, 25 views of each 3D chair from the ShapeNet dataset [Chang et al., 2015] are adopted with their \(64\times 64\)-pixel reference images. The same \(C^{*}\)-algebra MLPs as in the 2D experiments were used, except for the hyperparameters: the number of layers was four and the hidden dimension was 128.
Figure 5: Average test accuracy curves of a DeepSet model and a group \(C^{*}\)-algebra net trained on 10k data of the sum-of-digits task. The group \(C^{*}\)-algebra net can learn more efficiently and effectively.

The permutation-invariant DeepSet model used in Section 4.2 processes each data sample with a four-layer MLP with hyperbolic tangent activation, sum-pooling, and a linear classifier. Although we tried leaky ReLU activation as in the group \(C^{*}\)-algebra net, this setting yielded sub-optimal results. The hidden dimension of the MLP was set to 84 to match the number of floating-point parameters to that of the group \(C^{*}\)-algebra net.

### Additional results

Figures 6 and 7 present additional figures for the 2D NIRs (Section 4.1.2). Figure 6 is a ukiyo-e counterpart of Figure 3 in the main text. Again, the noncommutative \(C^{*}\)-algebra net learns color details faster than the commutative one. Figure 7 shows average PSNR curves over three NIRs of the image initialized with different random states for 5,000 iterations. Although the gap is not as large as at the beginning stage, the noncommutative \(C^{*}\)-algebra net still outperforms the commutative one after convergence.

\begin{table} \begin{tabular}{c c} \hline \hline Commutative \(C^{*}\)-algebra net & Noncommutative \(C^{*}\)-algebra net \\ \hline \(18.40\pm 4.30\) & \(25.22\pm 1.45\) \\ \hline \hline \end{tabular} \end{table} Table 4: Average PSNR over synthesized views. The specified poses of the views are unseen during training.

Figure 8: Synthesized views of implicit representations of a chair.

Figure 7: Average PSNR over implicit representations of the image of commutative and noncommutative \(C^{*}\)-algebra nets trained on five cat pictures (top) and reconstructions of the ground truth image at every 500 iterations (bottom).
2304.03408
Dynamics of Finite Width Kernel and Prediction Fluctuations in Mean Field Neural Networks
We analyze the dynamics of finite width effects in wide but finite feature learning neural networks. Starting from a dynamical mean field theory description of infinite width deep neural network kernel and prediction dynamics, we provide a characterization of the $O(1/\sqrt{\text{width}})$ fluctuations of the DMFT order parameters over random initializations of the network weights. Our results, while perturbative in width, unlike prior analyses, are non-perturbative in the strength of feature learning. In the lazy limit of network training, all kernels are random but static in time and the prediction variance has a universal form. However, in the rich, feature learning regime, the fluctuations of the kernels and predictions are dynamically coupled with a variance that can be computed self-consistently. In two layer networks, we show how feature learning can dynamically reduce the variance of the final tangent kernel and final network predictions. We also show how initialization variance can slow down online learning in wide but finite networks. In deeper networks, kernel variance can dramatically accumulate through subsequent layers at large feature learning strengths, but feature learning continues to improve the signal-to-noise ratio of the feature kernels. In discrete time, we demonstrate that large learning rate phenomena such as edge of stability effects can be well captured by infinite width dynamics and that initialization variance can decrease dynamically. For CNNs trained on CIFAR-10, we empirically find significant corrections to both the bias and variance of network dynamics due to finite width.
Blake Bordelon, Cengiz Pehlevan
2023-04-06T23:11:49Z
http://arxiv.org/abs/2304.03408v3
# Dynamics of Finite Width Kernel and Prediction Fluctuations in Mean Field Neural Networks ###### Abstract We analyze the dynamics of finite width effects in wide but finite feature learning neural networks. Starting from a dynamical mean field theory description of infinite width deep neural network kernel and prediction dynamics, we provide a characterization of the \(\mathcal{O}(1/\sqrt{\text{width}})\) fluctuations of the DMFT order parameters over random initializations of the network weights. Our results, while perturbative in width, unlike prior analyses, are non-perturbative in the strength of feature learning. In the lazy limit of network training, all kernels are random but static in time and the prediction variance has a universal form. However, in the rich, feature learning regime, the fluctuations of the kernels and predictions are dynamically coupled with a variance that can be computed self-consistently. In two layer networks, we show how feature learning can dynamically reduce the variance of the final tangent kernel and final network predictions. We also show how initialization variance can slow down online learning in wide but finite networks. In deeper networks, kernel variance can dramatically accumulate through subsequent layers at large feature learning strengths, but feature learning continues to improve the signal-to-noise ratio of the feature kernels. In discrete time, we demonstrate that large learning rate phenomena such as edge of stability effects can be well captured by infinite width dynamics and that initialization variance can decrease dynamically. For CNNs trained on CIFAR-10, we empirically find significant corrections to both the bias and variance of network dynamics due to finite width. ## 1 Introduction Learning dynamics of deep neural networks are challenging to analyze and understand theoretically, but recent progress has been made by studying the idealization of infinite-width networks. 
Two types of infinite-width limits have been especially fruitful. First, the kernel or lazy infinite-width limit, which arises in the standard or neural tangent kernel (NTK) parameterization, gives prediction dynamics which correspond to a linear model [1, 2, 3, 4, 5]. This limit is theoretically tractable but fails to capture adaptation of internal features in the neural network, which are thought to be crucial to the success of deep learning in practice. Alternatively, the mean field or \(\mu\)-parameterization allows feature learning at infinite width [6, 7, 8, 9]. With a set of well-defined infinite-width limits, prior theoretical works have analyzed finite networks in the NTK parameterization perturbatively, revealing that finite width both enhances the amount of feature evolution (which is still small in this limit) and introduces variance in the kernels and the predictions over random initializations [10, 11, 12, 13, 14, 15]. Because of these competing effects, in some situations wider networks are better, and in others wider networks perform worse [16]. In this paper, we analyze finite-width network learning dynamics in the mean field parameterization. In this parameterization, wide networks are empirically observed to outperform narrow networks [7, 17, 18]. Our results and framework provide a methodology for reasoning about detrimental finite-size effects in such feature-learning neural networks. We show that observable averages involving kernels and predictions obey a well-defined power series in inverse width even in rich training regimes. We generally observe that the leading finite-size corrections to both the bias and variance components of the square loss are increased for narrower networks, and diminish performance. Further, we show that richer networks are closer to their corresponding infinite-width mean field limit.
For simple tasks and architectures the leading \(\mathcal{O}(1/\text{width})\) corrections to the error can be descriptive, while for large sample size or more realistic tasks, higher order corrections appear to become relevant. Concretely, our contributions are listed below:

1. Starting from a dynamical mean field theory (DMFT) description of infinite-width nonlinear deep neural network training dynamics, we provide a complete recipe for computing fluctuation dynamics of DMFT order parameters over random network initializations during training. These include the variance of the training and test predictions and the \(\mathcal{O}(1/\text{width})\) variance of feature and gradient kernels throughout training.
2. We first solve these equations for the lazy limit, where no feature learning occurs, recovering a simple differential equation which describes how prediction variance evolves during learning.
3. We solve for variance in the rich feature learning regime in two-layer networks and deep linear networks. We show richer nonlinear dynamics improve the signal-to-noise ratio (SNR) of kernels and predictions, leading to closer agreement with infinite-width mean field behavior.
4. We analyze in a two-layer model why larger training set sizes in the overparameterized regime enhance finite-width effects and how richer training can reduce this effect.
5. We show that large learning rate effects such as edge-of-stability [19, 20, 21] dynamics can be well captured by infinite width theory, with finite size variance accurately predicted by our theory.
6. We test our predictions in Convolutional Neural Networks (CNNs) trained on CIFAR-10 [22]. We observe that wider networks and richly trained networks have lower logit variance as predicted. However, the timescale of training dynamics is significantly altered by finite width even after ensembling. We argue that this is due to a detrimental correction to the mean dynamical NTK.
### Related Works

Infinite-width networks at initialization converge to a Gaussian process with a covariance kernel that is computed with a layerwise recursion [23, 24, 25, 26, 13]. In the large but finite width limit, these kernels do not concentrate at each layer, but rather propagate finite-size corrections forward through the network [27, 28, 29, 30, 14]. During gradient-based training with the NTK parameterization, a hierarchy of differential equations has been utilized to compute small feature learning corrections to the kernel through training [10, 11, 12, 13]. However, the higher order tensors required to compute the theory are initialization dependent, and the theory breaks down for sufficiently rich feature learning dynamics. Various works on Bayesian deep networks have also considered fluctuations and perturbations in the kernels at finite width during inference [31, 32]. Other relevant works in this domain are [33, 34, 35, 36, 37, 38, 39]. An alternative to the standard/NTK parameterization is the mean field or \(\mu P\) limit where features evolve even at infinite width [6, 7, 8, 9, 40, 41, 42]. Recent studies on two-layer mean field networks trained online with Gaussian data have revealed that finite networks have larger sensitivity to SGD noise [43, 44]. Here, we examine how finite-width neural networks are sensitive to initialization noise. Prior work has studied how the weight space distribution and predictions converge to mean field dynamics with a dynamical error \(\mathcal{O}(1/\sqrt{\text{width}})\) [40, 45]; however, in the deep case this requires a probability distribution over couplings between adjacent layers. Our analysis, by contrast, focuses on a function and kernel space picture which decouples interactions between layers at infinite width.
A starting point for our present analysis of finite-width effects was a previous set of studies [9, 46] which identified the DMFT action corresponding to randomly initialized deep NNs that generates the distribution over kernel and network prediction dynamics. These prior works discuss the possibility of using a finite-size perturbation series but crucially failed to recognize the role of the network prediction fluctuations on the kernel fluctuations, which are necessary to close the self-consistent equations in the rich regime. Using the mean field action to calculate a perturbation expansion around DMFT is a long-celebrated technique for obtaining finite size corrections in physics [47, 48, 49, 50] and has been utilized for random untrained recurrent networks [51, 52], and more recently to calculate the variance of feature kernels \(\Phi^{\ell}\) at initialization \(t=0\) in deep MLPs or RNNs [53]. We extend these prior studies to the dynamics of training and probe how feature learning alters finite size corrections.

## 2 Problem Setup

We consider wide neural networks where the number of neurons (or channels for a CNN) \(N\) in each layer is large.
For a multi-layer perceptron (MLP), the network is defined as a map from input \(\mathbf{x}_{\mu}\in\mathbb{R}^{D}\) to hidden activations \(\mathbf{h}_{\mu}^{\ell}\in\mathbb{R}^{N}\) in layers \(\ell\in\{1,...,L\}\) and finally output \(f_{\mu}\)
\[f_{\mu}=\frac{1}{\gamma N}\mathbf{w}^{L}\cdot\phi(\mathbf{h}_{\mu}^{L})\,\quad\mathbf{h}_{\mu}^{\ell+1}=\frac{1}{\sqrt{N}}\mathbf{W}^{\ell}\phi(\mathbf{h}_{\mu}^{\ell})\,\quad\mathbf{h}_{\mu}^{1}=\frac{1}{\sqrt{D}}\mathbf{W}^{0}\mathbf{x}_{\mu}, \tag{1}\]
where \(\gamma\) is a scale factor that controls the feature learning strength: large \(\gamma\) leads to rich feature learning dynamics, while the limit of small \(\gamma\to 0\) (or, more generally, \(\gamma\) scaling as \(N^{-\alpha}\) for \(\alpha>0\) as \(N\to\infty\); NTK parameterization corresponds to \(\alpha=\frac{1}{2}\)) gives lazy learning where no features are learned [4, 7, 9]. The parameters \(\mathbf{\theta}=\{\mathbf{W}^{0},\mathbf{W}^{1},...,\mathbf{w}^{L}\}\) are optimized with gradient descent or gradient flow \(\frac{d}{dt}\mathbf{\theta}=-N\gamma^{2}\nabla_{\mathbf{\theta}}\mathcal{L}\), where \(\mathcal{L}=\mathbb{E}_{\mathbf{x}_{\mu}\in\mathcal{D}}\,\ell\left(f(\mathbf{x}_{\mu},\mathbf{\theta}),y_{\mu}\right)\) is a loss computed over the dataset \(\mathcal{D}=\{(\mathbf{x}_{1},y_{1}),(\mathbf{x}_{2},y_{2}),\ldots(\mathbf{x}_{P},y_{P})\}\). This parameterization and learning rate scaling ensure that \(\frac{d}{dt}f_{\mu}\sim\mathcal{O}_{N,\gamma}(1)\) and \(\frac{d}{dt}\mathbf{h}_{\mu}^{\ell}=\mathcal{O}_{N,\gamma}(\gamma)\) at initialization. This is equivalent to the maximal update parameterization (\(\mu\)P) [8], and can be easily extended to other architectures including neural networks with trainable bias parameters as well as convolutional, recurrent, and attention layers [8, 9].

## 3 Review of Dynamical Mean Field Theory

The infinite-width training dynamics of feature learning neural networks were described by a DMFT in [9, 46].
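As a concrete illustration of the scaling in (1), here is a minimal NumPy sketch (hypothetical sizes and a tanh nonlinearity; not the networks used in the experiments below). It shows the two signatures of the mean field parameterization at initialization: hidden preactivations stay \(\mathcal{O}(1)\) regardless of width, while the network output shrinks as \(1/(\gamma\sqrt{N})\):

```python
import numpy as np

rng = np.random.default_rng(0)

def mup_forward(x, N, L=3, gamma=1.0):
    """One forward pass of Eq. (1): all weight entries ~ N(0,1), output scaled by 1/(gamma N)."""
    D = x.shape[0]
    h = rng.normal(size=(N, D)) @ x / np.sqrt(D)           # h^1
    for _ in range(L - 1):                                 # h^2, ..., h^L
        h = rng.normal(size=(N, N)) @ np.tanh(h) / np.sqrt(N)
    f = rng.normal(size=N) @ np.tanh(h) / (gamma * N)      # network output f
    return f, h

D = 32
x = np.ones(D)  # |x|^2 = D, so the entries of h^1 are ~ N(0, 1)

rms_f, msq_h = {}, {}
for N in (16, 256):
    outs = [mup_forward(x, N) for _ in range(1000)]        # ensemble over initializations
    rms_f[N] = np.sqrt(np.mean([f**2 for f, _ in outs]))
    msq_h[N] = np.mean([np.mean(h**2) for _, h in outs])
print(rms_f, msq_h)  # rms_f shrinks like 1/sqrt(N); msq_h stays O(1) at both widths
```

Since \(\mathbb{E}[f^{2}]=\mathbb{E}[\Phi^{L}]/(\gamma^{2}N)\) with the readout weights independent of the features, the ratio of output scales between widths 16 and 256 should be close to \(\sqrt{256/16}=4\).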
We first review the DMFT's key concepts before extending it to get insight into finite widths. To arrive at the DMFT, one first notes that the training dynamics of such networks can be rewritten in terms of a collection of dynamical variables (or _order parameters_) \(\mathbf{q}=\mathrm{Vec}\{f_{\mu}(t),\Phi_{\mu\nu}^{\ell}(t,s),G_{\mu\nu}^{\ell}(t,s),...\}\) [9], which include the feature and gradient kernels [9, 54]
\[\Phi_{\mu\nu}^{\ell}(t,s)=\frac{1}{N}\phi(\mathbf{h}_{\mu}^{\ell}(t))\cdot\phi(\mathbf{h}_{\nu}^{\ell}(s))\,\quad G_{\mu\nu}^{\ell}(t,s)=\frac{1}{N}\mathbf{g}_{\mu}^{\ell}(t)\cdot\mathbf{g}_{\nu}^{\ell}(s), \tag{2}\]
where \(\mathbf{g}_{\mu}^{\ell}(t)=\gamma N\frac{\partial f_{\mu}(t)}{\partial\mathbf{h}_{\mu}^{\ell}(t)}\) are the back-propagated gradient signals. Further, for width-\(N\) networks the distribution of these dynamical variables across weight initializations (from a Gaussian distribution \(\mathbf{\theta}\sim\mathcal{N}(0,\mathbf{I})\)) is given by \(p(\mathbf{q})\propto\exp{(NS(\mathbf{q}))}\), where the action \(S(\mathbf{q})\) contains interactions between neuron activations and the kernels at each layer [9]. The DMFT introduced in [9] arises in the \(N\to\infty\) limit, when \(p(\mathbf{q})\) is strongly peaked around the saddle point \(\mathbf{q}_{\infty}\) where \(\frac{\partial S}{\partial\mathbf{q}}|_{\mathbf{q}_{\infty}}=0\). Analysis of the saddle point equations reveals that the training dynamics of the neural network can alternatively be described by a stochastic process. A key feature of this process is that it describes the training time evolution of the distribution of neuron pre-activations in each layer (informally, the histogram of the entries of \(\mathbf{h}_{\mu}^{\ell}(t)\)), where each neuron's pre-activation behaves as an i.i.d. draw from this _single-site_ stochastic process. We denote these random processes by \(h_{\mu}^{\ell}(t)\).
Kernels in (2) are now computed as _averages_ over these infinite-width single-site processes \(\Phi_{\mu\nu}^{\ell}(t,s)=\left\langle\phi(h_{\mu}^{\ell}(t))\phi(h_{\nu}^{\ell}(s))\right\rangle\), \(G_{\mu\nu}^{\ell}(t,s)=\left\langle g_{\mu}^{\ell}(t)g_{\nu}^{\ell}(s)\right\rangle\), where the averages arise from the \(N\to\infty\) limit of the dot products in (2). DMFT also provides a set of self-consistent equations that describe the complete statistics of these random processes, which depend on the kernels, as well as other quantities.

## 4 Dynamical Fluctuations Around Mean Field Theory

We are interested in going beyond the infinite-width limit to study more realistic finite-width networks. In this regime, the order parameters \(\mathbf{q}\) fluctuate in an \(\mathcal{O}(1/\sqrt{N})\) neighborhood of \(\mathbf{q}_{\infty}\) [55, 51, 53, 46]. Statistics of these fluctuations can be calculated from a general cumulant expansion (see App. C) [55, 56, 51]. We will focus on the leading-order corrections to the infinite-width limit in this expansion.
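Before stating the general result, the \(\mathcal{O}(1/\sqrt{N})\) scale of these fluctuations is easy to see empirically for the first-layer feature kernel at initialization (a toy check, not the full DMFT propagator). When \(|\mathbf{x}|^{2}=D\), the preactivations \(h^{1}_{i}\) are i.i.d. standard Gaussians, so \(\Phi^{1}(\mathbf{x},\mathbf{x})=\frac{1}{N}\sum_{i}\phi(h^{1}_{i})^{2}\) has variance exactly proportional to \(1/N\):

```python
import numpy as np

rng = np.random.default_rng(1)
E = 20_000  # ensemble of random initializations

def phi_kernel_samples(N):
    # h^1_i = w_i . x / sqrt(D) are i.i.d. N(0,1) when |x|^2 = D, so no explicit
    # weight matrices are needed for this check.
    H = rng.normal(size=(E, N))
    return np.mean(np.tanh(H) ** 2, axis=1)  # Phi^1(x,x) for each ensemble member

nvar = {N: N * phi_kernel_samples(N).var() for N in (32, 256)}
print(nvar)  # N * Var(Phi^1) is width-independent, i.e. Var(Phi^1) ~ 1/N
```

Here the "uncoupled" variance \(N\,\mathrm{Var}(\Phi^{1})=\mathrm{Var}(\phi(h)^{2})\) is a fixed number, matching the picture of \(\mathcal{O}(1/\sqrt{N})\) kernel fluctuations that the propagator formalizes during training.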
**Proposition 1**: _The finite-width \(N\) average of an observable \(O(\mathbf{q})\) across initializations, which we denote by \(\left\langle O(\mathbf{q})\right\rangle_{N}\), admits an expansion whose leading terms are_
\[\left\langle O(\mathbf{q})\right\rangle_{N}=\frac{\int d\mathbf{q}\exp{(NS[\mathbf{q}])}O(\mathbf{q})}{\int d\mathbf{q}\exp{(NS[\mathbf{q}])}}=\left\langle O(\mathbf{q})\right\rangle_{\infty}+N\left[\left\langle V(\mathbf{q})O(\mathbf{q})\right\rangle_{\infty}-\left\langle V(\mathbf{q})\right\rangle_{\infty}\left\langle O(\mathbf{q})\right\rangle_{\infty}\right]+..., \tag{3}\]
_where \(\left\langle\cdot\right\rangle_{\infty}\) denotes an average over the Gaussian distribution \(\mathbf{q}\sim\mathcal{N}\left(\mathbf{q}_{\infty},-\frac{1}{N}\left(\nabla_{\mathbf{q}}^{2}S[\mathbf{q}_{\infty}]\right)^{-1}\right)\) and the function \(V(\mathbf{q})\equiv S(\mathbf{q})-S(\mathbf{q}_{\infty})-\frac{1}{2}(\mathbf{q}-\mathbf{q}_{\infty})^{\top}\nabla_{\mathbf{q}}^{2}S(\mathbf{q}_{\infty})(\mathbf{q}-\mathbf{q}_{\infty})\) contains the cubic and higher terms in the Taylor expansion of \(S\) around \(\mathbf{q}_{\infty}\). The terms shown include all the leading and sub-leading terms in the series in powers of \(1/N\); the terms in the ellipses are suppressed by at least \(\mathcal{O}(N^{-1})\) relative to the terms provided._

The proof of this statement is given in App. C. The central object needed to characterize finite size effects is the unperturbed covariance (the _propagator_) \(\mathbf{\Sigma}=-\left[\nabla^{2}S(\mathbf{q}_{\infty})\right]^{-1}\). This object captures the leading order fluctuation statistics \(\left\langle\left(\mathbf{q}-\mathbf{q}_{\infty}\right)\left(\mathbf{q}-\mathbf{q}_{\infty}\right)^{\top}\right\rangle_{N}=\frac{1}{N}\mathbf{\Sigma}+\mathcal{O}(N^{-2})\) (App. C.1), and can be used to reason about, for example, the expected square error over random initializations.
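Proposition 1 can be checked directly in a one-dimensional toy model where the integrals are computable. Take the (hypothetical, purely illustrative) action \(S(q)=-q^{2}/2-\lambda q^{4}\), so \(q_{\infty}=0\), the Gaussian has variance \(1/N\), and \(V(q)=-\lambda q^{4}\) exactly. For \(O(q)=q^{2}\), the Gaussian moments \(\langle q^{2}\rangle=1/N\), \(\langle q^{4}\rangle=3/N^{2}\), \(\langle q^{6}\rangle=15/N^{3}\) give the corrected estimate \(1/N-12\lambda/N^{2}\):

```python
import numpy as np
from scipy.integrate import quad

N, lam = 50.0, 0.1
S = lambda q: -q**2 / 2 - lam * q**4  # toy 1-D action; saddle point q_inf = 0

# "Exact" finite-N average of O(q) = q^2 under p(q) ∝ exp(N S(q))
Z = quad(lambda q: np.exp(N * S(q)), -10, 10)[0]
exact = quad(lambda q: q**2 * np.exp(N * S(q)), -10, 10)[0] / Z

# Proposition-1 expansion: N [<V q^2> - <V><q^2>] = -N lam (15/N^3 - 3/N^3) = -12 lam / N^2
leading = 1 / N
corrected = 1 / N - 12 * lam / N**2
print(exact, leading, corrected)
```

The corrected estimate lands much closer to the exact integral than the pure saddle-point value \(1/N\); the residual is \(\mathcal{O}(N^{-3})\), consistent with the ellipses in (3).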
Correction terms at finite width may give a possible explanation for the superior performance of wide networks at fixed \(\gamma\) [7, 17, 18]. To calculate such corrections, in App. D we provide a complete description of the Hessian \(\nabla_{\mathbf{q}}^{2}S(\mathbf{q})\) and its inverse (the propagator) for a depth-\(L\) network. This description constitutes one of our main results. The resulting expressions are lengthy and are left to App. D; here, we discuss them at a high level. Conceptually, there are two primary ingredients for obtaining the full propagator:

* Hessian sub-blocks \(\kappa\) which describe the _uncoupled variances_ of the kernels, such as \[\kappa_{\mu\nu\alpha\beta}^{\Phi^{\ell}}(t,s,t^{\prime},s^{\prime})\equiv \left\langle\phi(h_{\mu}^{\ell}(t))\phi(h_{\nu}^{\ell}(s))\phi(h_{\alpha}^{ \ell}(t^{\prime}))\phi(h_{\beta}^{\ell}(s^{\prime}))\right\rangle-\Phi_{\mu \nu}^{\ell}(t,s)\Phi_{\alpha\beta}^{\ell}(t^{\prime},s^{\prime})\] (4) Similar terms also appear in other studies on finite width Bayesian inference [13, 31, 32] and in studies of kernel variance at initialization [27, 14, 29, 53].
* Blocks which capture the _sensitivity_ of field averages to perturbations of the order parameters, such as \[D_{\mu\nu\alpha\beta}^{\Phi^{\ell}\Phi^{\ell-1}}(t,s,t^{\prime},s^{\prime}) \equiv\frac{\partial\left\langle\phi(h_{\mu}^{\ell}(t))\phi(h_{\nu}^{\ell}(s ))\right\rangle}{\partial\Phi_{\alpha\beta}^{\ell-1}\left(t^{\prime},s^{\prime }\right)}\,\quad D_{\mu\nu\alpha}^{G^{\ell}\Delta}(t,s,t^{\prime})\equiv\frac{ \partial\left\langle g_{\mu}^{\ell}(t)g_{\nu}^{\ell}(s)\right\rangle}{ \partial\Delta_{\alpha}(t^{\prime})},\] (5) where \(\Delta_{\mu}(t)=-\frac{\partial\ell(f_{\mu},y_{\mu})}{\partial f_{\mu}}|_{f_{ \mu}(t)}\) is the error signal for each data point.
Abstractly, we can consider the uncoupled variances \(\mathbf{\kappa}\) as "sources" of finite-width noise for each order parameter and the \(\mathbf{D}\) blocks as summarizing a directed causal graph which captures how this noise propagates in the network (through layers and network predictions). In Figure 1, we illustrate this graph, showing directed lines that represent causal influences of order parameters on fields and vice versa. For instance, if \(\Phi^{\ell}\) were perturbed, \(D^{\Phi^{\ell+1},\Phi^{\ell}}\) would quantify the resulting perturbation to \(\Phi^{\ell+1}\) through the fields \(h^{\ell+1}\). In App. D, we calculate the \(\mathbf{\kappa}\) and \(\mathbf{D}\) tensors, and show how to use them to calculate the propagator. As an example of our results:

**Proposition 2**: _Partition \(\mathbf{q}\) into primal \(\mathbf{q}_{1}=\text{Vec}\{f_{\mu}(t),\Phi_{\mu\nu}^{\ell}(t,s)...\}\) and conjugate variables \(\mathbf{q}_{2}=\text{Vec}\{\hat{f}_{\mu}(t),\hat{\Phi}_{\mu\nu}^{\ell}(t,s)...\}\). Let \(\mathbf{\kappa}=\frac{\partial^{2}}{\partial\mathbf{q}_{2}\partial\mathbf{q}_{2}^{\top}}S[\mathbf{q}_{1},\mathbf{q}_{2}]\) and \(\mathbf{D}=\frac{\partial^{2}}{\partial\mathbf{q}_{2}\partial\mathbf{q}_{1}^{\top}}S[\mathbf{q}_{1},\mathbf{q}_{2}]\); then the propagator for \(\mathbf{q}_{1}\) has the form \(\mathbf{\Sigma}_{\mathbf{q}_{1}}=\mathbf{D}^{-1}\mathbf{\kappa}\left[\mathbf{D}^{-1}\right]^{\top}\) (App. D). The variables \(\mathbf{q}_{1}\) are related to network observables, while the conjugates \(\mathbf{q}_{2}\) arise as Lagrange multipliers in the DMFT calculation. From the propagator \(\mathbf{\Sigma}_{\mathbf{q}_{1}}\) we can read off the variance of network observables such as \(N\text{Var}(f_{\mu})\sim\Sigma_{f_{\mu}}\)._

The necessary order parameters for calculating the fluctuations are obtained by solving the DMFT using the numerical methods introduced in [9]. We provide pseudocode for this procedure in App. E.
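At the level of linear algebra, Proposition 2 is a congruence transform of the noise sources by the inverse sensitivity operator. A toy sketch with placeholder matrices (arbitrary sizes and values, not an actual DMFT solution; any symmetric PSD \(\kappa\) and invertible causal \(\mathbf{D}\) would do):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8  # placeholder size of the flattened (time x sample) order-parameter index

A = rng.normal(size=(n, n))
kappa = A @ A.T + 1e-3 * np.eye(n)  # "uncoupled variance" source: symmetric PSD
# causal sensitivity block: identity minus a strictly lower-triangular coupling,
# mimicking how perturbations only propagate forward in time
D = np.eye(n) - 0.3 * np.tril(rng.normal(size=(n, n)), k=-1)

Dinv = np.linalg.inv(D)
Sigma = Dinv @ kappa @ Dinv.T       # propagator Sigma_{q1} = D^{-1} kappa D^{-T}

width = 512
cov_q1 = Sigma / width              # leading-order covariance of the order parameters
```

Because \(\kappa\) is PSD and \(\mathbf{D}\) is invertible, the resulting \(\mathbf{\Sigma}_{\mathbf{q}_{1}}\) is automatically a valid (symmetric PSD) covariance; the \(1/N\) prefactor is what makes the fluctuations vanish in the mean field limit.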
We proceed to solve the equations defining \(\mathbf{\Sigma}\) in special cases which are illuminating and numerically feasible, including lazy training, two-layer networks, and deep linear NNs.

Figure 1: The directed causal graph between DMFT order parameters (blue) and fields (green) defines the \(D\) tensors of our theory. Each arrow represents a causal dependence. \(K\) denotes the NTK.

## 5 Lazy Training Limit

To gain some initial intuition about why kernel fluctuations alter learning dynamics, we first analyze the static kernel limit \(\gamma\to 0\) where features are frozen. To prevent divergence of the network in this limit, we use a background-subtracted function \(\tilde{f}(\mathbf{x},\mathbf{\theta})=f(\mathbf{x},\mathbf{\theta})-f(\mathbf{x},\mathbf{\theta}_{0})\) which is identically zero at initialization [4]. For mean square error, the \(N\rightarrow\infty\) and \(\gamma\to 0\) limit is governed by \(\frac{\partial\tilde{f}(\mathbf{x})}{\partial t}=\mathbb{E}_{\mathbf{x}^{\prime}\sim\mathcal{D}}\Delta(\mathbf{x}^{\prime})K(\mathbf{x},\mathbf{x}^{\prime})\) with \(\Delta(\mathbf{x})=y(\mathbf{x})-\tilde{f}(\mathbf{x})\) (for MSE), where \(K\) is the static NTK. The finite-\(N\) initial covariance of the NTK has been analyzed in prior works [27, 13, 14], which reveal a dependence on depth and nonlinearity. Since the NTK is static in the \(\gamma\to 0\) limit, it has constant initialization variance through training. Further, all sensitivity blocks of the Hessian involving the kernels and the prediction errors \(\mathbf{\Delta}\) (such as \(D^{\Phi^{\ell},\Delta}\)) vanish. We represent the covariance of the NTK as \(\kappa(\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{x}_{3},\mathbf{x}_{4})=N\text{Cov}(K(\mathbf{x}_{1},\mathbf{x}_{2}),K(\mathbf{x}_{3},\mathbf{x}_{4}))\).
To identify the dynamics of the error \(\mathbf{\Delta}\) covariance, we consider the eigendecomposition of the infinite-width NTK \(K_{\infty}(\mathbf{x},\mathbf{x}^{\prime})=\sum_{k}\lambda_{k}\psi_{k}(\mathbf{x})\psi_{k}(\mathbf{x}^{\prime})\) with respect to the training distribution \(\mathcal{D}\), and decompose \(\kappa\) in this basis
\[\kappa_{k\ell mn}=\left\langle\kappa(\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{x}_{3},\mathbf{x}_{4})\psi_{k}(\mathbf{x}_{1})\psi_{\ell}(\mathbf{x}_{2})\psi_{n}(\mathbf{x}_{3})\psi_{m}(\mathbf{x}_{4})\right\rangle_{\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{x}_{3},\mathbf{x}_{4}\sim\mathcal{D}}, \tag{6}\]
where the averages are computed over the training distribution \(\mathcal{D}\).

**Proposition 3**: _For MSE loss, the prediction error covariance \(\mathbf{\Sigma}^{\Delta}(t,s)=N\text{Cov}_{0}(\mathbf{\Delta}(t),\mathbf{\Delta}(s))\) satisfies a differential equation (App. G)_
\[\left(\frac{\partial}{\partial t}+\lambda_{k}\right)\left(\frac{\partial}{\partial s}+\lambda_{\ell}\right)\Sigma_{k\ell}^{\Delta}(t,s)=\sum_{nm}\kappa_{km\ell n}\Delta_{m}^{\infty}(t)\Delta_{n}^{\infty}(s), \tag{7}\]
_where \(\Delta_{k}^{\infty}(t)\equiv\exp\left(-\lambda_{k}t\right)\left\langle\psi_{k}(\mathbf{x})y(\mathbf{x})\right\rangle_{\mathbf{x}}\) are the errors at infinite width for eigenmode \(k\)._

An example verifying these dynamics is provided in App. Fig. A.1. In the case where the target is an eigenfunction \(y=\psi_{k^{*}}\), the covariance has the form \(\Sigma_{k\ell}^{\Delta}(t,s)=\kappa_{k\ell k^{*}k^{*}}\frac{\exp(-\lambda_{k^{*}}(t+s))}{(\lambda_{k}-\lambda_{k^{*}})(\lambda_{\ell}-\lambda_{k^{*}})}\). If the kernel is rank one with eigenvalue \(\lambda\), then the dynamics have the simple form \(\Sigma^{\Delta}(t,s)=\kappa y^{2}\ t\ s\ e^{-\lambda(t+s)}\).
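The rank-one case can be checked by direct Monte Carlo: freeze a scalar kernel \(K=\lambda+\xi\) with \(\xi\sim\mathcal{N}(0,\kappa/N)\) across an ensemble of lazy "networks", evolve \(\Delta(t)=y\,e^{-Kt}\), and compare \(N\,\mathrm{Var}(\Delta(t))\) against the diagonal \(\kappa y^{2}t^{2}e^{-2\lambda t}\) of the closed form above (a toy check; the parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
lam, kappa, y, N = 1.0, 0.5, 1.0, 1000
t = np.linspace(0.0, 4.0, 41)

# Lazy limit: the NTK is frozen at its (random) initial value K = lam + xi.
xi = rng.normal(0.0, np.sqrt(kappa / N), size=200_000)
Delta = y * np.exp(-np.outer(lam + xi, t))  # Delta(t) = y e^{-K t}, one row per init

nvar_emp = N * Delta.var(axis=0)
nvar_th = kappa * y**2 * t**2 * np.exp(-2 * lam * t)  # diagonal of Sigma^Delta(t,t)
```

The empirical curve reproduces the characteristic rise-and-fall of the lazy-limit variance: it peaks at \(t=1/\lambda\) and decays to zero as the training point is fit, even though the kernel variance itself never changes.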
We note that similar terms appear in the prediction dynamics obtained by truncating the Neural Tangent Hierarchy [10, 11]; however, those dynamics concerned small feature learning corrections rather than fluctuations arising from initialization variance (App. G.1). Corrections to the mean \(\left\langle\Delta\right\rangle\) are analyzed in App. G.2. We find that the variance and mean correction dynamics involve non-trivial coupling across eigendirections, with a mixture of exponentials with timescales \(\{\lambda_{k}^{-1}\}\).

## 6 Rich Regime in Two-Layer Networks

In this section, we analyze how feature learning alters the variance through training. We show a denoising effect where the signal-to-noise ratios of the order parameters improve with feature learning.

### Kernel and Error Coupled Fluctuations on Single Training Example

In the rich regime, the kernel evolves over time but inherits fluctuations from the training errors \(\mathbf{\Delta}\). To gain insight, we first study a simplified setting where the data distribution is a single training example \(\mathbf{x}\) and a single test point \(\mathbf{x_{\star}}\) in a two-layer network. We will track \(\Delta(t)=y-f(\mathbf{x},t)\) and the test prediction \(f_{\star}(t)=f(\mathbf{x_{\star}},t)\). To identify the dynamics of these predictions we need the NTK \(K(t)\) on the train point, as well as the train-test NTK \(K_{\star}(t)\). In this case, all order parameters can be viewed as scalar functions of a single time index (unlike the deep network case; see App. D).

**Proposition 4**: _Computing the Hessian of the DMFT action and inverting (App.
H), we obtain the following covariance for \(\mathbf{q}_{1}=\text{Vec}\{\Delta(t),f_{\star}(t),K(t),K_{\star}(t)\}_{t\in\mathbb{R}_{+}}\)_
\[\mathbf{\Sigma}_{\mathbf{q}_{1}}=\begin{bmatrix}\mathbf{I}+\mathbf{\Theta}_{K}&0&\mathbf{\Theta}_{\Delta}&0\\ -\mathbf{\Theta}_{K_{\star}}&\mathbf{I}&0&-\mathbf{\Theta}_{\Delta}\\ -\mathbf{D}&0&\mathbf{I}&0\\ -\mathbf{D}_{\star}&0&0&\mathbf{I}\end{bmatrix}^{-1}\begin{bmatrix}0&0&0&0\\ 0&0&0&0\\ 0&0&\mathbf{\kappa}&\mathbf{\kappa}_{\star\star}^{\top}\\ 0&0&\mathbf{\kappa}_{\star}&\mathbf{\kappa}_{\star\star}\end{bmatrix}\begin{bmatrix}\mathbf{I}+\mathbf{\Theta}_{K}&0&\mathbf{\Theta}_{\Delta}&0\\ -\mathbf{\Theta}_{K_{\star}}&\mathbf{I}&0&-\mathbf{\Theta}_{\Delta}\\ -\mathbf{D}&0&\mathbf{I}&0\\ -\mathbf{D}_{\star}&0&0&\mathbf{I}\end{bmatrix}^{-1\top}, \tag{8}\]
_where \([\mathbf{\Theta}_{K}](t,s)=\Theta(t-s)K(s)\) and \([\mathbf{\Theta}_{\Delta}](t,s)=\Theta(t-s)\Delta(s)\), with \(\Theta\) the Heaviside step function, and \(D(t,s)=\left\langle\frac{\partial}{\partial\Delta(s)}(\phi(h(t))^{2}+g(t)^{2})\right\rangle\) and \(D_{\star}(t,s)=\left\langle\frac{\partial}{\partial\Delta(s)}(\phi(h(t))\phi(h_{\star}(t))+g(t)g_{\star}(t))\right\rangle\) quantify the sensitivity of the kernel to perturbations in the error signal \(\Delta(s)\). Lastly, \(\kappa,\kappa_{\star},\kappa_{\star\star}\) correspond to the uncoupled variances (and covariance) for \(\{K(t),K_{\star}(t)\}\)._

Figure 2: An ensemble of \(E=1000\) two-layer \(N=256\) tanh networks trained on a single training point. Dashed black lines are DMFT predictions. (a) The square deviation from the infinite-width DMFT scales as \(\mathcal{O}(1/N)\) for all order parameters. (b) The ensemble average NTK \(\langle K(t)\rangle\) (solid colors) and (c) ensemble average test point predictions \(f_{\star}(t)\) for a point with \(\frac{\mathbf{x}\cdot\mathbf{x}_{\star}}{D}=0.5\) closely follow the infinite width predictions (dashed black).
(d) The variance (estimated over the ensemble) of the train error \(\Delta(t)=y-f(t)\) initially increases and then decreases as the training point is fit. (e) The variance of \(f_{\star}\) increases with time but decreases with \(\gamma\). (f) The variance of the NTK during feature learning experiences a transient increase before decreasing to a lower value.

In Fig. 2, we plot the resulting theory (diagonal blocks of \(\mathbf{\Sigma}_{\mathbf{q}_{1}}\) from Equation 8) for two-layer neural networks. As predicted by the theory, all average squared deviations from the infinite-width DMFT scale as \(\mathcal{O}(N^{-1})\). Similarly, the average kernels \(\left\langle K\right\rangle\) and test predictions \(\left\langle f_{\star}\right\rangle\) change by a larger amount for larger \(\gamma\) (equation (64)). The experimental variances also match the theory quite accurately. The variance of the train error \(\Delta(t)\) peaks earlier and at a lower value for richer training, but all variances go to zero at late times as the model approaches the interpolation condition \(\Delta=0\). As \(\gamma\to 0\) the curve approaches \(N\,\text{Var}(\Delta(t))\sim\kappa\ y^{2}\ t^{2}\ e^{-2t}\), where \(\kappa\) is the initial NTK variance (see Section 5). While the train prediction variance goes to zero, the test point prediction variance does not, with richer networks reaching a lower asymptotic variance. We suspect this dynamical effect could explain the lower variance observed in feature learning networks compared to lazy networks [7, 18]. In Fig. A.2, we show that the reduction in variance is not due to a reduction in the uncoupled variance \(\kappa(t,s)\), which increases in \(\gamma\). Rather, the reduction in variance is driven by the coupling of perturbations across time.

### Offline Training with Multiple Samples

In this section we go beyond the single-sample equations of the prior section and explore training with \(P\) examples.
In this case, we have training errors \(\{\Delta_{\mu}(t)\}_{\mu=1}^{P}\) and multiple kernel entries \(K_{\mu\nu}(t)\) (App. D). Each of the errors \(\Delta_{\mu}(t)\) receives an \(\mathcal{O}(N^{-1/2})\) fluctuation, so the training error \(\sum_{\mu}\left\langle\Delta_{\mu}^{2}\right\rangle\) has an additional variance on the order of \(\mathcal{O}(\frac{P}{N})\). In the case of two-layer linear networks trained on whitened data (\(\frac{1}{D}\mathbf{x}_{\mu}\cdot\mathbf{x}_{\nu}=\delta_{\mu\nu}\)), the equations for the propagator simplify and one can separately solve for the variance of \(\mathbf{\Delta}(t)\in\mathbb{R}^{P}\) along the signal direction \(\mathbf{y}\in\mathbb{R}^{P}\) and along each of the \(P-1\) orthogonal directions (App. I). At infinite width, the task-orthogonal component \(\mathbf{\Delta}_{\perp}\) vanishes and only the signal dimension \(\Delta_{y}(t)\) evolves in time with differential equation [9, 46]
\[\frac{d}{dt}\Delta_{y}(t)=-2\sqrt{1+\gamma^{2}(y-\Delta_{y}(t))^{2}}\ \Delta_{y}(t)\,\ \mathbf{\Delta}_{\perp}(t)=0. \tag{9}\]
However, at finite width, both \(\Delta_{y}(t)\) and the \(P-1\) orthogonal variables \(\mathbf{\Delta}_{\perp}\) inherit initialization variance, which we represent as \(\Sigma_{\Delta_{y}}(t,s)\) and \(\Sigma_{\perp}(t,s)\). In Fig. 3 (a)-(b) we show this approximate solution \(\left\langle|\mathbf{\Delta}(t)|^{2}\right\rangle\sim\Delta_{y}(t)^{2}+\frac{2}{N}\Delta_{y}^{1}(t)\Delta_{y}(t)+\frac{1}{N}\Sigma_{\Delta_{y}}(t,t)+\frac{(P-1)}{N}\Sigma_{\perp}(t,t)+\mathcal{O}(N^{-2})\) across varying \(\gamma\) and varying \(P\) (see Appendix I for the \(\Sigma_{\Delta_{y}}\) and \(\Sigma_{\perp}\) formulas). We see that the variance of train point predictions \(f_{\mu}(t)\) increases with the total number of points despite the signal of the target vector \(\sum_{\mu}y_{\mu}^{2}\) being fixed.
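Integrating the signal-direction dynamics (9) numerically (written with the sign convention under which the error decays; forward Euler, arbitrary richness values) shows how larger \(\gamma\) accelerates the approach of \(\Delta_{y}\) to zero:

```python
import numpy as np

def integrate_delta(gamma, y=1.0, T=3.0, dt=1e-3):
    """Euler integration of d/dt Delta_y = -2 sqrt(1 + gamma^2 (y - Delta_y)^2) Delta_y."""
    d = y  # at infinite width the output starts at 0, so Delta_y(0) = y
    for _ in range(int(T / dt)):
        d -= dt * 2.0 * np.sqrt(1.0 + gamma**2 * (y - d) ** 2) * d
    return d

lazy = integrate_delta(gamma=0.0)  # gamma -> 0: d/dt Delta = -2 Delta, so Delta(T) = y e^{-2T}
rich = integrate_delta(gamma=2.0)  # richer training fits the target strictly faster
print(lazy, rich)
```

The rate factor \(\sqrt{1+\gamma^{2}(y-\Delta_{y})^{2}}\geq 1\) grows as the prediction \(y-\Delta_{y}\) builds up, which is the mean-field mechanism behind the faster fitting (and, per the propagator analysis, the improved SNR) of rich networks.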
In this model, the bias correction \(\frac{2}{N}\Delta_{y}^{1}(t)\Delta_{y}(t)\) is always \(\mathcal{O}(1/N)\), but the variance correction is \(\mathcal{O}(P/N)\). The fluctuations along the \(P-1\) orthogonal directions begin to dominate the variance at large \(P\). Fig. 3 (b) shows that as \(P\) increases, the leading order approximation breaks down as higher order terms become relevant.

## 7 Deep Networks

In networks deeper than two layers, the DMFT propagator has a complicated dependence on non-diagonal (in time) entries of the feature kernels (see App. D). This leads to Hessian blocks with four time and four sample indices, such as \(D_{\mu\nu\alpha\beta}^{\Phi^{\ell}\Phi^{\ell-1}}(t,s,t^{\prime},s^{\prime})=\frac{\partial}{\partial\Phi^{\ell-1}_{\alpha\beta}(t^{\prime},s^{\prime})}\left\langle\phi(h_{\mu}^{\ell}(t))\phi(h_{\nu}^{\ell}(s))\right\rangle\), rendering any numerical calculation challenging. However, in deep linear networks trained on whitened data, we can exploit the symmetry in sample space and the Gaussianity of preactivation features to exactly compute derivatives without Monte Carlo sampling, as we discuss in App. K. An example set of results for a depth-4 network is provided in Fig. 4. The feature kernels \(H^{\ell}\) accumulate finite size variance layer by layer along the forward pass, and the gradient kernels \(G^{\ell}\) accumulate variance on the backward pass. The SNR of the kernels \(\frac{\langle H\rangle^{2}}{N\text{Var}(H)}\) improves with feature learning, suggesting that richer networks will be better modeled by their mean field limits. Examples of the off-diagonal correlations obtained from the propagator are provided in App. Fig. A.5.

## 8 Variance can be Small Near Edge of Stability

In this section, we move beyond the gradient flow formalism and ask what large step sizes do to finite size effects.
Figure 3: Large input dimension or multiple samples amplify finite size effects in a simple two layer model. Black dashed lines are theory. (a) The variance of offline learning with \(P\) training examples in a two layer linear network. (b) The leading perturbative approximation to the train error breaks down when the number of samples \(P\) becomes comparable to \(N\). (c)-(d) Richer training reduces variance.

Figure 4: Depth 4 linear network with a single training point. Black dashed lines are theory. (a) The variance of the training error along the task relevant subspace. We see that unlike the two layer model, more feature learning can lead to larger peaks in the finite size variance. (b) The variance of the NTK in the task relevant subspace. When properly normalized against the square of the mean \(\left\langle K(t)\right\rangle^{2}\), the final NTK variance decreases with feature learning. (c) The gap in feature kernel variance across different layers of the network is amplified by feature learning strength \(\gamma\).

Recent studies have identified that networks trained at large learning rates can be qualitatively different from networks in the gradient flow regime, including the catapult [57] and edge of stability (EOS) phenomena [19, 20, 21]. In these settings, the kernel undergoes an initial scale growth before exhibiting either a recovery or a clipping effect. In this section, we explore whether these dynamics are highly sensitive to initialization variance or if finite networks are well captured by mean field theory. Following [57], we consider two-layer networks trained on a single example with \(|\mathbf{x}|^{2}=D\) and \(y=1\). We use learning rate \(\eta\) and feature learning strength \(\gamma\). The infinite width mean field equations for the prediction \(f_{t}\) and the kernel \(K_{t}\) are (App. L)
\[f_{t+1}=f_{t}+\eta K_{t}\Delta_{t}+\eta^{2}\gamma^{2}f_{t}\Delta_{t}^{2}\,\ K_{t+1}=K_{t}+4\eta\gamma^{2}f_{t}\Delta_{t}+\eta^{2}\gamma^{2}\Delta_{t}^{2}K_{t}.
\tag{10}\]
For small \(\eta\), the equations are well approximated by the gradient flow limit, and for small \(\gamma\) they correspond to a discrete-time linear model. For \(\eta\gamma>1\), the kernel \(K\) progressively sharpens (increases in scale) until it reaches \(2/\eta\) and then oscillates around this value. It may be expected that near the EOS, the large oscillations in the kernels and predictions could lead to amplified finite size effects; however, we show in Fig. 5 that the leading order propagator elements decrease even after reaching the EOS threshold, indicating _reduced_ disagreement between finite and infinite width dynamics.

## 9 Finite Width Alters Bias, Training Rate, and Variance in Realistic Tasks

To analyze the effect of finite width on neural network dynamics during realistic learning tasks, we studied a vanilla depth-6 ReLU CNN trained on CIFAR-10 (experimental details in App. B, F.2). In Fig. 6, we train an ensemble of \(E=8\) independently initialized CNNs of each width \(N\). Wider networks not only have better performance for a single model (solid), but also have lower bias (dashed), measured with ensemble averaging of the logits. Because of the faster convergence of wide networks, we observe wider networks have higher variance, but if we plot variance at fixed ensembled training accuracy, wider networks have consistently lower variance (Fig. 6(d)). We next seek an explanation for why wider networks, after ensembling, train at a faster _rate_. Theoretically, this can be rationalized by a finite-width alteration to the ensemble averaged NTK, which governs the convergence timescale of the ensembled predictions (App. F.1). Our analysis in App. F.1 suggests that the rate of convergence receives a finite size correction with leading term \(\mathcal{O}(N^{-1})\) (App. F.2). To test this hypothesis, we fit the ensemble training loss curve to an exponential function \(\mathcal{L}\approx C\exp{(-R_{N}t)}\), where \(C\) is a constant.
We plot the fitted rate \(R_{N}\) as a function of \(N^{-1}\) in Fig. 6(e). For large \(N\), we see the leading behavior is linear in \(N^{-1}\), but it begins to deviate at small \(N\) as a quadratic function of \(N^{-1}\), suggesting that second order effects become relevant around \(N\lesssim 100\) on CIFAR-10. In App. Fig. A.6, we train on a smaller subset of CIFAR-10, where we find that \(R_{N}\) is well approximated by an \(\mathcal{O}(N^{-1})\) correction, consistent with the idea that larger sample size drives the dynamics out of the leading order picture. We also analyze the effect of \(\gamma\) on variance in this task. In App. Fig. A.7, we train \(N=64\) models with varying \(\gamma\). Increased \(\gamma\) reduces the variance of the logits and alters the representation (measured with kernel-task alignment), while the training and test accuracy are roughly insensitive to the richness \(\gamma\) in the range we considered.

Figure 5: Edge of stability effects do not imply deviations from infinite width behavior. Black dashed lines are theory. (a) The loss dynamics for width \(N=500\) networks (solid colors) compared to the infinite width DMFT (dashed black). (b) The average kernel over an ensemble of several NNs. For small \(\gamma\), the kernel reaches its asymptote before hitting the edge of stability. For large \(\gamma\), the kernel increases and then oscillates around \(2/\eta\). (c)-(d) Remarkably, variance due to finite size can reduce during training on both sides of the edge of stability (for \(\gamma\) smaller and larger than the critical value \(\sim 1/\eta\)), suggesting that infinite width theory can be predictive of wide but finite networks even in the large learning rate regime.

Figure 6: Depth 6 CNN trained on CIFAR-10 for different widths \(N\) with richness \(\gamma=0.2\), \(E=8\) ensembles.
(a)-(b) For this range of widths, we find that smaller networks perform worse in train and test error, not only in terms of the single models (solid) but also in terms of bias (dashed). The delayed training of ensembled finite-width models indicates that the correction to the mean order parameters (App. F) is non-negligible. (c) Alignment of the average kernel to test labels is also not conserved across width. (d) The ratio of the test MSE for a single model to the ensembled logit MSE. (e) The fitted rate \(R_{N}\) of training width-\(N\) models as a function of \(N^{-1}\). We rescale the time axis by \(R_{N}\) to allow for a fair comparison of prediction variance for networks at comparable performance levels. (f) In rescaled time, ensembled network training losses (dashed) are coincident. ## Discussion We studied the leading-order fluctuations of kernels and predictions in mean field neural networks. Feature learning dynamics can reduce undesirable finite-size variance, bringing the order parameters of finite networks closer to the infinite-width limit. In several toy models, we revealed some interesting connections between feature learning, depth, sample size, and large learning rates on the one hand and the variance of various DMFT order parameters on the other. Lastly, in realistic tasks, we illustrated that bias corrections can be significant, as rates of learning can be modified by width. Though our full set of equations for the leading finite-size fluctuations is quite general in terms of network architecture and data structure, the leading term involving only \(\mathbf{\Sigma}\) does not capture the complete finite-size distribution defined in Eq. (3), especially as the sample size becomes comparable to the width. Future work could explore in greater detail the higher-order contributions from averages involving powers of \(V(\mathbf{q})\) by examining cubic and higher derivatives of \(S\) in Eq. (3).
It could also be worth examining in future work how finite size impacts other biologically plausible learning rules, where the effective NTK can have asymmetric (over sample index) fluctuations [46]. Further, even though we expect our perturbative expressions to give a precise asymptotic description of finite networks in mean field/\(\mu\)P, the resulting expressions are not realistically computable for deep networks trained on a large dataset size \(P\) for long times \(T\), since the number of Hessian entries scales as \(\mathcal{O}(T^{4}P^{4})\) and a matrix of this size must be stored in memory and inverted in the general case. Future work could explore solvable special cases or high dimensional limits where the analysis may simplify. ### Acknowledgements This work was supported by NSF Award DMS-2134157. BB thanks Alex Atanasov, Jacob Zavatone-Veth, Boris Hanin and Greg Yang for helpful discussions and for their comments on this manuscript. BB also acknowledges Jeremy Cohen for discussions about large step size dynamical effects and their relationship to network width.
2305.05378
PLM-GNN: A Webpage Classification Method based on Joint Pre-trained Language Model and Graph Neural Network
The number of web pages is growing at an exponential rate, accumulating massive amounts of data on the web. Classifying webpages is one of the key processes in web information mining. Some classical methods are based on manually building features of web pages and training classifiers based on machine learning or deep learning. However, building features manually requires specific domain knowledge and usually takes a long time to validate the effectiveness of the features. Considering that webpages are generated by the combination of text and HTML Document Object Model (DOM) trees, we propose a representation and classification method based on a pre-trained language model and graph neural network, named PLM-GNN. It is based on the joint encoding of text and HTML DOM trees in the web pages. It performs well on the KI-04 and SWDE datasets and on the practical dataset AHS for the project of scholars' homepage crawling.
Qiwei Lang, Jingbo Zhou, Haoyi Wang, Shiqi Lyu, Rui Zhang
2023-05-09T12:19:10Z
http://arxiv.org/abs/2305.05378v1
PLM-GNN: A Webpage Classification Method based on Joint Pre-trained Language Model and Graph Neural Network ###### Abstract The number of web pages is growing at an exponential rate, accumulating massive amounts of data on the web. Classifying webpages is one of the key processes in web information mining. Some classical methods are based on manually building features of web pages and training classifiers based on machine learning or deep learning. However, building features manually requires specific domain knowledge and usually takes a long time to validate the effectiveness of the features. Considering that webpages are generated by the combination of text and HTML Document Object Model (DOM) trees, we propose a representation and classification method based on a pre-trained language model and graph neural network, named PLM-GNN. It is based on the joint encoding of text and HTML DOM trees in the web pages. It performs well on the KI-04 and SWDE datasets and on the practical dataset AHS for the project of scholars' homepage crawling. Keywords: Webpage Classification, Pre-trained Language Model, Graph Neural Network ## 1 Introduction Web page classification is one of the classic tasks in the process of web information mining. There are two major categories of solutions: manual web classification and automatic web classification[1]. In manual web classification, domain experts classify pages by hand based on their knowledge of the domain[2]. Automatic web classification is a supervised learning problem that requires the construction of features that can significantly distinguish web pages, followed by training a classifier based on the labels. Manual web classification is clearly a tedious and time-consuming task; although it may take some time to build and train a classifier, automatic classification is still preferable to manual methods.
We propose a novel classification method, PLM-GNN, which is based on a joint pre-trained language model (PLM) and graph neural network (GNN). We consider the automatic web classification task as a two-stage process: first, constructing a representation of web pages; second, using that representation to train a classifier. Some past approaches build features based on text or visual features and links between web pages. Text features are usually obtained using methods such as TF-IDF, Word2Vec, etc. Visual features are usually obtained through the calculation of coordinates. Links between web pages are used to construct features from other pages, based on the assumption that interlinked pages are likely to have similar features, which can thus be migrated to the current page. However, as the number of pages continues to increase, the variability among pages becomes larger, so we argue that links between pages can no longer serve as the basis for feature construction. With the Transformer architecture and the advent of PLMs, contextual information can be learned more effectively, so we can obtain a better representation of the text. PLM-GNN therefore processes the text contained in web pages with PLMs. Since the structure of a web page is reflected by its HTML DOM tree, and similar web pages have similar structures, constructing structural features of the DOM tree is also a key aspect. Considering that a tree is a directed acyclic graph, we introduce GNNs to learn the graph structure. We use a multi-layer perceptron (MLP) to build the final classifier to ensure the completeness and trainability of the model. Overall, we make the following contributions: * We introduce pre-trained language models to get a better representation of the text contained in web pages. * We introduce graph neural networks to learn HTML DOM tree features and obtain a representation of the DOM tree.
* Our model constructs features automatically, without manual feature engineering. ## 2 Related Works ### Text Representation Text representation studies how to turn a string into a vector and how well the vector reflects the features of the text. Existing models are broadly classified into three categories: vector space-based models, topic-based models, and neural network-based approaches. Vector space-based methods have simple models and clear meanings, but they cannot handle synonym and near-synonym problems well. Topic models try to implement the representation of text from the perspective of probabilistic generative models, where each dimension is a topic. However, topic models suffer from problems such as long training times, due to the many parameters to train, and poor modeling of short texts. With the rise of deep learning, neural network-based representation methods have achieved remarkable success in NLP research. Mikolov et al. proposed Word2Vec, Doc2Vec, and fastText. Later, recurrent neural networks such as RNNs and LSTMs were proposed. In recent years, the emergence of models based on attention mechanisms, such as BERT[3] and GPT[4], has refreshed the baselines of various NLP tasks. ### Graph Neural Networks Graphs describe pairwise relations between entities in real-world data from various domains and are playing an increasingly important role in many applications. In recent years, GNNs have achieved tremendous success in representation learning on graphs. Most GNNs follow a message-passing mechanism to learn a node representation by propagating and transforming representations of its neighbors, which significantly helps them capture the complex information of graph data[5]. The most classical GNN methods are Graph Convolutional Networks (GCN)[6], which operate in the spectral domain, and Graph Attention Networks (GAT)[7], which operate in the spatial domain. For GCN, the node features on the graph constitute the graph signal.
For GAT, the effect of weighted convolution is achieved by computing attention scores on node pairs with the help of a self-attention mechanism. ## 3 Problem Formulation and Approach ### Problem Formulation In a web page classification task, we are trying to map input web pages into discrete categories. Here we focus on single-label classification tasks. Let \(C=\{c_{1},c_{2},\cdots,c_{M}\}\) be a set of pre-defined categories, where \(M\geq 2\). Given a set of web pages \(W=\{w_{1},w_{2},\cdots,w_{N}\}\), generally speaking, we expect to find a function \(f\colon W\to C\) that can be obtained through learning to approximate the real assignment function \(f_{0}\colon W\to C\). The function \(f\) is called a classifier or a model. As introduced in Section 1, we regard the web page classification task as a two-stage job. In the first stage, we obtain the features \(X\) of a page \(w\in W\), either by manually constructing features or by some form of learning. Let the function \(\varphi\) denote the mapping from a page to its features, i.e., \(\varphi(w)=X\), assuming \(X\in\mathbf{R}^{n}\). In the second stage, we train a classifier \(\gamma\) whose input is the features obtained in the first stage and whose output is a label, i.e., \(\gamma(X)=c\), where \(c\in C\). The task in this paper is thus formalized as finding a representation function \(\varphi\) and the best classifier \(\gamma\), where \[c=\gamma\big{(}\varphi(w)\big{)},\quad\text{for }w\in W,\ c\in C\] to approximate the real assignment function \(f_{0}\), where \(\varphi\colon W\to\mathbf{R}^{n}\) and \(\gamma\colon\mathbf{R}^{n}\to C\). ### Approach Review Figure 1 shows the overall framework of the proposed PLM-GNN model for the web page classification task. We first parse the web page into an HTML DOM tree. After that, we use the DOM tree to obtain the text information and the DOM tree structure information, respectively.
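The two-stage formulation \(c=\gamma\big(\varphi(w)\big)\) can be made concrete with a minimal sketch. The encoders and classifier below are hypothetical placeholders (the actual \(\varphi\) in PLM-GNN combines a PLM text encoder and a GNN DOM-tree encoder), included only to illustrate the composition:

```python
from typing import Callable, List

def make_pipeline(text_enc: Callable[[str], List[float]],
                  graph_enc: Callable[[str], List[float]],
                  classifier: Callable[[List[float]], str]) -> Callable[[str], str]:
    """Two-stage classifier: phi(w) = text_enc(w) || graph_enc(w), then gamma."""
    def f(page: str) -> str:
        features = text_enc(page) + graph_enc(page)  # phi(w), concatenated
        return classifier(features)                  # gamma(phi(w)) = c
    return f

# Toy stand-ins for the real encoders (placeholders, not the paper's models):
text_enc = lambda page: [float(page.lower().count("professor"))]
graph_enc = lambda page: [float(page.count("<"))]  # crude structure proxy
classifier = lambda x: "homepage" if x[0] > 0 else "other"

f = make_pipeline(text_enc, graph_enc, classifier)
```

The point of the sketch is only that \(\varphi\) and \(\gamma\) are trained separately yet composed into a single end-to-end function \(f\colon W\to C\).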
Note that the text information is only on the leaf nodes of the DOM tree, so we obtain all the text of the web page by directly traversing the whole DOM tree. We feed the text into a text encoder to get a representation of the web page text, where the text encoder is implemented with PLMs. For the DOM tree structure information, only the skeleton structure of the tree is available at the beginning, so we first construct the graph structure of the whole DOM tree. We choose the XPath as the information attached to each node and train XPath embedding layers to obtain the node representations. After that, we feed the graph structure and the node representations to the graph encoder for learning. Here the graph encoder is implemented with GNNs. Since we ultimately want a representation of the whole DOM tree, we read out each graph with a global pooling strategy. After the above process, we concatenate the text representation and the DOM tree representation to get the representation of a web page. We input this vector into an MLP classifier and train it on a multi-class classification task. ## 4 Encoder and Classifier The model consists of three components: the text encoder, the DOM tree encoder, and the MLP-based classifier. ### Text Encoder We first obtain all the text on a web page. Given a web page \(w\in W\), we first clean the original HTML document to erase some useless tags. Then we parse the page into an HTML DOM tree \(\mathcal{T}=(\mathcal{N},\mathcal{C},\mathcal{R})\). \(\mathcal{N}\) denotes the node set of the DOM tree, which represents all the tags in the HTML. \(\mathcal{C}\) denotes the contents contained in the HTML document, especially the texts \(T\); obviously, \(T\) is a subset of \(\mathcal{C}\). \(\mathcal{R}\) denotes the relations between nodes in \(\mathcal{N}\), which usually include parent-child and sibling relationships. We stipulate that the relationships in \(\mathcal{R}\) are undirected.
Figure 1: Approach Overview After parsing the DOM tree \(\mathcal{T}\), we extract the texts \(T\) in the HTML. Since the text resides on the leaf nodes of the DOM tree, we find the leaf nodes by traversing the DOM tree and then extract all the text in the leaf nodes. Assuming leaf node \(\mathcal{N}_{i}\) has text \(T_{i}\), \(T\) can be obtained by concatenating all \(T_{i}\), i.e., \(T=\|_{i}\ T_{i}\). We use PLMs to vectorize the text. We use the tokenizer to divide the text into a token sequence, i.e., \(T=\left[w_{1},w_{2},\cdots,w_{L_{0}}\right],\) where \(L_{0}\) denotes the length of the token sequence. The full text contained in a web page may be very long, while some PLMs such as BERT impose input length limits. Let \(\eta\) denote the truncation and padding operator, i.e., \(\eta(T)=\left[w_{1},w_{2},\cdots,w_{L}\right]\), where \(L\) denotes the length required by the PLM. Finally, we feed the input sequence \(\eta(T)\) into a PLM to obtain the text representation \(\mathbf{x}_{T}\) through \[\mathbf{x}_{T}=\mathrm{PLM}\big{(}\eta(T)\big{)}\] where \(\mathbf{x}_{T}\) is a \(d_{T}\) dimensional vector. ### DOM Tree Encoder We use a GNN as the DOM tree encoder. First, we construct the graph structure used by the GNN. Let \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) denote the graph fed to the network. Obviously, we have \(\mathcal{V}=\mathcal{N}\). According to the definition of \(\mathcal{R}\), it contains two types of relationships: a parent-child relationship and a sibling relationship. Notice that the relationships in \(\mathcal{R}\) are undirected, but GNN message passing requires directed edges. We only use the parent-child relationship here to construct directed edges \(v_{p}\to v_{c}\) and \(v_{c}\to v_{p}\), where \(v_{p}\) denotes the parent node and \(v_{c}\) denotes the child node. Second, we build the representation of each node \(v\in\mathcal{V}\).
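The text extraction and bidirectional parent-child edge construction described above can be sketched with Python's standard-library HTML parser; the paper does not specify a parsing library, so this is an illustrative assumption:

```python
from html.parser import HTMLParser

class DomGraphBuilder(HTMLParser):
    """Collect DOM tags, bidirectional parent-child edges, and text chunks."""
    def __init__(self):
        super().__init__()
        self.tags, self.edges, self.texts = [], [], []
        self.stack = []  # indices of currently open (ancestor) nodes

    def handle_starttag(self, tag, attrs):
        idx = len(self.tags)
        self.tags.append(tag)
        if self.stack:
            parent = self.stack[-1]
            self.edges.append((parent, idx))  # v_p -> v_c
            self.edges.append((idx, parent))  # v_c -> v_p
        self.stack.append(idx)

    def handle_endtag(self, tag):
        if self.stack:
            self.stack.pop()

    def handle_data(self, data):
        if data.strip():
            self.texts.append(data.strip())  # one text chunk T_i

builder = DomGraphBuilder()
builder.feed("<html><body><h1>Title</h1><p>Hello <b>world</b></p></body></html>")
T = " ".join(builder.texts)  # T = ||_i T_i, the concatenated page text
```

Each start tag becomes a node, each parent-child pair contributes both directed edges, and the concatenated text chunks form the sequence \(T\) that is then tokenized and truncated by \(\eta\).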
XPath is a path string that uniquely identifies an HTML DOM node and can be used to easily locate a node in the document. We designed our embedding layer by referring to the implementation of the XPath embedding layer in MarkupLM[8]. Given a node \(v\), we add up the tag unit embedding and the subscript unit embedding to obtain the embedding \(ue_{j}\) of the \(j\)-th unit. MarkupLM sets the max depth \(L_{xp}=50\) by default. However, after inspecting the lengths of the XPaths of DOM nodes in the datasets we use, we found that the actual length of the sequence obtained by splitting an XPath is much less than 50. Therefore, we take \(L_{xp}=15\). Finally, we concatenate all the unit embeddings to get the representation \(h_{0}\) of the XPath of node \(v\), i.e., \(h_{0}=\|_{k=0}^{L_{xp}}\ ue_{k}\). The final XPath embedding \(h\) is obtained as \(h=\mathrm{Dropout}\left(\sigma\big{(}\mathrm{LayerNorm}(h_{0})\big{)}\right)\), where \(\mathrm{LayerNorm}(\cdot)\) stands for the Layer Normalization operation. For all the nodes in the set \(\mathcal{V}\), we use GNNs to update the representation of each node. The updating process of the \(l\)-th GNN layer for each node \(v\in\mathcal{V}\) can be described as \(m_{v}^{(l)}=\mathrm{AGGREGATE}\big{(}\{h_{u}^{(l-1)}\colon u\in N(v)\}\big{)}\), \(h_{v}^{(l)}=\mathrm{UPDATE}\big{(}h_{v}^{(l-1)},m_{v}^{(l)}\big{)}\), where \(m_{v}^{(l)}\) and \(h_{v}^{(l)}\) denote the message vector and the representation at the \(l\)-th layer, respectively. \(N(v)\) denotes the neighbors of node \(v\) in graph \(\mathcal{G}\). The functions \(\mathrm{AGGREGATE}(\cdot)\) and \(\mathrm{UPDATE}(\cdot)\) are the aggregation and update functions used in the GNN, respectively. Hence, we can construct the node feature matrix \(\mathrm{H}^{(L_{g})}\) from the nodes' representations after \(L_{g}\) GNN layers of updating.
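A minimal, parameter-free sketch of one AGGREGATE/UPDATE step is given below; mean aggregation and the fixed 0.5/0.5 update are placeholder choices for illustration (a real GNN layer such as GCN or GAT uses learned weights):

```python
def gnn_layer(h, neighbors):
    """One message-passing step.

    m_v = mean of the neighbors' previous states (AGGREGATE),
    h_v' = 0.5 * (h_v + m_v)                     (UPDATE).
    h maps node -> feature list; neighbors maps node -> neighbor list.
    """
    new_h = {}
    for v, hv in h.items():
        nbrs = neighbors[v]
        if nbrs:
            dim = len(hv)
            m = [sum(h[u][d] for u in nbrs) / len(nbrs) for d in range(dim)]
        else:
            m = hv  # isolated node: message defaults to its own state
        new_h[v] = [0.5 * (a + b) for a, b in zip(hv, m)]
    return new_h

# Tiny DOM-like tree: node 0 is the parent of nodes 1 and 2
# (parent-child edges in both directions, as in the construction above).
neighbors = {0: [1, 2], 1: [0], 2: [0]}
h = {0: [1.0], 1: [0.0], 2: [0.0]}
h1 = gnn_layer(h, neighbors)
```

After one step, the root's feature has diffused to its children and vice versa, which is exactly the propagation behavior that stacking \(L_g\) such layers exploits.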
\(\mathrm{H}^{(L_{g})}\in\mathbf{R}^{|\mathcal{V}|\times\mathrm{d_{G}}}\), where \(\mathrm{d_{G}}\) denotes the dimension of each node's representation. Eventually, we apply a readout function to the graph, since what we ultimately want is a representation of the whole graph. The readout function is usually implemented by some pooling method. Let \(\mathcal{P}\) be a readout function, which stands for a pooling strategy. Then we can obtain the whole-graph representation \(\mathbf{x}_{\text{G}}\) as \(\mathbf{x}_{\text{G}}=\mathcal{P}\left(\mathrm{H}^{(L_{g})}\right)\), where \(\mathbf{x}_{\text{G}}\) is a \(\mathrm{d_{G}}\)-dimensional vector. We use a pooler to increase the expressiveness of the model and prevent overfitting, which is illustrated as \(\mathbf{x}_{G}=\text{pooler}(\mathbf{x}_{G})=\sigma\left(\text{BatchNorm}\big{(}\mathcal{F}(\mathbf{x}_{G})\big{)}\right)\), where \(\mathcal{F}\) denotes a \(\mathbf{R}^{d_{G}}\rightarrow\mathbf{R}^{d_{G}}\) linear layer and BatchNorm(\(\cdot\)) stands for the Batch Normalization operation. ### Classifier In Sections 4.1 and 4.2, we obtained the representations of the text \(\mathbf{x}_{T}\) and the graph \(\mathbf{x}_{G}\), respectively. Since \(\mathbf{x}_{T}\) and \(\mathbf{x}_{G}\) come from different models, to unify them to the same scale, we normalize \(\mathbf{x}_{T}\) and \(\mathbf{x}_{G}\), i.e., \(\mathbf{x}_{T}{}^{\prime}=\frac{\mathbf{x}_{T}}{\|\mathbf{x}_{T}\|_{2}}\), \(\mathbf{x}_{G}{}^{\prime}=\frac{\mathbf{x}_{G}}{\|\mathbf{x}_{G}\|_{2}}\). To get the representation \(\mathbf{x}_{H}\) of the whole HTML document, we concatenate the two normalized representations as \(\mathbf{x}_{H}=\mathbf{x}_{T}{}^{\prime}\,\|\,\mathbf{x}_{G}{}^{\prime}\), where the dimension \(d_{H}\) of \(\mathbf{x}_{H}\) equals \(d_{T}+d_{G}\).
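The sum readout, L2 normalization, and concatenation steps can be sketched as follows (plain Python with toy vectors; the real \(\mathbf{x}_T\) and \(\mathbf{x}_G\) come from the PLM and GNN encoders):

```python
import math

def readout_sum(H):
    """Sum-pool node representations into a graph-level vector x_G."""
    dim = len(next(iter(H.values())))
    return [sum(h[d] for h in H.values()) for d in range(dim)]

def l2_normalize(x):
    """x' = x / ||x||_2, putting text and graph vectors on the same scale."""
    norm = math.sqrt(sum(v * v for v in x)) or 1.0  # guard against zero vector
    return [v / norm for v in x]

H = {0: [1.0, 0.0], 1: [2.0, 2.0]}  # toy node feature matrix H^(L_g)
x_G = readout_sum(H)                # sum readout -> [3.0, 2.0]
x_T = [0.0, 4.0, 3.0]               # toy text representation
x_H = l2_normalize(x_T) + l2_normalize(x_G)  # concatenation, d_H = d_T + d_G
```

Normalizing before concatenation keeps either modality from dominating the fused vector purely because its encoder produces larger magnitudes.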
We feed the representation into an MLP for multi-class classification, as illustrated below: \[o=\text{MLP}(\mathbf{x}_{H}),\quad o\in\mathbf{R}^{|\mathcal{C}|}\] where \(|\mathcal{C}|\) denotes the number of elements in the category set \(\mathcal{C}\), i.e., the number of pre-defined labels. Eventually, we apply the \(softmax(\cdot)\) function to normalize \(o\) and select the class with the maximum probability as the prediction. We use the cross-entropy function as the loss function for optimization. ## 5 Experiments ### Datasets We conduct the experiments on three datasets. KI-04 is a dataset for genre classification. SWDE[9] is a dataset commonly used to test the performance of information extraction (IE). Since SWDE stores web pages under separate categories by domain, we use this dataset here for web page classification. To test the performance of the model on real-world problems, we constructed our own dataset, AHS, short for _Academic Homepages of Scholars_. The practical problem here is how to automatically perform information extraction on the homepages of scholars from different schools for later influence evaluation. We first crawled the academic homepages of teachers at 22 universities, to provide the training data needed for an automatic web page information extraction model. Each university contains four colleges, and each college contains at least about 15 scholars' (or teachers') personal homepages. Based on the above data, we built version 1.0 of AHS. After that, we found that in the pre-stage of the whole collection system, we need to determine whether a web page fed to the system is indeed a member's academic homepage. The problem thus becomes a binary classification task: whether a page is an academic homepage or not. Since AHS 1.0 only contains the personal homepages of scholars, we incrementally crawled another set of web pages as negative samples.
We crawled NBA player homepage data and movie information homepage data from the web. Based on the above work, we built AHS 2.0. The statistics of the datasets on which our experiments depend are shown in Table 1; Cat. is short for categories. ### Implementation Details We implement our PLM text encoder based on pre-trained models provided by Huggingface Transformers[10]. We use RoBERTa-large as an English text encoder to encode the KI-04 and SWDE datasets, and hfl/chinese-RoBERTa-wwm-ext-large as a cross-lingual text encoder to encode the AHS dataset. Second, we implement the GNNs on top of the DGL framework. We employ sum(\(\cdot\)) on each dimension as the readout function to get the graph-level representation. Finally, we feed the vector to a 2-layer MLP for classification. We use AdamW as the optimizer with a learning rate of 3e-4 to train the whole model. ### Results We evaluate the performance of the model with four metrics: accuracy, recall, precision, and Macro F1. The results are shown in Table 2. All the metrics on the KI-04 dataset reach 1.000, which we attribute to its small volume. Meanwhile, PLM-GNN performed well on both versions of the AHS dataset. ### Ablation Study To test the effect of the different modules in the model, we conducted the following sets of experiments. Since the ultimate goal of this paper is to solve a practical problem, the following experiments are oriented to AHS 2.0. First, we replace the text encoder and the readout function of the GNN, respectively. Some models have been proposed to overcome BERT's input length limitation, such as Longformer[11] with \(L=4096\). We replaced RoBERTa with Longformer and the sum(\(\cdot\)) readout function with max(\(\cdot\)). The results are shown in Figure 2. It can be seen that replacing RoBERTa with Longformer leads to some performance degradation. We believe this may be due to two reasons.
One is that the added complexity leads to overfitting of the model; the other is that it is not necessary to input all the text of a web page to achieve good results. In the DOM tree encoder, the impact of replacing the readout function is relatively small, but there is a slight performance degradation after the change. Furthermore, we explored the role of these two modules in the overall model by using the text encoder and the DOM tree encoder separately for classification, as shown in Figure 3. According to the results, we can see that the text of a web page is the key feature for distinguishing web pages, but the graph structure features complement the text features well. ## 6 Conclusion In this paper, we propose a simple model for representing and classifying web pages, PLM-GNN. We use a pre-trained language model to encode the text in web pages and a graph neural network to model the structural information of DOM trees. The model does not require manual feature construction for web pages but automatically learns a representation. With this model, we solve the problem of classifying web pages within the process of building an automatic academic information collection system.
2307.15712
Quantum-noise-limited optical neural networks operating at a few quanta per activation
Analog physical neural networks, which hold promise for improved energy efficiency and speed compared to digital electronic neural networks, are nevertheless typically operated in a relatively high-power regime so that the signal-to-noise ratio (SNR) is large (>10). What happens if an analog system is instead operated in an ultra-low-power regime, in which the behavior of the system becomes highly stochastic and the noise is no longer a small perturbation on the signal? In this paper, we study this question in the setting of optical neural networks operated in the limit where some layers use only a single photon to cause a neuron activation. Neuron activations in this limit are dominated by quantum noise from the fundamentally probabilistic nature of single-photon detection of weak optical signals. We show that it is possible to train stochastic optical neural networks to perform deterministic image-classification tasks with high accuracy in spite of the extremely high noise (SNR ~ 1) by using a training procedure that directly models the stochastic behavior of photodetection. We experimentally demonstrated MNIST classification with a test accuracy of 98% using an optical neural network with a hidden layer operating in the single-photon regime; the optical energy used to perform the classification corresponds to 0.008 photons per multiply-accumulate (MAC) operation, which is equivalent to 0.003 attojoules of optical energy per MAC. Our experiment used >40x fewer photons per inference than previous state-of-the-art low-optical-energy demonstrations, to achieve the same accuracy of >90%. Our work shows that some extremely stochastic analog systems, including those operating in the limit where quantum noise dominates, can nevertheless be used as layers in neural networks that deterministically perform classification tasks with high accuracy if they are appropriately trained.
Shi-Yuan Ma, Tianyu Wang, Jérémie Laydevant, Logan G. Wright, Peter L. McMahon
2023-07-28T17:59:46Z
http://arxiv.org/abs/2307.15712v1
# Quantum-noise-limited optical neural networks operating at a few quanta per activation ###### Abstract Analog physical neural networks, which hold promise for improved energy efficiency and speed compared to digital electronic neural networks, are nevertheless typically operated in a relatively high-power regime so that the signal-to-noise ratio (SNR) is large (\(>\)10). What happens if an analog system is instead operated in an ultra-low-power regime, in which the behavior of the system becomes highly stochastic and the noise is no longer a small perturbation on the signal? In this paper we study this question in the setting of optical neural networks operated in the limit where some layers use only a single photon to cause a neuron activation. Neuron activations in this limit are dominated by quantum noise from the fundamentally probabilistic nature of single-photon detection of weak optical signals. We show that it is possible to train stochastic optical neural networks to perform deterministic image-classification tasks with high accuracy in spite of the extremely high noise (SNR \(\sim\) 1) by using a training procedure that directly models the stochastic behavior of photodetection. We experimentally demonstrated MNIST handwritten-digit classification with a test accuracy of 98% using an optical neural network with a hidden layer operating in the single-photon regime; the optical energy used to perform the classification corresponds to 0.008 photons per multiply-accumulate (MAC) operation, which is equivalent to 0.003 attojoules of optical energy per MAC. Our experiment used \(>\)40\(\times\) fewer photons per inference than previous state-of-the-art low-optical-energy demonstrations, to achieve the same accuracy of \(>\)90%. 
Our work shows that some extremely stochastic analog systems, including those operating in the limit where quantum noise dominates, can nevertheless be used as layers in neural networks that deterministically perform classification tasks with high accuracy if they are appropriately trained. ## I Introduction The development and widespread use of very large neural networks for artificial intelligence [1; 2; 3] has motivated the exploration of alternative computing paradigms--including analog processing--in the hope of improving both energy efficiency and speed [4; 5]. Photonic implementations of neural networks using analog optical systems have experienced a resurgence of interest over the past several years [6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17]. However, analog processors--including those constructed using optics--inevitably have noise and typically also suffer from imperfect calibration and drift. These imperfections can result in degraded accuracy for neural-network inference performed using them [6; 18; 19; 20]. To mitigate the impact of noise, noise-aware training schemes have been developed [21; 22; 23; 24; 25; 26; 27; 28; 29]. These schemes treat the noise as a relatively small perturbation to an otherwise deterministic computation, either by explicitly modeling the noise as the addition of random variables to the processor's output or by modeling the processor as having finite bit precision. Recent demonstrations of ultra-low optical energy usage in optical neural networks (ONNs) [13; 16] were in this regime of noise as a small perturbation and used hundreds to thousands of photons to represent the average neuron pre-activation signal prior to photodetection. In Ref. 
[13], we reported achieving 90% accuracy on MNIST handwritten-digit classification using slightly less than 1 photon per scalar weight multiplication (i.e., per MAC)--which is already counterintuitively small--and one might be tempted to think that it's not possible to push the number of photons per MAC much lower while preserving accuracy. More typically, millions of photons per activation are used [11; 12; 16; 30]. In this paper we address the following question: what happens if we use such weak optical signals in an ONN, such that each photodetector in a neural-network layer receives at most one, or perhaps two or three, photons? Physical systems are subject to various sources of noise. While some noise can be reduced through improvements to the hardware, some noise is fundamentally unavoidable, especially when the system is operated with very little power--which is an engineering goal for neural-network processors. Shot noise is a fundamental noise that arises from the quantized, i.e., discrete, nature of information carriers: the discreteness of energy in the case of photons in optics, and the discreteness of charge in the case of electrons in electronics [31]. A shot-noise-limited measurement of a signal encoded with an average of \(N_{\mathrm{p}}\) photons (quanta) will have an SNR that scales as \(\sqrt{N_{\mathrm{p}}}\)[32].1 To achieve a suitably high SNR, ONNs typically use a large number of quanta for each detected signal. In situations where the optical signal is limited to just a few photons, photodetectors measure and can count individual quanta. Single-photon detectors (SPDs) are highly sensitive detectors that--in the typical _click detector_ setting--report, with high fidelity, the absence of a photon (_no click_) or the presence of one or more photons (_click_) during a given measurement period [34].
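Both statements can be checked with a small Monte Carlo sketch: for ideal detection of classical (coherent) light, the detected photon count is Poisson-distributed with mean \(N_{\mathrm{p}}\), so the SNR is \(\sqrt{N_{\mathrm{p}}}\), and a click detector fires with probability \(1-e^{-N_{\mathrm{p}}}\). The following standard-library Python simulation (illustrative, not the paper's code) verifies both at \(N_{\mathrm{p}}=1\):

```python
import math
import random

def poisson_sample(lam, rng):
    """Draw one Poisson(lam) count by Knuth's inversion method (fine for small lam)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

rng = random.Random(0)  # fixed seed for reproducibility
n_p = 1.0               # mean detected photon number per shot
shots = [poisson_sample(n_p, rng) for _ in range(200_000)]

mean = sum(shots) / len(shots)
var = sum((s - mean) ** 2 for s in shots) / len(shots)
snr = mean / math.sqrt(var)                          # close to sqrt(n_p) = 1
click_prob = sum(s > 0 for s in shots) / len(shots)  # close to 1 - exp(-1)
```

At one photon per shot the detector clicks on only about 63% of shots, which is exactly the SNR \(\sim 1\) regime in which the networks described here must operate.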
In the quantum-noise-dominated regime of an optical signal with an average photon number of about 1 impinging on an SPD, the measurement outcome will be highly stochastic, resulting in a very low SNR (of about 1).2 Conventional noise-aware-training algorithms are not able to achieve high accuracy with this level of noise. **Is it possible to operate ONNs in this very stochastic regime and still achieve high accuracy in deterministic classification tasks?** The answer is _yes_, and in this work we will show how. Footnote 1: The _shot-noise limit_, which is sometimes also referred to as the _standard quantum limit_[33], can be evaded if, instead of encoding the signal in a thermal or coherent state of light, a quantum state—such as an intensity-squeezed state or a Fock state—is used. In this paper we consider only the case of _classical_ states of light for which shot noise is present and the shot-noise limit applies. Footnote 2: Again, this is under the assumption that the optical signal is encoded in an optical state that is subject to the shot-noise limit—which is the case for classical states of light. The stochastic operation of neural networks has been extensively studied in computer science as part of the broader field of stochastic computing [35]. In the field of machine learning, binary stochastic neurons (BSNs) have been used to construct stochastic neural networks [36; 37; 38; 39; 40; 41; 42], with training being a major focus of study. Investigations of hardware implementations of stochastic computing neural networks, such as those in Refs. [43; 44] (with many more surveyed in Ref. [45]), have typically been for deterministic complementary metal-oxide-semiconductor (CMOS) electronics, with the stochasticity introduced by random-number generators. 
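For concreteness, a classic binary stochastic neuron fires with a probability given by a sigmoid of its pre-activation; the SPD-based activation considered in this work plays the same role, with the sigmoid replaced by a physics-derived click probability. A minimal, illustrative sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def binary_stochastic_neuron(z):
    """Classic BSN: output 1 with probability sigmoid(z), else 0."""
    return (rng.random(np.shape(z)) < sigmoid(z)).astype(float)

z = np.array([-2.0, 0.0, 2.0])
firing_rates = np.mean([binary_stochastic_neuron(z) for _ in range(10_000)], axis=0)
# Empirical firing rates approach sigmoid(z) ≈ [0.12, 0.50, 0.88].
```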
While many studies of binary stochastic neural networks have been conducted with standard digital CMOS processors, there have also been proposals to construct them from beyond-CMOS hardware, motivated by the desire to minimize power consumption: direct implementation of binary stochastic neurons using bistable systems that are noisy by design--such as low-barrier magnetic tunnel junctions (MTJs)--has been explored [46; 47; 48], and there have also been proposals to realize hardware stochastic elements for neural networks that could be constructed with noisy CMOS electronics or other physical substrates [49; 50]. ONNs in which noise has been intentionally added [51; 52; 25] have also been studied.

Figure 1: **Deterministic inference using noisy neural-network hardware.** **a**, The concept of a stochastic physical neural network performing a classification task. Given a particular input image to classify, repetitions exhibit variation (represented by different traces of the same color), but the class is predicted nearly deterministically. **b**, The signal-to-noise ratio (SNR) of single-photon-detection neural networks (SPDNNs) compared to conventional optical neural networks (ONNs). Conventional ONNs operate with high photon budgets (SNR \(\gg\) 1) to obtain reliable results, whereas SPDNNs operate with low photon budgets--of up to just a few detected photons per shot (SNR \(\sim\) 1). The relation between the detected optical energy (in number of photons \(N_{\mathrm{p}}\)) and the SNR is SNR \(=\sqrt{N_{\mathrm{p}}}\), which is known as the shot-noise limit.
Our work with low-photon-count optics is related to but distinct from many of the studies cited here in its motivating assumption: instead of desiring noise and stochastic behavior--and purposefully designing devices to have them--we are concerned with situations in which physical devices have large and unavoidable noise but where we would nevertheless like to construct deterministic classifiers using these devices because of their potential for low-energy computing (Figure 1). The **key idea** in our work is that when ONNs are operated in the approximately-1-photon-per-neuron-activation regime and the detectors are SPDs, it is natural to consider the neurons as binary stochastic neurons: the output of an SPD is binary (_click_ or _no click_) and fundamentally stochastic. Instead of trying to train the ONN as a deterministic neural network that has very poor numerical precision, one can instead train it as a binary stochastic neural network, adapting some of the methods from the last decade of machine-learning research on stochastic neural networks [39; 40; 41; 42; 53; 54] and using a physics-based model of the stochastic single-photon-detection process during training. We call this _physics-aware stochastic training_. We experimentally implemented a stochastic ONN using as a building block an optical matrix-vector multiplier [13] modified to have SPDs at its output: we call this a _single-photon-detection neural network_ (SPDNN). We present results showing that high classification accuracy can be achieved even when the number of photons per neuron activation is approximately 1, and even without averaging over multiple shots. We also studied in simulation how larger, more sophisticated stochastic ONNs could be constructed and what their performance on CIFAR-10 image classification would be.
## II Single-photon-detection neural networks: optical neural networks with stochastic activation from single-photon detection

We consider ONNs in which one or more layers are each constructed from an optical matrix-vector multiplier followed by an array of SPDs (Figure 2a-c), and in which the optical powers used are sufficiently low that in each execution of the layer, each SPD has at most only a few photons impinging on it, leading to stochastic measurement outcomes of _no click_ or _click_. In our setting, we aim to perform _inference_ using the SPDNN--with its implementation in physical hardware (Figure 2d)--and to perform _training_ of the SPDNN _in silico_ (Figure 2e-f). That is, training is performed entirely using standard digital electronic computing.3 Footnote 3: It is not required that the training be done _in silico_ for it to succeed; it is just a choice we made in this work. _Hardware-in-the-loop_ training, such as used in Ref. [24], is a natural alternative to purely _in silico_ training that can even make training easier by relaxing the requirements on how accurate the _in silico_ model of the physical hardware process needs to be.

### Physics-aware stochastic training

To train an SPDNN, we perform gradient descent using backpropagation, which involves a forward pass to compute the current error (or loss) of the network, and a backward pass to compute the gradient of the loss with respect to the network parameters; our procedure is inspired by backpropagation-based training of stochastic and binary neural networks [39; 42]. We model the forward pass (upper part of Figure 2e) through the network as a stochastic process that captures the key physics of single-photon detection of optical signals having Poissonian photon statistics [55]: the measurement outcome of an SPD is a binary random variable (_no click_ or _click_) that is drawn from the Bernoulli distribution with a probability that depends on the mean photon number of the light impinging on the detector.
However, during the backward pass (lower part of Figure 2e), we employ a deterministic mean-field estimator to compute the gradients. This approach avoids the stochasticity and binarization of the SPD process, which typically pose difficulties for gradient estimation. We now give a brief technical description of our forward and backward passes for training; for full details see Methods and Supplementary Notes 1A and 2A. We denote the neuron pre-activations of the \(l\)th stochastic layer of an SPDNN as \(\mathbf{z}^{(l)}=W^{(l)}\mathbf{a}^{(l-1)}\), where \(\mathbf{a}^{(l-1)}\) is the activation vector from the previous layer (\(\mathbf{a}^{(0)}\) denotes the input vector \(\mathbf{x}\) of the data to be classified). In the physical realization of an SPDNN, \(\mathbf{z}^{(l)}\) is encoded optically (for example, in optical intensity) following an optical matrix-vector multiplier (optical MVM, which computes the product between the matrix \(W^{(l)}\) and the vector \(\mathbf{a}^{(l-1)}\)) but before the light impinges on an array of SPDs. We model the action of an SPD with a stochastic activation function, \(f_{\rm SPD}\) (Figure 2b; Eq. 1). The stochastic output of the \(l\)th layer is then \(\mathbf{a}^{(l)}=f_{\rm SPD}(\mathbf{z}^{(l)})\). For an optical signal having mean photon number \(\lambda\) and that obeys Poissonian photon statistics, the probability of a _click_ event by an SPD is \(P_{\rm SPD}(\lambda)=1-e^{-\lambda}\) (Figure 2c). We define the stochastic activation function \(f_{\rm SPD}\) as follows: \[f_{\rm SPD}(z)\coloneqq\begin{cases}1&\text{with probability }p=P_{\rm SPD}(\lambda(z)),\\ 0&\text{with probability }1-p,\end{cases} \tag{1}\] where \(\lambda(z)\) is a function mapping a single neuron's pre-activation value to a mean photon number.
For an incoherent optical setup where the information is directly encoded in intensity, \(\lambda(z)=z\); for a coherent optical setup where the information is encoded in field amplitude and the SPD directly measures the intensity, \(\lambda(z)=|z|^{2}\). In general, the form of \(\lambda(z)\) is determined by the signal encoding used in the optical MVM, and the detection scheme following the MVM. We use \(f_{\text{SPD}}\) in modeling the stochastic behavior of an SPDNN layer in the forward pass. However, during the backward pass, we make a deterministic mean-field approximation of the network: instead of evaluating the stochastic function \(f_{\text{SPD}}\), we evaluate \(P_{\text{SPD}}(\lambda(z))\) when computing the activations of a layer: \(\mathbf{a}^{(l)}=P_{\text{SPD}}(\lambda(\mathbf{z}^{(l)}))\) (Figure 2b). This is an adaptation of a standard machine-learning method for computing gradients of stochastic neural networks [39].

Figure 2: **Single-photon-detection neural networks (SPDNNs): _physics-aware stochastic training_ and inference.** **a**, A single layer of an SPDNN, comprising an optical matrix-vector multiplier (optical MVM, in grey) and single-photon detectors (SPDs; in red), which perform stochastic nonlinear activations. Each output neuron's value is computed by the physical system as \(a_{i}=f_{\rm SPD}(z_{i})\), where \(z_{i}\) is the weighted sum (shown in green) of the input neurons to the \(i\)th output neuron computed as part of the optical MVM, and \(a_{i}\) is the stochastic binary output from a single-photon detector. **b**, Forward and backward propagation through the SPD activation function. The optical energy (\(\lambda\)) incident on an SPD is a function of \(z_{i}\) that depends on the encoding scheme used. Forward propagation uses the stochastic binary activation function \(f_{\rm SPD}\), while backpropagation uses the mean-field probability function \(P_{\rm SPD}\). **c**, Probability of an SPD detecting a click (output \(a=1\)) or not (output \(a=0\)), as a function of the incident light energy \(\lambda\). **d**, Optical inference using an SPDNN with \(L\) layers. The activation values from the SPD array of each layer are passed to light emitters for the optical MVM of the next layer. The last layer uses a conventional photodetector (PD) array instead of an SPD array, and is operated with enough optical energy that the output of this layer has high SNR. **e**, _In silico_ training of an SPDNN with \(L\) layers. Each forward propagation is stochastic, and during backpropagation, the error vector is passed to the hidden layers using the mean-field probability function \(P_{\rm SPD}\) instead of the stochastic activation function \(f_{\rm SPD}\). In this figure, \(\partial x\) is shorthand for \(\partial C/\partial x\), where \(C\) is the cost function.

### Inference

When performing inference (Figure 2d), we can run just a single shot of a stochastic layer or we can choose to take the average of multiple shots--trading greater energy and/or time usage for reduced stochasticity. For a single shot, a neuron activation takes on the value \(a^{[1]}=a\in\{0,1\}\); for \(K\) shots, \(a^{[K]}=\frac{1}{K}\sum_{k=1}^{K}a_{k}\in\{0,1/K,2/K,\ldots,1\}\). In the limit of infinitely many shots, \(K\rightarrow\infty\), the activation \(a^{[\infty]}\) would converge to the expectation value, \(a^{[\infty]}=\mathbb{E}[a]=P_{\text{SPD}}(\lambda(z))\). In this work we focus on the single-shot (\(K=1\)) and few-shot (\(K\leq 5\)) regime, since the high-shot (\(K\gg 100\)) regime is very similar to the high-photon-count-per-shot regime that has already been studied in the ONN literature (e.g., in Ref. [13]).
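The stochastic activation of Eq. (1) and the \(K\)-shot averaging just described can be sketched numerically as follows. This is an illustration with made-up weights for the incoherent encoding \(\lambda(z)=z\), not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(42)

def p_click(lam):
    """SPD click probability for Poissonian light with mean photon number lam."""
    return 1.0 - np.exp(-lam)

def f_spd(z, K=1):
    """K-shot averaged stochastic SPD activation, incoherent encoding lambda(z) = z."""
    p = p_click(np.maximum(z, 0.0))            # intensities are non-negative
    clicks = rng.random((K,) + np.shape(z)) < p
    return clicks.mean(axis=0)                 # a^[K] in {0, 1/K, ..., 1}

def mean_field(z):
    """Deterministic surrogate used in the backward pass: E[a] = P_SPD(lambda(z))."""
    return p_click(np.maximum(z, 0.0))

W = rng.uniform(0.0, 0.1, size=(4, 8))         # non-negative weights (incoherent MVM)
x = rng.uniform(0.0, 1.0, size=8)              # non-negative inputs
z = W @ x                                      # the optical matrix-vector product
a1 = f_spd(z, K=1)                             # single shot: binary and very noisy
a5k = f_spd(z, K=5000)                         # many shots: approaches the mean field
```

With many shots, `a5k` converges to `mean_field(z)`, which is the deterministic quantity the backward pass differentiates during physics-aware stochastic training.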
An important practical point is that averaging for \(K>1\) shots can be achieved by counting the clicks from each SPD, which is what we did in the experiments we report. We can think of \(K\) as a discrete integration time, so averaging need not involve any data reloading or sophisticated control.

## III MNIST Handwritten-Digit Classification with a Single-Photon-Detection Multilayer Perceptron

We evaluated the performance--both in numerical simulations and in optical experiments--of SPDNNs on the MNIST handwritten-digit-classification benchmark task with a simple \(784\to N\to 10\) multilayer perceptron (MLP) architecture (Figure 3a). The activation values in the hidden layer were computed by SPDs. The optical power was chosen so that the SNR of the SPD measurements was \(\sim 1\), falling in the low-SNR regime (Figure 1b). The output layer was implemented either with full numerical precision on a digital electronic computer, or optically with an integration time set so that the measured signal comprised enough photons that a high SNR (Figure 1b) was achieved, as in conventional ONNs. Our use of a full-precision output layer is consistent with other works on binary neural networks [56; 42; 57]. In a shallow neural network, executing the output layer at high SNR substantially limits the overall energy efficiency gains from using small photon budgets in earlier layers, but in larger models, the relatively high energy cost of a high-SNR output layer is amortized. Nevertheless, as we will see, even with just a single-hidden-layer network, efficiency gains of \(>\)40\(\times\) are possible by performing the hidden layer in the low-SNR regime. The models we report on in this section used non-negative weights in the hidden layers and real-valued weights in the output layers.
This allows the hidden layers to be straightforwardly realized with optical MVMs using incoherent light.4 In Section IV and Supplementary Note 2, we report on extensions to the case of real-valued weights in coherent optical processors. Footnote 4: A high-SNR layer with real-valued weights can be realized with an incoherent optical MVM if some digital-electronic postprocessing is allowed [58; 13]--which is the approach we take for the optical output-layer executions in our experiments. However, the postprocessing strategy doesn't directly apply in the low-SNR regime because readout becomes inseparable from the application of a nonlinear activation function, so we are constrained to non-negative weights and activations in the hidden layers.

### Simulation results

First, we digitally simulated the SPDNN models shown in Figure 3a. We report the simulated test accuracies in Figure 3b for the full test dataset of 10,000 images, as a function of the number of hidden neurons \(N\) and the number of shots \(K\) of binary SPD measurements integrated to compute each activation. Due to the stochastic nature of the model, the classification output for a fixed input varies from run to run. We repeated inferences on fixed inputs from the test set 100 times; we report the mean and standard deviation of the test accuracy as data points and error bars, respectively. The standard deviations of the test accuracies are around 0.1%. The accuracy achieved by the SPDNN is substantially higher than for linear models (\(<93\%\) classification accuracy on MNIST [59]). This shows both that high-accuracy deterministic classification is possible despite the hidden layer being stochastic, and that the SPD activation function serves as a suitable nonlinearity in the neural network.
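To illustrate how near-deterministic predictions can emerge from stochastic binary neurons, the following toy model (with hand-picked, illustrative weights, not a trained SPDNN) repeats a single-shot inference 100 times:

```python
import numpy as np

rng = np.random.default_rng(7)

def spd_layer(z, rng):
    """Single-shot stochastic SPD activation, lambda(z) = z (intensity encoding)."""
    return (rng.random(z.shape) < 1.0 - np.exp(-np.maximum(z, 0.0))).astype(float)

# Toy 8 -> 16 -> 2 network with a stochastic binary hidden layer.
# Weights are hand-picked so that hidden units 0-7 click far more often
# than units 8-15, strongly favoring class 0.
W1 = rng.uniform(0.0, 0.5, size=(16, 8))
W1[8:, :] *= 0.05
W2 = np.zeros((2, 16))
W2[0, :8] = 1.0   # class-0 score counts clicks of units 0-7
W2[1, 8:] = 1.0   # class-1 score counts clicks of units 8-15

x = rng.uniform(0.5, 1.0, size=8)
preds = [int(np.argmax(W2 @ spd_layer(W1 @ x, rng))) for _ in range(100)]
# Individual hidden activations differ on every run, yet the predicted
# class is (almost) always the same.
```

The per-neuron clicks vary from run to run, but the margin between the class scores is large enough that the argmax is stable, which is the same mechanism that lets a trained SPDNN classify deterministically.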
Figure 3: **Performance of a single-photon-detection neural network (SPDNN) on MNIST handwritten-digit classification.** **a**, An SPDNN realizing a multilayer perceptron (MLP) architecture of \(N\) neurons in the hidden layer. The hidden layer (\(784\rightarrow N\)) was computed using an incoherent optical matrix-vector multiplier (MVM) followed by a single-photon-detector (SPD) array. Each SPD realized a stochastic activation function for a single hidden-layer neuron. During a single inference, the hidden layer was executed a small number of times (\(1\leq K\leq 5\)), yielding averaged activation values. The output layer (\(N\rightarrow 10\)) was realized either optically--using an optical MVM and a high photon budget to achieve high readout SNR, as in conventional ONNs--or with a digital electronic processor, yielding a result with full numerical precision. **b**, Simulated test accuracy of MNIST handwritten-digit classification for models with different numbers of hidden neurons \(N\) and shots per activation \(K\). Each activation value is obtained by averaging \(K\) shots of stochastic binary SPD readouts. When \(K\rightarrow\infty\), the stochastic activations \(a_{i}\) become the expectations \(\mathbb{E}[a_{i}]\), which are deterministic. The test accuracy with few shots is close to the accuracy achieved in the deterministic limit. **c**, Experimental evaluation of the SPDNN, with the output layer performed with full numerical precision on a digital computer. Results are presented for both \(K=1\) (single-shot, i.e., no averaging; top) and \(K=2\) (bottom) shots per activation. **d**, Experimental evaluation of the SPDNN, with both the hidden and the output layer executed using the optical experimental apparatus. The average number of detected photons used per inference in the hidden layer was kept fixed and the number used per inference in the output layer was varied (see main text for numbers). The number of detected photons per inference is reported both as an aggregate optical energy (top axis) and as a per-MAC quantity (bottom axis), obtained by dividing the number of photons per inference by the number of MACs performed in a single inference. The mean and standard deviation of the test accuracy were estimated using 100 repetitions of inference for each image in the test set.

The sizes of the models we simulated (in number of neurons \(N\)) are similar to those of traditional deterministic neural networks for MNIST classification [60], so the high accuracies achieved are not a simple consequence of averaging over many noisy neurons [61]. If we integrated an infinite number of SPD measurements for each activation (\(K\rightarrow\infty\))--which is infeasible in experiment, but can be simulated--then the SPDNN output would become deterministic. The test accuracy achieved in this limit can be considered as an upper bound, as the classification accuracy improves monotonically with \(K\). Notably, even with just a single SPD measurement (\(K=1\)) for each activation, the mean test accuracy is around \(97\%\). The accuracy is substantially improved with just a few more shots of averaging, and approaches the deterministic upper bound when \(K\gtrsim 5\). The mean single-photon-detection probability, averaged over all neurons, is \(\approx 0.5\), so the simulated number of detected photons per shot is very small: \(\approx 0.5N\). As we will quantify in the next section reporting the results of optical experiments, this means high accuracy can be achieved using much less optical energy than in conventional ONNs.

### Optical experimental results

In our experimental demonstrations, we based our SPDNN on a free-space optical matrix-vector multiplier (MVM) that we had previously constructed for high-SNR experiments [13], and replaced the detectors with SPDs so that we could operate it with ultra-low photon budgets (see Methods).
The experiments we report were, in part, enabled by the availability of cameras comprising large arrays of pixels capable of detecting single photons with low noise [62]. We encoded neuron values in the intensity of incoherent light; as a result, the weights and input vectors were constrained to be non-negative. However, this is not a fundamental feature of SPDNNs--in the next section (Section IV), we present simulations of coherent implementations that lift this restriction. A single-photon-detecting camera measured the photons transmitted through the optical MVM, producing the stochastic activations as electronic signals that were input to the following neural-network layer (see Methods and Supplementary Note 3 and 4). In our first set of optical experiments, the hidden layer was realized optically and the output layer was realized _in silico_ (Figure 3c): the output of the SPD measurements after the optical MVM was passed through a linear classifier executed with full numerical precision on a digital electronic computer. We tested using both \(K=1\) (no averaging) and \(K=2\) shots of averaging the stochastic binary activations in the hidden layer. The results agree well with simulations, which differ from the simulation results shown in Figure 3b because they additionally modeled imperfections in our experimental optical-MVM setup (see Methods, Supplementary Note 7). The test accuracies were calculated using 100 test images, with inference for each image repeated 30 times. The hidden layer (the one computed optically in these experiments) used approximately 0.0008 detected photons per MAC, which is \(\geq 6\) orders of magnitude lower than is typical in ONN implementations [11; 12; 16; 30] and \(\geq 3\) orders of magnitude lower than the lowest photons-per-MAC numbers reported to date [13; 16]. We then performed experiments in which both the hidden layer and the output layer were computed optically (Figure 3d). 
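The photon bookkeeping behind per-MAC figures like these is simple division. The sketch below uses the hidden-layer numbers quoted in this section for the all-optical experiment (\(N=400\), \(K=5\), mean click probability \(\approx 0.522\)); counting the MACs of each of the \(K\) shots separately is our assumption about how the normalization is done, and output-layer photon counts (from the Supplementary Information) are not reproduced here:

```python
# Photon bookkeeping for the all-optical MNIST experiment in this section:
# N = 400 hidden neurons, K = 5 shots, mean click probability ~0.522.
N, K, mean_click_prob = 400, 5, 0.522

# Detected photons in the hidden layer per inference (counting one detected
# photon per click, which is accurate at these very low photon fluxes).
hidden_photons = mean_click_prob * N * K        # = 1044 photons

# MACs performed by the hidden (784 -> 400) layer, counting each shot
# separately (an assumption about the normalization).
hidden_macs = 784 * N * K

photons_per_mac = hidden_photons / hidden_macs  # ~7e-4 photons per MAC
```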
In these experiments, we implemented a neural network with 400 hidden neurons and used 5 shots per inference (\(N=400\), \(K=5\)). The total optical energy was varied by changing the number of photons used in the output layer; the number of photons used in the hidden layer was kept fixed. The average value of the stochastic binary activations \(a_{i}\) in the hidden layer was \(\approx 0.522\). This corresponds to a total of \(0.522\times N\times K=1044\) photons being detected in the hidden layer per inference. The total detected optical energy per inference comprises the sum of the detected optical energy in the hidden (\(784\to 400\)) layer and in the output (\(400\to 10\)) layer (see Methods, Supplementary Table 6 and Supplementary Note 9). The results show that even though the output layer was operated in the high-SNR regime (Figure 1b), the full inference computation achieved high accuracy yet used only a few femtojoules of optical energy in total (equivalent to a few thousand photons). By dividing the optical energy by the number of MACs performed in a single inference, we can infer the per-MAC optical energy efficiency achieved: with an average detected optical energy per MAC of approximately 0.001 attojoules (0.003 attojoules), equivalent to 0.003 photons (0.008 photons), the test accuracy was \(92.0\pm 2.3\%\) (\(98.0\pm 1.3\%\)).

## IV Simulation study of possible future deeper, coherent single-photon-detection neural networks

We have successfully experimentally demonstrated a two-layer SPDNN, but can SPDNNs be used to implement deeper and more sophisticated models? One of the limitations of our experimental apparatus was that it used an intensity encoding with incoherent light and as a result could natively only perform operations with non-negative numbers.
In this section we will show that SPDNNs capable of implementing signed numbers can be used to realize multilayer models (with up to 6 layers), including models with more sophisticated architectures than multilayer perceptrons--such as models with convolutional layers. ONNs based on coherent light can naturally encode sign information in the phase of the light and have been realized in many different physical platforms [64; 65; 66; 7; 10; 11; 6]. We propose--and study in simulation--SPDNNs using coherent light. Neuron values are encoded in optical amplitudes that are constrained to have phases that are either 0 (positive values) or \(\pi\) (negative values). With this encoding, detection by an SPD--which measures intensity and is hence insensitive to phase--results in a stochastic nonlinear activation function that is symmetric about zero (Figure 4a; see Methods). Alternative detection schemes could be employed that would modify the activation function, but we have focused on demonstrating the capabilities of this straightforward case, avoiding introducing additional experimental complexity. We performed two sets of simulation experiments: one on coherent SPDNNs trained to perform MNIST handwritten-digit classification, and one on coherent SPDNNs trained to perform CIFAR-10 image classification. Figure 4d shows the architectures tested and simulation results for the MNIST benchmark (see Methods, Supplementary Note 2B). The accuracy achieved by MLPs with either one or two hidden layers was higher than that of the single-hidden-layer MLP simulated for the incoherent case (Figure 3b), and an architecture with a single convolutional layer followed by two linear layers achieved \(>\)99% accuracy even in the single-shot (\(K=1\)) regime. Figure 4e shows the results of simulating variants of a 6-layer convolutional SPDNN (comprising 4 convolutional layers and 2 fully connected, linear layers) on CIFAR-10 image classification. All these simulation results were obtained in the single-shot (\(K=1\)) regime.

Figure 4: **Simulation study predicting the performance of proposed _coherent_ single-photon-detection neural networks (SPDNNs).** **a**, The probability of detecting a photon as a function of the input light amplitude in a coherent SPDNN. Real-valued numbers are encoded in coherent light with either 0 phase (positive numbers) or \(\pi\) phase (negative numbers). Measurement by a single-photon detector (SPD) results in the probabilistic detection of a photon, with a probability that depends on the square of the encoded value \(z\), in contrast to intensity encodings with incoherent light. **b**, Structure of a convolutional SPDNN with a kernel size of \(5\times 5\). Single-shot SPD measurements (\(K=1\)) are performed after each layer (by an SPD array), except for the output layer. Average \(2\times 2\) pooling is applied after each convolutional operation. A digital rectified linear unit (ReLU) [63] activation function can also be used in the linear layer as an alternative. **c**, Schematic of a convolutional layer with SPD activations. **d**, Simulated test accuracy of coherent SPDNNs with varying architecture performing MNIST handwritten-digit classification. The multilayer perceptron (MLP) models had 400 neurons in each hidden layer. The convolutional model consisted of a convolutional layer with 16 output channels, followed by two linear layers with an SPD activation in between. **e**, Simulated test accuracy of coherent SPDNNs with varying architecture performing CIFAR-10 image classification. The models have four convolutional layers, each followed by SPD activation functions. The two linear layers can be implemented either in full precision with a ReLU activation function (in purple) or using the SPD activation function. The number of output channels for each convolutional layer is indicated above the corresponding data point.
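As a minimal numerical illustration (not the paper's code), the coherent-encoding click probability and the resulting sign-insensitive stochastic activation can be written as:

```python
import numpy as np

rng = np.random.default_rng(3)

def p_click_coherent(z):
    """SPD click probability for an amplitude encoding: lambda(z) = |z|^2."""
    return 1.0 - np.exp(-np.abs(z) ** 2)

def f_spd_coherent(z):
    """Single-shot stochastic SPD activation; insensitive to the sign (phase) of z."""
    return (rng.random(np.shape(z)) < p_click_coherent(z)).astype(float)

z = np.linspace(-2.0, 2.0, 9)
# p_click_coherent is an even function of z (symmetric about zero), grows
# monotonically with |z|, and is exactly 0 at z = 0 (no light, no click).
```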
The number of channels in each convolutional layer was varied, which affects the total number of MACs used to perform an inference. We observed that the test accuracy increased with the size of the SPDNN, with accuracies approaching those of conventional convolutional neural networks of comparable size [67], as well as of binarized convolutional neural networks [68; 42; 69]. In the models we simulated that used only the SPD activation function (i.e., the ones in which there are no 'Digital ReLU' blocks), the high-SNR linear output layer had only 4000 MAC operations, so the number of MACs in the high-SNR layer comprises less than 0.01% of the total MACs performed during an inference. The models we simulated are thus sufficiently large that the total optical energy cost would be dominated by the (low-SNR) layers prior to the (high-SNR) output layer. Equivalently, the optical energy cost per MAC would be predominantly determined by the cost of the low-SNR layers. These simulation results illustrate the ability of SPDNNs to scale to larger and deeper models, enabling them to perform more challenging tasks. The symmetric stochastic activation function that is realized by SPD of coherently encoded real values yields good accuracies on both MNIST and CIFAR-10 benchmarks and is straightforward to implement experimentally.

## V Discussion

In this paper we have shown that it is possible to construct an optical neural network (ONN) in which one or more layers use single-photon detection (SPD) of weak optical signals to perform stochastic neuron activation, and--despite the exceptionally low signal-to-noise ratio (SNR) of around 1 in the low-optical-power layers--such single-photon-detection neural networks (SPDNNs) can achieve high accuracy in deterministic classification tasks.
This is enabled by physics-aware stochastic training, in which an ONN is trained as a stochastic neural network using a model that incorporates knowledge of the physics of photodetection of optical signals with average photon number around 1 that are subject to Poissonian photon statistics. We experimentally demonstrated a two-layer ONN in which the (large) hidden layer was operated in the low-optical-power, quantum-noise-limited, highly stochastic regime (SNR \(\sim 1\)) and the (small) output layer was operated in the higher-optical-power, low-noise regime (SNR \(\gtrsim 10\)). This ONN (when run with \(N=50\) hidden neurons and \(K=5\) shots of SPD measurements per activation; see Supplementary Figure 20) achieved a test accuracy of 90.6% on MNIST handwritten-digit recognition while using only an average of 1390 detected photons per inference (corresponding to \(\sim\)0.5 fJ of detected optical energy per inference), which is a large improvement over recent state-of-the-art low-optical-power ONN experiments in the following metric: 1390 photons per inference is \(>\)40\(\times\) less than used by the ONNs in Refs. [13; 16] to achieve the same accuracy (\(>\)90%) on the same task (MNIST classification). 5 Footnote 5: We could also very favorably compare the number of photons per MAC used in our experiments versus in the experiments reported in Refs. [13; 16], but we don’t wish to emphasize this metric here for two reasons. Firstly, and most importantly, we see energy per inference as a more important metric to focus on than energy per MAC, even though picking metrics is not necessarily straightforward [70]. Secondly, dot products computed for the hidden layer in our optical experiments are read out stochastically by single-photon detectors that output just 1 bit of information, whereas the dot products computed in the experiments reported by Refs. [13; 16] are read out with more bits of precision. 
This difference in the nature and precision of the readout means a MAC operation in our experiments is arguably not quite the same as a MAC operation in the experiments of Refs. [13; 16], and so careful interpretation is needed when comparing their costs. While we have demonstrated a fundamental point--that ONNs can be successfully operated in the few-photon-per-activation regime in which quantum shot noise causes very low SNR--an important practical consideration for the construction of ONNs is that the energy used by optical signals within the ONN is only part of the ONN's total energy consumption, and it is the total energy per inference that is generally what one wants to optimize for [28; 60; 71]. A practical limitation of our experiments is that they were conducted with a relatively slow6 single-photon-detector array, limiting the speed at which a single execution of a layer could be carried out, and the detector array was not optimized for energy efficiency. For our fundamental approach and methods to be applied to make ONNs that offer a practical advantage over state-of-the-art electronic processors as generic neural-network accelerators, there remains important work to be done in engineering an overall system that operates sufficiently fast while minimizing total energy cost. Recent progress in the development of large, fast arrays of single-photon detectors coupled with digital logic [72] suggests that there is a path towards this goal. Ref. [73] has also pointed out the possibility of using fast superconducting-nanowire single-photon detectors for realizing spiking neural networks.
Furthermore, there is a complementary path toward utility in the nearer term: if instead of aiming to use ONNs to entirely replace electronic processors, one uses ONNs as a pre-processor for input data that is already optical [9; 74; 75], operating the ONN with single-photon detectors is a natural match with scenarios in which the optical input is very weak--for example, in low-light-imaging applications.

Footnote 6: 19.8 kHz maximum frame rate.

Our approach is not tied to a specific architecture of ONN--the free-space matrix-vector multiplier used in our experiments is just one of many possible choices of architecture. Other ONNs could be adapted to use our approach by replacing the photodetectors typically used for readout of neurons at the end of a layer with single-photon detectors. ONNs based on diffractive optics [7; 12; 64], Mach-Zehnder interferometer (MZI) meshes [76; 6; 77], and other on-chip approaches to matrix-vector multiplication [78; 10; 11] all appear compatible. In our optical experiments, we used single-photon detectors that output an electronic signal when a photon is detected. However, in multilayer ONNs, the input to each layer is optical. One can convert an electronic detector output to an optical input by modulating an optical source--which is what we did and what is often done in ONNs more generally [9]--but an alternative is to construct a device that performs SPD with high efficiency and gives the measurement result as an _optical_ signal that can be directly used as an input to the next layer in the ONN. Designing and demonstrating such a device is an interesting potential avenue for future work in applied quantum nonlinear optics [79; 80; 81; 82; 83; 84], and could lead to both lower electronic energy consumption and higher speed for single-photon-detection ONNs.
We trained our demonstration SPDNN _in silico_ using backpropagation, but if SPDNNs with high overall energy efficiency are built, it would be a boon to use this efficient hardware not only for inference but also for training. To this end, it could be interesting to study how to adapt _in situ_ training [85; 86; 87; 17], including backpropagation-free methods (e.g., Refs. [88; 89; 90; 91]), for SPDNNs. An open question related to training is whether it is possible to make SPDNNs that do not involve a final high-SNR layer while preserving task accuracy; this could help to reduce the overall energy per inference. Other future work could explore the extension of our research to neural networks with larger sizes (wider and more layers, which could both improve the capability of the neural network and further amortize the energy cost of the final, high-SNR layer, if used), more sophisticated classification tasks (beyond MNIST and CIFAR-10 image classification--such as has been shown with conventional binary neural networks [92; 56; 93]), and generative or other probabilistic tasks--for which the stochasticity can be harnessed rather than merely tolerated. Beyond machine-learning tasks, an SPDNN layer could be used as the core of a single-photon-regime photonic Ising machine [94] for heuristically solving combinatorial-optimization problems, realizing an optical version of p-bit computing [48]. Our research is an example of realizing a neural network using a stochastic physical system. Beyond optics, our work is related and complementary to recent investigations in electronic, spintronic, and quantum neuromorphic computing [95; 96; 97; 98; 99; 100; 4], including in training physical systems to perform neural-network inference [102; 103; 104; 105; 106]. Noise is a fundamental feature of, and the ultimate limit to energy efficiency in, computing with all analog physical systems.
It has long been realized that noise is not always detrimental: not only does it not necessarily prevent accurate computation, but it can in some cases even enable fundamentally new and more efficient algorithms or types of computation. Our work shows that using a quantum physical model of a particular hardware's noise at the software level can enable surprisingly large gains in energy efficiency. The phenomenon observed in our work seemingly relies on two key physical ingredients. First, the system's available states are effectively quantized, as in the photonic quantization of energy in our ONN demonstration, or the binarization that occurs in low-barrier, stochastic magnetic tunnel junctions [96]. Second, the noise in the system results in the quantized outputs of the system being stochastic. This suggests that ultra-low-SNR physical neural networks should be possible in many physical hardware platforms beyond photonics. Systems in which shot noise dominates are natural matches with our approach and methods. Our approach could also be relevant to systems in which thermal (Johnson) noise dominates--as is typically the case in room-temperature electronics--but this will depend not just on the noise but also on the system's dynamics. Which hardware platforms and system architectures can yield an overall energy benefit by being operated in a stochastic regime while maintaining computational accuracy is an important open question. While there are many reasons computer science has traditionally favored the abstraction of hardware from software, our work is part of a broad trend, spanning many different physical platforms [107; 108; 5], in which researchers engineer computations in a physics-aware manner.
By short-circuiting the abstraction hierarchy--in our case, going from a physics-aware software description of a stochastic neural network directly to a physical optical realization of the constituent operations--it is possible to achieve orders-of-magnitude improvements in energy efficiency [28; 9] versus conventional CMOS computing. _Physics-aware software_, in which software directly incorporates knowledge of the physics of the underlying computing hardware--such as in the _physics-aware stochastic training_ we used in this work--is understudied compared to purely software-level or hardware-level innovations (i.e., "at the top" or "at the bottom" of the hierarchy [109]). It is thus ripe for exploration: within the domain of neural networks, there are a multitude of emerging physical platforms that could be more fully harnessed if the physical devices were not forced to conform to the standard abstractions in modern computer architecture [24]. Beyond neural-network accelerators, communities such as computational imaging [110] have embraced the opportunity to improve system performance through co-optimizing hardware and software in a physics-aware manner. We believe there is an opportunity to make gains in even more areas and applications of computing technology by collapsing abstractions and implementing physics-aware software with physical hardware that could be orders of magnitude faster or more energy efficient than current digital CMOS approaches but that doesn't admit a clean, digital, deterministic abstraction.

## Data and Code Availability

All the simulation and experimental data presented in the paper, demonstration data for data gathering, as well as training data for the SPDNN models, are available at [https://doi.org/10.5281/zenodo.8188270](https://doi.org/10.5281/zenodo.8188270).
An expandable demonstration code to train SPDNNs as well as other stochastic physical systems is available at [https://github.com/mcmahon-lab/Single-Photon-Detection-Neural-Networks](https://github.com/mcmahon-lab/Single-Photon-Detection-Neural-Networks).

## Acknowledgements

We wish to thank NTT Research for their financial and technical support (S.-Y.M., P.L.M., T.W. and L.G.W.). Portions of this work were supported by the National Science Foundation (award no. CCF-1918549; J.L., P.L.M. and T.W.), a Kavli Institute at Cornell instrumentation grant (P.L.M. and T.W.), and a David and Lucile Packard Foundation Fellowship (P.L.M.). P.L.M. acknowledges membership of the CIFAR Quantum Information Science Program as an Azrieli Global Scholar. T.W. acknowledges partial support from an Eric and Wendy Schmidt AI in Science Postdoctoral Fellowship. We acknowledge valuable discussions with M. Anderson, F. Chen, R. Hamerly, T. Onodera, S. Prabhu, M. M. Sohoni and R. Yanagimoto. We also acknowledge Z. Eslami, V. Kremenetski, F. Presutti, C. Wan and F. Wu for helpful suggestions regarding the manuscript.

## Author Contributions

S.-Y.M., L.G.W., T.W., and P.L.M. conceived the project. S.-Y.M. and T.W. designed the experiments and built the experimental setup. S.-Y.M. and J.L. performed the neural-network training. S.-Y.M. performed the experiments, the data analysis, and the numerical simulations. All authors contributed to preparing the manuscript. T.W., L.G.W. and P.L.M. supervised the project.

## References

* [1] Y. LeCun, Y. Bengio, and G. Hinton, Deep learning. _Nature_**521**, 436-444 (2015). * [2] A. Canziani, A. Paszke, and E. Culurciello, An analysis of deep neural network models for practical applications. _arXiv:1605.07678_ (2016). * [3] N. C. Thompson, K. Greenewald, K. Lee, and G. F. Manso, The computational limits of deep learning. _arXiv:2007.05558_ (2020). * [4] D. Markovic, A. Mizrahi, D. Querlioz, and J. Grollier, Physics for neuromorphic computing.
_Nature Reviews Physics_**2**, 499-510 (2020). * [5] D. V. Christensen, R. Dittmann, B. Linares-Barranco, A. Sebastian, M. Le Gallo, A. Redaelli, S. Slesazeck, T. Mikolajick, S. Spiga, S. Menzel et al. 2022 roadmap on neuromorphic computing and engineering. _Neuromorphic Computing and Engineering_**2**, 022501 (2022). * [6] Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund et al. Deep learning with coherent nanophotonic circuits. _Nature Photonics_**11**, 441 (2017). * [7] X. Lin, Y. Rivenson, N. T. Yardimci, M. Veli, Y. Luo, M. Jarrahi, and A. Ozcan, All-optical machine learning using diffractive deep neural networks. _Science_**361**, 1004-1008 (2018). * [8] C. Rios, N. Youngblood, Z. Cheng, M. Le Gallo, W. H. Pernice, C. D. Wright, A. Sebastian, and H. Bhaskaran, In-memory computing on a photonic platform. _Science Advances_**5**, eaau5759 (2019). * [9] G. Wetzstein, A. Ozcan, S. Gigan, S. Fan, D. Englund, M. Soljacic, C. Denz, D. A. Miller, and D. Psaltis, Inference in artificial intelligence with deep optics and photonics. _Nature_**588**, 39-47 (2020). * [10] X. Xu, M. Tan, B. Corcoran, J. Wu, A. Boes, T. G. Nguyen, S. T. Chu, B. E. Little, D. G. Hicks, R. Morandotti, A. Mitchell, and D. J. Moss, 11 TOPS photonic convolutional accelerator for optical neural networks. _Nature_**589**, 44-51 (2021). * [11] J. Feldmann, N. Youngblood, M. Karpov, H. Gehring, X. Li, M. Stappers, M. Le Gallo, X. Fu, A. Lukashchuk, A. S. Raja et al. Parallel convolutional processing using an integrated photonic tensor core. _Nature_**589**, 52-58 (2021). * [12] T. Zhou, X. Lin, J. Wu, Y. Chen, H. Xie, Y. Li, J. Fan, H. Wu, L. Fang, and Q. Dai, Large-scale neuromorphic optoelectronic computing with a reconfigurable diffractive processing unit. _Nature Photonics_**15**, 367-373 (2021). * [13] T. Wang, S.-Y. Ma, L. G. Wright, T. Onodera, B. C. Richard, and P. L. 
McMahon, An optical neural network using less than 1 photon per multiplication. _Nature Communications_**13**, 1-8 (2022). * [14] R. Davis III, Z. Chen, R. Hamerly, and D. Englund, Frequency-encoded deep learning with speed-of-light dominated latency. _arXiv:2207.06883_ (2022). * [15] F. Ashtiani, A. J. Geers, and F. Aflatouni, An on-chip photonic deep neural network for image classification. _Nature_**606**, 501-506 (2022). * [16] A. Sludds, S. Bandyopadhyay, Z. Chen, Z. Zhong, J. Cochrane, L. Bernstein, D. Bunandar, P. B. Dixon, S. A. Hamilton, M. Streshinsky et al. Delocalized photonic deep learning on the internet's edge. _Science_**378**, 270-276 (2022). * [17] S. Bandyopadhyay, A. Sludds, S. Krastanov, R. Hamerly, N. Harris, D. Bunandar, M. Streshinsky, M. Hochberg, and D. Englund, Single chip photonic deep neural network with accelerated training. _arXiv:2208.01623_ (2022). * [18] S. Moon, K. Shin, and D. Jeon, Enhancing reliability of analog neural network processors. _IEEE Transactions on Very Large Scale Integration (VLSI) Systems_**27**, 1455-1459 (2019). * [19] V. Joshi, M. Le Gallo, S. Haefeli, I. Boybat, S. R. Nandakumar, C. Piveteau, M. Dazzi, B. Rajendran, A. Sebastian, and E. Eleftheriou, Accurate deep neural network inference using computational phase-change memory. _Nature Communications_**11**, 1-13 (2020). * [20] N. Semenova, L. Larger, and D. Brunner, Understanding and mitigating noise in trained deep neural networks. _Neural Networks_**146**, 151-160 (2022). * [21] M. Klachko, M. R. Mahmoodi, and D. Strukov, Improving noise tolerance of mixed-signal neural networks. In _2019 International Joint Conference on Neural Networks (IJCNN)_, 1-8 (2019). * [22] C. Zhou, P. Kadambi, M. Mattina, and P. N. Whatmough, Noisy machines: Understanding noisy neural networks and enhancing robustness to analog hardware errors using distillation. _arXiv:2001.04974_ (2020). * [23] X. Yang, C. Wu, M. Li, and Y. 
Chen, Tolerating Noise Effects in Processing-in-Memory Systems for Neural Networks: A Hardware-Software Codesign Perspective. _Advanced Intelligent Systems_**4**, 2200029 (2022). * [24] L. G. Wright, T. Onodera, M. M. Stein, T. Wang, D. T. Schachter, Z. Hu, and P. L. McMahon, Deep physical neural networks trained with backpropagation. _Nature_**601**, 549-555 (2022). * [25] C. Wu, X. Yang, H. Yu, R. Peng, I. Takeuchi, Y. Chen, and M. Li, Harnessing optoelectronic noises in a photonic generative network. _Science Advances_**8**, eabm2956 (2022). * [26] H. Borras, B. Klein, and H. Froning, Walking Noise: Understanding Implications of Noisy Computations on Classification Tasks. _arXiv:2212.10430_ (2022). * [27] N. Semenova and D. Brunner, Noise-mitigation strategies in physical feedforward neural networks. _Chaos: An Interdisciplinary Journal of Nonlinear Science_**32**, 061106 (2022). * [28] M. G. Anderson, S.-Y. Ma, T. Wang, L. G. Wright, and P. L. McMahon, Optical transformers. _arXiv:2302.10360_ (2023). * [29] Y. Jiang, W. Zhang, X. Liu, W. Zhu, J. Du, and Z. He, Physical Layer-aware Digital-Analog Co-Design for Photonic Convolution Neural Network. _IEEE Journal of Selected Topics in Quantum Electronics_ (2023). * [30] L. Bernstein, A. Sludds, C. Panuski, S. Trajtenberg-Mills, R. Hamerly, and D. Englund, Single-shot optical neural network. _Science Advances_**9**, eadg7904 (2023). * [31] C. Beenakker and C. Schonenberger, Quantum shot noise. _Physics Today_**56**, 37-42 (2003). * [32] G. S. Agarwal, (2012) _Quantum Optics_. (Cambridge University Press). * [33] S. Machida, Y. Yamamoto, and Y. Itaya, Observation of amplitude squeezing in a constant-current-driven semiconductor laser. _Physical Review Letters_**58**, 1000 (1987). * [34] R. H. Hadfield, Single-photon detectors for optical quantum information applications. _Nature Photonics_**3**, 696-705 (2009). * [35] A. Alaghi and J. P. Hayes, Survey of stochastic computing. 
_ACM Transactions on Embedded Computing Systems (TECS)_**12**, 1-19 (2013). * [36] D. H. Ackley, G. E. Hinton, and T. J. Sejnowski, A learning algorithm for Boltzmann machines. _Cognitive Science_**9**, 147-169 (1985). * [37] R. M. Neal, Learning stochastic feedforward networks. _Department of Computer Science, University of Toronto_**64**, 1577 (1990). * [38] R. M. Neal, Connectionist learning of belief networks. _Artificial Intelligence_**56**, 71-113 (1992). * [39] Y. Bengio, N. Leonard, and A. Courville, Estimating or propagating gradients through stochastic neurons for conditional computation. _arXiv:1308.3432_ (2013). * [40] C. Tang and R. R. Salakhutdinov, Learning stochastic feedforward neural networks. _Advances in Neural Information Processing Systems_**26** (2013). * [41] T. Raiko, M. Berglund, G. Alain, and L. Dinh, Techniques for learning binary stochastic feedforward neural networks. _arXiv:1406.2989_ (2014). * [42] I. Hubara, M. Courbariaux, D. Soudry, R. El-Yaniv, and Y. Bengio, Binarized neural networks. _Advances in Neural Information Processing Systems_**29** (2016). * [43] Y. Ji, F. Ran, C. Ma, and D. J. Lilja, A hardware implementation of a radial basis function neural network using stochastic logic. In _2015 Design, Automation & Test in Europe Conference & Exhibition (DATE)_, 880-883 (2015). * [44] V. T. Lee, A. Alaghi, J. P. Hayes, V. Sathe, and L. Ceze, Energy-efficient hybrid stochastic-binary neural networks for near-sensor computing. In _Design, Automation & Test in Europe Conference & Exhibition (DATE), 2017_, 13-18 (2017). * [45] Y. Liu, S. Liu, Y. Wang, F. Lombardi, and J. Han, A survey of stochastic computing neural networks for machine learning applications. _IEEE Transactions on Neural Networks and Learning Systems_**32**, 2809-2824 (2020). * [46] D. Vodenicarevic, N. Locatelli, A. Mizrahi, J. S. Friedman, A. F. Vincent, M. Romera, A. Fukushima, K. Yakushiji, H. Kubota, S. Yuasa et al. 
Low-energy truly random number generation with superparamagnetic tunnel junctions for unconventional computing. _Physical Review Applied_**8**, 054045 (2017). * [47] O. Hassan, R. Faria, K. Y. Camsari, J. Z. Sun, and S. Datta, Low-barrier magnet design for efficient hardware binary stochastic neurons. _IEEE Magnetics Letters_**10**, 1-5 (2019). * [48] S. Chowdhury, A. Grimaldi, N. A. Aadit, S. Niazi, M. Mohseni, S. Kanai, H. Ohno, S. Fukami, L. Theogarajan, G. Finocchio et al. A full-stack view of probabilistic computing with p-bits: devices, architectures and algorithms. _IEEE Journal on Exploratory Solid-State Computational Devices and Circuits_ (2023). * [49] T. Hylton, T. M. Conte, and M. D. Hill, A vision to compute like nature: Thermodynamically. _Communications of the ACM_**64**, 35-38 (2021). * [50] P. J. Coles, C. Szczepanski, D. Melanson, K. Donatella, A. J. Martinez, and F. Sbahi, Thermodynamic AI and the fluctuation frontier. _arXiv:2302.06584_ (2023). * [51] C. Wu, X. Yang, Y. Chen, and M. Li, Photonic Bayesian neural network using programmed optical noises. _IEEE Journal of Selected Topics in Quantum Electronics_**29**, 1-6 (2022). * [52] B. Ma, J. Zhang, X. Li, and W. Zou, Stochastic photonic spiking neuron for Bayesian inference with unsupervised learning. _Optics Letters_**48**, 1411-1414 (2023). * [53] S. Gu, S. Levine, I. Sutskever, and A. Mnih, Muprop: Unbiased backpropagation for stochastic neural networks. _arXiv:1511.05176_ (2015). * [54] Y. Liu, S. Liu, Y. Wang, F. Lombardi, and J. Han, A stochastic computational multi-layer perceptron with backward propagation. _IEEE Transactions on Computers_**67**, 1273-1286 (2018). * [55] C. Gerry and P. L. Knight, (2005) _Introductory Quantum Optics_. (Cambridge University Press). * [56] M. Rastegari, V. Ordonez, J. Redmon, and A.
Farhadi, Xnor-net: Imagenet classification using binary convolutional neural networks. In _Computer Vision-ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part IV_, 525-542 (2016). * [57] S. Zhou, Y. Wu, Z. Ni, X. Zhou, H. Wen, and Y. Zou, Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients. _arXiv:1606.06160_ (2016). * [58] Y. Hayasaki, I. Tohyama, T. Yatagai, M. Mori, and S. Ishihara, Optical learning neural network using Selfoc microlens array. _Japanese Journal of Applied Physics_**31**, 1689 (1992). * [59] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, Gradient-based learning applied to document recognition. _Proceedings of the IEEE_**86**, 2278-2324 (1998). * [60] R. Hamerly, L. Bernstein, A. Sludds, M. Soljacic, and D. Englund, Large-scale optical neural networks based on photoelectric multiplication. _Physical Review X_**9**, 021032 (2019). * [61] J. Laydevant, M. Ernoult, D. Querlioz, and J. Grollier, Training dynamical binary neural networks with equilibrium propagation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 4640-4649 (2021). * [62] K. Dhimitri, S. M. Fullerton, B. Coyle, K. E. Bennett, T. Miura, T. Higuchi, and T. Maruno, Scientific CMOS (sCMOS) camera capabilities with a focus on quantum applications. In _Photonics for Quantum 2022_, PC122430L (2022). * [63] A. F. Agarap, Deep learning using rectified linear units (ReLU). _arXiv:1803.08375_ (2018). * [64] J. Chang, V. Sitzmann, X. Dun, W. Heidrich, and G. Wetzstein, Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification. _Scientific Reports_**8**, 12324 (2018). * [65] J. Spall, X. Guo, T. D. Barrett, and A.
Lvovsky, Fully reconfigurable coherent optical vector-matrix multiplication. _Optics Letters_**45**, 5752-5755 (2020). * [66] M. Miscuglio, Z. Hu, S. Li, J. K. George, R. Capanna, H. Dalir, P. M. Bardet, P. Gupta, and V. J. Sorger, Massively parallel amplitude-only Fourier neural network. _Optica_**7**, 1812-1819 (2020). * [67] C.-Y. Lee, P. W. Gallagher, and Z. Tu, Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In _Artificial Intelligence and Statistics_, 464-472 (2016). * [68] S. K. Esser, P. A. Merolla, J. V. Arthur, A. S. Cassidy, R. Appuswamy, A. Andreopoulos, D. J. Berg, J. L. McKinstry, T. Melano, D. R. Barch, C. di Nolfo, P. Datta, A. Amir, B. Taba, M. D. Flickner, and D. S. Modha, Convolutional networks for fast, energy-efficient neuromorphic computing. _Proceedings of the National Academy of Sciences_**113**, 11441-11446 (2016). * [69] H. Qin, R. Gong, X. Liu, X. Bai, J. Song, and N. Sebe, Binary neural networks: A survey. _Pattern Recognition_**105**, 107281 (2020). * [70] V. Sze, Y.-H. Chen, T.-J. Yang, and J. S. Emer, How to evaluate deep neural network processors: TOPS/W (alone) considered harmful. _IEEE Solid-State Circuits Magazine_**12**, 28-41 (2020). * [71] M. A. Nahmias, T. F. De Lima, A. N. Tait, H.-T. Peng, B. J. Shastri, and P. R. Prucnal, Photonic multiply-accumulate operations for neural networks. _IEEE Journal of Selected Topics in Quantum Electronics_**26**, 1-18 (2019). * [72] C. Bruschini, S. Burri, E. Bernasconi, T. Milanese, A. C. Ulku, H. Homulle, and E. Charbon, LinoSPAD2: A 512x1 linear SPAD camera with system-level 135-ps SPTR and a reconfigurable computational engine for time-resolved single-photon imaging. In _Quantum Sensing and Nano Electronics and Photonics XIX_ Vol. 12430, 126-135 (2023). * [73] J. M. Shainline, S. M. Buckley, R. P. Mirin, and S. W.
Nam, Superconducting optoelectronic circuits for neuromorphic computing. _Physical Review Applied_**7**, 034013 (2017). * [74] T. Wang, M. M. Sohoni, L. G. Wright, M. M. Stein, S.-Y. Ma, T. Onodera, M. G. Anderson, and P. L. McMahon, Image sensing with multilayer nonlinear optical neural networks. _Nature Photonics_**17**, 408-415 (2023). * [75] L. Huang, Q. A. Tanguy, J. E. Froch, S. Mukherjee, K. F. Bohringer, and A. Majumdar, Photonic Advantage of Optical Encoders. _arXiv:2305.01743_ (2023). * [76] J. Carolan, C. Harrold, C. Sparrow, E. Martin-Lopez, N. J. Russell, J. W. Silverstone, P. J. Shadbolt, N. Matsuda, M. Oguma, M. Itoh et al. Universal linear optics. _Science_**349**, 711-716 (2015). * [77] W. Bogaerts, D. Perez, J. Capmany, D. A. B. Miller, J. Poon, D. Englund, F. Morichetti, and A. Melloni, Programmable photonic circuits. _Nature_**586**, 207-216 (2020). * [78] A. N. Tait, J. Chang, B. J. Shastri, M. A. Nahmias, and P. R. Prucnal, Demonstration of WDM weighted addition for principal component analysis. _Optics Express_**23**, 12758-12765 (2015). * [79] I. Mazets and G. Kurizki, Multiatom cooperative emission following single-photon absorption: Dicke-state dynamics. _Journal of Physics B: Atomic, Molecular and Optical Physics_**40**, F105 (2007). * [80] D. Pinotsi and A. Imamoglu, Single photon absorption by a single quantum emitter. _Physical Review Letters_**100**, 093603 (2008). * [81] F. Sotier, T. Thomay, T. Hanke, J. Korger, S. Mahapatra, A. Frey, K. Brunner, R. Bratschitsch, and A. Leitenstorfer, Femtosecond few-fermion dynamics and deterministic single-photon gain in a quantum dot. _Nature Physics_**5**, 352-356 (2009). * [82] A. H. Kiilerich and K. Molmer, Input-output theory with quantum pulses. _Physical Review Letters_**123**, 123604 (2019). * [83] Q. Li, K. Orcutt, R.
L. Cook, J. Sabines-Chesterking, A. L. Tong, G. S. Schlau-Cohen, X. Zhang, G. R. Fleming, and K. B. Whaley, Single-photon absorption and emission from a natural photosynthetic complex. _Nature_ pp. 1-5 (2023). * [84] C. Roques-Carmes, Y. Salamin, J. Sloan, S. Choi, G. Velez, E. Koskas, N. Rivera, S. E. Kooi, J. D. Joannopoulos, and M. Soljacic, Biasing the quantum vacuum to control macroscopic probability distributions. _arXiv:2303.03455_ (2023). * [85] T. Zhou, L. Fang, T. Yan, J. Wu, Y. Li, J. Fan, H. Wu, X. Lin, and Q. Dai, In situ optical backpropagation training of diffractive optical neural networks. _Photonics Research_**8**, 940-953 (2020). * [86] X. Guo, T. D. Barrett, Z. M. Wang, and A. Lvovsky, Backpropagation through nonlinear units for the all-optical training of neural networks. _Photonics Research_**9**, B71-B80 (2021). * [87] S. Pai, Z. Sun, T. W. Hughes, T. Park, B. Bartlett, I. A. Williamson, M. Minkov, M. Milanizadeh, N. Abebe, F. Morichetti et al. Experimentally realized in situ backpropagation for deep learning in photonic neural networks. _Science_**380**, 398-404 (2023). * [88] Y. Bengio, D.-H. Lee, J. Bornschein, T. Mesnard, and Z. Lin, Towards biologically plausible deep learning. _arXiv:1502.04156_ (2015). * [89] T. P. Lillicrap, A. Santoro, L. Marris, C. J. Akerman, and G. Hinton, Backpropagation and the brain. _Nature Reviews Neuroscience_**21**, 335-346 (2020). * [90] G. Hinton, The concept of mortal computation. Keynote address presented at the Neural Information Processing Systems conference, New Orleans (2023). * [91] M. Stern and A. Murugan, Learning without neurons in physical systems. _Annual Review of Condensed Matter Physics_**14**, 417-441 (2023). * [92] A. Bulat and G. Tzimiropoulos, Xnor-net++: Improved binary neural networks. _arXiv:1909.13863_ (2019).
* [93] A. Bulat, J. Kossaifi, G. Tzimiropoulos, and M. Pantic, Matrix and tensor decompositions for training binary neural networks. _arXiv:1904.07852_ (2019). * [94] N. Mohseni, P. L. McMahon, and T. Byrnes, Ising machines as hardware solvers of combinatorial optimization problems. _Nature Reviews Physics_**4**, 363-379 (2022). * [95] J. Torrejon, M. Riou, F. A. Araujo, S. Tsunegi, G. Khalsa, D. Querlioz, P. Bortolotti, V. Cros, K. Yakushiji, A. Fukushima et al. Neuromorphic computing with nanoscale spintronic oscillators. _Nature_**547**, 428-431 (2017). * [96] J. Grollier, D. Querlioz, K. Camsari, K. Everschor-Sitte, S. Fukami, and M. D. Stiles, Neuromorphic spintronics. _Nature Electronics_**3**, 360-370 (2020). * [97] F. Cai, S. Kumar, T. Van Vaerenbergh, X. Sheng, R. Liu, C. Li, Z. Liu, M. Foltin, S. Yu, Q. Xia et al. Power-efficient combinatorial optimization using intrinsic noise in memristor Hopfield neural networks. _Nature Electronics_**3**, 409-418 (2020). * [98] K.-E. Harabi, T. Hirtzlin, C. Turck, E. Vianello, R. Laurent, J. Droulez, P. Bessiere, J.-M. Portal, M. Bocquet, and D. Querlioz, A memristor-based Bayesian machine. _Nature Electronics_**6**, 52-63 (2023). * [99] A. N. M. N. Islam, K. Yang, A. K. Shukla, P. Khanal, B. Zhou, W.-G. Wang, and A. Sengupta, Hardware in Loop Learning with Spin Stochastic Neurons. _arXiv:2305.03235_ (2023). * [100] D. Markovic and J. Grollier, Quantum neuromorphic computing. _Applied Physics Letters_**117** (2020). * [101] M. Cerezo, G. Verdon, H.-Y. Huang, L. Cincio, and P. J. Coles, Challenges and opportunities in quantum machine learning. _Nature Computational Science_**2**, 567-576 (2022). * [102] M. Prezioso, F. Merrikh-Bayat, B. D. Hoskins, G. C. Adam, K. K. Likharev, and D. B.
Strukov, Training and operation of an integrated neuromorphic network based on metal-oxide memristors. _Nature_**521**, 61-64 (2015). * [103] T. W. Hughes, I. A. Williamson, M. Minkov, and S. Fan, Wave physics as an analog recurrent neural network. _Science Advances_**5**, eaay6946 (2019). * [104] K. Mitarai, M. Negoro, M. Kitagawa, and K. Fujii, Quantum circuit learning. _Physical Review A_**98**, 032309 (2018). * [105] B. Cramer, S. Billaudelle, S. Kanya, A. Leibfried, A. Grubl, V. Karasenko, C. Pehle, K. Schreiber, Y. Stradmann, J. Weis et al. Surrogate gradients for analog neuromorphic computing. _Proceedings of the National Academy of Sciences_**119**, e2109194119 (2022). * [106] T. Chen, J. van Gelder, B. van de Ven, S. V. Amitonov, B. De Wilde, H.-C. Ruiz Euler, H. Broersma, P. A. Bobbert, F. A. Zwanenburg, and W. G. van der Wiel, Classification with a disordered dopant-atom network in silicon. _Nature_**577**, 341-345 (2020). * [107] K. Berggren, Q. Xia, K. K. Likharev, D. B. Strukov, H. Jiang, T. Mikolajick, D. Querlioz, M. Salinga, J. R. Erickson, S. Pi et al. Roadmap on emerging hardware and technology for machine learning. _Nanotechnology_**32**, 012002 (2020). * [108] G. Finocchio, S. Bandyopadhyay, P. Lin, G. Pan, J. J. Yang, R. Tomasello, C. Panagopoulos, M. Carpentieri, V. Puliafito, J. Akerman et al. Roadmap for unconventional computing with nanotechnology. _arXiv:2301.06727_ (2023). * [109] C. E. Leiserson, N. C. Thompson, J. S. Emer, B. C. Kuszmaul, B. W. Lampson, D. Sanchez, and T. B. Schardl, There's plenty of room at the Top: What will drive computer performance after Moore's law? _Science_**368**, eaam9744 (2020). * [110] M. Kellman, M. Lustig, and L. Waller, How to do physics-based learning. _arXiv:2005.13531_ (2020). * [111] G. Hinton, _Neural networks for machine learning_. Coursera, Video Lectures (2012). * [112] P. Yin, J. Lyu, S. Zhang, S. Osher, Y.
Qi, and J. Xin, Understanding straight-through estimator in training activation quantized neural nets. _arXiv:1903.05662_ (2019). * [113] R. J. Williams, Simple statistical gradient-following algorithms for connectionist reinforcement learning. _Machine Learning_**8**, 229-256 (1992). * [114] L. Bottou, Stochastic gradient descent tricks. _Neural Networks: Tricks of the Trade: Second Edition_ pp. 421-436 (2012). * [115] I. Loshchilov and F. Hutter, Decoupled weight decay regularization. _arXiv:1711.05101_ (2017). * [116] P. De Chazal, J. Tapson, and A. Van Schaik, A comparison of extreme learning machines and back-propagation trained feed-forward networks processing the MNIST database. In _2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_, 2165-2168 (2015). * [117] A. Krizhevsky, Learning multiple layers of features from tiny images. (2009).

## Methods

### Stochastic optical neural networks using single-photon detection as the activation function

In single-photon-detection neural networks (SPDNNs), the activation function is directly determined by the stochastic physical process of single-photon detection (SPD). The specific form of the activation function is dictated by the detection process on a single-photon detector. Each SPD measurement produces a binary output of either 0 or 1, with probabilities determined by the incident light intensity. Consequently, each SPD neuron activation, which corresponds to an SPD measurement in experiments, is treated as a binary stochastic process [37; 44; 54]. Following the Poisson distribution, the probability of an SPD registering a photon click is given by \(P_{\text{SPD}}(\lambda)=1-e^{-\lambda}\) when exposed to an incident intensity of \(\lambda\) photons per detection. Note that these photon statistics may vary based on the state of light (e.g., squeezed light), but here we consider only Poissonian light.
Therefore, the SPD process can be viewed as Bernoulli sampling of that probability, expressed as \(f_{\text{SPD}}(z)=\mathbf{1}_{t<P_{\text{SPD}}(\lambda(z))}\), where \(t\sim U[0,1]\) is a uniform random variable and \(\mathbf{1}_{x}\) is the indicator function that evaluates to 1 if \(x\) is true. This derivation leads to Equation 1 in the main text. In our approach, the pre-activation value \(z\) is the direct output of an optical matrix-vector multiplier (MVM), encoding the result of a dot product. For the \(i\)th pre-activation value in layer \(l\), denoted \(z_{i}^{(l)}\), the expression is given by: \[z_{i}^{(l)}=\sum_{j=1}^{N_{l-1}}w_{ij}^{(l)}\cdot a_{j}^{(l-1)}, \tag{1}\] where \(N_{l-1}\) is the number of neurons in layer \(l-1\), \(w_{ij}^{(l)}\) is the weight between the \(i\)th neuron in layer \(l\) and the \(j\)th neuron in layer \(l-1\), and \(a_{j}^{(l-1)}\) is the activation of the \(j\)th neuron in layer \(l-1\). The intensity \(\lambda(z)\) is a function of \(z\) that depends on the detection scheme employed in the optical MVM. In optical setups using incoherent light, the information is encoded directly in the intensity, giving \(\lambda=z\). If coherent light were used, with phases 0 and \(\pi\) representing the sign of the amplitude, direct detection would square the real-valued amplitude, giving \(\lambda=z^{2}\). While more sophisticated detection schemes can be designed to modify \(\lambda(z)\), we focus on these simplest cases to illustrate the versatility of SPDNNs. During inference with a trained model, to regulate the level of uncertainty inherent in stochastic neural networks, we can opt to conduct multiple shots of SPD measurements during a single forward propagation.
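As a concrete illustration, the single-shot SPD activation \(f_{\text{SPD}}\) defined above can be simulated as a Bernoulli sample of the click probability. The following numpy sketch is our own illustration (not the authors' code) and covers both the incoherent (\(\lambda=z\)) and coherent (\(\lambda=z^{2}\)) detection cases:

```python
import numpy as np

def spd_activation(z, coherent=False, rng=None):
    """Single-shot SPD activation: a Bernoulli sample with click
    probability P_SPD(lambda) = 1 - exp(-lambda)."""
    rng = np.random.default_rng() if rng is None else rng
    # Incoherent light encodes intensity directly (lambda = z, z >= 0);
    # coherent light is detected via |amplitude|^2 (lambda = z**2).
    lam = z**2 if coherent else np.maximum(z, 0.0)
    p_click = 1.0 - np.exp(-lam)          # Poissonian click probability
    t = rng.uniform(size=np.shape(z))     # t ~ U[0, 1]
    return (t < p_click).astype(float)    # binary activation in {0, 1}
```

Note that for the coherent case the activation depends only on \(|z|\), which produces the "V-shaped" probability function discussed later for coherent SPDNNs.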
In the case of a \(K\)-shot inference, each SPD measurement is repeated \(K\) times, and the neuron's final activation value \(a^{[K]}\) is the average of these \(K\) independent stochastic binary values. Consequently, for a single shot, \(a^{[1]}=a\in\{0,1\}\); for \(K\) shots, \(a^{[K]}=\frac{1}{K}\sum_{k=1}^{K}a_{k}\in\{0,1/K,2/K,\ldots,1\}\). This mitigates the model's stochasticity and enhances the precision of the output values. Ideally, with an infinite number of shots (\(K\rightarrow\infty\)), the activation \(a^{[\infty]}\) would equal the expected value without any stochasticity, that is, \(a^{[\infty]}=\mathbb{E}[a]=P_{\text{SPD}}(\lambda(z))\). The detailed inference procedure for SPDNNs is described in Algorithm 2 in Supplementary Note 1A. The training of our stochastic neuron models takes inspiration from recent developments in training stochastic neural networks. We created an effective estimator that trains our SPDNNs while accounting for the stochastic activation determined by the physical SPD process. We initially adopted the idea of the "straight-through estimator" (STE) [111; 112], which bypasses the stochasticity and discretization during neural network training. However, directly applying STE to bypass the entire SPD process led to subpar training performance. To address this, we adopted a more nuanced approach, breaking down the activation function and treating its parts differently. The SPD process can be conceptually divided into two parts: the deterministic probability function \(P_{\text{SPD}}\) and the stochasticity introduced by Bernoulli sampling. For a Bernoulli distribution, the expectation value equals the probability, making \(P_{\text{SPD}}\) the expectation of the activation. Instead of applying the "straight-through" method to the entire process, we chose to bypass only the Bernoulli sampling process.
At the same time, we incorporate the gradients induced by the probability function, aligning them with the expectation values of the random variable. In this way, we obtain an unbiased estimator [113] for the gradients, thereby enhancing the training of our SPDNNs. In the backward propagation of the \(l\)th layer, the gradients of the pre-activation \(z^{(l)}\) are computed as (the gradient with respect to any parameter \(x\) is defined as \(g_{x}=\partial C/\partial x\), where \(C\) is the cost function): \[g_{z^{(l)}}=\frac{\partial a^{(l)}}{\partial\lambda^{(l)}}\circ\frac{\partial \lambda^{(l)}}{\partial z^{(l)}}\circ g_{a^{(l)}}=P_{\text{SPD}}^{\prime}( \lambda^{(l)})\circ\frac{\partial\lambda^{(l)}}{\partial z^{(l)}}\circ g_{a^{ (l)}}, \tag{2}\] where \(a^{(l)}=f_{\text{SPD}}(z^{(l)})=\mathbf{1}_{t<P_{\text{SPD}}(\lambda(z^{(l)}))}\) and the gradients \(g_{a^{(l)}}\) are calculated from the next layer (the previous layer in the backward propagation). Using this equation, we evaluate the gradients of the weights \(W^{(l)}\) as \(g_{W^{(l)}}=g_{z^{(l)}}^{\top}a^{(l-1)}\), where \(a^{(l-1)}\) denotes the activation values from the previous layer. With this approach, SPDNNs can be effectively trained using gradient-based algorithms (such as SGD [114] or AdamW [115]) despite the stochastic nature of the neuron activations. For detailed training procedures, please refer to Algorithms 1 and 3 in Supplementary Notes 1A and 2A, respectively.

### Simulation of incoherent SPDNNs for deterministic classification tasks

The benchmark MNIST (Modified National Institute of Standards and Technology database) [116] handwritten digit dataset consists of 60,000 training images and 10,000 test images. Each image is a grayscale image with \(28\times 28=784\) pixels. To adhere to the non-negative encoding required by incoherent light, the input images are normalized so that pixel values range from 0 to 1.
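For the incoherent case (\(\lambda=z\), so \(\partial\lambda/\partial z=1\) and \(P_{\text{SPD}}^{\prime}(\lambda)=e^{-\lambda}\)), the estimator of Equation (2) can be sketched as follows. This is our own minimal numpy illustration of the idea, not the authors' implementation:

```python
import numpy as np

def spd_forward_backward(z, g_a, rng=None):
    """Forward: stochastic SPD activation a = 1[t < P_SPD(z)].
    Backward: bypass only the Bernoulli sampling and differentiate
    through the click probability, g_z = P'_SPD(lambda) * g_a,
    matching the expectation of the stochastic activation."""
    rng = np.random.default_rng() if rng is None else rng
    lam = np.maximum(z, 0.0)                       # incoherent: lambda = z >= 0
    p_click = 1.0 - np.exp(-lam)
    a = (rng.uniform(size=np.shape(z)) < p_click).astype(float)
    g_z = np.exp(-lam) * g_a                       # P'_SPD(lambda) = exp(-lambda)
    return a, g_z

def weight_gradient(g_z, a_prev):
    """g_W = g_z^T a_prev, accumulated over a batch."""
    return g_z.T @ a_prev
```

The key point is that the backward pass is deterministic even though the forward pass is stochastic, which is what makes the estimator compatible with standard gradient-based optimizers.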
To assess the performance of the SPD activation function, we investigated training MLP-SPDNN models with the structure \(784\xrightarrow{W^{(1)}}N\xrightarrow{W^{(2)}}10\), where \(N\) is the number of neurons in the hidden layer and \(W^{(1)}\) (\(W^{(2)}\)) is the weight matrix of the hidden (output) layer. The SPD activation function is applied to the \(N\) hidden neurons, and the resulting activations are passed to the output layer to generate output vectors (Figure 3a). To simplify the experimental implementation, biases within the linear operations were disabled, as the precise control of adding or subtracting a few photons poses significant experimental challenges; we observed that this omission has minimal impact on the model's performance. In addition, after each weight update, we clamped the elements of \(W^{(1)}\) to the non-negative range to comply with the constraint of non-negative weights in an incoherent optical setup. Because SPD is not required at the output layer, the constraints on the last-layer operation are less stringent. Although our simulations indicate that the final performance is only marginally affected by whether the elements of the last layer are also restricted to be non-negative, we found that real-valued weights in the output layer provided increased robustness against noise and errors during optical implementation. As a result, we used real-valued weights in \(W^{(2)}\). During training, we applied the LogSoftmax function to the output vectors and used the cross-entropy loss. Gradients were estimated using the unbiased estimator described in the previous section and Algorithm 1. For model optimization, we found that the SGD optimizer with small learning rates yielded better accuracy than other optimizers such as AdamW, albeit at the cost of slower optimization.
Despite the longer total training time, the SGD optimizer led to a better-optimized model. The models were trained with a batch size of 128 and learning rates of 0.001 for the hidden layer and 0.01 for the output layer, over 10,000 epochs. To prevent vanishing gradients in the plateau of the probability function \(P_{\text{SPD}}\), pre-activations were clamped at \(\lambda_{\text{max}}=3\) photons. Due to the inherent stochasticity of the neural networks, each forward propagation generates varying output values even with identical weights and inputs; nevertheless, we used only one forward propagation per step. This effectively exploits the inherent stochasticity of each forward propagation as an additional source of random search for the optimizer. Given the small learning rate and the significant noise in the model, the number of epochs exceeded what is typically required for conventional neural network training. Training was performed on a GPU (Tesla V100-PCIE-32GB) and took approximately eight hours per model. We trained incoherent SPDNNs with the number of hidden neurons \(N\) ranging from 10 to 400; the test accuracy of the models improved as \(N\) increased (see Supplementary Note 1B for more details). During inference, we adjusted the number of shots per SPD activation, \(K\), to tune the SNR of the activations within the models. For each model configuration with \(N\) hidden neurons and \(K\) shots of SPD readout per activation, we repeated the inference process 100 times to observe the distribution of stochastic output accuracies. Each repetition of inference on the test set of 10,000 images yielded a different test accuracy; the mean values and standard deviations over the 100 repetitions are plotted in Figure 3b (see Supplementary Table 1 for more details).
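The \(K\)-shot inference used in these repetitions (averaging \(K\) independent SPD readouts per neuron) can be sketched as follows; the function name is ours, and the sketch assumes incoherent detection:

```python
import numpy as np

def spd_activation_k_shots(z, K, rng=None):
    """Average K independent single-shot SPD readouts per neuron.
    As K grows, the result approaches the noiseless expectation
    P_SPD(lambda(z)) = 1 - exp(-z) for the incoherent case."""
    rng = np.random.default_rng() if rng is None else rng
    p_click = 1.0 - np.exp(-np.maximum(z, 0.0))
    shots = rng.uniform(size=(K,) + np.shape(z)) < p_click
    return shots.mean(axis=0)  # values in {0, 1/K, 2/K, ..., 1}
```

Increasing \(K\) reduces the standard deviation of each activation by a factor of \(\sqrt{K}\), which is why larger \(K\) narrows the spread of test accuracies across inference repetitions.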
It was observed that increasing either \(N\) or \(K\) led to higher mean test accuracy and reduced standard deviations.

### Experimental implementation of SPDNNs

The optical matrix-vector multiplier setup utilized in this work is based on the design presented in [13]. The setup comprises an array of light sources, a zoom-lens imaging system, a light-intensity modulator, and a photon-counting camera. For encoding input vectors, we employed an organic light-emitting diode (OLED) display from a commercial smartphone (Google Pixel, 2016 version). The OLED display features a \(1920\times 1080\) pixel array with individually controllable intensity for each pixel. In our experiment, only the green pixels of the display were used, arranged in a square lattice with a pixel pitch of \(57.5\,\mu\mathrm{m}\). To perform intensity modulation as weight multiplication, we combined a reflective liquid-crystal spatial light modulator (SLM, P1920-500-1100-HDMI, Meadowlark Optics) with a half-wave plate (HWP, WPH10ME-532, Thorlabs) and a polarizing beamsplitter (PBS, CCM1-PBS251, Thorlabs). The SLM has a \(1920\times 1152\) pixel array, with individually controllable transmission for each \(9.2\times 9.2\,\mu\mathrm{m}\) pixel. The OLED display was imaged onto the SLM panel using a zoom-lens system (Resolv4K, Navitar). The intensity-modulated light field reflected from the SLM was further de-magnified and focused onto the detector by a telescope formed by the rear adapter of the zoom lens (1-81102, Navitar) and an objective lens (XLFLUOR4x/340, Olympus). We decompose a matrix-vector multiplication into a batch of vector-vector dot products that are computed optically, either by spatial multiplexing (parallel processing) or temporal multiplexing (sequential processing).
To ensure a more accurate experimental implementation, we chose to perform the vector-vector dot products in sequence for most of the data collection. To compute an optical vector-vector dot product, the elements of one vector are encoded in the intensities of light emitted by OLED pixels, and the elements of the other in the transmissions of the corresponding SLM pixels. The imaging system aligned each pixel on the OLED display with its corresponding pixel on the SLM, where element-wise multiplication occurred via intensity modulation. The modulated light from the pixels of a vector was then focused onto the detector, summing the element-wise products to yield the dot product. Since the light is incoherent, only non-negative values are allowed in both vectors. For more details on the incoherent optical MVM, please refer to Supplementary Note 3. The calibration of the vector-vector dot products on the optical MVM is detailed in Supplementary Note 5. In this experiment, we used a scientific CMOS camera (Hamamatsu ORCA-Quest qCMOS camera, C15550-20UP) [62] for both conventional light-intensity measurements and SPD. This camera, with \(4096\times 2304\) effective pixels of \(4.6\times 4.6\,\mu\mathrm{m}\) each, can perform SPD with ultra-low readout noise in its photon-counting mode. When utilized as an SPD in the photon-counting mode, the camera exhibits an effective photon detection efficiency of 68% and a dark count rate of approximately 0.01 photoelectrons per second per pixel (Supplementary Note 4). We typically operated with an exposure time in the millisecond range for a single shot of SPD readout. For conventional intensity measurements, which integrate higher optical energy for the output-layer implementation, we operated the camera in its standard CMOS mode.
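Numerically, each optically computed vector-vector dot product described above amounts to element-wise intensity modulation followed by summation at the detector. The following is a toy sketch of that physical mapping (ours, not any control software):

```python
import numpy as np

def optical_dot(oled_intensities, slm_transmissions):
    """Incoherent optical dot product: each OLED pixel's emission is
    imaged onto its aligned SLM pixel, whose transmission scales the
    intensity (element-wise product); focusing the modulated light onto
    a single detector sums the products. Incoherent encoding admits
    only non-negative values in both vectors."""
    x = np.asarray(oled_intensities, dtype=float)
    w = np.asarray(slm_transmissions, dtype=float)
    assert np.all(x >= 0) and np.all(w >= 0)
    return float(np.sum(x * w))
```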
Further details on validating the stochastic SPD activation function measured on this camera are available in Supplementary Note 6. We also adapted our SPDNN training methods to conform to the real-world constraints of our setup, ensuring successful experimental implementation (see Supplementary Note 7). First, we implemented the hidden layers optically and collected the SPD activations experimentally with the photon-counting camera acting as an SPD array; the output-layer operations were then performed digitally on a computer. This verified the fidelity of collecting SPD activations from the experimental setup. Supplementary Figure 16 provides a visual representation of the distribution of some of the output vectors. For the experiments with 1 shot per activation (\(K=1\)), we collected 30 camera frames for each fixed input image and weight matrix, regarded as 30 independent repetitions of inference; these were used to compute 30 test accuracies by performing the output linear layer on a digital computer. For the experiments with 2 shots per activation (\(K=2\)), we divided the 30 camera frames into 15 groups of 2 frames each; the average of the 2 frames within each group served as the activations, yielding 15 test accuracies. For additional results and details, please refer to Supplementary Note 8. Second, to achieve a complete optical implementation of the entire neural network, we used our optical matrix-vector multiplier again to carry out the last-layer operations. For example, we first focused on the data from the model with 400 hidden neurons, using 5 shots per activation. In this case, for the 30 binary SPD readouts obtained from 30 frames, we averaged every 5 frames, resulting in 6 independent repetitions of the inference. These activation values were then displayed on the SLM as the input for the last-layer implementation.
For the 5-shot activations, the possible values were 0, 0.2, 0.4, 0.6, 0.8, and 1. When the linear operation was performed on a computer with full precision, the mean test accuracy was approximately 99.17%. To realize the linear operation with real-valued weight elements on our incoherent optical setup, we divided the weight elements into positive and negative parts, projected these two parts onto the OLED display separately, and performed them as two different operations. The final output value was obtained by subtracting the results of the negative weights from those of the positive weights. This approach at least doubles the photon requirement for the output layer and leaves room for optimization toward higher energy efficiency. Nevertheless, even with these non-optimized settings, we demonstrated a photon budget lower than that of any other ONN implementation known to us for the same task and accuracy. For additional data and details, please refer to Supplementary Note 9.

### Deeper SPDNNs operating with coherent light

Optical processors with coherent light can preserve the phase information of light and can, in principle, encode complex numbers using arbitrary phase values. In this work, we focused on coherent optical computing with real-number operations, in which positive and negative values are encoded in light amplitudes with phases 0 and \(\pi\), respectively. As the intensity of light is the square of the amplitude, direct detection of the light amplitude, where the information is encoded, involves an additional squaring operation, i.e., \(\lambda(z)=|z|^{2}\). This leads to a "V-shaped" SPD probability function with respect to the pre-activation \(z\), as depicted in Figure 4a. We chose to focus on the most straightforward detection case to avoid additional changes to the experimental setup.
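The positive/negative weight decomposition used above for the real-valued output layer can be written as two non-negative optical passes whose detector readings are subtracted. A minimal sketch (our own names and conventions):

```python
import numpy as np

def real_valued_layer_incoherent(a, W):
    """Emulate a real-valued linear layer on intensity-only hardware:
    split W into non-negative positive and negative parts, run each as
    a separate (non-negative) optical pass, and subtract the results.
    Algebraically equivalent to a @ W.T, at roughly double the photon
    cost for the output layer."""
    W_pos = np.maximum(W, 0.0)   # displayed in one pass
    W_neg = np.maximum(-W, 0.0)  # displayed in a second pass
    return a @ W_pos.T - a @ W_neg.T
```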
Our objective is to demonstrate the adaptability and scalability of SPDNN models in practical optical implementations without the need for complex modifications to the existing setup.

#### Coherent SPDNNs for MNIST classification

**MLP-SPDNNs.** Classifying MNIST with coherent MLP-SPDNNs was simulated using configurations similar to those of the incoherent SPDNNs, the only differences being the coherent SPD activation function and the use of real-valued weights. In contrast to the incoherent case, the input values and weights need not be non-negative. The models were trained using the SGD optimizer [114] with a learning rate of 0.01 for the hidden layers and 0.001 for the last linear layer, over 10,000 epochs.

**Convolutional SPDNNs.** The convolutional SPDNN model used for MNIST digit classification, illustrated in Figure 4b, consists of a convolutional layer with 16 output channels, a \(5\times 5\) kernel, a stride of 1, and padding of 2. The SPD activation function was applied immediately after the convolutional layer, followed by \(2\times 2\) average pooling. The feature map of \(14\times 14\times 16=3136\) elements was then flattened into a vector of size 3136. The convolutional layers were followed by a linear model of \(3136\to 400\to 10\), with the SPD activation function applied at each of the 400 neurons in the first linear layer. Detailed simulation results for the MNIST test accuracies of the coherent SPDNNs, for varying model structures and shots per activation \(K\), can be found in Supplementary Table 2. For additional information, see Supplementary Note 2B.

#### Coherent convolutional SPDNNs for CIFAR-10 classification

The CIFAR-10 dataset [117] has 60,000 images, each with \(3\times 32\times 32\) pixels across 3 color channels, belonging to 10 categories: airplanes, automobiles, birds, cats, deer, dogs, frogs, horses, ships, and trucks.
The dataset is partitioned into a training set of 50,000 images and a test set of 10,000 images. The pixel values were normalized using a mean of \((0.4914,0.4822,0.4465)\) and a standard deviation of \((0.2471,0.2435,0.2616)\) for the three color channels. To boost performance, data augmentation techniques including random horizontal flips (50% probability) and random \(32\times 32\) crops (with 4-pixel padding) were applied during training. The convolutional SPDNN models for CIFAR-10 classification have deeper structures. As in the convolutional models trained for MNIST, the convolutional layers use a \(5\times 5\) kernel, a stride of 1, and padding of 2. Each convolutional layer is followed by the SPD activation function, \(2\times 2\) average pooling, and batch normalization. After \(N_{\text{conv}}\) convolutional layers (\(N_{\text{conv}}=4\) in Figure 4e), with the number of output channels of the last one denoted \(N_{\text{chan}}^{\text{last}}\), the feature map of size \((32/2^{N_{\text{conv}}})^{2}\times N_{\text{chan}}^{\text{last}}\) is flattened to a vector, followed by two linear layers of \((32/2^{N_{\text{conv}}})^{2}N_{\text{chan}}^{\text{last}}\to 400\to 10\). In the first linear layer, either the SPD or the ReLU [63] activation function was used for each of the 400 neurons, as depicted in Figure 4e. We varied the number of convolutional layers and their output channels to obtain models of different sizes (Figure 4e and Supplementary Figure 5). In these results, we used only a single shot of SPD measurement (\(K=1\)) to compute the SPD activations, in both the convolutional and linear layers. For additional information, please refer to Supplementary Note 2C.
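As a sanity check on the feature-map sizes quoted above: each \(5\times 5\) convolution with stride 1 and padding 2 preserves the spatial size, and each \(2\times 2\) average pooling halves it. A small helper (ours) reproduces the flattened sizes:

```python
def flattened_feature_size(n_conv_layers, last_channels, input_size=32):
    """Flattened feature size after n_conv_layers blocks of
    (5x5 conv, stride 1, padding 2) followed by 2x2 average pooling:
    (input_size / 2**n_conv_layers)**2 * last_channels."""
    side = input_size // (2 ** n_conv_layers)
    return side * side * last_channels
```

For the MNIST convolutional model (28x28 input, one conv block, 16 channels) this gives 14 x 14 x 16 = 3136, matching the figure above.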
# Quantum-noise-limited optical neural networks operating at a few quanta per activation

Shi-Yuan Ma (sm2725@cornell.edu), School of Applied and Engineering Physics, Cornell University, Ithaca, NY 14853, USA

Tianyu Wang, School of Applied and Engineering Physics, Cornell University, Ithaca, NY 14853, USA

Jeremie Laydevant, School of Applied and Engineering Physics, Cornell University, Ithaca, NY 14853, USA; USRA Research Institute for Advanced Computer Science, Mountain View, CA 94035, USA

Logan G. Wright, Department of Applied Physics, Yale University, New Haven, Connecticut 06511, USA; NTT Physics and Informatics Laboratories, NTT Research, Inc., Sunnyvale, CA 94085, USA

Peter L. McMahon (pmcmahon@cornell.edu), School of Applied and Engineering Physics, Cornell University, Ithaca, NY 14853, USA; Kavli Institute at Cornell for Nanoscale Science, Cornell University, Ithaca, NY 14853, USA
activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per 
activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per 
activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per 
activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per activation per 
- Discussion
- Supplementary Note 10. Robustness tests of SPDNNs
- Supplementary Note 11. Noise resilience compared to conventional models
- Supplementary Note 12. Distribution of expectation values for SPD activations

## Part I. Simulation results

In this part, we introduce the details of the simulation of single-photon-detection neural networks (SPDNNs). Each neuron activation in an SPDNN, corresponding to a readout on a single-photon detector (SPD) in experiment, is modelled as a binary stochastic process [1, 2, 3]. For each SPD measurement, the single-shot output is either 0 or 1, with probabilities determined by the incident optical energy. The exact form of the activation function is defined by the actual physical process of single-photon detection.
For an incident beam with an optical energy of \(\lambda\) photons per detection, due to Poissonian photon statistics, the probability for an SPD to detect a click is \(P_{\mathrm{SPD}}(\lambda)=1-e^{-\lambda}\), as shown in Figure 2 in the main text. The detected binary results are used to compute the activation values. However, due to the stochasticity and discretization of the single-photon-detection process, estimating the gradients of the loss function is challenging, so conventional backpropagation algorithms fail to train these models.

Training stochastic neuron models has been investigated for many years. One major family of algorithms that depends on such neurons is the Boltzmann machine [4, 5]. REINFORCE algorithms (RA) [6] update the weights along the direction of the gradients of the expected reinforcement without explicitly computing them; they have been applied to train stochastic neural networks effectively on a variety of tasks [7, 8]. In [9], many methods of estimating gradients through stochastic neurons are studied; the fastest training in their experiments was achieved by the "straight-through estimator" (STE), which was previously introduced in lecture 15b of Hinton's course [10]. In our simulation of SPDNNs, we drew on both approaches to find an estimator that trains our SPDNNs effectively, with the activation induced by the physical single-photon-detection process. When the STE is used in a binary neural network, the binarization process, whether deterministic or stochastic, is treated as an identity function during backpropagation. However, if we directly use the STE to go "straight through" the whole SPD process, the training performance is poor. This is because the STE is a biased estimator of the gradients [9, 11]: the expectation of the estimator differs from the true expectation of the underlying random variable.
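As a concrete illustration, the click probability and a single-shot SPD readout can be modelled as follows (a minimal NumPy sketch; the function names are ours, not from the paper's code):

```python
import numpy as np

def p_spd(lam):
    # Click probability for mean photon number `lam` (Poisson statistics).
    return 1.0 - np.exp(-lam)

def spd_readout(lam, rng):
    # One binary SPD measurement: 1 (click) with probability p_spd(lam).
    return (rng.random(np.shape(lam)) < p_spd(lam)).astype(float)

rng = np.random.default_rng(0)
clicks = spd_readout(np.full(100_000, 1.0), rng)
# For a mean of 1 photon, the empirical click rate approaches 1 - e^{-1}.
```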
The biased gradient estimate harms the training accuracy over a large number of epochs. We therefore look for an unbiased estimator inspired by RA [6, 9]. We can conceptually break the single-photon-detection process into two parts: a deterministic probability function \(P_{\mathrm{SPD}}\), and the Bernoulli sampling that introduces the stochasticity. For a Bernoulli distribution, the expectation value is the probability of 1 itself, so \(P_{\mathrm{SPD}}\) is also the expectation value of the activation. Instead of going "straight through" the whole SPD process, we skip only the Bernoulli sampling to avoid the uncertainty in backpropagation, and keep the gradient induced by the probability function so that the estimator matches the expectation of the random variable.

To enhance training effectiveness in certain cases, we introduce a slope variable \(\eta\) that rescales the intensity inside the SPD activation function: \(P_{\mathrm{SPD}}^{\eta}(\lambda)=P_{\mathrm{SPD}}(\eta\lambda)\). Incorporating the "slope annealing" technique [12] allows controlled alteration of the gradients of the activation function, leading to more efficient navigation of the model's parameter space. Additionally, we impose an upper limit on the intensity by clamping it to a maximum value \(\lambda_{\mathrm{max}}\); this prevents vanishing gradients caused by excessively large values on the plateau of the probability function. Both the slope variable and the intensity clamping are used exclusively during the training phase. In the optical implementation, the annealing factor can be absorbed into the mapping from the trained weights to the controlled parameters of the experimental setup. The details of the backpropagation and training process are shown in Algorithms 1 and 3, with the exact activation functions of the incoherent and coherent optical setups, respectively.
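A minimal NumPy sketch of this estimator (our own illustration, not the authors' code): the forward pass performs the physical Bernoulli sampling, while the backward pass skips only the sampling step (treating \(\partial a/\partial p = 1\)) and keeps the gradient of the probability function:

```python
import numpy as np

def spd_forward(z, eta=1.0, lam_max=3.0, rng=None):
    # Forward pass of the SPD activation (incoherent setup, training time).
    rng = rng if rng is not None else np.random.default_rng()
    lam = np.clip(z, 0.0, lam_max)                   # intensity clamping
    p = 1.0 - np.exp(-eta * lam)                     # P_SPD(eta * lam)
    a = (rng.random(np.shape(p)) < p).astype(float)  # Bernoulli sampling
    return a, (lam, eta)

def spd_backward(grad_a, cache):
    # Skip the Bernoulli sampling (da/dp := 1) but keep dp/dlam.
    lam, eta = cache
    return eta * np.exp(-eta * lam) * grad_a
```

Since the forward output has expectation \(p\), whose true gradient is exactly \(\eta e^{-\eta\lambda}\), the backward pass is unbiased in the sense described above.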
In the following sections, we introduce the two SPDNN setups in detail and test their performance on different tasks and architectures.

## Supplementary Note 1. SPDNN with incoherent optical setups

### Modelling and training

When an optical neural network (ONN) operates with incoherent light, the vector elements are encoded in the intensity of light. The encoded values are non-negative, and the operations are performed by modulating the intensity of light. So for an optical matrix-vector multiplier (optical MVM) operating with incoherent light, the values in an output vector \(z\) are directly the intensities measured by the detector, i.e. \(\lambda=z\). The probability of an SPD measurement yielding 1 is then \(P_{\mathrm{SPD}}(\lambda(z))=P_{\mathrm{SPD}}(z)\), determined by the pre-activation value \(z\). Thus, the SPD activation is a Bernoulli sampling of the probability \(P_{\mathrm{SPD}}\), \(f_{\mathrm{SPD}}^{\mathrm{Incoh}}(z)=\mathbf{1}_{t<P_{\mathrm{SPD}}(z)}\), where \(t\) is a uniform random variable \(t\sim U[0,1]\) and \(\mathbf{1}_{x}\) is the indicator function on the truth value of \(x\), i.e. \[f_{\mathrm{SPD}}^{\mathrm{Incoh}}(z)=\begin{cases}1&\text{with probability }p=P_{\mathrm{SPD}}(z),\\ 0&\text{with probability }1-p,\end{cases} \tag{1}\] where the probability function is \(P_{\mathrm{SPD}}(z)=1-e^{-z}\). The activation in the forward propagation is calculated as \(a=f_{\mathrm{SPD}}^{\mathrm{Incoh}}(z)\). The training procedure is detailed in Algorithm 1. With \(L\) layers in the neural network, the SPD activation function is applied after every layer except the output layer. In the \(l\)th layer (\(l\neq L\)), \(z^{(l)}=a^{(l-1)}W^{(l)\top}\) is the direct output of the optical MVM that encodes the dot-product results.
In an incoherent optical setup, the output values are directly encoded in the light intensity, \(\lambda^{(l)}=z^{(l)}\). In the training process, we clamp the intensity to a maximum value \(\lambda_{\max}\) to avoid vanishing gradients at large values. Meanwhile, the clamped intensity vector \(\lambda^{(l)}\) is multiplied by the slope variable \(\eta\) to compute the probability of detecting a click on the SPDs according to \(P_{\text{SPD}}\), \(p^{(l)}=P_{\text{SPD}}(\eta\lambda^{(l)})\). The activation values are then a Bernoulli sampling of the computed probabilities, \(a^{(l)}=\mathbf{1}_{t<p^{(l)}}\), which are sent to the next layer in the forward propagation. In backward propagation, our gradient estimator takes the gradient of the stochastic sampling process to be \(1\), \(\partial a^{(l)}/\partial p^{(l)}=1\). Thus, during the backward pass of the \(l\)th layer, given the gradient with respect to \(a^{(l)}\), \(g_{a^{(l)}}=\partial C/\partial a^{(l)}\), calculated from the next layer (the previous layer in the backward propagation), the gradient with respect to the pre-activation \(z^{(l)}\) is calculated as \[g_{z^{(l)}}=\frac{\partial a^{(l)}}{\partial z^{(l)}}\circ g_{a^{(l)}}=\frac{ \partial a^{(l)}}{\partial p^{(l)}}\circ\frac{\partial p^{(l)}}{\partial \lambda^{(l)}}\circ\frac{\partial\lambda^{(l)}}{\partial z^{(l)}}\circ g_{a^{ (l)}}=1\circ P^{\prime}_{\text{SPD}}(\lambda^{(l)})\circ 1\circ g_{a^{(l)}}=P^{ \prime}_{\text{SPD}}(z^{(l)})\circ g_{a^{(l)}}, \tag{2}\] so the gradient with respect to the weights \(W^{(l)}\) is \(g_{W^{(l)}}=g_{z^{(l)}}^{\top}a^{(l-1)}\). In this way, the gradients can be efficiently calculated and the weights optimized with a gradient-based optimizer and a learning rate. Note that for an incoherent optical setup, the elements of the weights (realized by intensity modulations) are also non-negative, so the updated weights need to be clamped to non-negative values after each optimization step.
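A toy NumPy check of the weight-gradient formula and the non-negativity constraint (shapes and values are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
a_prev = rng.random((4, 784))             # a^{(l-1)}: batch of 4, non-negative
W = rng.random((100, 784)) * 0.01         # W^{(l)}: non-negative weights
z = a_prev @ W.T                          # pre-activation z^{(l)}

g_z = rng.standard_normal(z.shape)        # incoming gradient w.r.t. z^{(l)}
g_W = g_z.T @ a_prev                      # g_W = g_z^T a^{(l-1)}

lr = 0.001
W = np.clip(W - lr * g_W, 0.0, None)      # SGD step, then non-negative clamp
```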
After each optimization step, the slope variable is multiplied by a factor \(\theta\), following the "slope annealing" trick [12], to improve training performance when necessary. During inference with a trained model, the forward pass of test inputs is similar to the training process, except that the maximum clamping \(\lambda_{\max}\) is not applied. Additionally, to control the level of uncertainty in the stochastic neural networks, we can use multiple shots of SPD measurements per inference. In a "\(K\)-shot" inference, we use \(K\) shots of binary SPD readouts, so the final activation value of the neuron, denoted \(a^{[K]}\), is the average of the \(K\) independent stochastic binary values. This essentially integrates a few more photons on the SPD, as is usually done in conventional ONN implementations [16]. For a single shot of SPD measurement per activation, \(a^{[1]}=a\in\{0,1\}\), while for \(K\) shots, \(a^{[K]}=\frac{1}{K}\sum_{k=1}^{K}a_{k}\in\{0,1/K,2/K,\ldots,1\}\). This approach reduces the uncertainty in the models, resulting in more precise output values. In the ideal case where an infinite number of shots are integrated (\(K\to\infty\)), the activation \(a^{[\infty]}\) would converge to the expectation value without stochasticity, \(a^{[\infty]}=\mathbb{E}[a]=P_{\text{SPD}}(z)\). As we will see in Supplementary Note 1.2, the SPDNN models achieve higher test accuracy as the number of shots per activation \(K\) increases. The detailed inference procedure is given in Algorithm 2.

Input: A batch of test inputs \(a^{(0)}\) (\(N_{\text{batch}}\times N_{0}\)) and trained weights \(W^{(l)}\) (\(N_{l}\times N_{l-1}\), \(l\in\{0,1,\ldots,L\}\)), slope variable \(\eta\).
Output: The output \(a^{(L)}\).
1: for \(l=1\) to \(L\) do
2:  \(z^{(l)}\gets a^{(l-1)}W^{(l)\top}\) \(\triangleright\) Linear operation to compute the pre-activation
3:  \(\lambda^{(l)}\gets z^{(l)}\) \(\triangleright\) For incoherent light, the intensity is directly modulated
4:  if \(l<L\) then \(\triangleright\) SPD activation process
5:   \(p^{(l)}\gets P_{\text{SPD}}(\eta\cdot\lambda^{(l)})\) \(\triangleright\) Probability of detecting a click, with the slope \(\eta\) applied
6:   for \(k=1\) to \(K\) do \(\triangleright\) \(K\) shots in one inference
7:    \(a^{(l),k}\leftarrow\text{Sampling}(p^{(l)})\) \(\triangleright\) SPD output for each shot
8:   end for
9:   \(a^{(l)}\leftarrow\frac{1}{K}\sum_{k=1}^{K}a^{(l),k}\) \(\triangleright\) Average over all \(K\) shots for the activation values
10:  end if
11: end for
12: \(a^{(L)}\leftarrow\lambda^{(L)}\) \(\triangleright\) Use the output intensity directly in the inference

### MNIST classification task

To illustrate the capability of the SPD activation function, we first use it in a simple multi-layer perceptron (MLP) architecture and train models on the benchmark MNIST classification task. The models have the structure \(784\to N\to 10\), with \(N\) neurons in the hidden layer, as discussed in the main text. The MNIST dataset has 60,000 images for training and 10,000 images for testing. Each image is grayscale with \(28\times 28=784\) pixels. To match the non-negative encoding of incoherent light, the input images are normalized to pixel values in the range 0 to 1. The models consist of two linear layers: the \(784\to N\) hidden layer has a weight matrix \(W^{(1)}\) of shape \(N\times 784\), and the \(N\to 10\) output layer has a weight matrix \(W^{(2)}\) of shape \(10\times N\). The SPD activation function is applied to each hidden neuron after the linear operation with \(W^{(1)}\) to compute the neuron activations; the computed activation values are then passed to the output layer to produce the output vectors.
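The forward pass of this two-layer architecture can be sketched as follows (a NumPy illustration under our own naming; the weight initialization is arbitrary, not the trained values):

```python
import numpy as np

def p_spd(lam):
    # Click probability for intensity `lam` (in photons).
    return 1.0 - np.exp(-lam)

def forward(x, W1, W2, rng):
    # x: (batch, 784) in [0, 1]; W1: (N, 784) non-negative; W2: (10, N).
    z1 = x @ W1.T                                          # hidden intensity
    a1 = (rng.random(z1.shape) < p_spd(z1)).astype(float)  # SPD activation
    return a1 @ W2.T                                       # output layer

rng = np.random.default_rng(0)
x = rng.random((128, 784))                            # a batch of inputs
W1 = np.abs(rng.standard_normal((100, 784))) * 0.01   # non-negative weights
W2 = rng.standard_normal((10, 100)) * 0.1             # real-valued weights
out = forward(x, W1, W2, rng)
```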
The elements of the first linear operation, \(W^{(1)}\), are clamped to be non-negative to meet the requirement of an incoherent optical setup. In general, real-valued weights can be realized with an incoherent optical MVM if some digital-electronic postprocessing is available. In our case, where the activations are measured by SPDs, the activation function is applied directly in the single-photon-detection process, which makes digital postprocessing impossible. Similarly, biases in the linear operations are also disabled: avoiding digital postprocessing by applying a bias term directly to the optical intensity is also experimentally challenging at the level of a few photons. However, as the output layer is implemented as conventional optical computing with a higher signal-to-noise ratio (SNR), we can effectively implement real-valued weights for \(W^{(2)}\). In the optical implementation, this involves extra operations to map these values onto the incoherent setup.

During training, we apply the LogSoftmax function to the output vectors and use the cross-entropy loss to construct the loss function. To avoid vanishing gradients, we clamp the pre-activations at \(\lambda_{\text{max}}=3\) photons. Note that, due to the stochastic nature of the neural networks, each forward pass generates different output values even with the same weights and inputs. However, we use only a single forward pass in each training epoch, which proved to be the most efficient training approach: the stochasticity introduced in each forward propagation adds to the random search of the stochastic optimizer itself, so it helps the training process. We found that using the SGD optimizer [13] with small learning rates leads to better accuracy than other optimizers such as AdamW [15]. Although training with SGD takes longer overall, it helps us achieve a better-optimized model in the end.
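The LogSoftmax-plus-cross-entropy construction mentioned above can be written compactly (a NumPy sketch of the standard loss, not the authors' code):

```python
import numpy as np

def cross_entropy(logits, targets):
    # LogSoftmax followed by the negative log-likelihood of the target
    # class, averaged over the batch (standard cross-entropy loss).
    logp = logits - np.log(np.sum(np.exp(logits), axis=1, keepdims=True))
    return -np.mean(logp[np.arange(len(targets)), targets])

logits = np.array([[2.0, 0.5, -1.0],
                   [0.1, 0.2, 3.0]])   # model outputs for a batch of 2
targets = np.array([0, 2])             # true class indices
loss = cross_entropy(logits, targets)
```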
For our final results, we used a batch size of 128 and, in the SGD optimizer, a learning rate of 0.001 for the hidden layer and 0.01 for the output layer. We trained each SPDNN model for 10,000 epochs to obtain optimized parameters; an even higher number of epochs may be needed to achieve better accuracy. Given the small learning rate and the significant amount of noise in the model, the number of epochs required is much larger than in typical neural-network training. The training and test errors for an incoherent MLP-SPDNN with a structure of \(784\to 400\to 10\) are shown in Supplementary Figure 1. The training was performed on a GPU (Tesla V100-PCIE-32GB) and took approximately eight hours to complete.

**Supplementary Figure 1. Training curves of an incoherent SPDNN model for MNIST classification.** The plot illustrates the progression of test and training errors throughout the training of an incoherent SPDNN model with an MLP architecture of \(784\to 400\to 10\). The optimization uses an SGD optimizer with learning rates of 0.001 for the hidden layer and 0.01 for the output layer. The final trained model is obtained after 10,000 epochs.

The magnitude of the weight elements in the first linear layer, \(W^{(1)}\), is influenced by the range of the input vectors, \(a^{(0)}\), and the specific form of the SPD activation function, \(f_{\text{SPD}}^{\text{Incoh}}(z)\). In the forward pass of an incoherent SPDNN, the pre-activation values \(z^{(1)}\) are computed as \(z^{(1)}=a^{(0)}W^{(1)\top}\). The activation function \(f_{\text{SPD}}^{\text{Incoh}}(z)\) is defined as \(\mathbf{1}_{t<P_{\text{SPD}}(z)}\), where \(t\) is a random variable uniformly distributed between 0 and 1, and \(P_{\text{SPD}}(z)\) is the probability of photon detection.
When the input vectors \(a^{(0)}\) are normalized to the range 0 to 1, the weight elements in \(W^{(1)}\) are optimized based on the specific form of \(P_{\text{SPD}}\), since it depends on the exact values of the pre-activations \(z\). In our simulation of an incoherent SPDNN, the elements of \(z^{(1)}\) are expressed in photon numbers, where the value 1 corresponds to 1 photon. When \(z\gtrsim 3\), \(P_{\text{SPD}}\) reaches the plateau of the probability function. Thus, we have to keep the pre-activation values around 1 photon to ensure an effective forward pass. Consider a uniformly bright image in which every element has the maximum value of 1: with an input vector of size \(28\times 28=784\), if we aim for an output value of approximately 1 photon, the average weight element in \(W^{(1)}\) should be around \(1/784\approx 0.0013\). The average pixel value in the MNIST dataset is approximately 0.13 (with each pixel normalized to the range 0 to 1). Based on this, we can estimate that achieving an output value of approximately 1 photon requires an average weight element of around 0.01. Because both the input images and the weight matrices tend to be sparse, this estimate may be slightly lower than the actual value. Supplementary Figure 2 shows the matrix elements of \(W^{(1)}\) for a model with \(N=100\) hidden neurons. The weight elements range from 0 to 5.18, with an average value of 0.07. Each block represents a row vector of size 784, rearranged in the form of \(28\times 28\). The average value of \(W^{(1)}\) varies slightly across network structures, ranging from 0.06 to 0.08. During the inference of SPDNNs, the pixel values of the test images are likewise normalized to the range 0 to 1; this corresponds to the dynamic range of the optical setup.
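The back-of-the-envelope weight-scale estimate above can be checked directly (the 0.13 mean pixel value is the figure quoted in the text):

```python
# Target: pre-activations of about 1 photon at the hidden layer.
n_in = 28 * 28                         # 784 input pixels
w_uniform = 1.0 / n_in                 # all-bright image -> ~0.0013 per weight
mean_pixel = 0.13                      # average normalized MNIST pixel value
w_typical = 1.0 / (n_in * mean_pixel)  # ~0.01 for a typical MNIST image
```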
We trained incoherent MLP-SPDNN models with varying numbers of hidden neurons \(N\), ranging from 10 to 400. As discussed in Supplementary Note 1, we can adjust the number of SPD measurements per activation, \(K\), to control the level of stochasticity in the models. The MNIST test accuracies for different combinations of \(N\) and \(K\) are summarized in Supplementary Table 1. The values of \(N\) are 10, 20, 50, 100, 200, 300, and 400, while \(K\) takes the values 1, 2, 3, 5, 7, 10, and \(\infty\). In the case of \(K\rightarrow\infty\), we use the expectation of the activation values, \(P_{\text{SPD}}\), as the activation function, which is equivalent to integrating an infinite number of shots per SPD detection; this serves as an upper bound that is approached as \(K\) increases. Due to the stochastic nature of SPDNNs, the output vectors vary across repetitions of inference. To capture the overall behavior of the models, we repeated the full inference process 100 times for each structure with \(N\) hidden neurons and \(K\) shots per activation, which allows us to report the mean test accuracy and its standard deviation. Each independent repetition of inference uses the MNIST test dataset of 10,000 images. We observe that as either \(N\) or \(K\) increases, the mean test accuracy improves and the standard deviation decreases.
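The variance-reduction effect of \(K\)-shot averaging can be seen in a small NumPy experiment (our own illustration; the probability value is arbitrary):

```python
import numpy as np

def k_shot_activation(p, K, rng):
    # Average K independent binary SPD readouts with click probability p;
    # values lie in {0, 1/K, ..., 1} and converge to p as K grows.
    shots = rng.random((K,) + np.shape(p)) < p
    return shots.mean(axis=0)

rng = np.random.default_rng(1)
p = 1.0 - np.exp(-1.0)                  # click probability for ~1 photon
spread = [np.std([k_shot_activation(p, K, rng) for _ in range(2000)])
          for K in (1, 5, 10)]
# The spread of the activation shrinks as K increases (less stochasticity).
```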
| Model | \(K=1\) | \(K=2\) | \(K=3\) | \(K=5\) | \(K=7\) | \(K=10\) | \(K\rightarrow\infty\) |
|---|---|---|---|---|---|---|---|
| 784–10–10 | \(78.03\pm 0.32\) | \(83.18\pm 0.26\) | \(84.79\pm 0.22\) | \(86.13\pm 0.17\) | \(86.65\pm 0.17\) | \(87.08\pm 0.16\) | \(87.91\pm 0.00\) |
| 784–20–10 | \(86.74\pm 0.24\) | \(89.98\pm 0.18\) | \(90.96\pm 0.15\) | \(91.71\pm 0.13\) | \(92.00\pm 0.13\) | \(92.22\pm 0.13\) | \(92.66\pm 0.00\) |
| 784–50–10 | \(93.04\pm 0.16\) | \(94.49\pm 0.15\) | \(94.92\pm 0.12\) | \(95.24\pm 0.11\) | \(95.38\pm 0.10\) | \(95.47\pm 0.09\) | \(95.73\pm 0.00\) |
| 784–100–10 | \(95.20\pm 0.16\) | \(96.24\pm 0.11\) | \(96.53\pm 0.10\) | \(96.75\pm 0.09\) | \(96.85\pm 0.07\) | \(96.91\pm 0.07\) | \(97.02\pm 0.00\) |
| 784–200–10 | \(96.62\pm 0.12\) | \(97.33\pm 0.10\) | \(97.54\pm 0.08\) | \(97.70\pm 0.08\) | \(97.75\pm 0.08\) | \(97.80\pm 0.06\) | \(97.98\pm 0.00\) |
| 784–300–10 | \(97.00\pm 0.12\) | \(97.61\pm 0.08\) | \(97.80\pm 0.08\) | \(97.93\pm 0.07\) | \(97.97\pm 0.06\) | \(98.01\pm 0.05\) | \(98.12\pm 0.00\) |
| 784–400–10 | \(97.31\pm 0.11\) | \(97.85\pm 0.10\) | \(98.01\pm 0.09\) | \(98.15\pm 0.06\) | \(98.20\pm 0.06\) | \(98.27\pm 0.05\) | \(98.41\pm 0.00\) |

**Supplementary Table 1. Test accuracy (%) of incoherent MLP-SPDNNs on MNIST with varying hidden layer size \(N\) and shots per activation \(K\).** These models have an MLP structure of \(784\to N\to 10\), where \(N\) represents the number of hidden neurons. Each hidden neuron uses \(K\) shots of binary SPD readouts to compute its activation. The reported values are the mean and standard deviation over 100 repetitions of inference on the MNIST test set, which comprises 10,000 images.

**Supplementary Figure 2.
Visualization of weight elements in the first linear layer of an incoherent SPDNN.** The architecture of this model is \(784\to 100\to 10\), and we display the weight matrix \(W^{(1)}\) of the first layer (with dimensions \(100\times 784\)). Each block represents a row vector in \(W^{(1)}\) containing 784 elements. These row vectors are rearranged to form a 2D block with dimensions \(28\times 28\), matching the original shape of the MNIST input images. The 100 rows in \(W^{(1)}\), corresponding to the 100 hidden neurons in the neural network, are arranged in a \(10\times 10\) grid for visualization. The average value of each block is indicated at the top, and the overall average value of the weight matrix is approximately 0.07.

## Supplementary Note 2 SPDNNs with coherent optical setups

### Modelling and training

```
Require: a batch of inputs a^(0) (N_batch x N_0) with corresponding targets y (N_L x 1),
         current weights W^(l) (N_l x N_{l-1}, l in {1, ..., L}), current slope variable eta,
         slope annealing factor theta, current learning rate alpha, learning rate decay
         coefficient gamma and the clamped photon number lambda_max.
Ensure:  updated weights W^(l) (l in {1, ..., L}), slope eta and learning rate alpha.

I. Forward pass
for l = 1 to L do
    z^(l) <- a^(l-1) W^(l)^T            > linear operation to compute the pre-activation
    lambda^(l) <- (z^(l))^2             > for coherent light, intensity is the square of the amplitude
    lambda^(l) <- min(lambda^(l), lambda_max)   > clamp the maximum intensity
    if l < L then
        p^(l) <- P_SPD(lambda^(l))      > the probability of detecting a click on the SPDs
        a^(l) <- Sample(p^(l))          > SPD output for each shot
    end if
end for
a^(L) <- Output(lambda^(L))             > final output function

II. Backward pass
compute g_{a^(L)} = dC/da^(L) knowing a^(L) and y
g_{z^(L)} <- da^(L)/dz^(L) o g_{a^(L)}
for l = L to 1 do
    if l < L then
        g_{p^(l)} <- g_{a^(l)}          > "straight-through" here, skip the Bernoulli process
        g_{z^(l)} <- 2 z^(l) o P'_SPD((z^(l))^2) o g_{p^(l)}
                                        > dp/dz = dp/dlambda o dlambda/dz = 2 z o P'_SPD(z^2)
    end if
    g_{a^(l-1)} <- g_{z^(l)} W^(l)
    g_{W^(l)} <- g_{z^(l)}^T a^(l-1)    > the gradients with respect to W^(l)
end for

III. Parameter update
for l = 1 to L do
    W^(l) <- Update(W^(l), g_{W^(l)}, alpha)    > update the weights
end for
eta <- theta * eta                      > update the slope
alpha <- gamma * alpha                  > update the learning rate
```

**Algorithm 3** Physics-aware stochastic training of an SPDNN with coherent light.
\(N_{\text{batch}}\) is the batch size, \(N_{l}\) denotes the number of neurons in layer \(l\) and \(N_{0}\) is the input size. \(C\) is the loss function. \(L\) is the number of layers. \(P_{\text{SPD}}(\lambda)\) is the probability of detecting a click on the single-photon detector (SPD) as a function of the incident light intensity \(\lambda\) (in number of photons). Sample() is a probabilistic sampling of the probability. In SPDNNs, it refers to Bernoulli sampling: Sample(\(p\)) has a probability of \(p\) to be \(1\) and a probability of \(1-p\) to be \(0\) (i.e. Sample(\(p\)) \(\equiv\)\(\mathbf{1}_{t<p}\), \(t\sim U[0,1]\)). For a coherent setup, \(\lambda=z^{2}\), where \(z\) is the pre-activation, the output of a matrix-vector multiplier. Output() determines the function applied to the pre-activation right before the final output, such as Softmax or LogSoftmax. Update() specifies how to update the parameters given the calculated gradients, using optimizers such as SGD [13], Adam [14] or AdamW [15].

In coherent optical MVMs [17; 18; 19; 20; 21; 22; 23], the information is conveyed through both the amplitude and phase of light states. These multipliers can in principle encode complex numbers using arbitrary phases, but in most applications only phases of \(0\) and \(\pi\) are used, representing positive and negative real values, to align with conventional machine learning models. Our work focuses on real-valued coherent optical MVMs. Since the information is now encoded in the amplitude and phase instead of the intensity, the photon detection process measures the square modulus of the complex amplitude, which adds an extra square function to the pre-activation values. Thus, the coherent SPD activation function is \(f^{\text{Coh}}_{\text{SPD}}(z)=\mathbf{1}_{t<P_{\text{SPD}}(z^{2})}\), where \(t\) is a uniform random variable \(t\sim U[0,1]\) and \(\mathbf{1}_{x}\) is the indicator function on the truth value of \(x\), i.e. \[f_{\text{SPD}}^{\text{Coh}}(z)=\begin{cases}1&\text{with probability }p=P_{\text{SPD}}(z^{2}),\\ 0&\text{with probability }1-p,\end{cases} \tag{3}\] where \(P_{\text{SPD}}(z^{2})=1-e^{-z^{2}}\).
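Equation (3), together with the surrogate gradient used during the backward pass, can be sketched in a few lines of NumPy (a minimal illustration, not the training code used in the paper):

```python
import numpy as np

def p_spd(lam):
    """Probability that an SPD registers at least one click at intensity lam."""
    return 1.0 - np.exp(-lam)

def coherent_spd_activation(z, rng):
    """Single-shot coherent SPD activation of Eq. (3): the intensity is the
    squared amplitude, so a click occurs with probability 1 - exp(-z**2)."""
    return (rng.random(z.shape) < p_spd(z ** 2)).astype(float)

def surrogate_grad(z):
    """Derivative of the click probability, used "straight-through" past the
    Bernoulli sampling: d/dz [1 - exp(-z**2)] = 2 z exp(-z**2)."""
    return 2.0 * z * np.exp(-(z ** 2))

rng = np.random.default_rng(0)
z = np.linspace(-3.0, 3.0, 7)
a = coherent_spd_activation(z, rng)
print(a, surrogate_grad(z))
```

Note that the activation probability is even in \(z\), which is the "V"-shaped symmetry discussed next, and that the surrogate gradient vanishes at \(z=0\).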
The activation in the forward propagation is calculated by \(a=f_{\text{SPD}}^{\text{Coh}}(z)\). The expectation of the coherent SPD activation is \(\mathbb{E}[f_{\text{SPD}}^{\text{Coh}}]=P_{\text{SPD}}(z^{2})\). The coherent activation function, depicted in Figure 4a of the main text, exhibits a distinct "V" shape, symmetric about the y axis, due to the additional square operation. This symmetry could be problematic for an activation function [24]. One possible solution is to modify the information encoding and detection scheme to alter the exact form of \(\lambda(z)\) (e.g. [21]). However, in this section, we have chosen to employ the most straightforward intensity-detection scenario, which does not require modifications to conventional ONN implementations. Remarkably, despite its simplicity, this activation function delivers comparable performance and demonstrates impressive results. By adopting this approach, we alleviate experimental complexity while ensuring reliable inference in our SPDNN models.

### MNIST classification task

The MNIST handwritten-digit classification task was performed using the same simulation configurations as for the incoherent SPDNNs, except for the coherent SPD activation function and real-valued operations. Unlike the incoherent case, no clamping of the weights was necessary. The models were trained using the SGD optimizer with a learning rate of 0.01 for the hidden layers and 0.001 for the last linear layer, over 10,000 epochs. To evaluate the impact of model size, we trained models with both one and two hidden layers. The training curves of the model with the structure of \(784\to 400\to 400\to 10\) are shown in Supplementary Figure 3. The results for models with different structures and shots of SPD measurements per activation can be found in Supplementary Table 2, and the weights of a model with the structure of \(784\to 100\to 10\) are illustrated in Supplementary Figure 4.
Furthermore, convolutional SPDNNs were also used for MNIST classification. The architecture included a convolutional layer with 16 output channels, a kernel size of \(5\times 5\) and a stride of 1. An SPD activation function was applied immediately after each convolution layer, without the use of batch normalization. Average pooling of \(2\times 2\) was performed after each of the SPD activations. After the convolutional layer, the total number of features was 3136; the convolutional part was followed by a linear model of \(3136\to 400\to 10\), with the SPD activation function applied at each of the 400 hidden neurons as well. This structure is depicted in Figure 4b in the main text. For optimization, we used an SGD optimizer with a learning rate of 0.01 for the entire model. The convolutional SPDNN model can be optimized easily without fine-tuning the parameters. After 200 epochs, the accuracy quickly reached 99.4%.

**Supplementary Figure 4. Visualization of weight elements in the first linear layer of a coherent SPDNN.** The architecture of this model is \(784\to 100\to 10\), and we display the weight matrix \(W^{(1)}\) of the first layer (with dimensions \(100\times 784\)). Each block represents a row vector in \(W^{(1)}\) containing 784 elements. These row vectors are rearranged to form a 2D block with dimensions \(28\times 28\), matching the original shape of the MNIST input images. The 100 rows in \(W^{(1)}\), corresponding to the 100 hidden neurons in the neural network, are arranged in a \(10\times 10\) grid for visualization. The average value and standard deviation of the elements in each block are indicated at the top.

### CIFAR-10 classification task

The CIFAR-10 dataset [25] consists of 60,000 images, each with \(3\times 32\times 32\) pixels in 3 color channels, belonging to 10 categories: airplanes, automobiles, birds, cats, deer, dogs, frogs, horses, ships and trucks.
The dataset is partitioned into a training set of 50,000 images and a test set of 10,000 images. The pixel values were normalized using the mean of \((0.4914, 0.4822, 0.4465)\) and standard deviation of \((0.2471, 0.2435, 0.2616)\) for the three color channels. To boost performance, data augmentation techniques including random horizontal flips (50% probability) and random \(32\times 32\) crops (with 4-pixel padding) were applied during training. We used the AdamW optimizer [15] with a learning rate of 0.0005 and betas of (0.99, 0.98). The models were trained for thousands of epochs. The convolutional SPDNNs have a structure where the SPD activation function is applied after each convolution layer and before a \(2\times 2\) average pooling. The final architecture consists of a series of convolution layers followed by a linear layer of 400 neurons, and a last layer of \(400\to 10\) for the output. As with the convolutional models trained for MNIST, the convolutional layers use a kernel size of \(5\times 5\), a stride of 1 and padding of 2. Batch normalization was applied after each convolutional layer. Either the SPD or the ReLU activation function was applied to each of the 400 neurons in the first linear layer, as depicted in Figure 4e in the main text. After \(N_{\text{conv}}\) convolutional layers (\(N_{\text{conv}}=2\), 3 or 4 in this case), with the number of output channels of the last one denoted \(N_{\text{chan}}^{\text{last}}\) (either 128 or 256 in this case), the feature map of \((32/2^{N_{\text{conv}}})^{2}\times N_{\text{chan}}^{\text{last}}\) is flattened to a vector, followed by two linear layers of \((32/2^{N_{\text{conv}}})^{2}N_{\text{chan}}^{\text{last}}\to 400\to 10\). In addition to the results presented in Figure 4e in the main text, we experimented with more architectures ranging from 2 to 4 convolution layers; the results are displayed in Supplementary Figure 5.
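The size bookkeeping above is easy to verify: with same-size convolutions (kernel \(5\times 5\), stride 1, padding 2) each \(2\times 2\) pooling halves the spatial size, starting from \(32\times 32\) inputs.

```python
# Feature-map bookkeeping for the convolutional SPDNN head described above.
def flattened_features(n_conv, n_chan_last):
    side = 32 // (2 ** n_conv)        # spatial size after n_conv poolings
    return side * side * n_chan_last  # size of the flattened feature vector

# Example: four conv layers ending in 256 channels (the "128-256-256-256" model).
flat = flattened_features(4, 256)     # 2*2*256 = 1024 features
macs_hidden = flat * 400              # MACs in the first linear layer
macs_out = 400 * 10                   # MACs in the output layer (N_MAC^out = 4000)
print(flat, macs_hidden, macs_out)
```

The output-layer count of 4000 MACs matches the \(N_{\text{MAC}}^{\text{out}}\) figure quoted below and is tiny compared to the convolutional layers.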
In these models, only the SPD activation function was used. The x-axis of the plot represents the number of multiply-accumulate (MAC) operations in the convolutional layers. The layout of the number of channels for each convolution layer is noted next to each data point. For example, "64-128" indicates two convolution layers with 64 and 128 output channels, respectively. The mean values (data points) and standard deviations (shaded area) of the test accuracies were obtained from 100 repeated inferences, and the activations involved only a single shot of SPD measurement (\(K=1\)) in all the neurons, in both the convolutional and linear layers. We further investigated the effect of multiple shots of SPD measurements per activation in the convolutional (\(K_{\text{conv}}\)) and linear (\(K_{\text{lin}}\)) layers, respectively. We chose a model with four convolutional layers of 128, 256, 256, and 256 output channels and varied \(K_{\text{lin}}\) and \(K_{\text{conv}}\) to observe the test accuracies. The results are summarized in Supplementary Table 3. In these SPDNNs, the number of operations in the output layer is negligible compared to the entire model: in terms of MAC operations (dot products, DPs), \(N_{\text{MAC}}^{\text{out}}=4000\) (\(N_{\text{DP}}^{\text{out}}=10\)). The number of dot products, i.e. the activation size, is directly related to the number of optical detections in ONN implementations. The output layer is the only layer that needs to be implemented with a "high SNR" (see Figure 1 in the main text). The small share of operations in this layer indicates the capability of low-SNR stochastic layers in a deeper model, and further suggests the potential to leverage stochastic physical systems with low SNRs to perform reliable neural-network inference. **Supplementary Table 3.
Test accuracy (%) of the convolutional SPDNN on CIFAR-10 with varying shots per activation \(K\) in the convolutional and linear layers.** The SPDNN model in this table consists of four convolutional layers with 128, 256, 256, and 256 output channels, respectively. The convolutional layers are followed by a linear layer with 400 neurons and an output layer with 10 neurons. The SPD activation function is applied to each of the 400 neurons in the first linear layer. \(K_{\text{conv}}\) represents the number of shots of SPD readouts per activation in the convolutional layers, while \(K_{\text{lin}}\) represents the shots per activation in the linear layer. The mean accuracy and standard deviation are calculated from 100 repetitions of inference using the CIFAR-10 test set of 10,000 images.

## Part II Experimental setup

### Supplementary Note 3. INCOHERENT OPTICAL MATRIX-VECTOR MULTIPLIER

The optical matrix-vector multiplier (optical MVM) setup is based on the design in [26]. It consists of an array of light sources, a zoom-lens imaging system, an intensity modulator, and a photodetector. We used the organic light-emitting diode (OLED) display of a commercial smartphone (Google Pixel, 2016 version) as the light source for encoding input vectors. The OLED display consists of a \(1920\times 1080\) pixel array, with individually controllable intensity for each pixel. The display has pixels of three colors; only the green pixels (light wavelength of \(\sim 532\) nm) were used in the experiment. The green pixels are arranged in a square lattice with a pixel pitch of \(57.5\)\(\upmu\)m. A reflective liquid-crystal spatial light modulator (SLM, P1920-500-1100-HDMI, Meadowlark Optics) was combined with a half-wave plate (HWP, WPH10ME-532, Thorlabs) and a polarizing beamsplitter (PBS, CCM1-PBS251, Thorlabs) to perform intensity modulation as weight multiplication.
The SLM has a pixel array of dimensions \(1920\times 1152\), with individually controllable transmission for each pixel. Each pixel has a size of \(9.2\times 9.2\)\(\upmu\)m. A zoom lens system (Resolv4K, Navitar) was used to image the OLED display onto the SLM panel (Supplementary Figure 6). The intensity-modulated light field reflected from the SLM was further de-magnified and imaged onto the detector by a telescope formed by the rear adapter of the zoom lens (1-81102, Navitar) and an objective lens (XLFLUOR4x/340, Olympus). An additional band-pass filter (BPF, FF01-525/15-25, Semrock) and polarizer (LPVISE100-A, Thorlabs) were inserted into the telescope to reduce the bandwidth and purify the polarization of the light reflected by the PBS. A scientific CMOS camera (ORCA-Quest qCMOS Camera C15550-20UP) was used to measure the light intensity, as well as for single-photon detection. The qCMOS camera has 4096 effective pixels with a size of \(4.6\times 4.6\)\(\upmu\)m. During the computation of a vector-vector dot product, the value of each element in either vector is encoded in the intensity of the light emitted by a pixel on the OLED and in the transmission of an SLM pixel. Via the imaging system, each pixel on the OLED display is aligned to a corresponding pixel on the SLM, where element-wise multiplication takes place through intensity modulation. The modulated light intensity from the pixels in the same vector is then focused onto the detector, summing the element-wise products into the dot-product result. Since the light is incoherent, only non-negative values can be represented. Matrix-vector multiplication is realized by performing a batch of such vector-vector multiplications in parallel, multiplexed either in space or in time. OLED pixels feature a high extinction ratio and a high dynamic range in intensity, which are ideal for characterizing the accuracy of vector-vector dot products.
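The optical dot product just described can be modeled as an element-wise product of two non-negative vectors summed on a detector, with Poisson shot noise on the photon count. A minimal sketch (shot noise only; the systematic errors of the real setup are ignored here):

```python
import numpy as np

def optical_dot(x, w, rng):
    """Idealized incoherent optical dot product: OLED intensities x are
    modulated by SLM transmissions w, summed on the detector, and read out
    as a Poisson-distributed photon count."""
    assert np.all(x >= 0) and np.all(w >= 0)  # incoherent light is non-negative
    return rng.poisson(np.sum(x * w))

rng = np.random.default_rng(0)
x = rng.random(784)           # input vector encoded on OLED pixels
w = rng.random(784) * 0.01    # transmission pattern on the SLM pixels
counts = np.array([optical_dot(x, w, rng) for _ in range(2000)])
print(counts.mean(), np.sum(x * w))  # sample mean approaches the ideal dot product
```

At the few-photon intensities used for SPDNNs, a single readout is dominated by this shot noise, which is exactly the stochasticity the SPD activation function exploits.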
A commercial-grade integrated OLED panel with a high pixel count is readily available at low cost, which made it possible to encode the very large vectors that were essential for demonstrating vector-vector dot products on our setup. The true darkness of OLED pixels allowed us to achieve a high dynamic range in intensity modulation and to reduce noise caused by background light pollution. The intensity of each individual pixel can be controlled independently with 256 (8-bit) control levels. However, since the actual output intensity was not linear in the pixel control level, we calibrated a linear look-up table (LUT) that contains 124 distinct intensity levels (\(\sim\)7 bits, Supplementary Figure 7a). We converted a phase-only SLM into an intensity modulator with a half-wave plate (HWP) and a polarizing beam splitter (PBS). The SLM pixels are made of birefringent liquid-crystal layers, whose refractive index can be tuned by applying a voltage across them. By controlling the refractive index for extraordinary light, the SLM pixels introduce a phase difference between the extraordinary and ordinary light, whose polarizations are perpendicular to each other. With the PBS and HWP placed in front of the reflective SLM, the light field passed through these components twice, once on the way towards the SLM and once after being reflected by it (Supplementary Figure 6). One function of the PBS was to separate the output from the input light: the input light (incident on the SLM) was horizontally polarized and transmitted by the PBS, while the output light (reflected from the SLM) was vertically polarized and therefore reflected by the PBS. The other function of the PBS was to convert the polarization state of the output light into its amplitude: the light modulated by the SLM was in general elliptically polarized, controlled by the phase difference.
The amplitude of the light field (and, in this case, the intensity as well) was modulated by selecting only the vertical component of the SLM-modulated light at the output port of the PBS. The HWP was placed with its fast axis rotated 22.5 degrees from the extraordinary axis of the SLM so that the intensity transmission could be tuned from 0 to 100%. In the experiment, each of the SLM pixels can be controlled independently for intensity modulation with a 256-level (8-bit) LUT (Supplementary Figure 7b). The maximum extinction ratio of the transmission intensity was measured to be \(\sim\)50. Alternatively, instead of using a phase-modulation SLM, the intensity modulator could be implemented more compactly with a monolithic LCD panel in a transmission geometry.

**Supplementary Figure 6. A photo of the experimental setup.** Input light source (OLED display) and detection parts are not included in this photo. PBS: polarizing beam splitter; HWP: half-wave plate; BPF: band-pass filter; SLM: spatial light modulator.

## Supplementary Note 4 Single-photon detection by a scientific CMOS camera

Single-photon detection is at the core of implementing an SPDNN. In our experiment, we use a scientific CMOS camera, the Hamamatsu ORCA-Quest qCMOS camera, to perform the function of single-photon detectors. CMOS cameras usually cannot detect single photons due to their relatively high readout noise compared to the signals induced by individual photons. The ORCA-Quest qCMOS camera, however, has well-controlled readout noise as low as 0.3 equivalent photoelectrons, which makes it possible to resolve the individual spikes of the photon response in the camera output. An example of the distribution of pixel values from the camera is shown in Supplementary Figure 8a. These pixel values are from a sequence of frames collected with some intensity of input light. The output pixel values have a digital bias of \(\sim 200\), and the analog gain is \(\sim 7.94\) pixel values per photoelectron.
We can see the individual spikes corresponding to different numbers of detected photons, with the first peak corresponding to no detected photons. Due to the readout noise of the camera, there is a near-Gaussian distribution around the peak value for each detected photon number. To perform single-photon detection, a threshold is set to decide whether a photon (or more than one photon) was detected: if the pixel value is larger than the threshold, we record a click; otherwise, there is no click. In this way, although the camera has already completed analog-to-digital conversion (ADC) before thresholding, the qCMOS camera can still emulate the function of a single-photon detector. This camera has an overall quantum efficiency of \(\sim 86\%\) at our working wavelength of 532 nm and a dark count rate of 0.006 photoelectrons per second per pixel at the working temperature of -35\({}^{\circ}\)C with air cooling. Due to the readout noise, the thresholding process may introduce additional errors because the signal peaks overlap. Supplementary Figure 8b shows the pixel value distribution of dark frames without input signals. Using the same threshold, a small tail of the distribution lies to the right of the threshold; this small portion of the pixel values from dark frames triggers photon clicks as well, which adds to the dark count rate. Similarly, the output pixel values from detected photons also have a small probability of falling in the "no click" region, which makes the effective photon detection efficiency somewhat lower. In our experiment, we calibrated the qCMOS camera in the single-photon detection mode and found an effective dark count rate of around 0.01 photoelectrons per second per pixel and an effective photon detection efficiency of 68%, on average. Note that there are also variations among different pixels. In the experimental implementation, only a single pixel is used for each vector-vector dot product.
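The thresholding scheme can be sketched as follows, using the nominal bias and gain quoted above (per-pixel values vary in practice, and the 0.5-photoelectron threshold is an illustrative choice between the 0- and 1-photon peaks):

```python
import numpy as np

DIGITAL_BIAS = 200.0   # digital offset of the output pixel values
ANALOG_GAIN = 7.94     # pixel values per photoelectron

def photoelectrons(pixel_value):
    """Convert a raw camera pixel value to an estimated photoelectron count."""
    return (np.asarray(pixel_value) - DIGITAL_BIAS) / ANALOG_GAIN

def spd_click(pixel_value, threshold_pe=0.5):
    """Emulate an SPD: report a click iff the estimated photoelectron count
    exceeds a threshold placed between the 0- and 1-photon peaks."""
    return photoelectrons(pixel_value) > threshold_pe

# A value near the bias gives no click; about one photoelectron above it does.
print(spd_click(np.array([201.0, 212.0])))
```

The overlap of the near-Gaussian readout peaks with this threshold is precisely what produces the extra dark counts and the reduced effective detection efficiency described above.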
We will see that the photon detection efficiency and dark counts do not significantly influence the results, as discussed in Supplementary Note 10.

**Supplementary Figure 7. Look-up tables (LUTs) of the components in the incoherent optical matrix-vector multiplier (optical MVM) setup.** **a,** The 7-bit LUT of the OLED display used to control the pixel intensity. **b,** The 8-bit LUT of the SLM for intensity modulation. The minimum transmission was measured to be \(\sim 2\%\) of the maximum transmission.

## Supplementary Note 5 Validation of the Optical Vector-Vector Multiplications

Most of the computation in an ONN implementation consists of linear operations, so the accuracy of the matrix-vector multiplication is essential for successful ONN inference. In this section, we calibrate the accuracy of our optical MVM. We use the setup both for single-photon detection and for conventional intensity measurements that involve much higher intensities. Focusing the light onto one pixel of \(\sim 5\) \(\upmu\)m is challenging, which slightly reduces the dot-product precision; however, as we will see in Supplementary Note 10, the SPDNN is very robust to this level of error. To generate a test dataset representative of general dot products, we randomly generated vector pairs \(\vec{x}\) and \(\vec{w}\) based on natural-scene images from the STL10 dataset. Each vector was generated from a single color channel of one or more images patched together, depending on the target vector size (each image of size \(L\times L\) contributes \(N=L^{2}\) elements to the vector). We chose natural images since they are more representative of the inputs in image classification, with globally inhomogeneous and locally smooth features. To adjust the sparsity of the vectors, different thresholds were applied to the image pixel values such that the dot-product results cover a wider range of possible values.
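A minimal sketch of this intensity-shifting scheme, assuming pixel values normalized to \([0, 1]\):

```python
import numpy as np

def shift_and_saturate(img, shift):
    """Shift all normalized pixel values by `shift`, saturating at 0 and 1.
    shift = -1 makes the whole image dark; shift = +0.2 saturates every pixel
    originally above 0.8 and raises all other pixels by 0.2."""
    return np.clip(np.asarray(img, dtype=float) + shift, 0.0, 1.0)

img = np.array([0.0, 0.5, 0.9])
print(shift_and_saturate(img, 0.2))   # pixels above 0.8 saturate at 1
print(shift_and_saturate(img, -1.0))  # the whole image goes dark
```

Applying a range of shifts to the same image tunes its overall intensity while preserving the random spatial structure of the natural scene.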
This was achieved by shifting the original pixel values (floating-point numbers normalized to the range 0-1) in the entire image up or down by a certain amount, unless the value was already saturated at 1 (the maximum) or 0 (dark). For example, a shift of -1 would make the whole image dark, while a shift of +0.2 would saturate all the pixel values that were originally larger than 0.8 and increase all other pixel values by 0.2. This method allowed us to tune the overall intensity of the modulated images without losing the randomness of the distribution. Calibration curves of vector-vector dot-product results are shown in Supplementary Figure 9. The results are averaged over a large number of repetitions to average out the photon noise and reveal the systematic errors of the optical MVM. The vectors are randomly generated to cover the full range of light intensities from the minimum to the maximum transmission, as discussed in [26]. The vector size is \(28\times 28\), which is equivalent to the size of the first layer in MNIST classification.

## Supplementary Note 6 Validation of the SPD activation function

To validate the SPD activation function in the SPDNN implementation, we need to consider not only the precision of the linear operations but also the non-linear activation function. As the incident light onto the qCMOS camera is attenuated to just a few photons, the photon noise becomes significant and the measurement less accurate. To address this, we first measure a higher light intensity with long exposure times and estimate the exact light intensity at a shorter exposure time using the ratio of exposure times. We then use the shorter exposure time to perform single-photon detection, outputting a photon click (value 1) or no photon click (value 0). The probability of a photon click is estimated by averaging over a large number of repetitions. The intensity is tuned by adjusting both the exposure time and neutral-density (ND) filters that attenuate the light.
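The repetition-averaging estimate can be mimicked numerically: for coherent-state light with mean photon number \(\lambda\), the photon count per shot is Poisson-distributed, and a click occurs whenever at least one photon is detected, so the click probability should approach \(P_{\text{SPD}}(\lambda)=1-e^{-\lambda}\) (an idealized Monte-Carlo sketch, ignoring dark counts and finite detection efficiency):

```python
import numpy as np

def click_probability(lam, n_trials, rng):
    """Estimate the SPD click probability at mean intensity lam (photons) by
    averaging many single-shot detections: a click occurs whenever the
    Poisson-distributed photon count is at least 1."""
    counts = rng.poisson(lam, size=n_trials)
    return np.mean(counts >= 1)

rng = np.random.default_rng(0)
lam = 1.0
estimate = click_probability(lam, 100_000, rng)
theory = 1.0 - np.exp(-lam)   # P_SPD(lambda) = 1 - e^{-lambda}
print(estimate, theory)
```

Sweeping \(\lambda\) in this way traces out the same curve that the experiment measures against theory.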
The expected theoretical curve is also plotted for comparison. The results are shown in Supplementary Figure 10.

**Supplementary Figure 10. Validation of the SPD activation function.** The theory curve is the expected function \(f(\lambda)=1-e^{-\lambda}\), with the intensity \(\lambda\) in photon numbers. This experimental data was taken by the Hamamatsu ORCA-Quest qCMOS camera.

## Part III Implementation of SPDNNs

### Supplementary Note 7 Adaptation to experimental limitations

The implementation of SPDNNs on an optical MVM can be challenged by experimental restrictions that affect the precision of the network inference. Some of these limitations include the non-negative encoding resulting from the use of an incoherent light source and limits to the precision of the setup. In this section, we describe how these limitations can be addressed to successfully implement SPDNNs on our setup. As discussed in Supplementary Note 3, our incoherent optical MVM has systematic errors in the dot-product results, even in the absence of photon noise. Additionally, the SLM used in the system has a finite extinction ratio of approximately 50 (Supplementary Figure 7b). These limitations present a significant challenge for the implementation of SPDNNs because, in the models, both the input vectors and the weights have many small values close to 0. This is problematic because, within the full range of 0 to 1, having a minimum value of 0.02 instead of 0 has a non-trivial effect on the accuracy of the dot-product calculation: these small values accumulate over many elements, leading to a relatively large offset compared to the final dot-product result. As a result, the performance of the SPDNNs is severely impacted by these limitations. Supplementary Figure 11a demonstrates the results of implementing the neural network models using the real LUTs from our setup. The test accuracy drops significantly, making the experimental implementation a failure.
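The effect of the transmission floor can be illustrated with a toy model. Assuming a programmed transmission \(t\in[0,1]\) is realized as roughly \(t_{\min}+t(1-t_{\min})\) with \(t_{\min}\approx 2\%\) (the measured minimum transmission; the linear-floor model itself is a simplifying assumption), the leaked light accumulated over hundreds of small elements dominates the small ideal dot product:

```python
import numpy as np

T_MIN = 1.0 / 50.0  # ~2% transmission floor from an extinction ratio of ~50

def realized_transmission(t):
    """Programmed transmission t mapped to the value the SLM actually delivers."""
    return T_MIN + t * (1.0 - T_MIN)

rng = np.random.default_rng(0)
x = rng.random(784) * 0.13    # input values, mostly small (MNIST-like scale)
w = rng.random(784) * 0.01    # small first-layer weight transmissions
ideal = np.sum(x * w)
actual = np.sum(x * realized_transmission(w))  # the transmission floor leaks light
print(ideal, actual)  # the accumulated offset dominates the small ideal result
```

This is the kind of systematic error that the error-aware training discussed next is designed to absorb.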
To address this issue, we used error-aware training techniques (as discussed in [27]) to train our models with an awareness of these experimental restrictions. During the error-aware training process, the real LUTs were used in the forward pass of the models. The results of this error-aware training are shown as the red curves in Supplementary Figure 11a. With error-aware training, the SPDNN models become highly robust to these restrictions, especially with a relatively large number of hidden neurons.

**Supplementary Figure 11. Simulation of SPDNN performance with different experimental settings.** **a,** MNIST test accuracy of SPDNN models considering experimental restrictions. The models have a structure of \(784\to 400\to 10\) (\(N=400\), \(K=1\)). **b,** MNIST test accuracy as a function of input light intensity. The intensity was varied by scaling the range of input values by a constant factor. Both panels show results obtained with an incoherent setup and a single shot of SPD readout per activation (\(K=1\)).

Conventional ONN inferences can operate effectively at various light-intensity levels, as long as the intensity is sufficiently high to suppress photon noise and maximize detection precision. These systems can in principle integrate arbitrarily high light intensities to enhance detection precision. However, in the optical implementation of SPDNN inferences, the SPD activation function relies on the precise number of photons detected. As a result, controlling the operating intensity in the setup becomes crucial to ensure accurate quantization of the detected optical energy. Calibrating the intensity to the appropriate level for the SPD activation function presents a challenge, especially considering the inherent noise in intensity measurements at low photon counts. Despite these challenges, our simulation results demonstrate the robust performance of SPDNNs even with variations in input intensity.
We systematically varied the input intensity across a range from 0.1 to 100 times the original intensity used during training. Supplementary Figure 11b illustrates that the model's performance remains stable within a wide range of intensities. The test accuracy remains nearly constant, even when the input energy deviates significantly from the original training intensity. This stability highlights the resilience of SPDNNs to variations in input intensity and suggests that these SPDNN models can be implemented with lower photon budgets, which is promising for practical applications where minimizing optical energy usage is desirable.
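Rescaling the input intensity by a factor \(s\) changes the SPD click probability to \(1-e^{-s\lambda}\), which is why the operating intensity matters for the activation statistics. A small numerical sketch (our own illustration):

```python
import math

def click_probability(lam, scale=1.0):
    """Analytic SPD click probability 1 - exp(-scale * lam) when the
    nominal intensity lam is rescaled by a constant factor."""
    return 1.0 - math.exp(-scale * lam)

# sweep mirroring the 0.1x to 100x robustness test described above
sweep = {s: click_probability(1.0, s) for s in (0.1, 1.0, 10.0, 100.0)}
```

At high scale factors the activation saturates toward 1, while at low factors it becomes nearly linear in the intensity; the simulations show the trained models tolerate this distortion well.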
**Supplementary Figure 12. An example of weight values to be displayed on the OLED display during the experiment.** This model has \(N=100\) hidden neurons, and we display the weight matrix \(W^{(1)}\) of the first layer (with dimensions \(100\times 784\)). Each block represents a row vector of \(W^{(1)}\) containing 784 elements. These row vectors are rearranged into 2D blocks of dimensions \(28\times 28\), matching the original shape of the MNIST input images. The 100 rows of \(W^{(1)}\), corresponding to the 100 hidden neurons in the neural network, are arranged in a \(10\times 10\) grid for visualization. The weight values have been normalized to the range 0 to 1 to fit the intensity range of the OLED display.
The average value of each block is indicated at the top. The color map has been chosen to emulate the actual color on the OLED display, as only green pixels (\(\sim 532\) nm) are utilized, thereby presenting what would be observed on the OLED display in the experimental setup.

**Supplementary Figure 13. Test accuracy of individual images with 1 SPD measurement per activation (\(K=1\)).** The MLP-SPDNN models with a varying number of hidden neurons \(N\) from 50 to 400 (panels **a**-**e**) were evaluated using a single shot per SPD activation (\(K=1\)). Test accuracy was evaluated for each individual image, and each data point represents the average accuracy obtained from 30 inferences. We have not plotted error bars for the experimental data, but the variance is directly related to the data shown: since the outcome of each inference is a Bernoulli random variable with success probability \(p\), the variance is \(p(1-p)\). The accuracy values in the legends are averaged over all test images. The simulation results used the same SPDNN models as in the experiment and took the experimental restrictions into account. The error bars were calculated from repetitions of the whole process (average accuracy of 30 inferences).

**Supplementary Figure 14. Test accuracy of individual images with 2 SPD measurements per activation (\(K=2\)).** The MLP-SPDNN models with a varying number of hidden neurons \(N\) from 50 to 400 (panels **a**-**e**) were evaluated using 2 shots per SPD activation (\(K=2\)). Test accuracy was evaluated for each individual image, and each data point represents the average accuracy obtained from 15 inferences. We have not plotted error bars for the experimental data, but the variance is directly related to the data shown: since the outcome of each inference is a Bernoulli random variable with success probability \(p\), the variance is \(p(1-p)\). The accuracy values in the legends are averaged over all test images.
The simulation results used the same SPDNN models as in the experiment and took the experimental restrictions into account. The error bars were calculated from repetitions of the whole process (average accuracy of 15 inferences).

**Supplementary Figure 15. Test accuracy of individual images with 3 SPD measurements per activation (\(K=3\)).** The MLP-SPDNN models with a varying number of hidden neurons \(N\) from 50 to 400 (panels **a**-**e**) were evaluated using 3 shots per SPD activation (\(K=3\)). Test accuracy was evaluated for each individual image, and each data point represents the average accuracy obtained from 10 inferences. We have not plotted error bars for the experimental data, but the variance is directly related to the data shown: since the outcome of each inference is a Bernoulli random variable with success probability \(p\), the variance is \(p(1-p)\). The accuracy values in the legends are averaged over all test images. The simulation results used the same SPDNN models as in the experiment and took the experimental restrictions into account. The error bars were calculated from repetitions of the whole process (average accuracy of 10 inferences).

A total of 30 repetitions was performed for each image. For each repetition, if the prediction was accurate, it was recorded as a 1; otherwise, it was recorded as a 0. To visualize the distribution of the output accuracy, we then calculated and plotted the mean values and standard deviations of the test accuracy over the 30 repetitions. We also conducted simulations of the same inference process on a digital computer using the same models and input images. To bring the simulation closer to reality, we incorporated realistic experimental restrictions, such as the limited extinction ratio of the SLM, the dynamic range and precision of the LUTs of both the SLM and the OLED display, and the systematic errors in the optical MVM.
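The repetition bookkeeping and the multi-shot frame averaging can be sketched as follows (a minimal illustration; the array shapes and random frames are hypothetical stand-ins for the recorded SPD data):

```python
import numpy as np

def accuracy_stats(outcomes):
    """Mean accuracy over repeated 1/0 (correct/incorrect) outcomes of a
    stochastic inference, and the Bernoulli variance p * (1 - p)."""
    p = sum(outcomes) / len(outcomes)
    return p, p * (1.0 - p)

def k_shot_activations(frames, k):
    """Average every k consecutive one-shot binary SPD frames
    (shape: n_frames x n_neurons) into one activation per group."""
    n_frames, n = frames.shape
    assert n_frames % k == 0, "frames must group evenly into k shots"
    return frames.reshape(n_frames // k, k, n).mean(axis=1)

mean_acc, var = accuracy_stats([1] * 27 + [0] * 3)     # 30 repetitions
frames = np.random.default_rng(0).integers(0, 2, size=(30, 400))
acts = k_shot_activations(frames, 2)                   # 15 two-shot activations
```

Grouping 30 one-shot frames into pairs yields the 15 independent \(K=2\) inferences (and triplets yield the 10 \(K=3\) inferences) described above.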
Similarly, we examined the results of the inferences with \(K=2\) (\(K=3\)) shots per activation, which are illustrated in Supplementary Figure 14 (15). In this setup, every 2 (3) frames are averaged to compute a neuron activation, and we repeated this process 15 (10) times to obtain the final results. By comparing the simulation results with the experimental results obtained from the collected SPD activation values, we aimed to validate the performance of the latter. The comparison revealed that, for the majority of input images, the predictions are highly resilient to the inherent stochasticity in the model. Interestingly, the results are not as unpredictable as one might expect: a closer examination shows that most of the errors stem from a limited number of specific input images (see Supplementary Figures 13-15). The close correspondence between the experimental and simulated results for these "problematic" input images further validates the reliability of our experimental implementation. Although the experimental results are slightly inferior to the simulation results, the distribution of accuracy per input image is highly comparable. In particular, input images that are highly sensitive to the model's stochasticity tend to show larger deviations in the experimental results, while input images that are robust to the model's stochasticity exhibit high accuracy both in simulations and in experiments. These results provide strong evidence of the reliability of the experimental implementation and demonstrate the robustness and noise resilience of SPDNN implementations. To further characterize the stochastic neural-network inference, we examined the output vectors for each input test image. As depicted in Supplementary Figure 16, the 30 output vectors from the different repetitions for each input image are plotted together to illustrate the stochasticity in the neural network.
These output vectors were computed from the experimentally measured SPD activations and a digitally implemented output layer, with \(N=400\) hidden neurons and \(K=1\) shot of SPD measurement per activation (Supplementary Figure 13e). No additional operations were performed after the linear operation of the output layer (see Algorithm 2). Each of the 10 values in an output vector corresponds to one of the classes in MNIST digit classification, ranging from 0 to 9, as indicated at the bottom. The curves of the 30 output vectors were plotted with 10% transparency to show the distribution density. As shown in Supplementary Figures 13-15, most of the test images have very high accuracy and are predicted correctly by the SPDNN with high certainty, such as image 0 of digit "7" (depicted in the upper left of Supplementary Figure 16). Despite the stochastic distribution of the output values among the 30 repetitions, the value of class "7" remains consistently higher than the other elements, resulting in a 100% test accuracy for this image (see Figure 1a in the main text). We also examined the "problematic" images, such as image 8 of digit "5" (lower left), which is predicted to be digit "6" nearly half of the time. This misclassification is not surprising to human observers, as the image shares features with both digits "5" and "6". Interestingly, the output values for class "8" in this case are relatively high but not the highest, which also aligns with human intuition. A similar phenomenon can be found for the other "problematic" images as well, indicating that the model has indeed learned meaningful features from the dataset. These findings confirm that stochastic neural networks can perform reliable deterministic classification tasks, and that the inherent stochasticity in the model does not compromise its ability to make accurate predictions. **Supplementary Figure 16.
Visualization of the output vectors of the SPDNN for given input images.** In this figure, the SPDNN model has \(N=400\) hidden neurons and \(K=1\) shot of SPD measurement per activation (Supplementary Figure 13e). The prediction of a single inference for a particular test image is stochastic and is either correct or incorrect. For each test image, we performed 30 inferences and report the average accuracy. We have not plotted error bars, but the variance is directly related to the data shown: since the outcome of each inference is a Bernoulli random variable with success probability \(p\), the variance is \(p(1-p)\). The output vectors from the 30 repetitions of inference with the corresponding fixed image are plotted together, with 10% transparency on the curves to show the density. These output vectors are computed from the experimentally collected SPD activations by performing the output layer digitally on a computer (Supplementary Table 5).

## Supplementary Note 9 Full-optical implementation of the entire SPDNN models

In this section, we showcase a full-optical implementation of a neural network by implementing the last linear layer optically as well, using the SPD activation values obtained from the inference of the first layer. This provides a comprehensive illustration of the feasibility of optical implementation for the entire network. It is important to note that, in conventional binarized neural networks, the last layer is usually implemented in full precision, as demonstrated in previous studies such as [28, 29, 30, 31]. Our results demonstrate that SPDNNs can be implemented entirely using optics with remarkably low energy requirements. This capability holds promise for further advancements, especially with the integration of coherent optical computing platforms, which will be discussed later. Similar to the first layer, we use the same setup to perform the optical matrix-vector multiplication.
The difference is that we no longer need to perform single-photon detection with the light intensity controlled at a few photons per detection. Instead, the inference of the last linear layer can be implemented just as in conventional ONNs, where we accumulate a sufficiently high number of photons to reach a high SNR for each detection. The collected SPD activation values, as described in Supplementary Note 8, are used as inputs to the last linear layer. In the experimental implementation, we chose the data from the model with \(N=400\) hidden neurons and \(K=5\) shots per activation. From the 30 frames of one-shot binary SPD activations, every 5 frames are averaged to obtain 6 independent repetitions of the inference. The input activation values to be displayed on the SLM are shown in Supplementary Figure 17. The possible values for the 5-shot activations are 0, 0.2, 0.4, 0.6, 0.8, and 1. If the linear operation were performed in full precision on a computer, the mean test accuracy would be approximately \(99.2\%\). To perform the linear operation with real-valued weight elements on our incoherent setup, we divide the weight elements into positive and negative parts. We perform the operation separately for each part, and obtain the final output value by subtracting the results with negative weights from those with positive weights. The two sets of weights to be projected onto the OLED display are shown in Supplementary Figure 18, where the ten blocks of weights correspond to the ten output nodes. This approach at least doubles the photon budget required for the last layer and could be optimized for greater energy efficiency. However, even with these non-optimized settings, our results demonstrate that the optical energy budget is already several orders of magnitude lower than that of state-of-the-art ONN implementations. **Supplementary Table 6.
Optical energy consumption in SPDNN inference with varying photon budgets in the optical implementation of the output layer.** The first column displays the exposure time of the camera, which determines the number of detected photons. The average photons per detection for both positive (pos.) and negative (neg.) outputs are calculated from the 6000 dot products derived from 100 input images, 6 repetitions in the first-layer inference, and 10 output nodes. The total photons in the output layer are determined by averaging 600 inferences of the last layer, each computing 10 output values. The total detected photons in a full inference are the sum of the photons detected in both layers. The average photons per multiplication is calculated by dividing the total detected photons by the total number of multiplications. Standard deviations are calculated based on 30 repetitions of the last-layer detection. The total optical energy of a full inference, along with the photon numbers, is displayed in the fifth and sixth columns, with standard deviations omitted for simplicity. These totals include the \(\sim 1043.7\) photons used in the first layer. The last column shows the test accuracy of the inferences at each photon budget.

In the implementation, we adjust the exposure time of the camera to control the optical energy per detection. In order to perform the inference on the 100 input images and 10 output nodes, along with 6 repetitions of the activation values and 2 sets of weights, we need to perform a total of \(100\times 6\times 10\times 2=12000\) vector-vector dot products, each with a size of 400. Each vector-vector dot-product detection is repeated 100 times. The results are presented in Supplementary Table 6. The photons per detection of either positive or negative output are each averaged over \(100\times 6\times 10=6000\) dot products.
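The bookkeeping above can be reproduced numerically. The following sketch restates the counts from the text; the use of \(\sim 532\) nm (the green OLED pixels) to convert photon counts to optical energy is our own assumption:

```python
# dot products needed in the output-layer experiment:
# 100 images x 6 repetitions x 10 nodes x 2 weight signs
n_dot_products = 100 * 6 * 10 * 2

# multiplications in one full inference (N = 400 hidden neurons, K = 5):
# first layer: 400 * 5 dot products of size 784; output layer: 10 of size 400
n_mults = 400 * 5 * 784 + 10 * 400

# detected photons in the first layer: mean binary activation ~0.52186
first_layer_photons = 0.52186 * 400 * 5   # ~1043.7 photons

# photon energy at ~532 nm, to convert photon counts to optical energy
h, c, wavelength = 6.626e-34, 2.998e8, 532e-9
photon_energy_joules = h * c / wavelength  # ~3.7e-19 J per photon
```

Dividing the total detected photons of a full inference by `n_mults` gives the photons-per-multiplication figures reported in Supplementary Table 6.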
The total photons detected in the last layer per inference are averaged over the 100 input images and 6 repetitions, totaling \(100\times 6=600\) inferences. The standard deviation of the photon numbers is calculated based on the 100 repeated detections for each dot product. The total detected photons in a full inference is the sum of those in the last layer and the first layer. The average value of the binary activations collected for the \(N=400\) model is \(\sim\)0.52186, resulting in a total of \(0.52186\times 400\times 5\approx 1043.7\) detected photons per inference in the first layer, with 5 shots per activation. This number is then combined with the total detected photons in the last layer to obtain the overall photon count for a full inference. We can see that the photon budget could be reduced fivefold if only one shot per activation were used. In a full inference with \(N=400\) hidden neurons and \(K=5\) shots per activation, the total number of vector-vector products in the first layer is \(400\times 5=2000\), and that in the last layer is 10, for the 10 output nodes. With dot products of size 784 in the first layer and 400 in the last layer, the total number of multiplications in one inference process is 1,572,000 (\(2000\times 784+10\times 400\)). To calculate the number of detected photons per multiplication, we divide the total number of detected photons in a full inference by the total number of multiplications. The prediction of a given inference is made by directly evaluating the output values of each of the 10 output nodes. Each output value is calculated as the difference between the positive and negative output intensities, and the label of the node with the highest output value is determined to be the predicted label.

**Supplementary Figure 17. Visualization of activation values on the SLM during the last-layer experiment.** This figure displays the activations obtained from the data collected for the model with \(N=400\) hidden neurons and \(K=5\) shots per activation. The possible values for the 5-shot activations are 0, 0.2, 0.4, 0.6, 0.8, and 1. The activations of size 400 are rearranged into a \(20\times 20\) shape, which corresponds to their physical layout on the SLM. Panels **a** to **i** display the activations of test images with indices 0, 1, 2, 25, 50, 75, 97, 98, and 99, respectively, each with 6 repetitions of inference. The average value of the activations in each block is indicated at the top. The overall average activation value of the 100 test images is \(\sim 0.5219\).

The test accuracy on the 100 test images is presented with its mean and standard deviation in the final column of Supplementary Table 6. The standard deviation is determined by considering both the 6 repetitions of the first layer's inference and the 100 repetitions of detections in the last layer. To visualize the impact of photon noise on accuracy in ONN inferences with a limited photon budget, the data collected from the last-layer inference are depicted in Supplementary Figure 19. In each panel, 6000 data points are plotted for either positive or negative output, covering the 100 input images, 6 repetitions in the first-layer inference, and 10 output nodes. The ground-truth dot-product values are computed with high-precision operations on a computer. Both the raw camera pixel values and the corresponding photon counts are shown on the vertical axes. As the number of detected photons per detection increases, the detected values become less noisy, resulting in a test accuracy that is closer to the ground truth of 99.2% (Supplementary Table 5). Similar to conventional optical neural networks, the decrease in accuracy is primarily due to shot noise. In addition, we performed the output layer optically for other configurations as well. The results are presented in Supplementary Figure 20.
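The positive/negative weight decomposition and the argmax readout can be sketched as follows (a minimal noise-free version; in the experiment the two non-negative MVMs are performed optically, and the names here are our own):

```python
import numpy as np

def predict_with_split_weights(w_out, activations):
    """Emulate the incoherent output layer: run one non-negative MVM with
    the positive part of the weights and one with the negative part,
    subtract the two results, and take the argmax as the predicted label."""
    w_pos = np.maximum(w_out, 0.0)   # positive weight set
    w_neg = np.maximum(-w_out, 0.0)  # magnitude of negative weights
    logits = w_pos @ activations - w_neg @ activations
    return int(np.argmax(logits))

rng = np.random.default_rng(0)
w_out = rng.normal(size=(10, 400))          # 10 output nodes
a = rng.integers(0, 6, size=400) / 5.0      # 5-shot activations in {0, 0.2, ..., 1}
label = predict_with_split_weights(w_out, a)
```

Because both MVMs use only non-negative values, each is realizable on the incoherent setup, and their difference recovers the real-valued linear layer exactly (up to detection noise).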
The activation values collected in experiments with other choices of the number of hidden neurons \(N\) and shots of SPD readout \(K\) are used as the input to the output layer. If the output layer is implemented with full numerical precision, the test accuracies are those shown in Supplementary Table 5; these accuracies are the upper bound for the full-optical implementation in the presence of optical noise. For these configurations of numbers of hidden neurons (\(N\)) and shots of SPD measurements per activation (\(K\)), one inference through the \(784\to N\) hidden layer involves \(N\times K\) SPD measurements to compute the hidden-layer activation vector of size \(N\). The number of photons detected for the SPD activation computation in the hidden layer of each configuration is denoted in the corresponding panel of Supplementary Figure 20. The total number of detected photons per inference is the sum of this number and the total number of photons detected in the \(N\to 10\) output layer, following the same procedure discussed above for the configuration with \(N=400\) and \(K=5\). Similar to the plot in Figure 3d in the main text, the test accuracies increase with the detected optical energy, following a trend similar to that of the \(N=400\), \(K=5\) configuration discussed in detail above. We can also see that, with a smaller number of neurons \(N\), the model appears more noise-resilient for a similar number of photons in the output-layer implementation, although models with smaller \(N\) and \(K\) suffer from lower noise-free accuracy due to the smaller network size and higher stochasticity, as shown in Supplementary Table 5. The final test accuracy is a combination of these two factors.

**Supplementary Figure 19. Calibration of collected experimental data of the last-layer inference.** This figure shows the raw data of "high-SNR" optical measurements on the qCMOS camera with various exposure times, each depicted in a separate panel. For each exposure time, one output value was obtained by measuring the output from both positive and negative weights. Each plot includes 6,000 data points, representing 100 test images, 6 repetitions of the hidden-layer activation, and 10 output nodes. The ground-truth values were computed using full-precision operations on a digital computer. Both the raw camera pixel values and the corresponding detected photon numbers are displayed on the y-axis, with the average detected photon numbers for the 6,000 data points noted in each plot.

**Supplementary Figure 20. Experimental results of full-optical implementation with different SPDNN configurations.** In addition to Figure 3d in the main text, this figure shows the results obtained with different numbers of hidden neurons (\(N\)) and shots of SPD measurements per activation (\(K\)). Each model uses experimentally collected activation values as input for the optical implementation of the output layer. The number of detected photons in the first \(784\to N\) layer to compute the SPD activations in each configuration is denoted in the corresponding plot. The noise-free test accuracies with a full-precision output layer are shown in Supplementary Table 5. The number of detected photons in the \(N\to 10\) output layer is varied to control the noise in the optical implementation, which is reflected in the resulting test accuracy. The total number of detected photons per inference is the sum of the photon budgets in the two layers.

## Part IV Discussion

### Supplementary Note 10 Robustness tests of SPDNNs

The first thing to check is the errors induced by the single-photon detectors. The two key parameters to consider when choosing commercial SPDs are the photon detection efficiency and the dark count rate.
Photon detection efficiency refers to the fraction of incident light that can be detected by the SPD. Although low photon detection efficiency is a common issue in many photon experiments, it does not add extra noise to our SPDNN models: any attenuation of the light still follows a Poisson distribution and cannot be noisier than a single-photon detector. Hence, a low photon detection efficiency only adds to the overall transmission loss in the setup, and since the input light power is usually redundant, it does not affect the performance much. On the other hand, dark counts, or false clicks, pose a greater challenge in experiments with SPDs. False clicks are hard to distinguish from real signals, as the output of the detection is binary. The dark count rate of a functional SPD is typically between \(10^{-5}\) and \(10^{-2}\) false clicks per signal, depending on the experimental configuration. In some extreme circumstances, such as when the exposure time is very long or when it is hard to remove ambient light, the dark count rate could be as high as one false click in tens of detections, ruining the results of the experiment. However, our SPDNN models are resilient to high dark count rates. As shown in Supplementary Figure 21a, even with one false click in fewer than 10 measurements, we still obtain relatively good accuracy, and the common range of \(<10^{-2}\) barely affects the performance of the SPDNNs. As introduced in Supplementary Note 4, the dark count rate with our SPD setting is 0.01 per second per pixel; given exposure times of milliseconds, the effect of dark counts is negligible in the experimental implementation. In summary, the robustness of SPDNN models to noise obviates the need for selecting specialized SPDs for experimental realization. Cost-effective SPDs can be employed to implement SPDNNs with high performance.
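A dark count can be modeled as an independent false click, which makes clear why rates below \(\sim 10^{-2}\) have little effect (our own toy model, not the simulation code used for Supplementary Figure 21a):

```python
import numpy as np

def spd_click(lam, dark_count_prob, rng):
    """One binary SPD measurement: a click fires if at least one signal
    photon (Poisson mean lam) arrives OR an independent dark count occurs."""
    signal = rng.poisson(lam) > 0
    dark = rng.random() < dark_count_prob
    return int(signal or dark)

rng = np.random.default_rng(0)
# with no signal light, the click rate approaches the dark-count probability
rate = float(np.mean([spd_click(0.0, 0.01, rng) for _ in range(100_000)]))
```

Since a genuine click already occurs with probability \(1-e^{-\lambda}\), an extra false-click probability of \(10^{-2}\) or less only slightly perturbs the activation statistics the network was trained on.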
Furthermore, considering the significant power consumption of the cooling systems of state-of-the-art SPDs, relaxing the dark-count requirement can greatly reduce the power consumption of the detection system. The precision of linear operations is another crucial factor in neural network inference. As discussed in Supplementary Note 5, the accuracy of vector-vector multiplication may not be optimal when using a single-pixel camera for single-photon detection. To assess the effect of errors in the dot-product calculations on performance, we conducted a simulation test in which we added different levels of random noise to the dot-product results in the first layer, which serve as the pre-activations to the SPD activation function. The results, shown in Supplementary Figure 21b, indicate that SPDNNs are robust to errors in the linear operations, even with up to 20% relative noise. This robustness ensures the reliability of the experimental implementation.

## Supplementary Note 11 Noise Resilience Compared to Conventional Models

Two key features set our SPD activation function apart from conventional neural networks: the quantization of activation values and the stochastic activation process. Both occur naturally through the detection of single photons. The intrinsic quantization of energy in the detection process results in a nonlinear response to the input light intensity, eliminating the need for additional nonlinear operations in the neural network. This nonlinearity is evident in the higher MNIST classification test accuracy of SPDNNs compared to linear models. Additionally, the intrinsic photon noise in the activation function makes the output values stochastic. With more averaging, the stochasticity is reduced, resulting in more precise outputs, as seen in the implementation of SPD activations in the fully connected layers. This may suggest that the noise is unwanted in neural network inference.
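The relative-noise robustness test perturbs the pre-activations with multiplicative Gaussian noise; a sketch of the perturbation (function and variable names are our own):

```python
import numpy as np

def add_relative_noise(z, rel_noise, rng):
    """Perturb pre-activation values z with zero-mean Gaussian noise whose
    standard deviation is rel_noise * |z|, elementwise."""
    return z + rng.normal(0.0, 1.0, size=z.shape) * rel_noise * np.abs(z)

rng = np.random.default_rng(0)
z = rng.normal(size=400)                   # hypothetical pre-activations
z_noisy = add_relative_noise(z, 0.2, rng)  # up to ~20% relative error
```

Feeding `z_noisy` instead of `z` into the SPD activation reproduces the kind of dot-product error studied in Supplementary Figure 21b.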
However, stochastic inference is inevitable in many real-world tasks on a physical device, and our stochastic models demonstrate high noise resilience, yielding reliable outputs despite this stochasticity. To evaluate the noise resilience of our SPDNNs against conventional continuous-variable models, we conducted experiments comparing the test accuracy of the models under varying levels of photon noise. We adopted quantization-aware training (QAT), a popular noise-aware training method that quantizes the weights during training to make the model more noise-resilient. We trained deterministic QAT models with the same multi-layer perceptron (MLP) structure of \(784\to 400\to 10\) and quantized the weights to a specific number of bits. We then compared the MNIST test accuracy of these models to SPDNNs with the same level of photon noise added during the neural network inference of the hidden layer. For the real-valued QAT models that are compared to the coherent SPDNNs, we used ReLU activation functions. The QAT models adopted a deterministic quantization function and quantized the weights to the corresponding precision. During inference, we performed computations at full precision, with the photon noise added to the pre-activation values of the hidden neurons. Supplementary Figure 22a shows that the ReLU models exhibit high noise resilience, and that harsh quantization does not significantly enhance noise resilience but harms overall precision. In fact, decreasing the quantization levels decreases model performance at this photon-noise level; the accuracy nearly converges at a precision of 5 bits or higher. For the non-negative QAT models that are compared to the incoherent SPDNNs, the non-negativity of the weights renders ReLU activation functions less effective. Hence, we trained these QAT models with the Sigmoid activation function (more precisely, its positive half).
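The weight quantization step of such a QAT baseline can be sketched with a generic uniform deterministic quantizer. This is our own minimal illustration: the paper does not specify its exact quantization function or weight range, so the name `quantize` and the clipping range `[w_min, w_max]` are assumptions.

```python
def quantize(w, bits, w_min=-1.0, w_max=1.0):
    """Deterministically quantize w to 2**bits uniform levels in [w_min, w_max].

    For the non-negative (incoherent) variant one would set w_min=0.0.
    """
    levels = 2 ** bits - 1
    w = min(max(w, w_min), w_max)            # clip to the representable range
    step = (w_max - w_min) / levels
    return w_min + round((w - w_min) / step) * step
```

With `bits=1` every weight collapses to one of two values, which illustrates the tradeoff described above: coarser quantization can aid robustness but costs overall precision.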
However, these models are not as noise-resilient as those built on real-valued operations, and stronger quantization is required to enhance model robustness. As the simulation results show, the performance of models with a precision of 3 bits or more almost converges. It is worth noting that, despite achieving over 98% test accuracy without photon noise, the models with 3-bit precision or more perform worse under such noise levels. Decreasing the quantized precision is a tradeoff between noise resilience and overall accuracy; we observed that 2-bit QAT performs best among the tested precisions. These results show that all of the QAT models are inferior to SPDNNs in accuracy under the same or a lower photon budget, indicating that SPDNNs are more effective at achieving high accuracy in photon-starved environments. Our results suggest that the natural quantization of optical energy enhances noise resilience in neural networks, and that stochasticity could aid in searching for more accurate and noise-resilient models. However, we do not claim that the SPD activation function is the best way to train a noisy neural network, and we are open to exploring other noise-aware training methods that could further improve resilience. Our findings demonstrate that with training that properly accounts for the stochastic and quantized nature of optical energy in a realistic physical computing system, ONNs can achieve high performance even at very high noise levels, which was not previously possible. What makes our approach especially intriguing is that it exploits the natural single-photon detection process.

## Supplementary Note 12 Distribution of expectation values for SPD activations

In this study, we explored the use of highly stochastic SPDNN models to achieve high performance in deterministic classification tasks.
At first glance, this may seem counter-intuitive: deterministic classification typically requires stable and reliable outputs, while stochastic models introduce inherent uncertainty. However, a closer examination of the activation values in SPDNN inference provides a more intuitive understanding of how this approach achieves such high accuracy. In Supplementary Figure 23a, we present the distribution of expectation values for the hidden-neuron activations, obtained using a single shot of SPD readout (\(K=1\)). Since the activations are binary (either 0 or 1), the expectation value represents the probability of the activation being 1. We constructed this histogram from the activation values of every hidden neuron over every input image in the test set, so that the distribution is averaged over many samples and reflects the general behavior of the network inference. For example, a layer with 400 hidden neurons and 10,000 test input images yields \(400\times 10,000=4\times 10^{6}\) expectation values in the histogram. We used an optimized SPDNN model with an MLP structure of \(784\to 400\to 400\to 10\) to generate this histogram, and we found that the distribution is consistent across models with varying numbers of hidden neurons or layers, as well as across coherent and incoherent SPD detection schemes. Interestingly, we observed that most neuron activations have nearly deterministic expectation values rather than being purely random. While some models trained under experimental limitations cannot reach exactly zero, the sharp peak at zero shifts to a softer bump close to zero; the distribution still concentrates toward either end rather than around the middle value of 0.5.
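The histogram construction described above can be sketched as follows. This is a toy helper of our own (the name `expectation_histogram` is not from the paper); the input `probs` would, in practice, be the per-(neuron, image) click probabilities produced by the trained model.

```python
def expectation_histogram(probs, n_bins=20):
    """Bin per-(neuron, image) activation probabilities P(a=1) into a
    histogram over [0, 1], as in the style of Supplementary Figure 23a.

    For a layer with N neurons and T test images, `probs` would contain
    N * T values (e.g. 400 * 10,000 = 4e6 for the model in the text).
    """
    counts = [0] * n_bins
    for p in probs:
        idx = min(int(p * n_bins), n_bins - 1)  # p = 1.0 falls into the last bin
        counts[idx] += 1
    return counts
```

A bimodal histogram, with mass piled near 0 and 1 and little mass near 0.5, is the signature of the nearly deterministic behavior described above.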
In Bernoulli sampling, an expectation value of 0.5 signifies that the probabilities of being 0 and 1 are equal: the process carries no useful information, and its entropy is at its maximum. Noisy channels with such characteristics cannot carry valuable information for neural network inference. Consequently, during training, the model should learn from the training set and update the network weights to capture the essential features. This process stores information in the trained model, which is reflected in the decreasing entropy of each stochastic binary neuron. In Supplementary Figure 23b, we observe that as the model undergoes more training epochs, the expectation-value distribution of the activations becomes more concentrated towards 0 or 1, indicating that the model retains more information and generates more reliable outputs. However, it is important to note that while the entropy of each individual neuron decreases, at the network level the average activation over all neurons still tends to be around 0.5 photons, i.e., maximum entropy. This suggests that the neural network effectively utilizes its capacity to extract information with all of its neurons by increasing the overall network entropy; in fact, a network in which all neurons have the same expectation value (zero entropy) would not be able to learn any meaningful features. In summary, while SPDNNs are inherently stochastic, the distribution of expectation values for the hidden-neuron activations leans towards deterministic outcomes, allowing the model to effectively learn features and achieve high accuracy in deterministic classification tasks. The training process shapes the probability distribution of the neurons, allocating different neurons close to either 0 or 1 to learn the patterns of the input images and output reliable inferences.
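The contrast between per-neuron entropy and network-level entropy can be made concrete with the binary (Shannon) entropy of a Bernoulli neuron. This is a minimal sketch of our own; the function names are illustrative.

```python
import math

def binary_entropy(p):
    """Shannon entropy (in bits) of a Bernoulli(p) neuron:
    0 at p in {0, 1}, maximal (1 bit) at p = 0.5."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def neuron_vs_network_entropy(probs):
    """Average per-neuron entropy vs. entropy of the network-level
    mean activation, over a list of activation probabilities."""
    per_neuron = sum(binary_entropy(p) for p in probs) / len(probs)
    network = binary_entropy(sum(probs) / len(probs))
    return per_neuron, network
```

For a population of neurons split between probabilities near 0 and near 1, the average per-neuron entropy is small (each readout is nearly deterministic) while the entropy of the network-level mean activation is near its maximum of 1 bit, mirroring the discussion above.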
Remarkably, the implementation of this allocation is exceptionally efficient in optical energy, as each activation only involves a photon click.
2305.12162
A Scalable Neural Network for DSIC Affine Maximizer Auction Design
Automated auction design aims to find empirically high-revenue mechanisms through machine learning. Existing works on multi-item auction scenarios can be roughly divided into RegretNet-like and affine maximizer auctions (AMAs) approaches. However, the former cannot strictly ensure dominant strategy incentive compatibility (DSIC), while the latter faces scalability issues due to the large number of allocation candidates. To address these limitations, we propose AMenuNet, a scalable neural network that constructs the AMA parameters (even including the allocation menu) from bidder and item representations. AMenuNet is always DSIC and individually rational (IR) due to the properties of AMAs, and it enhances scalability by generating candidate allocations through a neural network. Additionally, AMenuNet is permutation equivariant, and its number of parameters is independent of auction scale. We conduct extensive experiments to demonstrate that AMenuNet outperforms strong baselines in both contextual and non-contextual multi-item auctions, scales well to larger auctions, generalizes well to different settings, and identifies useful deterministic allocations. Overall, our proposed approach offers an effective solution to automated DSIC auction design, with improved scalability and strong revenue performance in various settings.
Zhijian Duan, Haoran Sun, Yurong Chen, Xiaotie Deng
2023-05-20T10:42:00Z
http://arxiv.org/abs/2305.12162v3
# A Scalable Neural Network for DSIC Affine Maximizer Auction Design

###### Abstract

Automated auction design aims to find empirically high-revenue mechanisms through machine learning. Existing works on multi-item auction scenarios can be roughly divided into RegretNet-like and affine maximizer auctions (AMAs) approaches. However, the former cannot strictly ensure dominant strategy incentive compatibility (DSIC), while the latter faces scalability issues due to the large number of allocation candidates. To address these limitations, we propose AMenuNet, a scalable neural network that constructs the AMA parameters (even including the allocation menu) from bidder and item representations. AMenuNet is always DSIC and individually rational (IR) due to the properties of AMAs, and it enhances scalability by generating candidate allocations through a neural network. Additionally, AMenuNet is permutation equivariant, and its number of parameters is independent of auction scale. We conduct extensive experiments to demonstrate that AMenuNet outperforms strong baselines in both contextual and non-contextual multi-item auctions, scales well to larger auctions, generalizes well to different settings, and identifies useful deterministic allocations. Overall, our proposed approach offers an effective solution to automated DSIC auction design, with improved scalability and strong revenue performance in various settings.

## 1 Introduction

One of the central topics in auction design is to construct a mechanism that is both dominant strategy incentive compatible (DSIC) and individually rational (IR) while bringing high expected revenue to the auctioneer. The seminal work by Myerson (1981) characterizes the revenue-maximizing mechanism for single-parameter auctions. However, after four decades, the optimal auction design problem in multi-parameter scenarios remains incompletely understood, even in simple settings such as two bidders and two items (Dutting et al., 2019).
To solve this problem, there has recently been significant progress in _automated auction design_ (Sandholm and Likhodedov, 2015; Dutting et al., 2019). This paradigm formulates auction design as an optimization problem subject to DSIC and IR constraints and then finds optimal or near-optimal solutions using machine learning. Work on automated auction design can be roughly divided into two categories. The first category is the RegretNet-like approach (Curry et al., 2020; Peri et al., 2021; Rahme et al., 2020, 2021; Duan et al., 2022; Ivanov et al., 2022), pioneered by RegretNet (Dutting et al., 2019). These works represent the auction mechanism as neural networks and then find near-optimal, approximately DSIC solutions using adversarial training. The second category is based on affine maximizer auctions (AMAs) (Roberts, 1979; Likhodedov and Sandholm, 2004; Likhodedov et al., 2005; Sandholm and Likhodedov, 2015; Guo et al., 2017; Curry et al., 2022). These methods restrict the auction mechanism to AMAs, which are inherently DSIC and IR, and then optimize the AMA parameters using machine learning to achieve high revenue. Generally, RegretNet-like approaches can achieve higher revenue than AMA-based approaches. However, these works are not DSIC: they can only ensure approximate DSIC by adding a regret term to the loss function as a penalty for violating the DSIC constraints. There are no theoretical results on the upper bound of this regret, nor on its impact on the behavior of strategic bidders. Even worse, computing the regret term is time-consuming (Rahme et al., 2020). AMA-based approaches, on the other hand, offer the advantage of guaranteeing DSIC and IR due to the properties of AMAs. However, many of these approaches face scalability issues because they consider all deterministic allocations as the allocation menu.
The size of the menu grows exponentially, reaching \((n+1)^{m}\) for \(n\) bidders and \(m\) items, making it difficult for these approaches to handle larger auctions. Even auctions with 3 bidders and 10 items can pose challenges to the AMA-based methods. To overcome the limitations above, we propose a scalable neural network for the DSIC affine maximizer auction design. We refer to our approach as AMenuNet: Affine maximizer auctions with Menu Network. AMenuNet constructs the AMA parameters, including the allocation menu, bidder weights, and boost variables, from the bidder and item representations. After getting the parameters, we compute the allocation and payment results according to AMA. By setting the representations as the corresponding contexts or IDs, AMenuNet can handle both contextual (Duan et al., 2022) and non-contextual classic auctions. As AMenuNet only relies on the representations, the resulting mechanism is guaranteed to be DSIC and IR due to the properties of AMAs. Specifically, we employ two techniques to address the scalability issue of AMA-based approaches: (1) Firstly, we parameterize the allocation menu. We predefine the size of the allocation menu and train the neural network to compute the allocation candidates within the menu, along with the bidder weights and boost variables. This allows for more efficient handling of large-scale auctions. (2) Secondly, we utilize a transformer-based permutation-equivariant architecture. Notably, this architecture's parameters remain independent of the number of bidders or items. This enhances the scalability of AMenuNet, enabling it to handle auctions of larger scales than those in training. We conduct extensive experiments to demonstrate the effectiveness of AMenuNet. First, our performance experiments show that in both contextual and classic multi-item auctions, AMenuNet can achieve higher revenue than strong DSIC and IR baselines. 
AMenuNet can also achieve revenue comparable to RegretNet-like approaches that only ensure approximate DSIC. Next, our ablation study shows that the learnable allocation menu provides significant benefits to AMenuNet, from both revenue and scalability perspectives. Thirdly, we find that AMenuNet generalizes well to auctions with a different number of bidders or items than those in the training data. Finally, the case study of winning allocations illustrates that AMenuNet can discover useful deterministic allocations and set the reserve price for the auctioneer.

## 2 Related Work

Automated mechanism design (Conitzer and Sandholm, 2002, 2004; Sandholm and Likhodedov, 2015) has been proposed to find approximately optimal auctions with multiple items and bidders (Balcan et al., 2008; Lahaie, 2011; Dutting et al., 2015). Meanwhile, several works have analyzed the sample complexity of optimal auction design problems (Cole and Roughgarden, 2014; Devanur et al., 2016; Balcan et al., 2016; Guo et al., 2019; Gonczarowski and Weinberg, 2021). Recently, pioneered by RegretNet (Dutting et al., 2019), there has been rapid progress on finding (approximately) optimal auctions through deep learning (Curry et al., 2020; Peri et al., 2021; Rahme et al., 2020, 2021; Duan et al., 2022; Ivanov et al., 2022). However, as mentioned before, those RegretNet-like approaches can only ensure approximate DSIC by adding a hard-to-compute regret term. Our paper follows the affine maximizer auction (AMA) (Roberts, 1979) based approaches. AMA is a weighted version of VCG (Vickrey, 1961), which assigns a weight to each bidder and a boost to each feasible allocation. Tuning the weights and boosts enables AMAs to achieve higher revenue than VCG while maintaining DSIC and IR.
Different subsets of AMA have been studied in various works, such as VVCA (Likhodedov and Sandholm, 2004; Likhodedov et al., 2005; Sandholm and Likhodedov, 2015), \(\lambda\)-auction (Jehiel et al., 2007), mixed bundling auction (Tang and Sandholm, 2012), and bundling boosted auction (Balcan et al., 2021). However, these works set all the deterministic allocations as candidates, the number of which grows exponentially with respect to the auction scale. To overcome this issue, we construct a neural network that automatically computes the allocation menu from the representations of bidders and items. Curry et al. (2022) is the closest work to our approach, as it also parameterizes the allocation menu. The key difference is that they optimize the AMA parameters explicitly, while we derive the AMA parameters from a neural network and optimize the network weights instead. By utilizing a neural network, we can handle contextual auctions by incorporating representations as inputs. Additionally, the trained model can generalize to auctions of different scales than those encountered during training. Contextual auctions are a more general and realistic auction format, assuming that every bidder and item has some public information. They have been widely used in industry (Zhang et al., 2021; Liu et al., 2021). In the academic community, previous works on contextual auctions mostly focus on the online setting of some well-known contextual repeated auctions, such as posted-price auctions (Amin et al., 2014; Mao et al., 2018; Drutsa, 2020; Zhiyanov and Drutsa, 2020), where the seller prices the item to sell to a strategic buyer, or repeated second-price auctions (Golrezaei et al., 2021). In contrast, our paper focuses on the offline setting of contextual sealed-bid auctions, similar to Duan et al. (2022), a RegretNet-like approach.

## 3 Preliminary

Sealed-Bid Auction. We consider a sealed-bid auction with \(n\) bidders and \(m\) items. Denote \([n]=\{1,2,\ldots,n\}\).
Each bidder \(i\in[n]\) is represented by a \(d_{x}\)-dimensional vector \(\mathbf{x}_{i}\in\mathbb{R}^{d_{x}}\), which can encode her unique ID or her context (public feature) (Duan et al., 2022). Similarly, each item \(j\in[m]\) is represented by a \(d_{y}\)-dimensional vector \(\mathbf{y}_{j}\in\mathbb{R}^{d_{y}}\), which can also encode its unique ID or context. By using such representations, we unify the contextual auction and the classical Bayesian auction. We denote by \(X=[\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{n}]^{T}\in\mathbb{R}^{n\times d_{x}}\) and \(Y=[\mathbf{y}_{1},\mathbf{y}_{2},\ldots,\mathbf{y}_{m}]^{T}\in\mathbb{R}^{m\times d_{y}}\) the matrices of bidder and item representations, respectively. These matrices follow an underlying joint probability distribution \(F_{X,Y}\). In the additive valuation setting, each bidder \(i\) values each bundle of items \(S\subseteq[m]\) with a valuation \(v_{i,S}=\sum_{j\in S}v_{ij}\). Each bidder \(i\) submits her bids for the \(m\) items as \(\mathbf{b}_{i}\coloneqq(b_{i1},b_{i2},\ldots,b_{im})\). The valuation profile \(V=(v_{ij})_{i\in[n],j\in[m]}\in\mathbb{R}^{n\times m}\) is generated from a conditional distribution \(F_{V|X,Y}\) that depends on the representations of bidders and items. The auctioneer does not know the true valuation profile \(V\) but can observe the public bidder representations \(X\), item representations \(Y\), and the bidding profile \(B=(b_{ij})_{i\in[n],j\in[m]}\in\mathbb{R}^{n\times m}\). Auction Mechanism. An auction mechanism \((g,p)\) consists of an allocation rule \(g:\mathbb{R}^{n\times m}\times\mathbb{R}^{n\times d_{x}}\times\mathbb{R}^{m \times d_{y}}\to[0,1]^{n\times m}\) and a payment rule \(p:\mathbb{R}^{n\times m}\times\mathbb{R}^{n\times d_{x}}\times\mathbb{R}^{m \times d_{y}}\to\mathbb{R}^{n}_{\geq 0}\).
Given the bids \(B\), bidder representations \(X\), and item representations \(Y\), \(g_{ij}(B,X,Y)\in[0,1]\) computes the probability that item \(j\) is allocated to bidder \(i\). We require that \(\sum_{i=1}^{n}g_{ij}(B,X,Y)\leq 1\) for any item \(j\) to guarantee that no item is allocated more than once. The payment \(p_{i}(B,X,Y)\geq 0\) computes the price that bidder \(i\) needs to pay. Bidders aim to maximize their own utilities. In the additive valuation setting, bidder \(i\)'s utility is \(u_{i}(\mathbf{v}_{i},B;X,Y)\coloneqq\sum_{j=1}^{m}g_{ij}(B,X,Y)v_{ij}-p_{i}(B,X,Y)\), given \(\mathbf{v},B,X,Y\). Bidders may misreport their valuations to benefit themselves, and such strategic behavior among bidders could make the auction result hard to predict. Therefore, we require the auction mechanism to be _dominant strategy incentive compatible_ (DSIC), which means that for each bidder \(i\in[n]\), reporting her true valuation is her optimal strategy regardless of how others report. Formally, let \(B_{-i}=(\mathbf{b}_{1},\ldots,\mathbf{b}_{i-1},\mathbf{b}_{i+1},\ldots,\mathbf{b}_{n})\) be the bids except for bidder \(i\). A DSIC mechanism satisfies \[u_{i}(\mathbf{v}_{i},(\mathbf{v}_{i},B_{-i});X,Y)\geq u_{i}(\mathbf{v}_{i},(\mathbf{b}_{i},B_ {-i});X,Y),\quad\forall i,\mathbf{v}_{i},B_{-i},X,Y,\mathbf{b}_{i}.\] (DSIC) Furthermore, the auction mechanism needs to be _individually rational_ (IR), which ensures that truthful bidding results in a non-negative utility for each bidder. Formally, \[u_{i}(\mathbf{v}_{i},(\mathbf{v}_{i},B_{-i});X,Y)\geq 0,\quad\forall i,\mathbf{v}_{i},B_{-i},X,Y.\] (IR) Affine Maximizer Auction (AMA). AMA (Roberts, 1979) is a generalized version of the Vickrey-Clarke-Groves (VCG) auction (Vickrey, 1961) that is inherently DSIC and IR.
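The quasilinear utility defined above is straightforward to evaluate for any concrete allocation and payment; a minimal sketch (the function name is ours, not from the paper):

```python
def utility(v_i, g_i, p_i):
    """Bidder i's utility under additive valuations:
    u_i = sum_j g_ij * v_ij - p_i, where g_i holds the allocation
    probabilities for bidder i, v_i her true item values, and p_i her payment.
    """
    return sum(g * v for g, v in zip(g_i, v_i)) - p_i
```

With this helper, the DSIC condition says that for fixed opposing bids, no alternative bid vector yields a strictly higher `utility` than truthful bidding, and IR says that truthful bidding yields `utility >= 0`.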
An AMA consists of positive weights \(w_{i}\in\mathbb{R}_{+}\) for each bidder and boost variables \(\lambda(A)\in\mathbb{R}\) for each allocation \(A\in\mathcal{A}\), where \(\mathcal{A}\) is the _allocation menu_, i.e., the set of all the feasible (possibly random) allocations. Given \(B,X,Y\), AMA chooses the allocation that maximizes the affine welfare: \[g(B,X,Y)=A^{*}\coloneqq\arg\max_{A\in\mathcal{A}}\sum_{i=1}^{n}w_{i}b_{i}(A)+ \lambda(A),\] (Allocation) where \(b_{i}(A)=\sum_{j=1}^{m}b_{ij}A_{ij}\) in the additive valuation setting. The payment for bidder \(k\) is \[p_{k}(B,X,Y)=\frac{1}{w_{k}}\left(\sum_{i\neq k}w_{i}b_{i}(A^{*}_{-k})+\lambda (A^{*}_{-k})\right)-\frac{1}{w_{k}}\left(\sum_{i\neq k}w_{i}b_{i}(A^{*})+ \lambda(A^{*})\right).\] (Payment) Here we denote by \(A^{*}_{-k}\coloneqq\arg\max_{A\in\mathcal{A}}\sum_{i\neq k}w_{i}b_{i}(A)+ \lambda(A)\) the allocation that maximizes the affine welfare with bidder \(k\)'s term excluded. Our definition of AMA differs slightly from the previous literature (Roberts, 1979), whose allocation candidates are all the \((n+1)^{m}\) deterministic allocations. Instead, we explicitly define the allocation menu, so our definition is more general. As we will show in Appendix A, such AMAs are still DSIC and IR.

## 4 Methodology

In this section, we introduce the optimization problem for automated mechanism design and our proposed approach, AMenuNet, along with its training procedure.

### Auction Design as an Optimization Problem

We begin by parameterizing our auction mechanism as \((g^{\theta},p^{\theta})\), where \(\theta\) represents the neural network weights to be optimized. We denote the parameter space as \(\Theta\ni\theta\) and the class of all parameterized mechanisms as \(\mathcal{M}^{\Theta}\).
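The AMA allocation and payment rules in Equation (Allocation) and Equation (Payment) can be evaluated directly once the menu, weights, and boosts are given. Below is a minimal pure-Python sketch of that computation for an explicit menu (the function names are ours; the menu here is a plain list of \(n\times m\) allocation matrices):

```python
def affine_welfare(A, bids, w, boost):
    """sum_i w_i * b_i(A) + lambda(A), with b_i(A) = sum_j b_ij * A_ij."""
    return sum(
        w[i] * sum(b * a for b, a in zip(bids[i], A[i]))
        for i in range(len(bids))
    ) + boost

def ama_outcome(menu, boosts, w, bids):
    """AMA allocation A* and per-bidder payments for an explicit menu.

    menu   : list of allocations A, each an n x m matrix of probabilities
    boosts : lambda(A) for each menu entry
    w      : positive per-bidder weights
    bids   : n x m bid matrix
    """
    n = len(bids)
    scores = [affine_welfare(A, bids, w, lam) for A, lam in zip(menu, boosts)]
    k_star = max(range(len(menu)), key=scores.__getitem__)
    A_star = menu[k_star]

    def welfare_excluding(idx, k):
        # affine welfare of menu[idx] with bidder k's term removed
        A, lam = menu[idx], boosts[idx]
        return sum(
            w[i] * sum(b * a for b, a in zip(bids[i], A[i]))
            for i in range(n) if i != k
        ) + lam

    payments = []
    for k in range(n):
        best_excl = max(welfare_excluding(idx, k) for idx in range(len(menu)))
        payments.append((best_excl - welfare_excluding(k_star, k)) / w[k])
    return A_star, payments
```

With unit weights, zero boosts, and a menu of all deterministic allocations, this reduces to VCG (e.g., a second-price auction for one item); a positive boost on the empty allocation acts like a reserve price.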
The optimal auction design problem seeks the revenue-maximizing mechanism that satisfies both DSIC and IR: \[\max_{\theta\in\Theta} \text{Rev}(g^{\theta},p^{\theta})\coloneqq\mathbb{E}_{(V,X,Y) \sim F_{V,X,Y}}\left[\sum_{i=1}^{n}p_{i}^{\theta}(V,X,Y)\right]\] (OPT) s.t. \[(g^{\theta},p^{\theta})\text{ is DSIC and IR}\] where \(\text{Rev}(g^{\theta},p^{\theta})\) is the expected revenue of mechanism \((g^{\theta},p^{\theta})\). However, there is no known characterization of DSIC general multi-item auctions (Dutting et al., 2019). Thus, we restrict the search space to affine maximizer auctions (AMAs) (Roberts, 1979). AMAs are inherently DSIC and IR, and they cover a broad class of mechanisms: Lavi et al. (2003) have shown that every DSIC multi-item auction (where each bidder only cares about what she gets and pays) is almost an AMA (with some qualifications). The AMA parameters consist of positive weights \(\mathbf{w}\in\mathbb{R}_{+}^{n}\) for all bidders, an allocation menu \(\mathcal{A}\), and boost variables \(\mathbf{\lambda}\in\mathbb{R}^{|\mathcal{A}|}\) for the allocations in \(\mathcal{A}\). While the most straightforward way to define \(\mathcal{A}\) is to include all deterministic allocations, as done in prior work (Likhodedov and Sandholm, 2004; Likhodedov et al., 2005; Sandholm and Likhodedov, 2015), this approach suffers from scalability issues, as the number of allocations can be as large as \((n+1)^{m}\). To address this challenge, we propose AMenuNet, which predefines the size of \(\mathcal{A}\) and constructs it along with the other AMA parameters using a permutation-equivariant neural network.

### AMenuNet Architecture

Denote by \(s=|\mathcal{A}|\) the predefined size of the allocation menu.
AMenuNet takes as input the representations of bidders \(X\) and items \(Y\), and constructs all \(s\) allocations, the bidder weights \(\mathbf{w}\), and the boost variables \(\mathbf{\lambda}\in\mathbb{R}^{s}\) through a permutation-equivariant neural network architecture, as illustrated in Figure 1. The architecture consists of an encode layer, a menu layer, a weight layer, and a boost layer. The final allocation and payment results are computed by combining the submitted bids \(B\) according to AMA (see Equation (Allocation) and Equation (Payment)). Encode Layer. The encode layer transforms the initial representations of all bidders and items into a joint representation that captures their mutual interactions. To achieve this, we first introduce a dummy bidder \(n+1\) to handle the case in which an item is allocated to none of the \(n\) real bidders. We set the representation \(\mathbf{x}_{n+1}\) of this dummy bidder to be a \(d_{x}\)-dimensional vector with all elements set to \(1\). We then set up the representations for the other bidders and items. In contextual auctions, the representations are the contexts. In non-contextual auctions, similar to word embedding (Mikolov et al., 2013), we embed the unique ID of each bidder or item into a continuous vector space and use this embedding as the representation. Based on that, we construct the initial encoded representations for all pairs of the \(n+1\) bidders and \(m\) items, \(E\in\mathbb{R}^{(n+1)\times m\times(d_{x}+d_{y})}\), where \(E_{ij}=[\mathbf{x}_{i};\mathbf{y}_{j}]\in\mathbb{R}^{d_{x}+d_{y}}\) is the initial encoded representation of bidder \(i\) and item \(j\). We further model the mutual interactions of bidders and items. Firstly, we capture the inner influence of each bidder-item pair by two \(1\times 1\) convolutions with a ReLU activation.
By doing so, we get \(L=\text{Conv}_{2}\circ\text{ReLU}\circ\text{Conv}_{1}\circ E\in\mathbb{R}^{(n +1)\times m\times d}\), where \(d\) is the dimension of the new latent representation for each bidder-item pair, both \(\text{Conv}_{1}\) and \(\text{Conv}_{2}\) are \(1\times 1\) convolutions, and \(\text{ReLU}(x):=\max(x,0)\). Secondly, we model the mutual interactions between all the bidders and items by using the transformer (Vaswani et al., 2017) based interaction module, similar to Duan et al. (2022). Specifically, for each bidder \(i\), we model her interactions with all the \(m\) items through transformer on the \(i\)-th row of \(L\), and for each item \(j\), we model its interactions with all the \(n\) bidders through another transformer on the \(j\)-th column of \(L\): \[I^{\text{row}}_{i,\cdot}=\text{transformer}(L_{i,\cdot})\in\mathbb{R}^{m\times d _{h}},\quad I^{\text{column}}_{\cdot,j}=\text{transformer}(L_{\cdot,j})\in \mathbb{R}^{(n+1)\times d_{h}},\] where \(d_{h}\) is the size of the hidden nodes of transformer; For all the bidders and items, their global representation is obtained by the average of all the representations \(e^{\text{global}}=\frac{1}{(n+1)m}\sum_{i=1}^{n+1}\sum_{j=1}^{m}L_{ij}\). Thirdly, we get the unified interacted representation \(I_{ij}:=[I^{\text{row}}_{ij};I^{\text{column}}_{ij};e^{\text{global}}]\in \mathbb{R}^{2d_{h}+d}\) by combining all the three representations for bidder \(i\) and item \(j\). Two \(1\times 1\) convolutions with a ReLU activation are applied on \(I\) to encode \(I\) into the joint representation \(I^{\text{out}}:=\text{Conv}_{4}\circ\text{ReLU}\circ\text{Conv}_{3}\circ I\in \mathbb{R}^{(n+1)\times m\times d_{\text{out}}}\). By stacking multiple interaction modules, we can model higher-order interactions among all bidders and items. For the final interaction module, we set \(d_{\text{out}}=2s+1\) and we denote by \(J\in\mathbb{R}^{(n+1)\times m\times(2s+1)}\) its output. 
We then partition \(J\) into three tensors: \(J^{\mathcal{A}}\in\mathbb{R}^{(n+1)\times m\times s}\), \(J^{\mathbf{w}}\in\mathbb{R}^{(n+1)\times m}\) and \(J^{\mathbf{\lambda}}\in\mathbb{R}^{(n+1)\times m\times s}\), and pass them to the following layers.

Figure 1: A schematic view of AMenuNet, which takes the bidder representations \(X\) (including the dummy bidder) and item representations \(Y\) as inputs. These representations are assembled into a tensor \(E\in\mathbb{R}^{(n+1)\times m\times(d_{x}+d_{y})}\). Two \(1\times 1\) convolution layers are then applied to obtain the tensor \(L\). Following \(L\), multiple transformer-based interaction modules are used to model the mutual interactions among all bidders and items. The output tensor after these modules is denoted as \(J\). \(J\) is further split into three parts: \(J^{\mathcal{A}}\in\mathbb{R}^{(n+1)\times m\times s}\), \(J^{\mathbf{w}}\in\mathbb{R}^{(n+1)\times m}\) and \(J^{\mathbf{\lambda}}\in\mathbb{R}^{(n+1)\times m\times s}\). These parts correspond to the allocation menu \(\mathcal{A}\), bidder weights \(\mathbf{w}\), and boosts \(\mathbf{\lambda}\), respectively. Finally, based on the induced AMA parameters and the submitted bids, the allocation and payment results are computed according to the AMA mechanism.

Menu Layer. To construct the allocation menu, we first normalize each column of \(J^{\mathcal{A}}\) through a softmax function. Specifically, \(\forall j\in[m],k\in[s]\), we denote by \(J^{\mathcal{A}}_{\cdot,j,k}\in\mathbb{R}^{n+1}\) the \(j\)-th column of allocation option \(k\), and obtain its normalized probability \(\tilde{J}^{\mathcal{A}}_{\cdot,j,k}\) by \(\tilde{J}^{\mathcal{A}}_{\cdot,j,k}\coloneqq\text{Softmax}(\tau\cdot J^{ \mathcal{A}}_{\cdot,j,k})\), where \(\text{Softmax}(\mathbf{x})_{i}=e^{x_{i}}/(\sum_{k=1}^{n+1}e^{x_{k}})\in(0,1)\) for all \(\mathbf{x}\in\mathbb{R}^{n+1}\) is the softmax function, and \(\tau>0\) is the temperature parameter.
Next, we exclude the probabilities associated with the \((n+1)\)-th dummy bidder to obtain the allocation menu: \(\mathcal{A}=\tilde{J}^{\mathcal{A}}_{1:n,\cdot,\cdot}\in\mathbb{R}^{n\times m\times s}\). Thus, each allocation \(A\in\mathcal{A}\) satisfies \(A_{ij}\in[0,1]\) for any bidder \(i\) and \(\sum_{i=1}^{n}A_{ij}\leq 1\) for any item \(j\). Weight Layer. In this work, we impose the constraint that each bidder weight lies in \((0,1]\). Given \(J^{\mathbf{w}}\in\mathbb{R}^{(n+1)\times m}\), for each bidder \(i\leq n\) we get her weight by \(w_{i}=\text{Sigmoid}(\frac{1}{m}\sum_{j=1}^{m}J^{\mathbf{w}}_{ij})\), where \(\text{Sigmoid}(x)\coloneqq 1/(1+e^{-x})\in(0,1)\) for all \(x\in\mathbb{R}\) is the sigmoid function. Boost Layer. In the boost layer, we first aggregate \(J^{\mathbf{\lambda}}\) over all bidders and items, and then use a multi-layer perceptron (MLP), a fully connected neural network, to get the boost variables \(\mathbf{\lambda}\). Specifically, we compute \(\mathbf{\lambda}\) by \(\mathbf{\lambda}=\text{MLP}\left(\sum_{i=1}^{n+1}\sum_{j=1}^{m}J^{\mathbf{\lambda}}_{ ij}\right)\in\mathbb{R}^{s}\). After we obtain the output AMA parameters \(\mathcal{A},\mathbf{w},\mathbf{\lambda}\) from AMenuNet, we can compute the allocation result according to Equation (Allocation) and the payment result according to Equation (Payment). The computation of \(\mathcal{A},\mathbf{w},\mathbf{\lambda}\) involves only the public representations of bidders and items, without access to the submitted bids.
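The menu layer's column-wise normalization can be sketched for a single column of \(J^{\mathcal{A}}\) as follows (a minimal illustration; `menu_column` is a name we introduce, and the logits here stand in for one column \(J^{\mathcal{A}}_{\cdot,j,k}\) over the \(n+1\) bidders, with the dummy bidder last):

```python
import math

def menu_column(logits, tau=1.0):
    """Normalize one menu column over the n+1 bidders with a
    temperature-tau softmax, then drop the dummy bidder's share.

    The returned n probabilities automatically sum to at most 1;
    the slack is the probability the item goes unallocated.
    """
    exps = [math.exp(tau * v) for v in logits]
    z = sum(exps)
    probs = [e / z for e in exps]
    return probs[:-1]  # allocation probabilities for the n real bidders
```

Stacking such columns for every item and menu slot yields allocations satisfying \(A_{ij}\in[0,1]\) and \(\sum_{i}A_{ij}\leq 1\) by construction, and these quantities depend only on the public representations, never on the bids.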
Therefore, the mechanism is DSIC and IR (see Appendix A for proof):

**Theorem 4.1**.: _The mechanism induced by AMenuNet satisfies both DSIC and IR._

Moreover, as AMenuNet is built from equivariant operators such as transformers and \(1\times 1\) convolutions, any permutation of the inputs to AMenuNet, including the submitted bids \(B\), bidder representations \(X\), and item representations \(Y\), results in the same permutation of the allocation and payment outcomes. This property is known as permutation equivariance [14, 15]:

**Definition 4.2** (Permutation Equivariance).: We say \((g,p)\) is permutation equivariant if, for any \(B,X,Y\) and any two permutation matrices \(\Pi_{n}\in\{0,1\}^{n\times n}\) and \(\Pi_{m}\in\{0,1\}^{m\times m}\), we have \(g(\Pi_{n}B\Pi_{m},\Pi_{n}X,\Pi_{m}^{T}Y)=\Pi_{n}g(B,X,Y)\Pi_{m}\) and \(p(\Pi_{n}B\Pi_{m},\Pi_{n}X,\Pi_{m}^{T}Y)=\Pi_{n}p(B,X,Y)\).

Permutation equivariant architectures are widely used in automated auction design [14, 15, 16]. Qin et al. [14] have shown that this property can lead to better generalization ability of the mechanism model.

### Optimization and Training

Since the induced AMA is DSIC and IR, we only need to maximize the expected revenue \(\text{Rev}(g^{\theta},p^{\theta})\). To achieve this, following the standard machine learning paradigm [14], we minimize the negative empirical revenue by setting the loss function to \(\ell(\theta,S)\coloneqq\frac{1}{|S|}\sum_{k=1}^{|S|}\sum_{i=1}^{n}-p_{i}^{\theta}(V^{(k)},X^{(k)},Y^{(k)})\), where \(S\) is the training dataset and \(\theta\) contains all the neural network weights in the encode layer and the boost layer. However, computing \(\ell(\theta,S)\) involves finding the affine-welfare-maximizing allocation \(A^{*}\) (and \(A^{*}_{-k}\)), which is non-differentiable. To address this challenge, we use the softmax function as an approximation.
During training, we compute an approximate \(A^{*}\) by \[\widetilde{A^{*}}\coloneqq\frac{1}{Z}\sum_{A\in\mathcal{A}}\exp\left(\tau_{A}\cdot\left(\sum_{i=1}^{n}w_{i}b_{i}(A)+\mathbf{\lambda}(A)\right)\right)\cdot A,\] where \(Z\coloneqq\sum_{A^{\prime}\in\mathcal{A}}\exp\left(\tau_{A}\cdot(\sum_{i=1}^{n}w_{i}b_{i}(A^{\prime})+\mathbf{\lambda}(A^{\prime}))\right)\) is the normalizer, and \(\tau_{A}\) is the softmax temperature. We can control the approximation level of \(\widetilde{A^{*}}\) by tuning \(\tau_{A}\): as \(\tau_{A}\to\infty\), \(\widetilde{A^{*}}\) recovers the true \(A^{*}\), and as \(\tau_{A}\to 0\), \(\widetilde{A^{*}}\) tends to a uniform combination of all the allocations in \(\mathcal{A}\). The approximation \(\widetilde{A^{*}_{-k}}\) of \(A^{*}_{-k}\) is analogous. By approximating \(A^{*}\) and \(A^{*}_{-k}\) through the differentiable \(\widetilde{A^{*}}\) and \(\widetilde{A^{*}_{-k}}\), we make it feasible to optimize \(\ell(\theta,S)\) through gradient descent. Notice that at test time, we still follow the standard computation in Equation (Allocation) and Equation (Payment).

## 5 Experiments

In this section, we present empirical experiments that evaluate the effectiveness of AMenuNet. All experiments are run on a Linux machine with NVIDIA Graphics Processing Unit (GPU) cores. Each result is obtained by averaging across 5 different runs; in all experiments, the standard deviation of AMenuNet across runs is less than 0.01.

**Baseline Methods.** We compare AMenuNet against the following baselines:

1. VCG (Vickrey, 1961), the most classical special case of AMA;
2. Item-Myerson, a strong baseline used in Dutting et al. (2019), which independently applies the Myerson auction to each item;
3. Lottery AMA (Curry et al., 2022), an AMA-based approach that directly sets the allocation menu, bidder weights, and boost variables as the learnable weights;
4.
RegretNet (Dutting et al., 2019), the pioneering work applying deep learning to auction design, which adopts fully connected neural networks to compute auction mechanisms;
5. CITransNet (Duan et al., 2022), a RegretNet-like deep learning approach for contextual auctions.

Note that both RegretNet and CITransNet only achieve approximate DSIC, by adding a regret term to the loss function. We train both models with small regret, less than 0.005.

**Hyperparameters.** We train the models for a maximum of 8000 iterations, with 32768 generated samples per iteration. The batch size is 2048, and we evaluate all models on 10000 samples. We set the softmax temperature to 500 and the learning rate to \(3\times 10^{-4}\). We tune the menu size in \(\{32,64,128,256,512,1024\}\). For the boost layer, we use a two-layer fully connected neural network with ReLU activation. Given the induced AMA parameters, our implementation of the remaining AMA mechanism is built upon that of Curry et al. (2022). Further implementation details can be found in Appendix B.

**Auction Settings.** AMenuNet can handle both contextual auctions and classic Bayesian auctions. We construct the following multi-item contextual auctions:

1. We generate each bidder representation \(\mathbf{x}_{i}\in\mathbb{R}^{10}\) and item representation \(\mathbf{y}_{j}\in\mathbb{R}^{10}\) independently from a uniform distribution on \([-1,1]^{10}\) (i.e., \(U[-1,1]^{10}\)). The valuation \(v_{ij}\) is sampled from \(U[0,\operatorname{Sigmoid}(\mathbf{x}_{i}^{T}\mathbf{y}_{j})]\). This contextual setting is also used in Duan et al. (2022). We choose the number of bidders \(n\in\{2,3\}\) and the number of items \(m\in\{2,5,10\}\).
2. We set 2 items, and the representations are generated in the same way as in Setting (A).
For valuations, we first generate an auxiliary variable \(v^{\prime}_{i}\) from \(U[0,1]\) for each bidder \(i\), and then set \(v_{i1}=v^{\prime}_{i}\cdot\operatorname{Sigmoid}(\mathbf{x}_{i}^{T}\mathbf{y}_{1})\) and \(v_{i2}=(1-v^{\prime}_{i})\cdot\operatorname{Sigmoid}(\mathbf{x}_{i}^{T}\mathbf{y}_{2})\), which makes the valuations of the 2 items highly correlated. We choose the number of bidders \(n\in\{4,5,6,7\}\).

For classic auctions, we assign each bidder and item a unique ID and embed it into a multidimensional continuous representation. We construct the following classic auction settings:

1. For all bidders and items, \(v_{ij}\sim U[0,1]\). This setting is widely evaluated in RegretNet-like approaches (Dutting et al., 2019). We select \(n\times m\) (the number of bidders and items) in \(\{2\times 5,3\times 10,5\times 5\}\).
2. 3 bidders and 1 item, with \(v_{i1}\sim\operatorname{Exp}(3)\) (i.e., the density function is \(f(x)=\frac{1}{3}e^{-x/3}\)) for all \(i\in\{1,2,3\}\). The optimal solution is given by the Myerson auction.
3. 1 bidder and 2 items, with \(v_{11}\sim U[4,7]\) and \(v_{12}\sim U[4,16]\). The optimal auction is given by Daskalakis et al. (2015).
4. 1 bidder and 2 items, where \(v_{11}\) has density function \(f(x)=5/(1+x)^{6}\) and \(v_{12}\) has density function \(f(x)=6/(1+x)^{7}\). The optimal solution is given by Daskalakis et al. (2015).

**Revenue Experiments.** The results of the revenue experiments are presented in Table 1. AMenuNet achieves the highest revenue among all DSIC approaches. The comparison between AMenuNet and VCG reveals that incorporating affine parameters into VCG leads to higher revenue. Notably, AMenuNet even surpasses the strong baseline Item-Myerson, which relies on knowledge of the prior distribution, an overly strong assumption that may not hold, especially in contextual auctions. In contrast, AMenuNet constructs the mechanism solely from sampled data, highlighting its data efficiency.
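As a concrete reading of the contextual distributions in Settings (A) and (B) above, valuation profiles can be sampled as follows (a sketch with illustrative function names; the representation dimension \(d=10\) follows the settings):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_setting_A(n, m, d=10, rng=None):
    """Setting (A): v_ij ~ U[0, sigmoid(x_i^T y_j)]."""
    rng = rng or np.random.default_rng()
    X = rng.uniform(-1, 1, size=(n, d))        # bidder representations
    Y = rng.uniform(-1, 1, size=(m, d))        # item representations
    upper = sigmoid(X @ Y.T)                   # (n, m) per-pair upper bounds
    V = rng.uniform(0, 1, size=(n, m)) * upper
    return X, Y, V

def sample_setting_B(n, d=10, rng=None):
    """Setting (B): 2 items whose valuations are highly correlated
    through a shared auxiliary variable v'_i ~ U[0, 1]."""
    rng = rng or np.random.default_rng()
    X = rng.uniform(-1, 1, size=(n, d))
    Y = rng.uniform(-1, 1, size=(2, d))
    vprime = rng.uniform(0, 1, size=n)
    V = np.stack([vprime * sigmoid(X @ Y[0]),
                  (1 - vprime) * sigmoid(X @ Y[1])], axis=1)
    return X, Y, V
```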
Furthermore, AMenuNet demonstrates its ability to approach near-optimal solutions in settings with known optima (Settings (D)-(F)), even when those solutions are not AMAs themselves. This underscores the representativeness of the AMAs induced by AMenuNet. Remarkably, the comparison between AMenuNet and Lottery AMA in classic auctions showcases the effectiveness of training a neural network to compute AMA parameters, suggesting that AMenuNet captures the underlying mutual relations among the allocation menu, bidder weights, and boosts, and thus yields a superior mechanism after training. Lastly, while CITransNet and RegretNet may achieve higher revenue in certain cases, they are not DSIC, whereas AMenuNet consistently performs comparably to them while maintaining the crucial DSIC property.

Table 1: The experiment results of average revenue. For each case, we use the notation \(n\times m\) to represent the number of bidders \(n\) and the number of items \(m\). The best revenue among all DSIC methods is highlighted in bold. The regret of both CITransNet and RegretNet is less than \(0.005\).

**Ablation Study.** We present an ablation study that compares the full AMenuNet model with three ablated versions: AMenuNet with fixed \(\mathbf{w}=\mathbf{1}\), AMenuNet with fixed \(\mathbf{\lambda}=\mathbf{0}\), and AMenuNet with \(\mathcal{A}_{\mathrm{dtm}}\), i.e., with the allocation menu fixed to all \((n+1)^{m}\) deterministic allocations. The revenue results are presented in Table 2. For large values of \(n\) and \(m\), running AMenuNet with \(\mathcal{A}_{\mathrm{dtm}}\) can be intractable due to the size of the allocation menu (\(59049\) for \(2\times 10\) and \(1048576\) for \(3\times 10\)); this shows that considering all deterministic allocations leads to scalability issues. In contrast, the full model
achieves the highest revenue in all cases except \(2\times 10\)(A), where it still performs comparably to the best result. Notice that in Setting (C), \(\mathbf{w}=\mathbf{1}\) is coincidentally the optimal choice due to the symmetry of the setting. Overall, the ablation study highlights the benefits of making the allocation menu, weights, and boost variables learnable, for both effectiveness and scalability.

**Out-of-Setting Generalization.** The architecture of AMenuNet is independent of the number of bidders and items, allowing it to be applied to auctions of varying sizes. To evaluate such out-of-setting generalizability (Rahme et al., 2021), we conduct experiments whose results are shown in Figure 2. The results demonstrate that the generalized AMenuNet achieves revenue comparable to that of Item-Myerson, and in cases with a varying number of items it often outperforms Item-Myerson. This highlights the strong out-of-setting generalizability of AMenuNet. The fact that AMenuNet can generalize to larger settings further enhances its scalability.

**Winning Allocations.** We conducted experiments to analyze the winning allocations generated by AMenuNet. To assess the proportion of randomized allocations, we record their ratio in the last row of Table 1(a). Here, we define an allocation as randomized if it contains an element in the range \([0.01,0.99]\). The results indicate that randomized allocations account for a significant proportion, especially at larger auction scales. For instance, in settings such as \(2\times 10\)(A), \(3\times 10\)(A), and \(7\times 2\)(B), the proportion of randomized allocations exceeds \(17\%\). Combining these findings with the results presented in Table 2, we observe that introducing randomized allocations improves the revenue generated by AMenuNet.
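The criterion used above for counting randomized allocations translates directly into code (helper names are ours, for illustration):

```python
import numpy as np

def is_randomized(A, lo=0.01, hi=0.99):
    """An allocation counts as randomized if any entry lies in [lo, hi]."""
    return bool(np.any((A >= lo) & (A <= hi)))

def randomized_ratio(allocations):
    """Fraction of winning allocations that are randomized."""
    flags = [is_randomized(A) for A in allocations]
    return sum(flags) / len(flags)
```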
Additionally, we present the top-10 winning allocations by winning rate (i.e., allocations that are either \(A^{*}\) or \(A^{*}_{-k}\) for some \(k\in[n]\)) in Figure 3, specifically for the \(2\times 5\)(C) and \(5\times 5\)(C) settings. Notably, the top-10 winning allocations tend to be deterministic, suggesting that AMenuNet is capable of identifying useful deterministic allocations within the entire allocation space. Furthermore, in the \(2\times 5\)(C) setting, we observed instances where the winning allocation was the empty allocation with a substantial boost. This can be seen as a reserve price, where allocating nothing becomes the optimal choice when all submitted bids are too small.

| | \(2\times 2\)(A) | \(2\times 10\)(A) | \(3\times 2\)(B) | \(5\times 2\)(B) | \(2\times 3\)(C) | \(3\times 10\)(C) |
| --- | --- | --- | --- | --- | --- | --- |
| AMenuNet | **0.4293** | 2.3815 | **0.6197** | **0.7548** | **1.3107** | **5.5896** |
| With \(\mathbf{w}=\mathbf{1}\) | 0.4209 | **2.3848** | 0.6141 | 0.7010 | 1.2784 | 5.5873 |
| With \(\mathbf{\lambda}=\mathbf{0}\) | 0.3701 | 2.2229 | 0.3892 | 0.3363 | 1.2140 | 5.4145 |
| With \(\mathcal{A}_{\mathrm{dtm}}\) | 0.3633 | - | 0.5945 | 0.7465 | 1.2758 | - |

Table 2: The revenue results of the ablation study. For each case, we use the notation \(n\times m\) to represent the number of bidders \(n\) and the number of items \(m\). Some results of \(\mathcal{A}_{\mathrm{dtm}}\) are intractable because the menu size is too large.

Figure 2: Out-of-setting generalization results. We use the notation \(n\times m\) to represent the number of bidders \(n\) and the number of items \(m\). We train AMenuNet and evaluate it on the same auction setting, except for the number of bidders or items. For detailed numerical results, please refer to Appendix C.
## 6 Conclusion and Future Work

In this paper, we introduce AMenuNet, a scalable neural network for DSIC affine maximizer auction (AMA) design. AMenuNet constructs the AMA parameters, including the allocation menu, bidder weights, and boosts, from the public bidder and item representations. This construction ensures that the resulting mechanism is both dominant strategy incentive compatible (DSIC) and individually rational (IR). By leveraging a neural network to compute the allocation menu, AMenuNet offers improved scalability compared to previous AMA approaches. The architecture of AMenuNet is permutation equivariant, allowing it to handle auctions of varying sizes and to generalize to larger-scale auctions; such out-of-setting generalizability further enhances its scalability. Our experiments demonstrate the effectiveness of AMenuNet in terms of revenue, scalability, and out-of-setting generalizability. As for future work, since we train AMenuNet using an offline learning approach, a potential direction is to explore online learning methods for training it. Additionally, since the allocation menu size still needs to be predefined, it would be interesting to investigate making the menu size learnable as well.
2305.09224
Privacy-Preserving Ensemble Infused Enhanced Deep Neural Network Framework for Edge Cloud Convergence
We propose a privacy-preserving ensemble infused enhanced Deep Neural Network (DNN) based learning framework in this paper for Internet-of-Things (IoT), edge, and cloud convergence in the context of healthcare. In the convergence, edge server is used for both storing IoT produced bioimage and hosting DNN algorithm for local model training. The cloud is used for ensembling local models. The DNN-based training process of a model with a local dataset suffers from low accuracy, which can be improved by the aforementioned convergence and Ensemble Learning. The ensemble learning allows multiple participants to outsource their local model for producing a generalized final model with high accuracy. Nevertheless, Ensemble Learning elevates the risk of leaking sensitive private data from the final model. The proposed framework presents a Differential Privacy-based privacy-preserving DNN with Transfer Learning for a local model generation to ensure minimal loss and higher efficiency at edge server. We conduct several experiments to evaluate the performance of our proposed framework.
Veronika Stephanie, Ibrahim Khalil, Mohammad Saidur Rahman, Mohammed Atiquzzaman
2023-05-16T07:01:44Z
http://arxiv.org/abs/2305.09224v1
Privacy-Preserving Ensemble Infused Enhanced Deep Neural Network Framework for Edge Cloud Convergence

###### Abstract

We propose a privacy-preserving ensemble infused enhanced Deep Neural Network (DNN) based learning framework in this paper for Internet-of-Things (IoT), edge, and cloud convergence in the context of healthcare. In the convergence, the edge server is used both for storing IoT-produced bioimages and for hosting the DNN algorithm for local model training. The cloud is used for ensembling local models. The DNN-based training process of a model with a local dataset suffers from low accuracy, which can be improved by the aforementioned convergence and Ensemble Learning. Ensemble learning allows multiple participants to outsource their local models for producing a generalized final model with high accuracy. Nevertheless, Ensemble Learning elevates the risk of leaking sensitive private data from the final model. The proposed framework presents a Differential Privacy-based privacy-preserving DNN with Transfer Learning for local model generation to ensure minimal loss and higher efficiency at the edge server. We conduct several experiments to evaluate the performance of our proposed framework.

Edge cloud convergence, deep learning, ensemble learning, transfer learning, privacy preserving deep learning, differential privacy, ensemble infused deep learning

## I Introduction

The massive improvement in Internet-of-Things (IoT) technology has enabled rapid data collection in different applications, including healthcare. For example, Alexapath has developed an IoT-enabled microscope for instant collection of microscopic images, which can be shared with specialists working remotely [1]. As another example, an IoT device can capture lung images by passing a small amount of current over cross-sections of the human lung\({}^{1}\).
The captured data can later be used for diagnosis and research by health practitioners leveraging Artificial Intelligence (AI) techniques such as Deep Learning (DL) [2]. Nevertheless, IoT devices are resource-constrained and cannot handle data storage and DL tasks alone due to the computational power requirements of AI tasks.

Footnote 1: [https://buildforcovid19.io/lung-imaging-iot-for-remote-monitoring-in-isolation/](https://buildforcovid19.io/lung-imaging-iot-for-remote-monitoring-in-isolation/)

The cloud is a popular platform for traditional IoT data storage and DL-based machine learning in both industry and academia. However, the cloud is not suitable for real-time data analysis services due to its high bandwidth requirements and network latency [3]. Edge computing technology is getting attention from machine learning practitioners and researchers for several reasons. _First_, edge devices can be integrated with IoT devices for rapid data collection, with the data stored in private edge data servers. _Second_, part of the cloud's DL tasks can be offloaded to edge servers and executed on private data to reduce bandwidth requirements. _Third_, edge devices enable the convergence of IoT, edge, cloud, and AI to solve machine learning tasks in close proximity to the data source and offer real-time services. By leveraging this convergence, a healthcare service provider (e.g., a hospital) can deploy IoT devices and edge devices to collect bioimages and store them, respectively. In addition, hospitals can deploy edge devices that host DL algorithms to train models for data analysis based on the collected bioimages. The accuracy of the DL approach is a necessary requirement that depends on the variety of models used. A single hospital may apply a model that is not enough to achieve high prediction accuracy during data analysis. Hence, the hospital may need to collaborate with other hospitals to improve prediction accuracy.
Moreover, the dataset of a single hospital may be homogeneous, which leads to model overfitting and causes significant utility loss in a DL model [4]. Pooling datasets with other hospitals should therefore increase the dataset volume and help improve DL model accuracy. Such large-scale datasets are often obtained from multi-institutional or multi-national data accumulation and voluntary data sharing [4]. Nevertheless, sharing datasets with other hospitals introduces a privacy risk, as the datasets contain sensitive information about patients. In a centralized deep learning model training process, it is common for medical institutions to anonymize or pseudonymize patients' data before sending it to public analysis and model training sites. However, anonymization is proven insufficient to protect against re-identification attacks [5]. Moreover, once anonymized medical data are transmitted to a public site, the data cannot be easily revoked or augmented [6].

Ensemble learning is a collective machine learning approach that obtains better prediction performance by strategically combining multiple learning models. The ensemble learning approach can give high accuracy even without sufficient data representation [7]. Training a DNN amounts to minimizing a loss function to find a set of weights suitable for a given problem. The loss surface of a complex network is chaotic, with many locally optimal solutions [8]. Ensemble learning effectively utilizes these multiple locally optimal solutions to significantly improve prediction accuracy [9]. Therefore, ensemble learning is a suitable mechanism for the convergence of IoT, edge, cloud, and AI.

### _Problem Statement_

Although ensemble learning improves model accuracy, this approach alone is not sufficient to provide user privacy [10]. To describe the problem in an ensemble learning-based collaborative deep learning model, we consider a healthcare scenario as shown in Fig.
1. We assume that several hospitals, called participants, collect patients' lung images via IoT devices and store them in private edge data servers. Hospitals also own private edge servers that host DL algorithms. A hospital's private edge server learns from local data to generate a local training model. All private edge servers outsource their local models to a centralized public cloud server that hosts the ensemble algorithm. The public cloud server aggregates the received models using an ensemble method to generate a final model, which is shared with all hospitals for their data analysis. As shown in Fig. 1, an adversary can perform several attacks on the shared local models and the final model to leak sensitive information. The authors of [11] exhibit a successful model inversion attack that utilizes shared local model parameters in a collaborative learning setting to reconstruct most of the data used for training. [12] developed a membership inference attack that can determine whether a record was used as part of a machine learning model's training set. Hence, existing studies have tried to combine distributed learning models with privacy-preserving methods such as Secure Multi-Party Computation (SMPC) [13, 14], Differential Privacy (DP) [15, 16], and Homomorphic Encryption (HE) [17] to enhance system privacy. However, integrating a privacy preservation method with a distributed DL system introduces other issues. For example, in distributed learning with DP, adding too much noise yields poor performance of the resulting model. In HE- and SMPC-integrated distributed DL systems, massive computation and communication overheads are the main issues. Furthermore, in most distributed DL systems, as shown in Figure 1, additional communication overhead arises from the constant exchange of local models with the server to enhance the performance of the public model.
### _Contributions_

In this paper, we propose a privacy-preserving ensemble infused enhanced Deep Neural Network (DNN) based learning framework for IoT, edge, and cloud convergence. First, we propose a privacy-preserving architecture for the edge- and cloud-based ensemble-assisted DNN framework. In the proposed architecture, multiple participants train their own models individually at their private edge servers on their local datasets. Next, the local model generation process is made privacy-preserving with Differential Privacy to prevent privacy leakage. We use a Stochastic Gradient Descent (SGD) based mechanism in Differential Privacy to add noise to the training model parameters. As applying Differential Privacy results in a significant loss in the model, we apply Transfer Learning [18] to mitigate the loss. We assume that a trusted third party generates an initial model using a Convolutional Neural Network (CNN) with a public dataset before local model generation begins. An edge server obtains the initial model from the trusted third party and transfers its knowledge to repair the loss. Applying transfer learning also improves efficiency and reduces the computational load of the edge server. Finally, an ensemble-based collective mechanism is developed to generate a final model; the ensembling is performed, and the final model distributed, by the cloud. Our contributions are summarized as follows:

* A framework is proposed for Deep Neural Network (DNN) based learning to ensure high learning accuracy in IoT, edge, and cloud convergence.
* Our framework leverages the ensemble learning concept to generate a generalized final model that is robust and ensures good accuracy in the context of IoT, edge, and cloud convergence.
* A Differential Privacy-based privacy-preserving technique is used with Transfer Learning during local model generation to protect against privacy attacks with higher efficiency and reduced computational load at the edge servers.
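The SGD-based noise-adding mechanism mentioned above is commonly realized as DP-SGD style per-example gradient clipping followed by Gaussian noise. The sketch below is a generic illustration of that pattern, not necessarily the paper's exact procedure; the names and constants are ours:

```python
import numpy as np

def dp_sgd_step(theta, per_example_grads, lr=0.1, clip=1.0,
                noise_mult=1.1, rng=None):
    """One noisy SGD step: clip each per-example gradient to L2 norm
    `clip`, average, and perturb with Gaussian noise of standard deviation
    noise_mult * clip (applied to the gradient sum, hence divided by the
    batch size here)."""
    rng = rng or np.random.default_rng()
    clipped = [g * min(1.0, clip / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    g_bar = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip / len(per_example_grads),
                       size=g_bar.shape)
    return theta - lr * (g_bar + noise)
```

The privacy budget (\(\varepsilon\), \(\delta\)) spent over many such steps is then tracked by a composition accountant.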
### _Organization_

The remainder of this paper is organized as follows. Section II discusses closely relevant works. Preliminary topics are presented in Section III. The system architecture of the proposed framework is presented in Section IV. The methodology used in the proposed framework is described in Section V. Experimental results and performance evaluation are shown in Section VI. Finally, Section VII concludes this paper.

## II Related Work

To preserve privacy in collaborative learning schemes, recent studies have tried to integrate them with other privacy-preserving methodologies, such as Multi-Party Computation (MPC), Homomorphic Encryption (HE), and Differential Privacy (DP). [19] proposed a privacy-preserving deep learning system via weight transmission. To preserve model privacy, participants share symmetric keys to keep the model secret from the server. The server acts as a transfer station for the local model to be distributed and trained. The fact that the training model needs to be updated and trained in sequence makes this approach inefficient.

Fig. 1: Centralized collaborative learning without a privacy-preservation method

A method proposed by [20] takes advantage of both centralized and distributed training schemes to achieve efficiency and reduce computational cost. The proposed model employs differential privacy, blinding, and HE techniques to ensure the model's ability to resist collusion attacks, model attacks, and data attacks. Another DP-based privacy-preserving collaborative learning approach was introduced by [21]. To minimize information leakage from the shared model, they select only gradients above a certain threshold to be transmitted. However, the proposed method is claimed to provide only moderate privacy, as a subset of the parameters is shared with other participants in each training iteration [13]. To tackle this issue, [13] proposed a collaborative deep learning scheme whose privacy preservation method is based on secure MPC.
The proposed scheme is shown to protect both the local datasets and the learning model from the cloud server. [22] also proposed a secure MPC-based collaborative learning model that is resistant to generative adversarial network based attacks, by ensuring that participants are isolated from the model parameters. However, due to the high cost of computing complex functions, MPC's application in a privacy-preserving deep learning environment may not be suitable. Focusing on reducing computational cost, [14] proposed partial model encryption using MPC in a distributed deep learning system. Although the computational cost is significantly reduced, the underlying high communication cost of MPC remains, as synchronous updates are required across all participants in each training iteration. Aside from MPC, recent work on privacy-preserving collaborative deep learning using HE is also proven to provide privacy guarantees against honest-but-curious servers and parameter leakage. For example, [23] introduces a privacy-preserving deep learning model using an additive HE scheme. The proposed scheme encrypts the local model's gradients before they are sent to the server. The model is proven to be secure against curious servers at the cost of increased communication between the learning participants and the cloud server.

The aforementioned methods try to collaboratively enhance public model performance by updating model parameters in a privacy-preserving manner. On the other hand, one of the state-of-the-art approaches, Private Aggregation of Teacher Ensembles (PATE), proposed by [24], is a DP-based method that allows unlabeled public data to be labeled in a privacy-preserving manner in order to train another learning model. One method closely related to our work is proposed by [18], which is a transfer learning based PATE ensemble learning method.
This method transfers the knowledge of existing public data to each of the ensemble learning teacher models, which increases the performance of the model in predicting unlabeled data. However, when each teacher model is owned by a different participant in a collaborative learning scheme, the proposed method may cause a privacy leak, because each teacher model is made visible to other participants; the model parameters can then be used for a membership inference attack or a model inversion attack on a specific participant.

From the methods discussed above, we note that each privacy preservation method trades off accuracy, computation cost, and communication cost. HE- and MPC-based methods provide better accuracy and privacy at the cost of increased communication and computation. DP-based methods are more efficient in terms of communication and computational cost, in return for some performance loss. In addition, most of the privacy-preserving distributed learning methods mentioned above rely on periodic local model updates from the local devices, with the aggregated model then sent back to all devices; this results in substantial communication overhead [25]. Considering both aspects, we propose a high-performing ensemble distributed learning system with knowledge transfer based on differential privacy, which does not require continuous local model updates.

## III Preliminaries

In this section, we present the background on DL, ensemble learning, distributed learning, and DP that serves as the foundation of this article.

### _Deep Learning_

DL is a branch of machine learning that allows us to obtain high-level abstractions of a set of data. Essentially, a DL neural network is composed of more than three layers, which are designed to imitate the human brain.
This allows DL to learn from large amounts of data, hence its success on complex tasks such as computer vision, image classification, and language processing. The typical process in deep learning involves forward propagation and backpropagation. Forward propagation produces output values from the given input data, while backpropagation updates the parameters of the neural network model. To update the weight parameters, backpropagation uses an optimizer whose role is to calculate the loss and update the model parameters to reduce it. Generally, a loss function can be denoted as \(l(y,\hat{y})\), where \(y\) is the true label and \(\hat{y}\) is the prediction. In Stochastic Gradient Descent (SGD), we compute the updated parameters as follows.

\[\Theta_{k+1}=\Theta_{k}-\eta\cdot\frac{1}{N}\sum_{i=1}^{N}\nabla_{\Theta_{k}}l(f(x_{i}),y_{i})\]

Here \(\Theta_{k}\) is the parameter vector at the current step \(k\), \(\eta\) is the learning rate, \(N\) is the number of samples within a batch, \(\nabla\) denotes the derivative with respect to every parameter, and \(l(f(x_{i}),y_{i})\) is our loss function, which takes the prediction of an input \(x_{i}\) from a prediction function \(f(\cdot)\) and the true label \(y_{i}\) of the input as arguments.

### _Ensemble Learning_

Ensemble learning is a method that combines multiple learning models, whose primary goal is to improve the capability of its base models \(\mathcal{M}\). Ensemble learning can be classified into three classes: bagging, boosting, and stacking. In this paper, we focus on the ensemble averaging method, a Bootstrap Aggregation (bagging) based ensemble learning technique introduced in [26]. In a Deep Neural Network (DNN), usually only one model is kept for training and prediction on a dataset.
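Before moving on, the SGD update given above can be written directly; `grad_loss` below stands in for the gradient computed by backpropagation, and the linear-model example is ours for illustration:

```python
import numpy as np

def sgd_update(theta, X, y, grad_loss, eta=0.01):
    """One step of Theta_{k+1} = Theta_k - eta * (1/N) * sum_i grad l(f(x_i), y_i)."""
    N = len(X)
    total = np.zeros_like(theta)
    for xi, yi in zip(X, y):
        total += grad_loss(theta, xi, yi)
    return theta - eta * total / N

# example: linear model f(x) = theta . x with squared loss l = (f(x) - y)^2
def grad_sq(theta, x, y):
    return 2.0 * (theta @ x - y) * x
```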
However, in ensemble averaging, several neural network models are kept, and the predictions obtained by each learning model are aggregated to reduce the base models' bias and variance error. Generally, ensemble averaging can be calculated using Equation 1. \[\tilde{P}(x,\alpha)=\sum_{i=1}^{k}\alpha_{i}P_{i}(x) \tag{1}\] Here, \(\tilde{P}()\) represents the predictions of the ensemble model, \(x\) is the input data, \(\alpha\) is a list of weights, where \(\alpha_{i}\in\alpha\) is the weight assigned to a particular base model \(\mathcal{M}_{i}\in\mathcal{M}\), \(k\) is the number of participants, and \(P_{i}(x)\) is the resulting prediction probabilities of input data \(x\) from base model \(\mathcal{M}_{i}\). A raw average in ensemble averaging can be achieved by setting each \(\alpha_{i}\in\alpha\) to one over the total number of models in \(\mathcal{M}\).
### _Transfer Learning_
Transfer learning is a mechanism that allows a system to utilize the knowledge learned from one task in another task [27]. This can help address limitations in medical fields, where healthcare image data can rarely be shared for DL model training. Transfer learning comprises two concepts: a domain \(D\) and a learning task \(T\). Formally, a domain can be represented as \(D=\{\mathcal{X},P(X)\}\). Here, \(\mathcal{X}\) represents the feature space, and \(P(X)\) is the marginal probability distribution of sample \(X\) in \(\mathcal{X}\). A task can be denoted as \(T=\{\mathcal{Y},P(Y|X)\}\). Here, \(\mathcal{Y}\) is the label space, and \(P(Y|X)\) is a predictive probability function, which gives the conditional probability of \(Y\in\mathcal{Y}\) given \(X\in\mathcal{X}\).
Given a target task \(\mathcal{T}_{t}\) and a target domain \(\mathcal{D}_{t}\), we can transfer the knowledge from a source domain \(\mathcal{D}_{s}\) with a source task \(\mathcal{T}_{s}\) to improve the performance of the predictive function in \(\mathcal{T}_{t}\), where \(\mathcal{D}_{t}\neq\mathcal{D}_{s}\) and/or \(\mathcal{T}_{t}\neq\mathcal{T}_{s}\). Transfer learning is typically performed by transferring the learned parameters of a DL model that is well trained on a large dataset (e.g., ImageNet) to \(\mathcal{D}_{t}\). In this work, we focus on the use of transfer learning in a Deep Neural Network (DNN), where the source and target have the same domain and task.
### _Differential Privacy_
DP is a mechanism that aims to minimize the risk of privacy breaches in a particular database. The definition of differential privacy can be formalized as follows. **Definition III.1**: _A randomized mechanism \(M\) provides (\(\varepsilon\), \(\delta\))-differential privacy if, for any datasets \(D\) and \(D^{\prime}\) that differ in at most one element, and for any subset \(S\subseteq Range(M)\), where \(Range(M)\) represents the range of possible outcomes produced by \(M\),_ \[P(M(D)\in S)\leq e^{\varepsilon}\times P(M(D^{\prime})\in S)+\delta \tag{2}\] As given in Equation 2, \(\varepsilon\) is the privacy loss metric, which quantifies the privacy loss of the corresponding differentially private algorithm. Originally, differential privacy was proposed by [28] as \(\varepsilon\)-differential privacy; \(\delta\) was later introduced to relax this definition. \(\delta\) is the probability of an accidental information leakage and is expected to be smaller than \(\frac{1}{|D|}\), where \(|D|\) is the size of the database.
## IV Proposed System architecture
Figure 2 shows the architecture overview of the proposed method.
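As a concrete numeric illustration of Definition III.1 (not part of the proposed system), the Gaussian mechanism releases a query answer with additive noise \(\mathcal{N}(0,\sigma^{2})\); choosing \(\sigma=\Delta_{2}\sqrt{2\ln(1.25/\delta)}/\varepsilon\) satisfies (\(\varepsilon\), \(\delta\))-DP for \(\varepsilon<1\), where \(\Delta_{2}\) is the L2-sensitivity of the query. The counting query and the parameter values in the sketch below are assumptions chosen for illustration only.

```python
import math
import random

def gaussian_mechanism(true_answer, sensitivity, eps, delta, rng=random):
    """Release true_answer + N(0, sigma^2), with sigma calibrated so the
    release is (eps, delta)-differentially private (valid for eps < 1)."""
    sigma = sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / eps
    return true_answer + rng.gauss(0.0, sigma), sigma

# Hypothetical counting query over a patient database: adding or removing
# one record changes the count by at most 1, so the L2-sensitivity is 1.
noisy_count, sigma = gaussian_mechanism(true_answer=4213, sensitivity=1.0,
                                        eps=0.5, delta=1e-5)
```

Each query answered this way consumes privacy budget, which is why the framework applies noise once, inside local training, rather than on every released prediction.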
We assume that there are \(n\) hospitals, denoted as a set \(\{H_{1},H_{2},\ldots,H_{n}\}\), in the proposed system. Each hospital \(H_{i}(1\leq i\leq n)\) comprises several entities, namely, IoT devices, clients, edge nodes, and cloud. The roles of these components are described below: * _IoT devices:_ In the proposed framework, IoT devices are owned by a hospital \(H_{i}\). Each IoT device is integrated into a medical device and acts as a data source. The IoT devices of \(H_{i}\) capture medical images and store them in a local dataset. For \(n\) hospitals, there are \(n\) local datasets. * _Edge Data Server:_ An edge data server \(E_{Di}\) is an edge device owned by hospital \(H_{i}\) that stores the local dataset generated by the IoT devices of \(H_{i}\). Data in \(E_{Di}\) is considered private and cannot be shared with other hospitals, to ensure privacy. For the sake of simplicity, we assume that each \(H_{i}\) owns a single edge data server \(E_{Di}\). Users from the same hospital are connected to the same edge data server, while users from different hospitals are connected to different edge data servers. The data in the edge data server \(E_{Di}\) is used to privately train a model for the hospital \(H_{i}\). * _Private Edge Server:_ A private edge server \(E_{Si}\) is an entity that is locally owned by a hospital \(H_{i}\). The private edge server \(E_{Si}\) uses the local dataset in \(E_{Di}\) to train an initial model \(M_{I}\) locally. \(E_{Si}\) obtains \(M_{I}\) from a trusted source called the _trusted third party_. \(M_{I}\) is distributed among the private edge servers \(\{E_{S1},E_{S2},\ldots,E_{Sn}\}\) of all hospitals to create their respective privacy-preserving training models. * _Trusted Third Party:_ In our proposed framework, the trusted third party (TTP) is a secure cloud. TTP leverages a public dataset to train a model (i.e., \(M_{I}\)) using a CNN based deep neural network. We discuss the detailed process later in this section.
* _Cloud:_ The cloud is a public entity that collects all locally trained models from all hospitals. The set \(M\) of locally trained models is represented as \(\{M_{1},M_{2},\ldots,M_{n}\}\). The locally trained models are ensembled together to generate an aggregated model \(M_{\Sigma}\), which is then sent to all private edge servers for updating their respective local models.
## V Methodology
In this section, we discuss our proposed framework for privacy-preserving, high-performing, ensemble-assisted DL with transfer learning in an edge cloud consortium. Our proposed architecture has three major steps. The first step generates an initial model from a public dataset. The second step generates a locally trained model based on a local dataset while preserving privacy. The final step ensembles all local training models to obtain an aggregated model. Each of these steps is discussed below.
### _Initial Training Model Generation_
The first step of the proposed framework is the generation of the _initial model_ \(M_{I}\) by a TTP. TTP takes a public dataset \(D_{pub}\) as input and applies a Convolutional Neural Network (CNN) [29] to train \(M_{I}\). A typical CNN consists of four types of layers: the _convolutional layer_ (CONV), the _pooling layer_ (POOL), the flattening layer (FLAT), and the fully connected layer (FC). The CONV layer consists of multiple sub-layers that are used for feature extraction. The POOL layer acts as a merging layer; in the initial model generation, we use _max pooling_. The FLAT layer formats the extracted features to forward them to the FC layer. An overview of the initial model generation process is illustrated in Fig. 3. The process is summarized in Algorithm 1. In the initial model generation process, we use the Lung Cancer [30] public dataset, which contains 15K 2D images. Initial model training is performed by a trusted third party. TTP creates \(M_{I}\) with randomly initialized parameters \(\theta\).
During this process, TTP initializes the other model training parameters, such as the number of training epochs \(ep\) and the batch size \(b\). In each epoch, TTP uses an optimizer to update the parameters and a loss function to measure how well the model is performing. We use the _Categorical Crossentropy_ function [31] as the loss function and _Stochastic Gradient Descent (SGD)_ [32] as the optimizer, whose formulas are given in Equation 3 and Equation 4, respectively. \[l(y,\hat{y})=-\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{J}\left[y_{j}\cdot\log(\hat{y}_{j})+(1-y_{j})\cdot\log(1-\hat{y}_{j})\right]. \tag{3}\] Here, \(N\) is the number of data samples, \(J\) is the number of classes, \(y\) is a vector representing the true label, and \(\hat{y}\) is a vector representing the predicted class probabilities. \[\theta_{t+1}=\theta_{t}-\eta_{t}\cdot\frac{1}{N}\sum_{i=1}^{N}\nabla_{\theta_ {t}}l(y_{i},\hat{y}_{i}), \tag{4}\] where \(t\) is the current step, \(\eta\) is the learning rate, and \(\nabla\) is the derivative with respect to every parameter. \begin{table} \begin{tabular}{l l} \hline \(H_{i}\) & Hospitals \\ \(E_{Di}\) & Edge Data Server \\ \(E_{Si}\) & Private Edge Server \\ \(M_{I}\) & Initial Model \\ \(M_{n}\) & Locally Trained Model \\ \(D_{pub}\) & Public Dataset \\ \(ep\) & Epochs \\ \(b\) & Batch Size \\ \(D_{label}\) & A Set of Labeled Data Partitions \\ \(\overline{y}\) & Prediction on Input Data \\ \(\mathcal{L}\) & Computed Loss \\ \(grad\) & Local Computed Gradient \\ \(DB_{Li}\) & Local Dataset owned by \(H_{i}\) \\ \(M_{Pi}\) & Locally Trained Private Model \\ \(nm\) & Noise Multiplier \\ \(nc\) & Clipping Threshold \\ \(mb\) & Mini-batch Size \\ \(l\) & Layer Partition \\ \(L\) & Layers within CNN Model \\ \(\epsilon\) & Privacy Budget \\ \(\delta\) & Probability of Information Leakage Accident \\ \(M_{P}\) & Local Private Models \\ \(M_{E}\) & Ensembled Model \\ \hline \end{tabular} \end{table} TABLE I: Notations
Fig. 2: Overview of the proposed architecture
In the CNN-based training process, we first randomly sample data from \(D_{pub}\) using a function called \(trainLoader()\) according to the batch size \(b\), which results in multiple partitions of labeled data \(D_{label}\). An element of \(D_{label}\), represented as \(D_{label}^{j}\), is a pair \(\{input_{j},label_{j}\}\), where \(input_{j}\) is an image with label \(label_{j}\). Each partition holds \(b\) data points, which are fed into the initial model \(M_{I}\). Next, the current prediction \(\overline{y}\) is computed for \(D_{label}^{j}\) using a function \(computePrediction()\). The prediction \(\overline{y}\), together with the true label of the input data, is then used to compute the loss \(\mathcal{L}\) of the model using a function \(computeLoss()\). Further, we compute the gradient \(grad\) from the loss \(\mathcal{L}\) using a function \(computeGradient()\). Finally, the SGD parameters of the initial model \(M_{I}\) are updated using the function \(updateSGDParam()\) with the current \(grad\). Once the model is finalized, \(M_{I}\) is ready to be collected by any participant (i.e., hospital) from TTP.
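The training loop just described (\(trainLoader()\), \(computePrediction()\), \(computeLoss()\), \(computeGradient()\), \(updateSGDParam()\)) can be sketched in Python. This is a minimal illustration only: a tiny logistic-regression model on synthetic 1-D data stands in for the CNN on medical images, and the function names mirror the steps described above rather than any real API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for D_pub: 1-D inputs with a sign-based binary label
# (the actual system trains a CNN on 2-D medical images).
X = rng.normal(size=(200, 1))
y = (X[:, 0] > 0).astype(float)

w, b, eta = 0.0, 0.0, 0.5  # model parameters (theta) and learning rate

def train_loader(X, y, batch):
    """trainLoader(): shuffle the dataset and yield partitions of size b."""
    idx = rng.permutation(len(X))
    for s in range(0, len(X), batch):
        j = idx[s:s + batch]
        yield X[j], y[j]

def predict(Xb):
    """computePrediction(): sigmoid of a linear score."""
    return 1.0 / (1.0 + np.exp(-(Xb[:, 0] * w + b)))

def loss_fn(p, yb):
    """computeLoss(): cross-entropy, the binary case of Eq. (3)."""
    return -np.mean(yb * np.log(p + 1e-9) + (1 - yb) * np.log(1 - p + 1e-9))

epoch_losses = []
for epoch in range(20):                    # ep epochs
    for Xb, yb in train_loader(X, y, batch=25):
        p = predict(Xb)
        gw = np.mean((p - yb) * Xb[:, 0])  # computeGradient(): dL/dw
        gb = np.mean(p - yb)               # computeGradient(): dL/db
        w -= eta * gw                      # updateSGDParam(), Eq. (4)
        b -= eta * gb
    epoch_losses.append(loss_fn(predict(X), y))
```

The full-dataset loss recorded after each epoch decreases as the model fits the labels, which is the behavior Algorithm 1 relies on before \(M_{I}\) is released by TTP.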
```
Input: Public dataset, \(D_{pub}\)
Output: Initial public model, \(M_{I}\)
Initialization:
  Initial CNN model, \(M_{I}=\emptyset\)
  Number of epochs, \(ep\)
  Batch size, \(b\)
begin
  for each \(i\leq ep\) do
    \(D_{label}\leftarrow trainLoader(D_{pub},b)\)
    for each \(D_{label}^{j}\in D_{label}\) do
      \(\overline{y}\leftarrow D_{label}^{j}.computePrediction()\)
      \(\mathcal{L}\leftarrow D_{label}^{j}.computeLoss(\overline{y})\)
      \(grad\leftarrow D_{label}^{j}.computeGradient(\mathcal{L})\)
      \(M_{I}.updateSGDParam(grad)\)
    end for
  end for
  return \(M_{I}\)
end
```
**Algorithm 1** Initial model training
### _Generating a Locally Trained Model with Privacy_
In this step, each participant (i.e., hospital) \(H_{i}\) generates a local training model \(M_{Li}\) at the private edge server \(E_{Si}\) from its local dataset \(DB_{Li}\). As \(DB_{Li}\) contains sensitive information, \(E_{Si}\) applies a Differential Privacy (DP) based privacy-preserving mechanism. \(E_{Si}\) collects the initial model \(M_{I}\) from TTP and applies CNN with a Transfer Learning (TL) approach to generate the local model. An overview of the process is illustrated in Fig. 4. In our proposed training process (see Algorithm 2), we assume that there are \(p\) layers, denoted as \(\{L_{1},L_{2},\ldots,L_{p-1},L_{p}\}\). Initially, \(E_{Si}\) freezes the last two layers (i.e., \(L_{p-1}\) and \(L_{p}\)) of the initial model \(M_{I}\) and executes the first \(p-2\) layers of the CNN. We assume that the local model \(M_{Li}\) has a model parameter \(\theta_{i}\). To minimize the privacy leakage of \(\theta_{i}\), a differentially private SGD, named DP-SGD [33], is applied in the CNN layers. The _parameter update_ process in DP-SGD is similar to the original SGD. Nevertheless, noise is added to the parameter updates to ensure the privacy of \(\theta_{i}\). Let \(X\) be the set of private data such that \(X\subseteq DB_{Li}\).
At first, a set \(\overline{X}\) of random data points of size \(m\) is selected from \(X\). The set of data points is \(\overline{X}=\{x_{1},x_{2},\ldots,x_{m}\}\). Next, gradients are computed for each \(x_{k}\in\overline{X}(1\leq k\leq m)\) as follows: \[g_{t}[x_{k}]=\nabla_{\theta_{k}}l(f(x_{k}),y_{k}), \tag{5}\] where \(t\) is the current step, \(\theta_{k}\) is the current state of the model parameter, \(\nabla\) denotes the derivative with respect to every parameter, \(f(x_{k})\) is the model prediction with respect to input \(x_{k}\), \(y_{k}\) is the true label of input \(x_{k}\), and \(l()\) is the loss function. Finally, the gradients are used to update the model parameters. To apply differential privacy during the local model training, DP-SGD uses a few additional steps after the gradient computation, namely _gradient norm clipping_ and _noise addition_. Gradient norm clipping limits how much each individual training point sampled in a mini-batch can influence the resulting gradient computation. In DP-SGD, the gradient norm clipping can be calculated as follows: \[\bar{g}_{t}[x_{k}]=\frac{g_{t}[x_{k}]}{\max[1,||g_{t}[x_{k}]||_{2}/nc]}, \tag{6}\] where \(nc\) is the L2 norm clipping threshold. This threshold bounds the \(L_{2}\)-sensitivity, since the privacy guarantee of the Gaussian mechanism requires the per-coordinate standard deviation of the noise vector to scale linearly with the \(L_{2}\)-sensitivity of the gradient estimate \(g_{t}[x_{k}]\) [34]. The noise addition step adds noise to the clipped gradient to provide privacy to the model. DP-SGD uses the Gaussian noise mechanism, as shown in the following equations: \[\xi_{t}\sim\mathcal{N}(0,\,nm^{2}\,nc^{2}\,\mathbf{I}), \tag{7}\] \[\tilde{g}_{t}=\frac{1}{m}\left(\sum_{k=1}^{m}\bar{g}_{t}[x_{k}]+\xi_{t}\right), \tag{8}\] where \(nm\) is the noise multiplier value, and \(m\) equals the mini-batch size \(mb\).
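A minimal numeric sketch of the clipping and noise-addition steps (Eqs. (6)-(8)) is given below. It is illustrative only: the per-example gradients are random placeholders for the real backpropagated CNN gradients, and the per-coordinate noise standard deviation \(nm\cdot nc\) follows the standard DP-SGD formulation of [33].

```python
import numpy as np

rng = np.random.default_rng(1)

nc = 1.0   # L2 clipping threshold
nm = 1.1   # noise multiplier
mb = 8     # mini-batch size

# Placeholder per-example gradients for one mini-batch of size mb,
# over a model with 4 parameters (stand-in for the CNN's gradients).
per_example_grads = rng.normal(scale=3.0, size=(mb, 4))

# Eq. (6): rescale each gradient so that its L2 norm is at most nc.
norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
clipped = per_example_grads / np.maximum(1.0, norms / nc)

# Eqs. (7)-(8): sum the clipped gradients, add Gaussian noise with
# standard deviation nm * nc per coordinate, and average over the batch.
noise = rng.normal(scale=nm * nc, size=clipped.shape[1])
private_grad = (clipped.sum(axis=0) + noise) / mb
```

Because every clipped gradient has norm at most \(nc\), swapping one training point changes the summed gradient by at most \(nc\) in L2 norm, which is exactly the sensitivity the Gaussian noise is calibrated against.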
The last two layers of the CNN model produced by this training process are then removed to produce a differentially private model \(M_{Pi}\) of the private edge server \(E_{Si}\) for the hospital \(H_{i}\). This allows the output of each private model to be averaged together to produce an ensembled output that is fed as an input to the last two layers of the initial model \(M_{I}\). \(E_{Si}\) sends \(M_{Pi}\) to the cloud for ensembling.
Fig. 3: Generation process of Initial Model by Trusted-Third Party
```
Input: Initial public model \(M_{I}\), Local data \(DB_{Li}\) of \(E_{Si}\)
Output: Local private model of \(E_{Si}\), \(M_{Pi}\)
Initialization:
  Noise multiplier, \(nm\)
  Clipping threshold, \(nc\)
  Mini-batch size, \(mb\)
  Layer partition, \(l\)
  Number of epochs, \(ep\)
  Set of layers, \(L=\{L_{1},L_{2},\ldots,L_{p}\}\)
begin
  for each \(e\in ep\) do
    \(\overline{X}\leftarrow trainLoader(DB_{Li},mb)\)
    for each \(L_{j}\) with \(j>(p-2)\) do
      \(M_{I}.freeze(L_{j})\)
    end for
    for each \(x_{i}\in\overline{X}\) do
      \(\overline{y}\leftarrow computePrediction(x_{i})\)
      \(\mathcal{L}\leftarrow l(\overline{y},M_{I}.labels)\)
      \(grad\leftarrow computeGradient(\mathcal{L})\)
      \(M_{Pi}.updateSGDParameter(grad)\)
    end for
  end for
  return \(M_{Pi}\)
end
```
**Algorithm 2** Generating local private model \(M_{Pi}\)
### _Ensembled Model Generation from Local Private Models_
The ensembled model construction is the final step of the proposed method. This step is performed by a public cloud once all local private models are received. The set of all differentially private models is represented as \(M_{P}=\{M_{P1},M_{P2},\ldots,M_{Pn}\}\), where \(n\) is the number of hospitals. To aggregate the models, a Bootstrap Aggregation or BAGGing [35] based ensemble averaging technique is used (see Fig. 5). The averaging function, denoted as \(\bar{X}\), takes the \((p-2)\)-th layer outputs of the local private models as input.
The operation of \(\bar{X}\) can be expressed as: \[\bar{X}=\frac{1}{n}\sum_{k=1}^{n}z_{ik}, \tag{9}\] where \(n\) is the number of private models, and \(z_{ik}\) is the value at the \(i\)-th position of the output vector of the \(k\)-th private model. Next, the results are fed as input to the last two layers (\(L_{p-1}\) and \(L_{p}\)) of the initial model \(M_{I}\). Finally, we obtain the ensembled model \(M_{E}\), which is distributed to all private edge servers.
```
Input: Set of local private models, \(M_{P}=\{M_{P1},M_{P2},\ldots,M_{Pn}\}\)
       Initial model, \(M_{I}\)
Output: Ensembled model, \(M_{E}\)
Initialization:
  Ensembled model, \(M_{E}\leftarrow\emptyset\)
  Vector of \((p-2)\) layer values, \(z\leftarrow\emptyset\)
begin
  for each \(M_{Pi}\in M_{P}\) do
    \(z.add(M_{Pi}.getValues(L_{p-2}))\)
  end for
  \(avg\gets computeAvg(z)\)  // using Eq. (9)
  \(M_{E}\gets generateModel(avg)\)
  // Add the last two layers for prediction
  \(M_{E}.addLayer(L_{p-1})\)
  \(M_{E}.addLayer(L_{p})\)
  return \(M_{E}\)
end
```
**Algorithm 3** Ensemble model construction
## VI Results and Discussion
In this section, we provide information on the experimental setup used. We then report experiments on the MNIST and Lung Cancer datasets to evaluate the performance of our scheme.
### _Experimental Setup_
#### Vi-A1 Testing Environment
We used AWS SageMaker for our experiments. We chose AWS g4dn.2xlarge machines, which contain 1 NVIDIA T4 GPU with 16 GB GPU memory and 32 GB RAM. The experiments were carried out using Python version 3.7.
Fig. 4: Local model transfer and private model training
Fig. 5: Ensemble assisted aggregated model generation
#### Vi-A2 Datasets
For all experiments, we consider five clients participating in the training process. The training datasets and models are defined as follows: * **MNIST**. The MNIST dataset consists of 28 \(\times\) 28 multi-class handwritten digit images covering the digits 0 through 9. The dataset consists of 70,000 images with the training and testing datasets combined.
For the MNIST dataset experiment, we consider a CNN model as shown in Figure 6. * **Lung and colon cancer**. The lung and colon cancer dataset retrieved from [30] consists of 768 \(\times\) 768 images from five classes (lung_n, lung_scc, lung_aca, colon_n, colon_aca). Each class consists of 5000 images. In this experiment, we use three of the five classes: lung_n, lung_scc, and lung_aca. The CNN model we use for the Lung Cancer dataset is shown in Figure 7. Data in our experiment is divided into four different parts, as shown in Figure 8. We partition our data to reflect a real-world scenario. Validation data is a partition that represents any unforeseen or future data to be predicted. This partition is used to test our initial public model and the final ensemble model. Public training data is used to train our initial public model. Its size is set to be smaller than the other partitions. Private training data represents the data held by private institutions and is used to train our private model. The private test data, in turn, is used to test the privately trained model. To simplify our experiments, the data partition sizes are kept fixed unless stated otherwise. During our experiments with the MNIST dataset, the validation data, public data, private training data, and private test data are set to 28000, 420, 6653, and 1663, respectively. For the Lung Cancer dataset, the validation data, public data, private training data, and private test data are set to 6000, 90, 1426, and 356, respectively.
### _Results Analysis_
In our results analysis, we first observe the effect of the public data size used to train the initial public model on the private model training accuracy. In Fig. 9 and Fig. 10, we use different public data sizes to train our initial public model. Figure 9 shows the experiments using the MNIST dataset, where we use four settings of the public and private datasets.
The sizes of the public and private data used for training are defined as follows: (a): 42 public data and 6713 private data; (b): 210 public data and 6687 private data; (c): 420 public data and 6653 private data; (d): 2100 public data and 6384 private data. For the model training configuration, the MNIST dataset is trained for 60 epochs, and the batch size is set to 250. Regarding the privacy preservation parameters, we set the clipping threshold value to 1.5. Figure 10 shows the experiments for the Lung Cancer dataset, where we also use four different settings. The sizes of the public and private data used for training are defined as follows: (a): 9 public data and 1439 private data; (b): 45 public data and 1433 private data; (c): 90 public data and 1426 private data; (d): 450 public data and 1368 private data. For the model training configuration, the Lung Cancer dataset is trained for 200 epochs, and the batch size is set to 18. The clipping threshold value used for the Lung Cancer dataset experiment is 1.0. As can be seen in both figures, increasing the public data size increases the private model accuracy. However, the growth of the private model accuracy becomes less significant as the initial public model performance increases. Next, we observe the effect of the noise multiplier on private model training for the two datasets. Fig. 11 shows the experiment using noise multipliers of 0.9, 1.1, 1.3, and 1.5 on the MNIST dataset. The results show that there is no significant difference between the four cases. We employ the same noise multiplier configuration for the Lung Cancer dataset. Compared to the MNIST dataset, increasing the noise on the Lung Cancer dataset results in more dispersed accuracies across the private models (see Fig. 12). We summarise our experiments in Table II and Table III. Table II exhibits a summary of the model performance on the MNIST dataset when the initial model is trained with a 0.001 learning rate and the private model with a 0.15 learning rate.
Fig. 6: CNN model for MNIST
Fig. 7: CNN model for Lung Cancer
Table III presents the summary of the model performance on the Lung Cancer dataset when the initial model is trained with a 0.001 learning rate and the private model with a 0.015 learning rate. From Table II, it can be seen that the final ensemble model tends to provide accuracy higher than the average accuracy of all private models. However, when the \(\epsilon\) value is 1.9, the ensemble model accuracy remains the same as the average accuracy of all private models. While only a slight improvement can be seen for the final ensembled model in Table II, a more significant accuracy improvement of the final ensemble model can be seen on the Lung Cancer dataset (see Table III); specifically, when the value of \(\epsilon\) is 1.7, the improvement of the final model accuracy reaches up to 10 percent over the private model accuracy. However, despite this significant improvement in the final model accuracy, when the noise is large enough, the performance of the private model lies below the initial model accuracy. Finally, we compare our proposed method's performance with the existing methods TrPATE [18] and COFEL [36] on the MNIST dataset to prove the effectiveness of our strategy. For the comparison, we use the same model configurations provided in those papers. The results are summarised in Table IV. As can be seen from the table, our method outperformed the other existing methods even for a smaller value of \(\epsilon\).
## VII Conclusion
This paper proposes an ensemble and transfer learning infused framework for privacy-preserving DNN model generation in IoT, edge, and cloud convergence. Differential Privacy is used to add noise to the local model to ensure the privacy of the model.
As adding noise to the model significantly reduces the model performance, Transfer Learning is used with the CNN to reduce the loss and improve the efficiency of the local model. The proposed framework involves multiple participants. Hence, local models from different participants are ensembled at the cloud to generate a collective learning model, called the final model, to ensure higher prediction accuracy. From our experiments, we demonstrated that transferring the knowledge of public data in ensemble learning enhances the accuracy of the final model. The effectiveness of our model has been compared against state-of-the-art methods such as TrPATE and COFEL. Experimental results show that our method outperformed the existing work. In this article, we assume that all participants use the same knowledge domain. Further research should investigate the performance when transferring knowledge across different domains.
## Acknowledgement
This work is supported by the Australian Research Council Discovery Project (DP210102761).
2306.07981
Feature Engineering-Based Detection of Buffer Overflow Vulnerability in Source Code Using Neural Networks
One of the most significant challenges in the field of software code auditing is the presence of vulnerabilities in software source code. Every year, more and more software flaws are discovered, either internally in proprietary code or publicly disclosed. These flaws are highly likely to be exploited and can lead to system compromise, data leakage, or denial of service. To create a large-scale machine learning system for function level vulnerability identification, we utilized a sizable dataset of C and C++ open-source code containing millions of functions with potential buffer overflow exploits. We have developed an efficient and scalable vulnerability detection method based on neural network models that learn features extracted from the source codes. The source code is first converted into an intermediate representation to remove unnecessary components and shorten dependencies. We maintain the semantic and syntactic information using state of the art word embedding algorithms such as GloVe and fastText. The embedded vectors are subsequently fed into neural networks such as LSTM, BiLSTM, LSTM Autoencoder, word2vec, BERT, and GPT2 to classify the possible vulnerabilities. Furthermore, we have proposed a neural network model that can overcome issues associated with traditional neural networks. We have used evaluation metrics such as F1 score, precision, recall, accuracy, and total execution time to measure the performance. We have conducted a comparative analysis between results derived from features containing a minimal text representation and semantic and syntactic information.
Mst Shapna Akter, Hossain Shahriar, Juan Rodriguez Cardenas, Sheikh Iqbal Ahamed, Alfredo Cuzzocrea
2023-06-01T01:44:49Z
http://arxiv.org/abs/2306.07981v1
Feature Engineering-Based Detection of Buffer Overflow Vulnerability in Source Code Using Neural Networks ###### Abstract One of the most significant challenges in the field of software code auditing is the presence of vulnerabilities in software source code. Every year, more and more software flaws are discovered, either internally in proprietary code or publicly disclosed. These flaws are highly likely to be exploited and can lead to system compromise, data leakage, or denial of service. To create a large-scale machine learning system for function-level vulnerability identification, we utilized a sizable dataset of C and C++ open-source code containing millions of functions with potential buffer overflow exploits. We have developed an efficient and scalable vulnerability detection method based on neural network models that learn features extracted from the source codes. The source code is first converted into an intermediate representation to remove unnecessary components and shorten dependencies. We maintain the semantic and syntactic information using state-of-the-art word embedding algorithms such as GloVe and fastText. The embedded vectors are subsequently fed into neural networks such as LSTM, BiLSTM, LSTM-Autoencoder, word2vec, BERT, and GPT-2 to classify the possible vulnerabilities. Furthermore, we have proposed a neural network model that can overcome issues associated with traditional neural networks. We have used evaluation metrics such as F1 score, precision, recall, accuracy, and total execution time to measure the performance. We have conducted a comparative analysis between results derived from features containing a minimal text representation and semantic and syntactic information. We have found that all neural network models provide higher accuracy when we use semantic and syntactic information as features. However, this approach requires more execution time due to the added complexity of the word embedding algorithm. 
Moreover, our proposed model provides higher accuracy than the LSTM, BiLSTM, LSTM-Autoencoder, word2vec and BERT models, and the same accuracy as the GPT-2 model with greater efficiency. **Keywords**: Cyber Security; Vulnerability Detection; Neural Networks; Feature Extraction;
## I Introduction
Security in the digital realm is becoming increasingly important, but there is a significant threat to cyberspace from invasion. Attackers can breach systems and applications due to security vulnerabilities caused by hidden software defects. Thousands of these flaws are found in proprietary code internally each year [1]. For example, the ransomware WannaCry swept the globe by exploiting a flaw in the Windows Server Message Block protocol [2]. According to the Microsoft Security Response Center, there was an industry-wide surge in high-severity vulnerabilities of 41.7% in the first half of 2015. This represents the greatest proportion of software vulnerabilities in at least three years, accounting for 41.8% [3]. Furthermore, according to a Frost and Sullivan analysis released in 2018, severe and high severity vulnerabilities increased from 693 in 2016 to 929 in 2017, with Google Project Zero coming in second place in terms of disclosing such flaws. On August 14, 2019, Intel issued a warning about a high-severity vulnerability in the software it uses to identify the specifications of Intel processors in Windows PCs [4]. The report claims that these defects, including information leakage and denial of service attacks, might substantially affect software systems. Although the company issued an update to remedy the problems, attackers can still use these vulnerabilities to escalate their privileges on a machine that has already been compromised. In June 2021, a vulnerability in the Windows Print Spooler service was discovered that allowed attackers to execute code remotely.
The vulnerability, known as PrintNightmare, was caused by a buffer overflow and affected multiple versions of Windows in 2021 [5]. Microsoft released a patch to address the issue, but reports later emerged that the patch was incomplete and still left systems vulnerable. To reduce losses, early vulnerability detection is an effective technique. The proliferation of open-source software and code reuse makes these vulnerabilities susceptible to rapid propagation. Source code analysis tools are already available; however, they often only identify a small subset of potential problems based on pre-established rules. Software vulnerabilities can be found using a technique called vulnerability detection. Conventional vulnerability detection employs static and dynamic techniques [6]. Static approaches, such as data flow analysis, symbolic execution [7], and theorem proving [8], evaluate source code or executable code without launching any programs. Static approaches can be used early in software development and have excellent coverage rates, but they suffer from a significant false positive rate. By executing the program, dynamic approaches like fuzz testing and dynamic symbolic execution can confirm or ascertain the behavior of the software. Dynamic methods depend on the coverage of test cases, which results in a low recall despite their low false positive rate and ease of implementation. Advances in machine learning technology have introduced new approaches that address the limitations of conventional techniques. One of the key research directions is to develop intelligent source code-based vulnerability detection systems. These can be divided into three categories: approaches using software engineering metrics, anomaly detection, and weak pattern learning [9]. Initially, software engineering measures, including software complexity [10], developer activity [11], and code commits [12], were investigated to train a machine learning model.
This strategy was motivated by the idea that software becomes more susceptible as it becomes more complicated, but its accuracy and recall need improvement. Allamanis et al. [13] have shown that the syntactic and semantic information in code increases detection accuracy in anomaly detection. Moreover, one work has demonstrated anomaly detection using fully-fledged code [14]. It reveals previously unidentified weaknesses, but its false positive and false negative rates are high. Another work presented an approach that learns vulnerable patterns from clean and vulnerable samples [15]. This method performs very well but relies on the quality of the dataset. In our work, we propose a solution for detecting software buffer overflow vulnerabilities using neural networks such as Simple RNN, LSTM, BiLSTM, word2vec, BERT, GPT-2, and LSTM-Autoencoder. We first transform source code samples into minimal intermediate representations through a tokenizer provided by the Keras library. Later, we extract semantic features using word embedding algorithms such as GloVe and fastText. After finishing the data preprocessing stage, we feed the input representation to the neural networks for classification. Moreover, we develop a neural network that performs best among all the models. All the models have been evaluated using metrics such as F1 score, precision, recall, accuracy, and total execution time. The following is a summary of our contributions:

1. Extracting semantic and syntactic features using GloVe and fastText.
2. Vulnerability detection in source code using LSTM, BiLSTM, LSTM-Autoencoder, word2vec, BERT, and GPT-2 with a minimal intermediate feature representation of the texts.
3. Vulnerability detection in source code using LSTM, BiLSTM, LSTM-Autoencoder, word2vec, BERT, and GPT-2 with semantic and syntactic features.
4. Proposal of a neural network that outperforms the results derived from existing models.
5. Comparison between results derived from neural networks trained with a minimal intermediate feature representation of the texts and those trained with semantic and syntactic features.

The rest of the paper is organized as follows: we provide a brief background study on software vulnerability detection in Section 2. We then explain the methods we followed in our experimental research in Section 3. The results derived from the experiments are presented in Section 4. Finally, Section 5 concludes the paper.

## II Literature Review

Researchers are interested in recently developed machine learning strategies for identifying and preventing software and cybersecurity vulnerabilities in order to address the shortcomings of conventional static and dynamic code analysis techniques. Various machine learning techniques, including naive Bayes, logistic regression, recurrent neural networks (RNN), decision trees, and support vector machines, have been successfully used for classifying software security activities such as malware, ransomware, and network intrusion detection. We have examined machine learning-related papers that have been applied to the software security domain. Previously, Zeng et al. [16] reviewed software vulnerability analysis and discovery using deep learning techniques. They found four game-changing methods that contributed most to software vulnerability detection using deep learning techniques: automatic semantic feature extraction using deep learning models, end-to-end solutions for detecting buffer overflow vulnerabilities, applying a bidirectional Long Short-Term Memory (BiLSTM) model for vulnerability detection, and deep learning-based vulnerability detectors for binary code. Zhou et al. [17] proposed a graph neural network method for vulnerability identification with function-level granularity to address the issue of information loss during the representation learning process. They transformed the samples into a code property graph format.
Then, a graph neural network composed of a convolutional layer and a gated graph recurrent layer learned the vulnerable programming patterns. This method improves the detection of intra-procedural vulnerabilities; however, it does not address inter-procedural vulnerabilities. Iorga et al. [18] demonstrated a process for early detection of cyber vulnerabilities from Twitter, building a corpus of 650 annotated tweets related to cybersecurity articles. They used the BERT model with transfer learning to identify cyber vulnerabilities from the articles. The BERT model shows 91% accuracy, which they found adequate for identifying relevant posts or news articles. Sauerwein et al. [19] presented an approach for the automated classification of attackers' tactics, techniques, and procedures (TTPs) extracted from unstructured text by combining NLP with ML techniques. They assessed all potential combinations of the specified NLP and ML approaches with 156 processing pipelines and an automatically generated training set, and found that tokenization, POS tagging, IoC replacement, lemmatization, one-hot encoding, binary relevance, and support vector machines performed best for the classification of techniques and tactics. Harer et al. [20] created a dataset composed of millions of open-source functions annotated with results from static analysis. The performance of source-based models was then compared against approaches applied to artifacts extracted from the build process, with source-based methods coming out on top. The best performance was found when combining characteristics learned by deep models with tree-based models. They evaluated the use of deep neural network models alongside more conventional models like random forests. Finally, their best model achieved an area under the ROC curve of 0.87 and an area under the precision-recall curve of 0.49. Pistoia et al.
[21] surveyed static analysis methods for identifying security vulnerabilities in software systems. They discussed three topics that have been linked to security vulnerability sources: application programming interface conformance, information flow, and access control. They addressed static analysis methods for stack-based access control and role-based access control separately, since access control systems can be divided into these two main types. They reviewed several effective static analysis techniques, including the Mandatory Access Rights Certification of Objects (MARCO) algorithm, the Enterprise Security Policy Evaluation (ESPE) algorithm, the Static Analysis for Validation of Enterprise Security (SAVES) algorithm, and Hammer, Krinke, and Snelting's algorithm. However, static analysis produces false positive results and relies on predefined rules; it is unsuitable for new errors, as it cannot recognize and detect them.

## III Methodology

From the standpoint of source code, the majority of flaws originate in critical operations that pose security risks, such as functions, assignments, or control statements. Adversaries can directly or indirectly affect these crucial operations by manipulating factors or circumstances. To successfully learn patterns of security vulnerabilities from code, neural network models must be trained on a large number of instances. In this study, we analyze code at the function level, the lowest level in software packages capable of capturing vulnerable flows. We utilized a sizable dataset containing millions of function-level examples of C and C++ code from the SATE IV Juliet Test Suite, the Debian Linux distribution, and open-source Git repositories on GitHub, as described in Russell's work [22]. Our project employs the CWE-119 vulnerability feature, which indicates issues related to buffer overflow vulnerabilities.
Buffer overflow occurs when data written to a buffer exceeds its length, overwriting storage units outside the buffer. According to a 2019 Common Weakness Enumeration report, buffer overflow has become the most impactful weakness. Although we focus on buffer overflow, our method can identify other vulnerabilities. Figure 1 illustrates an intra-procedural buffer overflow vulnerability. Our dataset is divided into three subfolders (train, validation, and test), each containing a CSV file with 100,000, 20,000, and 10,000 data instances, respectively. The CSV files store text data and corresponding labels, allowing systematic evaluation of the model's performance and adaptability throughout the learning process. We analyzed the dataset and found some common words (shown in Table 1) with their corresponding counts. The visualization of common words in the dataset provides a preliminary understanding of what kind of important features the dataset might have.

### _Data Preprocessing_

In this study, we conducted a series of data preprocessing techniques to prepare our dataset for the neural networks. The data preprocessing steps we employed include tokenization, stop word removal, stemming, lemmatization, and the use of pre-trained embeddings. Initially, we performed tokenization, which is the process of breaking down the source code into smaller units called tokens. Tokens represent the basic units of analysis for computational purposes in natural language processing tasks. For this process, we utilized the Keras tokenizer, which provides methods such as tokenize() and detokenize() to process plain text and separate words [23]. Following tokenization, we applied stop word removal, stemming, and lemmatization techniques to further preprocess the tokens. Stop word removal eliminates common words that do not provide significant information, while stemming and lemmatization normalize the tokens by reducing them to their root form.
These techniques help in reducing noise and improving the efficiency of the neural networks. We first converted the tokens into numerical representations using a minimal intermediate representation with the Keras tokenizer. The Keras tokenizer assigns a unique integer index to each token in the vocabulary and represents the source code as a sequence of these integer indices. This representation is more efficient than one-hot encoding, as it does not involve creating large, sparse vectors. However, it still lacks semantic information about the tokens. To further enhance the representation of the source code tokens and better capture semantic and syntactic information, we utilized pre-trained embeddings, namely GloVe and fastText. We stacked GloVe and fastText embeddings together to extract the semantic and syntactic information from the source code. Both of these embeddings have demonstrated strong performance in various NLP tasks and can effectively capture the relationships between words in the source code. GloVe is an unsupervised learning algorithm that generates vector representations of words based on global word-word co-occurrence statistics from a corpus [24].

\begin{table}
\begin{tabular}{|l|l|l|}
\hline index & Common\_words & Count \\
\hline 0 & = & 505570 \\
\hline 1 & if & 151663 \\
\hline 2 & \{\}\(\backslash\)n & 113301 \\
\hline 3 & \(==\) & 92654 \\
\hline 4 & return & 77438 \\
\hline 5 & * & 71897 \\
\hline 6 & the & 71595 \\
\hline 7 & \(\backslash\)n & 63182 \\
\hline 9 & int & 53673 \\
\hline 10 & /* & 51910 \\
\hline 11 & i & 43703 \\
\hline 12 & */\(\backslash\)n & 43591 \\
\hline 13 & + & 41855 \\
\hline 14 & to & 39072 \\
\hline 15 & \& & 36180 \\
\hline 16 & for & 35849 \\
\hline 17 & \(\backslash\)n\(\backslash\)n & 34017 \\
\hline 18 & char & 33334 \\
\hline 19 & else & 31358 \\
\hline \end{tabular}
\end{table}
TABLE I: Most common words and their frequencies

Fig. 1: An example of buffer overflow vulnerability.
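The minimal intermediate representation described above, in which each token in the vocabulary receives a unique integer index, can be sketched in pure Python. This is a simplified stand-in for the Keras tokenizer, and the helper names and toy code samples below are illustrative, not the paper's actual pipeline:

```python
from collections import Counter

def build_vocab(token_lists):
    """Assign a unique integer index to each token, ordered by frequency.
    Index 0 is implicitly reserved for padding, as in the Keras tokenizer."""
    counts = Counter(tok for toks in token_lists for tok in toks)
    # Most frequent token gets index 1, the next gets 2, and so on.
    return {tok: i + 1 for i, (tok, _) in enumerate(counts.most_common())}

def encode(tokens, vocab):
    """Replace each token with its integer index; unknown tokens are dropped."""
    return [vocab[tok] for tok in tokens if tok in vocab]

# Two toy "function-level" code samples, already tokenized.
samples = [["if", "(", "x", "==", "0", ")", "return", ";"],
           ["if", "(", "y", "==", "1", ")", "return", ";"]]
vocab = build_vocab(samples)
encoded = [encode(s, vocab) for s in samples]
```

Each source sample thus becomes a short integer sequence; an embedding layer (or a pre-trained GloVe/fastText lookup) can then map each index to a dense vector.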
FastText, an extension of the skip-gram method, generates character n-grams of varying lengths for each word and learns weights for each n-gram, as well as for the entire word token, allowing the model to capture the meaning of suffixes, prefixes, and short words [25]. We separately fed the minimal intermediate representation produced by the Keras tokenizer and the semantic and syntactic representations derived from GloVe and fastText into our neural network models. This approach allowed us to compare the performance of the models when using different input representations, helping us identify the most effective method for detecting security vulnerabilities in source code.

### _Classification Models_

In this section, we discuss the classification models utilized in our study: Simple RNN, LSTM, BiLSTM, LSTM-Autoencoder, word2vec, BERT, and GPT-2. These models are designed to work with different types of data, such as text, time series, and sequences, and have been widely employed in natural language processing and related tasks.

### _Simple Recurrent Neural Network (RNN)_

The Simple Recurrent Neural Network (RNN) is a type of artificial neural network that models sequential data using a directed-graph structure and temporally dynamic behavior. RNNs consist of an input layer, a hidden layer, and an output layer [26]. These networks have a memory state added to each neuron, allowing them to capture temporal dependencies in the data. The dimensionality of the input layer in our Simple RNN model is determined by the input data features. The hidden layer consists of 256 units, which use memory states to capture temporal dependencies in the data. We use the hyperbolic tangent (tanh) activation function in the hidden layer to introduce non-linearity into the model.
We chose this activation function for its ability to handle vanishing gradients more effectively than alternatives such as sigmoid. The output layer of the Simple RNN model generates predictions based on the processed input data. The number of units in the output layer corresponds to the number of classes, which is two. We use an appropriate activation function, such as sigmoid for binary classification, in the output layer to generate probability scores for each class. To optimize the model, we choose the binary cross-entropy loss function and employ the Adam optimization algorithm. We set the learning rate to 0.001, the batch size to 32, and the number of training epochs to 50.

### _Long Short-Term Memory (LSTM)_

The Long Short-Term Memory (LSTM) network is a type of recurrent neural network designed to solve the vanishing and exploding gradient problems of traditional RNNs. It was first proposed by Hochreiter and Schmidhuber [27]. The model is effective for sequential datasets and can handle single data points. It follows the Simple RNN model's design and is an extended version of that model [28, 29]. Our LSTM model consists of an input layer whose dimensionality is determined by the input data features. We incorporated three hidden layers, each containing 128 memory cells that can capture long-term dependencies in the input sequence. The output of each LSTM layer is fed into a dropout layer with a dropout rate of 0.2 to prevent overfitting. The final output of the last LSTM layer is fed into a dense layer with two units and a sigmoid activation function to produce the final binary classification output. The LSTM cell comprises three gates (the input gate, forget gate, and output gate) that regulate the flow of information into and out of the cell. To introduce non-linearity into the model, we use the hyperbolic tangent (tanh) activation function in the LSTM cell.
Furthermore, we utilize the Rectified Linear Unit (ReLU) activation function in the output layer to generate non-negative predictions. We optimize the LSTM model using the binary cross-entropy loss function and the Adam optimization algorithm. The model's hyperparameters include a learning rate of 0.001, a batch size of 32, and 50 training epochs.

### _Bidirectional Long Short-Term Memory (BiLSTM)_

The Bidirectional Long Short-Term Memory (BiLSTM) network is a type of recurrent neural network that enhances the capabilities of the traditional LSTM by processing the input sequence bidirectionally. It was first proposed by Graves [30]. This sets it apart from the LSTM model, which can only learn patterns from the past to the future [31]. Our BiLSTM model comprises an input layer whose dimensionality is determined by the input data features. We have incorporated three hidden layers, each containing 128 memory cells that can capture long-term dependencies in the input sequence. The output of each BiLSTM layer is fed into a dropout layer with a dropout rate of 0.2 to prevent overfitting. The final output of the last BiLSTM layer is fed into a dense layer with two units and a sigmoid activation function to produce the final binary classification output. The BiLSTM cell has two sets of three gates (the input gate, forget gate, and output gate): one set processes the input sequence in the forward direction and the other processes it in the backward direction. This bidirectional processing allows the model to capture dependencies in both the past and future context of the input sequence. To introduce non-linearity into the model, we use the hyperbolic tangent (tanh) activation function in the BiLSTM cell. Furthermore, we utilize the Rectified Linear Unit (ReLU) activation function in the output layer to generate non-negative predictions. We optimize the BiLSTM model using the binary cross-entropy loss function and the Adam optimization algorithm.
The model's hyperparameters include a learning rate of 0.001, a batch size of 32, and 50 training epochs.

### _LSTM-Autoencoder_

The LSTM-Autoencoder is a variant of the Long Short-Term Memory (LSTM) model that utilizes an autoencoder architecture. It is designed to read input sequences, encode them, decode them, and reconstruct them for a given sequential dataset, in what is referred to as an encoder-decoder architecture [32]. Its performance is estimated based on how well the model can recreate the sequence. LSTM-Autoencoders can be used on video, text, audio, and time-series sequence data. The model accepts sequences of various lengths as inputs and outputs for various purposes, such as translating from one language to another. The encoder transforms the sequence into a vector representation, and the decoder transforms the vector back into a sequence of outputs or texts, with the meaning of the outputs preserved in the vector representation. In this model, we have an input layer whose dimensionality is determined by the input data features. The LSTM encoder layer contains 128 memory cells that can capture long-term dependencies in the input sequence. The LSTM decoder layer has the same number of memory cells as the encoder layer, which allows the model to reconstruct the input sequence. To introduce non-linearity into the model, we use the hyperbolic tangent (tanh) activation function in the LSTM cells. Additionally, we utilize the Mean Squared Error (MSE) loss function to calculate the reconstruction loss of the autoencoder. The model's hyperparameters include a learning rate of 0.001, a batch size of 32, and 50 training epochs. To evaluate the performance of the LSTM-Autoencoder, we calculate the reconstruction error between the input and the reconstructed sequence. The lower the reconstruction error, the better the model's ability to capture the input sequence's structure.
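The reconstruction-error evaluation described above boils down to a mean-squared-error computation between an input sequence and its reconstruction. A minimal pure-Python sketch follows; the sequences below are illustrative stand-ins for an encoded code sample and the autoencoder's outputs:

```python
def reconstruction_error(original, reconstructed):
    """Mean squared error between an input sequence and its reconstruction.
    A lower value means the autoencoder captured the sequence's structure better."""
    assert len(original) == len(reconstructed)
    return sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)

# A toy encoded token sequence and two candidate reconstructions.
sequence = [1.0, 4.0, 2.0, 7.0]
close = [1.1, 3.9, 2.0, 6.8]    # faithful reconstruction -> small error
poor = [0.0, 0.0, 0.0, 0.0]     # unfaithful reconstruction -> large error
```

Ranking samples by this error is also how sequence autoencoders are commonly used for anomaly detection: inputs that reconstruct poorly deviate from the patterns seen in training.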
### _Word2vec_

Word2vec is a word embedding model specifically designed for working with textual data. Word embedding is a technique for representing words that allows computer programs to recognize words with similar meanings. By employing a neural network model to map words into vectors of real numbers, word2vec captures syntactic and semantic word relationships with significant accuracy. After training, the two-layer neural network can recognize synonymous terms and suggest new words for incomplete phrases [33]. Our word2vec model comprises an input layer that takes in the one-hot encoded words and a single hidden layer containing a specified number of neurons, which represent the latent dimensions of the word embeddings. We utilize the skip-gram architecture with negative sampling to train the word2vec model. In this architecture, the model predicts the surrounding words given a target word, or the target word given the surrounding words. The negative sampling technique helps to train the model efficiently by reducing the computation required to update its weights. The output layer is not used after training; the trained weights of the hidden layer represent the learned word embeddings. These embeddings can be used in various downstream NLP tasks such as text classification, sentiment analysis, and machine translation. To optimize the model, we use the Stochastic Gradient Descent (SGD) optimization algorithm with an initial learning rate of 0.025, decreased linearly over time to 0.001. We set the batch size to 128 and the number of training epochs to 5.

### _BERT_

BERT (Bidirectional Encoder Representations from Transformers) is a state-of-the-art pre-trained language model developed by Google. BERT is a bidirectional transformer-based architecture that can capture the context of a word in a sentence by looking at the surrounding words [34].
The BERT model consists of 12 transformer blocks in the base version and 24 transformer blocks in the large version. Each transformer block has a multi-head attention mechanism and a feed-forward neural network, making it capable of modeling long-term dependencies in the input sequence. In our implementation, we utilized the pre-trained BERT base model with 12 transformer blocks, 12 attention heads, and 110 million parameters, and fine-tuned it on our specific NLP task. We added a dense layer with 2 units and a sigmoid activation function to perform binary classification. We utilized the binary cross-entropy loss function and the Adam optimization algorithm to optimize the model, with a learning rate of 2e-5 and a batch size of 32. To fine-tune the pre-trained BERT model, we trained it on our specific NLP task using a training set of 100,000 instances and a validation set of 20,000 instances. We trained the model for 3 epochs and evaluated its performance on a separate test set, which consists of 10,000 instances.

### _GPT-2_

GPT-2 (Generative Pre-trained Transformer 2) is a state-of-the-art language model developed by OpenAI. It is a transformer-based language model that can generate coherent and fluent text in a wide range of styles and topics [35]. GPT-2 has a large number of parameters, with the base version having 117 million parameters and the largest version having 1.5 billion parameters. In our implementation, we utilized the pre-trained GPT-2 model with 117 million parameters and fine-tuned it on a large corpus of text relevant to our task to improve its performance. To fine-tune the pre-trained GPT-2 model, we used a training set of 100,000 instances and a validation set of 20,000 instances. We fine-tuned the model for 3 epochs and evaluated its performance on a separate test set.
We used the perplexity metric to evaluate the performance of the model, and we utilized the Adam optimization algorithm with a learning rate of 1e-5 and a batch size of 32 to optimize it.

### _Proposed Model_

We propose a stacking ensemble learning approach to improve the performance of our system. Stacking ensemble is an advanced machine learning technique that combines multiple heterogeneous weak learners (base models) to form a single stronger model (meta-learner). In this approach, the base models' predictions are used as input to the meta-learner, which ultimately makes the final prediction. The meta-learner used here is a logistic regression model, while the base models consist of Simple RNN, LSTM, BiLSTM, word2vec, and LSTM-Autoencoder, each trained with one-dimensional data as input. Since the Level 0 predictions already contain class probabilities for the expected values, the meta-learner can derive accurate probabilities from them. To mitigate overfitting, the meta-learner is trained using both the validation dataset and the Level 0 outputs. The final result is the Level 1 prediction. The architecture is divided into two levels, Level 0 and Level 1, as shown in Figure 2. Level 0 consists of the Simple RNN, LSTM, BiLSTM, word2vec, and LSTM-Autoencoder models. After learning the data patterns, each of the base models generates predictions simultaneously, and all Level 0 models contribute equally to the overall model performance. Level 1, also referred to as the meta-learner, is built using logistic regression. The meta-learner is fed the Level 0 predicted outputs as input and calculates the best weighted outputs from them. A meta-learner is a model that can quickly learn a pattern or adapt to different datasets with a small amount of training data. It learns patterns from the outputs generated by the five base models.
As a result, the model can effectively learn from completely new data and produce acceptable output. The meta-learner's parameters are a combination of the parameters of the five neural networks in the base models. Mathematically, the stacking ensemble learning approach can be represented as follows. Let \(M\) be the number of base models, \(p_{i}^{m}\) be the probability of the positive class for the \(i\)-th sample predicted by the \(m\)-th base model, and \(w_{m}\) be the weight assigned to the \(m\)-th base model. The weighted probability \(p_{i}^{weighted}\) for the \(i\)-th sample can be computed as:

\[p_{i}^{weighted}=\sum_{m=1}^{M}w_{m}\cdot p_{i}^{m}\]

The weights \(w_{m}\) are determined by the meta-learner using the Level 0 predictions and the validation data. The final prediction \(y_{i}^{final}\) for the \(i\)-th sample can be computed using the logistic function:

\[y_{i}^{final}=\frac{1}{1+e^{-p_{i}^{weighted}}}\]

By using a diverse set of base models, we can mitigate the limitation of traditional stacking ensemble approaches that employ similar base models and thus produce similar predictions. If a single base model performs poorly on the dataset, there is a high likelihood that the final output will also be inferior. Conversely, with a diverse set of base models, the strengths and weaknesses of individual models complement each other, resulting in a more robust and accurate overall model: each base model captures different aspects or patterns in the data, reducing the reliance on any single model's performance. Additionally, the meta-learner can effectively combine these diverse predictions to generate a more accurate and stable final prediction, minimizing the impact of individual model biases or errors.
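The weighted-combination step above can be sketched in a few lines of Python. The base-model probabilities and weights below are illustrative; in the actual system the weights come from the trained logistic-regression meta-learner:

```python
import math

def weighted_probability(probs, weights):
    """p_i^weighted = sum over the M base models of w_m * p_i^m."""
    return sum(w * p for w, p in zip(weights, probs))

def final_prediction(p_weighted):
    """Logistic function mapping the weighted score to a final probability."""
    return 1.0 / (1.0 + math.exp(-p_weighted))

# Positive-class probabilities for one sample from the five base models
# (Simple RNN, LSTM, BiLSTM, word2vec, LSTM-Autoencoder) -- toy values.
probs = [0.9, 0.8, 0.85, 0.7, 0.95]
weights = [0.25, 0.2, 0.25, 0.1, 0.2]   # hypothetical meta-learner weights

score = weighted_probability(probs, weights)
label = 1 if final_prediction(score) >= 0.5 else 0   # threshold at 0.5
```

Because the base models disagree only mildly here, the combined score stays well above the 0.5 threshold and the sample is classified as vulnerable.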
In conclusion, the utilization of heterogeneous base models in a stacking ensemble approach provides a more resilient and powerful predictive model, capable of handling various types of data and delivering superior performance compared to traditional ensemble methods.

Fig. 2: Proposed stacking ensemble learning architecture.

```
Function stacking_ensemble(data, train_ratio, val_ratio, test_ratio):
    // Initialize Level 0 base models
    simple_rnn ← SimpleRNN()
    lstm ← LSTM()
    bi_lstm ← BiLSTM()
    lstm_autoencoder ← LSTMAutoencoder()
    word2vec_model ← Word2Vec()
    models ← [simple_rnn, lstm, bi_lstm, lstm_autoencoder, word2vec_model]

    // Initialize Level 1 meta-learner
    meta_learner ← LogisticRegression()

    // Split the data into training, validation, and testing sets
    X_train, X_val, X_test, y_train, y_val, y_test ←
        data_split(data, train_ratio, val_ratio, test_ratio)

    // Train Level 0 base models
    for each model in models do
        model.fit(X_train, y_train)

    // Make predictions with Level 0 base models
    level0_outputs ← list()
    for each model in models do
        pred ← model.predict(X_val)
        level0_outputs.append(pred)

    // Concatenate Level 0 outputs
    level0_outputs_combined ← concatenate(level0_outputs)

    // Train Level 1 meta-learner
    meta_learner.fit(level0_outputs_combined, y_val)

    // Make final predictions with Level 1 meta-learner
    level0_test_outputs ← list()
    for each model in models do
        test_pred ← model.predict(X_test)
        level0_test_outputs.append(test_pred)

    // Concatenate Level 0 test outputs
    level0_test_outputs_combined ← concatenate(level0_test_outputs)

    // Generate Level 1 final predictions
    final_predictions ← meta_learner.predict(level0_test_outputs_combined)
    return final_predictions
```
**Algorithm 1** Proposed Stacking Ensemble Learning Algorithm.

## IV Evaluation Metrics

In order to assess the performance of the neural networks and our proposed stacking ensemble model, we have employed a range of evaluation metrics that provide insight into various aspects of model performance.
These metrics include precision, recall, F1-score, accuracy, and execution time. Each of these metrics contributes to a comprehensive understanding of the model's effectiveness, generalization, and efficiency [36, 37, 38]. Below, we provide a brief description of each evaluation metric.

### _Precision_

Precision is a measure of the accuracy of the positive predictions made by the model. It is calculated as the ratio of true positive predictions to the sum of true positive and false positive predictions. In other words, it quantifies the proportion of correct positive predictions among all the instances predicted as positive. A higher precision value indicates that the model is better at identifying relevant instances and minimizing false positive predictions.

\[\text{Precision}=\frac{\text{True\ Positives}}{\text{True\ Positives}+\text{False\ Positives}} \tag{1}\]

### _Recall_

Recall, also known as sensitivity or true positive rate, measures the proportion of actual positive instances that are correctly identified by the model. It is calculated as the ratio of true positive predictions to the sum of true positive and false negative predictions. A higher recall value indicates that the model is better at detecting positive instances and minimizing false negative predictions.

\[\text{Recall}=\frac{\text{True\ Positives}}{\text{True\ Positives}+\text{False\ Negatives}} \tag{2}\]

### _F1-score_

The F1-score is the harmonic mean of precision and recall, providing a balanced measure of both metrics. It is particularly useful when dealing with imbalanced datasets, where one class is significantly more prevalent than the other. The F1-score ranges from 0 to 1, with a higher value indicating better overall performance of the model in terms of both precision and recall.
\[\text{F1-score}=2\cdot\frac{\text{Precision}\cdot\text{Recall}}{\text{Precision}+\text{Recall}} \tag{3}\]

### _Accuracy_

Accuracy is a widely used metric that quantifies the proportion of correct predictions made by the model, both positive and negative, relative to the total number of instances. It provides an overall indication of the model's performance, but it may not be a reliable metric when dealing with imbalanced datasets, as it can be biased towards the majority class.

\[\text{Accuracy}=\frac{\text{True\ Positives}+\text{True\ Negatives}}{\text{Total\ Instances}} \tag{4}\]

### _Execution Time_

Execution time is a measure of the computational efficiency of the model. It refers to the amount of time required to train the model and make predictions. A shorter execution time indicates that the model is more efficient, which can be particularly important in real-world applications where time constraints are critical. By evaluating the execution time, we can assess the trade-offs between model performance and computational resources. Together, these evaluation metrics provide a comprehensive and robust assessment of our neural networks and proposed model, ensuring that our model is not only accurate but also efficient, generalizable, and reliable across various datasets and application scenarios.

## V Results and Discussion

In this study, we investigated the role of semantic and syntactic features in vulnerability prediction for CWE-119, focusing on buffer overflow vulnerabilities. We began by converting the text dataset into a minimal intermediate representation using a tokenizer provided by the Keras library. This basic representation assigns a numerical value to each word without considering semantic information.
Since the meaning of code is often better captured by considering the context of multiple words, we employed state-of-the-art word embedding algorithms--GloVe and fastText--to extract semantic features from function-level codes. These features were then fed into neural network models for vulnerability prediction. We used 100,000 instances for training, 20,000 for validation, and 10,000 for testing. Our evaluation metrics included accuracy, precision, recall, and F1 score, with a focus on minimizing false positives and false negatives. We trained seven neural network models (Simple RNN, LSTM, BiLSTM, word2vec, BERT, GPT-2, and LSTM-Autoencoder) and our proposed stacking ensemble neural network model. Our ensemble learning model outperformed single models, achieving the highest accuracy in vulnerability prediction. Table 2 presents the results of vulnerable source code classification using different neural network models without word embedding algorithms. The Simple RNN model achieves an accuracy of 0.89, precision of 0.88, recall of 0.88, and F1 score of 0.92, with an execution time of 42 minutes and 8 seconds. The LSTM model has slightly better performance with an accuracy of 0.90, precision of 0.90, recall of 0.90, and F1 score of 0.92, and takes 29 minutes and 48 seconds to run. The BiLSTM model shows further improvement, obtaining an accuracy of 0.91, precision of 0.93, recall of 0.90, and F1 score of 0.87, but requires 2 hours and 5 minutes for execution. The Word2vec model yields an accuracy of 0.89, precision of 0.92, recall of 0.95, and F1 score of 0.93, with a runtime of 40 minutes and 2 seconds. The LSTM Autoencoder model has an accuracy of 0.91, precision of 0.93, recall of 0.94, and F1 score of 0.94, taking 53 minutes and 13 seconds for execution. The BERT model performs better with an accuracy of 0.92, precision of 0.93, recall of 0.93, and F1 score of 0.95, but requires 2 hours and 38 minutes to run. 
The GPT-2 model has an accuracy of 0.92, precision of 0.97, recall of 0.98, and F1 score of 0.97, with a considerably longer execution time of 7 hours and 48 minutes. Lastly, the proposed model outperforms the other models with an accuracy of 0.94, precision of 0.99, recall of 0.98, and F1 score of 0.99, and takes 2 hours and 31 minutes to execute. Table 3 shows the results when using GloVe and fastText embeddings. In general, the performance of all models improved when using these embeddings. The Simple RNN, LSTM, BiLSTM, and Word2vec models show a similar trend of improvement, with their respective accuracies increasing to 0.92, 0.92, 0.93, and 0.94. The LSTM-Autoencoder model's performance slightly decreased, with an accuracy of 0.90. The BERT, GPT-2, and proposed models maintain their superior performance, with accuracies of 0.94, 0.95, and 0.95, respectively. The execution times for all models vary, with the proposed model having a runtime of 2 hours and 46 minutes. Figure 3 shows the performance metrics for the different neural network models on vulnerable source code without using any word embedding algorithms. The models considered are Simple RNN, LSTM, BiLSTM, Word2vec, LSTM-Autoencoder, BERT, GPT-2, and the proposed model. The metrics considered are accuracy, precision, recall, and F1 score. The results demonstrate that the proposed model outperforms all other models in terms of accuracy and F1 score, achieving an accuracy of 0.94 and an F1 score of 0.99. The execution time of the proposed model is also relatively fast compared to the other models, taking only 2 hours and 31 minutes. Figure 4 presents the classification results of the same neural network models on vulnerable source code using the GloVe and fastText word embedding algorithms. The results demonstrate that all models achieved higher accuracy and F1 scores compared to the results in Figure 3. The proposed model continues to perform the best, with an accuracy of 0.95 and an F1 score of 0.99.
However, the execution time of the proposed model is longer compared to Figure 3, taking 2 hours and 46 minutes. These figures provide a clear comparison of the performance of different neural network models and highlight the effectiveness of using word embedding algorithms for improving the classification results of vulnerable source code. The proposed model performs well in both scenarios, showing its potential as a reliable classification model. In Table 4, we present a comparison analysis between our proposed model and previous works in the domain of vulnerability detection. The table highlights the differences in terms of the purpose of each study, the data used, whether semantic or syntactic feature extraction was performed, the highest performance achieved, and if efficiency measurements were conducted. Lorga et al. [18] aimed at vulnerability detection using Twitter text data, but they did not perform semantic or syntactic feature extraction. Their model achieved an accuracy of 94.96%, and they did not provide any efficiency measurements. Similarly, Foret et al. [39] worked on vulnerability detection using news articles without incorporating semantic or syntactic features, resulting in an 87% accuracy. No efficiency measurement analysis was conducted in their work as well. Harer et al. [20] and Russell et al. [22] both focused on vulnerability detection in source code but did not consider semantic or syntactic feature extraction. Their models achieved F1-scores of 49.99% and 56.6%, respectively, without any efficiency measurement analysis. Behzadan et al. [40] also worked on vulnerability detection in source code without extracting semantic or syntactic features. They reported an accuracy of 94.72%, but no efficiency measurement analysis was performed. Our proposed model targets vulnerability detection in source code and incorporates semantic and syntactic feature extraction using GloVe and fastText embeddings. 
As a result, our model achieves the highest accuracy of 95% compared to the previous works. Moreover, we contribute an efficiency measurement analysis and perform an in-depth analysis of features that were not considered in previous studies. This comprehensive approach allows us to better understand the factors influencing the performance of vulnerability detection models and to develop more effective methods for detecting security vulnerabilities in source code.

TABLE II: Vulnerable source code classification results using different neural network models with no word embedding algorithms

| Models | Accuracy | Precision | Recall | F1 Score | Execution Time |
| --- | --- | --- | --- | --- | --- |
| Simple RNN | 0.89 | 0.88 | 0.88 | 0.92 | 42min 8s |
| LSTM | 0.90 | 0.90 | 0.90 | 0.92 | 29min 48s |
| BiLSTM | 0.91 | 0.93 | 0.90 | 0.87 | 2h 5min |
| Word2vec | 0.89 | 0.92 | 0.95 | 0.93 | 40min 2s |
| LSTM-Autoencoder | 0.91 | 0.93 | 0.94 | 0.94 | 53min 13s |
| BERT | 0.92 | 0.93 | 0.93 | 0.95 | 2h 38min |
| GPT-2 | 0.92 | 0.97 | 0.98 | 0.97 | 7h 48min |
| Proposed Model | 0.94 | 0.99 | 0.98 | 0.99 | 2h 31min |

TABLE III: Vulnerable source code classification results using different neural network models with embedding algorithms GloVe + fastText

| Models | Accuracy | Precision | Recall | F1 Score | Execution Time |
| --- | --- | --- | --- | --- | --- |
| Simple RNN | 0.92 | 0.93 | 0.93 | 0.97 | 42min 8s |
| LSTM | 0.92 | 0.93 | 0.95 | 0.97 | 33min 13s |
| BiLSTM | 0.93 | 0.96 | 0.96 | 0.99 | 45min 3s |
| Word2vec | 0.94 | 1.00 | 0.98 | 0.99 | 42min 56s |
| LSTM-Autoencoder | 0.90 | 0.93 | 0.94 | 0.95 | 59min 53s |
| BERT | 0.94 | 0.95 | 0.95 | 0.99 | 5h 16min |
| GPT-2 | 0.95 | 0.97 | 0.98 | 0.99 | 8h 33min |
| Proposed Model | 0.95 | 0.97 | 0.98 | 0.99 | 2h 46min |

TABLE IV: Comparative analysis with previous work

| Previous authors | Purpose | Data | Semantic or syntactic feature extraction? | Highest performance | Efficiency measurement? |
| --- | --- | --- | --- | --- | --- |
| Lorga et al. [18] | Vulnerability detection | Twitter text data | No | 94.96% (Accuracy) | No |
| Foret et al. [39] | Vulnerability detection | News articles | No | 87% (Accuracy) | No |
| Harer et al. [20] | Vulnerability detection | Source code | No | 49.99% (F1-score) | No |
| Russell et al. [22] | Vulnerability detection | Source code | No | 56.6% (F1-score) | No |
| Behzadan et al. [40] | Vulnerability detection | Source code | No | 94.72% (Accuracy) | No |
| **Our Proposed Model** | **Vulnerability detection** | **Source code** | **Yes** | **95% (Accuracy)** | **Yes** |

## VI Conclusion Our research aims to detect implementation vulnerabilities early in the development cycle by leveraging the power of neural networks. We have collected a large dataset of open-source C and C++ code and developed a scalable and efficient vulnerability detection method based on various neural network models. We compared the performance of different models, including Simple RNN, LSTM, BiLSTM, LSTM-Autoencoder, Word2vec, BERT, and GPT-2, and found that models with semantic and syntactic information extracted using state-of-the-art word embedding algorithms such as GloVe and fastText outperform those with a minimal text representation. Our proposed neural network model has been shown to provide higher accuracy with greater efficiency than the other models evaluated. We have also analyzed the execution time of the models and examined the trade-off between accuracy and efficiency.
Overall, our research contributes to the development of large-scale machine learning systems for function-level vulnerability identification in source code auditing. ## Acknowledgement This work is supported by the National Science Foundation under NSF Awards #2209638, #2100115, #2209637, #2100134, and #1663350. Any opinions, findings, and recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
2308.04930
Striking The Right Balance: Three-Dimensional Ocean Sound Speed Field Reconstruction Using Tensor Neural Networks
Accurately reconstructing a three-dimensional ocean sound speed field (3D SSF) is essential for various ocean acoustic applications, but the sparsity and uncertainty of sound speed samples across a vast ocean region make it a challenging task. To tackle this challenge, a large body of reconstruction methods has been developed, including spline interpolation, matrix/tensor-based completion, and deep neural networks-based reconstruction. However, a principled analysis of their effectiveness in 3D SSF reconstruction is still lacking. This paper performs a thorough analysis of the reconstruction error and highlights the need for a balanced representation model that integrates both expressiveness and conciseness. To meet this requirement, a 3D SSF-tailored tensor deep neural network is proposed, which utilizes tensor computations and deep neural network architectures to achieve remarkable 3D SSF reconstruction. The proposed model not only includes the previous tensor-based SSF representation model as a special case, but also has a natural ability to reject noise. The numerical results using the South China Sea 3D SSF data demonstrate that the proposed method outperforms state-of-the-art methods. The code is available at https://github.com/OceanSTARLab/Tensor-Neural-Network.
Siyuan Li, Lei Cheng, Ting Zhang, Hangfang Zhao, Jianlong Li
2023-08-09T12:58:40Z
http://arxiv.org/abs/2308.04930v1
Striking The Right Balance: Three-Dimensional Ocean Sound Speed Field Reconstruction Using Tensor Neural Networks ###### Abstract Accurately reconstructing a three-dimensional ocean sound speed field (3D SSF) is essential for various ocean acoustic applications, but the sparsity and uncertainty of sound speed samples across a vast ocean region make it a challenging task. To tackle this challenge, a large body of reconstruction methods has been developed, including spline interpolation, matrix/tensor-based completion, and deep neural networks-based reconstruction. However, a principled analysis of their effectiveness in 3D SSF reconstruction is still lacking. This paper performs a thorough analysis of the reconstruction error and highlights the need for a balanced representation model that integrates both expressiveness and conciseness. To meet this requirement, a 3D SSF-tailored tensor deep neural network is proposed, which utilizes tensor computations and deep neural network architectures to achieve remarkable 3D SSF reconstruction. The proposed model not only includes the previous tensor-based SSF representation model as a special case, but also has a natural ability to reject noise. The numerical results using the South China Sea 3D SSF data demonstrate that the proposed method outperforms state-of-the-art methods. The code is available at [https://github.com/OceanSTARLab/Tensor-Neural-Network](https://github.com/OceanSTARLab/Tensor-Neural-Network). [[https://doi.org](https://doi.org)(DOI number)] Pages: 1-18 ## I Introduction Three-dimensional (3D) ocean sound speed fields (SSFs), which characterize sound propagation over 3D geographical regions [1], form the stepping stone towards the success of a myriad of ocean acoustic applications, including underwater detection, [2] localization, [3] and communications. 
[4] To realize sound speed awareness, ocean observing systems have been rapidly developing in recent years, leading to the proliferation of intelligent underwater floats [5] and vehicles. [6] Despite these advancements, sound speed samples are still sparsely scattered across a vast ocean region (see Fig. 1), making it a considerable challenge to craft an accurate and fine-grained 3D SSF. The crux of the matter is that the number of samples is much smaller than the number of unknowns, resulting in a highly under-determined inverse problem. [7] To make the problem tractable, the primary approach involves modeling the prior knowledge of the unknowns and incorporating that information into the reconstruction process. This has led to a sizable body of related works on reconstruction methods, [8; 9; 10; 11; 12] not limited to SSF reconstruction, that share a near-universal three-step approach: 1) proposing a representation model that incorporates prior information, 2) learning the model parameters from limited samples, and 3) reconstructing the underlying signal using the learned model. For instance, the simple yet widely-adopted spline interpolation method assumes that the underlying signal is a linear combination of smooth Green functions centered at the samples, [13] which corresponds to a linear regression model in machine learning (see the detailed discussion in Sec. II of Ref. [8]). Using linear regression as a representation model, the spline interpolation method starts by learning the optimal weights for the Green functions from the samples, and then uses the learned model to reconstruct the unknowns. A step forward is Gaussian process regression (GPR), [14] which models the underlying signal as a Gaussian random process with a kernel function describing spatial correlations. GPR first learns the hyper-parameters of the model from the samples and then reconstructs the unobserved values.
In recent years, matrix/tensor decompositions and deep neural networks have been increasingly used as representation models for various data completion tasks, leading to state-of-the-art (SOTA) results. Our task is closely related to recent advances in 3D data inverse problems and field reconstruction, which includes video/image inpainting [15; 10], hyperspectral reconstruction [11; 16], and radio map cartography [17; 12]. The burgeoning literature on reconstruction methods invites the question: _Among numerous ways, which one is the most effective for 3D ocean SSF reconstruction?_ Rather than exhaustively comparing various methods, this paper sets out to answer this question in a principled manner by delving into the analysis of reconstruction error. Specifically, inspired by the well-known bias-variance trade-off in machine learning,[18] the reconstruction error can be mainly decomposed into two parts: the _representation error_, which measures the model's capacity to fit the underlying signal (e.g., 3D SSF), and the _identification error_, which assesses whether the desired model can be uniquely learned from limited samples. To clarify this notion, consider the case where an over-parameterized deep neural network (i.e., one with more parameters than the sound speed samples) is used as the representation model. Its high expressive power[19] makes it likely to precisely represent the underlying 3D SSF, resulting in negligible representation error. However, insufficient samples and numerous unknown model parameters can lead to difficulties in identifying the optimal parameter configuration across a vast solution space. Unless strong regularizations (e.g., early stopping[20]) are carefully employed, the overwhelming identification error may lead to poor reconstruction. 
On the other hand, choosing a simpler representation model with fewer parameters makes it easier to uniquely learn the model parameters from limited samples, but may result in a higher representation error. The reconstruction error analysis above _highlights the need for a balanced representation model_ in 3D SSF reconstruction, with both high expressive power to reduce representation error and conciseness (in terms of both parameter numbers and mathematical operations) to minimize identification error. To meet this requirement, we propose a 3D SSF-tailored tensor deep neural network, drawing on the advantages of both recent _succinct tensor-based_ ocean SSF representation[21, 22] and _powerful deep neural network-based_ image representation.[23] The network, shown in Fig. 1, aims to provide a concise and accurate representation of SSFs. The proposed 3D SSF-tailored tensor deep neural network leverages the strengths of tensor computations and deep neural network architectures to achieve remarkable 3D SSF reconstruction. The tensor computations, which involve only a few parameters and serve as the model's backbone, effectively exploit the multi-dimensional correlations among sound speeds.[21, 22] On the other hand, the incorporation of deep neural network architectures enhances the model's expressive power, especially in capturing fine-grained variations of sound speeds. Due to its conciseness, a simple gradient descent algorithm[24] is sufficient to identify parameters that yield a good reconstruction from limited samples. Relying on its high expressive power, the learned model successfully reconstructs 3D SSFs with intricate details. Two key features of the proposed method are highlighted in this paper. First, the model has a natural ability to reject noise. This property is backed by a theoretical proof under the one-layer assumption, and by experimental verification in multiple-layer scenarios.
Second, the proposed model includes the tensor Tucker decomposition model as a specific case, which is strongly linked to classical SSF basis functions such as empirical orthogonal functions (EOFs).[22] Moreover, the proposed model can easily incorporate additional structural assumptions to boost the reconstruction performance further. As an example, we integrate total variation[25] regularizations in this paper to exploit the spatial smoothness of SSF. Finally, to corroborate the advantages of the proposed model, we conduct comprehensive numerical experiments using real-life 3D SSF data and compare the performance of the proposed approach with SOTA reconstruction methods. The encouraging results of the proposed algorithm not only affirm our theoretical analysis, but also demonstrate the effectiveness of viewing the reconstruction task from the reconstruction error analysis perspective. _Notation_: Lower- and uppercase bold letters (e.g., \(\mathbf{x}\) and \(\mathbf{X}\)) are used to denote vectors and matrices. Higher-order tensors (order three or higher) are denoted by uppercase bold calligraphic letters. For a tensor \(\boldsymbol{\mathcal{X}}\), \(\mathbf{X}_{(p)}\) stands for its mode-\(p\) unfolding matrix. \(\boldsymbol{\mathcal{X}}\times_{p}\mathbf{A}\) denotes the \(p\)-mode product between tensor \(\boldsymbol{\mathcal{X}}\) and matrix \(\mathbf{A}\). The Kronecker product is denoted by \(\otimes\). The Hadamard product is denoted by \(*\). \(\|\cdot\|_{\mathrm{F}}\) stands for the Frobenius norm. \(\langle\cdot,\cdot\rangle\) denotes the tensor inner product. The superscript \((\cdot)^{\mathrm{T}}\) stands for transposition. \(\mathbb{R}\) and \(\mathbb{C}\) are the fields of real and complex numbers, respectively. Figure 1: Illustration of the reconstruction system considered in this paper, with the proposed 3D SSF-tailored tensor deep neural network serving as the representation model.
## II Reconstruction and error analysis In this section, we formulate the ocean 3D SSF reconstruction task as an optimization problem, and present a unified framework to analyze the reconstruction error. Based on this framework, two widely used reconstruction methods are analyzed. ### Reconstruction Problem Formulation The reconstruction system considered in this paper is illustrated in Fig. 1. Assuming that the sound speed samples are contaminated by independently and identically distributed (i.i.d) Gaussian noise, the SSF reconstruction problem can be formulated as \[\min_{\boldsymbol{\mathcal{X}}}\|\boldsymbol{\mathcal{Y}}-\boldsymbol{ \mathcal{O}}\ast\boldsymbol{\mathcal{X}}\|_{\mathrm{F}}^{2}, \tag{1}\] where \(\boldsymbol{\mathcal{X}}\in\mathbb{R}^{I\times J\times K}\) is the ground-truth 3D SSF (to be reconstructed) and \(\boldsymbol{\mathcal{O}}\) is a binary indicator tensor with \(\boldsymbol{\mathcal{O}}_{i,j,k}=1\) if entry \((i,j,k)\) is observed. Observation tensor \(\boldsymbol{\mathcal{Y}}\in\mathbb{R}^{I\times J\times K}\) collects noisy and limited SSF samples, i.e., \(\boldsymbol{\mathcal{Y}}_{i,j,k}\) is the sampled sound speed value if \(\boldsymbol{\mathcal{O}}_{i,j,k}=1\), and otherwise equals to zero. Denote \(N\) as the number of observed (non-zero) entries in \(\boldsymbol{\mathcal{Y}}\), and \(T=IJK\) as the total number of entries in \(\boldsymbol{\mathcal{X}}\). The SSF reconstruction problem in Eq. (1) is under-determined because \(N\) is usually much smaller than \(T\). To tackle this challenge, existing methods (e.g., Refs. [23, 26, 8, 13, 27]) usually take the following three steps. #### Ii-A1 SSF Representation Since ocean sound speeds exhibit strong spatial correlations, SSF can be effectively represented by one model with only _a few_ parameters, as demonstrated in prior literature. 
[22, 28] Mathematically, assume that the ocean SSF is represented by: \[\boldsymbol{\mathcal{X}}\approx\boldsymbol{\mathcal{D}}(\boldsymbol{\Theta}), \tag{2}\] where \(\boldsymbol{\mathcal{D}}(\cdot)\) denotes an SSF representation model and \(\boldsymbol{\Theta}\) is the set of model parameters. #### Ii-A2 Parameter Learning Using the representation model \(\boldsymbol{\mathcal{D}}(\cdot)\), the SSF reconstruction problem defined in Eq. (1) can be recast as \[\min_{\boldsymbol{\Theta}}\|\boldsymbol{\mathcal{Y}}-\boldsymbol{\mathcal{O}}\ast\boldsymbol{\mathcal{D}}(\boldsymbol{\Theta})\|_{\mathrm{F}}^{2}. \tag{3}\] The model parameter estimate \(\hat{\boldsymbol{\Theta}}\) is then obtained by solving problem (3). #### Ii-A3 SSF Reconstruction Using the learned model parameter \(\hat{\boldsymbol{\Theta}}\), the SSF is reconstructed by \(\hat{\boldsymbol{\mathcal{X}}}=\boldsymbol{\mathcal{D}}(\hat{\boldsymbol{\Theta}})\). ### Reconstruction Error Analysis We proceed to conduct a theoretical analysis of the reconstruction error, which is defined as \(E=\|\hat{\boldsymbol{\mathcal{X}}}-\boldsymbol{\mathcal{X}}\|_{\mathrm{F}}^{2}\). Since the reconstruction error is a least-squares function, its associated error analysis, such as the bias-variance trade-off, has already been developed in the machine learning and signal processing literature; for example, see Page 149 of Ref. [18]. The existing results are primarily developed for supervised regression tasks, but here we re-interpret them in the context of our unsupervised SSF reconstruction task. Specifically, denote the set of parameters that can best represent the SSF data as \(\boldsymbol{\Theta}^{*}\), i.e., \(\boldsymbol{\Theta}^{*}=\arg\min_{\boldsymbol{\Theta}}\|\boldsymbol{\mathcal{X}}-\boldsymbol{\mathcal{D}}(\boldsymbol{\Theta})\|_{\mathrm{F}}^{2}\).
The reconstruction error can be shown (see Appendix B) to be decomposed as \[E=E_{1}+E_{2}+\epsilon, \tag{4}\] where \(E_{1}=\|\boldsymbol{\mathcal{X}}-\boldsymbol{\mathcal{D}}(\boldsymbol{\Theta}^{*})\|_{\mathrm{F}}^{2}\) is the representation error; \(E_{2}=\|\boldsymbol{\mathcal{D}}(\boldsymbol{\Theta}^{*})-\boldsymbol{\mathcal{D}}(\hat{\boldsymbol{\Theta}})\|_{\mathrm{F}}^{2}\) is the identification error; and \(\epsilon\) is the cross term defined as \(\epsilon=2\langle\boldsymbol{\mathcal{X}}-\boldsymbol{\mathcal{D}}(\boldsymbol{\Theta}^{*}),\boldsymbol{\mathcal{D}}(\boldsymbol{\Theta}^{*})-\boldsymbol{\mathcal{D}}(\hat{\boldsymbol{\Theta}})\rangle\). From the definition of the three terms in Eq. (4), it is easy to show (see Appendix B) that the cross term \(\epsilon\) is equal to zero if \(E_{1}\) or \(E_{2}\) is equal to zero, i.e., \[\epsilon=0\Leftarrow\boldsymbol{\mathcal{X}}=\boldsymbol{\mathcal{D}}(\boldsymbol{\Theta}^{*})\text{ or }\boldsymbol{\mathcal{D}}(\boldsymbol{\Theta}^{*})=\boldsymbol{\mathcal{D}}(\hat{\boldsymbol{\Theta}}). \tag{5}\] The error decomposition shows that minimizing the reconstruction error \(E\) requires simultaneously reducing the representation error \(E_{1}\) and the identification error \(E_{2}\). Models with high expressive power, such as neural networks, usually have low representation errors but high identification errors. Conversely, concise models, such as linear regression, have low identification errors but high representation errors due to their limited parameters and simple operations. The result in (5) suggests two possible ways to minimize the reconstruction error. To null the cross term \(\epsilon\), one can choose a complicated model with the universal approximation property [19] to zero the representation error, or a relatively simple model to induce zero identification error. In the first case, the reconstruction error reduces to the identification error, which necessitates optimizing the model's complexity. In the latter case, effort should be devoted to enhancing the expressive power of the model without increasing its complexity.
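Since Eq. (4) is an exact algebraic identity (expand \(\|\boldsymbol{\mathcal{X}}-\boldsymbol{\mathcal{D}}(\hat{\boldsymbol{\Theta}})\|_{\mathrm{F}}^{2}\) around \(\boldsymbol{\mathcal{D}}(\boldsymbol{\Theta}^{*})\)), it can be checked numerically with random stand-in tensors; the arrays below are placeholders, not SSF data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 5, 6))       # ground-truth field (stand-in)
D_star = rng.standard_normal((4, 5, 6))  # best representable field D(Theta*)
D_hat = rng.standard_normal((4, 5, 6))   # learned field D(Theta_hat)

E = np.sum((X - D_hat) ** 2)                         # reconstruction error
E1 = np.sum((X - D_star) ** 2)                       # representation error
E2 = np.sum((D_star - D_hat) ** 2)                   # identification error
eps = 2.0 * np.sum((X - D_star) * (D_star - D_hat))  # cross term
assert np.isclose(E, E1 + E2 + eps)                  # Eq. (4) holds
```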
To draw more insights into the reconstruction error, two commonly used reconstruction methods are analyzed in the following subsection. ### Commonly Used Approaches #### Ii-C1 Spline Interpolation The biharmonic spline interpolation assumes that the SSF can be well represented by a linear combination of smooth Green functions centered at the samples, i.e., [13] \[\boldsymbol{\mathcal{X}}_{\mathbf{i}}=[\boldsymbol{\mathcal{D}}(\mathbf{w})]_{\mathbf{i}}=\sum_{n=1}^{N}w_{n}g(\mathbf{i},\mathbf{i}_{n}), \tag{6}\] where \(\mathbf{i}_{n}=(i_{n},j_{n},k_{n})\in\mathbb{R}^{3}\) is the index of the sampled location; \(w_{n}\) is the weight associated with each sample; \(\boldsymbol{\Theta}=\mathbf{w}=[w_{1},\cdots,w_{N}]^{\mathrm{T}}\); and \(g(\cdot,\cdot)\) is the Green function. The representation error \(E_{1}\) is given by \[E_{1}=\|\boldsymbol{\mathcal{X}}-\boldsymbol{\mathcal{D}}(\mathbf{w}^{*})\|_{\mathrm{F}}^{2}, \tag{7}\] where the optimal weights \(\mathbf{w}^{*}\) are determined by solving the following problem: \[\min_{\mathbf{w}}\sum_{n=1}^{N}\left[\boldsymbol{\mathcal{Y}}_{\mathbf{i}_{n}}-\sum_{k=1}^{N}w_{k}g(\mathbf{i}_{n},\mathbf{i}_{k})\right]^{2}. \tag{8}\] Note that the optimal weights can be determined uniquely from Eq. (8) since it is a well-posed problem, which leads to zero identification error, i.e., \(E_{2}=0\). However, the predefined Green function is not flexible and may not reflect the correlations among the sound speeds at different regions, resulting in a significant \(E_{1}\). According to the results in (4) and (5), since \(E_{2}=0\) and \(\epsilon=0\), the reconstruction error \(E=E_{1}\). #### Ii-C2 Deep Neural Networks In recent years, deep neural networks (DNN) have become highly effective tools in various areas, including computer vision[29], natural language processing[30], and acoustic signal processing[31, 32, 33].
Here we assume that the SSF can be represented by an \(L\)-layer neural network, i.e., \[\boldsymbol{\mathcal{X}}=\boldsymbol{\mathcal{D}}(\boldsymbol{\Theta})=F_{\boldsymbol{\theta}_{L}}(F_{\boldsymbol{\theta}_{L-1}}(\cdots F_{\boldsymbol{\theta}_{1}}(\boldsymbol{\mathcal{G}}))), \tag{9}\] where \(F_{\boldsymbol{\theta}_{l}}(\cdot)\) denotes the function of the \(l\)th layer with parameters (e.g., weights and biases) \(\boldsymbol{\theta}_{l}\); and \(\boldsymbol{\mathcal{G}}\) is the core tensor. The parameter set \(\boldsymbol{\Theta}\) includes \(\boldsymbol{\mathcal{G}}\) and \(\{\boldsymbol{\theta}_{l}\}_{l}\). For the functional forms of \(\{F_{\boldsymbol{\theta}_{l}}(\cdot)\}_{l}\), commonly used ones include the multi-layer perceptron (MLP)[34] and convolutional neural network (CNN)[35]. For the DNN-based SSF representation model, the identification error is given by \[E_{2}=\|F_{\boldsymbol{\theta}_{L}^{*}}(F_{\boldsymbol{\theta}_{L-1}^{*}}(\cdots F_{\boldsymbol{\theta}_{1}^{*}}(\boldsymbol{\mathcal{G}}^{*})))-F_{\hat{\boldsymbol{\theta}}_{L}}(F_{\hat{\boldsymbol{\theta}}_{L-1}}(\cdots F_{\hat{\boldsymbol{\theta}}_{1}}(\hat{\boldsymbol{\mathcal{G}}})))\|_{\mathrm{F}}^{2}, \tag{10}\] where \(\boldsymbol{\Theta}^{*}\) (which includes \(\boldsymbol{\mathcal{G}}^{*}\) and \(\{\boldsymbol{\theta}_{l}^{*}\}_{l}\)) is the solution to the representation problem \[\min_{\boldsymbol{\Theta}}\|\boldsymbol{\mathcal{X}}-F_{\boldsymbol{\theta}_{L}}(F_{\boldsymbol{\theta}_{L-1}}(\cdots F_{\boldsymbol{\theta}_{1}}(\boldsymbol{\mathcal{G}})))\|_{\mathrm{F}}^{2}; \tag{11}\] and \(\hat{\boldsymbol{\Theta}}\) (which includes \(\hat{\boldsymbol{\mathcal{G}}}\) and \(\{\hat{\boldsymbol{\theta}}_{l}\}_{l}\)) is the solution to the reconstruction problem \[\min_{\boldsymbol{\Theta}}\|\boldsymbol{\mathcal{Y}}-\boldsymbol{\mathcal{O}}*F_{\boldsymbol{\theta}_{L}}(F_{\boldsymbol{\theta}_{L-1}}(\cdots F_{\boldsymbol{\theta}_{1}}(\boldsymbol{\mathcal{G}})))\|_{\mathrm{F}}^{2}.
\tag{12}\] Due to the universal approximation property[19] of DNN, the representation error \(E_{1}\) can be zero, i.e., \(E_{1}=0\), if a sufficient number of neurons are available and/or a highly sophisticated neural architecture is utilized. According to the results in (4) and (5), since \(E_{1}=0\) and \(\epsilon=0\), the reconstruction error \(E=E_{2}\), which, however, will be large since the solution space of problem (12) is vast (that is, there exists a large number of solutions that attain the same objective value of problem (12)). ## III Tensor Neural Network-Aided Reconstruction The reconstruction error analysis introduced in the last section shows that a highly expressive representation model is necessitated to accurately capture the complex details of sound speeds' spatial distribution, thus minimizing the representation error. But this often leads to increased parameters and complicated mathematic operations, resulting in a more complicated model with increased identification error. Consequently, the key to approaching the optimal reconstruction is to enhance the model's expressive power while maintaining the model's conciseness (in terms of both parameter number and mathematical operations). Towards this goal, in this section, inspired by the recent success of succinct tensor-based ocean SSF representation and powerful deep neural network-based image representation, we _take the best from them_ and propose a 3D SSF-tailored tensor deep neural network. Then, an effective reconstruction algorithm is derived. In the following subsections, we first briefly review the recent tensor-based SSF representation model (Sec. III.1), and then devise our proposed model that integrates the tensor model and deep neural network (Sec. III.2). Next, theoretical insights are drawn by analyzing the proposed model's noise rejection property and particular form (Sec. III.3). Finally, the reconstruction algorithm is derived (Sec. III.4). 
### Tensor-based SSF Representation This subsection briefly reviews the tensor-based SSF representation framework established in Ref. [22], which relies on the Tucker decomposition model. For readers unfamiliar with tensors, a brief review in the context of SSF representation was provided in Sec. III of Ref. [22]. The definitions of the tensor operations utilized in this paper are elucidated in Appendix A. Mathematically, the tensor-based SSF representation model is: \[\boldsymbol{\mathcal{X}}\approx\boldsymbol{\mathcal{S}}\times_{1}\mathbf{U}^{( 1)}\times_{2}\mathbf{U}^{(2)}\times_{3}\mathbf{U}^{(3)}, \tag{13}\] where the columns of the factor matrices \(\{\mathbf{U}^{(p)}\}_{p=1}^{3}\) can be interpreted as spatial basis functions; and the core tensor \(\boldsymbol{\mathcal{S}}\) contains the weighting coefficients. The symbol \(\times_{p}\) denotes the mode-\(p\) product; see the definition in Appendix A. Various constraints can be imposed on the factor matrices and the core tensor to incorporate additional knowledge, e.g., orthogonality constraints[22] and sparsity constraints[21]. Ref. [22] has shown the conciseness of the tensor-based representation model, which is mathematically succinct and has a small number of parameters, leading to a small identification error \(E_{2}\). On the other hand, the tensor-based representation model is also effective in characterizing the three-dimensional interactions among sound speeds. Notably, it includes the classical SSF basis functions, such as EOFs and Fourier basis functions, as special cases[22]. However, since the Tucker decomposition model has a multi-linear form, the representation model in Eq. (13) has difficulty capturing the highly nonlinear variations of sound speeds (caused by small-scale/mesoscale ocean processes), resulting in a non-negligible representation error \(E_{1}\), as demonstrated by the numerical results presented in Sec. IV.2.
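As a concrete illustration of the Tucker model in Eq. (13), the following minimal numpy sketch assembles a tensor from a core and three factor matrices via mode-\(p\) products. The helper names (`mode_n_product`, `tucker`) and the toy sizes are ours for illustration, not from Ref. [22]:

```python
import numpy as np

def mode_n_product(T, M, n):
    # Mode-n product T x_n M: contracts mode n of tensor T with matrix M,
    # where M has shape (new_dim, old_dim).
    T = np.moveaxis(T, n, 0)
    out = np.tensordot(M, T, axes=(1, 0))
    return np.moveaxis(out, 0, n)

def tucker(S, U1, U2, U3):
    # Tucker model of Eq. (13): X ~= S x_1 U1 x_2 U2 x_3 U3.
    X = mode_n_product(S, U1, 0)
    X = mode_n_product(X, U2, 1)
    X = mode_n_product(X, U3, 2)
    return X

rng = np.random.default_rng(0)
S = rng.normal(size=(3, 4, 5))                                # core tensor
U1, U2, U3 = (rng.normal(size=(d, r)) for d, r in [(10, 3), (12, 4), (8, 5)])
X = tucker(S, U1, U2, U3)
print(X.shape)  # (10, 12, 8)
```

The same result can be obtained in one shot with `np.einsum('abc,ia,jb,kc->ijk', S, U1, U2, U3)`, since mode products along distinct modes commute.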
### Integrating Tensor and Neural Networks for SSF Representation In this subsection, we propose an integrated model that combines the advantages of the DNN and tensor-based representation models, i.e., expressiveness and conciseness, respectively. The proposed model is based on the tensor contraction layer (TCL),[36] as shown in Fig. 2. Given an input tensor \(\mathbf{\mathcal{X}}_{l}\in\mathbb{R}^{R_{1}^{l}\times R_{2}^{l}\times R_{3}^{l}}\), the output of the TCL is: \[\mathbf{\mathcal{X}}_{l+1}=\varsigma(\mathbf{\mathcal{X}}_{l}\times_{1}\mathbf{W}_{l}^ {(1)}\times_{2}\mathbf{W}_{l}^{(2)}\times_{3}\mathbf{W}_{l}^{(3)}), \tag{14}\] where the output \(\mathbf{\mathcal{X}}_{l+1}\in\mathbb{R}^{R_{1}^{l+1}\times R_{2}^{l+1}\times R_{3} ^{l+1}}\); each matrix \(\mathbf{W}_{l}^{(i)}\in\mathbb{R}^{R_{i}^{l+1}\times R_{i}^{l}}\); and \(\varsigma(\cdot)\) is the activation function. Note that the TCL is based on the Tucker decomposition, enabling it to effectively capture the multidimensional interactions of sound speeds even with a small number of parameters. To further enhance the expressive power, we propose a tensor neural network (TNN) composed of multiple TCLs; see Fig. 3. Mathematically, an \(L\)-layer TNN can be defined as \[\mathbf{\mathcal{X}}=\varsigma\bigg{(} \cdots\varsigma\left(\mathbf{\mathcal{G}}\times_{1}\mathbf{W}_{1}^{( 1)}\times_{2}\mathbf{W}_{1}^{(2)}\times_{3}\mathbf{W}_{1}^{(3)}\right)\cdots \tag{15}\] \[\times_{1}\mathbf{W}_{L}^{(1)}\times_{2}\mathbf{W}_{L}^{(2)} \times_{3}\mathbf{W}_{L}^{(3)}\bigg{)},\] where \(\mathbf{\mathcal{G}}\in\mathbb{R}^{R_{1}\times R_{2}\times R_{3}}\) is the core tensor and \(\{\{\mathbf{W}_{l}^{(i)}\}_{i=1}^{3}\}_{l=1}^{L}\) are the factor matrices. The choice of activation function depends on the data; commonly used choices include the rectified linear unit (ReLU), sigmoid, and tanh[37].
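A minimal numpy sketch of the TCL in Eq. (14), stacked twice to form a small TNN per Eq. (15). The layer sizes mirror those used later in Sec. IV; the `tcl` helper is illustrative, and a practical implementation would use a deep learning framework:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def tcl(X, W1, W2, W3, act=relu):
    # Tensor contraction layer, Eq. (14): act(X x_1 W1 x_2 W2 x_3 W3),
    # where each Wi maps mode i from R_i^l to R_i^{l+1}.
    return act(np.einsum('abc,ia,jb,kc->ijk', X, W1, W2, W3))

rng = np.random.default_rng(0)
# Two stacked TCLs (Eq. (15)) expanding a (5,5,5) core to (10,10,10)
# and then to (20,20,20), mirroring the layer sizes used in Sec. IV.
G = rng.normal(size=(5, 5, 5))
layer1 = [rng.normal(size=(10, 5)) for _ in range(3)]
layer2 = [rng.normal(size=(20, 10)) for _ in range(3)]
X = tcl(tcl(G, *layer1), *layer2)
print(X.shape)  # (20, 20, 20)
```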
### Theoretical Insights Before deriving the reconstruction algorithm, we present theoretical analyses that reveal insights into the proposed TNN-based representation model. First, we show that the particular architecture of the proposed model enables it to reject noise effectively. Here Gaussian noise is considered due to the least-squares loss function used in Eq. (1), which is a common choice in physical field reconstruction research[12, 38]. For simplicity, consider a _one-layer_ TNN equipped with a ReLU activation function. The TNN output can be written as: \[\mathbf{\mathcal{X}}=\text{ReLU}(\mathbf{\mathcal{G}}\times_{1}\mathbf{W}_{1}^{(1)} \times_{2}\mathbf{W}_{1}^{(2)}\times_{3}\mathbf{W}_{1}^{(3)}). \tag{16}\] Then we have the following proposition. **Proposition 1**: Consider a _one-layer_ TNN with parameters \(\mathbf{\Theta}=\{\mathbf{\mathcal{G}},\mathbf{W}_{1}^{(1)},\mathbf{W}_{1}^{(2)}, \mathbf{W}_{1}^{(3)}\}\) and a noise tensor \(\mathbf{\mathcal{E}}\) with each element following a zero-mean i.i.d. Gaussian distribution, i.e., \(\mathbf{\mathcal{E}}_{i,j,k}\sim\mathcal{N}(0,\sigma^{2})\). Then with probability at least \(1-e^{-R}-e^{-\frac{T}{16}}\), \[\min_{\mathbf{\Theta}}\|\mathbf{\mathcal{X}}-\mathbf{\mathcal{E}}\|_{\rm F}^{2}\geq\|\mathbf{\mathcal{E}}\|_{\rm F}^{2}(1-\frac{10R}{T}), \tag{17}\] where \(R=R_{1}R_{2}R_{3}\) and \(T=IJK\). _Proof_: See Appendix B. The term \(\|\mathbf{\mathcal{X}}-\mathbf{\mathcal{E}}\|_{\rm F}^{2}\) in Eq. (17) serves as a metric for assessing the model's capacity to fit the noise: a higher value indicates that the model is less prone to fitting the noise. **Proposition 1** provides a lower bound for \(\|\mathbf{\mathcal{X}}-\mathbf{\mathcal{E}}\|_{\rm F}^{2}\), which is the product of the noise power \(\|\mathbf{\mathcal{E}}\|_{\rm F}^{2}\) and the constant \((1-\frac{10R}{T})\).
Since \(R\) is typically much smaller than \(T\) to promote model conciseness (i.e., \(R\ll T\)), the constant \((1-\frac{10R}{T})\) is approximately equal to \(1\). As a consequence, the lower bound is close to the noise power, which is significantly greater than zero. Therefore, **Proposition 1** shows that the one-layer TNN rejects fitting the noise with high probability. Note that, unlike classical low-rank tensor decomposition works (Refs. [39, 40, 41] and references therein), which are primarily based on multi-linear forms, **Proposition 1** theoretically quantifies the noise-fitting ability of the proposed tensor model with the ReLU nonlinear activation function, which is novel and has standalone value beyond SSF reconstruction tasks. The key concept underlying **Proposition 1** is the intuition that a non-linear tensor decomposition model with an intrinsically low-rank structure, such as the one defined in Eq. (15), can effectively reject noise. While this proposition is specifically formulated for the _one-layer_ TNN, the intuition behind noise rejection extends to TNN models with multiple layers.

Figure 3: Illustration of the tensor neural network (TNN), which is composed of multiple TCLs. \(\mathbf{\mathcal{G}}\) is the core tensor and \(\mathbf{\mathcal{X}}\) is the output tensor.

However, extending the results in **Proposition 1** to multiple-layer TNNs poses a daunting challenge. As a result, we experimentally validate its noise rejection property instead. The multiple-layer TNN model used in this experiment has the same architecture as the one used in Sec. IV (see Fig. 7(c)), which has three layers with dimensions (5, 5, 5), (10, 10, 10), and (20, 20, 20), respectively. In our experiment, we fit the proposed model to three distinct datasets: white Gaussian noise with standard deviation \(\sigma=0.5\), SSF data, and SSF data with added noise. In Fig.
4, we present the fitting error, quantified by the mean squared error (MSE), with respect to the iteration number of the parameter learning process. Fig. 4 demonstrates that the proposed model effectively fits the SSF data, as evidenced by the smallest MSE values across all iteration numbers. Conversely, the MSE values of the noise fitting are the highest, substantiating the model's noise rejection property. Furthermore, when the SSF data is contaminated by noise, the proposed model tends to fit the SSF data while fitting only a minor portion of the noise, resulting in the MSE curve (red curve) being much closer to that of fitting the noise-free SSF data (blue curve). Subsequently, we establish a connection between the proposed TNN and the tensor Tucker model defined in Eq. (13). Specifically, if the activation function \(\varsigma(\cdot)\) is chosen to be the identity or linear function, following the definition of tensor Tucker decomposition,[39, 41] the \(L\)-layer TNN (defined in Eq. (15)) can be easily shown (see Appendix B) to be equivalent to the Tucker decomposition model (defined in Eq. (13)), that is, \[\begin{split}\mathbf{\mathcal{X}}&=\varsigma\left( \cdots\varsigma\left(\mathbf{\mathcal{G}}\times_{1}\mathbf{W}_{1}^{(1)}\times_{2} \mathbf{W}_{1}^{(2)}\times_{3}\mathbf{W}_{1}^{(3)}\right)\cdots\right.\\ &\quad\times_{1}\mathbf{W}_{L}^{(1)}\times_{2}\mathbf{W}_{L}^{(2) }\times_{3}\mathbf{W}_{L}^{(3)}\right),\\ &=\mathbf{\tilde{\mathcal{G}}}\times_{1}\mathbf{\tilde{W}}^{(1)} \times_{2}\mathbf{\tilde{W}}^{(2)}\times_{3}\mathbf{\tilde{W}}^{(3)},\end{split} \tag{18}\] where \(\varsigma(x)=ax\) with \(a\in\mathbb{R}\) being a constant, \(\mathbf{\tilde{\mathcal{G}}}=a^{L}\mathbf{\mathcal{G}}\) and \(\mathbf{\tilde{W}}^{(i)}=\mathbf{W}_{L}^{(i)}\mathbf{W}_{L-1}^{(i)}\cdots \mathbf{W}_{1}^{(i)},i=1,2,3\). 
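The equivalence in Eq. (18) is easy to check numerically: composing two linear TCLs and collapsing their factor matrices by matrix products yields the same tensor. A small sketch with arbitrary toy sizes (the `tcl_lin` helper is ours for illustration):

```python
import numpy as np

def tcl_lin(X, Ws, a):
    # TCL with the linear activation varsigma(x) = a * x.
    return a * np.einsum('abc,ia,jb,kc->ijk', X, *Ws)

rng = np.random.default_rng(0)
a = 0.7
G = rng.normal(size=(4, 4, 4))
Ws1 = [rng.normal(size=(6, 4)) for _ in range(3)]   # layer-1 factors
Ws2 = [rng.normal(size=(8, 6)) for _ in range(3)]   # layer-2 factors

# Two-layer TNN with linear activations (left-hand side of Eq. (18)).
X_deep = tcl_lin(tcl_lin(G, Ws1, a), Ws2, a)

# Collapsed Tucker model (right-hand side): G~ = a^L * G, W~(i) = W2(i) @ W1(i).
Wt = [Ws2[i] @ Ws1[i] for i in range(3)]
X_flat = np.einsum('abc,ia,jb,kc->ijk', (a ** 2) * G, *Wt)

print(np.allclose(X_deep, X_flat))  # True
```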
This result reveals that the tensor Tucker decomposition model, which is the cornerstone of the recently developed tensor-based SSF basis function learning framework that includes classical ones (such as EOFs and Fourier basis functions) as special cases, is _a particular instance_ of the proposed TNN model. In other words, _the proposed TNN model is the most general ocean SSF representation model to date_. The introduction of nonlinear activation functions significantly improves the tensor model's representation capability without compromising its conciseness, allowing simultaneous decreases in both the representation error \(E_{1}\) and the identification error \(E_{2}\). Specifically, the nonlinearity and hierarchical construction enable the effective capture of sound speed variations, leading to a small \(E_{1}\). Meanwhile, the Tucker decomposition-based backbone, or TCL, preserves the tensor model's conciseness, leading to a small \(E_{2}\). After establishing a strong link between the proposed tensor neural network and the tensor Tucker decomposition model, one may contemplate extending the recoverability analysis[42] using Tucker models under either systematic sampling[43, 44] or random sampling[27]. However, due to the involvement of nonlinear activation functions, the analysis becomes a formidable challenge. Instead, in Sec. IV, we present a numerical analysis of the recoverability of the proposed model, leaving its theoretical analysis as an intriguing avenue for future research.
### DNN-aided 3D SSF Reconstruction Algorithm Based on the proposed TNN model, the 3D SSF reconstruction problem can be formulated as \[\begin{split}\min_{\mathbf{\Theta}}&\quad\underbrace{ \left\|\mathbf{\mathcal{Y}}-\mathbf{\mathcal{O}}\ast\mathbf{\mathcal{X}}\right\|_{\rm F}^ {2}+\lambda R(\mathbf{\mathcal{X}})}_{\triangleq f(\mathbf{\Theta})},\\ \text{s.t.}&\quad\mathbf{\mathcal{X}}=\varsigma\left( \cdots\varsigma\left(\mathbf{\mathcal{G}}\times_{1}\mathbf{W}_{1}^{(1)}\times_{2} \mathbf{W}_{1}^{(2)}\times_{3}\mathbf{W}_{1}^{(3)}\right)\cdots\right.\\ &\quad\left.\times_{1}\mathbf{W}_{L}^{(1)}\times_{2}\mathbf{W}_{L}^ {(2)}\times_{3}\mathbf{W}_{L}^{(3)}\right),\end{split} \tag{20}\] where \(R(\mathbf{\mathcal{X}})\) is a regularization term that allows the incorporation of side information for further performance enhancement, and \(\lambda\) is a hyper-parameter that balances the importance of the data fitting and regularization terms.

Figure 4: The mean squared error (MSE) curves were obtained by fitting the proposed model to white Gaussian noise with standard deviation \(\sigma=0.5\), SSF data, and SSF data with added noise. The three-layer TNN model used in this experiment has the same architecture as the one used in Sec. IV (see Fig. 7(c)). The results demonstrate that the proposed model is inclined to fit the SSF data while effectively rejecting noise fitting.

Various types of structural information can be incorporated by choosing different \(R(\mathbf{\mathcal{X}})\). If \(\lambda=0\), \(R(\mathbf{\mathcal{X}})\) is no longer effective. In the experimental results (Sec. IV.3), we demonstrate using the total variation (TV) regularizer to capture spatial correlations among sound speeds. The model parameters \(\mathbf{\Theta}\) can be updated using the calculated gradients, denoted as \(\nabla_{\mathbf{\Theta}}f(\mathbf{\Theta})\), via the gradient descent method [24]. This algorithm is widely used in deep learning and is straightforward.
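To make the objective and the plain gradient-descent update concrete, here is a deliberately simplified numpy sketch: the field \(\mathbf{\mathcal{X}}\) itself is treated as the parameter (rather than the TNN weights \(\mathbf{\Theta}\)), \(\lambda\) is set to zero in the descent, and the TV regularizer (defined later in Sec. IV.3) is shown only as a standalone function. This is a toy of the optimization scheme, not the paper's full algorithm:

```python
import numpy as np

def tv(X):
    # First-order total variation of a 3D tensor (the regularizer used in
    # Sec. IV.3): sum of absolute forward differences along each mode.
    return sum(np.abs(np.diff(X, axis=ax)).sum() for ax in range(3))

rng = np.random.default_rng(0)
I = J = K = 6
X_true = rng.normal(size=(I, J, K))
O = (rng.random((I, J, K)) < 0.5).astype(float)   # random sampling mask
Y = O * X_true                                     # noise-free observations

# Gradient descent on f(X) = ||Y - O*X||_F^2 with lambda = 0:
# grad f = 2 * O * (O*X - Y).
X = np.zeros((I, J, K))
eta = 0.25
for _ in range(200):
    X = X - eta * 2.0 * O * (O * X - Y)

print(np.linalg.norm(O * (X - X_true)) < 1e-6)  # True: observed entries are fitted
print(tv(np.ones((3, 3, 3))))                   # 0.0: a constant field has zero TV
```

In the full method, the gradient flows through the TNN layers into \(\mathbf{\Theta}\), which is what automatic differentiation handles.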
The gradients of the loss function with respect to the model parameters \(\mathbf{\Theta}\) can be obtained by utilizing the automatic differentiation technique in TensorFlow [45]. Appendix C provides some key steps of the derivation of the gradients. The update rule for the model parameters is given as follows: \[\mathbf{\Theta}_{k+1}\leftarrow\mathbf{\Theta}_{k}-\eta_{k}\nabla_{\mathbf{\Theta}}f(\mathbf{\Theta}_{k}), \tag{21}\] where \(\mathbf{\Theta}_{k}\) and \(\mathbf{\Theta}_{k+1}\) are the model parameters at iterations \(k\) and \(k+1\), respectively, and \(\eta_{k}\) is the learning rate. It is worth noting that the gradient descent method is a first-order method that scales well to big data [46]. Advanced acceleration schemes such as AdaGrad, RMSProp, and Adam can be used to further speed up the convergence [47]. We summarize the proposed algorithm in **Algorithm 1**. Note that the tensor \(\mathbf{\mathcal{G}}\) is a model parameter that is optimized during the reconstruction, rather than an input of the reconstruction algorithm. ## IV Numerical Results and Discussions In this section, numerical results using real-life ocean 3D SSF data are presented to showcase the encouraging reconstruction performance of the proposed tensor neural network-aided reconstruction method (labeled as TNN). ### Experimental Settings **3D SSF Data**: The 3D South China Sea (SCS) data \(\mathbf{\mathcal{X}}\in\mathbb{R}^{20\times 20\times 20}\) on December 21, 2012, is considered in this paper and illustrated in Fig. 5. The data were derived from 3D conductivity, temperature, and depth (CTD) measurements using the hybrid coordinate ocean model (HYCOM). The spatial coverage of the dataset is 152 km \(\times\) 152 km \(\times\) 190 m, with a horizontal resolution of 8 km and a vertical resolution of 10 m.
**Sound Speed Measurements**: Following the configuration of recent ocean observing systems (e.g., the Argo program [5] and Ocean Internet of Things [48]), the sound speed measurements are randomly collected over the 3D spatial region. The sampling ratio is defined as \(\rho=\frac{\sum_{i,j,k}\mathbf{\mathcal{O}}_{i,j,k}}{IJK}\). Examples of the sampled SSF under different sampling ratios in one single Monte-Carlo trial are shown in Fig. 6. In addition, the sampled data are assumed to be corrupted by i.i.d. Gaussian noise with zero mean and standard deviation \(\sigma\). **Performance measure**: The reconstruction performances of different methods are assessed by the root mean squared error (RMSE), given by \[\text{RMSE}=\sqrt{\frac{1}{T}\|\mathbf{\mathcal{\hat{X}}}-\mathbf{\mathcal{X}}\|_{ \text{F}}^{2}}, \tag{22}\] where \(\mathbf{\mathcal{X}}\) is the ground truth; \(\mathbf{\mathcal{\hat{X}}}\) is the reconstructed SSF; and \(T=IJK\) denotes the total number of SSF data entries. The RMSEs were averaged over three Monte-Carlo trials with different sampling patterns. All the experiments are conducted on a computer with a 2.2 GHz 6-Core Intel i7 CPU. **Training Optimizer**: The gradient computation was carried out using the automatic differentiation mechanism of TensorFlow. To optimize the proposed TNN model, we use the popular Adam optimizer with an initial learning rate of 0.005 and fix the number of iterations at 15,000. ### Validating Reconstruction Error Analysis To validate the reconstruction error analysis results presented in Sec. II, numerical results are first presented in this subsection before comparing the proposed algorithm with SOTA methods. Specifically, we compare the reconstruction performance of the proposed tensor neural network-aided algorithm with the methods that use only deep learning models or tensor models.
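The sampling ratio and RMSE defined in Sec. IV.1 can be sketched as follows; the "reconstruction" here is a synthetic stand-in (a uniform offset of the truth), used only to exercise the formulas on a grid matching the SCS data cube:

```python
import numpy as np

rng = np.random.default_rng(1)
I = J = K = 20                              # grid size of the SCS data cube

X = rng.normal(size=(I, J, K))              # stand-in for the true SSF
Xhat = X + 0.1                              # stand-in "reconstruction"

# Random sampling mask at a target ratio; rho = sum(O) / (I*J*K).
O = (rng.random((I, J, K)) < 0.3).astype(float)
rho = O.sum() / (I * J * K)

# RMSE per Eq. (22).
T = I * J * K
rmse = np.sqrt(np.linalg.norm(Xhat - X) ** 2 / T)
print(round(rmse, 10))  # 0.1: a uniform 0.1 offset gives RMSE exactly 0.1
```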
The results demonstrate the importance of striking the right balance between model conciseness and expressiveness for achieving superior reconstruction performance. In this subsection, no extra prior information is included in the TNN for a fair comparison, i.e., the regularization parameter \(\lambda\) is set to zero. Unless otherwise specified, \(\sigma\) is set to 0.1, indicating that the RMSEs of sound speed measurements/estimates mostly lie in the range of 0.1-0.3 m/s, a moderate range according to our real-life sea trials. The reconstruction performances under a wider range of noise powers are reported in the next subsection. ## 1 Conciseness Matters First, the importance of conciseness in reconstruction is demonstrated by comparing the reconstruction performances of different neural network-based methods. The baseline networks used for this comparison were a three-layer multi-layer perceptron (MLP) and a three-layer convolutional neural network (CNN). In addition, a three-layer tensor neural network (TNN) was used for the proposed method. The detailed network architectures are illustrated in Fig. 7. The reconstruction performances of these network-based methods under different sampling ratios are presented in Table 1. For visualization, Fig. 8 presents the recovered SSF of different methods under a sampling ratio of 0.3 in one Monte-Carlo trial. The results presented in Table 1 and Fig. 8 indicate that the SSF reconstruction performance of the MLP is poor. This is due to the large number of parameters in the MLP (around 8 million), which makes it difficult to learn the optimal model from limited and noisy measurements (e.g., 2400), resulting in a significant identification error \(E_{2}\). In contrast, the CNN, which has fewer parameters (1565), can leverage the spatial smoothness of the SSF and exhibits better reconstruction performance. However, the CNN does not exploit the 3D spatial correlations among sound speeds as well as the TNN does.
Specifically, the proposed TNN model, utilizing tensor computation, can effectively capture the 3D spatial correlations with fewer parameters (875), resulting in significantly better reconstruction performance. Additionally, we compare the reconstruction performance of TNN and CNN with 3D convolutions. Fig. 9 shows the architecture of the three-layer CNN model with 3D convolutions, while the three-layer TNN model has the same architecture as the one in Fig. 7(c). Here we denote the CNN models with \(C=10\) and \(C=20\) as CNN1 and CNN2, respectively, where \(C\) is the number of kernels in the hidden layer. Table 2 presents the averaged reconstruction RMSEs of different models under various sampling ratios.

Figure 5: Illustration of the 3D SSF data.

Figure 6: Measurements of SSF under different sampling ratios in one single Monte-Carlo trial.

Our findings are as follows: 1) TNN achieves the best performance in all scenarios. While CNN models are capable of exploiting spatial correlations, they do not fully leverage the intrinsic low-rankness of the SSF. On the other hand, the TNN model, which nonlinearly concatenates several low-rank tensor models, better exploits the global coherence of the SSF. Notably, even with the minimum number of model parameters, the TNN model achieves the best reconstruction performance among the three models when the SSF is fully observed (\(\rho=1\)). This result indicates that the TNN model has a lower representation error because the identification error is negligible when \(\rho=1\), demonstrating the strong expressiveness of the TNN model for SSF representation. 2) CNN2 outperforms CNN1 in all scenarios. CNN2 has more parameters and higher expressive power, resulting in a lower representation error. Surprisingly, CNN2 also yields better reconstruction results under sparse sampling conditions (e.g., \(\rho=0.3\)).
This could be attributed to the inductive bias of the network architecture, allowing it to produce reasonable outcomes even for highly ill-posed inverse problems, as evidenced by recent works in the deep learning literature (see Ref. [49]). \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \hline \(\rho\) & \(N\) & **MLP**(\(\approx\) 8M) & **CNN**(1565) & **TNN**(875) \\ \hline 0.2 & 1600 & 5.67 & 0.63 & **0.50** \\ \hline 0.3 & 2400 & 5.37 & 0.59 & **0.34** \\ \hline 0.4 & 3200 & 4.94 & 0.56 & **0.28** \\ \hline \hline \end{tabular} \end{table} Table 1: RMSEs of different neural network-aided methods under different sampling ratios \(\rho\). The number of measurements \(N\) is presented in the second column, and the number of parameters for each model is in parentheses. Figure 8: Visual effects of reconstructed SSF of different neural network-based methods at depths 0 m, 90 m, and 190 m under a sampling ratio of 0.3 in one single Monte-Carlo trial. The RMSEs are shown at the top. Figure 7: The network architecture of (a) MLP, (b) CNN, (c) TNN. The number of channels C is set to 10. The MLP is composed of three layers with dimensions of 125, 1000, and 8000, respectively. The convolutional layers have a kernel size of (2,2), and the Upsampling layers use the nearest-neighbor interpolation method. For all neural networks, the ReLU activation function is chosen for all layers except for the final layer, which uses the tanh activation function. ## 2 Expressiveness Matters Then, experiments were conducted to demonstrate the effectiveness of enhancing the model's representation capability (expressiveness) in improving reconstruction performance. We compare the proposed model with two Tucker models with different architectures. The first model, named Tucker1, has the same structure as the TNN model but without non-linearity (i.e., using linear activation functions). The second model, named Tucker2, is just a tensor Tucker decomposition model (defined in Eq.
(13)) with the dimension of the core tensor set to \((7,8,8)\). The number of parameters in Tucker2 is \(20\times(7+8+8)+7\times 8\times 8=908\), while TNN and Tucker1 both have \((20\times 10+10\times 5)\times 3+5\times 5\times 5=875\) parameters. Therefore, the three models have a similar number of parameters. Table 3 presents the average RMSEs of the three models under different sampling ratios. Visualization of the reconstructed SSFs in one Monte-Carlo trial under \(\rho=0.3\) is shown in Fig. 10. Based on the experimental results shown in Table 3 and Fig. 10, we can draw the following conclusions. First, when the number of measurements is relatively large (e.g., \(N=3200\)), Tucker1 has the highest RMSE (poorest reconstruction) due to its limited degrees of freedom. According to Eq. (18), Tucker1 is mathematically equivalent to a Tucker decomposition model (defined in Eq. (13)) with a core tensor of dimension (5, 5, 5), which results in a larger representation error \(E_{1}\) compared to the other two models. Second, although Tucker2 has slightly more parameters than TNN, it still has a larger RMSE due to its limited representation capability imposed by the multi-linear form. This demonstrates the importance of introducing non-linear activation functions in the model. Finally, the proposed TNN model achieves the best performance when the number of measurements is not too small, and shows a similar RMSE to Tucker1 under a small sampling ratio (e.g., \(\rho=0.2\)), thanks to its high expressive power and concise structure. ## 3 Recoverability, Trade-offs between being "Deeper" or "Wider" To evaluate the recoverability and draw more insights into the proposed model, we compare the reconstruction performance of TNN models with different configurations (see Table 4) assuming no observation noise. Fig. 11 shows the RMSEs under three different sampling ratios. More experimental results can be found in Appendix D. It can be seen from Fig.
11 that the recovery/reconstruction RMSEs of the proposed TNN models (with different layer numbers/configurations) keep decreasing to a very small number (e.g., \(<0.1\)) as the sampling ratio increases (i.e., with more samples). This suggests that the recoverability of the proposed model is highly likely to be guaranteed when samples are abundant and clean. On the other hand, despite having the largest number of layers, TNN1 gives the highest error when \(\rho=1\) (i.e., the SSF is fully observed). The reason is that TNN1 has the smallest number of model parameters among the three models, as shown in Table 4.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline \hline \(\rho\) & TNN(875) & CNN1(1106) & CNN2(3686) \\ \hline 0.3 & **0.35** & 0.57 & 0.46 \\ \hline 0.5 & **0.23** & 0.51 & 0.40 \\ \hline 1 & **0.21** & 0.48 & 0.37 \\ \hline \hline \end{tabular} \end{table} Table 2: The averaged RMSEs of TNN and 3D CNN models under different sampling ratios. The number of model parameters is in parentheses.

Figure 10: Visual effects of reconstructed SSF of different tensor-based models at depths 0 m, 90 m, and 190 m under a sampling ratio of 0.3 in one single Monte-Carlo trial. The RMSEs are shown at the top.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \hline \(\rho\) & \(N\) & **Tucker1**(875) & **Tucker2**(908) & **TNN**(875) \\ \hline 0.2 & 1600 & **0.47** & 1.20 & 0.50 \\ \hline 0.3 & 2400 & 0.43 & 0.70 & **0.34** \\ \hline 0.4 & 3200 & 0.42 & 0.34 & **0.28** \\ \hline \hline \end{tabular} \end{table} Table 3: RMSEs of different tensor-based methods under different sampling ratios. The number of measurements \(N\) is presented in the second column, and the number of parameters for each model is in parentheses.

Figure 9: The architecture of the 3D CNN model, where \(C\) is the number of kernels in the hidden layer.
Generally, making the TNN "deeper" can significantly reduce the number of model parameters at the cost of slightly increasing the representation error. On the other hand, making the TNN "wider" can effectively increase expressiveness while also increasing the identification error, as seen in TNN3 in Table 4 and Fig. 11. The main idea behind the TNN model is to hierarchically reduce the tensor dimensionality through non-linear approximation, allowing for effective parameter reduction without significantly hampering expressiveness. The experimental results indicate that designing the TNN model involves trade-offs between being wider or deeper. Although it is challenging to determine the optimal sweet spot theoretically, we can offer some practical suggestions on how to strike a good balance in practice: * It is advisable to keep the number of unknown model parameters smaller than the number of samples, to help avoid an ill-posed reconstruction problem. * Consider increasing the dimensionality of the core tensor layer by layer when the number of layers is fixed. For example, the dimensionality of the \(l\)th layer can be set to twice that of the \((l-1)\)th layer. This technique has shown promise in reducing the number of parameters while maintaining expressiveness. * Recent studies have suggested that a one-layer Tucker model can effectively capture the main pattern of sound speeds, and adding just two or three more layers may be sufficient to capture non-linear variations and reduce the number of parameters. Consequently, we recommend setting the number of TNN layers to 3 or 4. It should be noted that these configurations were tested under the experimental settings outlined in Sec. IV.1, and the optimal model configuration may vary depending on the specific application or scale.
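The parameter counts in Table 4 follow from simple bookkeeping: the core tensor plus, for each layer, three per-mode factor matrices. A small sketch reproducing the numbers, under our reading that the first listed dimensionality in Table 4 is the core:

```python
def tnn_params(dims):
    # Parameter count for a TNN with cubic layers: the core tensor
    # (dims[0]^3) plus three factor matrices of shape (d_next, d_prev)
    # per layer transition.
    core = dims[0] ** 3
    weights = sum(3 * b * a for a, b in zip(dims, dims[1:]))
    return core + weights

print(tnn_params([5, 10, 20]))   # 875  (TNN1)
print(tnn_params([10, 20]))      # 1600 (TNN2)
print(tnn_params([15, 20]))      # 4275 (TNN3)
```

This makes the deeper-vs-wider trade-off tangible: TNN1 reaches a \(20^3\) output with the fewest parameters by growing the core gradually, while TNN3 starts wider and pays for it with a much larger core.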
### Enhancement by TV Regularization In this subsection, experiments were conducted to demonstrate how the reconstruction performance of TNN can be further boosted by incorporating structural information through the regularizer. For illustration purposes, the widely used total variation (TV) [25] regularizer was adopted to enforce spatial smoothness among sound speeds. Specifically, the TV regularizer is defined as follows: \[\begin{split} R(\mathbf{\mathcal{X}})&=\sum_{i,j,k}(| \mathbf{\mathcal{X}}(i+1,j,k)-\mathbf{\mathcal{X}}(i,j,k)|+|\mathbf{\mathcal{X}}(i,j+1,k) \\ &-\mathbf{\mathcal{X}}(i,j,k)|+|\mathbf{\mathcal{X}}(i,j,k+1)-\mathbf{ \mathcal{X}}(i,j,k)|).\end{split} \tag{23}\] The reconstructed results of the proposed TNN model with the TV regularizer, denoted as TNN-TV, are presented in Fig. 12. The hyper-parameter \(\lambda\) was manually tuned to optimize performance. The results demonstrate that TNN-TV consistently outperforms TNN in SSF reconstruction, particularly under very sparse sampling (e.g., \(\rho=0.1\)). Note that when the sampling ratio \(\rho\geq 0.3\), the measurements provide sufficient information for learning the model parameters, and hence TNN and TNN-TV have similar performances. In this case, \(\lambda\) can be set to nearly zero, and TNN-TV gradually reduces to TNN (see Eq. (19)). Since TNN is a special case of TNN-TV, in the next subsection, we only consider TNN-TV and refer to it as TNN for simplicity.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline \hline Name & L & Dimensionality & M \\ \hline TNN1 & 3 & (5, 5, 5), (10, 10, 10), (20, 20, 20) & 875 \\ \hline TNN2 & 2 & (10, 10, 10), (20, 20, 20) & 1600 \\ \hline TNN3 & 2 & (15, 15, 15), (20, 20, 20) & 4275 \\ \hline \hline \end{tabular} \end{table} Table 4: The configurations of TNN models. M is the number of parameters.

Figure 11: The averaged RMSEs of TNN models with different configurations under different sampling ratios.

### Comparisons with SOTA methods Previous subsections have demonstrated the conciseness and expressiveness of the proposed model. In this subsection, we compare the reconstruction performance of the proposed algorithm with several SOTA SSF reconstruction methods. **SOTA Methods**: The selected SOTA methods include: 1) the low-rank matrix-based method represented by the recently proposed graph-guided Bayesian matrix completion (BMCG)[8]; 2) tensor-based methods represented by alternating least squares for the Tucker model (Tucker-ALS), low-rank tensor completion (LRTC)[50], and LRTC with total variation (LRTC-TV)[51]; and 3) statistical learning-based methods represented by Gaussian process regression (GPR)[14]. **Model Settings**: The kernel of GPR is the widely used radial basis function (RBF), with the hyper-parameters learned via evidence optimization[18]. In BMCG, the graph Laplacian matrix is constructed using the same kernel[8]. The tensor rank surrogate function used in LRTC is the tensor nuclear norm, defined as \(\sum_{k=1}^{K}\|\widehat{\mathcal{X}}(:,:,k)\|_{*}\)[50], where \(\widehat{\mathcal{X}}\) is the Fourier transform of \(\mathcal{X}\) along the third mode. **Results and Discussions**: The reconstruction RMSEs of different methods under different sampling ratios and noise powers are shown in Table 5. Visual inspections of reconstructed SSFs and error surfaces under different scenarios are presented in Figs. 13-16. As can be seen, the proposed tensor neural network-aided reconstruction algorithm outperforms all other methods in terms of reconstruction accuracy, regardless of the sampling ratio and noise power. The reason is that the proposed model has high expressive power with relatively few parameters, achieving an outstanding trade-off between \(E_{1}\) and \(E_{2}\), as discussed in Sec. III. Additionally, the TNN model exhibits strong noise rejection (as proved in **Proposition 1**), making it robust against measurement noise.
In the following, we provide additional discussions to draw further insights from the results. * The matrix-based method, BMCG, yields good reconstruction results for the horizontal SSF slices by explicitly exploring the spatial smoothness through graph models, as discussed in Sec. III of Ref. [8]. However, the vertical correlations of sound speeds are ignored in this method, leading to the loss of fine-grained details in the vertical dimension. This can be observed in the error surfaces of the vertical dimension shown in Fig. 17. * Among tensor completion methods (i.e., Tucker-ALS, LRTC, and LRTC-TV), Tucker-ALS gives the best reconstruction performance. It outperforms the matrix-based method BMCG when the observation is moderate (e.g., \(\rho=0.3\)) since it can effectively exploit the multi-dimensional correlations. However, the multi-linear form has limited the expressive power of Tucker-ALS, resulting in a considerable \(E_{1}\), making the associated reconstruction performance inferior to the proposed one. Moreover, these tensor completion methods are sensitive to the sampling pattern and exhibit significantly degraded performance when the measurements are sparse. * GPR is a non-parametric statistical model that paves another promising path for reconstructing SSFs. The expressiveness of the kernel determines the representation error \(E_{1}\), while the learning process of the kernel hyper-parameters determines the identification error \(E_{2}\). Although GPR has demonstrated great success in many areas in recent years[38, 52, 3], research on its applications in ocean SSF reconstruction is still in its infancy (including optimal kernel design and hyper-parameter learning). In our experiments, GPR, using the RBF kernel and evidence maximization-based hyper-parameter learning, achieved the second-best performance in almost all scenarios. 
However, to enhance its performance, more powerful kernels and advanced optimization techniques can be employed[53], which require significant research efforts and are beyond the scope of this paper. Moreover, the high computational complexity of GPR limits its widespread application. The scalability of GPR is an active research area in machine learning, as evidenced by recent works[54, 55].

Figure 12: The RMSEs of TNN-TV and TNN under different sampling ratios.

## V Conclusions and Future Directions In this paper, under the unified framework of analyzing the reconstruction error, a tensor neural network-aided reconstruction algorithm is proposed. By leveraging the succinct form of tensor models and the high expressive power of deep neural networks, the proposed TNN model can concisely and accurately represent the 3D SSF, achieving an outstanding balance between the representation error \(E_{1}\) and identification error \(E_{2}\). A simple and efficient algorithm is developed based on the widely adopted gradient descent method. Experiments using real-life 3D SSF data demonstrate the effectiveness of the proposed reconstruction error analysis framework and showcase the encouraging performance of the proposed algorithm compared to other SOTA reconstruction methods. Our study demonstrated how the proposed model could effectively integrate structural assumptions, as exemplified by the use of the TV regularizer. Future research could explore the incorporation of additional prior information, such as the physical knowledge of ocean processes that drive sound speed variations, to achieve even better reconstruction performance. It would also be valuable to extend the analysis in Ref. [12] to quantify the recoverability of the proposed TNN model and establish the theoretical connection between TNN and CNN with 3D convolutions. Finally, rethinking the reconstruction problem from a Bayesian learning perspective [56, 57] would hold significant research value.
This approach allows for automatic control of model complexity to prevent both overfitting and underfitting, and enables the utilization of uncertainty information to guide future sensing planning. \begin{table} \begin{tabular}{|c|c|c c c c c|} \hline \hline \(\rho\) & \(\sigma\) & **BMCG** & **GPR** & **Tucker-ALS** & **LRTC** & **LRTC-TV** & **TNN** \\ \hline & 0.1 & 1.33 & 0.68 & 1.32 & 1.47 & 1.23 & **0.47** \\ 0.1 & 0.3 & 1.34 & 0.71 & 1.36 & 1.50 & 1.25 & **0.51** \\ & 0.5 & 1.36 & 0.77 & 1.45 & 1.56 & 1.30 & **0.58** \\ \hline & 0.1 & 0.72 & 0.52 & 0.76 & 0.96 & 0.77 & **0.35** \\ 0.2 & 0.3 & 0.74 & 0.56 & 0.80 & 1.02 & 0.82 & **0.39** \\ & 0.5 & 0.77 & 0.61 & 0.84 & 1.12 & 0.89 & **0.46** \\ \hline & 0.1 & 0.56 & 0.44 & 0.42 & 0.64 & 0.53 & **0.30** \\ 0.3 & 0.3 & 0.59 & 0.48 & 0.44 & 0.73 & 0.60 & **0.33** \\ & 0.5 & 0.64 & 0.53 & 0.47 & 0.85 & 0.69 & **0.40** \\ \hline & 0.1 & 0.48 & 0.37 & 0.41 & 0.46 & 0.39 & **0.28** \\ 0.4 & 0.3 & 0.52 & 0.44 & 0.42 & 0.57 & 0.48 & **0.31** \\ & 0.5 & 0.57 & 0.49 & 0.45 & 0.71 & 0.59 & **0.34** \\ \hline \hline \end{tabular} \end{table} Table 5: The average RMSEs over three Monte-Carlo trials of different algorithms under different sampling ratios and noise powers. Figure 13: Visual effects of the reconstructed SSF of different SOTA reconstruction methods in one single Monte-Carlo trial with sampling ratio \(\rho=0.2\) and \(\sigma=0.1\). The RMSEs are shown above the top subfigures. ## VI Acknowledgement This work was supported in part by the National Key R&D Program of China, in part by the National Natural Science Foundation of China under Grant 62001309, in part by the Fundamental Research Funds for the Central Universities, in part by the Zhejiang University Education Foundation Qizhen Scholar Foundation, and in part by Science and Technology on Sonar Laboratory under Grant 6142109KF212204. 
## Appendix A Tensor operation definitions **Definition 1** (mode-n product): _The mode-\(n\) product of a tensor \(\mathbf{\mathcal{A}}\in\mathbb{R}^{I_{1}\times\cdots\times I_{N}}\) and a matrix \(\mathbf{B}\in\mathbb{R}^{J\times I_{n}}\) is defined by_ \[\mathbf{\mathcal{C}}=\mathbf{\mathcal{A}}\times_{n}\mathbf{B}\in\mathbb{R}^{I_{1} \times\cdots\times I_{n-1}\times J\times I_{n+1}\times\cdots\times I_{N}}, \tag{11}\] _whose entries are given by_ \[\mathbf{\mathcal{C}}(i_{1},\cdots,j,\cdots,i_{N})=\sum_{i_{n}=1}^{I_{n}}\mathbf{ \mathcal{A}}(i_{1},\cdots,i_{n},\cdots,i_{N})\mathbf{B}(j,i_{n}). \tag{12}\] **Definition 2** (mode-\(n\) unfolding): _Given a tensor \(\mathbf{\mathcal{A}}\in\mathbb{R}^{I_{1}\times\cdots\times I_{N}}\), its mode-\(n\) unfolding gives a matrix \(\mathbf{A}_{(n)}\in\mathbb{R}^{I_{n}\times\prod_{k=1,k\neq n}^{N}I_{k}}\). Each tensor element \(\mathbf{\mathcal{A}}(i_{1},\cdots,i_{N})\) is mapped to the matrix element \([\mathbf{A}_{(n)}](i_{n},j)\), where \(j=1+\sum_{k=1,k\neq n}^{N}(i_{k}-1)J_{k}\) with \(J_{k}=\prod_{m=1,m\neq n}^{k-1}I_{m}\)._

Figure 14: Visual effects of the reconstructed SSF of different SOTA reconstruction methods in one single Monte-Carlo trial with sampling ratio \(\rho=0.3\) and \(\sigma=0.1\). The RMSEs are shown above the top subfigures.

Figure 15: Visual effects of the reconstructed SSF of different SOTA reconstruction methods in one single Monte-Carlo trial with sampling ratio \(\rho=0.3\) and \(\sigma=0.5\). The RMSEs are shown above the top subfigures.
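Definitions 1 and 2 can be realized in a few lines of numpy; the sketch below (function names are ours) follows the column-major index map of Definition 2:

```python
import numpy as np

def unfold(a, n):
    # Mode-n unfolding (Definition 2): mode n indexes the rows; the
    # remaining modes are flattened column-major, matching the map
    # j = 1 + sum_{k != n} (i_k - 1) J_k.
    return np.moveaxis(a, n, 0).reshape(a.shape[n], -1, order='F')

def mode_n_product(a, b, n):
    # Mode-n product (Definition 1): contract mode n of A with the
    # columns of B, then move the new mode of size J back to slot n.
    return np.moveaxis(np.tensordot(a, b, axes=(n, 1)), -1, n)
```

A quick consistency check is the standard identity \((\mathbf{\mathcal{A}}\times_{n}\mathbf{B})_{(n)}=\mathbf{B}\mathbf{A}_{(n)}\).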
**Definition 3** (Tucker decomposition): _For a tensor \(\mathbf{\mathcal{A}}\in\mathbb{C}^{I_{1}\times\cdots\times I_{N}}\), the Tucker decomposition is defined as_ \[\mathbf{\mathcal{A}}=\mathbf{\mathcal{G}}\times_{1}\mathbf{U}^{(1)}\times_{2}\mathbf{ U}^{(2)}\times_{3}\cdots\times_{N}\mathbf{U}^{(N)}, \tag{10}\] _where each factor matrix \(\mathbf{U}^{(n)}\in\mathbb{C}^{I_{n}\times R_{n}},\forall n=1,\cdots,N\). The core tensor \(\mathbf{\mathcal{G}}\in\mathbb{C}^{R_{1}\times\cdots\times R_{N}}\). The tuple \((R_{1},\cdots,R_{N})\) is known as the multi-linear rank._ **Definition 4** (tensor inner product): _The inner product of tensors \(\mathbf{\mathcal{A}}\) and \(\mathbf{\mathcal{B}}\) with the same size is formulated as follows:_ \[c=\langle\mathbf{\mathcal{A}},\mathbf{\mathcal{B}}\rangle=\langle\mathrm{vec}(\mathbf{ \mathcal{A}}),\mathrm{vec}(\mathbf{\mathcal{B}})\rangle\in\mathbb{R}, \tag{11}\] _where \(\mathrm{vec}(\mathbf{\mathcal{A}})\) is the vectorization of \(\mathbf{\mathcal{A}}\)._ ## Appendix B Proof and derivation ### Derivation of Eq. (4) According to its definition, the reconstruction error \(E\) can be written as \[\begin{split} E&=\|\mathbf{\mathcal{X}}-\hat{\mathbf{\mathcal{X}}}\|_{\mathrm{F}}^{2}\\ &=\|\mathbf{\mathcal{X}}-\mathbf{\mathcal{D}}(\hat{\mathbf{\Theta}})\|_{\mathrm{F}}^{2}\\ &=\|\mathbf{\mathcal{X}}-\mathbf{\mathcal{D}}(\mathbf{\Theta}^{*})+\mathbf{\mathcal{D}}(\mathbf{\Theta}^{*})-\mathbf{\mathcal{D}}(\hat{\mathbf{\Theta}})\|_{\mathrm{F}}^{2}\\ &=\underbrace{\|\mathbf{\mathcal{X}}-\mathbf{\mathcal{D}}(\mathbf{\Theta}^{*})\|_{\mathrm{F}}^{2}}_{\triangleq\ E_{1}}+\underbrace{\|\mathbf{\mathcal{D}}(\mathbf{\Theta}^{*})-\mathbf{\mathcal{D}}(\hat{\mathbf{\Theta}})\|_{\mathrm{F}}^{2}}_{\triangleq\ E_{2}}+\\ &\quad\underbrace{2\langle\mathbf{\mathcal{X}}-\mathbf{\mathcal{D}}(\mathbf{\Theta}^{*}),\mathbf{\mathcal{D}}(\mathbf{\Theta}^{*})-\mathbf{\mathcal{D}}(\hat{\mathbf{\Theta}})\rangle}_{\triangleq\ \epsilon}.\end{split} \tag{12}\] Thus the reconstruction error can be decomposed as \(E=E_{1}+E_{2}+\epsilon\) and Eq. (4) holds. ### Derivation of Eq. (5) If \(\mathbf{\mathcal{X}}=\mathbf{\mathcal{D}}(\mathbf{\Theta}^{*})\), we have \(E_{1}=\|\mathbf{\mathcal{X}}-\mathbf{\mathcal{D}}(\mathbf{\Theta}^{*})\|_{\mathrm{F}}^{2}=0\) and \(\epsilon=2\langle\mathbf{0},\mathbf{\mathcal{D}}(\mathbf{\Theta}^{*})-\mathbf{\mathcal{D}}( \hat{\mathbf{\Theta}})\rangle=0\). Similarly, if \(\mathbf{\mathcal{D}}(\mathbf{\Theta}^{*})=\mathbf{\mathcal{D}}(\hat{\mathbf{\Theta}})\), then \(E_{2}=\|\mathbf{\mathcal{D}}(\mathbf{\Theta}^{*})-\mathbf{\mathcal{D}}(\hat{\mathbf{\Theta}}) \|_{\mathrm{F}}^{2}=0\) and \(\epsilon=2\langle\mathbf{\mathcal{X}}-\mathbf{\mathcal{D}}(\mathbf{\Theta}^{*}),\mathbf{0} \rangle=0\).

Figure 16: Visual effects of the error surfaces in one single Monte-Carlo trial of different methods under \(\rho=0.2\) and \(\sigma=0.1\). The RMSEs are shown above the top subfigures.

Figure 17: The error surfaces of tensor-based method TNN and matrix-based method BMCG in the vertical dimension in one Monte-Carlo trial.
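The decomposition \(E=E_{1}+E_{2}+\epsilon\) derived above is a purely algebraic identity, so it can be verified numerically; the sketch below uses random stand-ins for \(\mathbf{\mathcal{X}}\), \(\mathbf{\mathcal{D}}(\mathbf{\Theta}^{*})\), and \(\mathbf{\mathcal{D}}(\hat{\mathbf{\Theta}})\):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4, 4))       # ground-truth field X
d_star = rng.standard_normal((4, 4, 4))  # best representation D(Theta*)
d_hat = rng.standard_normal((4, 4, 4))   # identified model D(Theta_hat)

e = np.sum((x - d_hat) ** 2)                       # total error E
e1 = np.sum((x - d_star) ** 2)                     # representation error E1
e2 = np.sum((d_star - d_hat) ** 2)                 # identification error E2
eps = 2 * np.sum((x - d_star) * (d_star - d_hat))  # cross term epsilon
```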
The TNN can exploit multi-dimensional correlations, thus having smoother error surfaces. ### Proof of Proposition 1 We start by rewriting the term \(\|\mathbf{\mathcal{X}}-\mathbf{\mathcal{E}}\|_{\mathrm{F}}^{2}\) in a convenient form. Specifically, since each element of \(\mathbf{\mathcal{E}}\) follows a zero-mean i.i.d. Gaussian distribution, we have \(\|\mathbf{\mathcal{X}}-\mathbf{\mathcal{E}}\|_{\mathrm{F}}^{2}=\|\mathrm{vec}(\mathbf{ \mathcal{X}})-\zeta\|_{2}^{2}\), where \(\mathrm{vec}(\mathbf{\mathcal{X}})\in\mathbb{R}^{T}\) is the vectorization of \(\mathbf{\mathcal{X}}\) and \(\zeta\sim\mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I})\), where \(\mathbf{I}\) is a \(T\times T\) identity matrix. Consider a set of parameters \(\mathbf{\Theta}=\{\mathbf{\mathcal{G}},\mathbf{W}_{1}^{(1)},\mathbf{W}_{1}^{(2)}, \mathbf{W}_{1}^{(3)}\}\), and denote by \(\mathbf{\mathcal{O}}\) the indicator tensor that contains one if \(\mathbf{\mathcal{X}}_{i,j,k}>0\) and zero otherwise. Then we have \[\mathrm{vec}(\mathbf{\mathcal{X}}) =\mathrm{vec}\left(\mathrm{ReLU}(\mathbf{\mathcal{G}}\times_{1} \mathbf{W}_{1}^{(1)}\times_{2}\mathbf{W}_{1}^{(2)}\times_{3}\mathbf{W}_{1}^{( 3)})\right) \tag{14}\] \[=\mathrm{vec}\left(\mathbf{\mathcal{O}}*(\mathbf{\mathcal{G}}\times_{1} \mathbf{W}_{1}^{(1)}\times_{2}\mathbf{W}_{1}^{(2)}\times_{3}\mathbf{W}_{1}^{( 3)})\right)\] \[=\underbrace{\left[\mathrm{diag}(\mathrm{vec}(\mathbf{\mathcal{O}}))( \mathbf{W}_{1}^{(3)}\otimes\mathbf{W}_{1}^{(2)}\otimes\mathbf{W}_{1}^{(1)}) \right]}_{\triangleq\mathbf{B}\in\mathbb{R}^{T\times R}}\mathrm{vec}(\mathbf{ \mathcal{G}})\] \[=\mathbf{B}\,\mathrm{vec}(\mathbf{\mathcal{G}}).\] Therefore, \(\mathrm{vec}(\mathbf{\mathcal{X}})\) is a weighted combination of \(R\) vectors in \(\mathbb{R}^{T}\) and lies in an at-most-\(R\)-dimensional subspace \(S\) of \(\mathbb{R}^{T}\).
It follows that \[\min_{\mathbf{\Theta}}\|\mathbf{\mathcal{X}}-\mathbf{\mathcal{E}}\|_{\mathrm{F}}^{2}\geq\|P _{S^{c}}\zeta\|_{2}^{2}, \tag{15}\] where \(P_{S^{c}}\zeta\) is the projection of \(\zeta\) outside the subspace \(S\). Next, we make use of the following lemma to give a bound on the projection of the noise \(\zeta\) onto a subspace. **Lemma 1**: Let \(S\subset\mathbb{R}^{T}\) be a subspace with dimension \(R\). Let \(\zeta\sim\mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I})\) and \(\beta\geq 1\). Then, \[P\left[\frac{\|P_{S^{c}}\zeta\|_{2}^{2}}{\|\zeta\|_{2}^{2}}\geq 1-\frac{10 \beta R}{T}\right]\geq 1-e^{-\beta R}-e^{-T/16}. \tag{16}\] Proof.: From Lemma 1 in Ref. [58], if \(X\sim\chi_{T}^{2}\), then \[P[X-T\geq 2\sqrt{Tx}+2x] \leq e^{-x}, \tag{17}\] \[P[X\leq T-2\sqrt{Tx}] \leq e^{-x}. \tag{18}\] We have \(\frac{\|P_{S^{c}}\zeta\|_{2}^{2}}{\|\zeta\|_{2}^{2}}=1-\frac{\|P_{S}\zeta\|_{2} ^{2}}{\|\zeta\|_{2}^{2}}\). Note that \(\|P_{S}\zeta\|_{2}^{2}\sim\chi_{R}^{2}\) and \(\|\zeta\|_{2}^{2}\sim\chi_{T}^{2}\). Applying inequality (17) to bound \(\|P_{S}\zeta\|_{2}^{2}\) and inequality (18) to \(\|\zeta\|_{2}^{2}\), a union bound gives the claim. With Lemma 1 and inequality (15), we have \[P\left[\frac{1}{\|\mathbf{\mathcal{E}}\|_{\mathrm{F}}^{2}}(\min_{\mathbf{\Theta}}\|\bm {\mathcal{X}}-\mathbf{\mathcal{E}}\|_{\mathrm{F}}^{2})\geq 1-\frac{10\beta R}{T} \right]\geq 1-e^{-\beta R}-e^{-T/16}, \tag{19}\] for \(\beta\geq 1\). Let \(\beta=1\), then \[P\left[\min_{\mathbf{\Theta}}\|\mathbf{\mathcal{X}}-\mathbf{\mathcal{E}}\|_{\mathrm{F}}^{2 }\geq\|\mathbf{\mathcal{E}}\|_{\mathrm{F}}^{2}(1-\frac{10R}{T})\right]\geq 1-e^{-R}-e^{-T/16}. \tag{20}\] Thus Proposition 1 holds. ### Derivation of Eq. (18) Before giving the derivation of Eq. (18), we present a few facts regarding \(n\)-mode matrix products.
Specifically, for distinct modes in a series of multiplications, the order of the multiplication is irrelevant, i.e., \[\mathbf{\mathcal{X}}\times_{m}\mathbf{A}\times_{n}\mathbf{B}=\mathbf{\mathcal{X}} \times_{n}\mathbf{B}\times_{m}\mathbf{A}\ \ (m\neq n). \tag{21}\] If the modes are the same, then \[\mathbf{\mathcal{X}}\times_{m}\mathbf{A}\times_{m}\mathbf{B}=\mathbf{\mathcal{X}} \times_{m}(\mathbf{B}\mathbf{A}). \tag{22}\] Given these facts, an \(L\)-layer TNN with a linear activation function \(\varsigma(x)=ax\) can be written as \[\begin{split}\mathbf{\mathcal{X}}&=\varsigma(\cdots\varsigma\left(\mathbf{\mathcal{G}}\times_{1}\mathbf{W}_{1}^{(1)}\times_{2}\mathbf{W}_{1}^{(2)}\times_{3}\mathbf{W}_{1}^{(3)}\right)\cdots\\&\quad\times_{1}\mathbf{W}_{L}^{(1)}\times_{2}\mathbf{W}_{L}^{(2)}\times_{3}\mathbf{W}_{L}^{(3)})\\&=a^{L}\,\mathbf{\mathcal{G}}\times_{1}\mathbf{W}_{1}^{(1)}\cdots\times_{1}\mathbf{W}_{L}^{(1)}\times_{2}\mathbf{W}_{1}^{(2)}\cdots\\&\quad\times_{2}\mathbf{W}_{L}^{(2)}\times_{3}\mathbf{W}_{1}^{(3)}\cdots\times_{3}\mathbf{W}_{L}^{(3)}\\&=a^{L}\,\mathbf{\mathcal{G}}\times_{1}(\mathbf{W}_{L}^{(1)}\cdots\mathbf{W}_{1}^{(1)})\times_{2}(\mathbf{W}_{L}^{(2)}\cdots\mathbf{W}_{1}^{(2)})\\&\quad\times_{3}(\mathbf{W}_{L}^{(3)}\cdots\mathbf{W}_{1}^{(3)})\\&=\mathbf{\mathcal{G}}\times_{1}\tilde{\mathbf{W}}^{(1)}\times_{2}\tilde{\mathbf{W}}^{(2)}\times_{3}\tilde{\mathbf{W}}^{(3)},\end{split} \tag{23}\] where the scalar \(a^{L}\) is absorbed into the merged factors, e.g., \(\tilde{\mathbf{W}}^{(1)}=a^{L}\mathbf{W}_{L}^{(1)}\cdots\mathbf{W}_{1}^{(1)}\). ## Appendix C Derivation of the gradient The gradient with respect to the model parameters can be obtained via the backpropagation technique.[59] Since the gradient of the activation function depends on the specific choice, here we mainly introduce the gradient propagation in the TCL.
By unfolding the tensor to the matrix, we can calculate the gradients with respect to the factor matrices, given by \[\frac{\partial\mathbf{X}_{(k)}^{l+1}}{\partial\mathbf{W}_{l}^{(k)}}=\frac{ \partial\mathbf{W}_{l}^{(k)}\mathbf{X}_{(k)}^{l}(\mathbf{W}_{l}^{(-k)})^{ \mathrm{T}}}{\partial\mathbf{W}_{l}^{(k)}} \tag{24}\] where \(\mathbf{W}_{l}^{(-k)}=\mathbf{W}_{l}^{(3)}\otimes\cdots\mathbf{W}_{l}^{(k+1)} \otimes\mathbf{W}_{l}^{(k-1)}\cdots\mathbf{W}_{l}^{(1)}\), \(k=1,2,3\); \(\mathbf{X}_{(k)}^{l}\) is the mode-\(k\) unfolding of \(\mathbf{\mathcal{X}}_{l}\) and \(\mathbf{X}_{(k)}^{l+1}\) is the mode-\(k\) unfolding of \(\mathbf{\mathcal{X}}_{l+1}\). Similarly, the gradient with respect to the input tensor can be obtained by vectorizing the tensor, given by \[\frac{\partial\mathrm{vec}(\mathbf{\mathcal{X}}_{l+1})}{\partial\mathrm{vec}(\mathbf{ \mathcal{X}}_{l})}=\frac{\partial(\mathbf{W}_{l}^{(3)}\otimes\mathbf{W}_{l}^{(2)} \otimes\mathbf{W}_{l}^{(1)})\mathrm{vec}(\mathbf{\mathcal{X}}_{l})}{\partial \mathrm{vec}(\mathbf{\mathcal{X}}_{l})}. \tag{25}\] Then the gradient of the loss function with respect to the parameters can be calculated by the chain rule. ## Appendix D Further experimental results Table 6 shows the RMSEs of various TNN models at different sampling ratios. The model configurations used are the same as in Table 4.
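The Kronecker-structured vectorization used in Eq. (25) (and in the proof of Proposition 1) rests on the identity \(\mathrm{vec}(\mathbf{\mathcal{X}}\times_{1}\mathbf{W}^{(1)}\times_{2}\mathbf{W}^{(2)}\times_{3}\mathbf{W}^{(3)})=(\mathbf{W}^{(3)}\otimes\mathbf{W}^{(2)}\otimes\mathbf{W}^{(1)})\mathrm{vec}(\mathbf{\mathcal{X}})\) for column-major vectorization, which can be checked numerically:

```python
import numpy as np

def mode_n(a, b, n):
    # Mode-n product A x_n B (see Definition 1 in Appendix A).
    return np.moveaxis(np.tensordot(a, b, axes=(n, 1)), -1, n)

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 3, 4))
w = [rng.standard_normal((5, 2)),
     rng.standard_normal((6, 3)),
     rng.standard_normal((7, 4))]

y = mode_n(mode_n(mode_n(x, w[0], 0), w[1], 1), w[2], 2)
kron = np.kron(w[2], np.kron(w[1], w[0]))  # W3 (x) W2 (x) W1
```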
2306.12817
Physics-guided neural networks for inversion-based feedforward control applied to hybrid stepper motors
Rotary motors, such as hybrid stepper motors (HSMs), are widely used in industries varying from printing applications to robotics. The increasing need for productivity and efficiency without increasing the manufacturing costs calls for innovative control design. Feedforward control is typically used in tracking control problems, where the desired reference is known in advance. In most applications, this is the case for HSMs, which need to track a periodic angular velocity and angular position reference. Performance achieved by feedforward control is limited by the accuracy of the available model describing the inverse system dynamics. In this work, we develop a physics-guided neural network (PGNN) feedforward controller for HSMs, which can learn the effect of parasitic forces from data and compensate for it, resulting in improved accuracy. Indeed, experimental results on an HSM used in printing industry show that the PGNN outperforms conventional benchmarks in terms of the mean-absolute tracking error.
Daiwei Fan, Max Bolderman, Sjirk Koekebakker, Hans Butler, Mircea Lazar
2023-06-22T11:32:03Z
http://arxiv.org/abs/2306.12817v1
Physics-guided neural networks for inversion-based feedforward control applied to hybrid stepper motors* ###### Abstract Rotary motors, such as hybrid stepper motors (HSMs), are widely used in industries varying from printing applications to robotics. The increasing need for productivity and efficiency without increasing the manufacturing costs calls for innovative control design. Feedforward control is typically used in tracking control problems, where the desired reference is known in advance. In most applications, this is the case for HSMs, which need to track a periodic angular velocity and angular position reference. Performance achieved by feed-forward control is limited by the accuracy of the available model describing the inverse system dynamics. In this work, we develop a physics-guided neural network (PGNN) feedforward controller for HSMs, which can learn the effect of parasitic forces from data and compensate for it, resulting in improved accuracy. Indeed, experimental results on an HSM used in printing industry show that the PGNN outperforms conventional benchmarks in terms of the mean-absolute tracking error. ## I Introduction Hybrid stepper motors (HSM) are widely used in industrial automation, such as pick-and-place robots [1, 2], additive manufacturing [3], professional printing applications [4], and more, see, e.g., [5] for an overview. HSMs can be operated in an open-loop configuration using microstepping [6]. However, the open-loop stepping often induces unwanted vibrations and is highly inefficient as it applies high currents to be robust for worst case loads. Consequently, for high-precision applications, closed-loop control schemes are often applied in the form of field-oriented control (FOC) [7, 8], see also [9] for control strategies of the inner current control loop. 
Since FOC requires measurements of both the currents and the angular position of the HSM, significant research has been done on sensorless FOC which does not require the additional angular position sensor, see, e.g., [4, 10]. For motion control systems, reference tracking performance is typically achieved via feedforward control, while feedback control stabilizes the system and rejects disturbances and feedforward imperfections [11]. For rotary motors however, the feedforward control design is largely neglected or restricted to be linear, see, e.g., [7] which employs a linear velocity-acceleration feedforward, or [12] which employs linear feedforward with a disturbance compensator. The complete dynamical behaviour of HSMs constitutes more complex phenomena, such as parasitic torques arising from manufacturing tolerances, as well as torque ripples caused by detent torque and back electromotive forces. Since performance achieved by feedforward control is limited by the accuracy of the model of the inverse system [13], designing a feedforward controller from a linear model intrinsically limits performance. Iterative learning control [14] provides the potential to improve tracking performance further, but requires multiple repetitions of the same reference. Physics-guided neural networks (PGNNs) have potential to improve performance achieved by linear, physics-based, feedforward controllers by accurately identifying the inverse system dynamics from data [15]. PGNNs effectively merge physics-based and NN-based models and thereby result in nonlinear feedforward controllers with improved performance, and the same reliability as physics-based feedforward controllers [16]. This is in contrast to black box NNs, which can fail to learn from presented data. The application of a PGNN feedforward controller to a rotary machine however remains unexplored. Hence, this motivates us to develop PGNN feedforward controllers for improving performance of HSMs. 
To this end, we define a PGNN architecture that embeds a simple, physics-based inverse model of the HSM within a black-box NN. Also, we impose the rotational reproducible behaviour, i.e., the same dynamics is expected for each rotation. With the PGNN architecture defined, the PGNN training identifies or learns the inverse system dynamics of the HSM from an available input-output data set, i.e., requiring measurements of the angular position. Since the PGNN feedforward controller does not require online measurements, it can also be implemented in a sensorless FOC scheme. The developed PGNN feedforward improves performance by a factor \(2\) in terms of the mean-absolute tracking error (MAE) in real-time on an HSM used in printing industry, without requiring additional computational hardware or measurements. ## II Preliminaries ### _First-principle modeling of an HSM_ Fig. 1 shows a schematic overview of the FOC structure with \(dq\)-transformation of the hybrid stepper motor, see, e.g., [17]. Note that, both the position and the current controllers are implemented in discrete-time, which are indicated by the sampler and ZOH blocks. The HSM is subdivided in a mechanical and an electromagnetic part. The mechanical dynamics, indicated with \(G_{\text{me}}\) is modeled using Newton-Euler relations, such that \[J\frac{d^{2}}{dt^{2}}y(t)=F(t)-f_{v}\frac{d}{dt}y(t), \tag{1}\] where \(y(t)\) is the position output at time \(t\in\mathbb{R}_{>0}\), \(J\in\mathbb{R}_{>0}\) the mass moment of inertia, \(f_{v}\in\mathbb{R}_{>0}\) the viscous friction coefficient, and \(F(t)\) the driving torque. The driving torque is modeled as [18] \[F(t)=k_{m}\left(-i_{a}(t)\sin\left(Ny(t)\right)+i_{b}(t)\cos\left(Ny(t)\right) \right), \tag{2}\] where \(k_{m}\in\mathbb{R}_{>0}\) is the motor constant, \(N\in\mathbb{Z}_{>0}\) the number of rotor teeth, and \(i_{a}(t)\) and \(i_{b}(t)\) the current through coils \(a\) and \(b\), respectively. 
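As a quick illustration of the mechanical model (1), it can be integrated with forward Euler; the sketch below assumes ideal commutation so that the driving torque reduces to \(k_{m}i_{q}\) (cf. Eq. (5) below), and all parameter values are hypothetical:

```python
import numpy as np

def simulate_mechanical(iq, j, fv, km, ts):
    # Forward-Euler integration of J*ddot(y) = km*iq - fv*dot(y), Eq. (1),
    # with the q-axis current iq given as a sampled sequence.
    y, v = 0.0, 0.0
    ys = []
    for i in iq:
        acc = (km * i - fv * v) / j
        v += ts * acc
        y += ts * v
        ys.append(y)
    return np.array(ys)
```

With a constant \(i_{q}\), the angular velocity settles at \(k_{m}i_{q}/f_{v}\), where viscous friction balances the driving torque.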
The electromagnetic dynamics is modeled as \[\begin{split} L\frac{d}{dt}i_{a}(t)&=v_{a}(t)-Ri_{ a}(t)+k_{m}\big{(}\frac{d}{dt}y(t)\big{)}\sin\big{(}Ny(t)\big{)},\\ L\frac{d}{dt}i_{b}(t)&=v_{b}(t)-Ri_{b}(t)-k_{m} \big{(}\frac{d}{dt}y(t)\big{)}\cos\big{(}Ny(t)\big{)},\end{split} \tag{3}\] where \(L\in\mathbb{R}_{>0}\) is the inductance, \(R\in\mathbb{R}_{>0}\) the resistance, and \(v_{a}(t)\) and \(v_{b}(t)\) the terminal voltages of coil \(a\) and \(b\), respectively. The latter terms in (3) are the self-induced voltage, also known as the back electromotive force. Since the HSM has two inputs, i.e., the voltages \(v_{a}\) and \(v_{b}\), and only a single output \(y\), often the \(dq\)-transformation [17]\(\Psi_{dq}\) is employed \[\begin{split}\begin{bmatrix}i_{d}(t)\\ i_{q}(t)\end{bmatrix}&=\Psi_{dq}\big{(}y(t)\big{)}\begin{bmatrix}i_{a}(t) \\ i_{b}(t)\end{bmatrix}\\ &:=\begin{bmatrix}\cos\big{(}Ny(t)\big{)}&\sin\big{(}Ny(t)\big{)} \\ -\sin\big{(}Ny(t)\big{)}&\cos\big{(}Ny(t)\big{)}\end{bmatrix}\begin{bmatrix}i_{a }(t)\\ i_{b}(t)\end{bmatrix}.\end{split} \tag{4}\] As a result, we observe that the driving torque in (2) simplifies to \(T(t)=k_{m}i_{q}(t)\), and the mechanical dynamics into \[J\frac{d^{2}}{dt^{2}}y(t)=k_{m}i_{q}(t)-f_{v}\frac{d}{dt}y(t). \tag{5}\] Note, from (5) we observe that the position control only requires \(i_{q}(t)\). Finally, energy consumption can be approximated by the squared sum of currents, which, using \(dq\)-transformation (4), yields \[i_{a}^{2}+i_{b}^{2}=i_{d}^{2}+i_{q}^{2}. \tag{6}\] Since \(i_{d}\) does not contribute to the driving torque, we aim to have it equal to zero and thereby minimize the energy consumption. **Remark II.1**: _It is possible to derive a more complex description of the HSM dynamics, e.g., by including detent torque, reluctance, and other effects. 
However, the goal of this work is to demonstrate effectiveness of the PGNN framework for feedforward control, which should compensate for unmodeled effects by learning these effects from data._ ### _Field-oriented control architecture of an HSM_ The inner current control loop aims to have the driving torque \(T(t)\) become equal to the input \(u(t)\). The currents \(i_{a}(t)\) and \(i_{b}(t)\) are controlled using the voltages \(v_{a}(t)\) and \(v_{b}(t)\), such that, in \(dq\)-coordinates these follow the references \(i_{d}^{*}(t)=0\) and \(i_{q}^{*}(t)=\frac{1}{k_{m}}u(t)\). In order to achieve this, the inverse \(dq\)-transformation is applied to the voltages, such that \[\begin{split}\begin{bmatrix}v_{a}(t)\\ v_{b}(t)\end{bmatrix}&=\Psi_{dq}^{-1}\big{(}y(t)\big{)}\begin{bmatrix}v_{ d}(t)\\ v_{q}(t)\end{bmatrix}\\ &=\begin{bmatrix}\cos\big{(}Ny(t)\big{)}&-\sin\big{(}Ny(t)\big{)} \\ \sin\big{(}Ny(t)\big{)}&\cos\big{(}Ny(t)\big{)}\end{bmatrix}\begin{bmatrix}v_{ d}(t)\\ v_{q}(t)\end{bmatrix}.\end{split} \tag{7}\] Substituting the \(dq\)-transformation (4) and the inverse \(dq\)-transformation (7) in the electromagnetic model (3), gives the electromagnetic model in \(dq\)-coordinates as \[\begin{split} L\frac{d}{dt}i_{d}(t)&=v_{d}(t)-Ri_{d}(t)+ LNi_{q}(t)\frac{d}{dt}y(t),\\ L\frac{d}{dt}i_{q}(t)&=v_{q}(t)-Ri_{q}(t)-k_{m}\frac{d}{ dt}y(t)-LNi_{d}(t)\frac{d}{dt}y(t).\end{split} \tag{8}\] The voltages in \(dq\)-coordinates are computed using the discrete-time feedback controller \(C_{i}(z)\) as \[\begin{split} v_{d}(k)&=-C_{i}(z)i_{d}(k),\\ v_{q}(k)&=C_{i}(z)\big{(}i_{q}^{*}(k)-i_{q}(k)\big{)}, \end{split} \tag{9}\] where \(k\in\mathbb{Z}_{\geq 0}\) indicates the discrete-time instant.

Fig. 1: FOC architecture including the HSM, the current control with \(dq\)–transform and the position feedback–feedforward control setup.
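Since \(\Psi_{dq}\) in (4) is a rotation, it is orthogonal: its inverse in (7) is its transpose, and it preserves the squared current norm, which is exactly the energy relation (6). A small numeric check (all values are arbitrary):

```python
import numpy as np

def psi_dq(y, n_teeth):
    # dq-transformation matrix of Eq. (4), a rotation by N*y.
    c, s = np.cos(n_teeth * y), np.sin(n_teeth * y)
    return np.array([[c, s], [-s, c]])

y, n_teeth = 0.3, 50            # arbitrary angle and rotor tooth count
i_ab = np.array([1.2, -0.7])    # coil currents (i_a, i_b)
i_dq = psi_dq(y, n_teeth) @ i_ab
```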
The inverse \(dq\)-transformation \(\Psi_{dq}^{-1}\) in (7) and \(dq\)-transformation in (4) are evaluated at discrete time indices, i.e., for \(t=kT_{s}\), with \(T_{s}\in\mathbb{R}_{>0}\) the sampling time. The feedback controller \(C_{i}(z)\) is a discretized version of the PI-controller \[C_{i}(s)=k_{p}+\frac{k_{i}}{s}, \tag{10}\] with \(k_{p}\in\mathbb{R}\) and \(k_{i}\in\mathbb{R}\) the proportional and integral gain, respectively. **Remark II.2**: _The use of the \(dq\)-transformation can be omitted by directly transforming the control input \(u(k)\) into current references \(i_{a}^{*}(k)\) and \(i_{b}^{*}(k)\), see [7]. When following a constant velocity reference and assuming a constant load (viscous friction), both \(i_{a}^{*}(k)\) and \(i_{b}^{*}(k)\) follow a sinusoidal reference, whereas \(i_{q}^{*}(k)\) remains constant. Correspondingly, the \(dq\) current control is expected to work better for reference tracking control._ The outer angular position control loop consists of a feedback and a feedforward controller, such that \[u(k)=u_{\text{fb}}(k)+u_{\text{ff}}(k), \tag{11}\] where \(u_{\text{fb}}(k)\) is the feedback and \(u_{\text{ff}}(k)\) the feedforward input. The feedback input is computed as \[u_{\text{fb}}(k)=C_{\text{fb}}(z)\big{(}y^{*}(k)-y(k)\big{)}, \tag{12}\] where \(C_{\text{fb}}(z)\) is the transfer function of the discrete-time feedback controller, and \(y^{*}(k)\) the reference. We develop a data-driven feedforward controller following the same steps as in [15], where linear motors were considered. First, we have an input-output data set generated on the system, i.e., \[Z^{N}:=\{u_{0},y_{0},...,u_{N-1},y_{N-1}\}, \tag{13}\] where \(N\in\mathbb{Z}_{>0}\) are the number of samples, and \(u_{i}\), \(y_{i}\) are \(u(i)\), \(y(i)\) for the data generating experiment. 
Second, we parametrize the inverse system dynamics according to \[\hat{u}\big{(}\theta,\phi(k)\big{)} :=f\big{(}\theta,\phi(k)\big{)}, \tag{14}\] \[\phi(k) :=[y(k+n_{k}+1),...,y(k+n_{k}-n_{a}+1),\] \[\qquad u(k-1),...,u(k-n_{b}+1)]^{T}.\] In (14), \(\hat{u}\) is the prediction of the input \(u\), \(f:\mathbb{R}^{n_{\theta}}\times\mathbb{R}^{n_{a}+n_{b}}\rightarrow\mathbb{R}\) is a model of the inverse dynamics, \(\theta\in\mathbb{R}^{n_{\theta}}\) are the parameters, and \(\phi(k)\) is the regressor with \(n_{a},n_{b}\in\mathbb{Z}_{\geq 0}\) describing the order of the dynamics and \(n_{k}\in\mathbb{Z}_{\geq 0}\) the number of pure input delays. The values for \(n_{a}\), \(n_{b}\), and \(n_{k}\) can be obtained, e.g., by discretizing a first-principle model of the continuous-time dynamics, or by analyzing a frequency response function. In order to have the model (14) fit the inverse system dynamics, the parameters are chosen according to the identification criterion \[\hat{\theta}=\arg\min_{\theta}\frac{1}{N}\sum_{i=0}^{N-1}\big{(}u_{i}-\hat{u}( \theta,\phi_{i})\big{)}^{2}. \tag{15}\] Finally, the feedforward controller is obtained by computing the input that is required to follow the reference, such that \[u_{\text{ff}}(k) =\hat{u}\big{(}\hat{\theta},\phi_{\text{ff}}(k)\big{)}, \tag{16}\] \[\phi_{\text{ff}}(k) :=[y^{*}(k+n_{k}+1),...,y^{*}(k+n_{k}-n_{a}+1),\] \[\qquad\qquad u_{\text{ff}}(k-1),...,u_{\text{ff}}(k-n_{b}+1)]^{T}.\] In order to implement the feedforward controller (16), we assume that reference values up until time \(k+n_{k}+1\) are known at time \(k\). ## III Problem Statement The choice of the model class \(f\) in (14) determines the effects to be identified, and, consequently, compensated for by the feedforward controller (16).
For mechatronic systems, it is typically assumed that the current loop operates significantly faster compared to the position loop, such that the feedforward controller can be designed solely for the mechanical part of the dynamics, i.e., (1) with \(T(k)=u(k)\). Consequently, using the physical knowledge, a suitable candidate for the model class is given as \[\hat{u}\big{(}\theta,\phi(k)\big{)} =f_{\text{phy}}\big{(}\theta_{\text{phy}},\phi(k)\big{)} \tag{17}\] \[=\theta_{\text{phy}}^{T}\left[\begin{matrix}\delta^{2}y(k)\\ \delta y(k)\end{matrix}\right],\] where \(\delta=\frac{q-q^{-1}}{2T_{s}}\) with \(q\) the forward shift operator, such that \(\phi(k)=[y(k+2),...,y(k-2)]^{T}\). Additionally, \(\theta_{\text{phy}}\) are the physical parameters which represent the inertia \(J\) and viscous friction coefficient \(f_{v}\). **Remark III.1**: _It is possible to use more accurate discretization schemes to find \(n_{a}\), \(n_{b}\), and \(n_{k}\) in (17). For example, ZOH discretization is exact for linear dynamics if the input is kept constant between two consecutive samples. However, the experimental results in Sec. V show that the parasitic effects are dominant over the discretization error made in (17). Additionally, this discretization scheme has the advantage that \(n_{b}=0\), such that the feedforward controller (16) is stable. For \(n_{b}>0\), [16] presents tools to both validate (after training) and impose (during training) stability of the PGNN feedforward controllers._ The physics-based feedforward controller (17) can only identify and compensate for the inertia and viscous friction, while real-life applications exhibit more complex behaviour.
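A minimal sketch of the physics-based feedforward (17), approximating \(\delta\) by central differences (which is what `np.gradient` computes at interior points); the parameter values stand in for the identified \(\hat{J}\) and \(\hat{f}_{v}\):

```python
import numpy as np

def physics_feedforward(y_ref, j_hat, fv_hat, ts):
    # u_ff(k) = J*delta^2 y(k) + fv*delta y(k), Eq. (17), with delta
    # approximated by central differences of spacing Ts.
    dy = np.gradient(y_ref, ts)
    ddy = np.gradient(dy, ts)
    return j_hat * ddy + fv_hat * dy
```

For a constant-velocity ramp the acceleration term vanishes, and the feedforward reduces to viscous-friction compensation \(\hat{f}_{v}\dot{y}^{*}\).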
Consequently, it was first proposed in [19] to employ a black-box NN as a model class (14), such that \[\hat{u}\big{(}\theta,\phi(k)\big{)}=f_{\text{NN}}\big{(}\theta_{\text{NN}},\phi(k)\big{)}=W_{L+1}\alpha_{L}\Big{(}\cdots\alpha_{1}\big{(}W_{1}\phi(k)+B_{1}\big{)}\cdots\Big{)}+B_{L+1}, \tag{18}\] where \(\alpha_{l}:\mathbb{R}^{n_{l}}\rightarrow\mathbb{R}^{n_{l}}\) denotes the aggregation of activation functions with \(n_{l}\in\mathbb{Z}_{>0}\) the number of neurons in layer \(l\in\{0,...,L\}\), and \(L\in\mathbb{Z}_{>0}\) the number of hidden layers. The parameters \(\theta_{\text{NN}}:=[\text{col}(W_{1})^{T},B_{1}^{T},...,\text{col}(W_{L+1})^{T},B_{L+1}^{T}]^{T}\) are the concatenation of all weights \(W_{l}\in\mathbb{R}^{n_{l}\times n_{l-1}}\) and biases \(B_{l}\in\mathbb{R}^{n_{l}}\), where \(\text{col}(W_{l})\) stacks the columns of \(W_{l}\). Although the NN (18) has the potential to approximate the inverse dynamics up to any accuracy, it lacks the robustness of the physics-based model (17). For example, the NN easily fails to learn and generalize from the presented data [20]. To illustrate this, we make use of a closed-loop simulation model of an HSM with some parasitic friction forces, see [21] for details on the parameters, feedback controllers and friction model. The simulation closely resembles the real-life setup discussed in Sec. V, and follows the same data generation experiment. We employ feedforward controllers (16) based on the physical model (17) and the NN model (18) with a single hidden layer \(L=1\) with \(n_{1}=16\) neurons. Fig. 2 shows the feedforward signal and the resulting tracking error for both feedforward controllers on the HSM simulation. Even though the physical model (17) significantly improves performance with respect to the situation where no feedforward is applied, there remain some errors that are caused by the inability of the physical model to capture the complete dynamics. The NN (18) on the other hand, has the capability to learn more complex dynamics.
However, the NN fails to learn and generalize from the presented data, which is observed by, e.g., the offset during standstill, and the spikes at the start of the acceleration. This results in poor tracking performance when the NN-based feedforward controller is applied. This issue might be reduced by using a different training data set, or by adjusting the NN dimensions and regularization parameters. However, the example showcases the sensitivity of the NN. Consequently, the goal of this work is to effectively embed the known physics-based feedforward controller within a NN-based feedforward controller, termed PGNN, to improve the tracking performance of HSMs. To this end, we will use a two-step sequential procedure: First, we identify the parameters of a physics-based feedforward controller as in (17). Second, we train a NN model (18) on the residuals of the identified physics-based model. Then, the physics-based model and the NN model are combined in a single PGNN feedforward controller. ## IV Feedforward Control of HSMs Using Physics-Guided Neural Networks With the aim of obtaining a feedforward controller with the same reliability as the physics-based model (17) and the high accuracy of the NN model (18), the PGNN model was first proposed in [15], see Fig. 3. The PGNN predicts the input according to \[\hat{u}\big{(}\theta,\phi(k)\big{)}=f_{\text{phy}}\big{(}\theta_{\text{phy}}, \phi(k)\big{)}+f_{\text{NN}}\big{(}\theta_{\text{NN}},T\big{(}\phi(k)\big{)} \big{)}, \tag{19}\] where \(\theta:=[\theta_{\text{NN}}^{T},\theta_{\text{phy}}^{T}]^{T}\) are the PGNN parameters, and \(T:\mathbb{R}^{n_{a}+n_{b}}\rightarrow\mathbb{R}^{n_{0}}\) is an input transformation, with \(n_{0}\in\mathbb{Z}_{>0}\) the number of NN inputs. To train the PGNN, we employ the following two-step sequential procedure. First, the physical parameters \(\hat{\theta}_{\text{phy}}\) are identified according to identification criterion (15) with physics-based model (17).
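The two-step sequential procedure can be sketched on synthetic data. Everything below is hypothetical: the data are in normalized units, the unmodeled effect is a made-up Coulomb-like friction term, and, for brevity, the NN training step is approximated by a random-feature regression (a fixed random tanh hidden layer whose output weights are solved in closed form) rather than full gradient-based training of (18):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: phi = [acceleration, velocity] in normalized units,
# with a Coulomb-like friction term that the physical model (17) cannot capture.
N = 400
acc = rng.uniform(-1, 1, N)
vel = rng.uniform(-1, 1, N)
Phi = np.stack([acc, vel], axis=1)
u = 2.0 * acc + 0.5 * vel + 0.3 * np.tanh(8 * vel) + 0.01 * rng.normal(size=N)

# Step 1: identify the physical parameters (inertia, viscous friction) by (15).
theta_phy, *_ = np.linalg.lstsq(Phi, u, rcond=None)
residual = u - Phi @ theta_phy

# Step 2: fit a NN-like model to the residual, cf. (20). As a shortcut we use
# a fixed random tanh hidden layer and solve only for the output weights.
W1 = rng.normal(scale=3.0, size=(2, 64))
b1 = rng.normal(scale=2.0, size=64)
H = np.tanh(Phi @ W1 + b1)
w2, *_ = np.linalg.lstsq(H, residual, rcond=None)

# PGNN prediction (19) = physics part + residual part.
mse_phy = np.mean(residual**2)
mse_pgnn = np.mean((residual - H @ w2)**2)
```

The point of the sketch is the ordering: the physics model absorbs the dominant linear dynamics first, and the flexible part only has to explain what remains.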
Afterwards, the NN parameters \(\hat{\theta}_{\text{NN}}\) are identified on the residual of the identified physics-based model, such that \[\begin{split}\hat{\theta}_{\text{NN}}=&\text{arg} \min_{\theta_{\text{NN}}}\frac{1}{N}\sum_{i=0}^{N-1}\big{(}u_{i}-\hat{u}([ \theta_{\text{NN}}^{T},\hat{\theta}_{\text{phy}}^{T}]^{T},\phi_{i})\big{)}^{2}\\ &\quad\quad\quad\quad\quad\quad\quad+\left\|\Lambda_{\text{NN}} \theta_{\text{NN}}\right\|_{2}^{2},\end{split} \tag{20}\] where \(\Lambda_{\text{NN}}\) is a regularization matrix. Note that first identifying \(\hat{\theta}_{\text{phy}}\) and then identifying \(\hat{\theta}_{\text{NN}}\) with \(\hat{\theta}_{\text{phy}}\) fixed can yield a suboptimal solution. This is prevented by identifying \(\hat{\theta}_{\text{phy}}\) and \(\hat{\theta}_{\text{NN}}\) simultaneously as in, e.g., [16]. It is expected that, for each rotation, the HSM exhibits the same dynamical behaviour. It is crucial that the PGNN (19) incorporates this rotational reproducibility, since it is otherwise difficult to generate a training data set which describes all relevant rotations \(y^{*}(k)\), e.g., when the HSM rotates in one direction. Therefore, we aim to identify a PGNN model (19) which satisfies \[\hat{u}\big{(}\theta,\phi(k)\big{)}=\hat{u}\left(\theta,\phi(k)+\begin{bmatrix} 1^{(n_{a}+1)\times 1}\\ 0^{(n_{b}-1)\times 1}\end{bmatrix}n2\pi\right),\ n\in\mathbb{Z}. \tag{21}\] In order to impose the rotationally reproducible behaviour, i.e., to make the PGNN (19) comply with (21), we consider a specific design of the physics-guided input transform \(T(\cdot)\). To do so, we restate that the system order was approximated as \(n_{k}=1\), \(n_{a}=4\), and \(n_{b}=0\) from the physical model (17), such that we consider the following physics-guided input transform \[T\left(\begin{bmatrix}y(k+2)\\ \vdots\\ y(k-2)\end{bmatrix}\right)=\begin{bmatrix}\delta^{2}y(k)\\ \delta y(k)\\ \text{mod}\big{(}y(k),2\pi\big{)}\end{bmatrix}, \tag{22}\] where \(\text{mod}\big{(}y(k),2\pi\big{)}\) is the remainder after division of \(y(k)\) by \(2\pi\). Note that (22) is adopted rather than wrapping all \(y\) into the domain \([0,2\pi)\), since \(\delta\,\text{mod}\big{(}y(k),2\pi\big{)}\neq\delta y(k)\) for all \(k\). Therefore, \(T(\cdot)\) includes discrete-time approximations of derivatives of the output \(y(k)\), which can also improve the training convergence, e.g., when high sampling rates with respect to the velocity are taken, such that \(y(k)\approx y(k-1)\).

Fig. 2: Reference (top window), feedforward signal (middle window), and the resulting tracking error (bottom window) for the feedforward controllers using the physical model (17) and the NN (18) on a simulation example.

Fig. 3: Schematic overview of the physics-guided neural network.

**Remark IV.1**: _The physical model (17) only inputs discrete-time angular velocity and acceleration, such that it is reproducible for any offset \(y(k)+\Delta\). Then, combined with the NN using transform (22), the PGNN (19) satisfies (21)._ As an example of the physics-guided input transform \(T(\cdot)\), consider the situation in which a NN is used to learn \[u(k)=\cos\big{(}y(k)\big{)}, \tag{23}\] with data generated from one period. The top window in Fig. 4 shows that \(n_{1}=2\) hidden layer neurons (with \(\tanh\) activation) give a reasonably accurate identification. The lack of data, however, causes the NN trained with \(T\big{(}y(k)\big{)}=y(k)\) to extrapolate poorly, in contrast to the NN trained with \(T\big{(}y(k)\big{)}=\text{mod}\big{(}y(k),2\pi\big{)}\).
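The transform (22) and the reproducibility property (21) it induces can be written out directly; the sampling time below is a placeholder:

```python
import numpy as np

Ts = 1e-4  # sampling time (placeholder)

def T(phi, Ts=Ts):
    """Physics-guided input transform (22); phi = [y(k+2), ..., y(k-2)]."""
    y_p2, y_p1, y0, y_m1, y_m2 = phi
    acc = (y_p2 - 2 * y0 + y_m2) / (2 * Ts) ** 2   # delta^2 y(k)
    vel = (y_p1 - y_m1) / (2 * Ts)                 # delta y(k)
    return np.array([acc, vel, np.mod(y0, 2 * np.pi)])

# Rotational reproducibility (21): shifting every output sample in the
# regressor by n*2*pi leaves the transformed regressor unchanged, since the
# difference operators cancel the shift and mod(., 2*pi) removes it.
phi = np.array([1.30, 1.24, 1.23, 1.21, 1.18])
phi_shifted = phi + 3 * 2 * np.pi
```

A NN that only sees \(T(\phi(k))\) therefore automatically satisfies (21), regardless of its weights.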
On the other hand, when the full range of interest is covered with data, the NN with \(T\big{(}y(k)\big{)}=\text{mod}\big{(}y(k),2\pi\big{)}\) requires significantly fewer neurons compared to the NN with \(T\big{(}y(k)\big{)}=y(k)\) to yield an approximation of similar accuracy, see the bottom window of Fig. 4. ## V Experimental Validation The PGNN-based feedforward controller (19) is validated on a real-life HSM used in the printing industry, shown in Fig. 5. For simplicity, the current and position controllers are only proportional gains tuned as \[C_{i}(s)=6.6,\quad C_{\text{fb}}(s)=5. \tag{24}\] Training data is generated by sampling the input \(u(k)\) and output \(y(k)\) with a sampling time of \(T_{s}=10^{-4}\) s while operating the HSM in closed-loop with a third order reference moving back-and-forth between \(-3\) and \(+3\) rotations with a velocity of \(15\ \frac{\text{rad}}{\text{s}}\), acceleration of \(80\ \frac{\text{rad}}{\text{s}^{2}}\) and jerk \(1000\ \frac{\text{rad}}{\text{s}^{3}}\) for a duration of \(80\) s. The PGNN (19) uses the physical model (17) and a single hidden layer with \(16\ \tanh\) neurons with physics-guided input transform (22) trained according to (20) with \(\Lambda_{\text{NN}}=0\). It was observed that adding more neurons or hidden layers did not further improve performance. Fig. 6 shows the reference, generated feedforward signals, and the tracking error resulting from the physics-based feedforward and the PGNN. The presented forward motion was preceded by a back-and-forth motion of the same reference to remove the transients caused by differences in initial conditions, and thereby facilitate a fair comparison. Although the physics-based and the PGNN-based feedforward inputs are largely similar, the small deviations especially during the acceleration part of the reference yield significantly smaller overshoot for the PGNN. Fig.
7 shows the mean-absolute error (MAE) \[\frac{1}{N_{r}}\sum_{k=0}^{N_{r}-1}|y^{*}(k)-y(k)|, \tag{25}\] for a reference of \(N_{r}\in\mathbb{Z}_{>0}\) samples as in Fig. 6 with different maximum velocities. The PGNN outperforms the physics-based feedforward controller for all velocities smaller than \(15\ \frac{\text{rad}}{\text{s}}\). For velocities larger than \(15\ \frac{\text{rad}}{\text{s}}\), the physics-based feedforward controller achieves slightly better performance, which is explained by the fact that the training data did not contain information for velocities exceeding \(15\ \frac{\text{rad}}{\text{s}}\). It is possible to enhance robustness to non-training data via the regularization approach discussed in [22], which penalizes the deviation of the PGNN output with respect to the output of the physical model for non-training data. Fig. 4: Example for imposing physical knowledge via \(T(\cdot)\), i.e., improved extrapolation capabilities when training a NN with \(T(y)=\text{mod}(y,2\pi)\) compared to \(T(y)=y\) on a limited data set (top window), and the reduction of the required number of neurons \(n_{1}\) to achieve an approximation of similar quality (bottom window). Fig. 5: HSM FL57/STH51–2804A by FULLING MOTOR with encoder. ## VI Conclusions A PGNN-based feedforward controller for HSMs was developed and tested in real-time experiments. The PGNN was designed to physically embed the rotationally reproducible behaviour of the HSM, which improved performance with respect to a physics-based approach on a real-life HSM without requiring an increase in costs. Further research will focus on the feedforward controller design for HSMs as part of a complex industrial printer, as well as reducing the effect of predictable disturbances on the closed-loop system. ## VII Acknowledgements The authors thank Steven Schalm and Will Hendrix for making the HSM setup operational.
2301.11375
Neural networks learn to magnify areas near decision boundaries
In machine learning, there is a long history of trying to build neural networks that can learn from fewer examples by baking in strong geometric priors. However, it is not always clear a priori what geometric constraints are appropriate for a given task. Here, we consider the possibility that one can uncover useful geometric inductive biases by studying how training molds the Riemannian geometry induced by unconstrained neural network feature maps. We first show that at infinite width, neural networks with random parameters induce highly symmetric metrics on input space. This symmetry is broken by feature learning: networks trained to perform classification tasks learn to magnify local areas along decision boundaries. This holds in deep networks trained on high-dimensional image classification tasks, and even in self-supervised representation learning. These results begin to elucidate how training shapes the geometry induced by unconstrained neural network feature maps, laying the groundwork for an understanding of this richly nonlinear form of feature learning.
Jacob A. Zavatone-Veth, Sheng Yang, Julian A. Rubinfien, Cengiz Pehlevan
2023-01-26T19:43:16Z
http://arxiv.org/abs/2301.11375v3
# Neural networks learn to magnify areas near decision boundaries ###### Abstract We study how training molds the Riemannian geometry induced by neural network feature maps. At infinite width, neural networks with random parameters induce highly symmetric metrics on input space. Feature learning in networks trained to perform classification tasks magnifies local areas along decision boundaries. These changes are consistent with previously proposed geometric approaches for hand-tuning of kernel methods to improve generalization. ## 1 Introduction In a series of influential papers, Amari and Wu proposed that one could improve the generalization performance of support vector machine (SVM) classifiers through data-dependent transformations of the kernel to expand the Riemannian volume element near decision boundaries (Amari & Wu, 1999; Williams et al., 2007; Wu & Amari, 2002). This proposal was based on the idea that this local magnification of areas improves class discriminability (Amari & Wu, 1999; Burges, 1999; Cho & Saul, 2011). Over the past decade, SVMs have largely been eclipsed by neural networks, whose ability to flexibly learn features from data is believed to underlie their superior generalization performance (LeCun et al., 2015; Zhang et al., 2021). Previous works have explored some aspects of the geometry induced by neural network feature maps with random parameters (Amari et al., 2019; Benfenati & Marta, 2023; Cho & Saul, 2009; 2011; Hauser & Ray, 2017; Poole et al., 2016; Zavatone-Veth & Pehlevan, 2022), but have not characterized data-dependent changes in representational geometry over training. In this work, we explore the possibility that neural networks learn to enhance local input discriminability automatically over the course of training. Our primary contributions are: * In §4, we study general properties of the metric induced by shallow fully-connected neural networks.
Next, in §4.2, we compute the volume element and curvature of the metric induced by infinitely wide shallow networks with Gaussian weights and smooth activation functions, showing that it is spherically symmetric. * In §5, we empirically show that training shallow networks on simple classification tasks expands the volume element along decision boundaries, consistent with the hand-engineered modifications proposed by Amari and Wu. In §6, we provide evidence that deep residual networks trained on more complex tasks behave similarly. In total, our results provide a preliminary picture of how feature learning shapes local input discriminability. ## 2 Preliminaries We begin by introducing the basic idea of the Riemannian geometry of feature space representations. Our setup and notation largely follow Burges (1999), which in turn follows the conventions of Dodson & Poston (1991). ### Feature embeddings as Riemannian manifolds Consider \(d\)-dimensional data living in some submanifold \(\mathcal{D}\subseteq\mathbb{R}^{d}\). Let the _feature map_ \(\mathbf{\Phi}:\mathbb{R}^{d}\rightarrow\mathcal{H}\) be a map from \(\mathbb{R}^{d}\) to some separable Hilbert space \(\mathcal{H}\) of possibly infinite dimension \(n\), with \(\mathbf{\Phi}(\mathcal{D})=\mathcal{M}\subseteq\mathcal{H}\). We index input space dimensions by Greek letters \(\mu,\nu,\rho,\ldots\in[d]\) and feature space dimensions by Latin letters \(i,j,k,\ldots\in[n]\). We use the Einstein summation convention; summation over all repeated indices is implied. Assume that \(\mathbf{\Phi}\) is \(\mathcal{C}^{k}\) for \(k\geq 3\), and is everywhere of rank \(r=\min\{d,n\}\). If \(r=d\), then \(\mathcal{M}\) is a \(d\)-dimensional \(\mathcal{C}^{k}\) manifold immersed in \(\mathcal{H}\). If \(k=\infty\), then \(\mathcal{M}\) is a smooth manifold. In contrast, if \(r<d\), then \(\mathcal{M}\) is a \(d\)-dimensional \(\mathcal{C}^{k}\) manifold submersed in \(\mathcal{H}\).
The flat metric on \(\mathcal{H}\) can then be pulled back to \(\mathcal{M}\), with components \[g_{\mu\nu}=\partial_{\mu}\Phi_{i}\partial_{\nu}\Phi_{i}, \tag{1}\] where we write \(\partial_{\mu}\equiv\partial/\partial x^{\mu}\). If \(r=d\) and the pullback metric \(g_{\mu\nu}\) is full rank, then \((\mathcal{M},g)\) is a \(d\)-dimensional Riemannian manifold (Burges, 1999; Dodson & Poston, 1991). However, if the pullback \(g_{\mu\nu}\) is a degenerate metric, as must be the case if \(r<d\), then \((\mathcal{M},g)\) is a singular semi-Riemannian manifold (Benfenati & Marta, 2023b; Kupeli, 2013). In this case, if we let \(\sim\) be the equivalence relation defined by identifying points with vanishing pseudodistance, the quotient \((\mathcal{M}/\sim,g)\) is a Riemannian manifold (Benfenati & Marta, 2023b). Unless noted otherwise, our results will focus on the non-singular case. We denote the matrix inverse of the metric tensor by \(g^{\mu\nu}\), and we raise and lower input space indices using the metric. If we define the feature kernel \(k(\mathbf{x},\mathbf{y})=\Phi_{i}(\mathbf{x})\Phi_{i}(\mathbf{y})\) for \(\mathbf{x},\mathbf{y}\in\mathcal{D}\), then the resulting metric can be written in terms of the kernel as \(g_{\mu\nu}=(1/2)\partial_{x_{\mu}}\partial_{x_{\nu}}k(\mathbf{x},\mathbf{x})- [\partial_{y_{\mu}}\partial_{y_{\nu}}k(\mathbf{x},\mathbf{y})]_{\mathbf{y}= \mathbf{x}}\). This formula applies even if \(n=\infty\), giving the metric induced by the feature embedding associated to a suitable Mercer kernel (Burges, 1999). ### Volume element and curvature With this setup, \((\mathcal{M},g)\) is a Riemannian manifold, hence we have at our disposal a powerful toolkit with which we may study its geometry. We will focus on two geometric properties of \((\mathcal{M},g)\). 
First, the volume element is given by \[dV=\sqrt{\det g}\,d^{d}x, \tag{2}\] where the factor \(\sqrt{\det g}\) measures how local areas in input space are magnified by the feature map (Amari & Wu, 1999; Burges, 1999; Dodson & Poston, 1991). Second, we consider the intrinsic curvature of the manifold, which is characterized by the Riemann tensor \(R^{\mu}_{\nu\alpha\beta}\)(Dodson & Poston, 1991). If \(R^{\mu}_{\nu\alpha\beta}=0\), then the manifold is intrinsically flat. As a tractable measure, we focus on the Ricci curvature scalar \(R=g^{\beta\nu}R^{\alpha}_{\nu\alpha\beta}\), which measures the deviation of the volume of an infinitesimal geodesic ball in the manifold from that in flat space (Dodson & Poston, 1991). In the singular case, we can compute the volume element on \(\mathcal{M}/\sim\) at a given point by taking the square root of the product of the non-zero eigenvalues of the degenerate metric \(g_{\mu\nu}\) at that point (Benfenati & Marta, 2023b). However, the curvature in this case is generally not straightforward to compute; we will therefore leave this issue for future work. ### Shallow neural network feature maps In this work, we consider a particular class of feature maps: those given by the hidden layer representations of neural networks (Benfenati & Marta, 2023b; Cho & Saul, 2009; 2011; Hauser & Ray, 2017; LeCun et al., 2015; Lee et al., 2018; Matthews et al., 2018; Neal, 1996; Williams, 1997). We will mostly focus on shallow fully-connected neural networks, i.e., those with only a single hidden layer followed by readout. Concretely, such a feature map is of the form \[\Phi_{j}(\mathbf{x})=n^{-1/2}\phi(\mathbf{w}_{j}\cdot\mathbf{x}+b_{j}) \tag{3}\] for weights \(\mathbf{w}_{j}\), biases \(b_{j}\), and an activation function \(\phi\). For convenience, we abbreviate the Euclidean inner product on feature or input space by "\(\cdot\)", e.g., \(\mathbf{w}\cdot\mathbf{x}=w_{\mu}x_{\mu}\).
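The pullback metric (1) and magnification factor \(\sqrt{\det g}\) in (2) are directly computable for the shallow feature map (3); the tanh activation and the dimensions below are arbitrary choices for illustration, and the analytic Jacobian is cross-checked against a finite difference:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 3, 50
W = rng.normal(size=(n, d))
b = rng.normal(size=n)

def features(x):
    """Shallow feature map (3) with phi = tanh."""
    return np.tanh(W @ x + b) / np.sqrt(n)

def metric(x):
    """Pullback metric (1): g_{mu nu} = d_mu Phi_i d_nu Phi_i = J^T J."""
    z = W @ x + b
    Jac = ((1 - np.tanh(z) ** 2)[:, None] * W) / np.sqrt(n)
    return Jac.T @ Jac

x = rng.normal(size=d)
g = metric(x)
vol = np.sqrt(np.linalg.det(g))   # magnification factor in (2)

# Sanity check: the analytic Jacobian matches a central finite difference.
eps = 1e-6
J_fd = np.stack([(features(x + eps * e) - features(x - eps * e)) / (2 * eps)
                 for e in np.eye(d)], axis=1)
```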
In this case, the feature space dimension \(n\) is equal to the number of hidden units, and is referred to as the _width_ of the hidden layer. In (3), we scale the components of the feature map by \(n^{-1/2}\) such that the associated kernel \(k(\mathbf{x},\mathbf{y})=\Phi_{i}(\mathbf{x})\Phi_{i}(\mathbf{y})\) and metric (4) have the form of averages over hidden units, and therefore should be well-behaved at large widths (Neal, 1996; Williams, 1997). We will assume that \(\phi\) is \(\mathcal{C}^{k}\) for \(k\geq 3\), so that this feature map satisfies the smoothness conditions required in the setup above. We will also assume that the activation function and weight vectors are such that the Jacobian \(\partial_{\mu}\Phi_{j}\) is full-rank, i.e., is of rank \(\min\{d,n\}\). Then, the shallow network feature map satisfies the required conditions for the feature embedding to be a (possibly singular) Riemannian manifold. These conditions extend directly to deep fully-connected networks formed by composing feature maps of the form (3) (Benfenati & Marta, 2023b; Hauser & Ray, 2017). ## 3 Related works Having established the geometric preliminaries of §2, we can give a more complete overview of related works. As introduced above, our hypothesis for how the Riemannian geometry of neural network representations changes during training is directly inspired by the work of Amari & Wu (1999). In that and subsequent works (Amari & Wu, 1999; Williams et al., 2007; Wu & Amari, 2002), they proposed to modify the kernel of an SVM as \(\tilde{k}(\mathbf{x},\mathbf{y})=h(\mathbf{x})h(\mathbf{y})k(\mathbf{x},\mathbf{y})\) for some positive scalar function \(h(\mathbf{x})\) chosen such that the magnification factor \(\sqrt{\det g}\) is large near the SVM's decision boundary.
Concretely, they proposed to fit an SVM with some base kernel \(k\), choose \(h(\mathbf{x})=\sum_{\mathbf{v}\in\text{SV}(k)}\exp[-\|\mathbf{x}-\mathbf{v}\|^ {2}/2\tau^{2}]\) for \(\tau\) a bandwidth parameter and \(\text{SV}(k)\) the set of support vectors for \(k\), and then fit an SVM with the modified kernel \(\tilde{k}\). Here, \(\|\cdot\|\) denotes the Euclidean norm. This process could then be iterated, yielding a sequence of modified kernels. They found that this method can improve generalization performance relative to the original kernel (Amari & Wu, 1999; Williams et al., 2007; Wu & Amari, 2002). This approach is a hand-designed form of iterative feature learning. The geometry induced by common kernels was investigated in detail by Burges (1999), who established a broad range of technical results. He showed that translation-invariant kernels of the form \(k(\mathbf{x},\mathbf{y})=k(\|\mathbf{x}-\mathbf{y}\|^{2})\) yield flat, constant metrics,1 and gave a detailed characterization of polynomial kernels \(k(\mathbf{x},\mathbf{y})=(\mathbf{x}\cdot\mathbf{y})^{q}\). Cho and Saul (2011) subsequently analyzed the geometry induced by arc-cosine kernels, i.e., the feature kernels of infinitely-wide shallow neural networks with threshold-power law activation functions \(\phi(x)=\max\{0,x\}^{q}\) and random parameters (Cho and Saul, 2009). Our results on infinitely-wide networks for general smooth activation functions build on these works. Footnote 1: It is interesting to note that this holds for the simplest form of a method for learning data-adaptive kernels recently proposed by Radhakrishnan et al. (2022); see Appendix F. The representational geometry of deep networks with random Gaussian parameters in the limit of large width and depth was studied by Poole et al. (2016), and in later work by Amari et al. (2019).
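The Amari–Wu conformal modification just described is easy to write down explicitly. The Gaussian base kernel, bandwidth values, and the stand-in "support vectors" below are all arbitrary illustrative choices; the key structural fact is that \(\tilde{k}=h(x)h(y)k(x,y)\) is a diagonal rescaling of the Gram matrix and hence remains positive semidefinite:

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    """Base kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def h(X, sv, tau=0.5):
    """Conformal factor: a Gaussian bump centered at each support vector."""
    d2 = ((X[:, None, :] - sv[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * tau ** 2)).sum(axis=1)

def modified_kernel(X, Y, sv, gamma=1.0, tau=0.5):
    """Amari-Wu kernel k~(x, y) = h(x) h(y) k(x, y)."""
    return h(X, sv)[:, None] * h(Y, sv)[None, :] * rbf(X, Y, gamma)

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))
sv = rng.normal(size=(5, 2))          # stand-ins for the support vectors SV(k)
K_mod = modified_kernel(X, X, sv)
```

Iterating the scheme amounts to refitting the SVM with `K_mod`, recomputing the support vectors, and rebuilding `h`.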
These works tie into a broader line of research on infinite-width limits of deep neural networks in which inference and prediction is captured by a kernel machine (Bordelon and Pehlevan, 2022; Daniely et al., 2016; Lee et al., 2018; Matthews et al., 2018; Neal, 1996; Williams, 1997; Yang, 2019; Yang and Hu, 2021; Zavatone-Veth and Pehlevan, 2022; Zavatone-Veth et al., 2021). Our results on the representational geometry of wide shallow networks with smooth activation functions build on these ideas, particularly those relating activation function derivatives to input discriminability (Daniely et al., 2016; Poole et al., 2016; Zavatone-Veth and Pehlevan, 2021, 2022). Particularly closely related to our work are several recent papers that aim to study the curvature of neural network representations. Benfenati and Marta (2023b); Hauser and Ray (2017) discuss formal principles of Riemannian geometry in deep neural networks, but do not characterize how training shapes the geometry. Kaul and Lall (2020) aimed to study the curvature of metrics induced by the outputs of pretrained classifiers. However, their work is limited by the fact that they estimate input-space derivatives using inexact finite differences under the strong assumption that the input data is confined to a _known_ smooth submanifold of \(\mathbb{R}^{d}\). Recent works by Kuhnel et al. (2018); Shao et al. (2018); Wang and Ponce (2021) have studied the Riemannian geometry of the latent representations of deep generative models. Finally, in very recent work Benfenati and Marta (2023a) have used the geometry induced by the full input-output mapping to reconstruct iso-response curves of deep networks. In contrast, our work focuses on hidden representations, and seeks to characterize the representational manifolds themselves. ## 4 Representational geometry of shallow neural network feature maps We begin by studying general properties of the Riemannian metrics induced by shallow neural network feature maps.
### Finite-width networks We first consider finite-width networks with fixed weights, assuming that \(n\geq d\). Writing \(z_{j}=\mathbf{w}_{j}\cdot\mathbf{x}+b_{j}\) for the preactivation of the \(j\)-th hidden unit, the general formula (1) for the metric yields \[g_{\mu\nu}=\frac{1}{n}\phi^{\prime}(z_{j})^{2}w_{j\mu}w_{j\nu}. \tag{4}\] This metric has the useful property that \(\partial_{\alpha}g_{\mu\nu}\) is symmetric under permutation of its indices, hence the formula for the Riemann tensor simplifies substantially (Appendix A). Then, using the Leibniz formula for determinants, we show in Appendix B that the determinant of the metric can be expanded as a sum over \(d\)-tuples of hidden units: \[\det g=\frac{1}{n^{d}d!}M_{j_{1}\cdots j_{d}}^{2}\phi^{\prime}(z_{j_{1}})^{2} \cdots\phi^{\prime}(z_{j_{d}})^{2}, \tag{5}\] where \[M_{j_{1}\cdots j_{d}}=\det\begin{pmatrix}w_{j_{1}1}&\cdots&w_{j_{1}d}\\ \vdots&\ddots&\vdots\\ w_{j_{d}1}&\cdots&w_{j_{d}d}\end{pmatrix} \tag{6}\] is the minor of the weight matrix obtained by selecting units \(j_{1},\ldots,j_{d}\). For the error function \(\phi(x)=\operatorname{erf}(x/\sqrt{2})\), \(\det g\) expands as a superposition of Gaussian bump functions, one for each tuple of hidden units (B.55). This is reminiscent of Amari and Wu's approach, which yields a Gaussian contribution to \(\sqrt{\det g}\) from each support vector (§3). We can also derive similar expansions for the Riemann tensor and Ricci scalar. The resulting expressions are rather unwieldy, so we give their general forms only in Appendix B.3. However, in two dimensions the situation simplifies, as the Riemann tensor is completely determined by the Ricci scalar (Dodson and Poston, 1991; Misner et al., 2017) (Appendix B.1). In this case, we have the compact expression \[(\det g)^{2}R=-\frac{3}{n^{3}}M_{jk}^{2}M_{ij}M_{ik}\\ \times\phi^{\prime}(z_{i})^{2}\phi^{\prime}(z_{j})\phi^{\prime}(z_ {k})\phi^{\prime\prime}(z_{j})\phi^{\prime\prime}(z_{k}).
\tag{7}\] This shows that in \(d=2\) the curvature acquires contributions from each triple of distinct hidden units, hence if \(n=2\) we have \(R=0\). This follows from the fact that the feature map is in this case a change of coordinates on the two-dimensional manifold (Dodson and Poston, 1991). ### Geometry of infinite shallow networks We now characterize the metric induced by infinite-width networks (\(n\to\infty\)) with Gaussian weights and biases \[\mathbf{w}_{j}\sim_{\text{i.i.d}}\mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I}_ {d});\quad b_{j}\sim_{\text{i.i.d.}}\mathcal{N}(0,\zeta^{2}), \tag{8}\] as commonly chosen at initialization (LeCun et al., 2015; Lee et al., 2018; Matthews et al., 2018; Poole et al., 2016; Yang, 2019; Yang & Hu, 2021). For such networks, the hidden layer representation is described by the neural network Gaussian process (NNGP) kernel (Lee et al., 2018; Matthews et al., 2018; Neal, 1996; Williams, 1997): \[k(\mathbf{x},\mathbf{y}) =\lim_{n\rightarrow\infty}n^{-1}\mathbf{\Phi}(\mathbf{x})\cdot \mathbf{\Phi}(\mathbf{y})\] \[=\mathbb{E}_{\mathbf{w},b}[\phi(\mathbf{w}\cdot\mathbf{x}+b)\phi (\mathbf{w}\cdot\mathbf{y}+b)]. \tag{9}\] This kernel also completely describes the representation after training for networks in the lazy regime (Bordelon & Pehlevan, 2022; Yang & Hu, 2021). In Appendix C, we show that the metric associated with the NNGP kernel, \(g_{\mu\nu}=\mathbb{E}_{\mathbf{w},b}[\phi^{\prime}(\mathbf{w}\cdot\mathbf{x} +b)^{2}w_{\mu}w_{\nu}]\), can be written more illuminatingly as \[g_{\mu\nu}=e^{\Omega(\|\mathbf{x}\|^{2})}[\delta_{\mu\nu}+2\Omega^{\prime}(\| \mathbf{x}\|^{2})x_{\mu}x_{\nu}], \tag{10}\] where the function \(\Omega(\|\mathbf{x}\|^{2})\) is defined via \[e^{\Omega(\|\mathbf{x}\|^{2})}=\sigma^{2}\mathbb{E}_{z\sim\mathcal{N}(0, \sigma^{2}\|\mathbf{x}\|^{2}+\zeta^{2})}[\phi^{\prime}(z)^{2}]. 
\tag{11}\] For these results, we must also assume that \(\phi\) and its (weak) derivatives satisfy suitable boundedness assumptions for \(\Omega\) to be twice-differentiable (Daniely et al., 2016). Therefore, like the metrics induced by other dot-product kernels, the NNGP metric has the form of a projection (Burges, 1999). Such metrics have determinant \[\det g=e^{\Omega d}(1+2\|\mathbf{x}\|^{2}\Omega^{\prime}) \tag{12}\] and Ricci scalar \[R=- \frac{3(d-1)e^{-\Omega}(\Omega^{\prime})^{2}\|\mathbf{x}\|^{2}}{ (1+2\|\mathbf{x}\|^{2}\Omega^{\prime})^{2}}\] \[\times\left[d+2+2\|\mathbf{x}\|^{2}\left((d-2)\Omega^{\prime}+2 \frac{\Omega^{\prime\prime}}{\Omega^{\prime}}\right)\right]. \tag{13}\] Thus, all geometric quantities are spherically symmetric, depending only on \(\|\mathbf{x}\|^{2}\). Thanks to the assumption of independent Gaussian weights, the geometric quantities associated to the shallow Neural Tangent Kernel and to the deep NNGP will share this spherical symmetry (Appendix D) (Lee et al., 2018; Matthews et al., 2018; Yang, 2019; Yang & Hu, 2021). This generalizes the results of Cho & Saul (2011) for threshold-power law functions to arbitrary smooth activation functions. The relation between Gaussian norms of \(\phi^{\prime}\) and input discriminability indicated by this result is consistent with previous studies (Daniely et al., 2016; Poole et al., 2016; Zavatone-Veth & Pehlevan, 2021, 2022). In short, unless the task depends only on the input norm, the geometry of infinite-width networks will not be linked to the task structure. ### Examples In Appendix C.2, we evaluate the geometric quantities of the NNGP for certain analytically tractable activation functions. The resulting expressions for \(\sqrt{\det g}\) and \(R\) are rather lengthy, so we discuss only their qualitative behavior here. 
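The spherically symmetric prediction (10)–(11) can be checked numerically for \(\phi(x)=\operatorname{erf}(x/\sqrt{2})\), where \(\phi^{\prime}(z)^{2}=(2/\pi)e^{-z^{2}}\) has the closed-form Gaussian average \(\mathbb{E}[\phi^{\prime}(z)^{2}]=(2/\pi)(1+2v)^{-1/2}\) with \(v=\sigma^{2}\|\mathbf{x}\|^{2}+\zeta^{2}\). The sketch below compares this against the finite-width metric (4) of a wide network with random Gaussian parameters (8), and also checks the determinant identity (12); the input point and hyperparameters are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, sigma, zeta = 3, 200_000, 1.0, 0.5
x = np.array([0.8, -0.5, 1.1])
s = x @ x                                        # ||x||^2

# Empirical metric (4) for a wide network with Gaussian parameters (8);
# phi(x) = erf(x/sqrt(2)) has phi'(z)^2 = (2/pi) exp(-z^2).
W = rng.normal(scale=sigma, size=(n, d))
b = rng.normal(scale=zeta, size=n)
z = W @ x + b
dphi2 = (2 / np.pi) * np.exp(-z ** 2)
g_emp = (W * dphi2[:, None]).T @ W / n

# Infinite-width prediction (10)-(11): e^Omega = sigma^2 (2/pi) (1+2v)^{-1/2},
# so Omega(s) = const - (1/2) log(1 + 2v(s)) and Omega'(s) = -sigma^2/(1+2v).
v = sigma ** 2 * s + zeta ** 2
e_Omega = sigma ** 2 * (2 / np.pi) / np.sqrt(1 + 2 * v)
Omega_prime = -sigma ** 2 / (1 + 2 * v)
g_nngp = e_Omega * (np.eye(d) + 2 * Omega_prime * np.outer(x, x))

# Determinant identity (12): det g = e^{Omega d} (1 + 2 ||x||^2 Omega').
det_direct = np.linalg.det(g_nngp)
det_formula = e_Omega ** d * (1 + 2 * s * Omega_prime)
```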
For the error function \(\phi(x)=\mathrm{erf}(x/\sqrt{2})\), \(R\) is negative for all \(d>1\), and both \(R\) and \(\sqrt{\det g}\) are monotonically decreasing functions of \(\|\mathbf{x}\|\) for all \(\zeta\) and \(\sigma\). For monomials \(\phi(x)\propto x^{q}\) for integer \(q>1\), \(\sqrt{\det g}\) is a monotonically increasing function of \(\|\mathbf{x}\|^{2}\), while \(R\) is again non-positive. However, in this case the behavior of \(R\) depends on whether or not bias terms are present: if \(\zeta=0\), then \(R\) is a non-decreasing function of \(\|\mathbf{x}\|^{2}\) that diverges towards \(-\infty\) as \(\|\mathbf{x}\|^{2}\downarrow 0\), while if \(\zeta>0\), \(R\) may be non-monotonic in \(\|\mathbf{x}\|^{2}\). In Figure 1, we illustrate this behavior, and show convergence of the empirical geometry of finite networks with random Gaussian parameters to the infinite-width results. Figure 1: Convergence of geometric quantities for finite-width networks with Gaussian random parameters to the infinite-width limit. **a**. The magnification factor \(\sqrt{\det g}\) (_left_) and Ricci scalar \(R\) (_right_) as functions of the input norm \(\|\mathbf{x}\|\) for networks with \(\phi(x)=\mathrm{erf}(x/\sqrt{2})\). Empirical results for finite networks, computed using (5) and (B.15) are shown in blue, with solid lines showing the mean and shaded patches the standard deviation over \(25\) realizations of random Gaussian parameters. In all cases, \(\sigma=\zeta=1\). The infinite-width result is shown as a black dashed line. **b**. As in **a**, but for normalized quadratic activation functions \(\phi(x)=x^{2}/\sqrt{3}\). ## 5 Changes in shallow network geometry during training We now consider how the geometry of the pullback metric changes during training. 
Changes in the volume element and curvature during gradient descent training are challenging to study analytically, because models for which the learning dynamics are solvable--deep linear networks (Saxe et al., 2013)--trivially yield flat, constant metrics. More generally, the dependence of the metric on the instantaneous configuration of parameters makes it difficult to gain intuition for its evolution over training, even for two-dimensional inputs.

### Wide Bayesian neural networks

We can make slightly more analytical progress for Bayesian neural networks at large but finite width. This setting is convenient because there is a fixed parameter posterior; one does not need to solve kernel or metric dynamics through time (Bordelon and Pehlevan, 2022). In Appendix E, we use recent results on perturbative feature-learning corrections to the NNGP kernel (Roberts et al., 2022; Zavatone-Veth et al., 2021) to compute corrections to the posterior mean of the volume element. In general, it is not possible to evaluate these corrections in closed form (Zavatone-Veth et al., 2021). For networks with monomial activation functions, no bias terms, and linear readout constrained to interpolate a single training example \((\mathbf{x}_{a},\mathbf{y}_{a})\), we can show that the correction to \(\sqrt{\det g}\) is maximal for \(\mathbf{x}\parallel\mathbf{x}_{a}\), and minimal for \(\mathbf{x}\perp\mathbf{x}_{a}\) (E.38). The sign of the correction is positive or negative depending on whether the second moment of the prior predictive is greater than or less than the norm of the output, respectively. For example, if we train on a single point from the XOR task, \((1,1)\mapsto 0\), \(\sqrt{\det g}\) will be contracted maximally along \(x_{1}=x_{2}\).
This simple case shows how interpolating a single point shapes the network's global representational geometry.

Figure 2: Evolution of the volume element over training in a network trained to classify points separated by a sinusoidal boundary \(y=\frac{3}{5}\sin(7x-1)\) (single hidden layer with 5 hidden units (top), 20 hidden units (mid), and 250 hidden units (bottom)). Red lines indicate the decision boundaries of the network. See Appendix G.1 for experimental details. More hidden units offer better approximation to the sinusoid curve.

### Changes in representational geometry for networks trained on two-dimensional toy tasks

Thus, given the intractability of studying changes in geometry analytically, we resort to numerical experiments. For details of our numerical methods, see Appendix G. To build intuition, we begin with networks trained on simple two-dimensional tasks, for which we can directly visualize the input space. We first consider a simple two-dimensional binary classification task with sinusoidal boundary, inspired by that considered in the original work of Amari & Wu (1999). We train networks with sigmoidal activation functions of varying widths to perform this task, and visualize the resulting geometry over the course of training in Figure 2. At initialization, the peaks in the volume element lack a clear relation to the structure of the task, with approximate rotational symmetry at large widths as we would expect from §4.2. As the network's decision boundary is gradually molded to conform to the true boundary, the volume element develops peaks in the same vicinity. At all widths, the final volume elements are largest near the peaks of the sinusoidal decision boundary. At small widths, the shape of the sinusoidal curve is not well-resolved, but at large widths there is a clear peak in the close neighborhood of the decision boundary. This result is consistent with the proposal of Amari & Wu (1999).
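The volume-element maps just described can be reproduced in spirit with a few lines of PyTorch: compute the Jacobian of the hidden-layer feature map on a grid of 2D inputs and take \(\sqrt{\det g}\) of the pullback metric. The sketch below uses a random (untrained) single-hidden-layer network; the training loop and the paper's exact normalization are omitted, and the \(1/\text{width}\) scaling of the metric is our assumption for illustration.

```python
import torch

torch.manual_seed(0)
width = 50
# Single-hidden-layer feature map x -> tanh(Wx + b) on 2D inputs
W = torch.randn(width, 2)
b = torch.randn(width)

def features(x):
    return torch.tanh(x @ W.T + b)

def volume_element(x):
    # Pullback metric g = J^T J / width, with J the feature-map Jacobian;
    # the magnification factor is sqrt(det g).
    J = torch.autograd.functional.jacobian(features, x)
    g = J.T @ J / width
    return torch.sqrt(torch.det(g))

xs = torch.linspace(-1, 1, 5)
grid = torch.cartesian_prod(xs, xs)          # 25 points in [-1, 1]^2
vals = torch.stack([volume_element(x) for x in grid])
print(vals.reshape(5, 5))
```

Plotting `vals` as a heatmap over the grid gives the kind of magnification map shown for the sinusoid task; for a trained network, one would simply train `W`, `b` (and a readout) first.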
In Appendix G, Figure G.5, we plot the Ricci curvature for these trained networks. Even for these small networks, the curvature computation is computationally expensive and numerically challenging. Over training, it evolves dynamically, with task-adapted structure visible at the end of training. However, the patterns here are harder to interpret than those in the volume element.

### Changes in representational geometry for shallow networks trained to classify MNIST digits

We now provide evidence that a similar phenomenon is present in networks trained to classify MNIST images. We give details of these networks in Appendix G.2; note that all reach above 95% train and test accuracy within 200 epochs. In Figure 3, we plot the induced volume element at synthetic images generated by linearly interpolating between two input images (see Appendix G for details). We emphasize that linear interpolation in pixel space of course does not respect the structure of the image data, and results in unrealistic images. However, this approach has the advantage of being straightforward, and also illustrates how small Euclidean perturbations are expanded by the feature map (Novak et al., 2018). At initialization, the volume element varies without clear structure along the interpolated path. However, as training progresses, areas near the center of the path, which roughly aligns with the decision boundary, are expanded, while those near the endpoints defined by true training examples remain relatively small. This is again consistent with the proposal of Amari & Wu (1999). We provide additional visualizations of this behavior in Appendix G.2. To gain an understanding of the structure of the volume element beyond one-dimensional slices, in Figure 3 we also plot its value in the plane spanned by three randomly-selected example images, at points interpolated linearly within their convex hull.
Here, we only show the end of training; in Appendix G.2 we show how the volume element in this plane changes over the course of training. The edges of the resulting ternary plot are one-dimensional slices like those shown in the top row of Figure 3, and we observe consistent expansion of the volume element along these paths. The volume element becomes large near the centroid of the triangle, where multiple decision boundaries intersect. Because of the computational complexity of estimating the curvature--the Riemann tensor has \(d^{2}(d^{2}-1)/12\) independent components (Dodson & Poston, 1991; Misner et al., 2017)--and its numerical sensitivity (Appendix G.1), we do not attempt to estimate it for this high-dimensional task.

## 6 Extensions to deep networks

Thus far, we have focused on the geometry of the feature maps of single-hidden-layer neural networks. However, these analyses can also be applied to deeper networks, regarding the representation at each hidden layer as defining a feature map, and studying how the geometry changes with depth (Benfenati & Marta, 2023b; Hauser & Ray, 2017). As a simple version of this, in Figure G.6 we consider a network with three fully-connected hidden layers trained on the sinusoid task. The metrics induced by the feature maps of all three hidden layers show the same qualitative behavior as we observed in the shallow case in Figure 2: areas near the decision boundary are magnified. As one progresses deeper into the network, the contrast between regions of low and high magnification factor increases. As a more realistic example, we consider deep residual networks (ResNets) (He et al., 2016) trained to classify the CIFAR-10 image dataset (Krizhevsky, 2009). To make the feature map differentiable, we replace the rectified linear unit (ReLU) activation functions used in standard ResNets with Gaussian error linear units (GELUs) (Hendrycks & Gimpel, 2016).
With this modification, we achieve comparable test accuracy (92%) with a ResNet-34--the largest model we can consider given computational constraints--to that obtained with ReLUs (Appendix G.3). Importantly, the feature map defined by the input-to-final-hidden-layer mapping of a ResNet-34 gives a submersion of CIFAR-10, as the input images have \(32\times 32\times 3=3072\) pixels, while the final hidden layer has 512 units. Empirically, we find that the Jacobian of this mapping is full-rank (Appendix G.3); we therefore consider the volume element on \((\mathcal{M}/\sim,g)\) defined by the product of the non-zero eigenvalues of the degenerate pullback metric (§2, Appendix G.3). In Figure 4, we visualize the resulting geometry in the same way we did for networks trained on MNIST, along 1-D interpolated slices and in a 2-D interpolated plane (see Appendix G.3 for details and additional figures). In the 1-D slices, we see a clear trend of large volume elements near decision boundaries, as we observed for shallow networks. However, in two dimensions the picture is less clear. The decision boundaries are more complicated than for MNIST, reflecting the more complex structure of the task. This also highlights the deficiency of our approach of linear interpolation in pixel space, which we further discuss and illustrate in Appendix G.3. We observe some magnification of areas in the vicinity of decision boundaries, though here it is harder to interpret all forms of structure that are present. Thus, even in this more realistic setting, we observe shaping of geometry over training that appears consistent with the proposal of Amari and Wu (1999).
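When the feature map goes from many input dimensions to fewer features, \(g=J^{\mathrm{T}}J\) is rank-deficient, and the quantity described above is the product of its non-zero eigenvalues, i.e. the squared product of the non-zero singular values of \(J\). A toy sketch follows, with small dimensions and a generic GELU network standing in for the ResNet-34 (all sizes and the rank-truncation threshold are illustrative assumptions):

```python
import torch

torch.manual_seed(1)
d_in, n_feat = 12, 4      # toy stand-ins for 3072 pixels -> 512 hidden units
net = torch.nn.Sequential(
    torch.nn.Linear(d_in, 32), torch.nn.GELU(),
    torch.nn.Linear(32, n_feat),
)

def pseudo_volume_element(x, eps=1e-10):
    # g = J^T J is rank-deficient when n_feat < d_in; take the product of
    # the non-zero singular values of J, whose square is the product of the
    # non-zero eigenvalues of g.
    J = torch.autograd.functional.jacobian(net, x)   # shape (n_feat, d_in)
    s = torch.linalg.svdvals(J)
    s = s[s > eps]
    return torch.prod(s)   # = sqrt of the pseudo-determinant of g

x = torch.randn(d_in)
print(pseudo_volume_element(x))
```

For a real ResNet one would replace `net` with the input-to-final-hidden-layer mapping; the SVD route avoids forming the (numerically singular) full metric.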
## 7 Discussion

To conclude, we have shown that training on simple tasks shapes the Riemannian geometry induced by neural network representations by magnifying areas along decision boundaries, consistent with the proposal of Amari and Wu for geometrically-inspired kernel learning (Amari and Wu, 1999; Williams et al., 2007; Wu and Amari, 2002). Our results on the geometry induced by the NNGP kernel provide a preliminary picture of the geometric priors of neural networks, and our experimental results begin to show how representational geometry is shaped over the course of training. These results are relevant to the broad goal of leveraging non-Euclidean geometry in deep learning (Bronstein et al., 2021; Weber et al., 2020). We now discuss the limitations of our work, as well as directions for future study. Perhaps the most important limitation of our work is the fact that we focus either on toy tasks with two-dimensional input domains, or on low-dimensional slices through high-dimensional domains. This is a fundamental limitation of how we have attempted to visualize the geometry.

Figure 3: _Top panel_: \(\log_{10}(\sqrt{\det g})\) induced at interpolated images between 7 and 6 by a single-hidden-layer fully-connected network trained to classify MNIST digits. _Bottom panel_: Digit class predictions and \(\log_{10}(\sqrt{\det g})\) for the plane spanned by MNIST digits 7, 6, and 1 at the final training epoch (200). Sample images are visualized at the endpoints and midpoint for each set. Each line is colored by its prediction at the interpolated region and end points. As training progresses, the volume elements bulge in the middle (near the decision boundary) and taper off when travelling towards endpoints. See Appendix G.2 for experimental details and Figure G.7 for images interpolated between other digits.
We are also restricted by computational constraints (see in particular Appendix G.3); to characterize the geometry of state-of-the-art network architectures, more efficient and numerically stable algorithms for computing these quantities must be developed. An important question that we leave open for future work is whether expanding areas near decision boundaries generically improves generalization in deep neural networks, consistent with the original motivations of Amari & Wu (1999). Indeed, it is easy to imagine a scenario in which the geometry is overfit, and the trained network becomes too sensitive to small changes in the input. This possibility is consistent with prior work on the sensitivity of deep networks (Novak et al., 2018), and with the related phenomenon of adversarial vulnerability (Goodfellow et al., 2014; Szegedy et al., 2013). Investigating these links will be an important objective for future work. Because of the smoothness conditions required by the definition of the pullback metric and the requirement that \((\mathcal{M},g)\) be a differentiable manifold (Benfenati & Marta, 2023a; Hauser & Ray, 2017), the approach pursued in this work does not apply directly to networks with ReLU activation functions, which are not differentiable. Deep ReLU networks are continuous piecewise-linear maps, with many distinct activation regions (Hanin & Rolnick, 2019a;b). Within each region, the corresponding linear feature map will induce a flat metric on the input space, but the magnification factor will vary from region to region. It will be interesting to investigate the resulting geometry in future work. One possible area of application of our results is to the general problem of how to analyze and compare neural network representations (Kornblith et al., 2019; Williams et al., 2021).
Importantly, one could compute and plot the volume element induced by a feature map even when one does not have access to explicit class labels. This could allow one to study networks trained with self-supervised learning, or even differentiable approximations to biological neural networks (Wang and Ponce, 2022). Exploring the rich geometry induced by these networks is an exciting avenue for future investigation.

Figure 4: _Top panel_: \(\log_{10}(\sqrt{\det g})\) induced at interpolated images between a horse and a frog by a ResNet-34 trained to classify CIFAR-10 images. _Bottom panel_: Class predictions for the plane spanned by images of a horse, a frog, and a car. The volume element is largest at the intersection of several binary decision boundaries, and smallest within each decision region. The one-dimensional slices along the edges of each ternary plot are consistent with the top panel. See Appendix G.3 for experimental details and Figure G.12 for linear interpolations and planes spanned by other classes, and for how the plane evolves during training.

## Acknowledgements

We thank Alexander Atanasov and Blake Bordelon for helpful comments on our manuscript. JAZV, CP and this research were supported by a Google Faculty Research Award and NSF DMS-2134157. The computations in this paper were run on the FASRC Cannon cluster supported by the FAS Division of Science Research Computing Group at Harvard University.
2303.09343
Real-time elastic partial shape matching using a neural network-based adjoint method
Surface matching usually provides significant deformations that can lead to structural failure due to the lack of physical policy. In this context, partial surface matching of non-linear deformable bodies is crucial in engineering to govern structure deformations. In this article, we propose to formulate the registration problem as an optimal control problem using an artificial neural network where the unknown is the surface force distribution that applies to the object and the resulting deformation computed using a hyper-elastic model. The optimization problem is solved using an adjoint method where the hyper-elastic problem is solved using the feed-forward neural network and the adjoint problem is obtained through the backpropagation of the network. Our process improves the computation speed by multiple orders of magnitude while providing acceptable registration errors.
Alban Odot, Guillaume Mestdagh, Yannick Privat, Stéphane Cotin
2023-03-16T14:23:34Z
http://arxiv.org/abs/2303.09343v1
# Real-time elastic partial shape matching using a neural network-based adjoint method

###### Abstract

Surface matching usually provides significant deformations that can lead to structural failure due to the lack of physical policy. In this context, partial surface matching of non-linear deformable bodies is crucial in engineering to govern structure deformations. In this article, we propose to formulate the registration problem as an optimal control problem using an artificial neural network where the unknown is the surface force distribution that applies to the object and the resulting deformation computed using a hyper-elastic model. The optimization problem is solved using an adjoint method where the hyper-elastic problem is solved using the feed-forward neural network and the adjoint problem is obtained through the backpropagation of the network. Our process improves the computation speed by multiple orders of magnitude while providing acceptable registration errors.

Keywords: Optimal control · Artificial neural network · Hyper-elasticity.

## 1 Introduction

We consider an elastic shape-matching problem between a deformable solid and a point cloud. Namely, an elastic solid in its reference configuration is represented by a tridimensional mesh, while the point cloud represents a part of the solid boundary in a deformed configuration. The objective of the procedure is not only to deform the mesh so that its boundary matches the point cloud, but also to estimate the displacement field inside the object. This situation also arises in computer-assisted liver surgery, where augmented reality is used to help the medical staff navigate the operation scene [3]. Most methods for intra-operative organ shape-matching revolve around a biomechanical model to describe how the liver is deformed when forces are applied to its boundary. Sometimes, a deformation is created by applying forces [13] or constraints [11; 7] to enforce surface correspondence.
Other approaches prefer to solve an inverse problem, where the final displacement minimizes a cost functional among a range of admissible displacements [5]. However, while living tissues are known to exhibit a highly nonlinear behavior [8], using hyperelastic models in the context of real-time shape matching is precluded by their high computational cost. For this reason, the aforementioned methods either fall back to linear elasticity [5] or to the linear co-rotational model [13]. In this paper, we perform real-time hyperelastic shape matching by predicting nonlinear displacement fields using a neural network. The network is included in an adjoint-like method, where the backward chain is executed automatically using automatic differentiation. Neural networks are used to predict solutions to partial differential equations, in compressible aerodynamics [14], structural optimization [15] or astrophysics [6]. Here we work at a small scale, but try to obtain real-time simulations using complex models. Also, the medical image processing literature is full of networks that perform shape-matching in one step [12]. However, the range of available displacement fields is limited by the training dataset of the network, making such approaches less robust to unexpected deformations. On the other hand, assigning a very generic task to the network results in a very flexible method, where details of the physical model, including the range of forces that can be applied to the liver and the zones where they apply, may be chosen after the training. Therefore, our shape-matching approach provides a good compromise between the speed of learning-based methods and the flexibility of standard simulations. Note that, for the rest of this article, owing to how the method is formulated, we use the terms "shape-matching" and "registration" interchangeably. We start by presenting the method split into three parts.
First, the optimization problem; second, the neural network used; and finally, the adjoint method computed using an automatic differentiation framework. We then present the results considering a toy problem involving a square section beam and a more realistic one involving a liver.

## 2 Methods

### Optimization problem

To model the registration problem, we use the optimal control formulation introduced in Mestdagh and Cotin [9]. The deformable object is represented by a tetrahedral mesh, endowed with a hyperelastic model. In its reference configuration, the elastic object occupies the domain \(\Omega_{0}\), whose boundary is \(\partial\Omega_{0}\). When a displacement field \(\mathbf{u}\) is applied to \(\Omega_{0}\), the deformed domain is denoted by \(\Omega_{\mathbf{u}}\), and its boundary is denoted by \(\partial\Omega_{\mathbf{u}}\), as shown in Figure 1. Applying a surface force distribution \(\mathbf{g}\) onto the object boundary results in the elastic displacement \(\mathbf{u}_{\mathbf{g}}\), solution to the static equilibrium equation \[\mathbf{F}(\mathbf{u}_{\mathbf{g}})=\mathbf{g}, \tag{1}\] where \(\mathbf{F}\) is the residual from the hyperelastic model. Displacements are discretized using continuous piecewise linear finite element functions so that the system state is fully known through the displacement of mesh nodes, stored in \(\mathbf{u}\). Note that \(\mathbf{g}\) contains the nodal forces that apply on the mesh vertices. As we only consider surface loadings, nodal forces are zero for nodes inside the domain. Finally, the observed data are represented by a point cloud \(\Gamma=\{y_{1},\dots,y_{m}\}\).
We compute a nodal force distribution that achieves the matching between \(\partial\Omega_{\mathbf{u}}\) and \(\Gamma\) by solving the optimization problem \[\min_{\mathbf{g}\in G} \quad\Phi(\mathbf{g})+\tfrac{\alpha}{2}\|\mathbf{g}\|^{2} \tag{2}\] \[\quad\text{where}\qquad\Phi(\mathbf{g})=J(\mathbf{u_{g}}), \tag{3}\] where \(\alpha>0\) is a regularization parameter, \(G\) denotes the set of admissible nodal forces distributions, and \(J\) is the least-square term \[J(\mathbf{u})=\tfrac{1}{2m}\sum_{j=1}^{m}d^{2}(y_{j},\partial\Omega_{\mathbf{u}}). \tag{4}\] Here, \(d(y,\partial\Omega_{\mathbf{u}})=\min_{x\in\partial\Omega_{\mathbf{u}}}\|y-x\|\) denotes the distance between \(y\in\Gamma\) and \(\partial\Omega_{\mathbf{u}}\). The functional \(J\) measures the discrepancy between \(\partial\Omega_{\mathbf{u}}\) and \(\Gamma\), and it evaluates to zero whenever every point \(y\in\Gamma\) is matched by \(\partial\Omega_{\mathbf{u}}\). A wide range of displacement fields \(\mathbf{u}\) are minimizers of problem (2), but most of them have no physical meaning. Defining a set of admissible controls \(G\) is critical to generate only displacements that are consistent with a certain physical scenario. The set \(G\) determines, among other things, on which vertices nodal forces may apply, but also which magnitude they are allowed to take. Selecting zones where surface forces apply is useful to obtain physically plausible solutions.

### A neural network to manage the elastic problem

Nonlinear elasticity problems are generally solved using a Newton method, which yields very accurate displacement fields at a high computational cost. In this paper, we give a boost to the direct solution procedure by using a pre-trained neural network to compute displacements from forces. This results in much faster estimates, while the quality of solutions depends on the network training. Artificial neural networks are composed of elements named artificial neurons grouped into multiple layers.
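Before detailing the network, note that the least-squares term \(J\) of Eq. (4) admits a simple discrete approximation in which the distance from each target point to the deformed boundary is replaced by the distance to the nearest boundary vertex. The sketch below illustrates this approximation; the paper's actual surface projection may differ.

```python
import torch

def discrepancy(boundary_nodes, point_cloud):
    # Approximate J of Eq. (4): d(y, boundary) is taken as the distance
    # from y to the nearest boundary vertex (a common discretization).
    dists = torch.cdist(point_cloud, boundary_nodes)   # (m, n_boundary)
    nearest = dists.min(dim=1).values
    return 0.5 * (nearest ** 2).mean()

# Toy example: boundary vertices of a deformed square vs. a 3-point cloud
boundary = torch.tensor([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
cloud = torch.tensor([[0., 0.], [1., 0.1], [0.5, 1.2]])
print(discrepancy(boundary, cloud))
```

The functional vanishes when every point of the cloud coincides with a boundary vertex, mirroring the property of \(J\) stated above.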
Figure 1: Schematic of the problem which we are trying to optimize for.

A layer applies a transformation on its input data and passes it to the associated activation layer. The result of this operation is then passed to the next layer in the architecture. Activation functions play an important role in the learning process of neural networks. Their role is to apply a nonlinear transformation to the output of the associated layers, thus greatly improving the representation capacity of the network. While a wide variety of architectures are possible, we use the one proposed by Odot et al. [10]. It consists of a fully-connected feed-forward neural network with 2 hidden layers (see Figure 2). The connection between two adjacent layers can be expressed as follows \[\mathbf{z}_{i}=\sigma_{i}(\mathbf{W}_{i}\mathbf{z}_{i-1}+\mathbf{b}_{i})\text{ for }1\leqslant i\leqslant n+1, \tag{5}\] where \(n\) is the total number of layers, \(\sigma(.)\) denotes the element-wise activation function, \(\mathbf{z}_{0}\) and \(\mathbf{z}_{n+1}\) denote the input and output tensors respectively, and \(\mathbf{W}_{i}\) and \(\mathbf{b}_{i}\) are the trainable weight matrices and biases in the \(i^{th}\) layer. In our case, the activation functions \(\sigma(.)\) are PReLU [4], which provides a learnable parameter \(a\), allowing us to adaptively consider both positive and negative inputs. From now on, we denote the forward pass operation in the network by \[\mathbf{u}_{\mathbf{g}}=\mathbf{N}(\mathbf{g}). \tag{6}\]
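A minimal PyTorch sketch of this architecture follows. The two-hidden-layer layout with widths equal to the number of degrees of freedom and a linear output layer reflects our reading of the description above; the exact hyper-parameters of Odot et al. [10] may differ.

```python
import torch
import torch.nn as nn

def make_network(n_dof):
    # Fully-connected feed-forward net mapping nodal forces g to nodal
    # displacements u_g = N(g): two hidden layers of width n_dof with
    # PReLU activations (Eq. (5)), linear output (our assumption).
    return nn.Sequential(
        nn.Linear(n_dof, n_dof), nn.PReLU(),
        nn.Linear(n_dof, n_dof), nn.PReLU(),
        nn.Linear(n_dof, n_dof),
    )

net = make_network(n_dof=6)     # e.g. 2 nodes x 3 displacement components
u = net(torch.zeros(6))
print(u.shape)                  # torch.Size([6])
```

Training such a network on force/displacement pairs \((\mathbf{g},\mathbf{u_{g}})\) generated by a finite element solver is what makes the fast forward pass of Eq. (6) possible.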
Figure 2: The proposed architecture is composed of 4 fully connected layers of size equal to the number of degrees of freedom, with PReLU activation functions. The input is the nodal forces and the output is the respective nodal displacements.

In a standard adjoint procedure, a displacement is computed from a force distribution by solving (1) using a Newton method, and it is then used to evaluate \(\Phi(\mathbf{g})\). The Newton method is the algorithm of choice when dealing with non-linear materials: it iteratively solves the hyper-elastic problem, producing accurate solutions. This method is also known for easily diverging when the load reaches a certain limit that depends on the problem. To compute the deformation, one must therefore apply the load in multiple substeps, which greatly increases computation time. The backward chain requires solving an adjoint problem to evaluate the objective gradient, namely \[\nabla\Phi(\mathbf{g})=\mathbf{p_{g}}\quad\text{where}\quad\nabla\mathbf{F}(\mathbf{u_{g}})^{\mathrm{T}}\mathbf{p_{g}}=\nabla J(\mathbf{u_{g}}). \tag{7}\] In (7), the adjoint state \(\mathbf{p_{g}}\) is the solution to a linear system involving the hyper-elasticity Jacobian matrix \(\nabla\mathbf{F}(\mathbf{u_{g}})\). When the network is used, however, the whole pipeline is much more straightforward, as the network forward pass is only composed of direct operations. The network-based forward and backward chains read \[\Phi(\mathbf{g})=J\circ\mathbf{N}(\mathbf{g})\quad\text{and}\quad\nabla\Phi(\mathbf{g})=\mathbf{p_{g}}=\left[\nabla\mathbf{N}(\mathbf{g})\right]^{\mathrm{T}}\nabla J(\mathbf{u_{g}}), \tag{8}\] respectively. On a precautionary basis, let us take a brief look at the (linear) adjoint operator \(\nabla\mathbf{N}(\mathbf{g})^{\mathrm{T}}\). When \(\nabla\mathbf{N}(\mathbf{g})^{\mathrm{T}}\) is applied, the information propagates backward in the network, following the same wires as the forward pass.
The displacement gradient \(\nabla J(\mathbf{u_{g}})\) is fed to the output tensor \(\mathbf{s}_{n+1}\) and the adjoint state is read at the network entry \(\mathbf{s}_{0}\). In between, the relation between two layers is the adjoint operation to (5). It reads \[\mathbf{s}_{i-1}=\mathbf{W}_{i}^{\mathrm{T}}\,\nabla\sigma_{i}(\mathbf{W}_{i}\mathbf{z}_{i-1}+\mathbf{b}_{i})\,\mathbf{s}_{i}\quad\text{ for }\quad 1\leqslant i\leqslant n+1, \tag{9}\] where \(\nabla\sigma_{i}(\mathbf{W}_{i}\mathbf{z}_{i-1}+\mathbf{b}_{i})\) is a diagonal matrix saved during the forward pass. The network-based adjoint procedure is summarized in Algorithm 1, keeping in mind the backward chain is handled automatically. Given a nodal force vector \(\mathbf{g}\), evaluating \(\Phi(\mathbf{g})\) and \(\nabla\Phi(\mathbf{g})\) requires one forward pass and one backward pass in the network. Then, (2) may be solved iteratively using a standard gradient-based optimization algorithm. Because both network passes consist only of direct operations, the optimization solver is less likely to fail for accuracy reasons, compared to a \(\Phi\) evaluation based on an iterative method.

```
Data: Current iterate \(\mathbf{g}\)
Perform the forward pass \(\mathbf{u_{g}}=\mathbf{N}(\mathbf{g})\)
Evaluate \(J(\mathbf{u_{g}})\) and \(\nabla J(\mathbf{u_{g}})\)
Perform the backward pass \(\mathbf{p_{g}}=\left[\nabla\mathbf{N}(\mathbf{g})\right]^{\mathrm{T}}\nabla J(\mathbf{u_{g}})\)
Result: \(\nabla\Phi(\mathbf{g})=\mathbf{p_{g}}\)
```

**Algorithm 1** Network-based adjoint method to evaluate \(\Phi\).

## 3 Results

Our method is implemented in Python. To be more specific, we use PyTorch to handle the network and evaluate \(J\) on the GPU, while the optimization solver is a limited memory BFGS algorithm [1] available in the Scipy package. Our numerical tests run on a Titan RTX GPU and AMD Ryzen 9 3950x CPU, with 32 GiB of RAM.
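In PyTorch, the forward and backward passes of Algorithm 1 reduce to one call of the network and one call of `backward()`. The sketch below uses a random linear map as a stand-in for the trained network \(\mathbf{N}\) and a quadratic stand-in for \(J\) (both are illustrative assumptions, not the paper's trained model or discrepancy functional):

```python
import torch

def make_phi(network, objective):
    # Algorithm 1: one forward pass gives u_g = N(g); backpropagating
    # J through the network returns grad Phi(g) = [grad N(g)]^T grad J(u_g).
    def phi_and_grad(g):
        g = g.detach().clone().requires_grad_(True)
        u = network(g)        # forward pass u_g = N(g)
        val = objective(u)    # J(u_g)
        val.backward()        # backward pass: adjoint state read at the input
        return val.item(), g.grad.detach()
    return phi_and_grad

# Toy stand-ins for N and J
torch.manual_seed(0)
net = torch.nn.Linear(4, 4)
target = torch.randn(4)
J = lambda u: 0.5 * ((u - target) ** 2).sum()

phi = make_phi(net, J)
val, grad = phi(torch.zeros(4))
print(val, grad)
```

The pair `(val, grad)`, converted to NumPy, can be fed to `scipy.optimize.minimize` with `jac=True` and an L-BFGS method, matching the solver setup described above.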
### Surface-matching tests on a beam mesh

To assess the validity of our method, we first consider a toy problem involving a square section beam with 304 hexahedral elements. The network is trained using 20,000 pairs \((\mathbf{g},\mathbf{u_{g}})\), computed using a Neo-Hookean material law with a Young modulus \(E=4,500\) Pa and a Poisson ratio \(\nu=0.49\). We create 10,000 additional synthetic deformations of the beam, distinct from the training dataset, using the SOFA finite element framework [2]. Figure 3 shows three examples of synthetic deformations, along with the sampled point clouds. Generated deformations include bending (Figure 3a), torsion (Figure 3c) or a combination of them (Figure 3b). For each deformation, we sample the deformed surface to create a point cloud. We then apply our algorithm with a relative tolerance of \(10^{-4}\) on the objective gradient norm. We computed statistics regarding the performance of our method over a series of 10,000 different scenarios and obtained the following results: a mean registration error of \(6\times 10^{-5}\pm 6.15\times 10^{-5}\), a mean computation time of 48 ms \(\pm\) 19 ms, and a mean number of iterations of 27 \(\pm\) 11.

Figure 3: Deformations from the test dataset. The red dots represent the target point clouds, and the color map represents the Von Mises stress error of the neural network prediction.

Using a FEM solver, each sample of the test dataset took between 1 and 2 seconds to compute. This is mostly due to the complexity of the deformations shown in Figure 3. Such displacement fields require numerous costly Newton-Raphson iterations to reach equilibrium. The neural network provides physical deformations in less than a millisecond regardless of the complexity of the force or resulting deformation, which greatly improves the computation time of the method. From our analysis, the breakdown of time spent on the different tasks of the algorithm is fairly consistent, even with denser meshes.
Network predictions and loss function evaluations each represent 10% to 15%, while gradient computations account for up to 80% of the whole optimization process. This allows us to reach an average registration error of \(5.37\times 10^{-5}\) in less time than it takes to compute a single simulation of the problem using a classic FEM solver. Due to the beam shape symmetry, some point clouds may be compatible with several deformed configurations, resulting in wrong displacement fields returned by the procedure. However, our procedure achieved a satisfying surface matching in each case. These results on a toy scenario show that our algorithm provides fast and accurate registrations. In the next section, we apply our method in the field of augmented surgery with the partial surface registration of a liver, and show that, with no additional computation, our approach produces with satisfying accuracy the forces that generate such displacements.

### An application in augmented surgery and robotics

We now turn to another test case involving a more complex domain. The setting is similar to [9, Sect. 3.2]. In this context, a patient-specific liver mesh is generated from tomographic images and the objective is to provide augmented reality by registering, in real-time, the mesh to the deformed organ. During the surgery, only a partial point cloud of the visible liver surface can be obtained. The contact zones with the surgical instruments can also be estimated. In our case, the liver mesh contains 3,046 vertices and 10,703 tetrahedral elements. Homogeneous Dirichlet conditions are applied at zones where ligaments hold the liver, and at the hepatic vein entry.

Figure 4: Mesh of the liver used in this section, composed of 3,046 vertices and 10,703 tetrahedral elements, which represents a challenge compared to the mesh used in Section 3.1.
As before, we use a Neo-Hookean constitutive law with \(E=4,500\) Pa and \(\nu=0.49\), and the network is trained on 20,000 force/displacement pairs. We create five series of synthetic deformations by applying a variable local force, distributed over a few nodes, on the liver mesh boundary. For each series, 50 incremental displacements are generated, along with the corresponding point clouds. The network-based registration algorithm is used to update the displacement field and forces between two frames. We also run a standard adjoint method involving the Newton algorithm, to compare with our approach. As the same mesh is used for data generation and reconstruction, the Newton-based reconstruction is expected to perform well.

### Liver partial surface matching for augmented surgery

In this subsection, we present two relevant metrics: the target registration error and the computation times. In augmented surgery, applications such as robot-aided surgery or holographic lenses require accurate calibrations that rely on registration. One of the most common metrics in registration tasks is the target registration error (TRE), which is the distance between corresponding markers not used in the registration process. In our case we work on synthetic deformations of a liver; the markers are therefore the nodes of the deformed mesh. The five scenarios present similar results, with TREs between 0.5 mm and 3.5 mm. Such errors are entirely acceptable and preserve the physical properties of the registered mesh. We point out that the average TRE for the classic method is around 0.1 mm, which shows the impact of the network approximations.

Figure 5: Average target registration error and computation times of each sequence. Due to the non-linearity introduced by the Neo-Hookean material used to simulate the liver, multiple iterations are needed to converge toward the target point cloud.
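The TRE used above reduces to the mean Euclidean distance between corresponding markers of the reference and registered configurations. A minimal sketch (the node arrays are invented for illustration):

```python
import numpy as np

def target_registration_error(ref_nodes, reg_nodes):
    """Mean Euclidean distance between corresponding markers
    (here, mesh nodes) not used in the registration itself."""
    return np.linalg.norm(ref_nodes - reg_nodes, axis=1).mean()

# Toy example: registered nodes offset from the reference by 1 mm in x.
ref = np.zeros((5, 3))                      # coordinates in mm
reg = ref + np.array([1.0, 0.0, 0.0])
print(target_registration_error(ref, reg))  # → 1.0
```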
Considering the complexity of the mesh, computing a single iteration of the algorithm using a classical solver takes multiple seconds, which leads to an average of 14 minutes per frame. Our proposed algorithm uses a neural network to improve the computation speed of both the hyper-elastic and adjoint problems. The hyper-elastic problem takes around 4 to 5 milliseconds to compute, while the adjoint problem takes around 11 ms. This leads to a great improvement in convergence speed, as seen in Figure 5: on average, we reduce the computation time by a factor of 6,000.

### Force estimation for robotic surgery

In the context of liver computer-assisted surgery, the objective is to estimate a force distribution supported by a small zone on the liver boundary. Such a local force is for instance applied when a robotic instrument manipulates the organ. In this case, it is critical to estimate the net force magnitude applied by the instrument, to avoid damaging the liver. To represent the uncertainty about the position of the instruments, the reconstructed forces are allowed to be nonzero on a larger support than the original distribution. Figure 6 shows the reference and reconstructed deformations and nodal forces for three frames of the same series. While the Newton-based reconstruction looks similar to the reference one, the network-based nodal forces are much noisier. The great improvement in speed thus comes at the cost of precision: the network only approximates the hyperelastic model, and its prediction errors also propagate through the backward pass (adjoint problem) and accumulate in the final solution.

Figure 6: Synthetic liver deformations and force distributions (left), reconstructed deformations and forces using the Newton method (middle) and the network (right) for test case 3.
Although the force estimation is noisy in most cases, it remains acceptable, as displayed in Figure 7. The red dotted line corresponds to the average error obtained with the classical adjoint method (10.04%). While we do not reach that value, some sequences, such as 1 and 3, provide good reconstructions. The difference in errors between scenarios is mostly due to the training force distribution. This problem can be corrected by simply adding more data to the dataset, thus providing better coverage of the force and deformation space. These results show that this algorithm can produce fast and accurate registration at the expense of force reconstruction accuracy. They also show that the force estimation is not directly correlated with registration accuracy: for example, sequence 1 has the worst TRE but a better force reconstruction than sequence 4.

Figure 7: Force estimation error of the 5 sequences using our method; in red, the average force reconstruction error with the classical method.

## 4 Conclusion

We presented a physics-based solution for a partial surface-matching problem that works with non-linear materials, using deep learning and the optimal control formalism. The results are obtained on two main scenarios that differ both in scale and complexity. We showed that a fast and accurate registration can be obtained in both cases and that, in addition, the method can predict the set of external forces that led to the deformation. Such results show that deep learning and optimal control have a lot in common and can easily be coupled to solve optimization problems very efficiently. Current limitations of our work are mostly due to the limited accuracy of the network and to the need to retrain the network when the shape or material parameters of the model change.
2305.18856
A Federated Channel Modeling System using Generative Neural Networks
The paper proposes a data-driven approach to air-to-ground channel estimation in a millimeter-wave wireless network on an unmanned aerial vehicle. Unlike traditional centralized learning methods that are specific to certain geographical areas and inappropriate for others, we propose a generalized model that uses Federated Learning (FL) for channel estimation and can predict the air-to-ground path loss between a low-altitude platform and a terrestrial terminal. To this end, our proposed FL-based Generative Adversarial Network (FL-GAN) is designed to function as a generative data model that can learn different types of data distributions and generate realistic patterns from the same distributions without requiring prior data analysis before the training phase. To evaluate the effectiveness of the proposed model, we assess its performance using the Kullback-Leibler (KL) divergence and the Wasserstein distance between the synthetic data distribution generated by the model and the actual data distribution. We also compare the proposed technique with other generative models, such as the FL-Variational Autoencoder (FL-VAE) and stand-alone VAE and GAN models. The results of the study show that the synthetic data generated by FL-GAN has the highest similarity in distribution with the real data. This shows the effectiveness of the proposed approach in generating data-driven channel models that can be used in different regions.
Saira Bano, Pietro Cassarà, Nicola Tonellotto, Alberto Gotta
2023-05-30T08:50:22Z
http://arxiv.org/abs/2305.18856v1
# A Federated Channel Modeling System using Generative Neural Networks

###### Abstract

The paper proposes a data-driven approach to air-to-ground channel estimation in a millimeter-wave wireless network on an unmanned aerial vehicle. Unlike traditional centralized learning methods that are specific to certain geographical areas and inappropriate for others, we propose a generalized model that uses Federated Learning (FL) for channel estimation and can predict the air-to-ground path loss between a low-altitude platform and a terrestrial terminal. To this end, our proposed FL-based Generative Adversarial Network (FL-GAN) is designed to function as a generative data model that can learn different types of data distributions and generate realistic patterns from the same distributions without requiring prior data analysis before the training phase. To evaluate the effectiveness of the proposed model, we assess its performance using the Kullback-Leibler (KL) divergence and the Wasserstein distance between the synthetic data distribution generated by the model and the actual data distribution. We also compare the proposed technique with other generative models, such as the FL-Variational Autoencoder (FL-VAE) and stand-alone VAE and GAN models. The results of the study show that the synthetic data generated by FL-GAN has the highest similarity in distribution with the real data. This shows the effectiveness of the proposed approach in generating data-driven channel models that can be used in different regions.

Federated learning, Unmanned aerial vehicles, Channel modeling, Generative neural networks

## I Introduction

Non-terrestrial networks (NTNs), such as low Earth orbit (LEO) satellite constellations, high-altitude platforms, and unmanned aerial vehicles (UAVs), have traditionally been used for disaster management and remote sensing [1]. However, they are now seen as promising technologies for providing ubiquitous connectivity in the future generation of the Internet [2].
Such radio access networks (RANs), operating in the millimeter-wave (mmWave) range, are very promising, providing global coverage and high capacity for reliable and efficient communications services [3]. The 3rd Generation Partnership Project (3GPP) has also recognized the potential of mmWave technology to support satellite communications. Accurate statistical channel models are essential to characterize the mmWave link and to determine the underlying channel parameters in order to improve the transmission performance of wireless communication systems. Extensive research has been conducted to develop effective methods for accurate channel modeling, such as the mathematical propagation model proposed in [4] for estimating the ground-to-air path loss between wireless devices and low-altitude platforms using mmWave frequency bands. Furthermore, deterministic channel models, such as ray-tracing techniques, as well as stochastic channel models, are commonly used; both require extensive technical knowledge and expertise for analyzing measurement data to estimate a comprehensive set of different channel parameters [5]. However, building statistical channel models to determine the underlying channel parameters that accurately capture the delay, direction, and path gains of individual links is difficult, especially in the mmWave domain. Machine Learning (ML) techniques, such as Neural Networks (NNs), can be used to develop statistical channel models that overcome the limitations of conventional channel modeling systems [6]. However, these models result in channel parameters that are site-specific and may not be generally applicable. In this regard, generative NNs, which have proven to be very successful in modeling images and text, provide a suitable approach to data-driven channel modeling and can accurately represent complex environments. Initial research has explored the use of generative NNs for site-specific wireless channels.
For example, in [7], the authors proposed generative networks to model channel parameters and trained five different models for five different cities. In contrast, our main goal is to develop a general model that can be used for all participating cities, while maintaining an acceptable model performance for each of these different locations. To this end, we propose a location-agnostic statistical channel propagation model based on Federated Learning (FL) that focuses on predicting the path loss component between a UAV and terrestrial nodes in mmWave communication networks. FL is a paradigm developed by Google that aims to build ML models with distributed datasets across multiple devices while maintaining privacy [8]. Participating users communicate parameters or gradients to a central server, which updates and distributes a global model without access to user data [9, 10, 11]. In this work, we use the FL framework as a distributed training engine to train our models on different datasets and develop a generalized channel model using Variational Autoencoder (VAE) and Conditional Generative Adversarial Network (CGAN) architectures, i.e., FL-VAE and FL-GAN. In our study, we rely on the statistical characteristics of the urban environment of the target area, collected through ray-tracing simulations, to train the models. The performance of the proposed approach is determined using various statistical parameters.

The remainder of the paper is organised as follows. Section II discusses the system model, while Sections III and IV present the federated VAE and GAN approaches for channel modeling, respectively. Section V shows the experimental evaluation performed. Finally, Section VI draws conclusions.

## II System Model

In this work, we focus on channel parameter modeling, with emphasis on the path loss component connecting UAVs to cellular base stations on the ground, i.e., gNBs.
We propose a distributed training approach using FL for channel model estimation with two generative NNs. For modelling purposes, we assume that the UAVs act as transmitters and the ground base stations act as receivers, but the roles can be reversed. To model the air-to-ground channel, we assume two types of gNBs, one terrestrial and the other aerial, as in [12]. The aerial gNBs serve as dedicated stations (mounted on rooftops and tilted upward), while the terrestrial gNBs serve ground users (mounted at street level), as shown in Figure 1. In addition, we assume three link states between the transmitter and receiver: Line of Sight (LOS), Non-LOS (NLOS), and no link (i.e., no paths are available). However, when modelling the path loss between UAVs and gNBs, we mainly focus on NLOS paths, since for LOS the path loss can be calculated using Friis' law [13]. We adopt the channel parameters estimated with the ray-tracer package by [7] as a benchmark dataset for our investigation. The ray-tracer simulations estimate the channel parameters, including path losses, azimuth and elevation angles of arrival and departure, and propagation delays. According to the dataset, there are a total of 20 paths per link and six parameters per path, resulting in 120 parameters per link with a maximum path loss of 200 dB [7]. The dataset consists of channel parameters for different cities estimated using the ray-tracer package. Using this dataset, we train the generative models for each city in a decentralized manner. These standalone models can learn the channel representation of a UAV's local dataset in a given region but may have biases and be applicable only in a limited spatial domain. Therefore, a general model that is not tied to a specific environment is essential. To this end, we use FL to aggregate these standalone models and obtain a global model. We validate the generated model using the CDF of the path loss.
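The sampling pipeline implied above — classify the link state (LOS, NLOS, or no link), then draw path parameters for usable links — can be sketched as follows. The classifier and generator here are invented toy stand-ins for the trained link-state and path models; the logits and the Gaussian draw are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
STATES = ["LOS", "NLOS", "no_link"]

def link_state_probs(rel_pos, gnb_type):
    # Hypothetical stand-in for the trained link-state NN: a softmax over
    # the three link states, from the UAV position relative to the gNB
    # and the gNB type (0 = terrestrial, 1 = aerial).
    logits = np.array([-np.linalg.norm(rel_pos) / 100.0, 0.0, -1.0 + gnb_type])
    e = np.exp(logits - logits.max())
    return e / e.sum()

def sample_link(rel_pos, gnb_type, n_paths=20, n_params=6):
    # Stage 1: sample the link state; stage 2: generate the 20 x 6 path
    # parameters (a placeholder Gaussian draw stands in for the path model).
    p = link_state_probs(rel_pos, gnb_type)
    state = rng.choice(STATES, p=p)
    if state == "no_link":
        return state, None
    return state, rng.normal(size=(n_paths, n_params))

state, paths = sample_link(rel_pos=np.array([50.0, 20.0, 120.0]), gnb_type=1)
print(state, None if paths is None else paths.shape)
```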
In the proposed approach, we use two generative NN models, both of which have a two-stage structure, i.e., link and path models [12]. In the first stage, an NN is used as a link model to determine the state of the link - whether it is LOS, NLOS, or no link - according to 3GPP requirements [14]. To determine the link state, the relative position of the UAV to the gNB and the type of gNB are used as inputs. After the link state is determined, a generative model, i.e., a path model, is used in the second stage to generate the path parameters. This generative model is trained to match the distribution of the training dataset. To perform the distributed training using FL, we trained the link-state model for each city and stored it on the corresponding station to use with the path model in FL. We then aggregate these generative models as described in Section III and Section IV, respectively. Once the model is trained, it can be used in simulations to statistically determine channel parameters considering the link status.

## III Federated VAE

In this section, we describe our FL-VAE for channel modeling of the path loss component. We first introduce the basic concepts of the VAE (Section III-A) and then describe in detail the FL-VAE approach (Section III-B) used for modeling the channel parameters.

### _Variational Autoencoder (VAE)_

A VAE consists of encoder and decoder modules, where the encoder, defined as \(q_{\theta}(z|x)\), characterizes the distribution of the input variables \(x\) according to the encoding in the latent space \(z\) (the encoded representation of the input variables). On the other hand, the decoder, defined as \(p_{\phi}(x|z)\), characterizes the distribution of the decoded variables based on the latent space, where \(\theta\) and \(\phi\) are the parameters of the encoder and decoder NNs, respectively.
The loss function of the VAE given in [15] is as follows: \[\mathcal{L}(\phi,\theta)=-\mathbb{E}_{q_{\theta}(z|x_{i})}\big{[}\log p_{\phi}(x_{i}|z)\big{]}+KL(q_{\theta}(z|x_{i})\|p(z)) \tag{1}\] The first component of the expression represents the reconstruction loss, corresponding to the expected negative log-likelihood of each data point. The expectation is taken with respect to the encoder's distribution over the representations, and this component gives the decoder an incentive to learn to reconstruct the data. The second term is the KL divergence, which acts as a regularizer and measures the loss of information when we use \(q_{\theta}(z|x_{i})\) to represent \(p(z)\), the prior distribution defined over the latent space, i.e., a Gaussian distribution.

### _FL-VAE_

FL-VAE uses the same VAE architecture proposed in [12] and trains the generative (path) model using the FL framework developed in [16]. The goal of FL-VAE is to capture the conditional distribution \(p(x|u)\) of all participating cities, such that it encodes the local latent spaces of all cities into a single latent space and forms a generic global model for generating channel parameters. VAEs can easily be trained in an FL framework since their encoder and decoder components comprise NNs.

Figure 1: System Model

Let \(\mathcal{V}\coloneqq(\theta^{e},\theta^{d})\) be the VAE parameters, and \(\theta^{e}\) and \(\theta^{d}\) be the weights of the encoder and decoder, respectively. A centralized server initiates the training by communicating the initial weights of the VAE, \(\mathcal{V}^{t}\), to all agents in the participating city stations. Each agent in a city initializes its own VAE model with these weights and uses local training data and a pre-trained link model to obtain a latent representation of its own data.
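Equation (1) can be evaluated numerically once the encoder is Gaussian and the prior is standard normal, since the KL term then has a closed form. The sketch below assumes, in addition, a unit-variance Gaussian decoder so the reconstruction term reduces to a squared error (up to a constant); all tensor values are toy numbers.

```python
import numpy as np

def vae_loss(x, x_recon, mu, log_var):
    """Monte-Carlo estimate of Eq. (1) for one sample z ~ q(z|x):
    reconstruction term (unit-variance Gaussian decoder, up to a
    constant) plus the closed-form KL(q(z|x) || N(0, I))."""
    recon = 0.5 * np.sum((x - x_recon) ** 2)
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    return recon + kl

x = np.array([0.2, -0.1, 0.4])
x_recon = np.array([0.1, 0.0, 0.5])
mu = np.zeros(2)       # encoder mean
log_var = np.zeros(2)  # encoder log-variance
print(vae_loss(x, x_recon, mu, log_var))
```

With `mu = 0` and `log_var = 0` the encoder matches the prior, so the KL term vanishes and only the reconstruction error remains.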
The local update of each city \(k\) is given by: \[\mathcal{V}^{t+1}_{k}\longleftarrow\mathcal{V}^{t}_{k}-\eta\nabla\mathcal{L}(\mathcal{V}^{t}_{k}) \tag{2}\] where \(\eta\) is the learning rate. Each city agent uses equation (2) to perform a few local training epochs on its local data and sends the update \(\mathcal{V}^{t+1}_{k}\) to the central server. The server finally amalgamates the received updates with a weighted-average approach given by: \[\mathcal{V}^{t+1}=\sum_{k=1}^{K}\frac{n_{k}}{n}\mathcal{V}^{t+1}_{k} \tag{3}\] Here, \(n_{k}\) is the number of training examples at agent \(k\) and \(n\) is the total number of training examples across all cities. The server continues training until it obtains a global latent representation sufficient to represent all training data.

## IV Federated CGAN

In this section, we describe our FL-GAN approach to channel modeling. We first describe the Generative Adversarial Network (GAN) (Section IV-A) and then the FL-GAN (Section IV-B) used to model the channel parameters to form the generalised or universal model.

### _Generative Adversarial Network (GAN)_

The GAN is a popular concept first proposed in [17]. Its main purpose is to generate synthetic data that closely resembles real data. GANs use an unsupervised learning approach to detect patterns in the input data and generate new samples with the same distribution as the original data. A GAN consists of two NNs: the generator (G) and the discriminator (D), which compete in a "min-max two-player game." The G generates synthetic (fake) data from the learned latent vector, while the D discriminates the synthetic data from the real data. These models are trained until the G replicates the original data so well that it becomes difficult for the D to distinguish between the fake and the real data. To generate samples conditioned on a given target, the CGAN was introduced in [18].
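The adversarial game just described can be made concrete by evaluating the two players' loss terms from the discriminator's scores. A toy numeric sketch using the standard GAN cross-entropy bookkeeping (all score values are invented):

```python
import numpy as np

def gan_losses(d_real, d_fake):
    """Split the adversarial objective into the two players' losses,
    given discriminator outputs in (0, 1) on real and generated data."""
    # D maximizes E[log D(real)] + E[log(1 - D(fake))]; as a loss, negate.
    d_loss = -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))
    # G minimizes E[log(1 - D(fake))]: fooling D drives this term down.
    g_loss = np.mean(np.log(1.0 - d_fake))
    return d_loss, g_loss

# Toy discriminator scores: confident on real data, unsure on fakes.
d_real = np.array([0.9, 0.8, 0.95])
d_fake = np.array([0.4, 0.5, 0.3])
d_loss, g_loss = gan_losses(d_real, d_fake)
print(d_loss, g_loss)
```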
A CGAN learns the mapping from an observed sample \(x\) and a random noise vector \(z\) to an output sample \(y\), represented as \(G:x,z\to y\), where \(G\) is the generator function. Both networks in the CGAN aim to solve a "min-max loss" like the GAN, given by [18]: \[\mathcal{L}_{cGAN}(\mathcal{G},\mathcal{D})=\mathbb{E}_{x,y}\big{[}\log(\mathcal{D}(x,y))\big{]}+\mathbb{E}_{x,z}\big{[}\log(1-\mathcal{D}(x,\mathcal{G}(x,z)))\big{]} \tag{4}\] G and D compete according to equation (4), where D tries to maximize the probability of assigning correct labels, and G tries to minimize that probability. In the next section, we describe the distributed approach using FL to train the CGAN.

### _FL-GAN_

We use the FL technique to train the CGAN in a distributed manner. The training process is initiated by a central server, which communicates the initial parameters of the generator and discriminator, i.e., \(\theta^{G}\) and \(\theta^{D}\), to the agents in the cities. Each city agent initializes its own CGAN instance with the received parameters and trains it using local data and the associated link-state models. The updated parameters are then reported back to the server, which aggregates the updates from all cities as follows: \[\theta^{G}=\sum_{k=1}^{K}\frac{n_{k}}{n}\theta^{G}_{k}\quad;\quad\theta^{D}=\sum_{k=1}^{K}\frac{n_{k}}{n}\theta^{D}_{k} \tag{5}\] \(\theta^{G}\) and \(\theta^{D}\) in equation (5) are the aggregate parameter estimates of G and D, respectively. The server repeats this process until it develops a global CGAN that can generate synthetic samples from a distribution that captures the local data distributions. After training, each local city unit can generate the path parameters with \(\theta^{G}\).

## V Simulation Results

In this section, we describe the experiments performed to assess the efficiency and effectiveness of the proposed FL approach.

### _Dataset and settings_

In this work, we use ray-tracer data provided by [7].
The dataset consists of channel parameters from five different cities, each with different landscapes and structures. However, for this work, we use the channel parameters of three cities (Beijing, London, and Boston). In the ray-tracer simulation, the transmitting UAVs are positioned at different horizontal locations in each environment, with four possible heights (30, 60, 90 and 120 m), to create the whole city dataset. A total of 36k links were created for Beijing, 25.8k for London, and 23k for Boston. All simulations were performed at a frequency of 28 GHz. For our learning models, we used two generative NNs and trained them in a distributed manner using FL to obtain the FL-VAE and FL-GAN models. The main goal is to develop a distributed model using the FL framework that can be used universally for estimating channel parameters. In this context, we compare the generative models trained in a distributed manner and analyse which model is better at capturing the channel characteristics of different latent spaces. We compared the results of these distributed models with the basic stand-alone models trained for each city using different statistical parameters, i.e., the KL divergence and the Wasserstein distance. The architectures and hyperparameters used to train these models are shown in Table I and Table II, respectively. As mentioned earlier, in all cases our generative models consist of two cascaded models, the first of which is the link predictor and the second the path generator. We first train the link predictor for each city separately and then use these pre-trained link models for simulation.

### _Results_

In this work, we propose a promising solution for extending the channel model to large-scale application scenarios by using a cooperative modeling approach with multiple distributed channel datasets. We first describe the results obtained with both the centralized and distributed approaches.
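The distributed training used for the FL models follows the usual federated round structure: each city trains locally for a few epochs, then the server applies the weighted average of Eqs. (3) and (5). A minimal, self-contained sketch with a toy quadratic objective per city (the per-city targets are invented; the link counts are the dataset sizes quoted above):

```python
import numpy as np

# Toy federated loop: K cities, each local "dataset" summarized by a
# target vector; local training is gradient descent on a quadratic loss,
# and the server applies the FedAvg weighted average.
targets = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
counts = [36_000, 25_800, 23_000]  # links per city (Beijing, London, Boston)
n_total = sum(counts)

theta = np.zeros(2)  # global model parameters
for fl_round in range(100):          # communication rounds
    local_models = []
    for t in targets:
        th = theta.copy()
        for _ in range(5):           # local epochs per round
            th -= 0.1 * (th - t)     # gradient of 0.5 * ||th - t||^2
        local_models.append(th)
    # Server-side weighted average, as in Eqs. (3) and (5).
    theta = sum((n / n_total) * th for th, n in zip(local_models, counts))

print(theta)
```

With this toy objective, the loop converges to the sample-weighted mean of the city targets, which is exactly the fixed point of the FedAvg update.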
To ensure a fair comparison, we train all models with the same number of epochs and hyperparameters. In particular, we train the stand-alone models for 500 epochs, while for the FL-VAE and FL-GAN models we perform 100 rounds of federated training, where each city trains its respective model for 5 epochs on its local data within each FL round.

#### V-B1 Stand-alone Models

Our goal is to measure the extent to which the data generated by the generative models (VAE and GAN networks) are comparable to the test data. To this end, we compare the CDFs of the path losses of the generated and test data. Both trained generative models are able to capture the dual-slope nature of the CDF, which is a crucial component for the effectiveness of our proposed framework. However, due to space constraints, we only show in Table III the distance between the distributions of the standalone models (VAE and GAN) and the distributed models, i.e., FL-VAE and FL-GAN.

#### V-B2 FL-VAE

To evaluate the performance of our proposed decentralized model, we created CDF plots of the path losses of both the test data and the data generated by the FL-VAE model for each city. This allowed us to evaluate the generalizability of our federated global model, particularly in terms of its ability to accurately capture the channel characteristics of all participating cities. The results in Figures 1(a), 1(b), and 1(c) show that our federated model performs better than the individual models of each city. In addition, the FL-VAE approach helps address potential privacy and security issues related to data sharing between different cities. These measures ensure that individual city data sets are not shared outside of the city, thus maintaining privacy and security.

#### V-B3 FL-GAN

Now we use the CGAN instead of the VAE to generate the channel parameters and compare its performance with the results we obtained with the FL-VAE and standalone GAN models.
Our results show that the generative network learns the distribution of the channel modelling data very well and generates samples that closely follow the distribution of the training dataset. It is also clear from Figures 2(a), 2(b), and 2(c) that FL-GAN produces better results for the path loss component of the channel parameters compared to FL-VAE. The results show that the channel parameters reconstructed using the FL-GAN approach are closest to the original test data and outperform the VAE-based methods. This can be attributed to the fact that it is difficult for VAEs to encode heterogeneous datasets from different cities into a common latent space, while GANs are better at learning diverse data. The FL-GAN approach is therefore better suited to deal with the challenges of heterogeneous data and to produce synthetic data that accurately represents the actual data distribution.

### _Performance Metrics and Evaluation Results_

This section presents the evaluation metrics used to assess the performance of the proposed distributed techniques, FL-VAE and FL-GAN, in generating synthetic data compared to the standalone models trained separately for each city. Table III summarizes the KL divergence and Wasserstein distance results obtained by comparing the test data distribution with the synthetic data distributions generated by the VAE, GAN, FL-VAE, and FL-GAN networks. These metrics measure the distance between the test data distribution and the synthetic data generated by each model, and provide information about the accuracy and quality of the generated data. The evaluation metrics in Table III show that the distribution of the synthetic data generated by the FL-GAN network is much closer to the true distribution than those of the other methods, i.e., the standalone networks and FL-VAE. This highlights the superiority of the proposed FL-GAN approach in accurately modeling the data and generating synthetic data that is very similar to the real data.
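Both metrics in Table III can be computed directly on one-dimensional path-loss samples. A numpy-only sketch (toy Gaussian samples stand in for the real and generated path losses; the histogram-based KL estimator and equal sample sizes are simplifying assumptions):

```python
import numpy as np

def kl_divergence(real, synthetic, bins=50):
    """Histogram-based estimate of KL(P_real || P_synth) on a common support."""
    lo = min(real.min(), synthetic.min())
    hi = max(real.max(), synthetic.max())
    p, _ = np.histogram(real, bins=bins, range=(lo, hi))
    q, _ = np.histogram(synthetic, bins=bins, range=(lo, hi))
    eps = 1e-12  # avoid log/division issues in empty bins
    p = p / p.sum() + eps
    q = q / q.sum() + eps
    return float(np.sum(p * np.log(p / q)))

def wasserstein_1d(real, synthetic):
    """1-D Wasserstein-1 distance for equally sized samples:
    mean absolute difference of the sorted samples."""
    return float(np.mean(np.abs(np.sort(real) - np.sort(synthetic))))

rng = np.random.default_rng(0)
real = rng.normal(140.0, 20.0, 5000)       # toy path-loss samples (dB)
synthetic = rng.normal(142.0, 21.0, 5000)  # stand-in generator output
print(kl_divergence(real, synthetic), wasserstein_1d(real, synthetic))
```

Both quantities are zero for identical samples and grow as the synthetic distribution drifts away from the real one, which is exactly how Table III ranks the models.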
As shown in Table III, the KL divergence between the test data distribution and the synthetic data distribution of the standalone GAN model is much higher than that of the other alternatives. FL-GAN achieves the lowest KL divergence among the alternatives, which reflects the fact that GANs generally require more training time than VAEs but can generate better samples. We also evaluate our method using the Wasserstein distance, which takes the underlying metric space into account. Table III shows that FL-GAN significantly outperforms all other methods and achieves satisfactory performance in developing a global model for channel estimation parameters.

## VI Conclusion

NTNs are anticipated to play a crucial role in future wireless networks due to their cost efficiency and wide coverage area. In this paper, we present a comprehensive study that employs a generative framework based on NNs to model wireless channels in a distributed environment. In order to have a common model for different cities, we train distributed generative models and combine them into a unified and adaptable model. Specifically, we propose a channel model for air-to-ground communication of UAVs in mmWave frequency bands. Our distributed training method does not require any special knowledge or technical expertise, as it learns directly from massive raw channel data to develop a generic channel model.
The use of generative NNs, especially GANs and VAEs, is a suitable method for statistical channel modeling in complex scenarios. Although both models are capable of capturing data dependencies, our results show that the proposed FL-GAN approach outperforms the FL-VAE and centralized baseline methods in terms of learning the path loss parameters accurately. We validate our results with various statistical parameters, and the resulting model shows effective learning and interesting non-obvious predictions.

\begin{table}
\begin{tabular}{c c c c c}
\hline
**Model** & **Number of Inputs** & **Hidden Units** & **Number of Outputs** & **Number of Parameters** \\
\hline
Link Model & \(5\) & \([25,10]\) & \(3\) & \(1,653\) \\
VAE (Enc) & \(125\) & \([200,80]\) & \(40\) & \(44,520\) \\
VAE (Dec) & \(25\) & \([80,200]\) & \(240\) & \(40,720\) \\
GAN (Disc) & \(125\) & \([1120,560,280]\) & \(40\) & \(1,055,761\) \\
GAN (Gen) & \(25\) & \([280,560,1120]\) & \(240\) & \(1,094,360\) \\
\hline
\end{tabular}
\end{table}
TABLE I: Model summary of the link model, path model (VAE) and CGAN

\begin{table}
\begin{tabular}{c c c c}
\hline
**Item** & **Link Model** & **Generative Model (VAE)** & **Generative Model (GAN)** \\
\hline
Communication Rounds & \(N/A\) & \(100\) & \(100\) \\
Epochs & \(30\) & \(5\) & \(5\) \\
Batch Size & \(100\) & \(100\) & \(100\) \\
Learning Rate & \(10^{-3}\) & \(10^{-4}\) & \(10^{-4}\) \\
Optimizer & Adam & Adam & Adam \\
\hline
\end{tabular}
\end{table}
TABLE II: Hyperparameter settings for link model, Federated VAE and CGAN models
2310.16401
Graph Neural Networks with a Distribution of Parametrized Graphs
Traditionally, graph neural networks have been trained using a single observed graph. However, the observed graph represents only one possible realization. In many applications, the graph may encounter uncertainties, such as having erroneous or missing edges, as well as edge weights that provide little informative value. To address these challenges and capture additional information previously absent in the observed graph, we introduce latent variables to parameterize and generate multiple graphs. We obtain the maximum likelihood estimate of the network parameters in an Expectation-Maximization (EM) framework based on the multiple graphs. Specifically, we iteratively determine the distribution of the graphs using a Markov Chain Monte Carlo (MCMC) method, incorporating the principles of PAC-Bayesian theory. Numerical experiments demonstrate improvements in performance against baseline models on node classification for heterogeneous graphs and graph regression on chemistry datasets.
See Hian Lee, Feng Ji, Kelin Xia, Wee Peng Tay
2023-10-25T06:38:24Z
http://arxiv.org/abs/2310.16401v3
# Graph Neural Networks with a Distribution of Parametrized Graphs

###### Abstract

Traditionally, graph neural networks have been trained using a single observed graph. However, the observed graph represents only one possible realization. In many applications, the graph may encounter uncertainties, such as having erroneous or missing edges, as well as edge weights that provide little informative value. To address these challenges and capture additional information previously absent in the observed graph, we introduce latent variables to parameterize and generate multiple graphs. We obtain the maximum likelihood estimate of the network parameters in an Expectation-Maximization (EM) framework based on the multiple graphs. Specifically, we iteratively determine the distribution of the graphs using a Markov Chain Monte Carlo (MCMC) method, incorporating the principles of PAC-Bayesian theory. Numerical experiments demonstrate improvements in performance against baseline models on node classification for heterogeneous graphs and graph regression on chemistry datasets.

## 1 Introduction

Graph Neural Networks (GNNs) have facilitated graph representational learning by building upon Graph Signal Processing (GSP) principles and expanding their application in the domains of machine learning. Moreover, GNNs have demonstrated their effectiveness across a wide range of tasks in domains such as chemistry (Gilmer et al., 2017), recommendation systems (Ying et al., 2018; Chen et al., 2022), financial systems (Sawhney et al., 2021) and e-commerce settings (Liu et al., 2022), among others. However, GSP and GNNs conventionally rely on a fixed graph shift operator, such as the adjacency or Laplacian matrix, to analyze and learn from graph data, assuming that the given graph is accurate and noise-free. This approach has inherent limitations, considering that graph data is often uncertain.
The uncertainty is due to the existence of multiple potential variations in graph construction, as no universally optimal method exists. Furthermore, structural noise, which includes missing or spurious edges, as well as the absence of informative edge weights, can also contribute to the uncertainty in graph data (Zhang et al., 2019; Dong and Kluger, 2023). It is important to handle this uncertainty, as the graph directly influences the results of both GSP and GNNs (Li et al., 2021). Several GNN works have recognized that the graph provided in benchmark datasets is suboptimal. For example, in Topping et al. (2022), a method was introduced to enhance the provided graph by rewiring it at graph bottlenecks. Similarly, in Li et al. (2021) and Ye et al. (2020), approaches were developed to reweight edges so as to reduce information flow at cluster boundaries. Another perspective involves treating the given or observed graph as a particular realization of a graph model, as discussed in Zhang et al. (2019). In their work, a Bayesian framework was adopted to learn a more robust model that can withstand perturbations in graph topology. These collective efforts underscore the common observation that the observed graph is often imperfect, and determining the optimal graph is a non-trivial task, as it depends on both the physical connections and the edge weights, which regulate the rates of information transmission (Ji et al., 2023). Our work aligns with the viewpoint presented in Zhang et al. (2019). We conceptualize the observed graph as an individual instance originating from a distribution of graphs, which is influenced by one or more latent parameters. Nevertheless, in contrast to Zhang et al. (2019), which proposed a Bayesian framework, we propose an EM framework for graph learning and name our model EMGNN. Even though both are probabilistic frameworks, the focus is distinctly different. In the case of the Bayesian framework of Zhang et al.
(2019), the focus is on estimating the posterior distribution of model parameters given the data. As such, model parameters are deemed as random variables trained by a series of characteristically similar graphs. Meanwhile, in our EM framework, we seek to maximize the log-likelihood of the observed data in conjunction with the latent variables. Additionally, we permit the generated graphs to demonstrate more pronounced variations. Our main contributions are as follows:

* We introduce a general framework for modeling the distribution of graphs to handle the uncertainty in graph data. The learned distribution then serves as a valuable tool for comprehending our model and offering insights into its behavior.
* We formulate the graph learning problem as a maximum likelihood estimation (MLE) so that tools from statistical learning can be applied. The new objective subsumes the classical objective of minimizing empirical loss if the graph is deterministic.
* We conduct evaluations of our model using real datasets, carry out experiments in two distinct applications, and observe promising performance compared to the respective baseline methods.
* We inspect the learned graph distribution, confirming that it effectively captures the intricacies of heterogeneous graph datasets, thus validating the utility of our model and framework.

## 2 Preliminaries

### Graph Neural Networks

Graph neural networks (Chen et al., 2020; Kang et al., 2023; Brody et al., 2022; Lee et al., 2021; Zhao et al., 2023), which are neural networks designed to operate on graphs, typically employ the message-passing framework. Within this framework, the features of each node are integrated with those of its neighboring nodes to update its feature representation. More specifically, suppose that we have a graph \(G=(V,E)\), where \(V\) is the set of vertices and \(E\) is the set of edges. Moreover, each node \(v\in V\) is associated with (initial) node features represented by \(x_{v}^{0}\).
The node features can then be updated in the \(k\)-th layer as follows: \[x_{v}^{k}=\sigma(W^{k}\,\text{AGGR}(\{x_{u}^{k-1}\mid u\in\mathcal{N}(v)\})) \tag{1}\] where \(\sigma\) is an activation function, \(W^{k}\) are the learnable weights in the \(k\)-th layer and \(\mathcal{N}(v)\) is the set of neighbors of \(v\). AGGR is a message aggregation function. The choice of AGGR defines various variants of GNNs (Xu et al., 2019). For example, the mean operator yields Graph Convolutional Networks (GCN) (Kipf and Welling, 2017), while using the attention mechanism results in Graph Attention Networks (GAT) (Velickovic et al., 2018). In a GNN with \(K\) layers, the last layer outputs features \(\{x_{1}^{K},\ldots,x_{|V|}^{K}\}\). For node classification, these features can be used directly. Meanwhile, for a graph-level task, a READOUT graph-level pooling function is needed to obtain the graph-level representation (Xu et al., 2019).

### Signal Processing over a Distribution of Graphs

GNN is closely related to the theory of GSP (Shuman et al., 2013). Briefly, given an undirected graph \(G\), we consider a fixed graph shift operator \(S\) such as its adjacency or Laplacian matrix. A graph signal is a vector \(\mathbf{x}=(x_{v})_{v\in V}\) that associates a number \(x_{v}\) to each node \(v\in V\). Intuitively, applying the linear transformation \(S\) to \(\mathbf{x}\) is considered as a "shift" of \(\mathbf{x}\). If \(S\) is the normalized adjacency matrix, then it amounts to the AGGR step of (1) for GCN. More generally, if \(P(\cdot)\) is a single variable polynomial, then plugging in \(S\) results in the matrix \(P(S)\), which is called a _convolution filter_ in GSP. This notion of convolution appears in Defferrard et al. (2016), and has become widely used since then. On the signal processing side, Ji et al. (2023) has developed a theory that generalizes traditional GSP.
The authors propose a signal processing framework assuming a family of graphs are considered simultaneously, to tackle uncertainties in graph constructions. Formally, it considers a distribution \(\mu\) of graph shift operators \(S_{\lambda}\) parametrized by \(\lambda\) in a sample space \(\Lambda\). The work develops corresponding signal processing notions such as Fourier transform, filtering, and sampling. In particular, a convolution takes the form \(\mathbb{E}_{\lambda\sim\mu}[P_{\lambda}(S_{\lambda})]\), where \(P_{\lambda}(\cdot)\) is a polynomial and \(P_{\lambda}(S_{\lambda})\) is an ordinary convolution with shift \(S_{\lambda}\). Our work is based on a similar idea but replaces \(P_{\lambda}(S_{\lambda})\) with a more general filter. Furthermore, we introduce an EM framework that is not present in Ji et al. (2023) to learn the distribution of graphs.

## 3 Proposed Method

### Problem Formulation: Maximum Likelihood Estimation

We consider a distribution \(\mu\) on a parameter (sample) space \(\Lambda\subset\mathbb{R}^{r}\) of graphs \(\{G_{\lambda},\lambda\in\Lambda\}\), with a fixed set of nodes \(V\). For each \(\lambda\), there is a corresponding shift operator \(S_{\lambda}\). We usually assume that \(\mu\) has a density function \(p(\cdot)\) w.r.t. a base measure on \(\Lambda\). For example, if \(\Lambda\) is a finite set in \(\mathbb{R}^{r}\), we can use the discrete counting measure as the base measure. On the other hand, if \(\Lambda\) is a compact interval in \(\mathbb{R}\), then we can choose the Lebesgue measure as the base measure. Assume that each node \(v\in V\) is associated with features \(x_{v}\). They are collectively denoted by \(\mathbf{x}\). Suppose a GNN model \(\Psi\) outputs the learned embeddings \(\mathbf{z}=\Psi(\lambda,\mathbf{x};\boldsymbol{\theta})\) given the node features \(\mathbf{x}\), the graph parameter \(\lambda\), and the GNN model parameter vector \(\boldsymbol{\theta}\).
These in turn are used to determine a vector of labels \(\hat{\mathbf{y}}\). For a task-specific loss \(\ell(\cdot,\cdot)\) that compares predicted \(\hat{\mathbf{y}}\) and true label (vector) \(\mathbf{y}\), we may compute \(L_{\mathbf{X}}(\lambda,\boldsymbol{\theta})=\ell(\hat{\mathbf{y}},\mathbf{y})\). We use \(\mathbf{X}\) to denote the full information \(\{\mathbf{x},\mathbf{y}\}\). We interpret \(\mathbf{X}\) as a sample from a random variable, denoted by \(\mathfrak{X}\), of collective information of features and labels. An example of \(\Psi\) is the model described by (1). The parameters \(\boldsymbol{\theta}=\{\theta_{1}^{k},\theta_{2}^{k}\mid 1\leq k\leq K\}\) and \(\lambda\) determine \(W^{k}\) in the form of a linear combination \(W^{k}=\theta_{1}^{k}+\lambda\theta_{2}^{k}\). Moreover, AGGR is determined by the shift \(S_{\lambda}\) associated with \(G_{\lambda}\). In general, as \(\lambda\) follows an unknown distribution \(\mu\), it is hard to find \(\boldsymbol{\theta}\) by minimizing \(\mathbb{E}_{\lambda\sim\mu}[L_{\mathbf{X}}(\lambda,\boldsymbol{\theta})]\) directly. On the other hand, the EM algorithm (Bishop, 2006) allows us to jointly estimate \(\mu\) and \(\boldsymbol{\theta}\) provided we can reformulate the objective as an MLE. To minimize the loss given \(\mathbf{X}\), the parameter \(\boldsymbol{\theta}\) is determined by \(\lambda\) and vice versa. Therefore, both of them can be viewed as random variables, and thus \(\Psi(\cdot,\mathbf{x};\cdot)\) becomes a random GNN model that depends on \(\lambda,\boldsymbol{\theta}\) and input \(\mathbf{x}\). We aim to _identify a realization of the random models that makes the observation \(\mathbf{X}\) likely_, i.e., there is less discrepancy between the estimator labels \(\hat{\mathbf{y}}\) and ground truth labels \(\mathbf{y}\) measured by the loss \(\ell(\hat{\mathbf{y}},\mathbf{y})\). 
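The example model \(\Psi\) above, a layer of the form (1) whose weights are the linear combination \(W^{k}=\theta_{1}^{k}+\lambda\theta_{2}^{k}\) and whose aggregation is determined by the shift \(S_{\lambda}\), can be sketched numerically. The NumPy code below is a minimal illustration rather than the authors' implementation; the symmetric normalization of the shift is an assumption of ours, one common choice.

```python
import numpy as np

def normalized_shift(A):
    """Symmetrically normalized shift operator S = D^{-1/2}(A + I)D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    return A_hat / np.sqrt(np.outer(d, d))    # divide entry (i,j) by sqrt(d_i d_j)

def psi_layer(S_lam, X, theta1, theta2, lam):
    """One layer of Psi: weights W = theta1 + lam * theta2, AGGR via S_lam."""
    W = theta1 + lam * theta2                 # lambda-dependent weights
    return np.tanh(S_lam @ X @ W)             # aggregate, transform, activate
```

Different draws of \(\lambda\) thus change both the graph shift and the effective weights of the same network.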
Motivated by the discussions above, we consider the likelihood function \(p(\lambda,\mathbf{X}\mid\boldsymbol{\theta})\) as a function of \(\boldsymbol{\theta}\) and formulate the following MLE as our objective: \[\boldsymbol{\theta}^{*}=\arg\max_{\boldsymbol{\theta}}p(\mathbf{X}\mid\boldsymbol{\theta})=\arg\max_{\boldsymbol{\theta}}\mathbb{E}_{\lambda\sim\mu}[p(\lambda,\mathbf{X}\mid\boldsymbol{\theta})]. \tag{2}\] Before discussing the main algorithm in subsequent subsections, we preview the roles of \(\mu\) and \(L_{\mathbf{X}}(\cdot,\cdot)\) in the algorithm. We shall see that the EM algorithm outputs a distribution \(\hat{\mu}\) of \(\lambda\), serving as an estimate of \(\mu\), by leveraging the PAC-Bayesian framework. In this framework, the density of \(\lambda\) is proportional to the Gibbs posterior, influenced by a risk function. Consequently, \(\hat{\mu}\) assigns higher probability density to \(\lambda\) when the loss \(L_{\mathbf{X}}(\lambda,\boldsymbol{\theta}^{*})\) is lower. Therefore, we still need to minimize the given loss as a main component of the algorithm. For a simple example, assume that \(\mu\) is the delta distribution \(\delta_{\lambda_{0}}\) supported on \(\lambda_{0}\) so that the graph \(G_{\lambda_{0}}\) is deterministic. If we consider the Gibbs posterior, then \(p(\mathbf{X}\mid\boldsymbol{\theta})\propto\exp(-\eta\ell(\hat{\mathbf{y}},\mathbf{y}))\), where \(\hat{\mathbf{y}}\) depends on both \(\mathbf{X}=\{\mathbf{x},\mathbf{y}\}\) and \(\boldsymbol{\theta}\). Thus, maximizing \(p(\mathbf{X}\mid\boldsymbol{\theta})\) is equivalent to the classical objective of minimizing \(\ell(\hat{\mathbf{y}},\mathbf{y})\).

### Expectation-Maximization for GNN

Optimizing (2) directly can be challenging, and we utilize the EM algorithm, which employs an iterative approach alternating between the E-step and the M-step. Adapted to our setting, the process unfolds as follows:

1. E-step: Given parameters \(\boldsymbol{\theta}^{(t)}\) at the \(t\)-th iteration, we compute the expectation as the Q-function \[Q(\boldsymbol{\theta}\mid\boldsymbol{\theta}^{(t)})=\mathbb{E}_{\lambda\sim p(\cdot\mid\mathbf{X},\boldsymbol{\theta}^{(t)})}[\log p(\lambda,\mathbf{X}\mid\boldsymbol{\theta})].\]
2. M-step: \(\boldsymbol{\theta}^{(t+1)}\) is updated as \(\arg\max Q(\boldsymbol{\theta}\mid\boldsymbol{\theta}^{(t)})\).

_For the E-step_ in the \(t\)-th iteration, in the same spirit as the PAC-Bayesian framework (Guedj, 2019), we apply the Gibbs posterior and assume that \[p(\lambda,\mathbf{X}\mid\boldsymbol{\theta})\propto\exp(-\eta^{(t)}L_{\mathbf{X}}(\lambda,\boldsymbol{\theta}))\pi_{0}(\lambda,\mathbf{X}), \tag{3}\] for a tunable hyperparameter \(\eta^{(t)}\), while \(\pi_{0}(\cdot)\) is a prior density of the joint \((\lambda,\mathbf{X})\) independent of \(\boldsymbol{\theta}\), representing our initial knowledge regarding \(\lambda\) and \(\mathbf{X}\). In this expression, \(L_{\mathbf{X}}(\lambda,\boldsymbol{\theta})\) implicitly depends on the observations \(\mathbf{X}\). The normalization constant is given by \[\begin{split} C(\boldsymbol{\theta})&=\int_{(\lambda,\mathbf{X}^{\prime})\in\Lambda\times\mathfrak{X}}\exp(-\eta^{(t)}L_{\mathbf{X}^{\prime}}(\lambda,\boldsymbol{\theta}))\pi_{0}(\lambda,\mathbf{X}^{\prime})\,\mathrm{d}(\lambda,\mathbf{X}^{\prime})\\ &=\mathbb{E}_{(\lambda,\mathbf{X}^{\prime})\sim\pi_{0}}\Big{[}\exp(-\eta^{(t)}L_{\mathbf{X}^{\prime}}(\lambda,\boldsymbol{\theta}))\Big{]}.\end{split} \tag{4}\] As a prior belief, we treat the observed \(\mathbf{X}\) as a typical sample such that the above average is (approximately) the same as the average over graphs by fixing \(\mathbf{X}\).
We assume that for each fixed \(\mathbf{X}\), there exists some prior distribution with density \(p_{0,\mathbf{X}}(\cdot)\) on \(\Lambda\) such that: \[\begin{split}&\mathbb{E}_{(\lambda,\mathbf{X}^{\prime})\sim\pi_{0}} \Big{[}\exp(-\eta^{(t)}L_{\mathbf{X}^{\prime}}(\lambda,\boldsymbol{\theta})) \Big{]}\\ &\approx\int_{\lambda\in\Lambda}\exp(-\eta^{(t)}L_{\mathbf{X}}( \lambda,\boldsymbol{\theta}))p_{0,\mathbf{X}}(\lambda)\,\mathrm{d}(\lambda)\\ &=\mathbb{E}_{\lambda\sim p_{0,\mathbf{X}}}\Big{[}\exp(-\eta^{(t)} L_{\mathbf{X}}(\lambda,\boldsymbol{\theta}))\,\Big{|}\,\mathbf{X}\Big{]}.\end{split}\] For simplification, we denote \(p_{0,\mathbf{X}}(\cdot)\) as \(p_{0}(\cdot)\) and \(\mathbb{E}_{\lambda\sim p_{0}}\big{[}\exp(-\eta^{(t)}L_{\mathbf{X}}(\lambda, \boldsymbol{\theta}))\,\big{|}\,\mathbf{X}\Big{]}\) as \(\mathbb{E}_{\lambda\sim p_{0}}\big{[}\exp(-\eta^{(t)}L_{\mathbf{X}}(\lambda, \boldsymbol{\theta}))\big{]}\). We write \[C(\boldsymbol{\theta})=\mathbb{E}_{\lambda\sim p_{0}}\Big{[}\exp(-\eta^{(t)}L_{ \mathbf{X}}(\lambda,\boldsymbol{\theta}))\Big{]}. \tag{5}\] On the other hand, given \(\mathbf{\theta}^{(t)}\), from (3), we have \[p(\lambda\mid\mathbf{X},\mathbf{\theta}^{(t)}) =\frac{p(\lambda,\mathbf{X}\mid\mathbf{\theta}^{(t)})}{p(\mathbf{X}\mid \mathbf{\theta}^{(t)})}\] \[\propto\exp(-\eta^{(t)}L_{\mathbf{X}}(\lambda,\mathbf{\theta}^{(t)})) \frac{\pi_{0}(\lambda,\mathbf{X})}{p(\mathbf{X}\mid\mathbf{\theta}^{(t)})}.\] We assume that there is a prior \(p^{\prime}_{0,t}(\cdot)\) such that \(p^{\prime}_{0,t}(\cdot)\propto\frac{\pi_{0}(\lambda,\mathbf{X})}{p(\mathbf{X }\mid\mathbf{\theta}^{(t)})}\), and which does not depend on \(\mathbf{\theta}\) but is a function of \(t\). By fixing \(\mathbf{X}\), the posterior is written as \[p(\lambda\mid\mathbf{X},\mathbf{\theta}^{(t)})\propto\exp(-\eta^{(t)}L_{\mathbf{ X}}(\lambda,\mathbf{\theta}^{(t)}))p^{\prime}_{0,t}(\lambda). \tag{6}\] In our framework, we do not need to estimate the normalization constant for (6). 
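On a discretized sample space \(\widehat{\Lambda}\), as used in the experiments later, the unnormalized posterior (6) reduces to a vector of weights that can be normalized directly. The following is a small sketch; the uniform prior in the test is our assumption:

```python
import numpy as np

def gibbs_posterior(losses, prior, eta):
    """Discrete posterior p_t(lam) ∝ exp(-eta * L_X(lam, theta)) * p'_0(lam), cf. (6).

    losses[i]: loss L_X at the i-th grid point; prior[i]: prior density p'_{0,t}.
    """
    logits = -eta * np.asarray(losses, float) + np.log(np.asarray(prior, float))
    logits -= logits.max()                    # stabilize the exponentials
    w = np.exp(logits)
    return w / w.sum()                        # normalized posterior weights
```

With a uniform prior, the weight of each grid point decays exponentially in its loss, so low-loss graph parameters dominate the E-step.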
**Remark 1**.: _From the above discussion, we see that priors \(p_{0}(\cdot)\) and \(p^{\prime}_{0,t}(\cdot)\) play important roles. We discuss their choices in Section 4 below. It is desirable to have a weaker prior assumption, under which the optimizer can still be readily estimated._ _For the M-step,_ we analyze the Q-function in more detail. For convenience, we use \(p_{t}(\lambda)\) to denote \(p(\lambda\mid\mathbf{X},\mathbf{\theta}^{(t)})\). With (5) and (6), we can express the Q-function as: \[Q(\mathbf{\theta}\mid\mathbf{\theta}^{(t)})\] \[= \mathbb{E}_{\lambda\sim p_{t}}\bigg{[}\log\frac{\exp(-\eta^{(t)}L _{\mathbf{X}}(\lambda,\mathbf{\theta}))\pi_{0}(\lambda,\mathbf{X})}{C(\mathbf{\theta})}\bigg{]}\] \[= -\eta^{(t)}\mathbb{E}_{\lambda\sim p_{t}}[L_{\mathbf{X}}(\lambda,\mathbf{\theta})]+D-\log C(\mathbf{\theta}),\] where \(D\) is a constant independent of \(\mathbf{\theta}\). To estimate \(\log C(\mathbf{\theta})\), we make the following considerations. First of all, by Jensen's inequality, \(-\eta^{(t)}\mathbb{E}_{\lambda\sim p_{0}}[L_{\mathbf{X}}(\lambda,\mathbf{\theta}) ]\leq\log C(\mathbf{\theta})\). This means that if \(\log C(\mathbf{\theta})\) is small, then necessarily so is \(-\eta^{(t)}\mathbb{E}_{\lambda\sim p_{0}}[L_{\mathbf{X}}(\lambda,\mathbf{\theta})]\). On the other hand, Teh et al. (2006) proposes to use \(\mathbb{E}[\log Y]+\frac{\text{var}(Y)}{2\mathbb{E}(Y)^{2}}\) to approximate \(\log\mathbb{E}[Y]\) for a random variable \(Y\). This is derived from the second-order Taylor expansion of \(\log Y\) at \(\log\mathbb{E}[Y]\). 
In our case, we have \[\log C(\boldsymbol{\theta})\approx-\eta^{(t)}\mathbb{E}_{\lambda\sim p_{0}}[L_{\mathbf{X}}(\lambda,\boldsymbol{\theta})]+\frac{\text{var}\big{(}\exp(-\eta^{(t)}L_{\mathbf{X}}(\lambda,\boldsymbol{\theta}))\big{)}}{2\big{(}\mathbb{E}_{\lambda\sim p_{0}}\big{[}\exp(-\eta^{(t)}L_{\mathbf{X}}(\lambda,\boldsymbol{\theta}))\big{]}\big{)}^{2}}. \tag{7}\] If \(-\eta^{(t)}\mathbb{E}_{\lambda\sim p_{0}}[L_{\mathbf{X}}(\lambda,\boldsymbol{\theta})]\) is the dominant component, then we may use \(-\eta^{(t)}\mathbb{E}_{\lambda\sim p_{0}}[L_{\mathbf{X}}(\lambda,\boldsymbol{\theta})]\) as a proxy for \(\log C(\boldsymbol{\theta})\), which is more manageable. In Section 4.3.2, we shall numerically verify that this is indeed the case for our applications. Hence, \(Q(\boldsymbol{\theta}\mid\boldsymbol{\theta}^{(t)})\) is approximated by \[-\eta^{(t)}\Big{(}\mathbb{E}_{\lambda\sim p_{t}}[L_{\mathbf{X}}(\lambda,\boldsymbol{\theta})]-\mathbb{E}_{\lambda\sim p_{0}}[L_{\mathbf{X}}(\lambda,\boldsymbol{\theta})]\Big{)}+D.\] _In summary,_ if we disregard \(\eta^{(t)}\) and \(D\), which are independent of \(\boldsymbol{\theta}\), we can minimize the following function in the M-step: \[\begin{split}J(\boldsymbol{\theta})&=\mathbb{E}_{\lambda\sim p_{t}}[L_{\mathbf{X}}(\lambda,\boldsymbol{\theta})]-\mathbb{E}_{\lambda\sim p_{0}}[L_{\mathbf{X}}(\lambda,\boldsymbol{\theta})]\\ &=\int_{\lambda\in\Lambda}\big{(}p_{t}(\lambda)-p_{0}(\lambda)\big{)}L_{\mathbf{X}}(\lambda,\boldsymbol{\theta})\,\mathrm{d}\lambda.\end{split} \tag{8}\]

### The Proposed Algorithm: EMGNN

To minimize \(J(\boldsymbol{\theta})\) in (8), our strategy is to re-express it as an expectation. For this purpose, we introduce a proposal distribution. Let \(q(\cdot)\) be the density function of a probability distribution on the sample space \(\Lambda\) whose support includes that of \(p_{0}\).
Then we have: \[J(\boldsymbol{\theta})=\int_{\lambda\in\Lambda}q(\lambda)\frac{p_{t}(\lambda)-p_{0}(\lambda)}{q(\lambda)}L_{\mathbf{X}}(\lambda,\boldsymbol{\theta})\,\mathrm{d}\lambda=\mathbb{E}_{\lambda\sim q}\bigg{[}\frac{p_{t}(\lambda)-p_{0}(\lambda)}{q(\lambda)}L_{\mathbf{X}}(\lambda,\boldsymbol{\theta})\bigg{]}.\] We propose to minimize \(J(\boldsymbol{\theta})\) by first randomly drawing samples \(\Lambda_{T^{\prime}}=\{\lambda_{1},\ldots,\lambda_{T^{\prime}}\}\) according to the density \(q(\cdot)\). Following that, we successively apply gradient descent to \(\frac{p_{t}(\lambda_{t^{\prime}})-p_{0}(\lambda_{t^{\prime}})}{q(\lambda_{t^{\prime}})}L_{\mathbf{X}}(\lambda_{t^{\prime}},\boldsymbol{\theta})\) to update \(\boldsymbol{\theta}\). Finally, given (6), \(p_{t}(\lambda)\) can be approximated by an empirical distribution if we apply an MCMC method. The overall algorithm is summarized in Algorithm 1 and illustrated in Fig. 1.

**Remark 2**.: _In practice, the choices of the prior distributions \(p_{0}(\cdot),q(\cdot)\) and \(p^{\prime}_{0,t}(\cdot)\) are hyperparameters. Moreover, in our experiments, \(p^{\prime}_{0,t}(\cdot)\) is set to be the same for every \(t\). We also discretize the continuous sample space \(\Lambda\) for simplicity in analysis and computation._

**Remark 3**.: _If we algorithmically plug in the delta distribution supported on \(\lambda_{0}\) and \(p_{0}(\lambda_{0})=0\) for \(p^{\prime}_{0,t}(\cdot)\) and \(q(\cdot)\) respectively, then EMGNN reduces to the ordinary GNN model on the graph \(G_{\lambda_{0}}\)._

Figure 1: Illustration of EMGNN.

**Remark 4**.: _Note that for the coefficient \(\frac{p_{t}(\lambda)-p_{0}(\lambda)}{q(\lambda)}\), if \(p_{t}(\lambda)<p_{0}(\lambda)\), then the loss \(L_{\mathbf{X}}(\lambda,\boldsymbol{\theta})\) is to be made larger. Intuitively, in this case, a "bad" \(\lambda\) is chosen. For the choice of \(q(\cdot)\), in practice, we propose two options in Section 4: either the uniform distribution or \(q(\cdot)=p_{t}(\cdot)\).
Nonetheless, \(q(\cdot)\) can also be other appropriate density functions._ As we do not minimize \(J(\boldsymbol{\theta})\) directly, we justify the proposed approach under additional assumptions. We theoretically analyze the performance of the proposed (randomized) algorithm in lines 5-12 of Algorithm 1, denoted by \(\mathcal{A}\). With samples \(\Lambda_{T^{\prime}}\), the algorithm \(\mathcal{A}\) outputs \(\widehat{\boldsymbol{\theta}}=\mathcal{A}(\Lambda_{T^{\prime}})\). The following expression is considered in algorithm \(\mathcal{A}\): \[J_{\Lambda_{T^{\prime}}}(\widehat{\boldsymbol{\theta}})=\frac{1}{T^{\prime}}\sum_{\lambda_{t^{\prime}}\in\Lambda_{T^{\prime}}}\frac{p_{t}(\lambda_{t^{\prime}})-p_{0}(\lambda_{t^{\prime}})}{q(\lambda_{t^{\prime}})}L_{\mathbf{X}}(\lambda_{t^{\prime}},\widehat{\boldsymbol{\theta}}).\] We assume that, after translation and scaling of \(L_{\mathbf{X}}(\lambda,\cdot)\) by a positive constant if necessary, the expression \(\frac{p_{t}(\lambda)-p_{0}(\lambda)}{q(\lambda)}L_{\mathbf{X}}(\lambda,\boldsymbol{\theta})\) always belongs to \([0,1]\). The following notions are well-known. **Definition 1**.: _A differentiable function \(f\) is \(\alpha\)-Lipschitz if for all \(x\) in the domain of \(f\), we have \(\|\nabla f(x)\|\leq\alpha\). It is \(\beta\)-smooth if its gradient is \(\beta\)-Lipschitz._ Denote by \(1_{p_{t}(\cdot)\geq p_{0}(\cdot)}(\lambda)\) the indicator that is \(1\) if \(p_{t}(\lambda)\geq p_{0}(\lambda)\), and \(0\) otherwise. Let \(b_{1}=\mathbb{E}_{\lambda\sim q}1_{p_{t}(\cdot)\geq p_{0}(\cdot)}\). Intuitively, it computes the measure of \(\lambda\) for which \(p_{t}(\cdot)\) is larger. On the other hand, let \(b_{2}=\mathbb{E}_{\lambda\sim q}\frac{|p_{t}(\lambda)-p_{0}(\lambda)|}{q(\lambda)}\), and \(\gamma=\sup_{\lambda\in\Lambda}1/q(\lambda)\). **Theorem 1**.: _Assume for any \(\lambda\), the loss \(L_{\mathbf{X}}(\lambda,\cdot)\) is convex, \(\alpha\)-Lipschitz and \(\beta\)-smooth.
Let \(b_{1},b_{2},\gamma\) be defined as above. If for every \(t^{\prime}\leq T^{\prime}\), the non-increasing step-size in the algorithm \(\mathcal{A}\) satisfies \(a_{t^{\prime}}\leq\min\{2/(\beta\gamma),c/t^{\prime}\}\) for a constant \(c\), then there is a constant \(C\) independent of \(T^{\prime},\alpha\) such that_ \[\Big{|}\mathbb{E}_{\mathcal{A},\Lambda_{T^{\prime}}}\Big{[}J_{\Lambda_{T^{\prime}}}(\widehat{\boldsymbol{\theta}})-J(\widehat{\boldsymbol{\theta}})\Big{]}\Big{|}\leq\epsilon=C\bigg{(}\frac{b_{2}^{2}\alpha^{2}}{T^{\prime}}\bigg{)}^{\frac{1}{\beta\gamma(1-b_{1})+1}}.\] Proof.: The proof is given in the Appendix. **Remark 5**.: _From the result, we see that if \(b_{1}\) is close to \(1\), i.e., the set \(\{\lambda\mid p_{t}(\lambda)\geq p_{0}(\lambda)\}\) has a large measure, then the expected error decays at a rate close to \(T^{\prime-1}\)._

### A Brief Discussion on Testing

As our framework deals with a distribution of graphs, during testing we acquire our final learned representation as \(\mathbf{z}_{\mathrm{final}}=\mathbb{E}_{\lambda\sim p_{T}}\big{[}\Psi(\lambda,\mathbf{x};\boldsymbol{\theta}_{T^{\prime}}^{(T)})\big{]}\). The learned model parameters are a particular realization of the possible random models that align with the observed data \(\mathbf{X}\), and the multiple graphs influence the final embeddings based on their respective likelihoods. The embedding \(\mathbf{z}_{\mathrm{final}}\) is subjected to a softmax operation to obtain \(\hat{\mathbf{y}}\) for node classification tasks, while a READOUT function is applied for graph-level tasks.

## 4 Experiments

To validate our algorithm, we explored two tasks: node classification in heterogeneous graphs and graph regression using chemical datasets. Code and hyperparameter details are provided in the Appendix.
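Before turning to the experiments, Algorithm 1 can be illustrated on a toy problem. The sketch below is not the GNN setup: it substitutes a scalar surrogate loss \((\theta-\lambda)^{2}\) for \(L_{\mathbf{X}}(\lambda,\boldsymbol{\theta})\), takes a uniform prior \(p_{0}\) and proposal \(q=p_{t}\), forms the Gibbs posterior on a grid in the E-step, and applies gradient steps weighted by the coefficient \((p_{t}-p_{0})/q\) in the M-step:

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.arange(0.0, 1.0001, 0.05)           # discretized sample space
loss = lambda lam, th: (th - lam) ** 2        # surrogate for L_X(lam, theta)
grad = lambda lam, th: 2.0 * (th - lam)       # its gradient in theta

p0 = np.full(grid.size, 1.0 / grid.size)      # prior p_0: uniform on the grid
theta, eta, step = 0.9, 5.0, 0.02

for t in range(20):
    # E-step: Gibbs posterior p_t(lam) ∝ exp(-eta * loss), uniform p'_{0,t}
    w = np.exp(-eta * loss(grid, theta))
    p_t = w / w.sum()
    # M-step: draw lam ~ q = p_t, update theta with coefficient (p_t - p_0)/q
    for i in rng.choice(grid.size, size=10, p=p_t):
        coeff = (p_t[i] - p0[i]) / p_t[i]
        theta -= step * coeff * grad(grid[i], theta)
```

Sampling grid indices (rather than grid values) avoids floating-point lookups; in the real algorithm the inner update would be a stochastic gradient step on the GNN parameters.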
### Heterogeneous Graphs

Heterogeneous graphs are graph structures characterized by the presence of multiple node types and multiple edge types, imparting a greater degree of complexity compared to homogeneous graphs, which consist of a single node and edge type. In these datasets, we leveraged latent parameter(s) to generate multiple graph instances with varying edge weights for each edge type, representing different information transmission rates. In the case of a heterogeneous graph with \(\omega\) edge types, we introduce a vector of latent parameters \(\boldsymbol{\lambda}=\{\lambda_{1},\ldots,\lambda_{\omega}\}\), where each \(\lambda_{i}\geq 0\) and \(\sum_{i=1}^{\omega}\lambda_{i}=1\). Drawing a set of latent variables uniformly from the space that the parameters span, a graph adjacency matrix of \(G_{\boldsymbol{\lambda}}\) is generated as follows: \[A_{\boldsymbol{\lambda}}=\sum_{i=1}^{\omega}\lambda_{i}A_{i}, \tag{9}\] where \(A_{i}\) is the respective edge-type specific adjacency matrix. Note that when any \(\lambda_{i}=0\), the edges of the associated edge types are removed.

#### 4.1.1 Baselines and Datasets

The heterogeneous graph datasets used are the same as those employed in Yun et al. (2019) and Lee et al. (2022). These datasets include two citation networks named DBLP1 and ACM2, as well as a movie dataset named IMDB3. Within the DBLP dataset, there are three distinct node types (Paper (P), Author (A), Conference (C)) and four edge types (PA, AP, PC, CP). The ACM dataset also comprises three node types (Paper (P), Author (A), and Subject (S)) and four edge types (PA, AP, PS, SP). Similarly, the IMDB dataset has three node types (Movie (M), Actor (A), Director (D)) along with four edge types (MD, DM, MA, AM). The dataset statistics are given in the Appendix.
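The generation step in Eq. (9), drawing \(\boldsymbol{\lambda}\) uniformly from the probability simplex and mixing the edge-type adjacency matrices, can be sketched as follows; using a flat Dirichlet for the uniform draw is our choice of sampler, not a detail stated in the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mixed_adjacency(edge_type_adjs):
    """Draw lambda uniformly from the simplex and form A_lam = sum_i lam_i * A_i."""
    lam = rng.dirichlet(np.ones(len(edge_type_adjs)))   # flat Dirichlet = uniform
    A_lam = sum(l * A for l, A in zip(lam, edge_type_adjs))
    return lam, A_lam
```

Setting any \(\lambda_{i}\) to zero drops all edges of the corresponding edge type, as noted above.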
Footnote 1: [https://dblp.uni-trier.de/](https://dblp.uni-trier.de/) Footnote 2: [http://dl.acm.org/](http://dl.acm.org/) Footnote 3: [https://www.imdb.com/interfaces/](https://www.imdb.com/interfaces/)

We assessed our approach against five baseline models. Specifically, GAT and GCN are designed for homogeneous graphs, while GTN (Yun et al., 2019), SimpleHGN (Lv et al., 2021) and SeHGNN (Yang et al., 2023) are state-of-the-art models developed for heterogeneous graph settings. For our model, we treat edges of different directions as a single type, resulting in two symmetric edge-type specific adjacency matrices. We consider the sample space \(\Lambda\) to span the range \([0.0,1.0]\) and discretize it in increments of \(0.05\) to obtain \(\widehat{\Lambda}\). We consider different variants of EMGNN with a GCN backbone, based on choices of \(p_{0}(\cdot),p^{\prime}_{0,t}(\cdot),q(\cdot)\), as summarized in Table 1. In particular, for EMGNN-PD and EMGNN-PH, we set \(\lambda_{0}\) for the delta function to be any \(\lambda\in\Lambda\setminus\widehat{\Lambda}\). Consequently, \(p_{0}(\cdot)\) will be \(0\) with probability \(1\) w.r.t. \(q(\cdot)\) on \(\widehat{\Lambda}\). Hence, for these variants, there is no "bad" \(\lambda\) such that the corresponding iteration increases \(L_{\mathbf{X}}(\cdot,\cdot)\) (refer to Remark 4).

#### 4.1.2 Results

Results are shown in Table 2. Similar to recent findings (Lv et al., 2021), GCN and GAT are observed to perform competitively against models designed for heterogeneous graphs, such as GTN, under appropriate settings. Meanwhile, EMGNN-PT consistently outperforms other variants in our framework, in both micro and macro F1 scores. In particular, the superior performance of EMGNN-PT, EMGNN-PO, EMGNN-PD and EMGNN-PH compared to GCN indicates the effectiveness of learning with multiple graphs.
Moreover, EMGNN-PT outperforming EMGNN-PD, along with EMGNN-PO frequently outperforming EMGNN-PH, indicates that increasing the loss for a "bad" \(\lambda\) is beneficial, as it penalizes deviations from desirable graphs. EMGNN-PT also often surpasses baseline models with attention mechanisms, namely GAT, Simple-HGN, SeHGNN, and GTN, despite not incorporating any attention mechanisms. This could be attributed to the construction of multiple graphs, which may form instances whose information is similar to what is achieved with semantic attention. In addition, the model may be able to extract additional useful interactions in other instances of graphs, which can enhance its performance.

### Chemical Datasets

Conventional molecular graph representations mirror a molecule's Lewis structure, with atoms as nodes and chemical bonds as edges. This representation falls short in capturing variations in molecular properties resulting from different three-dimensional (3D) spatial arrangements when molecules share the same topology, as seen in cases like cis-trans isomers. Moreover, molecules inherently possess uncertainty due to their quantum mechanical properties, particularly concerning electron orbitals. Hence, using a distribution of graphs for learning in such cases is a sensible choice. The process of generating different molecular graphs begins with the acquisition of coarse 3D coordinates of the atoms in a molecule using RDKit4. Following that, the interatomic Euclidean distances between all atoms within the molecule are calculated. A parameter \(\lambda\) is then introduced to define a threshold range, \([0,\lambda]\), for determining node connections and generating multiple graph instances. The notion of employing thresholding based on the interatomic distance between nodes in molecular graphs has been previously documented in works such as Shui and Karypis (2020) and Shen et al. (2023).
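The thresholding construction described above can be sketched as follows; this is a minimal stand-in for the RDKit-based pipeline, with the 3D coordinates taken as given:

```python
import numpy as np

def threshold_graph(coords, lam):
    """Binary adjacency connecting atoms with interatomic distance in (0, lam].

    coords: (n, 3) array of 3D atom positions; lam: threshold in angstroms.
    """
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)          # pairwise Euclidean distances
    return ((dist <= lam) & (dist > 0)).astype(float)  # no self-loops
```

Increasing \(\lambda\) only ever adds edges (a Vietoris-Rips-style filtration), so the family \(\{G_{\lambda}\}\) is nested.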
The former introduced a cut-off distance hyperparameter to construct heterogeneous molecular graphs. Meanwhile, our approach aligns more closely with the latter, where the Vietoris-Rips complex and thresholding are used to form a series of \(G_{\lambda}\) graphs. However, Shen et al. (2023) utilized five non-overlapping, manually adjusted intervals for thresholding and adopted a computationally intensive multi-channel configuration to learn from the five generated graphs. There are also works such as Thomas et al. (2018), where molecules are treated as 3D point clouds and a radius is set to specify interacting vertices. The graph construction process may involve adding new edges, connecting distant nodes, or removing existing edges. Ideally, the model should prioritize "useful" graph realizations and assign a low probability to less beneficial ones, effectively discarding them.

#### 4.2.1 Baselines and Datasets

MoleculeNet5 is a popular benchmark for molecular machine learning, encompassing multiple datasets that cover a diverse set of molecular properties. For our evaluation, we specifically selected the datasets FreeSolv, ESOL, and Lipophilicity, all of which are designed for graph regression tasks. Further elaboration on the chosen datasets can be found in the Appendix.

Footnote 5: [https://moleculeenet.org/datasets-1](https://moleculeenet.org/datasets-1)

We compared our approach against standard models for molecular property prediction that do not incorporate transfer learning from a larger dataset such as Zinc156. The selected baseline models for this comparison included Weave (Kearnes et al., 2016), MPNN (Gilmer et al., 2017), AttentiveFP (Xiong et al., 2020), GIN (Xu et al., 2019), as well as the standard GCN and GAT models. For EMGNN, a GNN model that generalizes GCN with a degree-1 convolutional filter \(P_{\lambda}(S_{\lambda})\) (refer to Section 2.2) is utilized as the backbone of our model.
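The distance-threshold construction described above, connecting two atoms whenever their interatomic Euclidean distance falls within \([0,\lambda]\), can be sketched as follows; the toy coordinates are illustrative (the paper obtains coarse 3D coordinates from RDKit):

```python
import math

def threshold_graph(coords, lam):
    """Build an adjacency matrix connecting atoms whose pairwise
    Euclidean distance lies within the threshold range [0, lam]."""
    n = len(coords)
    adj = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(coords[i], coords[j]) <= lam:
                adj[i][j] = adj[j][i] = 1
    return adj

# Toy 3D coordinates (illustrative, not RDKit output).
coords = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 3.0, 0.0)]

# A small threshold keeps only the short contact; a larger one adds edges.
print(threshold_graph(coords, 1.5))  # only atoms 0 and 1 connected
print(threshold_graph(coords, 3.5))  # all three pairs connected
```

Sweeping \(\lambda\) over a grid then yields the family of graph instances \(G_{\lambda}\) that the model is trained over.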
The sample space \(\Lambda\) spans the range \([1,10]\) Å and \(\widehat{\Lambda}\) is the discretized space with \(0.05\) increments.

Footnote 6: a database of purchasable drug-like compounds; [https://zinc.docking.org/tranches/home/](https://zinc.docking.org/tranches/home/)

#### 4.2.2 Results

In Table 3, we report the average test root mean square error (RMSE), with standard deviation, over ten runs for the graph regression task of predicting molecular properties. The results shown are for the case \(q(\cdot)=p_{t}(\cdot)\). We observe that EMGNN frequently performed better than the baselines. This may be due to EMGNN's training process, which exposes it to diverse graph realizations, allowing it to capture non-covalent interactions that are critical for characterizing the physical properties of molecules. In contrast, the baselines employ the conventional molecular graph representation. We note that our framework does not explicitly incorporate bond angles, but it does expose the model to graphs with a broad range of connectivities. This exposure indirectly integrates geometric information, as the latent variable constructs graphs with bond lengths falling within specific ranges. This provides our model with additional 2D information regarding interatomic distances, which may offer insights into the underlying 3D structure.

### Model Analysis

#### 4.3.1 Distribution Learned

For the heterogeneous node classification task, we examined the learned empirical distributions, depicted in Fig. 2. Across all datasets, we notice that the empirical probability that \(\lambda\) falls within approximately \([0.4,0.6]\) is relatively high. This observation suggests a possible explanation for the decent performance of GCN on a single graph with uniform edge weights.
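As a hedged sketch of how prediction losses can shape such an empirical distribution: the \(\exp(-\eta^{(t)}L_{\mathbf{X}}(\lambda,\boldsymbol{\theta}))\) terms appearing in Section 4.3.2 suggest a multiplicative reweighting of the form below. The exact update (7) is not reproduced in this excerpt, so the update rule here is our assumption, not the paper's definition:

```python
import math

def reweight(prior, losses, eta):
    """One E-step-style reweighting (assumed form): graphs whose lambda
    gives lower prediction loss receive higher empirical probability,
    p(lam) proportional to prior(lam) * exp(-eta * loss(lam))."""
    w = {lam: prior[lam] * math.exp(-eta * losses[lam]) for lam in prior}
    z = sum(w.values())
    return {lam: v / z for lam, v in w.items()}

# Illustrative prior and per-lambda losses on a coarse grid.
prior = {0.0: 0.25, 0.25: 0.25, 0.5: 0.25, 0.75: 0.25}
losses = {0.0: 2.0, 0.25: 1.0, 0.5: 0.2, 0.75: 1.5}

p = reweight(prior, losses, eta=1.0)
print(max(p, key=p.get))  # 0.5, the lambda with the lowest loss
```

Under such a rule, mass concentrating on mid-range \(\lambda\) values is exactly what the learned distributions in Fig. 2 display.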
\begin{table} \begin{tabular}{c c c c c c} \hline \hline & \multicolumn{2}{c}{IMDB} & \multicolumn{2}{c}{ACM} & \multicolumn{2}{c}{DBLP} \\ & Micro-F1 & Macro-F1 & Micro-F1 & Macro-F1 & Micro-F1 & Macro-F1 \\ \hline GCN & 61.91 \(\pm\) 0.67 & 60.91 \(\pm\) 0.57 & 91.92 \(\pm\) 0.40 & 92.00 \(\pm\) 0.41 & 94.60 \(\pm\) 0.31 & 93.88 \(\pm\) 0.36 \\ GAT & 63.54 \(\pm\) 1.10 & 61.87 \(\pm\) 0.95 & 92.61 \(\pm\) 0.36 & 92.68 \(\pm\) 0.36 & 94.48 \(\pm\) 0.22 & 93.74 \(\pm\) 0.27 \\ GTN & 60.58 \(\pm\) 2.10 & 59.12 \(\pm\) 1.58 & 92.12 \(\pm\) 0.62 & 92.23 \(\pm\) 0.60 & 94.17 \(\pm\) 0.26 & 93.59 \(\pm\) 0.40 \\ Simple-HGN & 58.91 \(\pm\) 1.06 & 58.30 \(\pm\) 0.34 & **92.73 \(\pm\) 0.21** & 92.56 \(\pm\) 0.42 & 94.48 \(\pm\) 0.38 & 93.69 \(\pm\) 0.32 \\ SeHGNN & 62.13 \(\pm\) 2.38 & 60.62 \(\pm\) 1.95 & 92.45 \(\pm\) 0.17 & 92.51 \(\pm\) 0.16 & 94.86 \(\pm\) 0.14 & 94.14 \(\pm\) 0.19 \\ \hline EMGNN-PT & **64.78 \(\pm\) 1.24** & **63.36 \(\pm\) 0.80** & 92.70 \(\pm\) 0.26 & **92.78 \(\pm\) 0.26** & **95.06 \(\pm\) 0.39** & **94.41 \(\pm\) 0.45** \\ EMGNN-PO & 63.35 \(\pm\) 0.79 & 62.25 \(\pm\) 0.59 & 92.35 \(\pm\) 0.38 & 92.45 \(\pm\) 0.38 & 94.95 \(\pm\) 0.24 & 94.28 \(\pm\) 0.28 \\ EMGNN-PD & 62.49 \(\pm\) 0.87 & 61.55 \(\pm\) 0.71 & 92.31 \(\pm\) 0.43 & 92.41 \(\pm\) 0.42 & 94.89 \(\pm\) 0.17 & 94.15 \(\pm\) 0.23 \\ EMGNN-PH & 62.01 \(\pm\) 0.55 & 61.15 \(\pm\) 0.46 & 92.18 \(\pm\) 0.52 & 92.29 \(\pm\) 0.52 & 95.02 \(\pm\) 0.19 & 94.34 \(\pm\) 0.20 \\ \hline \hline \end{tabular} \end{table} Table 2: Heterogeneous node classification task. Results averaged over ten runs. The best performance is boldfaced and the second-best performance is underlined. For IMDB, (9) is of the form \(\lambda A_{MD}+(1-\lambda)A_{MA}\). We observed that \(\lambda=1\) has a relatively lower probability compared to \(\lambda=0\). When \(\lambda=1\), it implies that edges in \(A_{MA}\) are all removed. This indicates that the MA relation is more crucial than the MD relation. 
This observation might be linked to the significant difference in edge density in the edge-type specific adjacency matrices, where \(A_{MA}\) has three times more non-zero entries than \(A_{MD}\). A similar trend was observed in the ACM dataset, where the disparity in edge density is also substantial. In ACM's case, (9) takes the form \(\lambda A_{PA}+(1-\lambda)A_{PS}\). The high probability of \(\lambda=1\) indicates that the PS relation is more significant than PA for the task. Consequently, we can infer that edge types with relatively sparse connections have a limited impact on the task.

#### 4.3.2 The Dominant Component of (7)

In Section 3.2, we use \(-\eta^{(t)}\mathbb{E}_{\lambda\sim p_{0}}\big{(}L_{\mathbf{X}}(\lambda, \boldsymbol{\theta})\big{)}\) to approximate \(\log C(\boldsymbol{\theta})\). To justify this, we provide numerical evidence that the dominant component of (7) is \(-\eta^{(t)}\mathbb{E}_{\lambda\sim p_{0}}\big{(}L_{\mathbf{X}}(\lambda, \boldsymbol{\theta})\big{)}\). This assertion is supported by assessing the ratio \[\varrho=\frac{\rho}{-\eta^{(t)}\mathbb{E}_{\lambda\sim p_{0}}\big{(}L_{ \mathbf{X}}(\lambda,\boldsymbol{\theta})\big{)}},\] where \(\rho=\frac{\text{var}\left(\exp(-\eta^{(t)}L_{\mathbf{X}}(\lambda,\boldsymbol{ \theta}))\right)}{2\left(\mathbb{E}_{\lambda\sim p_{0}}\exp(-\eta^{(t)}L_{ \mathbf{X}}(\lambda,\boldsymbol{\theta}))\right)^{2}}\). We plot the \(\varrho\) values for the heterogeneous graph datasets on EMGNN-PT over multiple \(t\) iterations in Fig. 3. We found that \(\varrho\) consistently exhibits small absolute values, supporting the postulation that \(-\eta^{(t)}\mathbb{E}_{\lambda\sim p_{0}}\big{(}L_{\mathbf{X}}(\lambda, \boldsymbol{\theta})\big{)}\) is the main component in (7).

## 5 Conclusion

In this paper, we explored employing a distribution of parametrized graphs for training a GNN in an EM framework.
Through a probabilistic framework, we handle the uncertainty in graph structures stemming from various sources. Our approach enables the model to handle multiple graphs where the prediction loss is utilized to estimate the likelihood of the graphs. The model's performance is enhanced as we provide it with a wider array of graphs, which it can then sift through to acquire more valuable information or remove noise. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{3}{c}{Random split} & \multicolumn{3}{c}{Scaffold split} \\ Datasets & FreeSolv & ESOL & Lipophilicity & FreeSolv & ESOL & Lipophilicity \\ \hline GCN & \(1.157\pm 0.215\) & \(0.652\pm 0.073\) & \(0.707\pm 0.030\) & \(2.618\pm 0.298\) & \(0.876\pm 0.037\) & \(0.760\pm 0.009\) \\ GAT & \(1.873\pm 0.522\) & \(0.837\pm 0.101\) & \(0.704\pm 0.058\) & \(2.942\pm 0.591\) & \(0.907\pm 0.034\) & \(0.777\pm 0.037\) \\ Weave & \(1.497\pm 0.251\) & \(0.798\pm 0.088\) & \(0.789\pm 0.059\) & \(3.129\pm 0.203\) & \(1.104\pm 0.063\) & \(0.844\pm 0.031\) \\ MPNN & \(1.388\pm 0.404\) & \(0.703\pm 0.075\) & \(0.640\pm 0.025\) & \(2.975\pm 0.775\) & \(1.117\pm 0.058\) & \(\mathbf{0.735\pm 0.019}\) \\ AttentiveFP & \(1.275\pm 0.289\) & \(0.673\pm 0.085\) & \(0.719\pm 0.042\) & \(2.698\pm 0.297\) & \(0.855\pm 0.029\) & \(0.762\pm 0.022\) \\ GIN & \(1.678\pm 0.494\) & \(0.792\pm 0.097\) & \(0.716\pm 0.073\) & \(2.957\pm 0.696\) & \(0.990\pm 0.057\) & \(0.770\pm 0.021\) \\ \hline EMGNN & \(\mathbf{0.936\pm 0.162}\) & \(\mathbf{0.606\pm 0.041}\) & \(\mathbf{0.639\pm 0.028}\) & \(\mathbf{2.189\pm 0.128}\) & \(\mathbf{0.834\pm 0.027}\) & \(0.743\pm 0.013\) \\ \hline \hline \end{tabular} \end{table} Table 3: Graph regression task on molecular datasets. Average test rmse reported, the lower the better. Figure 3: Plot of \(\varrho\) across \(t\) EM iterations Figure 2: Empirical distribution of \(p_{t}(\cdot)\) that is obtained from the final E-step.
2301.06635
Data-aware customization of activation functions reduces neural network error
Activation functions play critical roles in neural networks, yet current off-the-shelf neural networks pay little attention to the specific choice of activation functions used. Here we show that data-aware customization of activation functions can result in striking reductions in neural network error. We first give a simple linear algebraic explanation of the role of activation functions in neural networks; then, through connection with the Diaconis-Shahshahani Approximation Theorem, we propose a set of criteria for good activation functions. As a case study, we consider regression tasks with a partially exchangeable target function, \emph{i.e.} $f(u,v,w)=f(v,u,w)$ for $u,v\in \mathbb{R}^d$ and $w\in \mathbb{R}^k$, and prove that for such a target function, using an even activation function in at least one of the layers guarantees that the prediction preserves partial exchangeability for best performance. Since even activation functions are seldom used in practice, we designed the ``seagull'' even activation function $\log(1+x^2)$ according to our criteria. Empirical testing on over two dozen 9-25 dimensional examples with different local smoothness, curvature, and degree of exchangeability revealed that a simple substitution with the ``seagull'' activation function in an already-refined neural network can lead to an order-of-magnitude reduction in error. This improvement was most pronounced when the activation function substitution was applied to the layer in which the exchangeable variables are connected for the first time. While the improvement is greatest for low-dimensional data, experiments on the CIFAR10 image classification dataset showed that use of ``seagull'' can reduce error even for high-dimensional cases. These results collectively highlight the potential of customizing activation functions as a general approach to improve neural network performance.
Fuchang Gao, Boyu Zhang
2023-01-16T23:38:37Z
http://arxiv.org/abs/2301.06635v1
# Data-aware customization of activation functions reduces neural network error

###### Abstract

Activation functions play critical roles in neural networks, yet current off-the-shelf neural networks pay little attention to the specific choice of activation functions used. Here we show that data-aware customization of activation functions can result in striking reductions in neural network error. We first give a simple linear algebraic explanation of the role of activation functions in neural networks; then, through connection with the Diaconis-Shahshahani Approximation Theorem, we propose a set of criteria for good activation functions. As a case study, we consider regression tasks with a partially exchangeable target function, _i.e._ \(f(u,v,w)=f(v,u,w)\) for \(u,v\in\mathbb{R}^{d}\) and \(w\in\mathbb{R}^{k}\), and prove that for such a target function, using an even activation function in at least one of the layers guarantees that the prediction preserves partial exchangeability for best performance. Since even activation functions are seldom used in practice, we designed the "seagull" even activation function \(\log(1+x^{2})\) according to our criteria. Empirical testing on over two dozen 9-25 dimensional examples with different local smoothness, curvature, and degree of exchangeability revealed that a simple substitution with the "seagull" activation function in an already-refined neural network can lead to an order-of-magnitude reduction in error. This improvement was most pronounced when the activation function substitution was applied to the layer in which the exchangeable variables are connected for the first time. While the improvement is greatest for low-dimensional data, experiments on the CIFAR10 image classification dataset showed that use of "seagull" can reduce error even for high-dimensional cases. These results collectively highlight the potential of customizing activation functions as a general approach to improve neural network performance.
Keywords: Activation Function, Neural Network, Partial Exchangeable, Customization

## 1 Introduction

The last decade has witnessed the remarkable success of deep neural networks in analyzing high dimensional data, especially in computer vision and natural language processing. In recent years, these successes have sparked significant interest in applying deep neural networks to make scientific discoveries from experimental data, including the identification of hidden physical principles and governing equations across a wide range of problem settings [1, 10, 17, 19]. While discoveries of scientific principles can range from conceptual and qualitative to precise and quantitative, one common desire is to discover interpretable and quantitative relationships between target and input variables. To achieve such an ambitious goal, one must integrate existing scientific knowledge into neural network design. Despite the efforts aimed at improving the interpretability of deep neural networks, they are still largely viewed as "black boxes." This "black box" nature is sometimes convenient because it allows users to create useful models and obtain results without requiring a complete understanding of their inner workings. However, these models tend to generalize poorly across different application settings, making it difficult for users to re-design the model for new applications of interest. In real-world applications, data sets are often associated with extra information that is very important to domain experts--for example, underlying constraints between variables, or meta-data about which variables (measurements) are more or less reliable than others. Inferring this extra information _de novo_ from the data may be difficult for various reasons, such as computational cost, sampling bias, or limitations of the model itself.
For instance, invariance to reflection across a vertical axis is essential knowledge for object classification from images, but this information is difficult to learn from limited training data. At the same time, building this type of knowledge into the model is expected to increase the model's performance for the specific data set at hand. Currently, this is commonly done by data augmentation or by introducing additional terms in the loss function. In this paper, we emphasize that analytically architecting a neural network based on underlying data structure and customizing activation functions provide a more efficient approach than data augmentation or customizing loss functions. This view is shared by many researchers. For example, [22] investigated tailoring deep neural network structures to enforce a different symmetry. It is not a surprise that their methods improved the generalization of the model. On the other hand, by purposefully designing neural network architecture and components and testing their effectiveness on data, we may let machine learning models guide us to discover new scientific laws hidden in the data. In this paper, we first give a linear algebraic explanation of the inner workings of a neural network. This simple and transparent explanation helps us to get a better understanding of the role of activation functions in neural networks. Then, through the connection with the Diaconis-Shahshahani Approximation Theorem, we discuss the importance and criteria for customizing activation functions. As a case study, we consider one special yet common case when the target function \(f\) is partially exchangeable, _i.e._\(f(u,v,w)=f(v,u,w)\) for \(u,v\in\mathbb{R}^{d}\), and \(w\in\mathbb{R}^{k}\). We proved that for such a target function, using an even activation function in at least one of the layers guarantees that the prediction preserves partial exchangeability for best performance. 
Since even activation functions are seldom used in practice, we then designed the "seagull" activation function \(\log(1+x^{2})\) based on our criteria for good activation functions, and empirically tested it on over two dozen 9-64 dimensional regression problems, revealing a significant improvement. The improvement was observed regardless of the original activation function, the loss function, the optimizer, and potential noise in the data, and was most pronounced when the activation function substitution was applied to the layer in which the exchangeable variables are connected for the first time. While the improvement is greatest for low-dimensional data, our experiments on the CIFAR10 image classification problem showed that even for a high-dimensional case with a low degree of exchangeability, the improvement can still be noticeable.

## 2 A Linear Algebraic Explanation of the Role of Activation

For a deep neural network with \(k\) hidden layers, the approximation can be written as: \[f(x_{1},x_{2},\ldots,x_{d})\approx\sum_{i=1}^{m_{k}}g\left(\phi_{i}^{k}(x_{1}, x_{2},\ldots,x_{d})\right)w_{i}^{k}+b^{k},\] where \(\phi_{i}^{k}\) are recursively defined by \(\phi_{i}^{1}(x_{1},x_{2},\ldots,x_{d})=x_{1}w_{1i}^{1}+x_{2}w_{2i}^{1}+\cdots+ x_{d}w_{di}^{1}+b_{i}^{1}\), and \[\phi_{i}^{r}(x_{1},x_{2},\ldots,x_{d})=\sum_{j=1}^{m_{r}}g\left(\phi_{j}^{r-1} (x_{1},x_{2},\ldots,x_{d})\right)w_{ji}^{r}+b_{i}^{r},\ \ r=2,3,\ldots,k.\] It is well known that neural networks with at least one hidden layer can approximate any multivariate continuous function ([6][8][9]). While the proofs are not difficult, they are usually abstract. For example, the proof in [6] used the Hahn-Banach Theorem. To better understand the inner workings of neural networks, here we use linear algebra to explain the approximation mechanism. For clarity, we only consider neural networks with one hidden layer and assume the error is measured by the mean square error.
For each input data \(\mathbf{x}_{j}=(x_{j1},x_{j2},\ldots,x_{jd})\), the output \[\sum_{i=1}^{m}g\left(x_{j1}w_{1i}+x_{j2}w_{2i}+\cdots+x_{jd}w_{di}+b_{i} \right)\alpha_{i}+\beta\] is meant to approximate the target value \(y_{j}\). If we denote \[X=\left(\begin{array}{cccc}x_{11}&x_{12}&\cdots&x_{1d}\\ x_{21}&x_{22}&\cdots&x_{2d}\\ \vdots&\vdots&\cdots&\vdots\\ x_{N1}&x_{N2}&\cdots&x_{Nd}\end{array}\right),\ \ W=\left(\begin{array}{cccc}w_{11}&w_{12}&\cdots&w_{1m}\\ w_{21}&w_{22}&\cdots&w_{2m}\\ \vdots&\vdots&\cdots&\vdots\\ w_{d1}&w_{d2}&\cdots&w_{dm}\end{array}\right), \tag{1}\] and denote \(y=(y_{1},y_{2},\ldots,y_{N})^{T}\), \(b=(b_{1},b_{2},\ldots,b_{m})\), \(\alpha=(\alpha_{1},\alpha_{2},\ldots,\alpha_{m})^{T}\), and let \(\mathds{1}\) be the \(N\)-dimensional column vector whose entries are all \(1\), then the problem can be formulated as finding the least-squares solution to the problem \[g(XW+\mathds{1}b)\alpha+\mathds{1}\beta=y,\] where \(g(U)\) means to apply the function \(g\) to every entry of \(U\). When \(g(XW+\mathds{1}b)\) is fixed, there is an analytical solution \[\alpha= \left(g(XW+\mathds{1}b)-\overline{g(XW+\mathds{1}b)}\right)^{+}y, \tag{2}\] \[\beta= \overline{y}-\overline{g(XW+\mathds{1}b)}\alpha, \tag{3}\] where \(U^{+}\) means the Moore-Penrose inverse of the rectangular matrix \(U\), and \(\overline{M}\) means the row mean of the matrix \(M\). Thus, the main goal of the problem is to find \(W\) and \(b\) so that the affine span of \(g(XW+\mathds{1}b)\) can best approximate the \(N\)-dimensional vector \(y\). Note that the matrix \(XW+\mathds{1}b\) has rank at most \(d+1\). The role of the activation function is to create a matrix \(g(XW+\mathds{1}b)\) with a much higher rank, so that affine combinations of its column vectors can approximate the \(N\)-dimensional vector \(y\). Not every function \(g\) has such potential.
For example, if \(g(t)\) is a polynomial of degree \(p\), then the matrix \(g(XW+\mathds{1}b)\) has rank at most \(\binom{d+p}{p}\), regardless of the size of \(W\). Fortunately, the following theorem says that polynomials are essentially the only functions that have this limitation. Theorem 2.1: _Suppose \(g(t)\) is not a polynomial in some open interval \(I\subset\mathds{R}\), in which there exists one point at which \(g\) is infinitely many times differentiable. For any distinct points \(\mathbf{x}_{i}\), \(1\leq i\leq N\) in \(\mathds{R}^{d}\), with \(N\gg d\), and any \(d<m\leq N\), there exists a \(d\times m\) matrix \(W\) and an \(m\)-dimensional row vector \(b\), such that \(g(XW+\mathds{1}b)\) has rank \(m\), where \(X\) is the \(N\times d\) matrix whose \(i\)-th row is the vector \(\mathbf{x}_{i}\). The same statement holds if \(g(t)=\max\{t,0\}\). On the other hand, if \(g\) is a polynomial of degree \(p\), then the rank of \(g(XW+\mathds{1}b)\) is at most \(\binom{p+d}{p}\) regardless of the size of \(W\)._ Proof: Because the \(N\) points are distinct, there is a direction \((w_{1},w_{2},\ldots,w_{d})^{T}\) so that the scalar projections of the points on this vector have distinct values. That is, each term in the following sequence \[c_{k}:=x_{k1}w_{1}+x_{k2}w_{2}+\cdots+x_{kd}w_{d},\ k=1,2,\ldots,N\] has a distinct non-zero value. Let us first consider the case where \(g\) is infinitely many times differentiable on \(I\) and is not a polynomial. In this case, there exists \(b_{0}\in I\) such that the derivatives \(g^{(k)}(b_{0})\neq 0\) for \(0\leq k\leq N\). We show that for any \(d<m\leq N\), we can choose a \(d\times m\) matrix \(W=(w_{1},w_{2},\ldots,w_{d})^{T}(t_{1},t_{2},\ldots,t_{m})\) and \(b=(b_{0},b_{0},\ldots,b_{0})\) such that the matrix \(g(XW+\mathds{1}b)\) has rank \(m\). Suppose this is not true. 
Let \(S\) be the linear span of the first \(m-1\) columns of \(g(XW+\mathds{1}b)\), and consider the matrix \(W(t)=(w_{1},w_{2},\ldots,w_{d})^{T}(t_{1},t_{2},\ldots,t_{m-1},t)\). Since for all \(t\), the last column of the matrix \(g(XW(t)+\mathds{1}b)\), that is, the vector \(v(t):=(g(c_{1}t+b_{0}),g(c_{2}t+b_{0}),\ldots,g(c_{N}t+b_{0}))^{T}\), belongs to the space \(S\), which is independent of \(t\), so do the derivatives of \(v(t)\). Taking the derivative of \(v(t)\) with respect to \(t\) \(k\) times, for \(k=0,1,\ldots,m-1\), and evaluating it at \(t=0\), we obtain \(v^{(k)}(0)=g^{(k)}(b_{0})(c_{1}^{k},c_{2}^{k},\ldots,c_{N}^{k})^{T}\). Since \(g^{(k)}(b_{0})\neq 0\) for all \(k=0,1,2,\ldots,m-1\), we have \((c_{1}^{k},c_{2}^{k},\ldots,c_{N}^{k})^{T}\in S\) for \(0\leq k\leq m-1\). This is a contradiction because the \(m\) vectors \((c_{1}^{k},c_{2}^{k},\ldots,c_{N}^{k})\), \(0\leq k\leq m-1\), are linearly independent, while the dimension of \(S\) is at most \(m-1\). Thus, \(g(XW+\mathds{1}b)\) has rank \(m\) for some \(d\times m\) matrix \(W\) and some \(m\)-dimensional row vector \(b\). Now, we consider the case when \(g(t)=\max\{t,0\}\). Without loss of generality, we assume \(c_{1}>c_{2}>\cdots>c_{N}\). By choosing \(W=(w_{1},w_{2},\ldots,w_{d})^{T}(1,1,\ldots,1)\) and letting \(b=(-s_{1},-s_{2},\ldots,-s_{m})\), where \(c_{k}>s_{k}\geq c_{k+1}\) for \(1\leq k\leq m\), we see that the \(k\)-th column of \(g(XW+\mathds{1}b)\) is the vector \((c_{1}-s_{k},c_{2}-s_{k},\ldots,c_{k}-s_{k},0,0,\ldots,0)^{T}\), with the \(k\)-th coordinate non-zero and all coordinates after the \(k\)-th equal to 0. It is clear that the linear span of such column vectors has dimension \(m\). In the other direction, suppose \(g\) is a polynomial of degree \(p\); then all the column vectors of \(g(XW+\mathds{1}b)\) can be expressed as linear combinations of entry-wise products of the form \(u_{j_{1}}u_{j_{2}}\cdots u_{j_{k}}\) for \(1\leq k\leq p\), where \(u_{j}=(x_{1j},x_{2j},\ldots,x_{Nj})^{T}\).
Thus, the rank of \(g(XW+\mathds{1}b)\) is at most \(\binom{d+p}{p}\).

Remark 1: It is known that a continuous non-polynomial activation function ensures uniform approximability on a compact domain (Theorem 3.1 in [15]). Here we only gave a quick linear algebraic proof under a stronger assumption of infinite differentiability at one point, paving the road to constructing a new activation function in the next section. Also, the result about the ReLU function is known (e.g. Theorem 1 in [23]).

## 3 Customizing Activation Functions

While all functions except polynomials can be a valid choice for an activation function, a good activation function may significantly reduce the number of combination terms needed. Thus, customizing activation functions can be very important. The following theorem of Diaconis and Shahshahani [3] can be viewed as the first promotion of customizing activation functions: any multivariate continuous function \(f\) on a bounded closed region in \(\mathbb{R}^{d}\) can be approximated by \[f(x_{1},x_{2},\ldots,x_{d})\approx\sum_{i=1}^{m}g_{i}\left(a_{i1}x_{1}+a_{i2}x _{2}+\ldots+a_{id}x_{d}\right), \tag{4}\] where \(g_{i}\) are non-linear continuous functions ([3] Theorem 4). Thus, choosing good activation functions \(g_{i}\) is very important for accurate approximation. Current off-the-shelf neural networks pay little attention to choosing activation functions. Instead, they simply use one of a few fixed activation functions such as ReLU (i.e., \(\max(x,0)\)). While such a practice seems to work well, especially for classification problems, it may not be the most efficient way for regression problems. This greatly limits the potential of neural networks. A good choice of activation functions can greatly increase the efficiency of a neural network.
For example, to approximate the function \(z=\sin(xy)\), a neural network may use the following step-by-step strategy (5) to bring \((x,y)^{T}\) to \(\sin(xy)\): \[\left(\begin{array}{c}x\\ y\end{array}\right)\stackrel{{\text{linear}}}{{\longrightarrow}} \left(\begin{array}{c}x+y\\ x-y\end{array}\right)\stackrel{{(\cdot)^{2}}}{{\longrightarrow}} \left(\begin{array}{c}(x+y)^{2}\\ (x-y)^{2}\end{array}\right)\stackrel{{\text{linear}}}{{ \longrightarrow}}xy\quad\stackrel{{\sin(\cdot)}}{{ \longrightarrow}}\sin(xy) \tag{5}\] In other words, if one happens to use the non-linear functions \(f(t)=t^{2}\) and \(g(s)=\sin(s)\) as the activation functions in the first layer and the second layer, respectively, then even the exact function can be discovered. Of course, without knowing the closed-form expression of \(f\), it is impossible to choose the exact activation functions as exemplified above. However, with partial information that likely exists from domain knowledge, one may design better activation functions accordingly. This is the goal of the current paper. Let us remark that this analytical-design approach is different from the adaptive activation functions approach of Jagtap et al. [11], in which both layer-wise and neuron-wise locally adaptive activation functions were introduced into Physics-informed Neural Networks [18], and such adaptive activation functions were shown to improve training speed and accuracy on a group of benchmark data sets. To customize activation functions, the main criterion is the potential to easily generate vectors outside the linear span of the input vectors. We have shown above that as long as the activation function is not a polynomial, it can generate vectors whose affine combinations can approximate all target vectors. Beyond that, the performance of an activation function often depends on the dataset, the model hyper-parameters, and the network architecture.
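The pipeline in (5) can be verified numerically, using the identity \((x+y)^{2}-(x-y)^{2}=4xy\); this small script is our illustration, not code from the paper:

```python
import math

def network_sin_xy(x, y):
    # Layer 1 (linear): (x, y) -> (x + y, x - y)
    u, v = x + y, x - y
    # First activation f(t) = t^2
    u2, v2 = u * u, v * v
    # Layer 2 (linear): ((x+y)^2 - (x-y)^2) / 4 = xy
    xy = (u2 - v2) / 4.0
    # Second activation g(s) = sin(s)
    return math.sin(xy)

x, y = 1.3, -0.7
print(network_sin_xy(x, y), math.sin(x * y))  # both approximate sin(xy)
```

With these two hand-picked activations, the two-layer network reproduces \(\sin(xy)\) up to floating-point rounding.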
In image classification, ReLU and Leaky ReLU are popular choices. For recurrent neural networks, sigmoid and hyperbolic tangent are more commonly used. Here, at the risk of drawing criticism, we posit three criteria:

**1. Smoothness**. The smoothness of the activation function is a natural assumption for regression problems in which the target function is smooth. Consider the regression problem of approximating an unknown multivariate continuous function \(f(x_{1},x_{2},\ldots,x_{d})\) on a bounded region using a neural network that uses a gradient-based optimizer. For a gradient-based algorithm to work well, the partial derivatives of the function \(f\) must exist. Thus, it is necessary that the function \(f\) is Lipschitz, i.e., there exists a constant \(L\) such that for all \(X=(x_{1},x_{2},\ldots,x_{d})\) and \(Y=(y_{1},y_{2},\ldots,y_{d})\) in the bounded region, we have \[|f(X)-f(Y)|\leq L\sqrt{\Sigma_{i=1}^{d}(x_{i}-y_{i})^{2}}. \tag{6}\] If we allow some error \(\varepsilon\) in Hausdorff distance in approximating the level sets \(K\) of the target function, then we are approximating the set \(K+B_{d-1}(\varepsilon)\), where \(B_{d-1}(\varepsilon)\) is the ball in \(\mathds{R}^{d-1}\) with radius \(\varepsilon\). It is known that the surface of \(K+B_{d-1}(\varepsilon)\) is of class \(C^{1,1}\) (but not \(C^{2}\)) even if \(K\) is \(C^{\infty}\) smooth [12]. To approximate such a set, it is best to use shapes of the same degree of smoothness. Thus, it is better for the activation function to have a continuous first derivative. An extra degree of smoothness is not necessary. In other words, using functions like \(\log(1+x^{4})\) for extra smoothness near \(x=0\) is not necessary.
**2. Local curvature**. Although from our proof of Theorem 2.1 we only need one point \(x_{0}\) at which the activation function \(g\) satisfies \(g^{(k)}(x_{0})\neq 0\) for all \(k\), to ease the burden of choosing the best \(W\) and \(b\) in \(g(XW+\mathds{1}b)\) to match the point \(x_{0}=XW+\mathds{1}b\), we require that \(g^{(k)}(x)\neq 0\) for all \(k\) at as many values of \(x\) as possible. In other words, \(g\) should not be a piecewise polynomial. At first look, this may sound like it contradicts the common use of cubic splines in approximating smooth functions. The difference is that the activation function is used to transform the intermediate variables, not to locally approximate the function itself.

**3. Growth rate**. From the point of view of reducing generalization error, a good activation function should not put too much weight on a few particular variables. An outlier in \(X\) may cause the value \(XW+\mathds{1}b\) to increase significantly. If \(g^{\prime}(t)\) is large when \(|t|\) is large, then a gradient-based optimizer will likely make a big change to the corresponding parameter in the weight matrix \(W\), making the neural network difficult to train, and a small perturbation of a few variables could significantly alter the prediction. Thus, if an activation function is used in several layers, its growth rate at infinity should not be faster than linear. There are several dozen activation functions in common use. Analyzing these functions, we notice that besides a few bounded functions such as Binary Step, Sigmoid and Hyperbolic Tangent, all the remaining activation functions (such as ReLU, Leaky ReLU, PReLU, ELU, GELU, SELU, SiLU, Softplus, Mish) grow linearly as \(x\to\infty\).
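The growth-rate criterion can be illustrated with a quick numerical check (ours, not an experiment from the paper): for a logarithmically growing function such as \(\log(1+x^{2})\), the derivative \(2x/(1+x^{2})\) vanishes as \(|x|\) grows, so an outlier entry in \(XW+\mathds{1}b\) induces only a small gradient, whereas the derivative of ReLU stays at \(1\) for arbitrarily large inputs:

```python
def d_log1px2(x):
    # Derivative of log(1 + x^2): decays to 0 as |x| grows.
    return 2 * x / (1 + x * x)

def d_relu(x):
    # Derivative of max(x, 0): stays at 1 for every positive input.
    return 1.0 if x > 0 else 0.0

for x in (1.0, 10.0, 1000.0):  # increasingly extreme pre-activations
    print(x, d_relu(x), d_log1px2(x))
# ReLU's gradient remains 1, while the logarithmic-growth gradient
# shrinks: 2*1000/(1 + 1000^2) is roughly 0.002.
```

This damping of extreme pre-activations is the behavior the criterion asks for when the same activation is reused across several layers.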
To take all three criteria into consideration, and to fill the gap between bounded and linear growth, we propose the following activation functions: \(g_{1}(x)=\log(1+|x|^{\alpha})\), \(g_{2}(x)=\operatorname{sign}(x)\log(1+|x|^{\alpha})\), and the function \(g_{3}(x)=\log(1+|x|^{\alpha})\) if \(x>0\) and \(g_{3}(x)=0\) if \(x\leq 0\). For \(1\leq\alpha\leq 2\), the function \(g_{1}(x)\) behaves like \(x^{\alpha}\) for positive \(x\) near \(0\), and behaves like \(\alpha\log x\) for \(|x|\) large. The functions \(g_{2}\) and \(g_{3}\) are its modifications. In particular, we introduce the following two functions: \(\log(1+x^{2})\) and \(\operatorname{sign}(x)\log(1+|x|)\). For easy reference, we call \(\log(1+x^{2})\) the Seagull activation function, as its graph looks like a flying seagull, and call \(\operatorname{sign}(x)\log(1+|x|)\) the Linear Logarithmic Unit (LLU). We will demonstrate the effectiveness of the Seagull activation later in the paper. Figure 3 of [4] showed another example in which activation functions with logarithmic growth performed noticeably better than those with bounded or linear growth.

## 4 A Case Study: Partially Exchangeable Targets

In this section, we consider the case where \(f\) satisfies the relation \[f(u,v,w)=f(v,u,w) \tag{7}\] for \(u=(x_{1},x_{2},\ldots,x_{k}),v=(x_{k+1},x_{k+2},\ldots,x_{2k})\in[-1,1]^{k}\) and \(w=(x_{2k+1},x_{2k+2},\ldots,x_{d})\in[-1,1]^{d-2k}\). For convenience, we say such functions are partially exchangeable with respect to two subsets of variables. These functions are very common in practice. For example, if the function \(f\) depends on the distance between two points of observation in space, then as a multivariate function of the coordinates of these two points (with or without other factors), \(f\) is partially exchangeable. As another example, if the label of an image is invariant under left-right flipping, then the label function is partially exchangeable.
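The distance example can be checked directly: any function of two observation points that depends only on their Euclidean distance satisfies \(f(u,v)=f(v,u)\). A minimal sketch (the specific \(f\) is an illustrative choice of mine):

```python
import math
import random

def f(u, v):
    """A function of two 3D observation points that depends only on their
    Euclidean distance, hence f(u, v) = f(v, u) by construction."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return math.exp(-dist)        # e.g. a decaying signal strength

rng = random.Random(0)
u = [rng.uniform(-1, 1) for _ in range(3)]
v = [rng.uniform(-1, 1) for _ in range(3)]
assert f(u, v) == f(v, u)         # partially exchangeable in u and v
```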
Similarly, the rotational invariance of 3D structures [21] and the viewpoint equivalence of human faces [14] can also be described using partially exchangeable features. Indeed, if \(x_{1},x_{2},\ldots,x_{m}\) is the usual vectorization of the pixels, denote \(u=(x_{1},x_{2},\ldots,x_{k})\) and \(v=(x_{m},x_{m-1},\ldots,x_{m-k+1})\), where \(k=m/2\) if \(m\) is even, and \(k=(m-1)/2\) if \(m\) is odd. Then the label of the image can be expressed as \(f(u,v)\) if \(m\) is even, and \(f(u,v,x_{k+1})\) if \(m\) is odd. In either case, the function is partially exchangeable with respect to \(u\) and \(v\), i.e., \(f(u,v)=f(v,u)\) or \(f(u,v,x_{k+1})=f(v,u,x_{k+1})\). A special case is that \(f\) also satisfies \[f(u,v,w)=f\left(\frac{u+v}{2},\frac{u+v}{2},w\right). \tag{8}\] This case is more restrictive and less interesting, and will be called the trivial case. An example of the trivial case is a function \(f\) that depends on the midpoint of \(u\) and \(v\) in space, not on the actual locations of the individual points. We first prove the following: Theorem 4.1: _Suppose a multi-layer neural network is trained to predict an unknown bounded Lipschitz function \(f(x)\) on a region in \(\mathbb{R}^{d}\) that contains the origin in its interior. Suppose the target function is partially exchangeable with respect to some two subsets of variables. Let \(\widehat{f}(x)=\psi\circ g(xW)\) be the neural network prediction of \(f\), where \(W\) is the \(d\times m\) weight matrix of the first hidden layer with activation function \(g\) and no bias. If \(\widehat{f}\) preserves the non-trivial partial exchangeability of \(f\) and is non-trivial, then \(\psi\circ g(\cdot)\) is an even function, which can be achieved when \(g\) is an even function._ Proof: Let \(\alpha_{1},\alpha_{2},\ldots,\alpha_{k},\beta_{1},\beta_{2},\ldots,\beta_{k},\gamma_{1},\gamma_{2},\ldots,\gamma_{d-2k}\) be the row vectors of \(W\).
Denote \(u=(x_{1},x_{2},\ldots,x_{k})\), \(v=(x_{k+1},x_{k+2},\ldots,x_{2k})\) and \(w=(x_{2k+1},x_{2k+2},\ldots,x_{d})\). We have \[\widehat{f}(u,v,w)=\psi\circ g\left(\sum_{i=1}^{k}\alpha_{i}x_{i}+\sum_{i=1}^{k}\beta_{i}x_{i+k}+\sum_{j=1}^{d-2k}\gamma_{j}x_{2k+j}\right).\] In particular, from this expression we have \[\widehat{f}(u,-u,0) =\psi\circ g\left(\sum_{i=1}^{k}(\alpha_{i}-\beta_{i})x_{i}\right),\] \[\widehat{f}(-u,u,0) =\psi\circ g\left(-\sum_{i=1}^{k}(\alpha_{i}-\beta_{i})x_{i}\right).\] By the non-trivial partial exchangeability assumption on \(\widehat{f}\), we have \(\widehat{f}(u,-u,0)=\widehat{f}(-u,u,0)\). Thus, \[\psi\circ g\left(\sum_{i=1}^{k}(\alpha_{i}-\beta_{i})x_{i}\right)=\psi\circ g\left(-\sum_{i=1}^{k}(\alpha_{i}-\beta_{i})x_{i}\right),\] which implies that \(\psi\circ g(\cdot)\) is an even function in \(\mathds{R}^{m}\). Remark 2: Since the multivariate function \(\psi\) depends on the later layers and is typically very complicated, an easy and effective way to ensure that \(\psi\circ g(\cdot)\) is an even function is to choose an even activation function \(g\) in the first layer where the exchangeable variables are connected. The evenness of \(\psi\circ g\) can also be achieved through the evenness of \(\psi\), which can be attained by using an even activation function in later layers. For a convolutional neural network, to capture the left-right flipping invariance, one can use an even activation function in the first fully-connected layer. To capture local exchangeability, one may use an even activation in a middle layer. Using an even activation function is clearly not a common practice; indeed, almost none of the popular activation functions are even. As such, we will use the Seagull activation function constructed above, i.e., the function \(\log(1+x^{2})\). Note that Theorem 4.1 does not suggest that one should use an even activation function for all the layers.
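Remark 2 can be verified numerically: with no bias in the first layer, an even activation there makes the network value agree at the pair \((u,-u)\) and \((-u,u)\) for arbitrary later layers, while ReLU with the same weights generally does not. A minimal sketch with \(\psi\) taken as a single random affine layer (all names and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
k = 3                                   # u, v in R^3; no w-block for simplicity
d, m = 2 * k, 16                        # input dimension, hidden width

W1 = rng.normal(size=(d, m))            # first layer: no bias, as in Theorem 4.1
W2 = rng.normal(size=(m, 1))
b2 = rng.normal(size=(1,))

def net(x, g):
    """psi(g(x W1)) with psi a fixed random affine layer."""
    return (g(x @ W1) @ W2 + b2).item()

seagull = lambda t: np.log1p(t * t)     # even activation: log(1 + t^2)
relu = lambda t: np.maximum(t, 0.0)     # not even

u = rng.normal(size=k)
x_pos = np.concatenate([u, -u])         # the point (u, -u)
x_neg = np.concatenate([-u, u])         # the swapped point (-u, u) = -x_pos

# An even first-layer activation forces agreement on such pairs...
assert abs(net(x_pos, seagull) - net(x_neg, seagull)) < 1e-10
# ...whereas ReLU with the same random weights does not.
assert abs(net(x_pos, relu) - net(x_neg, relu)) > 1e-6
```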
Also, our experiments do not seem to indicate that one can get further benefit from extensive use of even activation functions. We suggest using an even activation in the layer where the exchangeable variables are integrated together for the first time.

## 5 Empirical Evaluation

This section demonstrates the use of the proposed customized activation functions on both synthetic and real-world data. We first consider low-dimensional cases where the target function has strong exchangeability.

### Experiment on different target functions and different original activations

Consider the function \(y=f(u,v,w)\) giving the area of the triangle with vertices \(u=(x_{1},x_{2},x_{3})\), \(v=(x_{4},x_{5},x_{6})\), and \(w=(x_{7},x_{8},x_{9})\). Clearly, \(f(x_{1},x_{2},...,x_{9})\) satisfies \(f(u,v,w)=f(v,u,w)\). In fact, we have the closed formula \(f(u,v,w)=\frac{1}{2}\sqrt{A^{2}+B^{2}+C^{2}}\), where \[A =(x_{4}-x_{1})(x_{8}-x_{2})-(x_{7}-x_{1})(x_{5}-x_{2}),\] \[B =(x_{4}-x_{1})(x_{9}-x_{3})-(x_{7}-x_{1})(x_{6}-x_{3}),\] \[C =(x_{5}-x_{2})(x_{9}-x_{3})-(x_{8}-x_{2})(x_{6}-x_{3}).\] In short, \(f\) is the square root of a nonnegative fourth-degree polynomial in 9 variables. To ensure that the improved performance is not due to the specific formula of the target function \(f\), we also consider the following four target functions: \(\log(1+f)\), \(e^{f}/100\), \(\sin(f)\) and \(Q(f)=\sqrt{\frac{f^{2}+3}{f+1}}\). These five functions cover different rates of growth, from logarithmic to exponential, as well as oscillatory behavior. To discover an approximate formula using a neural network, we randomly sampled 10,000 9-dimensional vectors from \([-2,2]^{9}\), representing the \((x,y,z)\)-coordinates of the three points \(u,v,w\) in \(\mathds{R}^{3}\), and created labels using the formulas above. In the same way, we independently generated 2000 vectors and the corresponding labels to form a test set.
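The data-generation step above can be sketched directly; the cross-product form used here is equivalent to the \(A\), \(B\), \(C\) expansion (its components are exactly \(\pm A\), \(\pm B\), \(\pm C\)):

```python
import numpy as np

def triangle_area(p):
    """Area of the triangle with vertices u = p[0:3], v = p[3:6], w = p[6:9]:
    0.5 * sqrt(A^2 + B^2 + C^2) = 0.5 * ||(v - u) x (w - u)||."""
    u, v, w = p[0:3], p[3:6], p[6:9]
    return 0.5 * np.linalg.norm(np.cross(v - u, w - u))

rng = np.random.default_rng(0)
X_train = rng.uniform(-2.0, 2.0, size=(10000, 9))   # three 3D points per row
y_train = np.array([triangle_area(p) for p in X_train])

# Partial exchangeability of the label: swapping u and v keeps the area.
p = X_train[0]
p_swapped = np.concatenate([p[3:6], p[0:3], p[6:9]])
assert abs(triangle_area(p) - triangle_area(p_swapped)) < 1e-10
```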
For the function \(f\), we then built several fully-connected neural networks and selected the one with the best overall performance. The selected model was a fully-connected neural network (_i.e._, a multilayer perceptron) with four hidden layers; the input layer had 9 nodes, the output layer had 1 node, and each hidden layer had 100 nodes. For the other four functions, we repeated the same procedure and found that the same architecture was close to the best, so we used it for all five functions. Based on the selected architecture, for each target function we built five models using five popular activation functions (ReLU, ELU, sigmoid, tanh, softplus), and trained all 25 models with the RMSProp optimizer for 500 epochs with the batch size set to 100. The learning rate was tuned for each model, starting from one of \((0.001,0.003,0.005)\) and halved every 100 epochs; the best setting was used. To demonstrate the effectiveness of the customized activation function, for each of the 25 selected models we replaced the activation function in the first hidden layer by the Seagull activation function \(\log(1+x^{2})\), while leaving all the remaining parts of the neural networks and the hyperparameters unchanged. We evaluated the performance of the networks by the Mean Absolute Error (MAE), calculated as follows: \[MAE=\frac{1}{N}\sum_{j=1}^{N}|y_{j}-\hat{y_{j}}| \tag{9}\] where \(N\) is the number of testing samples, and \(y_{j}\) and \(\hat{y_{j}}\) are the ground truth and the prediction, respectively.

Figure 1: Effect of using Seagull activation functions on five regression tasks. Wider bars: MAE for the original model; narrow blue bars: MAE of the model when the first activation function is replaced by Seagull. Left: each experiment was performed 5 times independently. Right: with 5% noise added.
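The two error metrics used in this and later experiments are straightforward; a plain-Python sketch:

```python
def mae(y, y_hat):
    """Mean Absolute Error, as in Eq. (9)."""
    return sum(abs(a - b) for a, b in zip(y, y_hat)) / len(y)

def mse(y, y_hat):
    """Mean Squared Error, used later as an alternative loss."""
    return sum((a - b) ** 2 for a, b in zip(y, y_hat)) / len(y)

assert mae([1.0, 2.0, 3.0], [1.5, 2.0, 2.0]) == 0.5
assert mse([1.0, 2.0, 3.0], [1.5, 2.0, 2.0]) == 1.25 / 3
```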
The reason for first choosing MAE instead of MSE is that MAE is less related to the square function appearing in the new activation function \(\log(1+x^{2})\). The experiment was performed 5 times independently with different random seeds. The results, reported in Figure 1-left, are consistent across the different target functions. In real-world applications, data always contain noise. To test the effectiveness of our method on noisy data, we added random noise with mean 0 and 5% standard deviation to the training labels. The results in Figure 1-right demonstrate the robustness and effectiveness of the even activation function. The improvement can also be visualized in the learning curve. To demonstrate this, we consider the \(3\times 3\) determinant formed by the 9 variables \(x_{1},x_{2},\ldots,x_{9}\), and let \(f_{1}\) be the cosine of the determinant. The function \(f_{1}\) is partially exchangeable. To test the effectiveness of substituting with Seagull, we generated 50,000 training vectors, with each coordinate randomly drawn from a centered Gaussian distribution with variance 1, and in the same way independently generated 2000 test vectors. After experimenting and tuning the hyperparameters, we built a fully-connected 8-layer neural network with 100 neurons in all middle layers, interlaced with two Batch Normalization layers and five Dropout layers with dropout rate 0.5, using Sigmoid as the activation function. We trained the model for 500 epochs with batch size 100 using RMSprop, with learning rate 0.003 halved every 100 epochs, to minimize the MSE. Then, we replaced the Sigmoid by Seagull in the first layer, leaving all the hyperparameters unchanged, and retrained the model from scratch. The result is shown in Figure 2.

### Experiment on varying degree of exchangeability

The function \(f\) used in the previous two experiments has strong exchangeability.
Indeed, the function is obviously invariant when the three groups of variables \((x_{1},x_{2},x_{3})\), \((x_{4},x_{5},x_{6})\) and \((x_{7},x_{8},x_{9})\) are permuted. Less obvious is that the function is also invariant when the three groups of variables \((x_{1},x_{4},x_{7})\), \((x_{2},x_{5},x_{8})\) and \((x_{3},x_{6},x_{9})\) are permuted. In total, there are 12 different permutations under which the function \(f\) is invariant. To see the effect of the even Seagull activation on target functions with varying degrees of exchangeability, we consider an additional target function. To minimize artifacts caused by dimension, we again use a 9-dimensional function. Let \(\phi\) be the solid angle formed by the three vectors \(u=(x_{1},x_{2},x_{3})\), \(v=(x_{4},x_{5},x_{6})\), and \(w=(x_{7},x_{8},x_{9})\) on the unit sphere.

Figure 2: Comparison of the learning curves.

The target function \(\phi\) has the closed formula \[\phi(x_{1},x_{2},x_{3},x_{4},x_{5},x_{6},x_{7},x_{8},x_{9})=\phi(u,v,w)=\left\{\begin{array}{ll}2\tan^{-1}z&z\geq 0\\ \pi+2\tan^{-1}z&z<0\end{array}\right.\] where \[z=\frac{|(u\times v)\cdot w|}{1+u\cdot v+v\cdot w+w\cdot u}.\] Note that \(\phi\) is intrinsically different from \(f\), and the inner function \(z\) even has a singularity on the surface \(u\cdot v+v\cdot w+w\cdot u=-1\). The function \(\phi\) is invariant when the three groups of variables \((x_{1},x_{2},x_{3})\), \((x_{4},x_{5},x_{6})\) and \((x_{7},x_{8},x_{9})\) are permuted, but not when the three groups \((x_{1},x_{4},x_{7})\), \((x_{2},x_{5},x_{8})\) and \((x_{3},x_{6},x_{9})\) are permuted. There are 6 different permutations under which \(\phi\) is invariant. Furthermore, if we define \(\psi(x_{1},x_{2},\ldots,x_{9})=\phi(x_{1},x_{2},\ldots,x_{8},-x_{9})\), then \(\psi\) is invariant only when \((x_{1},x_{2},x_{3})\) and \((x_{4},x_{5},x_{6})\) are exchanged.
Thus, \(\psi\) has the weakest exchangeability (2 permutations), while \(f\) has the strongest (12 permutations) among the three functions \(f\), \(\phi\) and \(\psi\). We used the same approach to test the effectiveness of substituting the original activation in the first layer by Seagull. For each function, we built five models; the number of layers and the number of neurons in each layer are the same, but the activation functions differ (ReLU, ELU, Sigmoid, Tanh, Softplus). We used MSE as the loss function and experimented with three different optimizers (SGD, Adam and RMSprop), each with three choices of learning rate. The results in Figure 3 show that while the improvement seems to be positively correlated with the degree of exchangeability, in the low-dimensional case the improvement can be significant even when there is only weak exchangeability. Without exchangeability, a substitution with Seagull is not expected to have any benefit and could even hurt. To verify this statement, we used the model and function associated with Figure 2, but replaced the cosine function with the sine function. The new function is no longer partially exchangeable, and the substitution did not lead to any noticeable improvement. Indeed, for some activation functions we even got slightly worse results, partly because the hyperparameters were not tuned after the substitution.

### Experiment on applying Seagull on different layers

The layer on which the Seagull function is applied matters. The most effective way is to put it in the layer where the exchangeable variables are connected for the first time. Figure 4-left shows that the effectiveness quickly decreases for later layers. If the activation function is placed at the last layer, the effectiveness can vanish; in one case, we even got worse performance. We believe this is partly due to the fact that the hyperparameters were tuned for the original activation function and not adjusted after the substitution.
Figure 3: Effect of using Seagull activation functions on some 9-dimensional target functions with varying degrees of exchangeability.

The improvement is not an artifact of the MAE loss function: Figure 4-right shows that when the MAE loss is replaced with the Mean Squared Error (MSE), the improvement is also significant. While it is possible that a more carefully designed neural network with longer training and finer tuning of the learning rates may produce predictions with smaller errors, the effect of using the Seagull activation function in the first layer is evident in all examples.

### Experiment on dimensions

The effect of partial exchangeability decreases as the dimension of the data increases. Indeed, in the high-dimensional case there are many combinations of the variables, and the ones satisfying the partial exchangeability condition are only a tiny fraction. Thus, it is expected that the improvement from using a Seagull activation will decrease as the dimension increases. The results in the previous experiments are for relatively low dimension (9). To see the dependence on the dimension, we experimented with 16-, 25-, 36-, and 64-dimensional target functions. The improvement quickly decreases as the dimension increases. For example, for a 25-dimensional function, we experimented on the volume of the 5-dimensional simplex generated by the origin and five 5-dimensional points whose coordinates are drawn from a normal distribution with mean 0 and variance 1. The function is invariant under 10 different permutations. We used the same experimental strategy as in the previous examples, but the improvement dropped to 12.7% on average. When we increased the dimension to 64, we observed no improvement without tuning the hyperparameters after substituting the activation function.
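For reference, the 25-variable target just described can be generated with the standard simplex-volume formula \(V=|\det P|/d!\), where the rows of \(P\) are the five points; this formula is my reading of the setup, not spelled out in the paper:

```python
import math
import numpy as np

def simplex_volume(P):
    """Volume of the simplex spanned by the origin and the d rows of the
    d x d matrix P: |det(P)| / d!."""
    d = P.shape[0]
    return abs(np.linalg.det(P)) / math.factorial(d)

rng = np.random.default_rng(1)
P = rng.normal(size=(5, 5))      # five 5-dimensional points, N(0, 1) coordinates
v = simplex_volume(P)

# The label is invariant under reordering the five points: a row permutation
# only flips the sign of the determinant, and the absolute value removes it.
Q = P[[1, 0, 2, 3, 4]]           # swap the first two points
assert abs(simplex_volume(Q) - v) < 1e-9
```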
To see whether improvement can still be made for very high-dimensional partially exchangeable target functions when the hyperparameters are tuned after the substitution, we performed experiments on CIFAR-10, a popular data set consisting of 60,000 \(32\times 32\) color images in 10 classes, with 6,000 images per class. More details about the CIFAR-10 data set can be found in [13]. The dimension of the data is 3072. There is partial exchangeability due to the left-right flipping invariance. One may also argue that the classification is likely to be invariant under shuffling of the RGB channels. Although the partial exchangeability is very limited in such high-dimensional data, we can still see the effectiveness of using a Seagull activation function. The following DCNNs were tested: RegNet [16], ResNet [7], VGG [20], and DPN [2]. The details of these networks vary. In particular, we employed RegNet with 200 million FLOPs (RegNet200MF), the 50-layer ResNet (ResNet50), the VGG net with 13 convolutional layers and 3 fully connected layers (VGG16), and DPN-26. There are different ways to introduce an even activation function into these DCNNs. Specifically, we replaced the ReLU at the output of each block for RegNet200MF, ResNet50, and VGG16. Taking ResNet50 as an example, each block consists of a few weighted layers and a skip connection. We replaced the last ReLU, which directly affects the output of the block, with the Seagull activation function \(\log(1+x^{2})\) and left the rest of the activation functions unchanged. For the training setup, 50,000 images were used as training samples and 10,000 images for testing. The total number of epochs was limited to 350. We selected the SGD optimizer with momentum 0.9 and set the learning rate to \(0.1\) for epochs \([0,150)\), \(0.01\) for epochs \([150,250)\), and \(0.001\) for epochs \([250,350)\).

Figure 4: Effect of using Seagull activation functions on different layers. Left: MAE loss. Right: MSE loss.
All training was started from scratch. Considering the random initialization of the DCNNs, each training process was performed 5 times, and the average accuracy on the testing data and its standard deviation are summarized in Figure 6. The results in Figure 6 reveal some interesting information. The Seagull function brings a significant improvement over the ReLU function on all three DCNNs, despite some variation. The feature maps in the network retain the partial exchangeability of the images: considering the DCNN structures, the convolution operations and activation functions leave the spatial relationships, and hence the partial exchangeability, unchanged. According to Theorem 4.1, an even activation function can capture this feature better than ReLU. While one might argue that rotation and flip invariance could be achieved by data augmentation and filter rotation [5], using the proposed Seagull function provides the same feature, as well as a significant reduction in network size and training time. For DPN-26, a different strategy was selected. First, we trained the original DPN-26 from scratch for 300 epochs. Then we inserted a fully-connected layer with an even activation function and trained the neural network with the same hyperparameters for 300 epochs. The result shows a noticeable improvement: the network without even activation functions reached an average accuracy of \(95.10\%\pm 0.11\%\), whereas the network using an even activation function reached \(95.35\%\pm 0.04\%\). The \(95.35\%\) accuracy also outperformed DPN-92, which has many more parameters. This improvement further demonstrates the effectiveness of using even activation functions for partially exchangeable target functions.

## 6 Conclusion

In this paper, we emphasized the importance of studying activation functions in neural networks.
We theoretically proved, and experimentally validated on synthetic and real-world data sets, that when the target function is partially exchangeable, using the Seagull activation function in the layer where the exchangeable variables are connected for the first time can improve neural network performance. Through a special yet commonly encountered case, these results demonstrate the great potential of customizing activation functions.

**Acknowledgement.** We would like to acknowledge the support from the National Science Foundation grants OCA-1940270 and 2019609, and the National Institutes of Health grant P20GM104420.

Figure 5: Effect of using Seagull activation functions on a 25-dimensional target function.
2301.08618
Solving PDEs with Unmeasurable Source Terms Using Coupled Physics-Informed Neural Network with Recurrent Prediction for Soft Sensors
Partial differential equations (PDEs) are a model candidate for soft sensors in industrial processes with spatiotemporal dependence. Although physics-informed neural networks (PINNs) are a promising machine learning method for solving PDEs, they are infeasible for the nonhomogeneous PDEs with unmeasurable source terms. To this end, a coupled PINN (CPINN) with a recurrent prediction (RP) learning strategy (CPINN-RP) is proposed. First, CPINN composed of NetU and NetG is proposed. NetU is for approximating PDEs solutions and NetG is for regularizing the training of NetU. The two networks are integrated into a data-physics-hybrid loss function. Then, we theoretically prove that the proposed CPINN has a satisfying approximation capability for solutions to nonhomogeneous PDEs with unmeasurable source terms. Besides the theoretical aspects, we propose a hierarchical training strategy to optimize and couple NetU and NetG. Secondly, NetU-RP is proposed for compensating information loss in data sampling to improve the prediction performance, in which RP is the recurrently delayed outputs of well-trained CPINN and hard sensors. Finally, the artificial and practical datasets are used to verify the feasibility and effectiveness of CPINN-RP for soft sensors.
Aina Wang, Pan Qin, Xi-Ming Sun
2023-01-20T14:59:33Z
http://arxiv.org/abs/2301.08618v3
Coupled Physics-informed Neural Networks for Inferring Solutions of Partial Differential Equations with Unknown Source Terms

###### Abstract

Physics-informed neural networks (PINNs) provide a transformative development for approximating the solutions to partial differential equations (PDEs). This work proposes a coupled physics-informed neural network (C-PINN) for nonhomogeneous PDEs with unknown dynamical source terms, which describe systems with external forces and cannot be well approximated by existing PINNs. In our method, two neural networks, \(NetU\) and \(NetG\), are proposed. \(NetU\) is constructed to generate a quasi-solution satisfying the PDEs under study, and \(NetG\) is used to regularize the training of \(NetU\). The two networks are then integrated into a data-physics-hybrid cost function. Finally, we propose a hierarchical training strategy to optimize and couple the two networks. The performance of C-PINN is demonstrated by approximating several classical PDEs.

_Keywords_: coupled physics-informed neural network, hierarchical training strategy, partial differential equations, unknown source term

## I Introduction

Partial differential equations (PDEs) are one of the general representations for describing spatio-temporal dependence in physics [1], medicine [2], engineering [3], finance [4], and weather [5, 6]. Numerical approaches, like the finite difference method (FDM) [7] and the finite element method (FEM) [8, 9], have been widely investigated and applied. FDM uses a topologically square network of lines to construct the discretization of a PDE; thus, complex geometries in multiple dimensions challenge FDM [10]. On the other hand, complicated geometries can be treated with FEM [11]. The greatest difficulty of classical numerical approaches is balancing the accuracy and efficiency of forming meshes.
Among the numerical methods for solving PDEs, the Galerkin method is a famous computational method, in which a linear combination of basis functions is employed to approximate the solutions to PDEs [12]. Motivated by this, several works have used machine learning models to replace the linear combination of basis functions and construct data-efficient, physics-informed learning methods for solving PDEs [13, 14, 15]. The successful applications of deep learning in various fields, such as image [16], text [17], and speech recognition [18], make neural networks excellent replacements for the linear combination of basis functions in solving PDEs [4]. Consequently, leveraging the well-known approximation capability of neural networks to solve PDEs is a natural idea that has been investigated in various forms previously [19, 20, 21]. The framework of physics-informed neural networks (PINNs) [22] was introduced to solve forward problems while respecting any given physical laws governed by PDEs, including the nonlinear operator and the initial and boundary conditions. Within the PINNs framework, both the sparse measurements and the physical knowledge are fully integrated into the cost function [23, 24]. The solution, with its spatio-temporal dependence, is obtained by minimizing the cost function. Note that the approximation obtained by machine learning and deep learning is mesh-free, so there is no problem of balancing the accuracy and efficiency of forming meshes. Meanwhile, the potential of using PINNs to solve inverse problems is promising [25]. A hybrid PINN was proposed to solve PDEs in [26], in which a local fitting method was combined with neural networks; the hybrid PINN was used to identify unknown constant parameters in PDEs. The generative adversarial network (GAN) [27] has also been physics-informed to solve inverse problems: a stochastic physics-informed GAN was investigated for estimating the distributions of unknown parameters in PDEs.
The recent work [28] encoded the physical laws governed by PDEs into the architecture of GANs to solve inverse problems for stochastic PDEs. PINNs have also been combined with Bayesian methods to solve inverse problems from noisy data [29]. PDEs can be classified into homogeneous and nonhomogeneous types. Systems without external forces can be described by homogeneous PDEs. Nonhomogeneous PDEs can reveal the continuous energy propagation behavior of a source and are thereby effective for describing practical systems driven by external forces. The functional forms of the solution and the source term were both assumed to be unknown in [30], in which measurements of the source term had to be obtained separately from measurements of the solution. However, independent measurements of the external forces cannot always be easily obtained in practical situations. The recent work [31] can directly solve the forward and inverse problems of steady-state PDEs, where the source terms were assumed to be constant. Thus, [31] is not feasible for systems with unsteady external forces, which must be described by dynamical functions. Although the aforementioned methods have made great progress on unknown parameters, prior information or measurements of external forces cannot always be easily obtained in practical situations. For example, the real distribution of the seismic wave field underground is unknown [32]; the vast number of signals inside an engine, which indicate its operation state, cannot be isolated [33]. Furthermore, the existing methods assuming a constant source term cannot readily be extended to describe the spatio-temporal dependence of complex dynamical systems. The determination of dynamical source terms with limited or even no prior information is an under-investigated issue.
To this end, this paper proposes a coupled PINN (C-PINN) that uses sparse measurements and limited prior information about the PDEs to solve PDEs with unknown source terms. In our method, two neural networks, \(NetU\) and \(NetG\), are proposed. \(NetU\) is applied to generate a quasi-solution satisfying the PDEs under study; \(NetG\) is used to regularize the training of \(NetU\). Then, the two networks are integrated into a data-physics-hybrid cost function. Furthermore, we propose a hierarchical training strategy to optimize and couple the two networks. Finally, the proposed C-PINN is applied to solve several classical PDEs to demonstrate its performance. The rest of the paper is organized as follows. The classical PINNs are briefly reviewed in Section II. A C-PINN using sparse measurements and limited prior knowledge to solve PDEs with unknown source terms is proposed in Section III, where the two neural networks, \(NetU\) and \(NetG\), are introduced, together with a hierarchical training strategy to optimize and couple them. In Section IV, the proposed C-PINN is validated with four case studies. In Section V, concluding remarks and future work are presented.

## II Brief Review of PINNs

In this section, we briefly review the basic idea of PINNs for data-driven solutions to PDEs and data-driven discovery of PDEs [22]. The data-driven solution problem is to solve PDEs of the generalized form \[u_{t}(\mathbf{x},t)+\mathcal{N}[u(\mathbf{x},t)]=0,\,\,\mathbf{x}\in\Omega\subseteq\mathbb{R}^{d},\,\,t\in[0,T]\subset\mathbb{R} \tag{1}\] with known parameters.
Here, \(\mathbf{x}\) is the spatial variable, \(t\) is the temporal variable with \(t=0\) being the initial state, \(u:\mathbb{R}^{d}\times\mathbb{R}\rightarrow\mathbb{R}\) denotes the hidden solution, \(\mathcal{N}[\cdot]\) is a series of partial differential operators, and the domain \(\Omega\subseteq\mathbb{R}^{d}\) is a spatially bounded open set with boundary \(\partial\Omega\). Analytical and numerical methods have been widely investigated to find a proper solution \(\psi(\mathbf{x},t)\) satisfying (1) [34]. The left-hand side of (1) can be used to define a residual function as follows: \[f(\mathbf{x},t):=u_{t}(\mathbf{x},t)+\mathcal{N}[u(\mathbf{x},t)], \tag{2}\] where a neural network is used to approximate the solution \(\psi(\mathbf{x},t)\) to the PDEs. The inverse problem is focused on the data-driven discovery of PDEs of the generalized form (1), where the unknown parameters of the PDEs become parameters of the PINN. PINNs for both problems can be trained by minimizing the cost function \[\text{MSE}=\text{MSE}_{D}+\text{MSE}_{PH}. \tag{3}\] Here, \(\text{MSE}_{D}\) is formulated as \[\text{MSE}_{D}=\sum_{(\mathbf{x},t)\in D}\left(\hat{u}\left(\mathbf{x},t;\hat{\mathbf{\Theta}}_{U}\right)-u\left(\mathbf{x},t\right)\right)^{2}, \tag{4}\] where \(\hat{u}\left(\mathbf{x},t;\hat{\mathbf{\Theta}}_{U}\right)\) is the function of the neural network with \(\hat{\mathbf{\Theta}}_{U}\) being its trained parameter set, and \(D\) denotes the training dataset. This mean squared error term can be considered the data-driven loss. \(\text{MSE}_{PH}\) is given by \[\text{MSE}_{PH}=\sum_{(\mathbf{x},t)\in E}\hat{f}\left(\mathbf{x},t\right)^{2}, \tag{5}\] which regularizes \(\hat{u}\left(\mathbf{x},t;\hat{\mathbf{\Theta}}_{U}\right)\) to satisfy (1), where \(E\) denotes the set of collocation points. This regularization term can be considered the physics-informed loss for the homogeneous PDEs.
Here, \(\hat{f}\left(\mathbf{x},t\right)\) is defined as \[\hat{f}\left(\mathbf{x},t\right):=\hat{u}_{t}\left(\mathbf{x},t;\hat{\mathbf{\Theta}}_{U }\right)+\mathcal{N}\left[\hat{u}\left(\mathbf{x},t;\hat{\mathbf{\Theta}}_{U}\right) \right], \tag{6}\] where \(\hat{u}_{t}\left(\mathbf{x},t;\hat{\mathbf{\Theta}}_{U}\right)\) and \(\mathcal{N}\left[\hat{u}\left(\mathbf{x},t;\hat{\mathbf{\Theta}}_{U}\right)\right]\) can be obtained using automatic differentiation [35]. ## III Constructing C-PINN C-PINN for solving PDEs with unknown source terms is presented in this section. The nonhomogeneous PDEs are of the generalized form \[u_{t}(\mathbf{x},t)+\mathcal{N}[u(\mathbf{x},t)]=g(\mathbf{x},t),\,\mathbf{x}\in\Omega \subseteq\mathbb{R}^{d},\,t\in[0,T]\subset\mathbb{R}, \tag{7}\] where \(\mathbf{x}\) and \(t\) are the spatial and temporal variables, respectively, \(u:\mathbb{R}^{d}\times\mathbb{R}\rightarrow\mathbb{R}\) is as in (1), \(g:\mathbb{R}^{d}\times\mathbb{R}\rightarrow\mathbb{R}\) denotes a general source term, which may be linear, nonlinear, steady-state, or dynamical, and \(\Omega\) is a bounded open spatial set with boundary \(\partial\Omega\). Without loss of generality, the spatial domain of (7) is subject to Dirichlet boundary conditions, Neumann boundary conditions, or a hybrid of the two. In general, \(g(\mathbf{x},t)\) describes the external forces acting on dynamical systems and cannot always be measured separately, as mentioned in Section I. Different from (6), the residual function for the nonhomogeneous case is defined as \[f_{N}(\mathbf{x},t):=f(\mathbf{x},t)-g(\mathbf{x},t)=u_{t}(\mathbf{x},t)+\mathcal{N}[u(\mathbf{x},t )]-g(\mathbf{x},t). \tag{8}\] When \(g(\mathbf{x},t)\) is exactly known, \(\hat{f}_{N}(\mathbf{x},t)\), obtained via automatic differentiation from (8), can be directly used to regularize the approximation of \(u(\mathbf{x},t)\).
However, an unknown \(g(\mathbf{x},t)\) leads to an unknown \(f_{N}(\mathbf{x},t)\), which makes the aforementioned regularization infeasible. Therefore, the goal of C-PINN is to approximate the solution to PDEs with unknown source terms described by (7). To this end, two neural networks are included in C-PINN: (a) \(NetU\) for approximating the solution satisfying (7); (b) \(NetG\) for regularizing the training of \(NetU\). #### Iii-B1 Cost function To train C-PINN, the training dataset is uniformly sampled from the system governed by (7). The training dataset \(D\) is divided into \(D=D_{B}\cup D_{I}\) with \(D_{B}\cap D_{I}=\varnothing\), where \(D_{B}\) denotes the boundary and initial training dataset and \(D_{I}\) is the training dataset from the interior of \(\Omega\). Collocation points \((\mathbf{x},t)\in E\) correspond to those of \((\mathbf{x},t,u)\in D_{I}\). Then, we adopt the data-physics-hybrid cost function \[\text{MSE}=\text{MSE}_{D}+\text{MSE}_{PN} \tag{9}\] to train our proposed C-PINN. \(\text{MSE}_{D}\) and \(\text{MSE}_{PN}\) in (9) are the data-driven loss and the physics-informed loss for the nonhomogeneous PDEs, respectively. \(\text{MSE}_{D}\) adopts the same form as (4). \(\text{MSE}_{PN}\) is given by \[\text{MSE}_{PN}=\sum_{(\mathbf{x},t)\in E}\left(\hat{f}\left(\mathbf{x},t \right)-\hat{g}\left(\mathbf{x},t;\hat{\mathbf{\Theta}}_{G}\right)\right)^{2},\] where \(\hat{g}\left(\mathbf{x},t;\hat{\mathbf{\Theta}}_{G}\right)\) is the function represented by \(NetG\) with \(\hat{\mathbf{\Theta}}_{G}\) being its trained parameter set, and \(\hat{f}(\mathbf{x},t)\) has been defined by (6). \(\text{MSE}_{PN}\) corresponds to the physics-informed loss for the nonhomogeneous PDEs obtained from (8), imposed at a finite set of collocation points \((\mathbf{x},t)\in E\), and is used to regularize \(\hat{u}\left(\mathbf{x},t;\hat{\mathbf{\Theta}}_{U}\right)\) of \(NetU\) to satisfy (7).
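To make the cost function (9) concrete, the sketch below evaluates a discrete analogue of \(\text{MSE}_{D}+\text{MSE}_{PN}\) for the heat-equation instance of (7) (where \(\mathcal{N}[u]=-u_{xx}\)). It is a sketch under our own assumptions: central finite differences stand in for automatic differentiation, and plain callables `u_hat` and `g_hat` stand in for \(NetU\) and \(NetG\); all names are ours, not from the paper. With an exact manufactured solution and its exact source term, both loss terms are near zero.

```python
import numpy as np

def heat_residual(u_hat, x, t, h=1e-3):
    """f_hat(x,t) = u_t - u_xx for the 1-D heat equation (a = 1),
    approximated with central finite differences instead of autodiff."""
    u_t = (u_hat(x, t + h) - u_hat(x, t - h)) / (2 * h)
    u_xx = (u_hat(x + h, t) - 2 * u_hat(x, t) + u_hat(x - h, t)) / h**2
    return u_t - u_xx

def cpinn_loss(u_hat, g_hat, D, E):
    """MSE = MSE_D + MSE_PN as in (9); D holds (x, t, u) training triples,
    E holds (x, t) collocation points."""
    mse_d = sum((u_hat(x, t) - u) ** 2 for x, t, u in D)
    mse_pn = sum((heat_residual(u_hat, x, t) - g_hat(x, t)) ** 2 for x, t in E)
    return mse_d + mse_pn

# Manufactured example: u = t*sin(x) solves u_t = u_xx + g with g = (1+t)*sin(x).
u_exact = lambda x, t: t * np.sin(x)
g_exact = lambda x, t: (1 + t) * np.sin(x)

D = [(x, t, u_exact(x, t)) for x in (0.5, 1.5, 2.5) for t in (0.2, 0.8)]
E = [(1.0, 0.3), (2.0, 0.6), (0.7, 0.9)]
loss = cpinn_loss(u_exact, g_exact, D, E)   # near zero for the exact pair (u, g)
```

Swapping `g_exact` for a wrong source term (e.g., the zero function) makes `cpinn_loss` large, which is exactly the signal (9) uses to train \(NetG\).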
#### Iii-B2 Hierarchical training strategy Considering the relation between \(NetU\) and \(NetG\) in (9), a hierarchical training strategy is proposed. In many cases, the exact formulation or even sparse measurements of \(g(\mathbf{x},t)\) are not available, while the sparse measurements \(D_{I}\) can be obtained to enforce the structure of (7) and thereby achieve \(\hat{\mathbf{\Theta}}_{G}\). Thus, \(\mathbf{\Theta}_{U}\) and \(\mathbf{\Theta}_{G}\) should be estimated iteratively with mutual dependence. Assuming \(k\) is the present iteration step, the core of the hierarchical training strategy is described by the following two optimization problems \[\hat{\mathbf{\Theta}}_{G}^{(k+1)} =\underset{\mathbf{\Theta}_{G}}{\arg\min}\ \left\{\text{MSE}_{D}\left(\hat{\mathbf{\Theta}}_{U}^{(k)} \right)+\text{MSE}_{PN}\left(\mathbf{\Theta}_{G};\hat{\mathbf{\Theta}}_{U}^{(k)} \right)\right\} \tag{10}\] \[=\underset{\mathbf{\Theta}_{G}}{\arg\min}\ \ \text{MSE}_{PN}\left(\mathbf{\Theta}_{G};\hat{\mathbf{ \Theta}}_{U}^{(k)}\right)\] and \[\hat{\mathbf{\Theta}}_{U}^{(k+1)}=\underset{\mathbf{\Theta}_{U}}{\arg\min}\ \left\{\text{MSE}_{D}\left(\mathbf{\Theta}_{U} \right)+\text{MSE}_{PN}\left(\mathbf{\Theta}_{U};\hat{\mathbf{\Theta}}_{G}^{(k+1)} \right)\right\}, \tag{11}\] where \(\hat{\mathbf{\Theta}}_{U}^{(k)}\) is the estimated parameter set of \(NetU\) at the \(k^{th}\) step, \(\hat{\mathbf{\Theta}}_{G}^{(k+1)}\) is the estimated parameter set of \(NetG\) at the \((k+1)^{th}\) step, and \(\hat{\mathbf{\Theta}}_{U}^{(k+1)}\) is the estimated parameter set of \(NetU\) at the \((k+1)^{th}\) step, which describes the function \(\hat{u}\left(\mathbf{x},t;\hat{\mathbf{\Theta}}_{U}^{(k+1)}\right)\). The details of the hierarchical training strategy are given in Algorithm 1. Note that \(\mathbf{\Theta}_{U}^{(0)}\) and \(\mathbf{\Theta}_{G}^{(0)}\) are used as a given parameter set for \(NetU\) and the initialization of the parameter set for \(NetG\) at Step 0, respectively.
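The alternation between (10) and (11) can be skeletonized as follows. This is an illustrative-only sketch: `minimize_g` and `minimize_u` stand in for the inner solves (L-BFGS in the paper's experiments) and are supplied by the caller; the names are ours.

```python
def hierarchical_training(minimize_g, minimize_u, theta_u0, theta_g0, n_iters=10):
    """Alternating scheme of (10)-(11): freeze NetU's parameters to update
    NetG's, then freeze NetG's to update NetU's."""
    theta_u, theta_g = theta_u0, theta_g0
    for _ in range(n_iters):
        theta_g = minimize_g(theta_u)   # (10): argmin over Theta_G, Theta_U fixed
        theta_u = minimize_u(theta_g)   # (11): argmin over Theta_U, Theta_G fixed
    return theta_u, theta_g

# Toy illustration on a quadratic where both inner solves have closed forms:
# minimizing (g-u)^2 + (g-3)^2 over g gives g = (u+3)/2; minimizing (u-g)^2
# over u gives u = g.  The alternation converges to the joint minimizer u = g = 3.
theta_u, theta_g = hierarchical_training(
    minimize_g=lambda u: (u + 3.0) / 2.0,
    minimize_u=lambda g: g,
    theta_u0=0.0, theta_g0=0.0, n_iters=20)
```

The same two-callable structure applies when the inner solves are neural-network trainings rather than closed-form updates.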
Parameter sets of \(NetG\) and \(NetU\) are transmitted iteratively in the algorithm. ## IV Numerical experiments In this section, our proposed C-PINN is applied to solve several classical PDEs to demonstrate its performance. All the examples are implemented with TensorFlow. A fully connected structure with a hyperbolic tangent activation function is applied, initialized by Xavier. The training dataset \((\mathbf{x},t,u)\in D\) and collocation points \((\mathbf{x},t)\in E\) are then input into \(NetU\) and \(NetG\). L-BFGS [36] is used to hierarchically solve the optimization problems (10) and (11) to couple the two networks. ``` -Initialize: Randomly sample the training dataset \((\mathbf{x},t,u)\in D\) and collocation points \((\mathbf{x},t)\in E\). Randomly generate initial parameter sets \(\mathbf{\Theta}_{U}^{(0)}\) and \(\mathbf{\Theta}_{G}^{(0)}\). - Step 0: Assume the \(k^{th}\) iteration has achieved \(\hat{\mathbf{\Theta}}_{U}^{(k)}\) and \(\hat{\mathbf{\Theta}}_{G}^{(k)}\). - Repeat: - Step \(k\)-1: Train \(NetG\) by solving the optimization problem (10) to obtain \(\hat{\mathbf{\Theta}}_{G}^{(k+1)}\), where the estimate of \(\hat{u}_{t}\left(\mathbf{x},t;\hat{\mathbf{\Theta}}_{U}^{(k)}\right)+\mathcal{N}\left[\hat{u}\left(\mathbf{x},t;\hat{\mathbf{\Theta}}_{U}^{(k)}\right)\right]\) in \(\text{MSE}_{PN}\) is obtained from the former iteration result \(\hat{\mathbf{\Theta}}_{U}^{(k)}\). - Step \(k\)-2: Train \(NetU\) by solving the optimization problem (11) to obtain \(\hat{\mathbf{\Theta}}_{U}^{(k+1)}\), where \(\hat{g}\left(\mathbf{x},t;\hat{\mathbf{\Theta}}_{G}^{(k+1)}\right)\) in \(\text{MSE}_{PN}\) is computed with the result \(\hat{\mathbf{\Theta}}_{G}^{(k+1)}\) of Step \(k\)-1. -Until the stop criterion is satisfied. -Return the solution function \(\hat{\mathbf{\Theta}}_{U}\rightarrow\hat{u}\left(\mathbf{x},t;\hat{\mathbf{\Theta}}_{U}\right)\), which can predict the solution to (7) at any point \((\mathbf{x},t)\) in \(\Omega\).
``` **Algorithm 1** The hierarchical strategy of optimizing and coupling for C-PINN. We evaluate the performance of our proposed C-PINN by means of the root mean squared error (RMSE) \[\text{RMSE}=\sqrt{\frac{1}{|T|}\sum_{(\mathbf{x},t)\in T}\left(u\left( \mathbf{x},t\right)-\hat{u}\left(\mathbf{x},t\right)\right)^{2}},\] where \(T\) is the set of testing collocation points and \(|T|\) is its cardinality; \(u\left(\mathbf{x},t\right)\) and \(\hat{u}\left(\mathbf{x},t\right)\) denote the ground truth and the corresponding predictions, respectively. To further validate the performance of C-PINN, Pearson's correlation coefficient (CC) \[\text{CC}=\frac{\text{cov}\left(u\left(\mathbf{x},t\right),\hat{u}\left(\mathbf{x},t \right)\right)}{\sqrt{\text{Var}\;u\left(\mathbf{x},t\right)}\sqrt{\text{Var}\;\hat{u} \left(\mathbf{x},t\right)}}\] is also used to measure the similarity between ground truth and prediction, where \(\text{cov}\left(u\left(\mathbf{x},t\right),\hat{u}\left(\mathbf{x},t\right)\right)\) is the covariance between \(u(\mathbf{x},t)\) and \(\hat{u}(\mathbf{x},t)\), and \(\text{Var}\;u\left(\mathbf{x},t\right)\) and \(\text{Var}\;\hat{u}\left(\mathbf{x},t\right)\) are the variances of \(u(\mathbf{x},t)\) and \(\hat{u}(\mathbf{x},t)\), respectively. ### _Case 1: 1-D Heat Equation_ C-PINN is first applied to solve the heat equation with unknown external forces, where both Dirichlet and Neumann boundary conditions are considered to demonstrate its performance.
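The RMSE and CC criteria above translate directly into NumPy; the function names below are ours.

```python
import numpy as np

def rmse(u, u_hat):
    """Root mean squared error over the test collocation points."""
    u, u_hat = np.asarray(u, float), np.asarray(u_hat, float)
    return np.sqrt(np.mean((u - u_hat) ** 2))

def pearson_cc(u, u_hat):
    """Pearson's correlation coefficient between ground truth and prediction."""
    u, u_hat = np.asarray(u, float), np.asarray(u_hat, float)
    cov = np.mean((u - u.mean()) * (u_hat - u_hat.mean()))
    return cov / (u.std() * u_hat.std())

truth = [0.0, 1.0, 2.0, 3.0]
pred = [0.1, 0.9, 2.1, 2.9]
err = rmse(truth, pred)       # 0.1: every prediction is off by 0.1
cc = pearson_cc(truth, pred)  # close to 1: predictions track the truth
```

Note that the two criteria are complementary: CC is invariant to an affine offset and scale in the predictions, whereas RMSE penalizes them.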
#### Iv-A1 Dirichlet Boundary Condition Here, we consider the heat equation with Dirichlet boundary condition as follows: \[\frac{\partial u}{\partial t}=a^{2}\frac{\partial^{2}u}{\partial x^{2}}+g(x,t), \quad 0<x<L,t>0 \tag{12}\] \[u|_{t=0}=\phi(x),\quad 0\leqslant x\leqslant L\] \[u|_{x=0}=0,\quad u|_{x=L}=0,\quad t>0,\] where the thermal diffusivity is \(a=1\), \(u(x,t)\) is the primary variable and denotes the temperature at \((x,t)\), \(L=\pi\) is the length of the bounded rod, \(\phi(x)=0\) is the initial temperature, and \(g(x,t)=xe^{-t}\) denotes the unknown external heat source at \((x,t)\). The analytical solution \(u\left(x,t\right)\) to (12) is obtained with respect to [37]. In this experiment, the setup of C-PINN is as follows. There are eight hidden layers with 20 units in each of them for both \(NetU\) and \(NetG\). A total of 110 training data \((x,t,u(x,t))\) in \(D\) with \(t\in[0,6]\), including 10 training data in \(D_{\text{I}}\) and 100 training data in \(D_{\text{B}}\), are randomly sampled, and 10 sparse collocation points are randomly sampled to enforce the structure of (12). Fig. 1 shows the sparse training dataset and the prediction results. Specifically, the magnitude of the predictions \(\hat{u}(x,t)\) using the training dataset is shown in Fig. 1(a) with a heat map. In this case, RMSE is \(4.225390e-02\) and the correlation coefficient is \(9.785444e-01\). Moreover, we compare the ground truths and the predictions at fixed times \(t\)= 1.5, 3, and 4.5 in Fig. 1(b) to (d), respectively. The evaluation criteria in Table I are applied to further quantify the performance of our proposed C-PINN. Subsequently, the experiment for the PDE with Neumann boundary condition is further explored to show the general performance of C-PINN.
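When a closed-form solution to a problem like (12) is not at hand, ground-truth data can be generated numerically. Below is a minimal explicit finite-difference reference solver for the Dirichlet case with \(a=1\), \(L=\pi\), \(\phi(x)=0\), and \(g(x,t)=xe^{-t}\); it is our own sketch, not the code used in the paper.

```python
import numpy as np

def solve_heat_dirichlet(nx=64, nt=8000, L=np.pi, T=6.0):
    """Explicit FTCS scheme for u_t = u_xx + g(x,t) with u(0,t)=u(L,t)=0,
    u(x,0)=0, and the source term g(x,t) = x * exp(-t) of the example."""
    dx, dt = L / nx, T / nt
    assert dt <= 0.5 * dx**2, "stability condition of the explicit scheme"
    x = np.linspace(0.0, L, nx + 1)
    u = np.zeros((nt + 1, nx + 1))                    # u[n, i] ~ u(x_i, t_n)
    for n in range(nt):
        g = x * np.exp(-n * dt)                       # source at time t_n
        u[n + 1, 1:-1] = (u[n, 1:-1]
                          + dt / dx**2 * (u[n, 2:] - 2.0 * u[n, 1:-1] + u[n, :-2])
                          + dt * g[1:-1])
        u[n + 1, 0] = u[n + 1, -1] = 0.0              # Dirichlet boundaries
    return x, u

x, u = solve_heat_dirichlet()
```

Sampling `(x_i, t_n, u[n, i])` triples from the returned grid mimics the sparse measurements used to form \(D\) and \(E\) in the experiments.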
#### Iv-A2 Neumann Boundary Condition The heat equation with Neumann boundary condition is defined as \[\frac{\partial u}{\partial t}=a^{2}\frac{\partial^{2}u}{\partial x^{2}}+g(x,t),\quad 0<x<L,t>0 \tag{13}\] \[u|_{t=0}=\phi(x),\quad\quad 0\leqslant x\leqslant L\] \[u|_{x=0}=0,\quad\left.\frac{\partial u}{\partial x}\right|_{x=L}=0,\quad t>0,\] with thermal diffusivity \(a=1\), length of the bounded rod \(L=\pi\), initial temperature \(\phi(x)=\sin\left(x/2\right)\), and external heat source \(g(x,t)=\sin\left(x/2\right)\). The analytical solution \(u(x,t)\) to (13) is obtained according to [37]. In this example, \(NetU\) consists of three hidden layers with 30 neurons each, and \(NetG\) consists of eight hidden layers with 20 units each. Tuples \((x,t,u(x,t))\) in \(D\) are considered with \(t\in[0,10]\). A total of 130 training data in \(D_{B}\), including 10 initial training data, 60 left-boundary training data, and 60 right-boundary training data, are randomly sampled. Moreover, 20 sparse collocation points are randomly sampled to enforce the structure of (13). The magnitude of the predictions \(\hat{u}(x,t)\) using the training dataset is shown in Fig. 2(a). RMSE is \(5.748950e-02\) and the correlation coefficient is \(9.988286e-01\). Moreover, we compare the ground truths and the predictions at fixed times \(t\)= 3, 6, and 9 in Fig. 2(b) to (d), respectively. The evaluation criteria in Table II are applied to further evaluate the performance of our proposed C-PINN. Fig. 1: (a) Predictions \(\hat{u}(x,t)\) for the 1-D heat equation with Dirichlet boundary condition; (b), (c), and (d) Comparisons of the ground truths and predictions corresponding to the fixed-time \(t\)= 1.5, 3, and 4.5 snapshots depicted by the dashed vertical lines in (a), respectively. Fig. 2: (a) Predictions \(\hat{u}\left(x,t\right)\) for the 1-D heat equation with Neumann boundary condition; (b), (c), and (d) Comparisons of the ground truths and predictions corresponding to the fixed-time \(t\)= 3, 6, and 9 snapshots depicted by the dashed vertical lines in (a), respectively. ### _Case 2: 1-D Wave Equation_ The wave equation is as follows: \[\frac{\partial^{2}u}{\partial t^{2}}=a^{2}\frac{\partial^{2}u}{\partial x^{2}}+g(x,t),\quad 0<x<L,t>0 \tag{14}\] \[u|_{x=0}=0,\quad u|_{x=L}=0,\quad t>0\] \[u|_{t=0}=0,\quad\left.\frac{\partial u}{\partial t}\right|_{t=0}=0,\quad 0\leqslant x\leqslant L,\] where the wave speed is \(a=1\), the length of the bounded string is \(L=\pi\), the time of wave propagation is \(t=6\), and the external force at \((x,t)\) is \[g(x,t)=\sin\frac{2\pi x}{L}\sin\frac{2\pi t}{L};\] the displacement \(u(x,t)\) at \((x,t)\), obtained according to [37], is further investigated. In this experiment, \(NetU\) consists of three hidden layers with 30 neurons each, and \(NetG\) consists of eight hidden layers with 20 units each. A total of 210 training data \((x,t,u\left(x,t\right))\) in \(D\), including 50 initial training data, 120 boundary training data, and 40 collocation points, are randomly sampled. Fig. 3(a) shows the sparse training dataset and the magnitude of the displacement \(\hat{u}(x,t)\) at \((x,t)\). Fig. 3(b) to (d) show the comparisons of ground truths and predictions corresponding to the three fixed times \(t\)=1.5, 3, and 4.5, depicted by the dashed vertical lines in Fig. 3(a), respectively. RMSE is \(7.068626e-02\) and the correlation coefficient is \(9.864411e-01\). The evaluation criteria for the three temporal snapshots are listed in Table III.
### _Case 3: 2-D Poisson Equation_ We further consider the following 2-D Poisson equation \[\begin{split}&\frac{\partial^{2}u}{\partial x^{2}}+\frac{ \partial^{2}u}{\partial y^{2}}=T_{0},\quad 0<x<a,0<y<b\\ & u(x,0)=0,\quad u(x,b)=T,\quad 0\leqslant x\leqslant a\\ & u(0,y)=0,\quad u(a,y)=0,\quad 0\leqslant y\leqslant b,\end{split} \tag{15}\] where \(T=1\), the constant source term \(T_{0}=1\) is unknown, and \(a=b=1\). The analytical solution \(u(x,y)\) to (15) is obtained according to [37]. In this experiment, the setup of C-PINN is as follows. There are eight hidden layers with 20 units in each of them for both \(NetU\) and \(NetG\). Thirty training data in \(D_{\text{B}}\) and 3 collocation points in \(D_{\text{I}}\) are used. Fig. 4(a) shows the sparse training dataset and the predictions \(\hat{u}(x,y)\). Fig. 4(b) to (d) show the prediction performance at the fixed-location \(y\)=0.2, 0.4, and 0.6 snapshots depicted in Fig. 4(a), respectively. RMSE is \(1.594000e-02\) and the correlation coefficient is \(9.997390e-01\). The corresponding evaluation criteria are listed in Table IV. ### _Case 4: 3-D Helmholtz Equation_ C-PINN is also applied to solve the 3-D Helmholtz equation with an unknown source term. In particular, we consider the same test PDEs that were previously suggested in [26]: \[\begin{split}\Delta u(\mathbf{x})+p^{2}u(\mathbf{x})&=g( \mathbf{x})\text{ in }\Omega\subset\mathbb{R}^{3}\\ u(\mathbf{x})&=u_{0}(\mathbf{x})\text{ on }\partial\Omega, \end{split} \tag{16}\] where \(\Delta=\frac{\partial^{2}}{\partial x^{2}}+\frac{\partial^{2}}{\partial y^{2}}+\frac{\partial^{2}}{\partial z^{2}}\) is the Laplacian operator, \(\mathbf{x}=(x,y,z)^{\top}\) denotes the coordinates with \(x,y,z\in(0,1/4]\), \(p=5\) is the wavenumber, and a suitable \(g(\mathbf{x})\) is chosen as the right-hand side of (16) so that \[u(\mathbf{x})=(0.1\sin\left(2\pi x\right)+\tanh\left(10x\right))\sin\left(2\pi y \right)\sin\left(2\pi z\right)\] is the analytical solution of (16) [26]. Fig. 3: (a) Predictions \(\hat{u}\left(x,t\right)\) for the 1-D wave equation; (b), (c), and (d) Comparisons of the ground truths and predictions corresponding to the fixed-time \(t\)=1.5, 3, and 4.5 snapshots depicted by the dashed vertical lines in Fig. 3(a), respectively. Fig. 4: (a) Predictions \(\hat{u}\left(x,y\right)\) for the 2-D Poisson equation; (b), (c), and (d) Comparisons of the ground truths and predictions corresponding to the fixed-location \(y=0.2\), 0.4, and 0.6 snapshots depicted by the dashed vertical lines in (a), respectively. In this experiment, \(NetU\) consists of three hidden layers with 100, 50, and 50 neurons, respectively, and \(NetG\) consists of eight hidden layers with 20 units each. Sixty training data and 120 collocation points are sampled. Fig. 5(a) shows the solution on the \((x,y,z=0.12)\) snapshot. Furthermore, Fig. 5(b) to (d) show the comparisons of ground truths and predictions extracted at \((x=0.05,z=0.12)\), \((x=0.15,z=0.12)\), and \((x=0.2,z=0.12)\), respectively. The evaluation criteria for these extractions are listed in Table V. In this experiment, RMSE is \(1.192859e-01\), and the correlation coefficient is \(9.057524e-01\). ## V Conclusion This paper proposes a novel PINN, called C-PINN, to solve PDEs with little or even no prior information about source terms. In our approach, two neural networks, \(NetU\) and \(NetG\), are proposed with a fully connected structure: \(NetU\) approximates the solution satisfying the PDEs under study, and \(NetG\) regularizes the training of \(NetU\). Then, the two networks are integrated into a data-physics-hybrid cost function. Furthermore, the two networks are optimized and coupled by the proposed hierarchical training strategy.
Finally, C-PINN is applied to solve several classical PDEs to demonstrate its performance. Note that C-PINN inherits the advantages of PINNs, such as sparsity and automatic differentiation. C-PINN is proposed to resolve the dilemma in which the governing equations of dynamical systems involve unknown forces. Thus, C-PINN can be further applied to infer the unknown source terms. Meanwhile, C-PINN can be extended to identify operators from sparse measurements. In the future, we will continue to apply our C-PINN in various scenarios, such as solving PDEs with unknown structure parameters and high-dimensional PDEs. For the case where the structure of the PDE is totally unknown, regularization methods will be combined with C-PINN to select operators from the sparse measurements. Our proposed C-PINN has been shown to solve several classical PDEs successfully. For more complex situations, feature-extraction operations, such as convolution and pooling, will be added to C-PINN.
2307.04347
Injecting Logical Constraints into Neural Networks via Straight-Through Estimators
Injecting discrete logical constraints into neural network learning is one of the main challenges in neuro-symbolic AI. We find that a straight-through-estimator, a method introduced to train binary neural networks, could effectively be applied to incorporate logical constraints into neural network learning. More specifically, we design a systematic way to represent discrete logical constraints as a loss function; minimizing this loss using gradient descent via a straight-through-estimator updates the neural network's weights in the direction that the binarized outputs satisfy the logical constraints. The experimental results show that by leveraging GPUs and batch training, this method scales significantly better than existing neuro-symbolic methods that require heavy symbolic computation for computing gradients. Also, we demonstrate that our method applies to different types of neural networks, such as MLP, CNN, and GNN, making them learn with no or fewer labeled data by learning directly from known constraints.
Zhun Yang, Joohyung Lee, Chiyoun Park
2023-07-10T05:12:05Z
http://arxiv.org/abs/2307.04347v1
# Injecting Logical Constraints into Neural Networks via Straight-Through Estimators ###### Abstract Injecting discrete logical constraints into neural network learning is one of the main challenges in neuro-symbolic AI. We find that a straight-through-estimator, a method introduced to train binary neural networks, could effectively be applied to incorporate logical constraints into neural network learning. More specifically, we design a systematic way to represent discrete logical constraints as a loss function; minimizing this loss using gradient descent via a straight-through-estimator updates the neural network's weights in the direction that the binarized outputs satisfy the logical constraints. The experimental results show that by leveraging GPUs and batch training, this method scales significantly better than existing neuro-symbolic methods that require heavy symbolic computation for computing gradients. Also, we demonstrate that our method applies to different types of neural networks, such as MLP, CNN, and GNN, making them learn with no or fewer labeled data by learning directly from known constraints. ## 1 Introduction Neuro-symbolic AI (Besold et al., 2017; Mao et al., 2019; De Raedt et al., 2019; Garcez et al., 2019) aims to combine deep neural network learning and symbolic AI reasoning, which look intrinsically different from each other on the surface. It appears hard to incorporate discrete logical reasoning into the conventional gradient descent method, which deals with continuous values. Some recent works in neuro-symbolic AI (Manhaeve et al., 2018; Yang et al., 2020; Pogancic et al., 2020; Tsamoura et al., 2021) associate continuous parameters in neural networks (NNs) with logic languages so that logical reasoning applied to NN outputs produces "semantic loss" (Xu et al., 2018). Minimizing such loss leads to updating NN parameters via backpropagation through logic layers.
Like human learning, which leverages known constraints, these methods have shown promising results, allowing NNs to learn effectively from fewer data by exploiting semantic constraints. On the other hand, the symbolic computation performed during backpropagation is implemented by weighted model counting using circuits (Darwiche, 2011; Manhaeve et al., 2018; Tsamoura et al., 2021) or by calling symbolic solvers (Pogancic et al., 2020; Yang et al., 2020), which is often computationally expensive; it takes too long to generate arithmetic circuits or to enumerate all models or proofs by calling symbolic solvers. One main reason for the development of these different ideas is that a naive representation of discrete constraints as a loss function is not meaningfully differentiable. Even on the intervals where it is differentiable, the gradient is zero, so NNs won't update their weights. To address this, we turn to the idea of straight-through estimators (STE) (Courbariaux et al., 2015), which were originally introduced to train binary neural networks (BNNs) -- neural networks with binary weights and activations at run-time. The main idea of STE is to use a binarization function in forward propagation while its gradient, which is zero almost everywhere, is replaced by the gradient of a different, meaningfully differentiable function in backward propagation. It turns out that the method works well for NN quantization in practice. However, adopting STE alone is not enough for learning with constraints. We need a systematic method of encoding logical constraints as a loss function and must ensure that its gradient enables NNs to learn the logical constraints. This paper makes the following contributions. * We design a systematic way to encode logical constraints in propositional logic as a loss function in neural network learning, which we call _CL-STE_.
We demonstrate that minimizing this loss function via STE enforces the logical constraints in neural network learning so that neural networks learn from the explicit constraints. * We show that by leveraging GPUs and batch training, CL-STE scales significantly better than the other neuro-symbolic learning methods that use heavy symbolic computation for computing gradients. * We also find that the concept of the Trainable Gate Function (TGF) (Kim et al., 2020), which was applied to channel pruning, is closely related to STE. We establish the precise relationship between them, which gives a new perspective on STE. The paper is organized as follows. Section 2 presents related works, and Section 3 reviews STE and TGF and establishes their relationship. Section 4 presents our loss function representation of logical constraints and proves its properties assuming minimization via STE, and Section 5 shows experimental results. The implementation of our method is publicly available online at [https://github.com/azreasoners/cl-ste](https://github.com/azreasoners/cl-ste). ## 2 Related Work Our work is closely related to (Xu et al., 2018), which proposes a semantic loss function to bridge NN outputs and logical constraints. The method treats an NN output as a probability and computes semantic loss as the negative logarithm of the probability of generating a state satisfying the logical constraints. Their experiments show that the encoded semantic loss function guides the learner to achieve state-of-the-art results in supervised and semi-supervised learning on multi-class classification. For the efficient computation of the loss function, they encode logical constraints in a Sentential Decision Diagram (SDD) (Darwiche, 2011). However, generating SDDs is computationally expensive for most practical tasks.
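For a small constraint, the semantic loss of Xu et al. (2018) can be computed by brute force as the negative log-probability that sampling each atom independently from the network's output yields a satisfying assignment. The sketch below (ours, for illustration only; SDD compilation exists precisely because this enumeration is exponential in the number of atoms) makes the definition concrete for an exactly-one constraint over three atoms.

```python
import itertools
import math

def semantic_loss(probs, constraint):
    """-log of the total probability mass on satisfying assignments:
    sum over v |= constraint of prod_i p_i^{v_i} (1-p_i)^{1-v_i}."""
    total = 0.0
    for v in itertools.product([0, 1], repeat=len(probs)):
        if constraint(v):
            weight = 1.0
            for p, vi in zip(probs, v):
                weight *= p if vi else 1.0 - p
            total += weight
    return -math.log(total)

exactly_one = lambda v: sum(v) == 1  # one-hot constraint over the atoms

confident = semantic_loss([0.9, 0.05, 0.05], exactly_one)  # near-one-hot output
uncertain = semantic_loss([1/3, 1/3, 1/3], exactly_one)    # maximally unsure
```

An output distribution concentrated on a satisfying assignment incurs a lower loss than a uniform one, which is the gradient signal that pushes the network toward constraint-satisfying predictions.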
Several neuro-symbolic formalisms, such as DeepProbLog (Manhaeve et al., 2018), NeurASP (Yang et al., 2020), and NeuroLog (Tsamoura et al., 2021), have been proposed to integrate neural networks with logic programming languages. Since discrete logical inference cannot in general be captured via a differentiable function, they use relaxation to weighted models or probability. While this approach provides a systematic representation of constraints, the symbolic computation is often the bottleneck in training. Since fuzzy logic operations are naturally differentiable, several works, such as Logic Tensor Networks (Serafini and Garcez, 2016), Continuous Query Decomposition (Arakelyan et al., 2020), and Semantic Based Regularization (Diligenti et al., 2017; Roychowdhury et al., 2021), directly apply fuzzy operators to neural network outputs. However, as stated in (Marra et al., 2021), the fuzzification procedure alters the logical properties of the original theory (such as satisfiability). Other works train neural networks for learning satisfiability, such as (Wang et al., 2019; Selsam et al., 2019). SATNet (Wang et al., 2019) builds on a line of research exploring SDP relaxations as a tool for solving MAXSAT, producing tighter approximation guarantees than standard linear programming relaxations. Graph Neural Networks (GNNs) (Battaglia et al., 2018; Lamb et al., 2020) have been widely applied to logical reasoning. For example, the Recurrent Relational Network (RRN) was able to learn how to solve Sudoku puzzles. GNNs use message passing to propagate logical constraints in neural networks, but they do not have a mechanism to specify the logical constraints directly as we do. While STE has not been exploited in neuro-symbolic learning to the best of our knowledge, (Pogancic et al., 2020)'s work is related in that it also uses a gradient that is different from the forward function's gradient. It uses the gradient obtained from a linear relaxation of the forward function.
The work also requires a combinatorial solver to compute the gradient. ## 3 Straight-Through-Estimators and Trainable Gate Function **Review**. STEs are used to estimate the gradients of a discrete function. Courbariaux et al. (2015) consider a binarization function \(b\) that transforms real-valued weights \(x\) into discrete values \(b(x)\) as \(b(x)=1\) if \(x\geq 0\) and \(b(x)=0\) otherwise. A loss function \(L\) is defined on binarized weights \(b(x)\), but gradient descent cannot update binarized weights in small increments. However, using STE, we could update the real-valued weights \(x\) that are input to \(b(x)\). In the end, a quantized model consists of binarized weights \(b(x)\) only. More specifically, according to the chain rule, the gradient of loss \(L\) w.r.t. \(x\) is \(\frac{\partial L}{\partial x}=\frac{\partial L}{\partial b(x)}\times\frac{ \partial b(x)}{\partial x}\), where \(\frac{\partial b(x)}{\partial x}\) is zero almost everywhere. The idea is to replace \(\frac{\partial b(x)}{\partial x}\) with an STE \(\frac{\partial s(x)}{\partial x}\) for some (sub)differentiable function \(s(x)\). The STE \(\frac{\partial s(x)}{\partial x}\) is called the _identity STE_ (iSTE) if \(s(x)=x\) and is called the _saturated STE_ (sSTE) if \(s(x)=clip(x,[-1,1])=min(max(x,-1),1)\). Since \(\frac{\partial s(x)}{\partial x}=1\) for the identity, we write \(\frac{\partial L}{\partial x}\stackrel{iSTE}{\approx}\frac{\partial L}{\partial b(x)}\) to denote the identification of \(\frac{\partial L}{\partial x}\) with \(\frac{\partial L}{\partial b(x)}\) under iSTE. The binarization function \(b(x)\) passes only the sign of \(x\), while information about the magnitude of \(x\) is lost (Simons and Lee, 2019). In XNOR-Net (Rastegari et al., 2016), the input \(x\) is normalized to have zero mean and a small variance before binarization to reduce the information loss.
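The forward/backward asymmetry just described can be written in a few lines of NumPy: the forward pass uses \(b(x)\), while the backward pass pretends \(b\) was \(s(x)\) (the identity for iSTE, \(clip(x,[-1,1])\) for sSTE). This is a framework-free sketch of ours; in practice one would implement it as a custom autograd function in a deep learning framework.

```python
import numpy as np

def binarize_forward(x):
    """b(x) = 1 if x >= 0 else 0, applied elementwise."""
    return (np.asarray(x) >= 0).astype(float)

def binarize_backward(x, grad_out, ste="iSTE"):
    """Surrogate gradient dL/dx given grad_out = dL/db(x).
    iSTE passes grad_out through unchanged; sSTE zeroes it outside [-1, 1]."""
    x = np.asarray(x, float)
    if ste == "iSTE":
        return grad_out                      # d s(x)/dx = 1 for s(x) = x
    return grad_out * (np.abs(x) <= 1.0)     # s(x) = clip(x, [-1, 1])

x = np.array([-2.0, -0.3, 0.0, 0.7, 1.5])
y = binarize_forward(x)        # forward pass sees only the signs of x
g = np.ones_like(x)            # pretend dL/db(x) = 1 at every coordinate
```

Under iSTE every real-valued weight receives the full upstream gradient; under sSTE the weights already saturated outside \([-1,1]\) stop moving, which is the practical difference between the two estimators.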
In this paper, we normalize \(x\) by turning it into a probability using softmax or sigmoid activation functions. Indeed, several neuro-symbolic learning methods (e.g., Deep ProbLog, NeurASP, NeuroLog) assume the neural network outputs fed into the logic layer are normalized as probabilities. To address a probabilistic input, we introduce a variant binarization function \(b_{p}(x)\) for probabilities \(x\in[0,1]\): \(b_{p}(x)=1\) if \(x\geq 0.5\) and \(b_{p}(x)=0\) otherwise. It is easy to see that iSTE and sSTE work the same with \(b_{p}(x)\) since \(x=clip(x,[-1,1])\) when \(x\in[0,1]\). A vector \(\mathbf{x}\) is allowed as input to the binarization functions \(b\) and \(b_{p}\), in which case they are applied to each element of \(\mathbf{x}\). TGF and Its Relation to STE.The concept of STE is closely related to that of the Trainable Gate Function (TGF) from (Kim et al., 2020), which was applied to channel pruning. Instead of replacing the gradient \(\frac{\partial b(x)}{\partial x}\) with an STE, TGF tweaks the binarization function \(b(x)\) to make it meaningfully differentiable. More specifically, a differentiable binarization function \(\widetilde{b}^{K}\) is defined as \[\widetilde{b}^{K}(x)=b(x)+s^{K}(x)g(x), \tag{1}\] where \(K\) is a large constant; \(s^{K}(x)=\frac{Kx-\lfloor Kx\rfloor}{K}\) is called a _gradient tweaking_ function, whose value is less than \(\frac{1}{K}\) and whose gradient is always 1 wherever differentiable; \(g(x)\) is called a _gradient shaping_ function, which could be an arbitrary function, but the authors note that the selection does not affect the results critically and \(g(x)=1\) can be adopted without significant loss of accuracy. As obvious from Figure 1, as \(K\) becomes large, TGF \(\widetilde{b}^{K}(x)\) is an approximation of \(b(x)\), but its gradient is \(1\) wherever differentiable. 
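The sawtooth approximation described above can be checked numerically: with \(g(x)=1\), the correction \(s^{K}(x)g(x)\) is bounded by \(1/K\), so \(\widetilde{b}^{K}(x)\) deviates from \(b(x)\) by at most \(1/K\). A small check of ours:

```python
import math

def b(x):
    """Binarization function: 1 if x >= 0 else 0."""
    return 1.0 if x >= 0 else 0.0

def s(x, K):
    """Gradient-tweaking sawtooth s^K(x) = (Kx - floor(Kx)) / K:
    value below 1/K, slope 1 wherever differentiable."""
    return (K * x - math.floor(K * x)) / K

def tgf(x, K, g=lambda x: 1.0):
    """Trainable gate function b~^K(x) = b(x) + s^K(x) g(x) from (1)."""
    return b(x) + s(x, K) * g(x)

K = 10**6
xs = [-1.7, -0.4, 0.0, 0.3, 2.6]
gap = max(abs(tgf(x, K) - b(x)) for x in xs)   # bounded by 1/K
```

The values of `tgf` stay within \(1/K\) of the step function while still separating negative inputs from nonnegative ones, which is the sense in which TGF simulates \(b(x)\) for large \(K\).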
Proposition 3.1 tells us a precise relationship between TGF and STE: when \(K\) is big enough, the binarization function \(b(x)\) with iSTE or sSTE can be simulated by TGF. In other words, Proposition 3.1 allows us to visualize \(b(x)\) with STE as the TGF \(\widetilde{b}^{K}(x)\) with \(K=\infty\), as Figure 1 illustrates.

**Proposition 3.1**.: _When \(K\) approaches \(\infty\) and \(|g(x)|\leq c\) for some constant \(c\), the value of \(\widetilde{b}^{K}(x)\) converges to \(b(x)\):_

\[\lim_{K\to\infty}\widetilde{b}^{K}(x)=b(x).\]

_The gradient \(\frac{\partial\widetilde{b}^{K}(x)}{\partial x}\), wherever defined, is exactly the iSTE of \(\frac{\partial b(x)}{\partial x}\) if \(g(x)=1\), or the sSTE of \(\frac{\partial b(x)}{\partial x}\) if_

\[g(x)=\begin{cases}1&\text{if $-1\leq x\leq 1$},\\ 0&\text{otherwise.}\end{cases}\]

Proposition 3.1 still holds if we replace \(b(x)\) with \(b_{p}(x)\). The proposition yields insights into STE and TGF in terms of each other. As shown in Figure 1, TGF is a sawtooth function that approximates a step function as \(K\) becomes large. For large \(K\), TGF works like a discrete function, but it is differentiable almost everywhere. In view of Proposition 3.1, this fact gives an idea of why the STE method works in practice. On the other hand, the proposition tells us that the implementation of TGF can be replaced with STE. The latter could be better because TGF in equation (1) requires \(K\) to approach infinity and is non-differentiable when \(x\) is a multiple of \(\frac{1}{K}\), whereas STE is differentiable at every \(x\).

## 4 Enforcing Logical Constraints using STE

This section presents our method of encoding logical constraints in propositional logic as a loss function so that minimizing its value via STE makes neural network predictions follow the logical constraints.

### Encoding CNF as a Loss Function Using STE

We first review the terminology of propositional logic. A _signature_ is a set of symbols called _atoms_.
Each atom represents a proposition that is true or false. A _literal_ is either an atom \(p\) (_positive literal_) or its negation \(\neg p\) (_negative literal_). A _clause_ is a disjunction over literals, e.g., \(p_{1}\vee\neg p_{2}\lor p_{3}\). A _Horn clause_ is a clause with at most one positive literal. We assume a (_propositional_) _theory_ consisting of a set of clauses (sometimes called a _CNF (Conjunctive Normal Form) theory_). A truth assignment to atoms _satisfies_ (denoted by \(\models\)) a theory if at least one literal in each clause is true under the assignment. A theory is _satisfiable_ if at least one truth assignment satisfies the theory. A theory _entails_ (also denoted by \(\models\)) a literal if every truth assignment that satisfies the theory also satisfies that literal.

We define a general loss function \(L_{cnf}\) for any CNF theory as follows. Here, boldface uppercase and lowercase letters (e.g., \(\mathbf{C}\) and \(\mathbf{v}\)) denote matrices and vectors, respectively; \(\mathbf{C}[i,j]\) and \(\mathbf{v}[i]\) denote their elements. Consider a propositional signature \(\sigma=\{p_{1},\ldots,p_{n}\}\). Given (i) a theory \(C\) consisting of \(m\) clauses (encoding domain knowledge), (ii) a set \(F\) of atoms denoting some atomic facts that we assume known to be true (representing the ground-truth label of a data instance), and (iii) a truth assignment \(v\) such that \(v\models F\), we construct their matrix/vector representations as
* the matrix \(\mathbf{C}\in\{-1,0,1\}^{m\times n}\) to represent the theory such that \(\mathbf{C}[i,j]\) is \(1\) (\(-1\), resp.) if \(p_{j}\) (\(\neg p_{j}\), resp.)
belongs to the \(i\)-th clause in the theory, and is \(0\) if neither \(p_{j}\) nor \(\neg p_{j}\) belongs to the clause;
* the vector \(\mathbf{f}\in\{0,1\}^{n}\) to represent \(F\) such that \(\mathbf{f}[j]\) is \(1\) if \(p_{j}\in F\) and is \(0\) otherwise; and
* the vector \(\mathbf{v}\in\{0,1\}^{n}\) to represent \(v\) such that \(\mathbf{v}[j]\) is \(1\) if \(v(p_{j})=\texttt{true}\), and is \(0\) if \(v(p_{j})=\texttt{false}\).

Figure 1: Trainable gate function \(\widetilde{b}^{K}(x)\) when \(g(x)=1\)

Figure 2 shows an architecture that overlays the two loss functions \(L_{bound}\) and \(L_{cnf}\) over the neural network output, where \(L_{cnf}\) is the main loss function encoding the logical constraints and \(L_{bound}\) is a regularizer that limits the raw neural network output so that it does not grow too big (more details will follow). The part \(\mathbf{input}\) is a tensor (e.g., images) for a data instance; \(\mathbf{label}\) denotes the labels of the input data; \(\mathbf{C}\) encodes the domain knowledge; \(\mathbf{x}\in[0,1]^{n}\) denotes the NN output (in probabilities); and \(\mathbf{f}\in\{0,1\}^{n}\) records the known facts in that data instance (e.g., the given digits in Sudoku).1 Let \(\mathbb{1}_{\{k\}}(X)\) denote an indicator function that replaces every element in \(X\) with \(1\) if it is \(k\) and with \(0\) otherwise. Then the binary prediction \(\mathbf{v}\) is constructed as \(\mathbf{v}=\mathbf{f}+\mathbb{1}_{\{0\}}(\mathbf{f})\odot b_{p}(\mathbf{x})\), where \(\odot\) denotes element-wise multiplication. Intuitively, \(\mathbf{v}\) is the binarized NN output with part of it strictly following the given facts specified in \(\mathbf{f}\) (ensuring \(v\models F\)).

Footnote 1: In case the length of \(\mathbf{x}\) is less than \(n\), we pad \(\mathbf{x}\) with \(0\)s for all the atoms that are not related to the NN output.
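The matrix/vector encoding and the construction \(\mathbf{v}=\mathbf{f}+\mathbb{1}_{\{0\}}(\mathbf{f})\odot b_{p}(\mathbf{x})\) can be sketched in numpy as follows; the two-clause theory and all names are our own toy example, not from the paper's benchmarks.

```python
import numpy as np

# Toy signature {p1, p2, p3}; theory {p1 ∨ ¬p2, ¬p1 ∨ p3}; facts F = {p1}.
clauses = [[("p1", 1), ("p2", -1)],
           [("p1", -1), ("p3", 1)]]
atoms = ["p1", "p2", "p3"]
idx = {a: j for j, a in enumerate(atoms)}

C = np.zeros((len(clauses), len(atoms)), dtype=int)
for i, clause in enumerate(clauses):
    for atom, sign in clause:
        C[i, idx[atom]] = sign          # +1 for p_j, -1 for ¬p_j

f = np.zeros(len(atoms), dtype=int)
f[idx["p1"]] = 1                        # F = {p1}

x = np.array([0.2, 0.6, 0.8])           # NN output probabilities
b_p = (x >= 0.5).astype(int)            # hard binarization b_p(x)
v = f + (f == 0).astype(int) * b_p      # prediction, forced to agree with F

print(C.tolist())   # [[1, -1, 0], [-1, 0, 1]]
print(v.tolist())   # [1, 1, 1]  (p1 forced true by F despite x[0] < 0.5)
```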
**Example 4.1**.: _Consider a simple example \(\mathbf{mnistAdd}\) from (Manhaeve et al., 2018), where the task is, given a pair of MNIST digit images and their sum as the label, to let a neural network learn the digit classification of the input images. The example is used to demonstrate how NNs can learn from known constraints. In Figure 2, the input consists of two-digit images \(i_{1}\) and \(i_{2}\), and the label is an integer \(l\) in \(\{0,...,18\}\) denoting the sum of \(i_{1}\) and \(i_{2}\). The neural network is the same Convolutional Neural Network (CNN) used in (Manhaeve et al., 2018)._ _The theory for this problem consists of the following clause for \(l\in\{0,\dots,18\}\), where \(sum(l)\) represents "the sum of \(i_{1}\) and \(i_{2}\) is \(l\)" and \(pred(n_{1},n_{2})\) represents "the neural network predicts \(i_{1}\) and \(i_{2}\) as \(n_{1}\) and \(n_{2}\) respectively":_ \[\neg sum(l)\vee\bigvee_{\begin{subarray}{c}n_{1},n_{2}\in\{0,\dots,9\}:\\ n_{1}+n_{2}=l\end{subarray}}pred(n_{1},n_{2}).\] _This theory contains \(19+100=119\) atoms for \(sum/1\) and \(pred/2\) respectively. We construct the matrix \(\mathbf{C}\in\{-1,0,1\}^{19\times 119}\), where each row represents a clause. For instance, the row for the clause \(\neg sum(1)\lor pred(0,1)\lor pred(1,0)\) is a vector in \(\{-1,0,1\}^{1\times 119}\) containing a single \(-1\) for atom \(sum(1)\), two \(1s\) for atoms \(pred(0,1)\) and \(pred(1,0)\), and 116 \(0\)s._ _Vectors \(\mathbf{f}\) and \(\mathbf{v}\) are in \(\{0,1\}^{119}\) constructed from each data instance \(\langle i_{1},i_{2},l\rangle\). The fact vector \(\mathbf{f}\) contains a single \(1\) for atom \(sum(l)\) (ground truth label) and 118 \(0\)s. 
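A sketch of building this \(19\times 119\) matrix \(\mathbf{C}\) in code, assuming the atom ordering \(pred(n_{1},n_{2})\mapsto 10n_{1}+n_{2}\) and \(sum(l)\mapsto 100+l\) (which matches the indexing used for \(\mathbf{x}\) in the continuation of this example):

```python
import numpy as np

# One row per clause ¬sum(l) ∨ ⋁ pred(n1, n2) with n1 + n2 = l.
n_atoms = 119                              # 100 pred/2 atoms + 19 sum/1 atoms
C = np.zeros((19, n_atoms), dtype=int)
for l in range(19):
    C[l, 100 + l] = -1                     # literal ¬sum(l)
    for n1 in range(10):
        n2 = l - n1
        if 0 <= n2 <= 9:
            C[l, 10 * n1 + n2] = 1         # literal pred(n1, n2)

row = C[1]                                 # clause ¬sum(1) ∨ pred(0,1) ∨ pred(1,0)
print(int(row[101]), int(row[1]), int(row[10]), int(np.count_nonzero(row)))
# -1 1 1 3
```

The row for \(l=1\) indeed contains a single \(-1\), two \(1\)s, and 116 \(0\)s, as described above.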
To obtain the prediction vector \(\mathbf{v}\), we (i) feed images \(i_{1}\),\(i_{2}\) into the CNN (with softmax at the last layer) from (Manhaeve et al., 2018) to obtain the outputs \(\mathbf{x}_{1},\mathbf{x}_{2}\in[0,1]^{10}\) (consisting of probabilities), (ii) construct the vector \(\mathbf{x}\in[0,1]^{100}\) (for 100 atoms of \(pred/2\)) such that \(\mathbf{x}[10i+j]\) is \(\mathbf{x}_{1}[i]\times\mathbf{x}_{2}[j]\) for \(i,j\in\{0,\dots,9\}\), (iii) update \(\mathbf{x}\) as the concatenation of \(\mathbf{x}\) and \(\{0\}^{19}\), and (iv) finally, let \(\mathbf{v}=\mathbf{f}+\mathbb{1}_{\{0\}}(\mathbf{f})\odot b_{p}(\mathbf{x})\)._ Using \(\mathbf{C}\), \(\mathbf{v}\), and \(\mathbf{f}\), we define the _CNF loss_\(L_{cnf}(\mathbf{C},\mathbf{v},\mathbf{f})\) as follows: \[\mathbf{L}_{f} =\mathbf{C}\odot\mathbf{f} \tag{2}\] \[\mathbf{L}_{v} =\mathbb{1}_{\{1\}}(\mathbf{C})\odot\mathbf{v}+\mathbb{1}_{\{-1 \}}(\mathbf{C})\odot(1-\mathbf{v})\] (3) \[\mathbf{deduce} =\mathbb{1}_{\{1\}}\Big{(}sum(\mathbf{C}\odot\mathbf{C})-sum( \mathbb{1}_{\{-1\}}(\mathbf{L}_{f}))\Big{)}\] (4) \[\mathbf{unsat} =prod(1-\mathbf{L}_{v})\] (5) \[\mathbf{keep} =sum(\mathbb{1}_{\{1\}}(\mathbf{L}_{v})\odot(1-\mathbf{L}_{v})+ \mathbb{1}_{\{0\}}(\mathbf{L}_{v})\odot\mathbf{L}_{v})\] (6) \[L_{deduce} =sum(\mathbf{deduce}\odot\mathbf{unsat})\] (7) \[L_{unsat} =avg(\mathbb{1}_{\{1\}}(\mathbf{unsat})\odot\mathbf{unsat})\] (8) \[L_{sat} =avg(\mathbb{1}_{\{0\}}(\mathbf{unsat})\odot\mathbf{keep}) \tag{9}\] \[L_{cnf}(\mathbf{C},\mathbf{v},\mathbf{f})=L_{deduce}+L_{unsat}+L_{sat} \tag{10}\] where \(prod(X)\), \(sum(X)\), and \(avg(X)\) compute the product, sum, and average of the elements in \(X\) along its last dimension. 2 Although these equations may look complex, it helps to know that they use the form \(\mathbb{1}_{\{k\}}(X_{1})\odot X_{2}\), where the indicator function \(\mathbb{1}_{\{k\}}(X_{1})\) can be seen as a constant that is multiplied to a trainable variable \(X_{2}\). 
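Equations (2)–(10) translate almost line-for-line into array code. The forward value can be sketched as follows (numpy, our names; in actual training the loss is computed on the STE-binarized \(\mathbf{v}\) so that gradients reach \(\mathbf{x}\), which this forward-only sketch omits):

```python
import numpy as np

def ind(X, k):
    """Indicator 1_{k}(X): 1 where an element equals k, else 0."""
    return (X == k).astype(float)

def L_cnf(C, v, f):
    """Forward value of the CNF loss, following equations (2)-(10)."""
    Lf = C * f                                                 # (2)
    Lv = ind(C, 1) * v + ind(C, -1) * (1 - v)                  # (3)
    deduce = ind((C * C).sum(-1) - ind(Lf, -1).sum(-1), 1)     # (4)
    unsat = np.prod(1 - Lv, axis=-1)                           # (5)
    keep = (ind(Lv, 1) * (1 - Lv) + ind(Lv, 0) * Lv).sum(-1)   # (6)
    L_deduce = (deduce * unsat).sum()                          # (7)
    L_unsat = (ind(unsat, 1) * unsat).mean()                   # (8)
    L_sat = (ind(unsat, 0) * keep).mean()                      # (9)
    return L_deduce + L_unsat + L_sat                          # (10)

C = np.array([[-1, -1, 1], [-1, 1, 0]], dtype=float)  # {¬a∨¬b∨c, ¬a∨b}
f = np.array([1, 0, 0], dtype=float)                  # F = {a}
print(L_cnf(C, np.array([1.0, 0.0, 1.0]), f))  # 1.5 (clause ¬a∨b unsatisfied)
print(L_cnf(C, np.array([1.0, 1.0, 1.0]), f))  # 0.0 (v satisfies the theory)
```

The second call illustrates the property proved later: the loss is zero exactly when the prediction satisfies every clause.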
Take equation (8) as an example. To minimize \(L_{unsat}\), the NN parameters will be updated towards making \(\mathbf{unsat}[i]\) equal to \(0\) whenever \(\mathbb{1}_{\{1\}}(\mathbf{unsat})\) is \(1\), i.e., towards making unsatisfied clauses satisfied.

Footnote 2: The aggregated dimension is "squeezed," which is the default behavior in PyTorch aggregate functions (keepdim is False).

In equations (2) and (3), \(\mathbf{f}\) and \(\mathbf{v}\) are treated as matrices in \(\{0,1\}^{1\times n}\) to allow element-wise multiplication (with broadcasting) with a matrix in \(\{-1,0,1\}^{m\times n}\). Take equation (2) as an example: \(\mathbf{L}_{f}[i,j]=\mathbf{C}[i,j]\times\mathbf{f}[j]\). \(\mathbf{L}_{f}\) is the matrix in \(\{-1,0,1\}^{m\times n}\) such that (i) \(\mathbf{L}_{f}[i,j]=1\) iff clause \(i\) contains literal \(p_{j}\) and \(p_{j}\in F\); (ii) \(\mathbf{L}_{f}[i,j]=-1\) iff clause \(i\) contains literal \(\neg p_{j}\) and \(p_{j}\in F\); (iii) otherwise, \(\mathbf{L}_{f}[i,j]=0\).

Figure 2: Architecture that overlays constraint loss

\(\mathbf{L}_{v}\) is the matrix in \(\{0,1\}^{m\times n}\) such that \(\mathbf{L}_{v}[i,j]=1\) iff clause \(i\) contains a literal (\(p_{j}\) or \(\neg p_{j}\)) for atom \(p_{j}\) and this literal is true under \(v\). In equations (4), (5), and (6), \(sum(\mathbf{C}\odot\mathbf{C})\) is a vector in \(\mathbb{N}^{m}\) representing, for each clause, the number of literals, and \(sum(\mathbbm{1}_{\{-1\}}(\mathbf{L}_{f}))\) is a vector in \(\mathbb{N}^{m}\) representing, for each clause, the number of literals that are false under \(F\) (i.e., the number of literals of the form \(\neg p\) such that \(p\in F\)). Consequently, **deduce** is a vector in \(\{0,1\}^{m}\) where \(\mathbf{deduce}[i]\) is \(1\) iff clause \(i\) has all but one literal being false under \(F\). If \(C\cup F\) is satisfiable and a clause has all but one literal being false under \(F\), then we can safely deduce that the remaining literal is true.
For instance, in a clause for Sudoku

\[\neg a(1,1,9)\vee\neg a(1,2,9), \tag{11}\]

if \(a(1,1,9)\) is in the ground-truth label (i.e., in \(F\)) but \(a(1,2,9)\) is not, we can safely deduce that \(\neg a(1,2,9)\) is true. It follows that such a clause is always a Horn clause. Intuitively, the vector **deduce** represents the clauses to which such deduction can be applied given \(F\). The vector \(\mathbf{unsat}\in\{0,1\}^{m}\) indicates which clauses are not satisfied by the truth assignment \(v\), where \(\mathbf{unsat}[i]\) is \(1\) iff \(v\) does not satisfy the \(i\)-th clause. The vector \(\mathbf{keep}\in\{0\}^{m}\) consists of \(m\) zeros, while its gradient w.r.t. \(\mathbf{v}\) consists of non-zeros. Intuitively, the gradient of \(\mathbf{keep}\) tries to keep the current predictions \(\mathbf{v}\) in each clause. In equations (7), (8), and (9), \(L_{deduce}\in\mathbb{N}\) represents the number of clauses that can deduce a literal given \(F\) and are not satisfied by \(v\). The vector \(\mathbbm{1}_{\{1\}}(\mathbf{unsat})\in\{0,1\}^{m}\) (and \(\mathbbm{1}_{\{0\}}(\mathbf{unsat})\), resp.) indicates the clauses that are not satisfied (and are satisfied, resp.) by \(v\). Intuitively, for all clauses, minimizing \(L_{unsat}\) makes the neural network change its predictions to decrease the number of unsatisfied clauses. In contrast, minimizing \(L_{sat}\) makes the neural network more confident in its predictions in the satisfied clauses. We use \(avg\) instead of \(sum\) in equations (8) and (9) to ensure that the gradients from \(L_{unsat}\) and \(L_{sat}\) do not overpower those from \(L_{deduce}\). Formal statements of these intuitive explanations follow in the next section.

For any neural network output \(\mathbf{x}\) consisting of probabilities, let \(\mathbf{x}^{r}\) denote the raw value of \(\mathbf{x}\) before the activation function (e.g., softmax or sigmoid) in the last layer.
Without restriction, the value \(\mathbf{x}^{r}\) may vary over a large range when trained with STE. When such an output is fed into softmax or sigmoid, it easily falls into a saturation region of the activation function (Tang et al., 2017). To resolve this issue, we include another loss function to bound the scale of \(\mathbf{x}^{r}\):

\[L_{bound}(\mathbf{x})=avg(\mathbf{x}^{r}\odot\mathbf{x}^{r}).\]

To enforce constraints, we add the weighted sum of \(L_{cnf}(\mathbf{C},\mathbf{v},\mathbf{f})\) and \(L_{bound}(\mathbf{x})\) to the baseline loss (if any), where the weight of each loss is a hyperparameter. We call this way of semantic regularization the _CL-STE_ (Constraint Loss via STE) method.

**Example 4.1** Continued.: Given the matrix \(\mathbf{C}\) for the CNF theory, a data instance \(\langle i_{1},i_{2},l\rangle\), the NN outputs \(\mathbf{x}_{1},\mathbf{x}_{2}\) for \(i_{1},i_{2}\), and the vectors \(\mathbf{f},\mathbf{v}\) as constructed in Example 4.1, the total loss function used for the \(\mathbf{mnistAdd}\) problem is

\[\mathcal{L}=L_{cnf}(\mathbf{C},\mathbf{v},\mathbf{f})+\sum_{\mathbf{x}\in\{\mathbf{x}_{1},\mathbf{x}_{2}\}}0.1\times L_{bound}(\mathbf{x}).\]

### Properties of Constraint Loss and Its Gradients

Proposition 4.2 shows the relation between the \(L_{deduce}\), \(L_{unsat}\), and \(L_{sat}\) components of the constraint loss \(L_{cnf}\) and their logical counterparts.

**Proposition 4.2**.: _Given a theory \(C\), a set \(F\) of atoms, and a truth assignment \(v\) such that \(v\models F\), let \(\mathbf{C},\mathbf{f},\mathbf{v}\) denote their matrix/vector representations, respectively. Let \(C_{deduce}\subseteq C\) denote the set of Horn clauses \(H\) in \(C\) such that all but one literal in \(H\) are of the form \(\neg p\) such that \(p\in F\).3 Then_

Footnote 3: This implies that the remaining literal is either an atom or \(\neg p\) such that \(p\not\in F\).
* _the minimum values of \(L_{deduce}\), \(L_{unsat}\), \(L_{sat}\), and \(L_{cnf}(\mathbf{C},\mathbf{v},\mathbf{f})\) are 0;_
* _\(v\models C_{deduce}\) iff \(L_{deduce}\) is 0;_
* _\(v\models C\) iff \(L_{unsat}\) is 0 iff \(L_{cnf}(\mathbf{C},\mathbf{v},\mathbf{f})\) is 0._

Clause (11) is an example clause in \(C_{deduce}\). There could be many other ways to design \(L_{cnf}(\mathbf{C},\mathbf{v},\mathbf{f})\) to satisfy the properties in Proposition 4.2. Propositions 4.3 and 4.5 below justify our design choice.

**Proposition 4.3**.: _Given a theory \(C\) with \(m\) clauses and \(n\) atoms and a set \(F\) of atoms such that \(C\cup F\) is satisfiable, let \(\mathbf{C},\mathbf{f}\) denote their matrix/vector representations, respectively. Given a neural network output \(\mathbf{x}\in[0,1]^{n}\) denoting probabilities, we construct \(\mathbf{v}=\mathbf{f}+\mathbbm{1}_{\{0\}}(\mathbf{f})\odot b_{p}(\mathbf{x})\) and a truth assignment \(v\) such that \(v(p_{j})=\textsc{true}\) if \(\mathbf{v}[j]\) is \(1\), and \(v(p_{j})=\textsc{false}\) if \(\mathbf{v}[j]\) is \(0\). Let \(C_{deduce}\subseteq C\) denote the set of Horn clauses \(H\) in \(C\) such that all but one literal in \(H\) are of the form \(\neg p\) where \(p\in F\). Then, for any \(j\in\{1,\ldots,n\}\),_
1. _if \(p_{j}\in F\), all of \(\frac{\partial L_{deduce}}{\partial\mathbf{x}[j]}\), \(\frac{\partial L_{unsat}}{\partial\mathbf{x}[j]}\), and \(\frac{\partial L_{sat}}{\partial\mathbf{x}[j]}\) are zeros;_
2.
_if \(p_{j}\not\in F\),_

\[\frac{\partial L_{deduce}}{\partial\mathbf{x}[j]}\stackrel{{ iSTE}}{{\approx}}\begin{cases}-c&\text{if $c>0$ clauses in $C_{deduce}$ contain literal $p_{j}$;}\\ c&\text{if $c>0$ clauses in $C_{deduce}$ contain literal $\neg p_{j}$;}\\ 0&\text{otherwise;}\end{cases}\]

\[\frac{\partial L_{unsat}}{\partial\mathbf{x}[j]}\stackrel{{ iSTE}}{{\approx}}\frac{c_{2}-c_{1}}{m}\]

\[\frac{\partial L_{sat}}{\partial\mathbf{x}[j]}\stackrel{{ iSTE}}{{\approx}}\begin{cases}-\frac{c_{3}}{m}&\text{if }v\models p_{j}\\ \frac{c_{3}}{m}&\text{if }v\not\models p_{j},\end{cases}\]

_where \(\stackrel{{ iSTE}}{{\approx}}\) stands for the equivalence of gradients assuming iSTE; \(c_{1}\) (and \(c_{2}\), resp.) is the number of clauses in \(C\) that are not satisfied by \(v\) and contain \(p_{j}\) (and \(\neg p_{j}\), resp.); \(c_{3}\) is the number of clauses in \(C\) that are satisfied by \(v\) and contain \(p_{j}\) or \(\neg p_{j}\)._

Intuitively, Proposition 4.3 ensures the following properties of the gradient \(\frac{\partial L_{cnf}(\mathbf{C},\mathbf{v},\mathbf{f})}{\partial\mathbf{x}[j]}\), which consists of \(\frac{\partial L_{deduce}}{\partial\mathbf{x}[j]}\), \(\frac{\partial L_{unsat}}{\partial\mathbf{x}[j]}\), and \(\frac{\partial L_{sat}}{\partial\mathbf{x}[j]}\).

**P1.** If we know for sure that \(p_{j}\) is true (\(p_{j}\in F\)), these gradients w.r.t. \(\mathbf{x}[j]\) (the real value corresponding to \(p_{j}\)) are \(0\), so they do not affect the truth value of \(p_{j}\).

**P2.** Otherwise (\(F\) does not tell whether \(p_{j}\) is true),
1. the gradient \(\frac{\partial L_{deduce}}{\partial\mathbf{x}[j]}\) is negative (positive, resp.) to increase (decrease, resp.) the value of \(\mathbf{x}[j]\) by gradient descent if \(C\cup F\) entails \(p_{j}\) (\(\neg p_{j}\), resp.);
2. the gradient \(\frac{\partial L_{unsat}}{\partial\mathbf{x}[j]}\) is negative (positive, resp.) to increase (decrease, resp.)
the value of \(\mathbf{x}[j]\) by gradient descent if, among all unsatisfied clauses, more clauses contain \(p_{j}\) than \(\neg p_{j}\) (\(\neg p_{j}\) than \(p_{j}\), resp.);
3. the gradient \(\frac{\partial L_{sat}}{\partial\mathbf{x}[j]}\) is negative (positive, resp.) to increase (decrease, resp.) the value of \(\mathbf{x}[j]\) by gradient descent if \(v\models p_{j}\) (\(v\not\models p_{j}\), resp.) and there exist satisfied clauses containing literal \(p_{j}\) or \(\neg p_{j}\).

Intuitively, bullet 1 in **P2** simulates a deduction step, which is always correct, while bullets 2 and 3 simulate two heuristics: "we tend to believe a literal if more unsatisfied clauses contain this literal than its negation" and "we tend to keep our prediction on an atom if many satisfied clauses contain this atom." This justifies another property below.

**P3.** The sign of the gradient \(\frac{\partial L_{cnf}}{\partial\mathbf{x}[j]}\) is the same as the sign of \(\frac{\partial L_{deduce}}{\partial\mathbf{x}[j]}\) when the latter gradient is non-zero.

**Example 4.4**.: _Consider the theory \(C\) below with \(m=2\) clauses and 3 atoms_

\[\neg a\vee\neg b\lor c\]
\[\neg a\lor b\]

_and consider the set of given facts \(F=\{a\}\). They are represented by the matrix \(\mathbf{C}=\begin{bmatrix}-1&-1&1\\ -1&1&0\end{bmatrix}\) and the vector \(\mathbf{f}=[1,0,0]\). Suppose a neural network predicts \(\mathbf{x}=[0.3,0.1,0.9]\) as the probabilities of the 3 atoms \(\{a,b,c\}\). With the above matrix and vectors, we can compute_

\[b_{p}(\mathbf{x})=[0,0,1],\]
\[\mathbf{v}=\mathbf{f}+\mathbbm{1}_{\{0\}}(\mathbf{f})\odot b_{p}(\mathbf{x})=[1,0,1].\]

_From \(\mathbf{v}\), we construct the truth assignment \(v=\{a=\text{true},b=\text{false},c=\text{true}\}\). Clearly, \(v\) satisfies the first clause but not the second one.
Given \(F=\{a\}\), we see \(C_{deduce}\) is \(\neg a\lor b\)._ _According to Proposition 4.3,_

\[\frac{\partial L_{deduce}}{\partial\mathbf{x}}\stackrel{{ iSTE}}{{\approx}}[0,-1,0],\quad\frac{\partial L_{unsat}}{\partial\mathbf{x}}\stackrel{{ iSTE}}{{\approx}}[0,-\frac{1}{2},0],\]
\[\frac{\partial L_{sat}}{\partial\mathbf{x}}\stackrel{{ iSTE}}{{\approx}}[0,\frac{1}{2},-\frac{1}{2}],\]
\[\frac{\partial L_{cnf}}{\partial\mathbf{x}}=\frac{\partial L_{deduce}}{\partial\mathbf{x}}+\frac{\partial L_{unsat}}{\partial\mathbf{x}}+\frac{\partial L_{sat}}{\partial\mathbf{x}}\stackrel{{ iSTE}}{{\approx}}[0,-1,-\frac{1}{2}].\]

_Intuitively, given \(C\), \(F\), and the current truth assignment \(v\): (**P1**) we know \(a\) is true (\(a\in F\)), so there is no need to update it; (**P2.1** and **P3**) we know for sure that the prediction for \(b\) should be changed to true by deduction on the clause \(\neg a\lor b\) and the given fact \(F=\{a\}\); (**P2.3**) we tend to strengthen our belief in \(c\) being true due to the satisfied clause \(\neg a\vee\neg b\lor c\)._

The proposition also holds with the other binarization function \(b(x)\).

**Proposition 4.5**.: _Proposition 4.3 still holds for \(\mathbf{x}\in\mathbb{R}^{n}\) and \(\mathbf{v}=\mathbf{f}+\mathbbm{1}_{\{0\}}(\mathbf{f})\odot b(\mathbf{x})\)._

## 5 Evaluation

We conduct an experimental evaluation to answer the following questions.
* **Q1** Is CL-STE more scalable in injecting discrete constraints into neural network learning than existing neuro-symbolic learning methods?
* **Q2** Does CL-STE make neural networks learn with no or fewer labeled data by effectively utilizing the given constraints?
* **Q3** Is CL-STE general enough to overlay constraint loss on different types of neural networks to enforce logical constraints and improve the accuracy of existing networks?
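The hand computation in Example 4.4 can be reproduced mechanically from the case analysis of Proposition 4.3; the following pure-Python sketch (helper names ours) evaluates the closed-form gradient expressions rather than using autograd.

```python
# Theory of Example 4.4: clause 1 is ¬a ∨ ¬b ∨ c, clause 2 is ¬a ∨ b; F = {a}.
C = [[-1, -1, 1],
     [-1,  1, 0]]
f = [1, 0, 0]                                        # facts F = {a}
x = [0.3, 0.1, 0.9]                                  # NN probabilities for a, b, c
v = [fj or int(xj >= 0.5) for fj, xj in zip(f, x)]   # [1, 0, 1]
m, n = len(C), len(f)

def satisfied(clause):
    """Does the truth assignment v satisfy this clause?"""
    return any((lit == 1 and v[j] == 1) or (lit == -1 and v[j] == 0)
               for j, lit in enumerate(clause))

def in_deduce(clause):
    """All but one literal are of the form ¬p with p in F."""
    false_under_F = sum(1 for j, lit in enumerate(clause)
                        if lit == -1 and f[j] == 1)
    return false_under_F == sum(1 for lit in clause if lit != 0) - 1

grad = [0.0] * n
for j in range(n):
    if f[j]:                                   # P1: known facts get zero gradient
        continue
    for clause in C:
        if clause[j] == 0:
            continue
        if in_deduce(clause):                  # dL_deduce / dx[j] contribution
            grad[j] -= clause[j]
        if satisfied(clause):                  # dL_sat / dx[j] contribution
            grad[j] += (-1.0 if v[j] else 1.0) / m
        else:                                  # dL_unsat / dx[j] contribution
            grad[j] -= clause[j] / m

print(grad)   # [0.0, -1.0, -0.5], matching Example 4.4
```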
Our implementation takes a CNF theory in DIMACS format (the standard format for input to SAT solvers).4 Since the CL-STE method alone doesn't have associated symbolic rules, unlike DeepProbLog, NeurASP, and NeuroLog, in this section we compare these methods on the classification accuracy of the trained NNs (e.g., correctly predicting the label of an MNIST image) instead of query accuracy (e.g., correctly predicting the sum of two MNIST images).

### mnistAdd Revisited

We introduced the CNF encoding and the loss function for the **mnistAdd** problem in Example 4.1. The problem was used in (Manhaeve et al., 2018) and (Yang et al., 2020) as a benchmark. Figure 3 compares the MNIST digit classification accuracy of neural networks trained by different methods on a single epoch of 30,000 addition examples from (Manhaeve et al., 2018). "CL-STE(\(n\))" denotes our method with \(b_{p}(x)\) and iSTE using a batch of size \(n\). As we see, DeepProbLog, NeurASP, and CL-STE with a batch size of 1 quickly converge to near 100% test accuracy. Training time-wise, CL-STE outperforms the other approaches since it does not need to generate arithmetic circuits for every training instance as in DeepProbLog or enumerate all models as in NeurASP. Also, while DeepProbLog and NeurASP do not support batch training, CL-STE can leverage batch training to reduce the training time to 22s with a batch size of 16 (denoted by CL-STE(16)). We observe that increasing the batch size in CL-STE also increases the number of parameter updates needed for convergence, which we can decrease by using batch normalization, as shown in the blue line denoted by CL-STE(16)-BN. Furthermore, we apply CL-STE to variants of **mnistAdd** by training with two-digit sums (**mnistAdd2** (Manhaeve et al., 2018)) and three-digit sums (**mnistAdd3**). Table 1 shows that the CL-STE method scales much better than DeepProbLog and NeurASP.
The time and accuracy are reported for a single epoch of training, where the cutoff time is 24 hours, after which we report "timeout."

### Benchmarks from (Tsamoura et al., 2021)

The following are benchmark problems from (Tsamoura et al., 2021). As in the **mnistAdd** problem, labels are not immediately associated with the data instances but with the results of logical operations applied to them.

**add2x2** The input is a \(2\times 2\) grid of digit images. The output is the four sums of the pairs of digits in each row/column. The task is to train a CNN for digit classification.

**apply2x2** The input is three digits and a \(2\times 2\) grid of handwritten math operator images in \(\{+,-,\times\}\). The output is the four results of applying the two math operators in each row/column of the grid to the three digits. The task is to train a CNN for math operator classification.

**member(n)** The input is a set of \(n\) images of digits and a digit in \(\{0,\dots,9\}\). The output is 0 or 1, indicating whether the digit appears in the set of digit images. The task is to train a CNN for digit classification.

Table 2 compares our method with DeepProbLog, NeurASP, and NeuroLog test accuracy-wise and training time-wise. Note that, instead of comparing the query accuracy as in (Tsamoura et al., 2021), we evaluate and compare the NN classification accuracies. Our experiments agree with (Yin et al., 2019), which proves the instability of iSTE and the convergence guarantees of sSTE for a simple 2-layer CNN. Their experiments also observe better performance of \(b(x)\)+sSTE over \(b(x)\)+iSTE on deep neural networks. Our experimental results (especially for the member(5) problem) also reveal the instability issue of \(b(x)\)+iSTE and show that \(b(x)\)+sSTE achieves higher and more stable accuracy. Furthermore, we observe that \(b_{p}(x)\) works better than \(b(x)\) in terms of both accuracy and time in our experiments.
This is because the input \(x\) to \(b_{p}(x)\) is normalized into probabilities before binarization, resulting in less information loss (i.e., change in magnitude \(b_{p}(x)-x\)) when the neural network accuracy increases.

Table 1: Experiments on **mnistAdd**, **mnistAdd2**, and **mnistAdd3** (test accuracy and training time for a single epoch)

| Method | mnistAdd | mnistAdd2 | mnistAdd3 |
| --- | --- | --- | --- |
| DeepProbLog | 98.36% (2565s) | 97.57% (2269s) | timeout |
| NeurASP | 97.87% (292s) | 97.85% (1682s) | timeout |
| CL-STE | 97.48% (22s) | 98.12% (92s) | 97.78% (402s) |

Table 2: Comparison between CL-STE and other approaches. The numbers in parentheses are the times spent by NeuroLog to generate abductive proofs.

Accuracy (%):

| Method | add2x2 | apply2x2 | member(3) | member(5) |
| --- | --- | --- | --- | --- |
| DeepProbLog | 88.4±0.7 | 100±0 | 96.3±0.3 | timeout |
| NeurASP | 97.6±0.2 | 100±0 | 93.5±0.9 | timeout |
| NeuroLog | 97.5±0.4 | 100±0 | 94.5±1.5 | 93.9±1.5 |
| \(b(x)\)+iSTE | 95.5±0.7 | 100±0 | 73.2±9.1 | 51.1±2.9 |
| \(b(x)\)+sSTE | 95.7±0.5 | 100±0 | 83.2±8.4 | 88.0±7.1 |
| \(b_{p}(x)\)+iSTE | 98.0±0.2 | 100±0 | 95.5±0.7 | 95.0±0.5 |

Time (s):

| Method | add2x2 | apply2x2 | member(3) | member(5) |
| --- | --- | --- | --- | --- |
| DeepProbLog | 1035±5.7 | 586±9 | 2218±211 | timeout |
| NeurASP | 142±2 | 47±1 | 253±1 | timeout |
| NeuroLog | 2400±46 | 2428±29 | 427±12 | 68±40 |
| \(b(x)\)+iSTE | 80±2 | 208±1 | 45±0.0 | 177±1 |
| \(b(x)\)+sSTE | 81±2 | 214±8 | 46±1 | 181±10 |
| \(b_{p}(x)\)+sSTE | 54±4 | 112±2 | 43±3 | 49±4 |

Figure 3: Comparison on mnistAdd

### CNN + Constraint Loss for Sudoku

The following experimental setting from (Yang et al., 2020) demonstrates unsupervised learning with NeurASP on Sudoku problems. Given a textual representation of a Sudoku puzzle (in the form of a \(9\times 9\) matrix where an empty cell is represented by 0), Park (2018) trained a CNN (composed of 9 convolutional layers and a \(1\times 1\) convolutional layer, followed by softmax) using 1 million examples and achieved 70% test accuracy using an "inference trick": instead of predicting digits for all empty cells at once, which leads to poor accuracy, predict the most probable grid-cell value one by one. With the same CNN and inference trick, Yang et al. (2020) achieved 66.5% accuracy with only 7% of the data and no supervision (i.e., 70k data instances without labels) by enforcing semantic constraints in neural network training with NeurASP. In this section, we consider the same unsupervised learning problem for Sudoku, but we represent the Sudoku problem in CNF and use \(L_{cnf}\) to enforce the logical constraints during training. We use a CNF theory for \(9\times 9\) Sudoku problems with \(9^{3}=729\) atoms and \(8991\) clauses as described in Appendix C.6. This CNF can be represented by a matrix \(\mathbf{C}\in\{-1,0,1\}^{8991\times 729}\). The dataset consists of 70k data instances, split 80%/20% for training/testing. Each data instance is a pair \(\langle\mathbf{q},\mathbf{l}\rangle\) where \(\mathbf{q}\in\{0,1,\ldots,9\}^{81}\) denotes a \(9\times 9\) Sudoku board (\(0\) denotes an empty cell) and \(\mathbf{l}\in\{1,\ldots,9\}^{81}\) denotes its solution (\(\mathbf{l}\) is not used in NeurASP and our method during training).
The non-zero values in \(\mathbf{q}\) are treated as atomic facts \(F\), and we construct the matrix \(\mathbf{F}\in\{0,1\}^{81\times 9}\) such that, for \(i\in\{1,\ldots,81\}\), the \(i\)-th row \(\mathbf{F}[i,\cdot]\) is the vector \(\{0\}^{9}\) if \(\mathbf{q}[i]=0\) and is the one-hot vector for \(\mathbf{q}[i]\) if \(\mathbf{q}[i]\neq 0\). Then, the vector \(\mathbf{f}\in\{0,1\}^{729}\) is simply the flattening of \(\mathbf{F}\). We feed \(\mathbf{q}\) into the CNN and obtain the output \(\mathbf{x}\in[0,1]^{729}\). Finally, the prediction \(\mathbf{v}\in\{0,1\}^{729}\) is obtained as \(\mathbf{f}+\mathbb{1}_{\{0\}}(\mathbf{f})\odot b_{p}(\mathbf{x})\), and the total loss function \(\mathcal{L}\) we use is \(\mathcal{L}=L_{cnf}(\mathbf{C},\mathbf{v},\mathbf{f})+0.1\times L_{bound}(\mathbf{x})\). Table 3 compares the training time and the (whole-board) test accuracies with and without the inference trick (\(Acc_{w}\) and \(Acc_{wo}\), resp.) using \(b_{p}(x)\)+iSTE against NeurASP and the baseline CNN (Park, 2018). In each experiment, the same CNN is trained with only 70k (labeled/unlabeled) data instances from (Yang et al., 2020), with an average of 43 given digits per puzzle (min: 26, max: 77). As we can see, our method outperforms NeurASP in both accuracy and time. Accuracy-wise, the CNN model trained using CL-STE is 27.2% more accurate than the CNN model trained using NeurASP when we use the inference trick. Training time-wise, CL-STE is 16 times faster than NeurASP because we directly encode semantic constraints in a loss function, which saves the time of calling a symbolic engine externally (e.g., clingo to enumerate all stable models as in NeurASP). Table 4 compares CNN+CL-STE with SATNet trained on Park 70k and tested on both the Park 70k and Palm Sudoku datasets (Palm et al., 2018). While a CNN is less tailored to logical reasoning than SATNet, our experiment shows that, when it is trained via CL-STE, it performs better than SATNet.
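The construction of \(\mathbf{F}\) and \(\mathbf{f}\) from a board \(\mathbf{q}\) can be sketched as follows (numpy; the two given digits are our own toy example, not an instance from the dataset):

```python
import numpy as np

# A toy board q: cell 0 holds digit 5, cell 10 holds digit 3, rest empty (0).
q = np.zeros(81, dtype=int)
q[0], q[10] = 5, 3

F = np.zeros((81, 9), dtype=int)
given = q > 0
F[np.arange(81)[given], q[given] - 1] = 1    # one-hot rows for given cells
f = F.reshape(-1)                            # flatten to f in {0,1}^729

# atom a(cell i, digit d) sits at index 9*i + (d-1)
print(int(f.sum()), int(f[9 * 0 + 4]), int(f[9 * 10 + 2]))   # 2 1 1
```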
### GNN + Constraint Loss for Sudoku

This section investigates whether GNN training can be improved with the constraint loss functions and STE by utilizing already-known constraints instead of always relying on labeled data. We consider the Recurrent Relational Network (RRN) (Palm et al., 2018), a state-of-the-art GNN for multi-step relational reasoning that achieves 96.6% accuracy on the hardest Sudoku problems by training on 180k labeled data instances. Our focus here is to make the RRN learn more effectively from fewer data by injecting known constraints. The training dataset in (Palm et al., 2018) contains 180k data instances evenly distributed over 18 difficulties with 17-34 given numbers. We use a small subset of this dataset obtained by random sampling. Given a data instance \(\langle\mathbf{q},\mathbf{l}\rangle\) where \(\mathbf{q}\in\{0,1,\ldots,9\}^{81}\) denotes a \(9\times 9\) Sudoku board and \(\mathbf{l}\in\{1,\ldots,9\}^{81}\) denotes its solution, the RRN takes \(\mathbf{q}\) as input and, after 32 iterations of message passing, outputs 32 matrices of probabilities \(\mathbf{X}_{s}\in\mathbb{R}^{81\times 9}\) where \(s\in\{1,\ldots,32\}\); for example, \(\mathbf{X}_{1}\) is the RRN prediction after 1 message passing step. The baseline loss is the sum of the cross-entropy losses between the prediction \(\mathbf{X}_{s}\) and the label \(\mathbf{l}\) for all \(s\). We evaluate whether using the constraint loss can further improve the performance of the RRN with the same labeled data. We use the same \(L_{cnf}\) and \(L_{bound}\) defined for the CNN (with weights \(1\) and \(0.1\), resp.), which are applied to \(\mathbf{X}_{1}\) only so that the RRN can be trained to deduce new digits in a single message passing step.
**Table 3: CNN, NeurASP, and CL-STE on Park 70k Sudoku dataset (80%/20% split) w/ and w/o inference trick**

| Method | Supervised | \(Acc_{wo}\) | \(Acc_{w}\) | time(m) |
|---|---|---|---|---|
| Park's CNN | Full | 0.94% | 23.3% | 163 |
| Park's CNN+NeurASP | No | 1.69% | 66.5% | 13230 |
| Park's CNN+CL-STE | No | 2.38% | 93.7% | 813 |

**Table 4: SATNet vs. CNN+CL-STE**

| Method | Train Data (Supv) | Test Data | #Given | Test Accuracy |
|---|---|---|---|---|
| SATNet | Park 70k (Full) | Park 70k | 26-77 | 67.78% |
| SATNet | Park 70k (Full) | Palm | 17-34 | 6.76% |
| CNN+CL-STE | Park 70k (No) | Park 70k | 26-77 | 95.70% |
| CNN+CL-STE | Park 70k (No) | Palm | 17-34 | 27.37% |

We also use a continuous regularizer \(L_{sum}\) below to regularize every \(\mathbf{X}_{s}\) with the constraint that "the sum of the 9 probabilities in \(\mathbf{X}_{s}\) in the same row/column/box must be 1": \[L_{sum}=\sum_{\begin{subarray}{c}s\in\{1,\dots,32\}\\ i\in\{row,col,box\}\end{subarray}}avg((sum(\mathbf{X}_{s}^{i})-1)^{2}).\] Here, \(avg(X)\) and \(sum(X)\) compute the average and sum of all elements in \(X\) along its last dimension; \(\mathbf{X}_{s}^{row},\mathbf{X}_{s}^{col},\mathbf{X}_{s}^{box}\in\mathbb{R}^{81\times 9}\) are reshaped copies of \(\mathbf{X}_{s}\) such that each row in, for example, \(\mathbf{X}_{s}^{row}\) contains 9 probabilities for atoms \(a(1,C,N),\dots,a(9,C,N)\) for some \(C\) and \(N\).
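As an illustration, the \(L_{sum}\) contribution of a single message-passing step can be computed as below (a NumPy sketch under our own reshaping conventions; in training this would operate on framework tensors so gradients can flow):

```python
import numpy as np

def l_sum_single_step(X):
    """L_sum contribution of one message-passing step.
    X: (81, 9) probabilities, cells laid out row-major over the board."""
    G = X.reshape(9, 9, 9)             # (board_row, board_col, digit)
    boxes = (G.reshape(3, 3, 3, 3, 9)  # (row_blk, row_in, col_blk, col_in, digit)
              .transpose(0, 2, 1, 3, 4)
              .reshape(9, 9, 9))       # (box, cell_in_box, digit)
    loss = 0.0
    for view in (G, G.transpose(1, 0, 2), boxes):  # rows, columns, boxes
        s = view.sum(axis=1)           # (9, 9): per-unit, per-digit sums
        loss += np.mean((s - 1.0) ** 2)
    return loss
```

On a one-hot encoding of any valid Sudoku solution, every unit/digit sum is exactly 1, so the loss vanishes; any violation of the sum-to-1 constraint contributes a positive penalty.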
Figure 4 compares the test accuracy of the RRN trained for 100 epochs under 4 settings: (a) the RRN trained with baseline loss using 30k labeled data; (b) the RRN trained with both baseline loss and constraint losses (\(L_{sum}\), \(L_{cnf}\), and \(L_{bound}\)) using the same 30k labeled data; (c) the same setting as (b) with additional 30k unlabeled data; (d) same as (a) with additional 30k labeled data. Comparing (a) and (b) indicates the effectiveness of the constraint loss given the same number of labeled data; comparing (b) and (c) shows that, even with the same number of labeled data, adding unlabeled data can increase the accuracy (due to the constraint loss); comparing (c) and (d) shows that the effect of the constraint loss is comparable to adding 30k additional labels. Figure 5 assesses the effect of constraint loss using fixed 10k labeled data and varying numbers (10k, 30k, 70k) of unlabeled data. We see that the baseline RRN trained with 10k labeled data ([10k L] RRN) has roughly saturated while the other methods are still slowly improving in accuracy. Training with the same number of labeled data but adding more unlabeled data makes the trained RRN achieve higher test accuracy, indicating that the constraint loss is effective in training even when labels are unavailable.

### Discussion

Regarding **Q1**, Figure 3, Tables 1 and 2 show that our method achieves accuracy comparable to existing neuro-symbolic formalisms but is much more scalable. Regarding **Q2**, Table 3 and Figures 4 and 5 illustrate that our method can be used for unsupervised and semi-supervised learning by utilizing the constraints underlying the data. Regarding **Q3**, we applied constraint loss to MLP, CNN, and GNN models, and observed that it improves the existing neural networks' prediction accuracy.
As we noted, the gradient computation in other neuro-symbolic approaches, such as NeurASP, DeepProbLog, and NeuroLog, requires external calls to symbolic solvers to compute stable models or proofs for every data instance, which takes a long time. These approaches may give better quality gradients to navigate to feasible solutions, but their gradient computations are associated with NP-hardness (the worst-case exponential size of SDD, computing all proofs or stable models). In comparison, CL-STE treats each clause independently and locally to accumulate small pieces of gradients, allowing us to leverage GPUs and batch training as in standard deep learning. The method resembles local search and deduction in SAT, and the gradients may not reflect the global property but can be computed significantly faster. Indeed, together with the gradient signals coming from the data, our method works well even when logical constraints are hard to satisfy, e.g., in training a neural network to solve Sudoku, where a single feasible solution lies among \(9^{47}\) to \(9^{64}\) candidates when 17-34 digits are given.

## 6 Conclusion

Constraint loss helps neural networks learn with fewer data, but the state-of-the-art methods require combinatorial computation to compute gradients. By leveraging STE, we demonstrate the feasibility of more scalable constraint learning in neural networks. Also, we showed that GNNs could learn with fewer (labeled) data by utilizing known constraints. Based on the formal properties of the CNF constraint loss and the promising initial experiments here, the next step is to apply the method to larger-scale experiments.

## Acknowledgements

We are grateful to Adam Ishay and the anonymous referees for their useful comments. This work was partially supported by the National Science Foundation under Grant IIS-2006747.
Figure 4: Test accuracy on the same randomly sampled 1k data from Palm Sudoku dataset when trained with RRN(+STE) with 30k to 60k [L]abeled/[U]nlabeled data Figure 5: Semi-supervised learning with RRN+STE on Sudoku using only 10k labeled data and varying numbers of unlabeled data from Palm dataset for training and using the same randomly sampled 1k data for testing
2310.04788
PMNN: Physical Model-driven Neural Network for solving time-fractional differential equations
In this paper, an innovative Physical Model-driven Neural Network (PMNN) method is proposed to solve time-fractional differential equations. It establishes a temporal iteration scheme based on physical model-driven neural networks which effectively combines deep neural networks (DNNs) with interpolation approximation of fractional derivatives. Specifically, once the fractional differential operator is discretized, DNNs are employed as a bridge to integrate interpolation approximation techniques with differential equations. On the basis of this integration, we construct a neural-based iteration scheme. Subsequently, by training DNNs to learn this temporal iteration scheme, approximate solutions to the differential equations can be obtained. The proposed method aims to preserve the intrinsic physical information within the equations as far as possible. It fully utilizes the powerful fitting capability of neural networks while maintaining the efficiency of the difference schemes for fractional differential equations. Moreover, we validate the efficiency and accuracy of PMNN through several numerical experiments.
Zhiying Ma, Jie Hou, Wenhao Zhu, Yaxin Peng, Ying Li
2023-10-07T12:43:32Z
http://arxiv.org/abs/2310.04788v1
# PMNN: Physical Model-driven Neural Network for solving time-fractional differential equations

###### Abstract

In this paper, an innovative Physical Model-driven Neural Network (PMNN) method is proposed to solve time-fractional differential equations. It establishes a temporal iteration scheme based on physical model-driven neural networks which effectively combines deep neural networks (DNNs) with interpolation approximation of fractional derivatives. Specifically, once the fractional differential operator is discretized, DNNs are employed as a bridge to integrate interpolation approximation techniques with differential equations. On the basis of this integration, we construct a neural-based iteration scheme. Subsequently, by training DNNs to learn this temporal iteration scheme, approximate solutions to the differential equations can be obtained. The proposed method aims to preserve the intrinsic physical information within the equations as far as possible. It fully utilizes the powerful fitting capability of neural networks while maintaining the efficiency of the difference schemes for fractional differential equations. Moreover, we validate the efficiency and accuracy of PMNN through several numerical experiments.

Keywords: Deep neural network; Physical model-driven; Iteration scheme approximation; Time-fractional differential equation

## 1 Introduction

In the study of fractional differential equations (FDEs), researchers have observed that fractional-order differential operators possess non-local properties, which distinguishes them from integer-order differential operators. As a result, FDEs are well-suited for describing dynamic processes in the real world that involve memory and hereditary characteristics. FDEs have been widely applied in various fields [1] such as anomalous diffusion, viscoelasticity, fluid mechanics, electromagnetic waves, statistical models, signal processing and system identification, quantum economics, fractal theory, robotics, etc.
However, it is extremely challenging to obtain analytical solutions for FDEs. Even if exact solutions can be obtained, they may involve complex functions like Mittag-Leffler functions, H functions, Wright functions, and so on [2]. Dealing with these functions in numerical computations is complicated. Therefore, finding effective numerical simulation methods for fractional differential equations has become one of the important research topics. In recent decades, researchers have made significant advancements in the numerical solution of FDEs. Various numerical solution techniques have been developed, including the finite difference method (FDM) [1, 3], the finite element method (FEM) [4, 5], spectral methods [6, 7, 8], wavelet methods [9, 10, 11], matrix methods [12, 13, 14], Laplace transforms [15], variational iteration methods [16], and Adomian decomposition methods [17, 18, 19]. These traditional numerical methods have driven significant progress in solving fractional-order equations and handling fractional-order derivatives. These advancements have laid the foundation for other innovative numerical methods for FDEs. With the rapid advancement of deep learning technology, an increasing number of scholars have embarked on the exploration of employing deep learning for solving differential equations. As one of the fundamental models in the field of deep learning, DNNs exhibit not only remarkable fitting capabilities but also the capacity to learn and optimize models in an adaptive manner. Consequently, the utilization of deep learning for solving differential equations has emerged as a burgeoning research direction that has gained significant attention. Currently, a multitude of neural network-based methods have been proposed for solving integer-order differential equations. For instance, Lagaris et al. were among the pioneers who successfully applied Artificial Neural Networks (ANNs) to solve integer-order differential equations.
They employed ANNs to construct trial solutions for solving initial value and boundary value problems [20]. Raissi et al. introduced the Physics-Informed Neural Networks (PINNs) approach, a new class of universal function approximators that is capable of encoding any underlying physical laws present in a given dataset [21]. Due to its high predictive accuracy and robustness, PINNs rapidly became a benchmark method in the field, leading many researchers to conduct related studies [22; 23; 24; 25; 26]. Lu et al. introduced DeepONets, a deep operator network, to accurately and effectively learn nonlinear operators from relatively small datasets [27]. The approaches mentioned above are all data-driven deep neural network methods, which are characterized by their fast prediction speed. The features required by data-driven methods depend only on the training data [28]. In practical applications, it is important to note that differential equations encode many physical laws. Therefore, relying solely on sampled data to capture information may result in incomplete findings. For certain complex equations, even if enough training data is selected, it remains challenging to accurately capture the physical information encoded within the equations. This is a disadvantage when solving differential equations. In fact, among the various neural network methods for solving integer-order differential equations, there is a class of methods known as model-driven approaches. These methods do not require a large amount of sample data for their construction. For example, Li et al. proposed an iterative scheme approximation based on a deep learning framework, known as DeLISA. The authors first obtained time iteration schemes using implicit multistep and Runge-Kutta methods. Subsequently, this iteration scheme was approximated using a neural network. This method achieves continuous-time prediction without the need for a large number of interior points [29]. Long et al.
utilized Convolutional Neural Networks (CNNs) to develop a novel approach for solving non-stationary partial differential equations, known as PDE-Net. The fundamental idea of this method is to employ convolutional kernels to learn differential operators and use neural networks to approximate unknown nonlinear responses [30]. Building upon their earlier PDE-Net framework, Long et al. proposed a new deep neural network called PDE-Net 2.0. It combines numerical approximation of differential operators using convolutions with multi-layer symbolic neural networks for model recovery, which is employed to discover potential differential equations from observed data [31]. These model-driven methods possess clear statistical or physical significance, and many traditional methods can be directly combined with them, which provides a new perspective for solving differential equations. However, deep neural networks face a major challenge when it comes to solving fractional differential equations: the handling of fractional derivatives. This is due to the limitations of automatic differentiation techniques, which can only compute derivatives of integer orders. Consequently, deep neural networks are unable to handle fractional derivatives directly. As a result, research on utilizing neural networks for solving FDEs is relatively limited, but several novel methods have been proposed by scholars. Raja et al. successfully solved various types of linear and nonlinear FDEs using feedforward ANNs. Unlike other numerical methods, this approach offers the advantage of providing a continuous solution over the entire finite domain [32]. Zuniga-Aguilar et al. proposed a new ANN method to approximate the solution of FDEs. The authors mainly considered variable-order fractional differential equations with Mittag-Leffler kernel in the sense of Liouville-Caputo [33]. Pang et al.
extended PINNs to fractional PINNs (fPINNs) for solving spatiotemporal fractional advection-diffusion equations (ADEs), and systematically studied their convergence. The authors utilized automatic differentiation to analytically compute the integer-order derivatives within the equations, while for the fractional-order derivatives they employed traditional numerical approximation methods for discretization. This approach effectively overcomes the challenge of neural networks being unable to directly handle fractional derivatives [34]. Qu et al. developed neural networks based on sine and cosine functions using uniformly distributed sampling points. They obtained approximate solutions to initial boundary value problems of several FDEs [35, 36]. Ye et al. employed Physics-Informed Neural Networks (PINNs) to investigate the forward and inverse problems of time-fractional diffusion equations with conformable derivative, addressing the limitation of directly applying neural networks to solve fractional-order differential equations [37]. Fang et al. solved high-dimensional fractional PDEs and inverse problems using DNNs [38]. This naturally raises the question of whether the combination of traditional numerical methods and model-driven neural network methods can be utilized to solve FDEs. The answer is affirmative. Motivated by the aforementioned work, we propose an innovative Physical Model-driven Neural Network (PMNN) method for solving FDEs. In PMNN, we first discretize the fractional derivatives using an interpolation-based Finite Difference Method (FDM) to construct a temporal iteration scheme for the equation. Subsequently, we use this iteration scheme to construct the loss function for training a physical model-driven neural network.
Since the iteration scheme contains the physical information of the equation, obtaining an approximate solution can be viewed as the learning process of the temporal iteration scheme using a physical model-driven neural network. Therefore, when the loss gradually decreases and converges, we can consider the neural network as an approximate solution of the equation. The proposed method preserves the physical information of the equation to the greatest extent. Furthermore, it effectively utilizes the powerful fitting capability of neural networks while ensuring the validity of the difference schemes for fractional differential equations. Moreover, in this paper, we evaluate the performance of PMNN through several numerical experiments. These experiments demonstrate the effectiveness of the method and its potential in solving a wide range of fractional differential equation problems. In addition, it is worth noting that the PMNN method, introduced in this paper, represents a general framework for solving FDEs. It combines neural networks with fractional derivative interpolation approximations, without being limited to a specific approximation method. In this paper, we employ two interpolation approximation methods, L1 and \(\mathrm{L2-1}_{\sigma}\), which result in two different physical model-driven neural networks: PMNN on L1 and PMNN on \(\mathrm{L2-1}_{\sigma}\). The remaining part of this paper is organized as follows: In Section 2, we first introduce the fundamental knowledge of FDEs, and then provide a comprehensive overview of the L1 and \(\mathrm{L2-1}_{\sigma}\) interpolation approximations for fractional derivatives. In Section 3, we present a detailed description of the construction of PMNN and provide a step-by-step explanation of the process involved in solving equations using PMNN. Section 4 primarily presents several numerical experiments to demonstrate the effectiveness of our proposed method. Finally, Section 5 provides a summary of the paper.
## 2 Preliminaries

In this section, we will review two definitions of fractional derivatives and two interpolation approximations for the Caputo derivatives.

### Definitions

We start by presenting several fundamental definitions, which can be found in [39].

**Definition 1**: _The Riemann-Liouville integral with order \(\alpha>0\) of the given function \(f(t)\), \(t\in(a,b)\) is defined as_ \[{}_{a}\mathrm{I}_{t}^{-\alpha}f(t)=\frac{1}{\Gamma(\alpha)}\int_{a}^{t}(t-s)^{\alpha-1}f(s)\mathrm{d}s, \tag{1}\] _where \(\Gamma(\cdot)\) is the Euler's gamma function._

**Definition 2**: _The Riemann-Liouville derivatives with order \(\alpha>0\) of the given function \(f(t)\), \(t\in(a,b)\) are defined as_ \[{}_{a}\mathrm{D}_{t}^{\alpha}f(t)=\frac{\mathrm{d}^{m}}{\mathrm{d}t^{m}}\left[{}_{a}\mathrm{I}_{t}^{-(m-\alpha)}f(t)\right]=\frac{1}{\Gamma(m-\alpha)}\frac{\mathrm{d}^{m}}{\mathrm{d}t^{m}}\int_{a}^{t}(t-s)^{m-\alpha-1}f(s)\mathrm{d}s, \tag{2}\] _where \(m\) is a positive integer satisfying \(m-1\leq\alpha<m\)._

**Definition 3**: _The Caputo derivatives with order \(\alpha>0\) of the given function \(f(t)\), \(t\in(a,b)\) are defined as_ \[{}_{a}^{C}\mathrm{D}_{t}^{\alpha}f(t)={}_{a}\mathrm{I}_{t}^{-(m-\alpha)}\left[f^{(m)}(t)\right]=\frac{1}{\Gamma(m-\alpha)}\int_{a}^{t}(t-s)^{m-\alpha-1}f^{(m)}(s)\mathrm{d}s, \tag{3}\] _where \(m\) is a positive integer satisfying \(m-1<\alpha\leq m\)._

The Riemann-Liouville (R-L) derivative and the Caputo derivative may exhibit differences in numerical computations. For instance, the Caputo fractional derivative of a constant function is 0, whereas the R-L fractional derivative is non-zero. The distinct characteristics of these two derivatives determine their applicability in different contexts. The R-L derivative imposes fewer conditions on the function \(f(t)\), enhancing its convenience for mathematical theoretical investigations.
On the other hand, the Caputo derivative finds wider application in solving initial and boundary value problems of differential equations in the field of physical engineering. In this paper, we adopt the Caputo derivative. ### Interpolation approximation of Caputo derivative Over the past several decades, researchers in the field of Finite Difference Methods for solving FDEs have made significant research achievements. The underlying idea of these methods is to discretize the fractional derivatives in the differential equation, transforming the fractional-order equations into integer-order equations for numerical computation. The interpolation approximation of the fractional derivative serves as a crucial step in the FDMs, which provides us with a promising direction for solving FDEs using DNNs. Following that, we will illustrate two interpolation approximations for the \(\alpha\)-order Caputo fractional derivative. #### 2.2.1 L1 approximation For the Caputo derivative of order \(\alpha\) (\(0<\alpha<1\)) \[{}^{C}_{0}D_{t}^{\alpha}f(t)=\frac{1}{\Gamma(1-\alpha)}\int_{0}^{t}\frac{f^{ \prime}(s)}{(t-s)^{\alpha}}\mathrm{d}s, \tag{4}\] the most commonly used approach is the L1 approximation based on piecewise linear interpolation. Let \(N\) be a positive integer. We define \(\tau=\frac{T}{N}\), \(t_{k}=k\tau\), \(0\leqslant k\leqslant N\), and \[a_{l}^{(\alpha)}=(l+1)^{1-\alpha}-l^{1-\alpha},\quad l\geqslant 0, \tag{5}\] we derive an approximation formula for the calculation of \({}^{C}_{0}D^{\alpha}_{t}f(t)|_{t=t_{n}}\): \[D^{\alpha}_{t}f(t_{n})\equiv\frac{\tau^{-\alpha}}{\Gamma(2-\alpha)}\left[a^{( \alpha)}_{0}f(t_{n})-\sum_{k=1}^{n-1}\left(a^{(\alpha)}_{n-k-1}-a^{(\alpha)}_{n -k}\right)f(t_{k})-a^{(\alpha)}_{n-1}f(t_{0})\right]. \tag{6}\] Eq.(6) is commonly known as the L1 formula or L1 approximation. 
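As a quick sanity check of Eq.(6), the following sketch (our own, not from the paper) evaluates the L1 approximation for \(f(t)=t^{2}\) with \(\alpha=1/2\), whose exact Caputo derivative is \(\frac{\Gamma(3)}{\Gamma(5/2)}t^{3/2}\):

```python
import math

def l1_caputo(f_vals, alpha, tau):
    """L1 approximation (Eq.(6)) of the Caputo derivative at t_n = n*tau.
    f_vals = [f(t_0), ..., f(t_n)] on the uniform grid t_k = k*tau."""
    n = len(f_vals) - 1
    a = [(l + 1) ** (1 - alpha) - l ** (1 - alpha) for l in range(n)]  # Eq.(5)
    s = a[0] * f_vals[n]
    for k in range(1, n):
        s -= (a[n - k - 1] - a[n - k]) * f_vals[k]
    s -= a[n - 1] * f_vals[0]
    return tau ** (-alpha) / math.gamma(2 - alpha) * s

alpha, N, T = 0.5, 1000, 1.0
tau = T / N
approx = l1_caputo([(k * tau) ** 2 for k in range(N + 1)], alpha, tau)
exact = math.gamma(3) / math.gamma(2.5) * T ** 1.5
print(abs(approx - exact))  # decays like tau^{2-alpha} as the grid is refined
```

Halving \(\tau\) should shrink the error by roughly a factor of \(2^{2-\alpha}\), consistent with the \(2-\alpha\) convergence order stated above.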
#### 2.2.2 \(\mathrm{L2-1}_{\sigma}\) approximation For the \(\alpha\) (\(0<\alpha<1\)) order Caputo derivative, the aforementioned L1 approximation formula achieves uniform convergence of order \(2-\alpha\). Alikhanov [40] further extended this result by discovering superconvergent interpolation points and establishing the \(\mathrm{L2-1}_{\sigma}\) approximation formula, which achieves uniform convergence of order \(3-\alpha\). In the following section, we will present this result in detail. Let us denote \[\sigma=1-\frac{\alpha}{2},\quad t_{n+\sigma}=(n+\sigma)\tau,\quad f^{n}=f(t_{ n}), \tag{7}\] the approximation formula for evaluating \({}^{C}_{0}D^{\alpha}_{t}f(t)|_{t=t_{n-1+\sigma}}\) can be obtained as follows: \[\Delta^{\alpha}_{t}f(t_{n-1+\sigma})\equiv\frac{\tau^{-\alpha}}{\Gamma(2- \alpha)}\sum_{k=0}^{n-1}c^{(n,\alpha)}_{k}[f(t_{n-k})-f(t_{n-k-1})],\quad 1 \leqslant n\leqslant N. \tag{8}\] Eq.(8) is typically referred to as the \(\mathrm{L2-1}_{\sigma}\) formula or \(\mathrm{L2-1}_{\sigma}\) approximation. 
When \(n=1\), \[c^{(1,\alpha)}_{0}=\sigma^{1-\alpha}, \tag{9}\] while when \(n\geqslant 2\), \[\begin{cases}c_{0}^{(n,\alpha)}=&\frac{(1+\sigma)^{2-\alpha}-\sigma^{2-\alpha}}{2-\alpha}-\frac{(1+\sigma)^{1-\alpha}-\sigma^{1-\alpha}}{2},\\ c_{k}^{(n,\alpha)}=&\frac{1}{2-\alpha}[(k+1+\sigma)^{2-\alpha}-2(k+\sigma)^{2-\alpha}+(k-1+\sigma)^{2-\alpha}]\\ &-\frac{1}{2}[(k+1+\sigma)^{1-\alpha}-2(k+\sigma)^{1-\alpha}+(k-1+\sigma)^{1-\alpha}],\\ &1\leqslant k\leqslant n-2,\\ c_{n-1}^{(n,\alpha)}=&\frac{1}{2}[3(n-1+\sigma)^{1-\alpha}-(n-2+\sigma)^{1-\alpha}]\\ &-\frac{1}{2-\alpha}[(n-1+\sigma)^{2-\alpha}-(n-2+\sigma)^{2-\alpha}].\end{cases} \tag{10}\]

### Classification of Caputo Fractional Partial Differential Equations

Consider the following Caputo fractional partial differential equation: \[{}_{0}^{C}D_{t}^{\alpha}u=\Delta u \tag{11}\] Based on the interval of values for \(\alpha\), it can be categorized [41], as depicted in Table 1. This paper primarily addresses the case involving derivatives of order \(0<\alpha<1\).

## 3 Illustration of the method

### Problem setup

In this paper, we focus on fractional ordinary differential equations (FODEs) and fractional partial differential equations (FPDEs). Next, we provide a detailed exposition of our methodology by considering the initial-boundary value problem for the FPDE on a bounded domain \(\Omega\subset\mathbb{R}^{n}\).
Consider the initial-boundary value problem for the following time-fractional slow diffusion equation: \[{}^{C}_{0}D^{\alpha}_{t}u(\mathbf{x},t) =\mathcal{L}u(\mathbf{x},t)+f(\mathbf{x},t),\quad(\mathbf{x},t)\in\Omega\times(0,T], \tag{12}\] \[u(\mathbf{x},t) =\mu(\mathbf{x},t), \quad(\mathbf{x},t)\in\partial\Omega\times(0,T],\] (13) \[u(\mathbf{x},0) =\varphi(\mathbf{x}), \quad\mathbf{x}\in\Omega, \tag{14}\] where \({}^{C}_{0}D^{\alpha}_{t}\) denotes the Caputo fractional derivative with \(\alpha\in(0,1)\), \(\mathcal{L}\) represents an integer-order differential operator, \(u(\mathbf{x},t)\) is the solution of the equation, and \(f\), \(\mu\) and \(\varphi\) are known functions.

### Architecture of PMNN

DNNs have achieved remarkable success in tackling differential equations of integer order, owing largely to the integration of automatic differentiation techniques. Nevertheless, the applicability of automatic differentiation is restricted to derivatives of integer order. To surmount this obstacle in utilizing neural networks for solving fractional-order derivatives, we discretize the Caputo derivatives in the equations using the L1 and \(\mathrm{L2}-1_{\sigma}\) approximations, respectively. Subsequently, neural networks are introduced to establish PMNN on L1 and PMNN on \(\mathrm{L2}-1_{\sigma}\), respectively. This section presents the detailed procedures involved in constructing PMNN.

#### 3.2.1 PMNN on \(\mathrm{L1}\)

The first step of our approach is to semi-discretize the fractional-order derivative in the temporal domain. Let \(N\) be a positive integer. We define \(\tau=\frac{T}{N}\), \(t_{k}=k\tau\), where \(0\leqslant k\leqslant N\).
Utilizing the L1 formula presented in Section 2.2.1, we obtain: \[\begin{split}\prescript{C}{0}{D}_{t}^{\alpha}u^{n}& \approx D_{t}^{\alpha}u_{n}\\ &=\frac{\tau^{-\alpha}}{\Gamma(2-\alpha)}\left[a_{0}^{(\alpha)}u^ {n}-\sum_{k=1}^{n-1}\left(a_{n-k-1}^{(\alpha)}-a_{n-k}^{(\alpha)}\right)u^{k}- a_{n-1}^{(\alpha)}u^{0}\right],\end{split} \tag{15}\] where \(u_{n}=u(t_{n},\mathbf{x})\), \(1\leqslant n\leqslant N\). Substituting it into the governing equation, we obtain the temporal iteration scheme based on Eq.(12): \[u^{n}=\frac{\Gamma(2-\alpha)\cdot\tau^{\alpha}}{a_{0}^{(\alpha)}}\left[ \mathcal{L}u^{n}+f^{n}\right]+\sum_{k=1}^{n-1}\left(\frac{a_{n-k-1}^{(\alpha)} -a_{n-k}^{(\alpha)}}{a_{0}^{(\alpha)}}\right)u^{k}+\frac{a_{n-1}^{(\alpha)}}{ a_{0}^{(\alpha)}}u^{0}. \tag{16}\] Next, we introduce physical model-driven neural networks as solvers to obtain approximate solutions for the differential equations. In this study, we directly consider the output of the neural network, \(\hat{u}(\mathbf{x},t;\theta)\), as the approximation solution. By substituting it into Eq.(16), we obtain the expression for PMNN on L1 as follows: \[U^{n}=\frac{\Gamma(2-\alpha)\cdot\tau^{\alpha}}{a_{0}^{(\alpha)}}\left[ \mathcal{L}\hat{u}^{n}+f^{n}\right]+\sum_{k=1}^{n-1}\left(\frac{a_{n-k-1}^{( \alpha)}-a_{n-k}^{(\alpha)}}{a_{0}^{(\alpha)}}\right)\hat{u}^{k}+\frac{a_{n-1} ^{(\alpha)}}{a_{0}^{(\alpha)}}\hat{u}^{0}, \tag{17}\] where \(\hat{u}^{n}=u(\mathbf{x},t_{n};\theta)\) represents the output of the neural network at time \(t_{n}\), and \(f^{n}=f(\mathbf{x},t_{n})\). Evidently, the iterative scheme mentioned above incorporates the physical information inherent in the governing equation. 
Taking into account the initial and boundary conditions (13)-(14), we define the loss function for the PMNN on L1 model as follows: \[Loss(\theta)=Loss_{f}(\theta)+Loss_{ic}(\theta)+Loss_{bc}(\theta), \tag{18}\] the definitions of each component in the loss function are as follows: \[\begin{split}Loss_{f}(\theta)&=\frac{1}{N_{f}}\sum_{i=1}^{N_{f}}\left[\hat{u}(x_{f}^{i},t_{f}^{i})-U(x_{f}^{i},t_{f}^{i})\right]^{2},\\ Loss_{bc}(\theta)&=\frac{1}{N_{bc}}\sum_{i=1}^{N_{bc}}\left[\hat{u}(x_{bc}^{i},t_{bc}^{i})-u(x_{bc}^{i},t_{bc}^{i})\right]^{2},\\ Loss_{ic}(\theta)&=\frac{1}{N_{ic}}\sum_{i=1}^{N_{ic}}\left[\hat{u}(x_{ic}^{i},t_{ic}^{i})-u(x_{ic}^{i},t_{ic}^{i})\right]^{2},\end{split} \tag{19}\] where \(\{x_{f}^{i},t_{f}^{i}\}_{i=1}^{N_{f}}\) represents the training points of the iterative scheme, and \(N_{f}\) denotes the number of training points. \(\{x_{ic}^{i},t_{ic}^{i}\}_{i=1}^{N_{ic}}\) refers to the initial training points, where \(N_{ic}\) represents the number of initial training points. Similarly, \(\{x_{bc}^{i},t_{bc}^{i}\}_{i=1}^{N_{bc}}\) denotes the boundary points, and \(N_{bc}\) represents the number of boundary points. The architecture of the PMNN on L1 model is illustrated in Fig.1.

Figure 1: The architecture of PMNN on L1

#### 3.2.2 PMNN on \(\mathrm{L2-1}_{\sigma}\)

Just like in the case of PMNN on L1, we proceed with the discretization of the equation. Let \(N\) be a positive integer. We define \(\tau=\frac{T}{N}\), \(t_{k}=k\tau\), \(0\leqslant k\leqslant N\), and \(\sigma=1-\frac{\alpha}{2}\). Based on the \(\mathrm{L2}-1_{\sigma}\) formula described in Section 2.2.2, we obtain: \[\begin{split}^{C}_{0}D^{\alpha}_{t}u^{n-1+\sigma}&\approx\Delta^{\alpha}_{t}u(t_{n-1+\sigma})\\ &=\frac{\tau^{-\alpha}}{\Gamma(2-\alpha)}\sum_{k=0}^{n-1}c^{(n,\alpha)}_{k}[u(t_{n-k})-u(t_{n-k-1})],\end{split} \tag{20}\] where \(u_{n}=u(t_{n},\mathbf{x})\), \(1\leqslant n\leqslant N\).
We note that the \(\mathrm{L2}-1_{\sigma}\) scheme, unlike the \(\mathrm{L1}\) scheme, discretizes the fractional derivative at time points \(t_{n-1+\sigma}\), which is located outside the set of discrete time points \(t_{n}\). Therefore, in the process of discretizing the equation, we need to consider the time node \(t_{n-1+\sigma}\). The temporal iteration scheme, derived from the differential equation (12), is given by: \[u^{n}=\frac{\Gamma(2-\alpha)\cdot\tau^{\alpha}}{c^{(n,\alpha)}_{0}}\left[ \mathcal{L}u^{n-1+\sigma}+f^{n-1+\sigma}\right]+\sum_{k=1}^{n-1}\frac{c^{(n, \alpha)}_{k}}{c^{(n,\alpha)}_{0}}(u^{n-k-1}-u^{n-k})+u^{n-1}. \tag{21}\] Similarly, by substituting the neural network's output \(\hat{u}(\mathbf{x},t;\theta)\) as an approximation into equation (21), we can derive the expression for PMNN on \(\mathrm{L2}-1_{\sigma}\) as follows: \[U^{n}=\frac{\Gamma(2-\alpha)\cdot\tau^{\alpha}}{c^{(n,\alpha)}_{0}}\left[ \mathcal{L}\hat{u}^{n-1+\sigma}+f^{n-1+\sigma}\right]+\sum_{k=1}^{n-1}\frac{c^ {(n,\alpha)}_{k}}{c^{(n,\alpha)}_{0}}(\hat{u}^{n-k-1}-\hat{u}^{n-k})+\hat{u}^ {n-1}, \tag{22}\] where \(\hat{u}^{n}=u(\mathbf{x},t_{n};\theta)\) represents the output of the network at time \(t_{n}\), and \(f^{n}=f(\mathbf{x},t_{n})\). The loss function for the PMNN on \(\mathrm{L2}-1_{\sigma}\) model is defined by \[Loss(\theta)=Loss_{f}(\theta)+Loss_{ic}(\theta)+Loss_{bc}(\theta). \tag{23}\] The definitions of each loss term can be found in Eq.(19). The architecture of the PMNN on \(\mathrm{L2}-1_{\sigma}\) model is depicted in Fig.2. ## 4 Numerical results In this section, the performance of our proposed model is evaluated through the analysis of three single-term temporal fractional differential equations: a time-fractional ODE, a one dimensional time-fractional PDE, and a two-dimensional time-fractional PDE. The loss function is minimized using the L-BFGS-B method, and the accuracy of the numerical solutions is assessed using the \(L^{2}\) relative error. 
By contrasting the effectiveness of the two models, it is observed that the proposed model achieves high accuracy. Additionally, all experiments in this section are carried out utilizing an NVIDIA RTX 1660 GPU card.

**Example 1**: Single-term Temporal Fractional Ordinary Differential Equation

Considering a single-term temporal fractional ordinary differential equation: \[\begin{split}\prescript{C}{0}D_{t}^{\alpha}u(t)&=-u(t)+f(t),\quad t\in(0,T],\\ u(0)&=0,\end{split} \tag{24}\] where \(0<\alpha<1\), and the exact solution to the equation is given by \(u(t)=t^{5+\alpha}\). The right-hand side term is defined as \(f(t)=\frac{\Gamma(6+\alpha)}{120}t^{5}+t^{5+\alpha}\). For this experiment, we focus on the case where \(T=1\). A fully connected neural network (FNN) with 5 hidden layers is utilized; each hidden layer consists of \(20\) neurons with a \(\tanh\) activation function.

Figure 2: The architecture of PMNN on L\(2-1_{\sigma}\)

The training and testing data are uniformly sampled from the interval \([0,T]\). For our analysis, we specifically select \(N_{t}\) training points and \(500\) testing points. To begin, we conduct tests for the case of \(\alpha=0.5\) and \(N_{t}=41\). Fig.3 shows the comparison between the predicted solutions of the two models and the exact solution. The blue solid line represents the graph of the exact solution, while the red dashed line represents the curve of the predicted solution using the PMNN model. It is evident that the predicted solution precisely aligns with the exact solution. Fig.4 illustrates the variation of the error with respect to time \(t\). For the PMNN on L1 model, the error initially remains below \(1\times 10^{-3}\), but around \(t=0.4\), it gradually increases over time. In contrast, the error for the PMNN on \(\text{L2}-1_{\sigma}\) model remains stable at around \(1\times 10^{-4}\). Therefore, in this case, the PMNN on \(\text{L2}-1_{\sigma}\) model demonstrates superior overall performance.
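For reference, Eq. (24) can also be solved with a classical (non-neural) implicit L1 time-stepper, using the standard L1 weights \(b_{k}=(k+1)^{1-\alpha}-k^{1-\alpha}\). A minimal sketch, which recovers the manufactured solution \(u(t)=t^{5+\alpha}\) to a few digits:

```python
import math

alpha = 0.5
N = 200                      # number of time steps on [0, 1]
tau = 1.0 / N

def f(t):                    # source term of Eq. (24)
    return math.gamma(6 + alpha) / 120.0 * t ** 5 + t ** (5 + alpha)

def exact(t):                # manufactured solution
    return t ** (5 + alpha)

# L1 weights b_k = (k+1)^{1-alpha} - k^{1-alpha}
b = [(k + 1) ** (1 - alpha) - k ** (1 - alpha) for k in range(N)]

A = tau ** (-alpha) / math.gamma(2 - alpha)
u = [0.0]                    # u(0) = 0
for n in range(1, N + 1):
    t_n = n * tau
    # history sum: sum_{k=1}^{n-1} b_k (u^{n-k} - u^{n-k-1})
    hist = sum(b[k] * (u[n - k] - u[n - k - 1]) for k in range(1, n))
    # implicit step of A*(u^n - u^{n-1}) + A*hist = -u^n + f(t_n)
    u.append((f(t_n) + A * u[n - 1] - A * hist) / (A + 1.0))

err = max(abs(u[n] - exact(n * tau)) for n in range(N + 1))
```

This is only a baseline finite-difference reference, not the PMNN model itself; its \(O(\tau^{2-\alpha})\) accuracy is what the neural iteration inherits from the L1 formula.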
The trends of loss with respect to the number of iterations for the two models are depicted in Fig.5. A comparison of the plots reveals that the PMNN on \(\text{L2}-1_{\sigma}\) model exhibits faster convergence. Table 2 displays the iteration counts and training times for the two models with different \(\alpha\). Comparing the iteration counts, it is evident that PMNN on \(\text{L2}-1_{\sigma}\) converges more rapidly. However, it requires a longer training time, potentially due to its need for a larger training dataset.

\begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{Iter} & \multicolumn{2}{c}{Training Time(s)} \\ \cline{2-5} & \(L1\) & \(L2-1_{\sigma}\) & \(L1\) & \(L2-1_{\sigma}\) \\ \hline \(\alpha=0.25\) & 90 & 77 & 8.41 & 8.12 \\ \(\alpha=0.50\) & 77 & 73 & 6.47 & 13.99 \\ \(\alpha=0.75\) & 94 & 71 & 8.49 & 12.57 \\ \hline \hline \end{tabular} \end{table} Table 2: The number of iterations and training time of two PMNN for single-term FODE

To further evaluate the performance of the proposed method, we conducted tests with different \(\alpha\) and varying numbers of training points. The \(L^{2}\) relative errors for each case are presented in Table 3. It can be observed that as the number of training points increases, the overall errors of both models decrease. Moreover, after approximately \(N_{t}=41\), both models demonstrate good performance. The minimum error attained by PMNN on L1 is \(1.71\times 10^{-4}\), whereas for PMNN on \(\text{L2}-1_{\sigma}\), the minimum error reaches \(3.52\times 10^{-4}\).

Figure 3: Single-term FODE: the exact solution and the predicted solution of PMNN.

Figure 4: Single-term FODE: the trend of the error with the time \(t\).
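The \(L^{2}\) relative error reported throughout these tables is \(\|u_{pred}-u_{exact}\|_{2}/\|u_{exact}\|_{2}\) over the test points; a minimal sketch:

```python
import math

def l2_relative_error(u_pred, u_exact):
    """||u_pred - u_exact||_2 / ||u_exact||_2 over the test points."""
    num = math.sqrt(sum((p - e) ** 2 for p, e in zip(u_pred, u_exact)))
    den = math.sqrt(sum(e ** 2 for e in u_exact))
    return num / den
```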
Figure 5: Single-term FODE: the trend of the loss function with the number of iterations

**Example 2**: 1D Time-fractional Convection-diffusion Equation

Consider the following one-dimensional time-fractional convection-diffusion equation, with the exact solution given by \(u(x,t)=x^{2}+\frac{2t^{\alpha}}{\Gamma(1+\alpha)}\): \[\frac{\partial^{\alpha}u(x,t)}{\partial t^{\alpha}} =u_{xx}, x\in[0,1],\ \ t\in(0,1],\] \[u(x,0) =x^{2}, x\in[0,1],\] \[u(0,t) =\frac{2t^{\alpha}}{\Gamma(1+\alpha)}, t\in[0,1],\] \[u(1,t) =1+\frac{2t^{\alpha}}{\Gamma(1+\alpha)}, t\in[0,1].\] The same FNN architecture is employed in this experiment, following the configuration of Example 1. We uniformly sample \(N_{t}\) time nodes from the interval \([0,1]\) and \(N_{x}\) spatial nodes from the interval \([0,1]\). Consequently, the training data comprises a total of \(N_{t}\times N_{x}\) data points. Similarly, we select \(100\times 100\) test data points. We begin the experiment by fixing \(\alpha=0.5\), \(N_{t}=41\), and \(N_{x}=11\). Fig.6 shows the comparison between the predicted solutions obtained from the two models and the exact solution. The left panel depicts the graph of the exact solution, while the middle and right panels display the predicted solutions using PMNN on L1 and PMNN on L\(2-1_{\sigma}\), respectively. A visual analysis reveals a remarkable consistency between the predicted solutions and the graph of the exact solution. The error of the two PMNN models is depicted in Fig.7.
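The exact solution above can be checked analytically with the Caputo power rule \({}^{C}_{0}D_{t}^{\alpha}t^{p}=\frac{\Gamma(p+1)}{\Gamma(p+1-\alpha)}t^{p-\alpha}\) (\(p>0\)): the time part of \(u\) contributes the constant \(2\), which matches \(u_{xx}=2\), so no source term is needed. A quick numerical confirmation:

```python
import math

alpha = 0.5

def caputo_power(p, alpha, t):
    # Caputo derivative of t^p (p > 0): Gamma(p+1)/Gamma(p+1-alpha) * t^(p-alpha)
    return math.gamma(p + 1) / math.gamma(p + 1 - alpha) * t ** (p - alpha)

def residual(t):
    # u(x,t) = x^2 + 2 t^alpha / Gamma(1+alpha); check D_t^alpha u - u_xx = 0
    dt_u = 2.0 / math.gamma(1 + alpha) * caputo_power(alpha, alpha, t)
    u_xx = 2.0
    return dt_u - u_xx
```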
It is evident that in this case, there are fluctuations in the error within a small interval near \(t=0\), while the error remains stable for the rest of the time. Furthermore, it can be observed that during the fluctuation period, the error exhibits symmetry around the spatial coordinate point \(x=0.5\). This is an intriguing phenomenon. Additionally, the error of PMNN on \(\text{L2}-1_{\sigma}\) converges to a stable state in a shorter time frame.

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{\(N_{t}\)} & \multicolumn{2}{c}{\(\alpha=0.25\)} & \multicolumn{2}{c}{\(\alpha=0.50\)} & \multicolumn{2}{c}{\(\alpha=0.75\)} \\ \cline{2-7} & \(L1\) & \(L2-1_{\sigma}\) & \(L1\) & \(L2-1_{\sigma}\) & \(L1\) & \(L2-1_{\sigma}\) \\ \hline 11 & 1.55e-02 & 1.20e-03 & 5.31e-02 & 2.50e-03 & 1.33e-01 & 6.28e-03 \\ 21 & 5.35e-03 & 1.21e-03 & 2.07e-02 & 2.19e-03 & 5.88e-02 & 1.53e-03 \\ 41 & 1.86e-03 & 3.71e-03 & 7.84e-03 & 3.02e-03 & 2.55e-02 & 7.84e-04 \\ 81 & 9.58e-04 & 2.64e-03 & 3.10e-03 & 1.58e-03 & 1.09e-02 & 4.75e-04 \\ 101 & 6.68e-04 & 3.51e-03 & 2.26e-03 & 2.51e-03 & 8.36e-03 & 3.39e-04 \\ 201 & 1.71e-04 & 3.86e-03 & 1.26e-03 & 2.37e-03 & 4.59e-03 & 3.52e-04 \\ \hline \hline \end{tabular} \end{table} Table 3: The \(L^{2}\) error of two PMNN for single-term FODE

The evolution of the loss functions for both models during the iterative process is visualized in Fig.8. The first two subfigures in Fig.8 offer a comprehensive depiction of the dynamic evolution of the individual components comprising the loss functions, whereas the last subfigure portrays the overall variation of the loss. By comparing the two plots, it becomes apparent that PMNN on L1 demonstrates a noticeably faster convergence rate, which is contrary to the observations in Example 1.

Figure 6: 1D Single-term FPDE: the exact solution and the predicted solution of PMNN.

Figure 7: 1D Single-term FPDE: the error presentation of PMNN.
Table 4 provides the iteration counts and training times for the two models with varying orders \(\alpha\). It is observed that PMNN on L1 converges faster when \(\alpha=0.25\) and \(\alpha=0.5\), whereas PMNN on \(\text{L2}-1_{\sigma}\) converges faster when \(\alpha=0.75\). This indicates that the performance of the two models is influenced by the choice of order in this particular example.

\begin{table} \begin{tabular}{c c c c c} \hline \hline & \multicolumn{2}{c}{Iter} & \multicolumn{2}{c}{Training Time(s)} \\ \cline{2-5} & \(L1\) & \(L2-1_{\sigma}\) & \(L1\) & \(L2-1_{\sigma}\) \\ \hline \(\alpha=0.25\) & 919 & 689 & 97.06 & 103.28 \\ \(\alpha=0.50\) & 901 & 1058 & 170.55 & 194.2 \\ \(\alpha=0.75\) & 1630 & 1511 & 307.21 & 233.91 \\ \hline \hline \end{tabular} \end{table} Table 4: The number of iterations and training time of two PMNN for 1D single-term FPDE

Figure 8: 1D Single-term FPDE: the trend of the loss function with the number of iterations.

In the following experiments, we fix the number of time nodes at \(N_{t}=41\) and set \(\alpha=0.5\). We then change the number of spatial nodes, denoted as \(N_{x}\), to investigate the influence on the models. The corresponding \(L^{2}\) relative errors for each case are presented in Table 5. It can be observed that as the number of spatial training points increases, the errors of both models exhibit minimal fluctuations. This suggests that the quantity of spatial training points has minimal impact on the model's performance. Therefore, it is feasible to select a smaller number of spatial nodes for training, thereby reducing computational costs effectively. Finally, we fix \(N_{x}=11\) and perform tests with different values of \(\alpha\) and varying numbers of time nodes \(N_{t}\). The \(L^{2}\) relative errors are presented in Table 6. Consistent with the experimental results in Example 1, the errors of both models decrease as the number of time nodes increases.
Overall, PMNN on \(\mathrm{L2-1}_{\sigma}\) outperforms PMNN on L1 in terms of performance. **Example 3**: 2D Time-fractional Convection-diffusion Equation Consider the following time-fractional convection-diffusion equation, with an exact solution given by \(u(\mathbf{x},t)=t^{2}e^{x+y}\). \[\frac{\partial^{\alpha}u(\mathbf{x},t)}{\partial t^{\alpha}} =\Delta u(\mathbf{x},t)+f(\mathbf{x},t), \mathbf{x}\in\Omega\subset\mathbb{R}^{2},\ \ t\in(0,1),\] \[u(\mathbf{x},t) =t^{2}e^{x+y}, \mathbf{x}\in\partial\Omega,\ \ t\in(0,1),\] \[u(\mathbf{x},0) =0, \mathbf{x}\in\Omega,\] where \(f(\mathbf{x},t)=[\frac{2t^{2-\alpha}}{\Gamma(3-\alpha)}-2t^{2}]e^{x+y}\). \(\Omega=[0,1]\times[0,1]\) \begin{table} \begin{tabular}{c c c c c c} \hline \multirow{2}{*}{\(N_{t}\)} & \multicolumn{2}{c}{\(\alpha=0.25\)} & \multicolumn{2}{c}{\(\alpha=0.50\)} & \multicolumn{2}{c}{\(\alpha=0.75\)} \\ \cline{2-7} & \(L1\) & \(L2-1_{\sigma}\) & \(L1\) & \(L2-1_{\sigma}\) & \(L1\) & \(L2-1_{\sigma}\) \\ \hline 11 & 3.46e-02 & 2.80e-02 & 1.38e-02 & 8.03e-03 & 6.39e-03 & 2.14e-03 \\ 21 & 1.81e-02 & 1.58e-02 & 6.43e-03 & 3.59e-03 & 3.52e-03 & 1.17e-03 \\ 41 & 7.74e-03 & 5.85e-03 & 3.66e-03 & 1.64e-03 & 2.03e-03 & 6.70e-04 \\ 81 & 2.21e-03 & 9.73e-04 & 1.85e-03 & 6.74e-04 & 1.13e-03 & 3.81e-04 \\ 101 & 1.43e-03 & 5.24e-04 & 1.48e-03 & 5.28e-04 & 9.61e-04 & 2.90e-04 \\ \hline \end{tabular} \end{table} Table 6: The \(L^{2}\) error of two PMNN for 1D single-term FPDE \begin{table} \begin{tabular}{c c c} \hline \multirow{2}{*}{\(N_{x}\)} & \multicolumn{2}{c}{\(\alpha=0.5\)} \\ \cline{2-3} & \(L1\) & \(L2-1_{\sigma}\) \\ \hline 6 & 3.24e-03 & 1.75e-03 \\ 11 & 3.66e-03 & 1.64e-03 \\ 21 & 3.45e-03 & 1.59e-03 \\ 41 & 3.37e-03 & 1.61e-03 \\ 81 & 3.52e-03 & 1.68e-03 \\ 101 & 3.59e-03 & 1.63e-03 \\ \hline \end{tabular} \end{table} Table 5: The \(L^{2}\) error of two PMNN for 1D single-term FPDE The present experiment follows the same configuration as the previous two experiments using FNN. 
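As in Example 2, the source term can be checked with the Caputo power rule: \({}^{C}_{0}D_{t}^{\alpha}t^{2}=\frac{2t^{2-\alpha}}{\Gamma(3-\alpha)}\) and \(\Delta u=2t^{2}e^{x+y}\), so the residual of the PDE vanishes identically for the stated \(f\). A quick numerical confirmation:

```python
import math

alpha = 0.5

def residual(x, y, t):
    """Residual D_t^alpha u - Laplacian(u) - f for u = t^2 e^{x+y} (Example 3)."""
    e = math.exp(x + y)
    dt_u = 2.0 * t ** (2 - alpha) / math.gamma(3 - alpha) * e   # power rule on t^2
    lap_u = 2.0 * t ** 2 * e                                    # u_xx + u_yy
    f = (2.0 * t ** (2 - alpha) / math.gamma(3 - alpha) - 2.0 * t ** 2) * e
    return dt_u - lap_u - f
```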
We uniformly select \(N_{t}\) time nodes on the interval \([0,1]\) and \(N_{x}\times N_{x}\) spatial nodes on the domain \([0,1]\times[0,1]\), resulting in a total of \(N_{t}\times N_{x}\times N_{x}\) training data points. In a similar manner, \(100\times 100\times 100\) test data points are selected. For the experiment, we set \(\alpha=0.5\), \(N_{t}=21\), and \(N_{x}=11\). Fig.9 illustrates the comparison between the exact solution and the predicted solutions obtained by the two models at \(t=1\). On the left is the image of the exact solution, in the middle is the image of the predicted solution using PMNN on L1, and on the right is the image of the predicted solution using PMNN on \(\mathrm{L2}-1_{\sigma}\). Through a visual comparison, it is evident that the predicted solutions match the exact solution.

Figure 9: 2D Single-term FPDE: the exact solution and the predicted solution when \(t=1.00\).

Fig.10 depicts the errors of the two PMNN models at \(t=1\). It can be observed that the error of PMNN on L1 is relatively larger near the center of the spatial plane, while it decreases as it approaches the boundaries. On the other hand, the error of PMNN on \(\mathrm{L2}-1_{\sigma}\) exhibits less fluctuation. Fig.11 illustrates the trends in the loss functions of the two models. The first two subfigures provide a detailed view of the changes in the individual loss components, while the last subfigure depicts the overall loss of the models. Consistent with the experimental results in Example 1, it is evident that PMNN on \(\mathrm{L2}-1_{\sigma}\) exhibits a faster convergence speed. Table 7 presents the number of iterations and training time for both models when different orders of \(\alpha\) are chosen. It can be observed that PMNN on L1 converges faster when \(\alpha=0.25\), while PMNN on \(\mathrm{L2}-1_{\sigma}\) converges faster when \(\alpha=0.5\) and \(\alpha=0.75\). This indicates that the
performance of the two models is influenced by the order of the fractional derivative in this example. This conclusion is consistent with Example 2.

\begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{Iter} & \multicolumn{2}{c}{Training Time(s)} \\ \cline{2-5} & \(L1\) & \(L2-1_{\sigma}\) & \(L1\) & \(L2-1_{\sigma}\) \\ \hline \(\alpha=0.25\) & 1595 & 1875 & 287.62 & 341.82 \\ \(\alpha=0.50\) & 1056 & 935 & 315.21 & 280.81 \\ \(\alpha=0.75\) & 1076 & 723 & 347.18 & 218.5 \\ \hline \hline \end{tabular} \end{table} Table 7: The number of iterations and training time of two PMNN for 2D single-term FPDE

Figure 10: 2D Single-term FPDE: the error presentation of PMNN when \(t=1.00\).

Figure 11: 2D Single-term FPDE: the trend of the loss function with the number of iterations.

In the following, we fix \(N_{t}=21\) and \(\alpha=0.5\), and then vary the number of spatial nodes, \(N_{x}\), to investigate their impact on the model. The corresponding \(L^{2}\) relative errors are presented in Table 8. The analysis reveals that the model's performance is unaffected by the number of spatial training points. Consequently, it is viable to employ a minimal number of spatial nodes during training, resulting in reduced computational costs. Lastly, by fixing \(N_{x}=11\), we perform experiments with different values of \(\alpha\) and varying \(N_{t}\). The \(L^{2}\) relative errors are presented in Table 9. In contrast to the previous two experiments, the errors of both models do not consistently decrease as the number of time nodes increases. Instead, they exhibit minor fluctuations. Overall, the PMNN on \(\text{L2}-1_{\sigma}\) model demonstrates superior performance compared to the PMNN on L1 model.

## 5 Conclusions

In this paper, we introduce PMNN, an iteration scheme approximation method based on physical model-driven neural networks, to solve FDEs.
\begin{table} \begin{tabular}{c c c} \hline \hline \multirow{2}{*}{\(N_{x}\)} & \multicolumn{2}{c}{\(\alpha=0.5\)} \\ \cline{2-3} & \(L1\) & \(L2-1_{\sigma}\) \\ \hline 6 & 3.42e-04 & 7.03e-05 \\ 11 & 3.16e-04 & 4.67e-05 \\ 21 & 3.07e-04 & 4.16e-05 \\ 41 & 3.10e-04 & 4.25e-05 \\ 81 & 3.09e-04 & 6.76e-05 \\ \hline \hline \end{tabular} \end{table} Table 8: The \(L^{2}\) error of two PMNN for 2D single-term FPDE

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{\(N_{t}\)} & \multicolumn{2}{c}{\(\alpha=0.25\)} & \multicolumn{2}{c}{\(\alpha=0.50\)} & \multicolumn{2}{c}{\(\alpha=0.75\)} \\ \cline{2-7} & \(L1\) & \(L2-1_{\sigma}\) & \(L1\) & \(L2-1_{\sigma}\) & \(L1\) & \(L2-1_{\sigma}\) \\ \hline 11 & 2.50e-04 & 6.23e-05 & 8.13e-04 & 6.15e-05 & 2.34e-03 & 5.00e-05 \\ 21 & 8.27e-05 & 4.01e-05 & 3.16e-04 & 4.67e-05 & 1.01e-03 & 4.17e-05 \\ 41 & 4.07e-05 & 1.66e-04 & 1.21e-04 & 3.91e-05 & 4.32e-04 & 4.77e-05 \\ 81 & 2.34e-05 & 3.32e-04 & 5.26e-05 & 5.25e-05 & 1.76e-04 & 5.08e-05 \\ 101 & 3.04e-05 & 5.61e-05 & 4.56e-05 & 5.62e-05 & 1.35e-04 & 5.68e-05 \\ \hline \hline \end{tabular} \end{table} Table 9: The \(L^{2}\) error of two PMNN for 2D single-term FPDE

This iteration scheme leverages the physical information embedded in the equations, enabling the problem of solving FDEs to be reframed as a learning task for the PMNN model. Specifically tailored for Caputo FDEs, PMNN overcomes the limitations of automatic differentiation techniques in neural networks for solving fractional derivatives. It combines the efficiency of traditional interpolation approximation and harnesses the powerful fitting capabilities of neural networks. Through three numerical experiments, we demonstrate the excellent performance of the proposed model. Moreover, we present two variations of the model, and the numerical experiments show their distinct merits in various scenarios.
Hence, in practical applications, the suitable model can be selected to cater to specific requirements. That said, PMNN on \(\text{L}2-1_{\sigma}\) exhibits particularly appealing performance in the majority of cases. PMNN seamlessly merges physical model-driven neural networks with interpolation approximation techniques for fractional derivatives, offering a novel approach for numerically solving FDEs. The proposed model currently applies only to the solution of single-term temporal fractional differential equations. Furthermore, some interesting phenomena observed in the experimental results still lack a clear explanation, presenting unresolved challenges for future investigation. In the future, investigating neural network-based physical model-driven methods for multi-term FDEs and spatial fractional-order differential equations could provide a potential direction for further research.

## CRediT authorship contribution statement

**Zhiying Ma**: Methodology, Software, Writing-original draft. **Jie Hou**: Methodology, Writing-review & editing. **Wenhao Zhu**: Supervision, Writing-review & editing. **Yaxin Peng**: Supervision, Writing-review & editing. **Ying Li**: Conceptualization, Funding acquisition, Writing-review & editing.

## Acknowledgments

This work is supported by the National Key Research and Development Program of China (No.2021YFA1003004).

## Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
2301.12873
Approximating DTW with a convolutional neural network on EEG data
Dynamic Time Warping (DTW) is a widely used algorithm for measuring similarities between two time series. It is especially valuable in a wide variety of applications, such as clustering, anomaly detection, classification, or video segmentation, where the time series have different timescales, are irregularly sampled, or are shifted. However, it does not lend itself to use as a loss function in an end-to-end learning framework because of its non-differentiability and its quadratic temporal complexity. While differentiable variants of DTW have been introduced by the community, they still present some drawbacks: computing the distance is still expensive and this similarity tends to blur some differences in the time series. In this paper, we propose a fast and differentiable approximation of DTW by comparing two architectures: the first one for learning an embedding in which the Euclidean distance mimics the DTW, and the second one for directly predicting the DTW output using regression. We build the former by training a siamese neural network to regress the DTW value between two time series. Depending on the nature of the activation function, this approximation naturally supports differentiation, and it is efficient to compute. We show, in a time-series retrieval context on EEG datasets, that our methods achieve at least the same level of accuracy as other main DTW approximations with higher computational efficiency. We also show that it can be used to learn in an end-to-end setting on long time series by proposing generative models of EEGs.
Hugo Lerogeron, Romain Picot-Clemente, Alain Rakotomamonjy, Laurent Heutte
2023-01-30T13:27:47Z
http://arxiv.org/abs/2301.12873v1
# Approximating DTW with a convolutional neural network on EEG data ###### Abstract Dynamic Time Warping (DTW) is a widely used algorithm for measuring similarities between two time series. It is especially valuable in a wide variety of applications, such as clustering, anomaly detection, classification, or video segmentation, where the time series have different timescales, are irregularly sampled, or are shifted. However, it does not lend itself to use as a loss function in an end-to-end learning framework because of its non-differentiability and its quadratic temporal complexity. While differentiable variants of DTW have been introduced by the community, they still present some drawbacks: computing the distance is still expensive and this similarity tends to blur some differences in the time series. In this paper, we propose a fast and differentiable approximation of DTW by comparing two architectures: the first one for learning an embedding in which the Euclidean distance mimics the DTW, and the second one for directly predicting the DTW output using regression. We build the former by training a siamese neural network to regress the DTW value between two time series. Depending on the nature of the activation function, this approximation naturally supports differentiation, and it is efficient to compute. We show, in a time-series retrieval context on EEG datasets, that our methods achieve at least the same level of accuracy as other main DTW approximations with higher computational efficiency. We also show that it can be used to learn in an end-to-end setting on long time series by proposing generative models of EEGs. ## 1 Introduction Proposed by Sakoe et al. [1], the Dynamic Time Warping (DTW) algorithm is an alignment-based similarity measure for temporal sequences.
Initially used for speech applications, its properties, notably its invariance to time shifts and its ability to compare series of different lengths, make DTW useful in various time-series related applications. For instance, Seto et al. [2] make use of DTW to create meaningful features for human activity recognition, Lapere et al. [3] employ DTW as a regularization tool in disturbance storm time forecasting, and Zifan et al. [4] consider DTW on piecewise approximations of time series to segment ECG data. Nevertheless, due to its non-differentiability (see Tavenard [5]), DTW cannot be considered as a loss function for end-to-end training of deep neural networks. To circumvent those limitations, differentiable approximations of DTW have been proposed, such as SoftDTW by Cuturi et al. [6], which notably replaces the min operator by a softmin. While this approximation enables kernel machine and end-to-end deep neural network training, it keeps the quadratic time complexity of DTW, which creates a running-time problem for applications in which longer time series are considered, such as EEG signals. For instance, the widely used SleepEDF dataset introduced by Kemp et al. [7] uses splits of size 3000. Therefore, in order to be able to use DTW as a loss in an end-to-end training on EEG signals, we propose a neural model that approximates the DTW similarity between two time series. To do so, we propose two architectures to compare: an encoder-decoder scheme in which the backbone is a siamese convolutional neural network, and a direct regression model. We show that this enables us to obtain an accurate, scalable and differentiable approximation of DTW.
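For concreteness, the classic DTW recursion this work builds on can be written in a few lines (a textbook \(O(n\cdot m)\) dynamic program with \(|a-b|\) as local cost; this is the reference algorithm, not our approximation):

```python
def dtw(x, y):
    """Classic O(n*m) dynamic-programming DTW with |a - b| as local cost."""
    n, m = len(x), len(y)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # best of insertion, deletion, and match moves
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Note the time-shift invariance: `dtw([0, 0, 1, 1], [0, 1])` is 0 even though the series have different lengths.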
In this paper, our contributions are the following ones:

* we compare a direct regression architecture and a siamese encoder-decoder inspired by Courty et al. [8] to approximate DTW;
* we show how such an approximation is fast, more faithful to the objective function than other approximations (namely FastDTW [9] and SoftDTW [6]) and can be used in end-to-end training;
* we show how such an approximation can be transferred to other similar EEG signals using another public dataset.

After considering related works in Section 2, we detail our approach used to approximate DTW on time series in Section 3 and our experimental setup in Section 4. We then discuss in Section 5 the differentiability, time efficiency and performance on classification tasks of our proposed method. We conclude in Section 6 and draw future works from our results.

## 2 Related Works

### Approximation of the DTW

While the advantages of DTW are well-known, its quadratic complexity in both time and space has limited its practical use to small datasets of short time series. To counteract those limitations, some efforts have been made to introduce approximated versions of DTW. Salvador et al. [9] introduced FastDTW, an approximation with linear complexity in both time and space. However, because FastDTW is not differentiable, it cannot be used directly as a loss in gradient-based algorithms. To allow for differentiable end-to-end learning with DTW, Cuturi et al. [6] introduced SoftDTW. The algorithm computes the soft minimum of all costs spanned by all possible alignments between two time series, which leads to a differentiable approximation of DTW. While the forward pass has a linear time complexity, the backward pass needs to consider all the alignments, resulting in a quadratic complexity in time and a linear complexity in space. The addition of the smoothing factor \(\gamma\) may also require more hyperparameter tuning.
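The core of SoftDTW is the replacement of the hard min in the DTW recursion by a soft minimum \(\mathrm{softmin}_{\gamma}(a)=-\gamma\log\sum_{i}e^{-a_{i}/\gamma}\). A minimal sketch of the forward recursion (squared local cost, in the spirit of [6]; as \(\gamma\to 0\) it recovers plain DTW):

```python
import math

def softmin(args, gamma):
    # Differentiable soft minimum: -gamma * log(sum(exp(-a / gamma))),
    # computed in a numerically stable log-sum-exp form.
    m = min(args)
    return m - gamma * math.log(sum(math.exp(-(a - m) / gamma) for a in args))

def soft_dtw(x, y, gamma=1.0):
    """SoftDTW forward recursion: DTW with min replaced by softmin."""
    n, m = len(x), len(y)
    INF = float("inf")
    R = [[INF] * (m + 1) for _ in range(n + 1)]
    R[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            R[i][j] = cost + softmin((R[i - 1][j], R[i][j - 1], R[i - 1][j - 1]), gamma)
    return R[n][m]
```

Since the softmin never exceeds the hard min, `soft_dtw` lower-bounds DTW and can even be slightly negative for identical inputs; this smoothing is precisely what "blurs" small differences between series.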
With DTWNet [10], Cai et al. introduced a stochastic backpropagation of DTW. They leverage the fact that once the warping path of the DTW is obtained, it is fixed for the iteration, cannot have more than \((n+m)\) elements (if \(n\) and \(m\) are the respective lengths of the input signals) and is in itself differentiable. While the gradient can be computed in \(O(n+m)\), the warping path needs to be obtained, which still requires \(O(n\cdot m)\) operations. Therefore, while various approaches have been proposed to approximate DTW, to the best of our knowledge none of them enables both differentiability and at most linear complexity in time. ### Approximation of distances via neural networks As far as we know, no one has attempted to mimic the DTW with a neural network. However, Courty et al. [8] similarly approximate the Wasserstein distance using a siamese network to make the squared Euclidean distance of the embedded vectors mimic the Wasserstein distance between the base vectors. The two input vectors are fed through the same (hence siamese) encoder \(\phi\). Then a decoder \(\psi\), in that case two fully connected layers, tries to reconstruct the original vector. The encoder learns through the MSE loss, while a KL divergence loss is used for the reconstruction error of the decoder. The authors choose to use the KL divergence because the Wasserstein distance operates on probability distributions. This allows for interpretation of the embedding space, and also fulfills two conditions of a distance (identity and symmetry) since the model is deterministic in inference. ## 3 Architecture for learning to approximate DTW In this section, we introduce the two architectures we use to approximate DTW that will be compared in sections 4 and 5. Because of all the advantages of the method mentioned in 2.2, we first choose to use a similar siamese architecture as in Courty et al. [8]. Our adapted global architecture is shown in figure 1.
Contrary to the Wasserstein distance used in Courty et al. [8], DTW does not work with probability distributions but directly with the time series. Therefore, we use the MSE loss instead of the KL divergence to evaluate the reconstruction error made by the decoder. The goal of the decoder is to force the encoder to keep as much information as possible when projecting the signals. That way, the encoder cannot collapse into embedding all the signals to the same representation. It also helps to regularize the training. Overall, the encoder \(\phi\) takes as input two signals \(x\in\mathbb{R}^{L}\) and \(x^{\prime}\in\mathbb{R}^{L^{\prime}}\) and projects them to two signals \(z\in\mathbb{R}^{H}\) and \(z^{\prime}\in\mathbb{R}^{H}\), where \(H\) is the hidden dimension. Feeding pairs of signals \(\{x_{i},x^{\prime}_{j}\}_{i,j\in 1,\dots,n}\) to the model, the global objective function is then, with \(z=\phi(x),z^{\prime}=\phi(x^{\prime})\) denoting the encoded signals, \(\psi\) the decoder, and \(y_{i,j}\) the target DTW value: \[\min_{\phi,\psi}\underbrace{\sum_{i,j}\left\|\left\|z_{i}-z^{\prime}_{j}\right\|^{2}-y_{i,j}\right\|^{2}}_{\text{approximation loss}}+\lambda\underbrace{\left(\sum_{i,j}\left\|\psi(z_{i})-x_{i}\right\|^{2}+\sum_{i,j}\left\|\psi(z^{\prime}_{j})-x^{\prime}_{j}\right\|^{2}\right)}_{\text{reconstruction loss}} \tag{1}\] \(\lambda\) is a hyperparameter and aims at balancing the losses. ### Training Procedure We describe our training loop in Algorithm 1. We directly feed pairs of signals \(\{x_{i},x^{\prime}_{j}\}_{i,j\in 1,\dots,n}\) of length \(L\) and dimension \(d=1\) to the encoder, which processes both signals independently. We use their corresponding DTW distance \(\{y_{i,j}=DTW(x_{i},x^{\prime}_{j})\}_{i,j\in 1,\dots,n}\) as label. Once we have encoded signals, we get the predicted DTW value by taking the Euclidean distance between them and compare it to the reference value via MSE to get the encoder loss.
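Objective (1) can be sketched for a single pair with toy linear maps standing in for \(\phi\) and \(\psi\) (the real encoder and decoder are the convolutional networks of Sections 3.2 and 3.3; everything below is purely illustrative):

```python
import random

random.seed(0)
L, H, lam = 8, 3, 1.0
# Random linear stand-ins for the encoder phi (H x L) and decoder psi (L x H).
W_enc = [[random.gauss(0, 1) for _ in range(L)] for _ in range(H)]
W_dec = [[random.gauss(0, 1) for _ in range(H)] for _ in range(L)]

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def sq_dist(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b))

def objective(x_i, x_j, y_ij):
    """Eq. (1) restricted to one pair (x_i, x_j) with target DTW value y_ij."""
    z_i, z_j = matvec(W_enc, x_i), matvec(W_enc, x_j)
    approx = (sq_dist(z_i, z_j) - y_ij) ** 2                      # approximation loss
    recon = sq_dist(matvec(W_dec, z_i), x_i) + sq_dist(matvec(W_dec, z_j), x_j)
    return approx + lam * recon                                   # reconstruction term weighted by lambda
```

Note the objective is symmetric in the pair, which is one reason the learned pseudo-distance inherits the symmetry property mentioned above.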
We then use the decoder to get the decoded signals from the encoded ones, then compare them to the input signals to get the decoder loss. We then sum the losses with a balancing parameter \(\lambda\) and update the parameters of the encoder and decoder at the same time.

### Encoder architecture

The global architecture of our approach is independent of the type of time series we train it on. On the other hand, if we want a reliable approximation, the encoder needs to be able to project the signals meaningfully, and therefore must be adapted to each type of data. In our case, we choose to focus on EEG data. As a result, we use SorsNet, introduced by Sors et al. [11], as encoder.

Figure 1: Global architecture of the model. Two signals are drawn from the dataset and encoded by the same encoder. The goal is to get the L2 distance between the encoded vectors as close as possible to DTW between the drawn signals. The encoded signals then pass through a decoder, which tries to reconstruct the original signal.

SorsNet is a 1D convolutional neural network consisting of a series of blocks. Each block contains a convolutional layer, a batch normalization layer and a ReLU activation function. We choose SorsNet because it has been shown to work well on sleep staging on EEG signals ([11]), thus we assume that the architecture allows for a good representation of EEG data. The network is also fully convolutional, with kernel sizes, strides and padding carefully chosen to always get a projected vector \(z\in\mathbb{R}^{1,H}\) as long as the length of the time series is less than or equal to 3000, the usual size for EEG data. This permits us to use the same model with the same weights for time series of different lengths, thus allowing us to mimic the DTW ability to compare time series of different lengths. Finally, the network being fully convolutional also enables low inference time.
### Decoder

We want to force the encoder to learn a meaningful embedding that keeps as much information about the original signals as possible, in order to improve the accuracy of the approximation of DTW. Inspired by Thill et al. [12], we first use an upsampling layer so that \(z\) is of the same dimension as the input signal \(x\), then use a Temporal Convolutional Network (TCN, Bai et al. [13]) to decode \(z\) and try to retrieve \(x\). We use \(q=[32,16,8,4,2,1]\) as dilation rates and \(k=20\) as kernel size. The choice of a TCN allows our decoder to be independent of the length of the time series \(x\) since all the layers are convolutional.

### Direct Regression

While the architecture of our encoder-decoder allows comparing signals of different lengths, a simpler architecture may work better on signals of fixed lengths. To investigate this, we also introduce a simpler architecture that we call **direct regression**. It takes as input pairs of signals \(\{x_{i},x_{j}^{\prime}\}_{i,j\in 1,\ldots,n}\), concatenates them to get a tensor \(x_{cat}\in\mathbb{R}^{B,L,2}\) with \(B\) the batch size, then feeds \(x_{cat}\) as input to the SorsNet encoder. Afterwards, a dense layer with batch normalization and ReLU activation processes the tensor, before a final dense layer outputs the predicted DTW value. In this case, we do not need a decoder since we directly get the value to predict. Everything else is kept identical to the siamese encoder-decoder architecture.

## 4 Experimental Setup

### Datasets

While ideally our model should be able to approximate DTW no matter the origin of the time series, we decide to first focus only on sleep data. We choose to use the SleepEDF-78 dataset [7], which contains recordings of various sensors during sleep. The participants were involved in two studies: Sleep Cassette, which focuses on the effect of age on sleep, and Sleep Telemetry, which focuses on the impact of temazepam on sleep.
For these two datasets, each PSG file contains two EEG channels (Fpz-Cz, Pz-Oz) sampled at 100 Hz, one EOG channel and one chin EMG channel. Following the literature (Phan et al. [14], Tsinalis et al. [15]), we decided to use only the Sleep Cassette files. To train our model, however, we are able to use all the different channels, while previous studies focus on the Fpz-Cz channel only. We randomly split the patients, keeping 70 patients for training and validation and 8 patients for testing. For each patient, we split a randomly chosen signal into cuts of size \(L\). The dataset is then made of \(N\) randomly selected cuts.

**Data preprocessing** Since we use multiple channels of the sleep files, we have to process various types of data. This creates scaling problems: some series have values in much larger ranges than others. Moreover, it leads to large ranges for the reference DTW values and thus for the training loss. To address this difficulty, we preprocess the dataset as follows. We first clip all the values in the dataset \(Ds\), i.e., for every value \(a_{k}\) in each signal \(x_{i}\) in \(Ds\), we compute \(p_{1}\) and \(p_{99}\), respectively the 1st and 99th percentiles. Then \(\forall a_{k}\in Ds\): \[a_{k}=\begin{cases}p_{1}&\text{if }a_{k}<p_{1}\\ p_{99}&\text{if }a_{k}>p_{99}\\ a_{k}&\text{otherwise}\end{cases} \tag{2}\] This limits the impact of outliers without losing much information. We then apply min-max normalization. It projects all the data to \([0,1]\), which bounds the reference DTW to \([0,L]\), where \(L\) is the length of the time series \(x\), since DTW is ultimately the sum of Euclidean distances along the alignment path. Knowing the lower and upper bounds of DTW allows us to normalize its values to \([0,1]\), greatly helping to balance the training losses.

**Creation of the DTW matrix** We then fill the ground truth DTW matrix \(Y_{DTW}\).
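The ground-truth values filling \(Y_{DTW}\) come from standard DTW, which is the classic dynamic program; a minimal sketch, using absolute difference as the pointwise cost (the exact cost used in the paper may differ):

```python
def dtw(x, y):
    """Standard DTW between two 1-D series of possibly different lengths,
    via dynamic programming over a (len(x)+1) x (len(y)+1) cost grid."""
    INF = float("inf")
    D = [[INF] * (len(y) + 1) for _ in range(len(x) + 1)]
    D[0][0] = 0.0
    for i in range(1, len(x) + 1):
        for j in range(1, len(y) + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # Extend the cheapest of the three admissible path predecessors
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[-1][-1]
```

For equal-length signals normalized to \([0,1]\), the cost of the diagonal path is at most \(L\), so the optimal DTW value is also bounded by \(L\), which is the bound used for normalizing the reference values.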
To do so, we randomly choose \(N_{pairs}\) pairs of signals and fill the corresponding parts of the ground truth matrix with the DTW results between the selected pairs of signals, following Courty et al. [8].

### Training Parameters

To speed up the training, we restrict the signals to length \(L=1000\) by randomly slicing them. We select \(N=10,000\) signals as explained in section 4.1 for the train set and randomly select \(N_{pairs}=10^{6}\) of the \(10^{8}\) possible pairs in order to train the model on a decent number of signals without overfitting; this also limits the training time. We do the same for the validation set and the test set with 1000 signals and 100,000 pairs. We use SorsNet as encoder, setting the dropout to 0 and replacing the classification layer by a dense layer with \(H=500\). We define the decoder as described in section 3.3 and use the Adam optimizer with a \(10^{-5}\) learning rate to optimize both the encoder and the decoder parameters at the same time. We set the batch size to 128 and \(\lambda\) to 1. We train for 50 epochs with early stopping if the validation loss does not improve for 8 straight epochs. Note that because of time constraints (training lasts approximately 20 hours on one TITAN V), no extensive hyperparameter search was done.

## 5 Study of performance on downstream tasks

### Illustration of Efficiency and Approximation Properties on a Nearest Neighbour Retrieval Task

In itself, the output value of DTW is not our main goal. What matters is the ability to compare time series and rank similarities between them. Therefore, instead of comparing the raw values of our approximation with DTW, once we have trained our model we study its performance on downstream tasks. We want to study how our model compares series and how close it is to DTW. To do so, we first select \(N_{t}\) signals in the test set. We then fit a nearest neighbor algorithm 1 on the test set, using our model (DeepDTW) as custom metric.
We do the same operation using DTW (we use the implementation from pyts by Faouzi et al. [16]) as custom metric. We select the top-1 nearest neighbor \(\tilde{x}_{top1}\) of each signal \(x\) among the \(N_{t}\) signals, with \(x\neq\tilde{x}_{top1}\), and evaluate the number of times \(\tilde{x}_{top1}\) is in the top-5 ranking of nearest neighbors according to DTW. Since we use random subsets of the test set, we run the same experiment 8 times for increasing numbers of signals \(N_{t}\). We also add the two main approximations of DTW, FastDTW and SoftDTW, to the comparison. Since the running time of SoftDTW depends on the hyperparameter \(\gamma\) (the closer \(\gamma\) is to 0, the more faithful SoftDTW is to standard DTW, but also the slower, as explained by Cuturi et al. [6]), we choose a middle ground with \(\gamma=0.1\).

Footnote 1: [https://scikit-learn.org/stable/modules/neighbors.html](https://scikit-learn.org/stable/modules/neighbors.html)

One of the major perks of DTW is its ability to compare time series of different lengths, and so a good approximation should mimic this feature. To study this property, we modify our dataset by restricting the size of the EEG signals to random lengths following the uniform sampling method of Tan et al. [17], i.e., for every selected signal \(x\), \(x\in\mathbb{R}^{L}\) with \(L\in[500;1000]\). While at inference no change is required for our siamese architecture to handle signals of varying lengths, we have to pad the signals to the size of the longest in a batch to make the direct regression architecture work. We show the results in table 1. We can see that the direct regression (DeepDTW Direct) model learns to order signals closest to DTW, even outperforming FastDTW when the task is easy or moderately easy (50-200 signals). FastDTW is less affected as the task gets harder and is the best with 400 and 600 signals in the set.
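The evaluation protocol above can be sketched as a brute-force retrieval check, where `approx` and `exact` are distance callables (e.g., a trained model and standard DTW); names are ours:

```python
def retrieval_agreement(signals, approx, exact, k=5):
    """Fraction of query signals whose top-1 neighbour under `approx` lies
    within the top-k neighbours under `exact` (the reference metric)."""
    hits = 0
    for i, x in enumerate(signals):
        others = [j for j in range(len(signals)) if j != i]
        # Top-1 neighbour according to the approximation
        top1 = min(others, key=lambda j: approx(x, signals[j]))
        # Top-k neighbours according to the reference metric
        topk = sorted(others, key=lambda j: exact(x, signals[j]))[:k]
        hits += top1 in topk
    return hits / len(signals)
```

When the approximation equals the reference metric, the agreement is 1.0 by construction.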
Both our approximations outperform SoftDTW regardless of the number of signals, with the direct approach \(\sim\)43 percentage points (pp) above when the task is easy and still \(\sim\)21pp above in the hardest setting, while the siamese approach is \(\sim\)20pp above at 50 signals and about equal at 600 signals. This illustrates that our model can mimic well enough the ability of DTW to compare series of different lengths.

### Sleep Staging

To complete the study of the faithfulness of our approximation, we evaluate how our model can be used in a time-series classification context. We choose the sleep staging task, which consists of classifying segments of sleep data into different classes.

**Dataset** We use the test set from the SleepEDF dataset, with the same processing as in section 4.1. This time we use the labels of the sleep stages. The classes in SleepEDF include wake (W), rapid eye movement (REM), four sleep stages of different depths (N1, N2, N3, N4), M (movement time) and '?' (no data). Following Mousavi et al. [18], we merge stages N3 and N4, and remove the sequences labelled M and '?'. This results in sequences of signals of \(length=3000\) distributed over 5 different classes.

**KNN classification** We want to compare our approximation to DTW and its other main approximations for sleep staging. To do that, we create 4 instances of KNN classifiers, each with a different base metric: standard DTW, our model with the direct architecture (DeepDTW Direct), our model with the siamese architecture (DeepDTW Siamese) and FastDTW. We then run 5 iterations of the following experiment: select a number \(N\) of random signals in the set and split them into training and test sets in 50/50 proportion, fit the KNN instance of each metric on the training set and compute the corresponding macro F1 score (MF1) on the test set. We modify the SleepEDF test set to get time series of varying length in the same way as in section 5.1.
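A k-NN classifier over an arbitrary base metric, as used in this comparison, can be sketched as follows (a minimal stand-in for a library KNN with a custom metric callable):

```python
from collections import Counter

def knn_predict(train_X, train_y, x, metric, k=3):
    """Classify x by majority vote among its k nearest training signals
    under the given base metric (e.g., DTW or an approximation of it)."""
    nearest = sorted(range(len(train_X)), key=lambda i: metric(x, train_X[i]))[:k]
    votes = Counter(train_y[i] for i in nearest)
    return votes.most_common(1)[0][0]
```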
We show the results on the SleepEDF dataset in table 2. Overall, both our approximation and FastDTW behave very similarly to standard DTW. Small variations due to variance aside, we can see that both our models and FastDTW lead to classification scores very similar to those of DTW, which shows that they tend to compare series in the same way.

### Computation Time Study

While the main drawback of FastDTW is that it is not differentiable, the main drawback of SoftDTW is its low computation speed. To illustrate this point, we study the time needed to compute DTW between two uni-dimensional time series 1000 times. To compare all the metrics fairly, all the experiments are run on CPU only. We keep \(\gamma=0.1\) for SoftDTW. We show the results in figure 2. We can see that the computation times of both versions of our model (in blue and black) increase very slowly with the size of the time series, while the running times of FastDTW and especially SoftDTW increase quickly. At \(L=3000\), the standard size for sleep staging, our direct model is 100 times faster than SoftDTW. Our direct model is only approximately 3 times slower than FastDTW, can be run on GPU, and is differentiable, as shown in the following section.
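The timing protocol amounts to a simple wall-clock benchmark; a sketch (single process, names are ours):

```python
import time

def time_metric(metric, x, y, runs=1000):
    """Total wall-clock time in seconds for `runs` evaluations of `metric`
    on a fixed pair (x, y), mirroring the CPU benchmark above."""
    start = time.perf_counter()
    for _ in range(runs):
        metric(x, y)
    return time.perf_counter() - start
```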
\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline N signals & SoftDTW(0.1) & DeepDTW Direct & DeepDTW Siamese & FastDTW \\ \hline 50 & 42.25 \(\pm\) 4.79 & **85.75 \(\pm\) 4.74** & 62.0 \(\pm\) 6.48 & 74.25 \(\pm\) 7.45 \\ 100 & 28.12 \(\pm\) 3.82 & **75.25 \(\pm\) 1.39** & 45.25 \(\pm\) 6.76 & 65.25 \(\pm\) 3.67 \\ 200 & 23.06 \(\pm\) 2.16 & **61.0 \(\pm\) 2.76** & 33.0 \(\pm\) 3.22 & 57.56 \(\pm\) 4.44 \\ 400 & 20.22 \(\pm\) 1.33 & 47.31 \(\pm\) 3.12 & 22.84 \(\pm\) 2.43 & **52.47 \(\pm\) 1.83** \\ 600 & 18.0 \(\pm\) 1.43 & 39.88 \(\pm\) 2.33 & 18.52 \(\pm\) 0.83 & **49.75 \(\pm\) 1.86** \\ \hline \end{tabular} \end{table}

Table 1: Percentage of times the closest signal (of length \(L\in[500;1000]\), different from the query) according to each method is among the top 5 closest according to DTW. N signals is the number of signals in the test set.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline N signals & DTW & DeepDTW Direct & DeepDTW Siamese & FastDTW \\ \hline 500 & 0.25 \(\pm\) 0.02 & 0.26 \(\pm\) 0.04 & 0.21 \(\pm\) 0.02 & 0.26 \(\pm\) 0.01 \\ 1000 & 0.34 \(\pm\) 0.02 & 0.33 \(\pm\) 0.01 & 0.37 \(\pm\) 0.02 & 0.35 \(\pm\) 0.02 \\ 2000 & 0.44 \(\pm\) 0.01 & 0.44 \(\pm\) 0.01 & 0.43 \(\pm\) 0.01 & 0.43 \(\pm\) 0.01 \\ \hline \end{tabular} \end{table}

Table 2: Macro F1 score of sleep staging on SleepEDF with KNN using various base metrics. We run 5 iterations. For any given iteration, all the methods use the same data.

### Differentiability

The goal of our model is to be accurate, fast, and differentiable. In this section, we illustrate the latter. Chang et al. [19] introduced a way to learn class-specific prototypes in order to classify time series. For a given dataset \(D\), they learn as many prototypes \(p\) as the number of classes \(k\) in the dataset: the inter-class distance between prototypes should be as large as possible, while at the same time a prototype should represent its class well enough to get good classification results.
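One way to obtain gradients for such prototypes is to hold a warping path fixed: along a fixed path, the alignment cost reduces to a plain sum of pointwise differences between paired points, which is differentiable with respect to the prototype. A sketch, using squared differences as a smooth stand-in for the Euclidean pointwise cost (names are ours):

```python
def fixed_path_cost_and_grad(x, p, path):
    """Cost of aligning signal x to prototype p along a fixed warping path
    (list of index pairs), with squared pointwise differences, and its
    gradient with respect to the prototype p."""
    cost = 0.0
    grad = [0.0] * len(p)
    for i, j in path:
        diff = p[j] - x[i]
        cost += diff * diff
        grad[j] += 2.0 * diff  # d(diff^2)/dp[j]
    return cost, grad
```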
Once prototypes are learned, time series can be classified with the nearest neighbor algorithm over the prototypes. The prototypes are learned by computing DTW between a given signal \(x_{n}\in D\) and each prototype \(p_{k}\) corresponding to each class \(k\). Since the idea is to learn the prototypes end-to-end, to circumvent the non-differentiability of DTW, the authors choose to differentiate DTW through its determined form along the warping path, i.e., the sum of Euclidean distances of paired points, as done by Cai et al. [10]. We apply the method to the SleepEDF dataset and compare the classification scores obtained when the distances are computed with DTW versus with our approximation. We show the results in figure 3. We can see that with our model used as the distance between signals, we learn prototypes that represent the classes better, since the classification accuracy is higher. It is also faster to train, even on CPU (854 minutes for our method, versus 1773 minutes). We have shown that our approximation performs well in time series classification tasks, is fast, and can be used to learn end-to-end. However, a good approximation of DTW should perform well independently of the dataset. This is what we illustrate in the following section.

### Adaptation to other datasets

In this section, we study the generalization capacity of our model to similar EEG data.

**Dataset** Following other sleep staging related contributions (Olesen et al. [20], Phan et al. [21], Eldele et al. [22]), we evaluate the generalization capabilities of our model on another widely used EEG dataset: SHHS, introduced by Quan et al. [23] and later updated by Zhang et al. [24]. SHHS is a multichannel health signal database aimed at studying the effect of sleep-disordered breathing on cardiovascular diseases. There are two rounds of PSG records in the dataset, SHHS-1 for Visit 1 and SHHS-2 for Visit 2. We only focus on the first set in this section. It contains 5791 subjects.
Similar to other databases like SleepEDF annotated with the R&K rule, the N3 and N4 stages were merged into a single N3 stage and MOVEMENT (M) and UNKNOWN (?) signals were removed. As we did in section 5.2, for each signal we randomly choose a channel among the pre-selected ones (EEG, EOG(L), EOG(R), ECG and EMG channels for this study).

**Nearest Neighbor Retrieval** We reproduce the experiments from section 5.1, this time on the SHHS dataset. We choose to only use the direct architecture as it gave the best results on SleepEDF.

Figure 2: Time needed in seconds to run 1000 computations **on CPU** of the metric, depending on the lengths of uni-dimensional signals.

To study how our model transfers its knowledge to other datasets, we first take the best model learned on SleepEDF according to the nearest neighbor test and directly use it, without fine-tuning, to run the same test on the SHHS dataset. We use the same preprocessing as in section 4.1. We also train our approximation model on the SHHS dataset in the same way as in section 4.2 and run the nearest neighbor experiment on a separate test set. We summarize the results in table 3. The model learned on SleepEDF and tested on SHHS gives almost identical results to the one learned on SHHS, showing that for similar data with consistent preprocessing, our approximation model generalizes very well to new data.

**KNN Classification on SHHS** We reproduce the experiment from section 5.2 on the SHHS dataset. We apply exactly the same processing to SHHS, generating time series of varying length. We compare the classification performance of 3 KNNs, one based on FastDTW, one on our direct regression model learned on SleepEDF (DeepDTW SleepEDF) and one learned on SHHS (DeepDTW SHHS). We show the results in table 4. Our models are very close to each other, showing that they also generalize well in the classification context.
\begin{table} \begin{tabular}{|c|c|c|c|} \hline N signals & DeepDTW SHHS & DeepDTW SleepEDF & FastDTW \\ \hline 500 & 0.250 \(\pm\) 0.02 & 0.246 \(\pm\) 0.02 & 0.249 \(\pm\) 0.01 \\ 1000 & 0.329 \(\pm\) 0.01 & 0.327 \(\pm\) 0.01 & 0.342 \(\pm\) 0.04 \\ 2000 & 0.427 \(\pm\) 0.01 & 0.437 \(\pm\) 0.01 & 0.472 \(\pm\) 0.01 \\ \hline \end{tabular} \end{table}

Table 4: Macro F1 score of sleep staging on SHHS with KNN using various base metrics. DeepDTW SleepEDF stands for the model learned on the SleepEDF set and not fine-tuned, while DeepDTW SHHS indicates the model trained on SHHS. Both use the direct architecture.

Figure 3: Accuracy score of prototype-based classification on the validation set during the training of class-specific prototypes. The metric is used to compute the distance between an input signal and the prototypes, and so is crucial for the training. FastDTW is used to compute the alignment path following Cai et al. [10] (see section 5.4).

\begin{table} \begin{tabular}{|c|c|c|} \hline N signals & Model trained on SHHS & Model trained on SleepEDF \\ \hline 50 & \(0.89\pm 0.04\) & \(0.86\pm 0.03\) \\ 100 & \(0.75\pm 0.05\) & \(0.74\pm 0.04\) \\ 200 & \(0.61\pm 0.03\) & \(0.62\pm 0.02\) \\ 400 & \(0.50\pm 0.03\) & \(0.48\pm 0.02\) \\ 600 & \(0.44\pm 0.02\) & \(0.43\pm 0.02\) \\ \hline \end{tabular} \end{table}

Table 3: Nearest neighbor test score on SHHS. The model trained on SleepEDF is not fine-tuned.

## 6 Conclusion and future work

In this paper, we presented and compared two architectures to approximate DTW. The first creates an embedding in which the Euclidean distance mimics DTW; such an embedding is obtained by training a siamese encoder-decoder model to both regress the DTW value and retrieve the original signals from the embedded ones. The second method concatenates signals to directly predict the DTW value, allowing for better retrieval performance and slightly faster training time, and is therefore the better approach.
We showed how our approximations can be used in end-to-end training, are faster and more faithful to DTW than other approximations, and perform well in time series classification tasks. Finally, we also showed that these results extend to similar datasets. However, although multidimensional time series are quite common in DTW use cases, they were not addressed in this paper and are left for future work. In particular, being able to embed signals in a space where the Euclidean distance mimics DTW regardless of the length or number of dimensions of the signals would be the end goal. Finally, a perfect approximation of DTW should be usable on various types of data without the need for fine-tuning. As has been done in the literature for natural language processing, we leave for future work the construction of a large dataset of various types of time series and the training of a generalist model able to approximate DTW on all those types of data.
2305.06329
Similarity of Neural Network Models: A Survey of Functional and Representational Measures
Measuring similarity of neural networks to understand and improve their behavior has become an issue of great importance and research interest. In this survey, we provide a comprehensive overview of two complementary perspectives of measuring neural network similarity: (i) representational similarity, which considers how activations of intermediate layers differ, and (ii) functional similarity, which considers how models differ in their outputs. In addition to providing detailed descriptions of existing measures, we summarize and discuss results on the properties of and relationships between these measures, and point to open research problems. We hope our work lays a foundation for more systematic research on the properties and applicability of similarity measures for neural network models.
Max Klabunde, Tobias Schumacher, Markus Strohmaier, Florian Lemmerich
2023-05-10T17:33:48Z
http://arxiv.org/abs/2305.06329v3
# Similarity of Neural Network Models: A Survey of Functional and Representational Measures

###### Abstract

Measuring similarity of neural networks has become an issue of great importance and research interest in order to understand and utilize differences between neural networks. While there are several perspectives on how neural networks can be similar, we specifically focus on two complementing perspectives, i.e., (i) representational similarity, which considers how _activations_ of intermediate neural layers differ, and (ii) functional similarity, which considers how models differ in their _outputs_. In this survey, we provide a comprehensive overview of these two families of similarity measures for neural network models. In addition to providing detailed descriptions of existing measures, we summarize and discuss results on the properties and relationships of these measures, and point to open research problems. Further, we provide practical recommendations that can guide researchers as well as practitioners in applying the measures. We hope our work lays a foundation for our community to engage in more systematic research on the properties, nature and applicability of similarity measures for neural network models.

## 1 Introduction

Understanding and measuring similarity of neural networks is a complex problem, as there are many perspectives on how such models can be similar. In this work, we specifically focus on two broad and complementary perspectives: _representational_ and _functional measures of similarity_ (see Figure 1). Representational similarity measures assess how activations of intermediate neural layers differ, whereas functional similarity measures specifically compare the outputs of neural networks with respect to their learning task. Neither perspective on its own is sufficient to gain detailed insights into the similarity of neural network models. Seemingly similar representations can still yield different outputs, and conversely, similar outputs can result from very different representations.
In that sense, combining these two perspectives provides a more comprehensive approach to analyze similarity between neural networks at all layers. Measures to quantify similarity of neural network models have been widely applied in the literature, including research on learning dynamics [1, 2], effect of width and depth [3], differences between supervised and unsupervised models [4], robustness [5, 6], effect of data and model updates [7, 8, 9, 10], evaluating knowledge distillation [11], designing ensembles [12], language representation [13, 14, 15], and generalizability [16, 17, 18]. Given this broad range of research on neural network similarity, numerous corresponding measures have been proposed and applied, often with many lines of research being disconnected from each other. With this work, we provide a comprehensive overview of measures for representational similarity and functional similarity that gives a unified perspective on the existing literature and can inform and guide both researchers and practitioners interested in understanding and comparing neural network models. Measures for representational or functional similarity have been covered in prior work to some extent. Regarding representational similarity, measures for matrix correlation have been reviewed by [19, 20]. These surveys, however, do not cover more recent measures or do not consider the context of deep learning. A recent survey by Räuker et al. [21] reviews methods to interpret the inner workings of neural networks, but discusses representational similarity measures only very briefly. Functional similarity measures have been surveyed in the context of ensemble learning [22, 23], inter-rater agreement [24, 25, 26], and image and text generation scenarios [27, 28], with each considering different subsets of the measures covered in this survey.
Consequently, to the best of our knowledge, this survey represents the first comprehensive review of representational and functional similarity measures for neural networks. This survey makes the following contributions:

1. **Systematic and comprehensive overview:** We formally define the problem of measuring functional and representational similarity in neural networks and provide a systematic and comprehensive overview of existing measures in the context of classification.
2. **Unified terminology:** We provide detailed definitions, explanations and categorizations for each measure in a unified manner, facilitating the understanding of commonalities and differences between measures.
3. **Practical properties and applicability:** We discuss the practical properties of existing measures, such as robustness to noise or confounding issues, and connections between existing measures to guide researchers and practitioners in applying these measures.
4. **Open research challenges:** We highlight unresolved issues of similarity measures and point out research gaps that can be addressed in the future to improve our understanding of neural networks in general.

While we focus on measures for representational and functional similarity, we acknowledge various other approaches to comparing neural networks, such as probing [29], weight masking [30], and visualization [31], that will not be discussed in this work. The rest of this article is structured as follows. In Section 2, we formally introduce the problem of measuring functional and representational similarity in neural network models. Afterward, we provide thorough overviews of both measures for representational similarity (Section 3) and functional similarity (Section 4). In Section 5, we summarize research on connections between these measures as well as practical properties of similarity measures.
Section 6 provides a discussion on the relationship between both perspectives on similarity, open research problems, and practical considerations when applying the measures. Finally, in Section 7, we conclude our survey and provide pointers to future research on open problems. ## 2 Similarity of Neural Network Models We consider the problem of comparing neural networks, which we assume to have the form \[f=f^{(L)}\circ f^{(L-1)}\circ\cdots\circ f^{(1)}, \tag{1}\] with each function \(f^{(l)}:\mathbb{R}^{D}\longrightarrow\mathbb{R}^{D^{\prime}}\) denoting a single layer, and a total number of \(L\in\mathbb{N}\) layers. These networks operate on a set of \(N\) given inputs \(\{\mathbf{X}_{i}\}_{i=1}^{N}\), which we typically assume to be vectors in \(\mathbb{R}^{p}\), \(p\in\mathbb{N}\), although these can also be higher-dimensional structures as occurring in image or video data. For simplicity, we collect these inputs in a matrix \(\mathbf{X}\in\mathbb{R}^{N\times p}\) so that the \(i\)-th row \(\mathbf{X}_{i}\) corresponds to the \(i\)-th input. To further simplify notation, we will also denote individual inputs \(\mathbf{X}_{i}\) as _instances_\(i\in\{1,\dots,N\}\). We generally do not make any assumption on the number of features \(p\), the depth of the network \(L\), the width or activation function of any layer \(f^{(l)}\), or the training objective. Similarity of neural network models is then quantified by similarity measures \(m\). For simplicity, we also consider dissimilarity measures that quantify _distance_ between models as similarity measures, since these concepts are generally equivalent. While there are other approaches to analyze the similarity of neural network models (e.g., probing [29], or visualization [31]), in our survey we specifically consider two kinds of similarity, namely _representational similarity_ and _functional similarity_. 
Representational similarity measures consider how the inner activations of neural network models differ, whereas functional similarity measures compare the output behavior of neural networks with respect to the given (classification) task. Compared to other approaches, these kinds of similarity focus on how given inputs \(\mathbf{X}_{i}\) are processed, rather than considering architecture or plain weights of neural network models. Generally, there is no ground-truth as to whether two neural network models are similar. Hence, representational and functional similarity measures can offer complementary notions of similarity, allowing for more nuanced insights into model similarity when combining these two perspectives. In the following, we give more thorough definitions of representational and functional similarity, and provide a set of properties to categorize these measures. In general, we introduce notation for each variable the first time it is needed. However, we provide an overview of notation conventions as well as a table of notations in Appendix A.

### 2.1 Representational Similarity

A representational similarity measure compares neural networks by measuring similarity between activations of a fixed set of inputs at intermediate layers \(f^{(l)}\). Given such inputs \(\mathbf{X}\), we define the representations of model \(f\) at layer \(l\) as a matrix \[\mathbf{R}:=\mathbf{R}^{(l)}=\begin{bmatrix}\left(f^{(l)}\circ f^{(l-1)}\circ\cdots\circ f^{(1)}\right)\left(\mathbf{X}_{1}\right)\\ \vdots\\ \left(f^{(l)}\circ f^{(l-1)}\circ\cdots\circ f^{(1)}\right)\left(\mathbf{X}_{N}\right)\end{bmatrix}\in\mathbb{R}^{N\times D}, \tag{2}\] where \(D:=D^{(l)}\) denotes the number of neurons in layer \(l\).
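Eq. (2) amounts to running each input through the first \(l\) layers and stacking the results row-wise; a minimal sketch with layers as plain callables (names are ours):

```python
def representation_matrix(layers, X, l):
    """R^(l): row i holds the activation of input X[i] after the first l
    layers f^(1), ..., f^(l), each given as a callable on vectors."""
    R = []
    for x in X:
        h = x
        for f in layers[:l]:  # compose f^(l) o ... o f^(1)
            h = f(h)
        R.append(h)
    return R
```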
The activations of instance \(i\) then correspond to the \(i\)-th row of \(\mathbf{R}\), which we denote as _instance representation_ \(\mathbf{R}_{i}:=\left(f^{(l)}\circ\cdots\circ f^{(1)}\right)\left(\mathbf{X}_{i}\right)\in\mathbb{R}^{D}\). The activations of single neurons over all instances correspond to the columns of \(\mathbf{R}\), and we denote the \(j\)-th column of \(\mathbf{R}\) as \(\mathbf{R}_{\cdot,j}\). Like the inputs, we also consider the instance representations to be vectors even though in practice, for instance in convolutional neural networks, these activations can also be matrices. In such a case, these representations can be flattened (see Section 2.1.2). A representational similarity measure is then a mapping \(m:\mathbb{R}^{N\times D}\times\mathbb{R}^{N\times D^{\prime}}\longrightarrow\mathbb{R}\) that assigns a similarity score \(m(\mathbf{R},\mathbf{R^{\prime}})\) to a pair of representations \(\mathbf{R}\), \(\mathbf{R^{\prime}}\), which are derived from different models \(f,f^{\prime}\), but use the same inputs \(\mathbf{X}\). Without loss of generality, we assume that \(D\leq D^{\prime}\), though some measures require that \(D=D^{\prime}\). In such cases, preprocessing techniques can be applied (see Section 2.1.2).

Figure 1: A conceptual overview of representational and functional similarity. In this illustration, two neural network models \(f,f^{\prime}\) are compared, but one could also consider similarity of more than two models. Functional similarity measures mainly consider the outputs \(\mathbf{O},\mathbf{O^{\prime}}\) of the compared models, whereas representational similarity measures consider their intermediate representations \(\mathbf{R},\mathbf{R^{\prime}}\). All models get the same inputs. Specifically in classification tasks, outputs have clear and universal semantics, so that they can be compared in a straightforward manner.
In contrast, the geometry of the representations requires more care when measuring their similarity. In the illustration above, for instance, rotating \(\mathbf{R^{\prime}}\) by 90 degrees would yield an alignment of representations after which they would appear much more similar. Combined, representational and functional measures cover all layers of the models.

#### 2.1.1 Equivalence of Representations

Even if two representation matrices \(\mathbf{R},\mathbf{R^{\prime}}\in\mathbb{R}^{N\times D}\) are different on an element-per-element basis, one may still consider them to be equivalent, i.e., perfectly similar, written as \(\mathbf{R}\sim\mathbf{R^{\prime}}\). An intuitive example for such a case would be when representations only differ in their sign, i.e., \(\mathbf{R}=-\mathbf{R^{\prime}}\), or when representations can be rotated onto one another. Such notions of equivalence can be formalized in terms of bijective mappings (transformations) \(\varphi:\mathbb{R}^{N\times D}\longrightarrow\mathbb{R}^{N\times D}\), for which it then holds that \(\varphi(\mathbf{R})=\mathbf{R^{\prime}}\). Which kinds of transformations constitute equivalence between representations may vary depending on the context at hand. For instance, equivalence up to rotation does not make sense if some feature dimensions are already aligned with fixed axes, e.g., in interpretable word embeddings where a dimension could represent a scale between two polar opposites like "bright" and "dark" [32]. Thus, we define equivalence of representations in terms of classes of transformations \(\mathcal{T}\), and call two representations \(\mathbf{R},\mathbf{R^{\prime}}\) equivalent with respect to a class of transformations \(\mathcal{T}\), if there is a \(\varphi\in\mathcal{T}\) such that \(\varphi(\mathbf{R})=\mathbf{R^{\prime}}\). One might argue that a similarity measure should not be able to distinguish two representations if the representations are considered equivalent.
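Equivalence up to rotation or reflection, for example, can be checked numerically via the orthogonal Procrustes problem; a numpy sketch (the function name is ours): if the residual after the best orthogonal alignment is (near) zero, there is a \(\mathbf{Q}\) in the orthogonal group with \(\mathbf{R}\mathbf{Q}=\mathbf{R^{\prime}}\).

```python
import numpy as np

def orthogonal_align(R, Rp):
    """Solve min_{Q orthogonal} ||R Q - Rp||_F via the SVD of R^T Rp, and
    return the optimal Q together with the alignment residual; a near-zero
    residual means R and Rp are equivalent up to rotation/reflection."""
    U, _, Vt = np.linalg.svd(R.T @ Rp)
    Q = U @ Vt  # closest orthogonal matrix mapping R onto Rp
    return Q, float(np.linalg.norm(R @ Q - Rp))
```

For instance, a representation and its 90-degree rotation align with residual zero, illustrating the kind of equivalence discussed above.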
We call a representational similarity measure \(m\) invariant to a class of transformations \(\mathcal{T}\), if for all \(\mathbf{R},\mathbf{R^{\prime}}\in\mathbb{R}^{N\times D}\) and all \(\varphi\in\mathcal{T}\) it holds that \(m(\mathbf{R},\mathbf{R^{\prime}})=m(\mathbf{R},\varphi(\mathbf{R^{\prime}}))=m(\varphi(\mathbf{R} ),\mathbf{R^{\prime}})\). In related literature, there are six main classes of transformations under which representations are considered equivalent, and that representational similarity measures are often designed to be invariant to: * **Permutations (PT).** A similarity measure \(m\) is invariant to permutations if swapping columns of the representation matrices \(\mathbf{R}\), that is, reordering neurons, does not affect the resulting similarity score. Letting \(S_{D}\) denote the set of all permutations on \(\{1,\ldots,D\}\), and for \(\pi\in S_{D}\), \(\mathbf{P}_{\pi}=(p_{i,j})\in\mathbb{R}^{D\times D}\) denote the permutation matrix where \(p_{i,j}=1\) if \(\pi(i)=j\) and \(p_{i,j}=0\) otherwise, the class of all permutation transformations is given by the set \[\mathcal{T}_{\text{PT}}=\{\mathbf{R}\mapsto\mathbf{R}\mathbf{P}_{\pi}:\pi\in S_{D}\}.\] (3) * **Orthogonal transformations (OT).** As noted in an earlier example, one might intuitively consider two representations equivalent if they can be rotated onto each other. Next to rotations, the class of orthogonal transformations also includes permutations and reflections. Letting \(\mathrm{O}(D):=\{\mathbf{Q}\in\mathbb{R}^{D\times D},\mathbf{Q^{\mathsf{T}}Q}=\mathbf{I}_{D}\}\) denote the orthogonal group, the set of these transformations is given by \[\mathcal{T}_{\text{OT}}=\{\mathbf{R}\mapsto\mathbf{R}\mathbf{Q}:\mathbf{Q}\in\mathrm{O}(D)\}.\] (4) * **Scaling (IS).** Scaling all elements of a representation \(\mathbf{R}\) identically (isotropic scaling) does not change the angles between instance representations \(\mathbf{R}_{i}\). 
Thus, for some measures it can be beneficial to be invariant to such rescalings. The set of all isotropic scaling transformations is defined as \[\mathcal{T}_{\text{IS}}=\{\mathbf{R}\mapsto a\cdot\mathbf{R}:a\in\mathbb{R}_{+}\}.\] (5) * **Invertible linear transformations (ILT).** A broader class of transformations is given by the invertible linear transformations \[\mathcal{T}_{\text{ILT}}=\{\mathbf{R}\mapsto\mathbf{R}\mathbf{A}:\mathbf{A}\in\mathrm{GL}(D,\mathbb{R})\},\] (6) where \(\mathrm{GL}(D,\mathbb{R})\) denotes the general linear group of all invertible matrices \(\mathbf{A}\in\mathbb{R}^{D\times D}\). This class of transformations also includes orthogonal transformations and rescalings. In case we have representations with \(N<D\), i.e., fewer instances than features, equivalence or invariance with respect to invertible linear transformations is not desirable, since any pair of representations \(\mathbf{R},\mathbf{R^{\prime}}\) with full rank will be equivalent [33]. Further, invertible linear transformations include normalization layers, which demonstrably influence neural network convergence [34]. Thus, invariance to invertible linear transformations may lead to overestimated representational similarity. * **Translations (TR).** If the angles between instance representations \(\mathbf{R}_{i}\) are not of concern, one might argue that two representations are equivalent if they can be mapped onto each other by adding a constant vector. In that regard, a measure \(m\) is invariant to translations if it is invariant to the set of all mappings \[\mathcal{T}_{\text{TR}}=\{\mathbf{R}\mapsto\mathbf{R}+\mathbf{1}\mathbf{b}^{\mathsf{T}}:\mathbf{b}\in\mathbb{R}^{D}\},\] (7) where \(\mathbf{1}:=\mathbf{1}_{D}\in\{1\}^{D}\) is a vector of \(D\) ones.
* **Affine transformations (AT).** The most general class of transformations that is typically considered for representations is given by the set of affine transformations \[\mathcal{T}_{\text{AT}}=\{\mathbf{R}\mapsto\mathbf{R}\mathbf{A}+\mathbf{1}\mathbf{b}^{\mathsf{T}}: \mathbf{A}\in\mathrm{GL}(D,\mathbb{R}),\mathbf{b}\in\mathbb{R}^{D}\}.\] (8) This class of transformations in particular also includes rescalings, translations, orthogonal transformations, and invertible linear transformations. If \(\mathbf{b}=0\), this class reduces to the class of invertible linear transformations. We depict the hierarchy of these transformation classes in Figure 2. In practice, representational similarity measures are designed with specific invariances dependent on context. Many of these measures are invariant to orthogonal transformations and isotropic scaling. #### 2.1.2 Preprocessing of Representations Many of the representational similarity measures assume certain properties of the representations \(\mathbf{R},\mathbf{R^{\prime}}\) that in practice are not always given. In these cases, the representations need to be preprocessed depending on the given assumptions. Overall, there are three kinds of preprocessing that may have to be applied, namely a) adjusting dimensionality, b) normalization, and c) flattening of representations. We briefly describe common techniques for these preprocessing problems. Adjusting Dimensionality. Many of the representational similarity measures presented in Section 3 implicitly assume that the representations \(\mathbf{R},\mathbf{R^{\prime}}\) have the same dimensionality, i.e., \(D=D^{\prime}\). In practice, however, this is not always the case. Thus, if \(D<D^{\prime}\), some preprocessing technique must be applied to match the dimensionality. Two techniques are commonly recommended for this purpose, namely zero-padding and dimensionality reduction, such as principal component analysis (PCA) [35, 36].
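As an illustration of the containment relations depicted in Figure 2, the following numpy sketch (purely illustrative; not part of any surveyed method) samples a transformation from several classes and verifies that permutations are orthogonal, orthogonal maps are invertible, and a translation is the affine map with \(\mathbf{A}=\mathbf{I}_{D}\):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4

# A permutation matrix (class PT): reorders the neurons (columns).
P = np.eye(D)[rng.permutation(D)]
# Every permutation matrix is orthogonal (P^T P = I), so PT is contained in OT.
assert np.allclose(P.T @ P, np.eye(D))

# A random orthogonal matrix (class OT), obtained via QR decomposition.
Q, _ = np.linalg.qr(rng.standard_normal((D, D)))
assert np.allclose(Q.T @ Q, np.eye(D))
# Every orthogonal matrix is invertible (class ILT): |det(Q)| = 1 != 0.
assert abs(abs(np.linalg.det(Q)) - 1.0) < 1e-10

# Isotropic scaling (class IS) is the special case A = a * I of ILT.
a = 2.5
assert np.linalg.matrix_rank(a * np.eye(D)) == D

# An affine map R -> R A + 1 b^T (class AT) with A = I reduces to a translation (TR).
R = rng.standard_normal((6, D))
b = rng.standard_normal(D)
R_translated = R @ np.eye(D) + np.outer(np.ones(6), b)
assert np.allclose(R_translated, R + b)
```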
When zero-padding, the dimension \(D\) of representation \(\mathbf{R}\) is inflated by appending \(D^{\prime}-D\) columns of zeros to \(\mathbf{R}\). PCA, conversely, reduces the dimension of the representation \(\mathbf{R}^{\prime}\) by removing the \(D^{\prime}-D\) lowest-information components from the representation. Normalization. Some representational similarity measures assume that the representations are mean-centered in the columns [33, 36, 1]. This normalization technique effectively constitutes a translation of the representations, which alters the angles between instance representations. Similarly, double centering in both rows and columns changes the angles. Another common assumption is that the individual representations have unit norm, which can only be achieved by rescaling each instance representation accordingly [37, 38]. While this normalization approach preserves angles between single instance representations, the Euclidean distances are altered. Therefore, any normalization has to be applied with caution, as preprocessing might alter the inner representation structure; for instance, neural embedding models such as skip-gram based embeddings model distance between representations in terms of angles. Flattening. Representational similarity measures expect input matrices \(\mathbf{R}\in\mathbb{R}^{N\times D}\). However, some models such as convolutional neural networks (CNNs) produce representations of more than two dimensions, making them incompatible with these measures. Therefore, such multidimensional representations would have to be flattened. For this preprocessing, the specific properties of these representations need to be taken into account. As an example, representations of CNNs are usually modelled as elements of \(\mathbb{R}^{N\times h\times w\times c}\), where \(h,w\) are the height and the width of the feature maps, and \(c\) is the number of channels.
While it is possible to flatten these representations into matrices \(\mathbf{R}\in\mathbb{R}^{N\times hwc}\), permuting the features of these flattened representations disregards the spatial information in the original feature map. As a solution, Williams et al. [36] recommend flattening CNN representations to the shape of \(Nhw\times c\), so that a permutation only affects the channels, and spatial differences can still be recognized. However, this flattening is only possible if the resulting effective number of inputs \(Nhw\) matches for both models, or if one feature map is upsampled accordingly. Aside from issues regarding the structure of representations, the computational cost of the representational similarity measure may also be strongly affected by the format of the flattened representations. Figure 2: Invariances of representational similarity measures build a hierarchy. Arrows describe implication, with the top invariance being the more general one. ### 2.2 Functional Similarity Measures Functional similarity measures compare neural networks by measuring similarity of their output behavior [39]. Given a set of inputs \(\mathbf{X}\) and a neural network \(f\) that is trained for a classification task on \(C\) classes, we denote the matrix of its outputs as: \[\mathbf{O}:=\begin{bmatrix}f(\mathbf{X}_{1})\\ \vdots\\ f(\mathbf{X}_{N})\end{bmatrix}\in\mathbb{R}^{N\times C}. \tag{9}\] Each row \(\mathbf{O}_{i}=f(\mathbf{X}_{i})\in\mathbb{R}^{C}\) denotes the output with respect to input \(\mathbf{X}_{i}\). This vector-based output naturally includes _soft predictions_, where each element \(\mathbf{O}_{i,c}\) denotes the probability or decision-function score for class \(c\), as well as _hard predictions_ for multiclass problems, where \(\mathbf{O}_{i,c}=1\) if the model predicts class \(c\) for input \(\mathbf{X}_{i}\), and \(\mathbf{O}_{i,c}=0\) otherwise.
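A toy construction of such output matrices (an illustrative numpy sketch; the model \(f\) is replaced by random raw scores) for soft and hard predictions:

```python
import numpy as np

rng = np.random.default_rng(0)
N, C = 5, 3

# Stand-in for model outputs f(X_i): random raw scores.
scores = rng.standard_normal((N, C))

# Soft predictions: a softmax turns each row into class probabilities.
O_soft = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)

# Hard predictions: one-hot rows with O_{i,c} = 1 for the predicted class.
O_hard = np.zeros((N, C))
O_hard[np.arange(N), O_soft.argmax(axis=1)] = 1.0

assert np.allclose(O_soft.sum(axis=1), 1.0)   # each row is a distribution
assert np.allclose(O_hard.sum(axis=1), 1.0)   # exactly one class per row
```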
Moreover, in the most common cases where the outputs are given as hard predictions or as soft predictions modeling probabilities, one can consider the individual rows \(\mathbf{O}_{i}\) to be elements of the probability simplex \[\Delta_{C}=\bigg{\{}\mathbf{p}\in[0,1]^{C}:\sum_{i=1}^{C}\mathbf{p}_{i}=1\bigg{\}}. \tag{10}\] In contrast to representational similarity, analyzing functional similarity generally does not require consideration of preprocessing or alignment issues. In fact, for many functional similarity measures, black-box access to the given model \(f\) is sufficient. However, functional similarity measures may include additional information aside from the raw outputs \(\mathbf{O}_{i}\). For instance, in practice a set of ground-truth labels \(\mathbf{y}\in\mathbb{R}^{N}\) is often given, which is typically used by a quality function \(q\) that quantifies how well the output matches the ground truth. Another kind of additional information that can inform model similarity would be task-based gradients, which, for simplicity, we will assume to be directly included in the output matrix \(\mathbf{O}\). This information, however, can only be obtained with white-box access to the model. Overall, we broadly distinguish functional similarity measures into first-order and second-order similarity measures. First-order similarity measures \(m\) directly operate on the raw outputs and have the form \(m(\mathbf{O},\mathbf{O}^{\prime})\), whereas second-order similarity measures consider the outputs of quality functions \(q\) with respect to ground-truth labels \(\mathbf{y}\), and therefore have the form \(m(q(\mathbf{O},\mathbf{y}),q(\mathbf{O}^{\prime},\mathbf{y}))\). The most commonly used quality function in that context is the error rate, which is defined as \[q_{\mathrm{Err}}(\mathbf{O}):=q_{\mathrm{Err}}(\mathbf{O},\mathbf{y}):=\frac{1}{N}\sum_ {i=1}^{N}\mathbb{1}\{\arg\max_{j}\mathbf{O}_{i,j}\neq\mathbf{y}_{i}\}.
\tag{11}\] Finally, some measures operate on sets of outputs, which we denote as \(m(\mathcal{O})\), with \(\mathcal{O}=\{\mathbf{O},\mathbf{O}^{\prime},\mathbf{O}^{\prime\prime},\dots\}\). ### 2.3 Properties of Similarity Measures Within our survey, we provide an overview of a broad range of representational and functional similarity measures, which we also categorize by some properties that may affect their practical applicability. In the following, we briefly discuss two such properties, namely whether a measure can take groupwise inputs, and whether it is a formal metric. To accommodate both representational and functional similarity measures, we will use the notation that measures \(m\) consider input matrices \(\mathbf{A}\in\mathbb{R}^{N\times d},d\in\mathbb{N}\), for the rest of this section. #### 2.3.1 Groupwise Inputs The majority of measures discussed in this survey are mappings \(m:\mathbb{R}^{N\times d}\times\mathbb{R}^{N\times d^{\prime}}\longrightarrow \mathbb{R}\) that take a pair of matrices \(\mathbf{A},\mathbf{A}^{\prime}\) as input. In some applications, however, one might need to compare a set \(\mathcal{A}\) consisting of more than two representations or outputs. In this case, the few measures that take _groupwise_ rather than _pairwise_ inputs are of particular interest. An alternative to using these measures would be to aggregate similarity scores over all pairs in \(\mathcal{A}\), for instance, by taking their mean: \[m(\mathcal{A})=\frac{2}{|\mathcal{A}|\cdot(|\mathcal{A}|-1)}\sum_{\mathbf{A},\mathbf{ A}^{\prime}\in\mathcal{A}}m(\mathbf{A},\mathbf{A}^{\prime}). \tag{12}\] However, this approach has the downside of not being able to find commonalities across the whole set of representations or outputs.
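A short numpy sketch of this pairwise mean aggregation (illustrative only; the plugged-in measure is a toy stand-in, not one of the surveyed measures):

```python
import numpy as np
from itertools import combinations

def mean_pairwise_similarity(matrices, m):
    """Aggregate a pairwise measure m over all unordered pairs, as in Eq. (12)."""
    pairs = list(combinations(matrices, 2))
    return sum(m(A, B) for A, B in pairs) / len(pairs)

# Toy pairwise measure: absolute cosine between the flattened matrices.
# Any symmetric measure from this survey could be plugged in instead.
def toy_measure(A, B):
    a, b = A.ravel(), B.ravel()
    return abs(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(0)
reps = [rng.standard_normal((10, 4)) for _ in range(3)]
score = mean_pairwise_similarity(reps, toy_measure)
assert 0.0 <= score <= 1.0
```

Since Eq. (12) sums over all \(|\mathcal{A}|(|\mathcal{A}|-1)/2\) unordered pairs with the matching normalization factor, averaging over `itertools.combinations` reproduces it directly.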
#### 2.3.2 Distance Metrics For each measure \(m\) that is mentioned in this survey, there is a unique value \(c\in\mathbb{R}\) that indicates equality (or equivalence) of the inputs, i.e., for all input matrices \(\mathbf{A}\in\mathbb{R}^{N\times d}\) it holds that \(m(\mathbf{A},\mathbf{A})=c\). This value is also typically either the upper or lower bound of the range of a measure. In the case of \(c=0\), the corresponding measure will quantify distance rather than similarity. While, as noted before, in the terminology of this survey we do not differentiate between the notions of similarity and distance measures, some of the measures \(m:\mathbb{R}^{N\times d}\times\mathbb{R}^{N\times d}\longrightarrow\mathbb{R}\) will satisfy the formal axioms of a (distance) metric, which include that \(m(\mathbf{A},\mathbf{A}^{\prime})=0\) if \(\mathbf{A}\sim\mathbf{A}^{\prime}\), and \(m(\mathbf{A},\mathbf{A}^{\prime})>0\) otherwise. The second property of a metric is _symmetry_, which requires that for all \(\mathbf{A},\mathbf{A}^{\prime}\in\mathbb{R}^{N\times d}\) it holds that \[m(\mathbf{A},\mathbf{A}^{\prime})=m(\mathbf{A}^{\prime},\mathbf{A}). \tag{13}\] Unless noted otherwise, one can assume that measures presented in this survey are symmetric. The third and final property of a metric is the satisfaction of the _triangle inequality_, which states that for all \(\mathbf{A},\mathbf{A}^{\prime},\mathbf{Z}\in\mathbb{R}^{N\times d}\) it has to hold that \[m(\mathbf{A},\mathbf{A}^{\prime})\leq m(\mathbf{A},\mathbf{Z})+m(\mathbf{A}^{\prime},\mathbf{Z}). \tag{14}\] This property formalizes the intuitive notion that if two matrices are considered close to a reference matrix by a measure, the measure should also consider them close to each other. It is generally considered favorable for a similarity measure to satisfy these properties, since they benefit consistency and interpretability of what is measured.
Hence, we will explicitly note when a representational similarity measure formally satisfies all these properties. ## 3 Representational Similarity Measures We now review existing representational similarity measures, organized by their methodological approach. An overview of all representational similarity measures can be found in Table 1. [Table 1: Overview of representational similarity measures. For each measure, the table lists its type, whether it is model-agnostic, its invariances (PT, OT, IS, ILT, TR, AT, or other), whether it supports representations with \(D\neq D^{\prime}\), whether it is a formal metric, and whether larger values indicate higher similarity.] ### 3.1 Canonical
Correlation Analysis-Based Measures Canonical Correlation Analysis (CCA) [56] is a classical method to compare two sets of values of random variables. The goal is to find vectors \(\mathbf{w_{R}}\in\mathbb{R}^{D},\mathbf{w_{R^{\prime}}}\in\mathbb{R}^{D^{\prime}}\) in the representation space such that their projections to the unit ball in \(\mathbb{R}^{N}\) via their corresponding representation matrices \(\mathbf{R},\mathbf{R^{\prime}}\) have minimal angle between them (or, equivalently, maximal correlation \(\rho\)): \[\rho:=\rho(\mathbf{R},\mathbf{R^{\prime}}):=\max_{\mathbf{w_{R}},\mathbf{w_{R^{ \prime}}}}\frac{\langle\mathbf{R}\mathbf{w_{R}},\mathbf{R^{\prime}}\mathbf{w_{R^{\prime}}} \rangle}{\|\mathbf{R}\mathbf{w_{R}}\|\cdot\|\mathbf{R^{\prime}}\mathbf{w_{R^{\prime}}}\|}. \tag{15}\] The vectors \(\mathbf{w_{R}},\mathbf{w_{R^{\prime}}}\) can be understood as weightings of the representation dimensions, and \(\rho\) is called a _canonical correlation_. Once one has determined this maximum, one can find additional canonical correlations \(\rho_{i}\), with the new vectors \(\mathbf{w_{R}^{(i)}},\mathbf{w_{R^{\prime}}^{(i)}}\) being orthogonal, and thus uncorrelated, to the previous ones. This yields a system of \(D\) canonical correlations \(\rho_{i}\) defined as \[\rho_{i}:=\max_{\mathbf{w_{R}^{(i)}},\mathbf{w_{R^{\prime}}^{(i)}}}\frac {\langle\mathbf{R}\mathbf{w_{R}^{(i)}},\mathbf{R^{\prime}}\mathbf{w_{R^{\prime}}^{(i)}} \rangle}{\|\mathbf{R}\mathbf{w_{R}^{(i)}}\|\cdot\|\mathbf{R^{\prime}}\mathbf{w_{R^{\prime}}^{( i)}}\|} \tag{16}\] \[\mathrm{s.t.}\ \mathbf{R}\mathbf{w_{R}^{(j)}}\bot\mathbf{R}\mathbf{w_{R}^{(i)}},\ \mathbf{R^{\prime}}\mathbf{w_{R^{ \prime}}^{(j)}}\bot\mathbf{R^{\prime}}\mathbf{w_{R^{\prime}}^{(i)}}\ \forall j<i,\] where \(\bot\) denotes orthogonality. For the first canonical correlation, we have \(\rho=\rho_{1}\).
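For intuition, the canonical correlations can be computed as the cosines of the principal angles between the column spaces of \(\mathbf{R}\) and \(\mathbf{R^{\prime}}\). A numpy sketch (illustrative only; assumes mean-centered, full-column-rank representations):

```python
import numpy as np

def canonical_correlations(R, Rp):
    """Return the canonical correlations rho_i between R (N x D) and R' (N x D').

    Assumes mean-centered, full-column-rank inputs. Computed as the singular
    values of Q_R^T Q_R', i.e., the cosines of the principal angles between
    the column spaces spanned by the two representations.
    """
    Qr, _ = np.linalg.qr(R)
    Qp, _ = np.linalg.qr(Rp)
    return np.linalg.svd(Qr.T @ Qp, compute_uv=False)

rng = np.random.default_rng(0)
R = rng.standard_normal((50, 5))
# An invertible linear transformation leaves the column space unchanged,
# so all canonical correlations equal one (CCA's invariance to ILT).
Rp = R @ rng.standard_normal((5, 5))
rho = canonical_correlations(R, Rp)
assert np.allclose(rho, 1.0)
```

The final assertion checks the invariance to invertible linear transformations on toy data.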
A single similarity score \(m(\mathbf{R},\mathbf{R^{\prime}})\) is then computed by aggregating the canonical correlations \(\rho_{i}\). Standard choices for this aggregation that have been used to quantify representational similarity for neural networks are the mean canonical correlation [40, 33, 57] \[m_{\mathrm{CCA}}(\mathbf{R},\mathbf{R^{\prime}})=\frac{1}{D}\sum_{i=1}^{D}\rho_{i}, \tag{17}\] and the mean squared canonical correlations [33, 57] \[m_{\mathrm{CCA}^{2}}(\mathbf{R},\mathbf{R^{\prime}})=\frac{1}{D}\sum_{i=1}^{D}\rho_{ i}^{2}, \tag{18}\] which are also known as Yanai's generalized coefficient of determination [41, 19]. Other prominent aggregation schemes, though not applied for neural representational similarity, include the sum of the squared canonical correlations (also known as Pillai's trace [58]), Wilk's lambda statistic [59], and the Lawley-Hotelling trace [60, 61]. Overall, several more aggregation methods can be applied, and there is a wide range of variants of CCA measures, including non-linear methods, or methods that consider more than two input matrices. Within this work, however, we only consider those CCA-based measures that have been used to measure representational similarity of neural networks. A broader overview on existing CCA measures is provided in the recent survey by Yang et al. [20] or the tutorial by Uurtio et al. [62]. CCA is invariant to affine transformations [1]. If the representations \(\mathbf{R},\mathbf{R^{\prime}}\) are equivalent, it holds that \(\rho_{i}=1\) for all \(i\in\{1,\dots,D\}\) and thus \(m_{\mathrm{CCA}}(\mathbf{R},\mathbf{R^{\prime}})=1\) and \(m_{\mathrm{CCA}^{2}}(\mathbf{R},\mathbf{R^{\prime}})=1\). We now describe CCA-based measures that have been used to measure neural network similarity. #### 3.1.1 Singular Value CCA Singular Value CCA (SVCCA) combines preprocessing of the representations via Singular Value Decomposition (SVD) with CCA [40]. Raghu et al. 
[40] argue that representations are noisy and that this noise should be removed before conducting CCA on the representations \(\mathbf{R},\mathbf{R^{\prime}}\). Their SVD-based approach to obtain denoised representations is equivalent to performing PCA on the representations. The number \(k\) of principal components that are kept is selected such that a fixed relative amount \(t\) of the variance in the data, usually 99 percent, is explained. Formally, assuming that the SVD of each representation \(\mathbf{R},\mathbf{R^{\prime}}\) has the form \(\mathbf{R}=\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{\mathsf{T}}\) where \(\mathbf{U}\in\mathrm{O}(N)\), \(\mathbf{V}\in\mathrm{O}(D)\), and \(\Sigma=\mathrm{diag}(\sigma_{1},\dots,\sigma_{\min(N,D)})\in\mathbb{R}^{N\times D}\) is the rectangular diagonal matrix of singular values \(\sigma_{i}\), they propose to select the smallest value \(k\) such that \(\sum_{j=1}^{k}\sigma_{j}>t\cdot\sum_{j=1}^{\min(N,D)}\sigma_{j}\) for a relative threshold \(t\in(0,1)\). The resulting value \(k\) is then used to compute the denoised representations \[\mathbf{\tilde{R}}=\mathbf{U}_{-,-k}\mathbf{\Sigma}_{-k,-k}\in\mathbb{R}^{N\times k}, \tag{19}\] where \(\mathbf{U}_{-,-k}\in\mathbb{R}^{N\times k}\) consists of the first \(k\) columns of \(\mathbf{U}\), and \(\mathbf{\Sigma}_{-k,-k}=\mathrm{diag}(\sigma_{1},\dots,\sigma_{k})\in\mathbb{R}^{k \times k}\) is the diagonal matrix of the \(k\) largest singular values. Afterward, they use standard CCA on the denoised representations. The average canonical correlation is then used as the final similarity measure: \[m_{\mathrm{SVCCA}}(\mathbf{R},\mathbf{R^{\prime}})=m_{\mathrm{CCA}}(\mathbf{\tilde{R}},\mathbf{ \tilde{R^{\prime}}}). \tag{20}\] Unlike CCA, SVCCA is not invariant to affine transformations, since such transformations generally alter the singular value decomposition. However, SVCCA is still invariant to orthogonal transformations and isotropic scaling. 
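An illustrative numpy sketch of this truncation-then-CCA pipeline (not the reference implementation; canonical correlations are obtained via principal angles, and representations are assumed mean-centered):

```python
import numpy as np

def svcca(R, Rp, t=0.99):
    """SVCCA sketch: keep the leading singular directions explaining a fraction
    t of the singular-value mass, then average the canonical correlations."""
    def denoise(M):
        U, S, _ = np.linalg.svd(M, full_matrices=False)
        # Smallest k whose leading singular values exceed fraction t of the total.
        k = int(np.searchsorted(np.cumsum(S) / S.sum(), t)) + 1
        return U[:, :k] * S[:k]               # denoised representation U_k Sigma_k
    Rt, Rpt = denoise(R), denoise(Rp)
    Qr, _ = np.linalg.qr(Rt)
    Qp, _ = np.linalg.qr(Rpt)
    rho = np.linalg.svd(Qr.T @ Qp, compute_uv=False)
    return float(rho.mean())

rng = np.random.default_rng(0)
R = rng.standard_normal((60, 8))
Q, _ = np.linalg.qr(rng.standard_normal((8, 8)))   # a random orthogonal map
# SVCCA is invariant to orthogonal transformations of the representation.
assert abs(svcca(R, R @ Q) - 1.0) < 1e-6
```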
SVCCA is bounded in the interval \([0,1]\), with a score of one indicating perfectly similar representations, and a score of zero indicating perfectly dissimilar representations. To compute SVCCA efficiently for CNNs with many features, Raghu et al. [40] apply a Discrete Fourier Transform on each channel, yielding block-diagonal matrices for CCA computation, which eliminates unneeded operations. #### 3.1.2 Projection Weighted CCA Morcos et al. [1] proposed Projection Weighted CCA (PWCCA) as an alternative to SVCCA. They argue that a representational similarity measure should weigh the individual canonical correlations \(\rho_{i}\) by their importance, i.e., the similarity of the _canonical variables_ \(\mathbf{Rw}_{\mathbf{R}}^{(i)}\) with the raw representation \(\mathbf{R}\). For that purpose, for every canonical correlation \(\rho_{i}\), they formally define a weighting coefficient \[\tilde{\alpha}^{(i)}=\sum_{j=1}^{D}|\langle\mathbf{Rw}_{\mathbf{R}}^{(i)},\mathbf{R}_{-,j} \rangle|, \tag{21}\] which can be understood as measuring how much of the neuron activations is captured by a canonical variable. These coefficients are then normalized to weights \(\alpha^{(i)}=\tilde{\alpha}^{(i)}/\sum_{j}\tilde{\alpha}^{(j)}\), yielding the final representational similarity measure defined as \[m_{\mathrm{PWCCA}}(\mathbf{R},\mathbf{R^{\prime}})=\sum_{i=1}^{D}\alpha^{(i)}\rho_{i}. \tag{22}\] Since the weighting coefficients are only calculated based on \(\mathbf{R}\), without taking \(\mathbf{R^{\prime}}\) into account, this representational similarity measure is asymmetric. Further, this measure is generally not invariant to the classes of matrix transformations discussed above, since in the computation of the weighting coefficients (21), only the first factor \(\mathbf{Rw}_{\mathbf{R}}^{(i)}\) in the scalar product will be unaltered, whereas the second factor will change. However, the normalization of the weighting coefficients makes PWCCA invariant to isotropic scaling.
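A compact numpy sketch of this weighting scheme (illustrative only; assumes mean-centered, full-column-rank representations and canonical variables normalized to unit norm):

```python
import numpy as np

def pwcca(R, Rp):
    """PWCCA sketch: canonical correlations weighted by the overlap of each
    canonical variable R w^(i) with the raw neuron activations of R.
    Note the asymmetry: weights depend on the first argument only."""
    Qr, _ = np.linalg.qr(R)
    Qp, _ = np.linalg.qr(Rp)
    U, rho, _ = np.linalg.svd(Qr.T @ Qp)
    k = rho.shape[0]
    canon = Qr @ U[:, :k]                     # canonical variables R w^(i)
    alpha = np.abs(canon.T @ R).sum(axis=1)   # unnormalized weights, cf. Eq. (21)
    alpha = alpha / alpha.sum()
    return float(alpha @ rho)

rng = np.random.default_rng(0)
R = rng.standard_normal((50, 5))
Rp = rng.standard_normal((50, 5))
# Self-similarity is one; independent random representations score lower.
assert abs(pwcca(R, R) - 1.0) < 1e-8
assert pwcca(R, Rp) < 1.0
```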
PWCCA is bounded in the interval \([0,1]\), with a value of one indicating equivalent representations. ### 3.2 Alignment-Based Measures The next group of measures stipulates that a pair of representations \(\mathbf{R},\mathbf{R^{\prime}}\) can be compared directly, once the corresponding representation spaces have been aligned to each other. Such alignment is usually realized by finding an optimal transformation \(\varphi\in\mathcal{T}\) that minimizes a difference of the form \(\|\varphi(\mathbf{R})-\mathbf{R^{\prime}}\|\). In that context, the class of transformations \(\mathcal{T}\) used for alignment directly determines the class of transformations that the resulting measure will be invariant to. Such direct alignment is only possible if the number of neurons in both representations is equal, and thus we will assume throughout the next section that \(D=D^{\prime}\) holds, unless otherwise mentioned. We will now discuss existing measures from that category. #### 3.2.1 Orthogonal Procrustes The orthogonal Procrustes problem is a classical problem of finding the best orthogonal transformation to align two matrices, and can be formulated as \[\mathbf{Q}^{*}=\arg\min_{\mathbf{Q}\in\mathrm{O}(D)}\|\mathbf{RQ}-\mathbf{R^{\prime}}\|_{F}, \tag{23}\] where \(\|\cdot\|_{F}\) denotes the Frobenius norm. Ding et al. [38] use the square of the minimum value obtained in (23) as a measure for representational similarity, which is given by \[m_{\mathrm{Ortho-Proc}}(\mathbf{R},\mathbf{R^{\prime}})=\|\mathbf{R}\|_{F}^{2}+\|\mathbf{R^{ \prime}}\|_{F}^{2}-2\|\mathbf{R^{\mathsf{T}}}\mathbf{R^{\prime}}\|_{*}, \tag{24}\] where \(\|\cdot\|_{*}\) is the nuclear norm, defined as the sum of the singular values of a matrix [63]. By design, this measure is invariant to orthogonal transformations. Further, Williams et al. [36] have proven that this measure satisfies the properties of a distance metric.
This is also the case when the solution is constrained to come from a subgroup \(\mathrm{G}(D)\) of the orthogonal group \(\mathrm{O}(D)\), yielding the measure \[m_{\mathrm{Proc}}(\mathbf{R},\mathbf{R^{\prime}})=\min_{\mathbf{Q}\in\mathrm{G}(D)}\|\mathbf{RQ }-\mathbf{R^{\prime}}\|_{F}. \tag{25}\] This gives options for finer notions of similarity. Both for Orthogonal Procrustes and the more general Procrustes measure, a value of zero indicates equivalence, and larger values indicate dissimilar representations. Also, to account for the translation equivariance of CNNs, Williams et al. [36] propose a metric that optimizes over orthogonal transformations along the channels and spatial shifts in the feature map. However, in practice the spatial shifts are ignored, and the metric reduces to Orthogonal Procrustes. #### 3.2.2 \(G_{\mathrm{ReLU}}\)-Procrustes \(G_{\mathrm{ReLU}}\)-Procrustes is a model-specific instantiation of Procrustes with invariance to a subset of the orthogonal transformations [42]. The work is motivated by understanding _symmetries_ of neural networks, such as different sets of weights that compute an identical function. The transformations that \(G_{\mathrm{ReLU}}\)-Procrustes is invariant to form a group consisting of those transformations applied before the activation that have a counterpart after the activation, such that the overall transformation is identical. Formally, Godfrey et al. [42] describe a group of transformations, called the _intertwiner group_, for an element-wise applied activation function \(\sigma\): \[G_{\sigma}:=G_{\sigma,D}=\{\mathbf{A}\in\mathrm{GL}(D,\mathbb{R}):\exists\mathbf{B} \in\mathrm{GL}(D,\mathbb{R})\text{ s.t. }\sigma\circ\mathbf{A}=\mathbf{B}\circ\sigma\}, \tag{26}\] where \(\mathrm{GL}(D,\mathbb{R})\) denotes the invertible matrices in \(\mathbb{R}^{D\times D}\). We highlight the case where \(\sigma=\mathrm{ReLU}\), which yields the group \(G_{\mathrm{ReLU}}\), because Godfrey et al.
[42] detail similarity measures for \(G_{\mathrm{ReLU}}\), but they also describe intertwiner groups for other popular activation functions. \(G_{\mathrm{ReLU}}\)-Procrustes is closely related to Procrustes alignment with permutations, as \(G_{\mathrm{ReLU}}\) consists of matrices of the form \(\mathbf{PD}\), where \(\mathbf{P}\in\mathcal{P}\) is a permutation matrix and \(\mathbf{D}\) is a diagonal matrix with positive elements. Given column-wise normalized representations \(\mathbf{\tilde{R}}=\mathbf{RD_{R}^{-1}}\), where \(\mathbf{D_{R}}\) denotes the diagonal matrix of column norms of \(\mathbf{R}\), the measure is defined as: \[m_{G_{\mathrm{ReLU}}\text{-Proc}}(\mathbf{R},\mathbf{R^{\prime}})=1-\frac{\min_{\mathbf{P}\in\mathcal{P}}\|\mathbf{\tilde{R}}\mathbf{P}-\mathbf{\tilde{R}^{\prime}}\|_{F}}{2\sqrt{D}}. \tag{27}\] The denominator \(2\sqrt{D}\) is added to scale the measure between zero and one. If the representations are equivalent up to a \(G_{\mathrm{ReLU}}\) transformation, the measure equals one. Practically, the minimization can be solved via the linear sum assignment problem [36]. #### 3.2.3 Aligned Cosine Similarity This measure has been used to quantify similarity of instance-wise representations, such as embeddings of individual words over time [15]. Its idea is to first align the representations by the orthogonal Procrustes transformation, and then to use cosine similarity to measure similarity between the aligned representations.
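Both the orthogonal Procrustes alignment of Eq. (23) and the subsequent per-instance cosine comparison admit a compact sketch (illustrative numpy code; it uses the standard closed-form result that \(\mathbf{Q}^{*}=\mathbf{U}\mathbf{V}^{\mathsf{T}}\) for the SVD \(\mathbf{R}^{\mathsf{T}}\mathbf{R^{\prime}}=\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{\mathsf{T}}\)):

```python
import numpy as np

def procrustes_align(R, Rp):
    """Closed-form solution of the orthogonal Procrustes problem, Eq. (23)."""
    U, _, Vt = np.linalg.svd(R.T @ Rp)
    return U @ Vt

def aligned_cosine_similarity(R, Rp):
    """Mean per-instance cosine similarity after Procrustes alignment."""
    RQ = R @ procrustes_align(R, Rp)
    num = (RQ * Rp).sum(axis=1)
    den = np.linalg.norm(RQ, axis=1) * np.linalg.norm(Rp, axis=1)
    return float((num / den).mean())

rng = np.random.default_rng(0)
R = rng.standard_normal((40, 6))
Qrot, _ = np.linalg.qr(rng.standard_normal((6, 6)))
# An orthogonally transformed copy of R is recognized as equivalent:
# the alignment recovers the rotation, so the mean similarity is one.
assert abs(aligned_cosine_similarity(R, R @ Qrot) - 1.0) < 1e-6
```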
Generally, the cosine similarity between two vectors \(\mathbf{v},\mathbf{v}^{\prime}\in\mathbb{R}^{n}\), \(n\in\mathbb{N}\), is defined as \[\text{cos-sim}(\mathbf{v},\mathbf{v}^{\prime})=\frac{\mathbf{v}^{\mathsf{T}}\mathbf{v}^{ \prime}}{\|\mathbf{v}\|_{2}\|\mathbf{v}^{\prime}\|_{2}}, \tag{28}\] and bounded in the interval \([-1,1]\), with \(\text{cos-sim}(\mathbf{v},\mathbf{v}^{\prime})=1\) indicating that both vectors point in the exact same direction, and \(\text{cos-sim}(\mathbf{v},\mathbf{v}^{\prime})=0\) indicating orthogonality. Letting \(\mathbf{Q}^{*}\) denote the solution of the Procrustes problem (23), the similarity of two instance representations is given by \[s_{\mathrm{Aligned-Cossim}}(\mathbf{R}_{i},\mathbf{R}_{i}^{\prime})=\text{cos-sim} \left((\mathbf{RQ^{*}})_{i},\mathbf{R}_{i}^{\prime}\right). \tag{29}\] Overall similarity can then be analyzed by comparing the full distribution of similarity scores, or by aggregating them, for instance, by taking their mean value [37]. The latter option yields a similarity measure \[m_{\mathrm{Aligned-Cossim}}(\mathbf{R},\mathbf{R}^{\prime})=\frac{1}{N}\sum_{i=1}^{N }\text{cos-sim}\left((\mathbf{RQ^{*}})_{i},\mathbf{R}_{i}^{\prime}\right). \tag{30}\] Due to the properties of cosine similarity, \(m_{\mathrm{Aligned-Cossim}}(\mathbf{R},\mathbf{R}^{\prime})=1\) indicates equivalence of representations, and lower values indicate less similarity. #### 3.2.4 Generalized Shape Metrics Williams et al. [36] apply the theory of statistical shape analysis to the problem of representational similarity of neural network models. Next to several other findings, they also define a novel similarity measure.
For this measure, they apply the partial whitening function \(\phi_{\alpha}:\mathbb{R}^{N\times D}\longrightarrow\mathbb{R}^{N\times D}\), \(\alpha\in[0,1]\), defined as \[\phi_{\alpha}(\mathbf{R})=\mathbf{HR}(\alpha\mathbf{I}_{D}+(1-\alpha)(\mathbf{R}^{\mathsf{T}} \mathbf{HR})^{-1/2}), \tag{31}\] where \(\mathbf{H}:=\mathbf{I}_{N}-\frac{1}{N}\mathbf{1}\mathbf{1}^{\mathsf{T}}\) is a centering matrix. The partial whitening makes neurons (partially) uncorrelated and have unit variance. Due to this transformation, the _partial whitening (PW) shape metric_ defined as \[m_{\theta,\alpha}(\mathbf{R},\mathbf{R}^{\prime})=\min_{\mathbf{Q}\in\mathrm{O}(D)}\arccos \frac{\langle\phi_{\alpha}(\mathbf{R})\mathbf{Q},\phi_{\alpha}(\mathbf{R}^{\prime})\rangle} {\|\phi_{\alpha}(\mathbf{R})\|\|\phi_{\alpha}(\mathbf{R}^{\prime})\|}, \tag{32}\] satisfies the properties of a distance metric for all \(\alpha\in[0,1]\), where \(\langle\cdot,\cdot\rangle\) denotes the inner Frobenius product [64]. Because it is a distance metric, a value of zero indicates equivalence and larger values dissimilar representations, similar to Procrustes. If the hyperparameter \(\alpha=0\), then this metric is invariant to affine transformation, if \(\alpha=1\) it is only invariant to orthogonal transformations. Williams et al. [36] further show that this metric is related to canonical correlations. Duong et al. [65] recently generalized these metrics to stochastic neural networks, which map to distributions of representations instead of deterministic representations, such as variational autoencoders [66]. #### 3.2.5 Correlation Match Li et al. [43] measure representational similarity by creating a correlation matrix between the neuron activations of two representations. They match neurons by permutation. This permutation step is achieved by viewing the correlation matrix as the adjacency matrix of a graph, on which they perform a (semi-)matching. 
The average along the diagonal of the aligned correlation matrix yields a summary of representational similarity: \[m_{\text{Corr-Match}}(\mathbf{R},\mathbf{R}^{\prime})=\frac{1}{D}\sum_{j=1}^{D}\frac{ \langle\mathbf{R}_{-,j},(\mathbf{R}^{\prime}\mathbf{P})_{-,j}\rangle}{\|\mathbf{R}_{-,j}\|_{ 2}\|(\mathbf{R}^{\prime}\mathbf{P})_{-,j}\|_{2}}, \tag{33}\] with mean centered representations and the permutation \(\mathbf{P}\) from the matching procedure. This measure is invariant to permutations, isotropic scaling, and translations. A value of one indicates equivalent representations, a value of zero uncorrelated ones. #### 3.2.6 Maximum Matching Similarity In contrast to the previous measures, maximum matching similarity [44] aligns representations only implicitly, by testing whether neuron activations of one representation, i.e., columns of the representation matrix, (approximately) lie in a subspace spanned from neuron activations of the other representation. Every neuron, of which the activation vector can be approximated by such a subspace, is then considered part of a match between the representations. Following this intuition, the main idea of the measure proposed by Wang et al. [44] is to find the maximal set of neurons in each representation that can be matched with the other subspace. Formally, for an index subset \(\mathcal{J}\subseteq\{1,\dots,D\}\), let \(\mathbf{R}_{-,\mathcal{J}}=\{\mathbf{R}_{-,j},j\in\mathcal{J}\}\) denote the set of corresponding neuron activation vectors. 
Then a pair \((\mathcal{J},\mathcal{J}^{\prime})\) forms an \(\varepsilon\)-approximate match, \(\varepsilon\in(0,1]\), on the representations \(\mathbf{R},\mathbf{R}^{\prime}\) if for all \(j\in\mathcal{J},j^{\prime}\in\mathcal{J}^{\prime}\) it holds that \[\min_{\mathbf{r}\in\mathrm{span}(\mathbf{R}_{-,\mathcal{J}})}\|\mathbf{R}^{ \prime}_{-,j^{\prime}}-\mathbf{r}\|\leq\varepsilon\cdot\|\mathbf{R}^{\prime}_{-,j^{ \prime}}\| \tag{34}\] \[\text{and}\quad\min_{\mathbf{r}^{\prime}\in\mathrm{span}(\mathbf{R}^{ \prime}_{-,\mathcal{J}^{\prime}})}\|\mathbf{R}_{-,j}-\mathbf{r}^{\prime}\|\leq \varepsilon\cdot\|\mathbf{R}_{-,j}\|. \tag{35}\] A pair \((\mathcal{J}_{\max},\mathcal{J}^{\prime}_{\max})\) is considered a maximum match, if for all \(\varepsilon\)-matches \((\mathcal{J},\mathcal{J}^{\prime})\) it holds that \(\mathcal{J}\subseteq\mathcal{J}_{\max}\) and \(\mathcal{J}^{\prime}\subseteq\mathcal{J}^{\prime}_{\max}\). Wang et al. [44] show that this maximum match is unique and provide algorithms to determine this match. Based on the maximum match, the maximum-match similarity is defined as \[m^{\varepsilon}_{\text{maximum-match}}(\mathbf{R},\mathbf{R}^{\prime})=\frac{| \mathcal{J}_{\max}|+|\mathcal{J}^{\prime}_{\max}|}{D+D^{\prime}}. \tag{36}\] Contrary to prior measures of this category, maximum matching similarity can operate on representations of different dimension. This measure is invariant to invertible linear transformation, since such transformations do not alter the subspaces. It is bounded in the interval \([0,1]\), with a similarity score of 1 indicating maximum similarity. #### 3.2.7 Linear Regression Another approach to match representations is by linear regression that predicts one representation from the other [43, 33]. 
Given a weight matrix \(\mathbf{W}\in\mathbb{R}^{D\times D}\), the R-squared of the optimal fit can be used to measure similarity [33]: \[m_{R^{2}}(\mathbf{R},\mathbf{R}^{\prime})=1-\frac{\min_{\mathbf{W}}\|\mathbf{R}^{\prime}-\mathbf{R} \mathbf{W}\|_{F}^{2}}{\|\mathbf{R}^{\prime}\|_{F}^{2}}. \tag{37}\] Li et al. [43] add a L1 penalty to the optimization to encourage a sparse mapping between neurons. This measure is also similar to _model stitching_ (cf. Section 4.5), though here the focus is on the quality of matching instead of effect on functional behavior. This asymmetric measure is invariant to orthogonal transformation and isotropic scaling. A value of one indicates maximal similarity, lower values indicate lower similarity. This measure has no lower bound. ### Representational Similarity Matrix-Based Measures A common approach to avoid alignment issues in direct comparisons of representations is to use _representational similarity matrices_ (RSMs). Intuitively, an RSM describes how each instance \(i\) is represented in relation to all other instances, in a given representation \(\mathbf{R}\). Given the RSM is computed suitably, these relations are invariant to alignment transformations such as rotation. One can then apply the RSMs of two different representations \(\mathbf{R},\mathbf{R^{\prime}}\) to quantify representational similarity in terms of the difference between these RSMs. Formally, given an instance-wise similarity function \(s:\mathbb{R}^{D}\times\mathbb{R}^{D}\longrightarrow\mathbb{R}\), the representational similarity matrix (RSM) \(\mathbf{S}\in\mathbb{R}^{N\times N}\) of a representation \(\mathbf{R}\) can be defined in terms of its entries via \[\mathbf{S}_{i,j}:=s(\mathbf{R}_{i},\mathbf{R}_{j}). \tag{38}\] Each row \(\mathbf{S}_{i}\) then corresponds to the similarity between the representations of instance \(i\) and the representations of all other inputs, including itself. 
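To make this construction concrete, the following sketch computes linear-kernel RSMs as in Equation (38) and compares them by the Frobenius norm as in Equation (39). It assumes NumPy; the linear kernel is one possible choice for \(s\), and all names are illustrative.

```python
import numpy as np

def rsm(R):
    """Representational similarity matrix (Eq. 38) with a linear kernel,
    i.e., s(R_i, R_j) = <R_i, R_j>. Names are illustrative."""
    return R @ R.T

def m_norm(R, R_prime):
    """Frobenius-norm difference of the two RSMs (Eq. 39)."""
    return float(np.linalg.norm(rsm(R) - rsm(R_prime), ord="fro"))
```

Since the linear-kernel RSM satisfies \((\mathbf{RQ})(\mathbf{RQ})^{\mathsf{T}}=\mathbf{RR}^{\mathsf{T}}\) for orthogonal \(\mathbf{Q}\), this particular instantiation is invariant to orthogonal transformations of the representations.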
RSMs can be computed with a variety of similarity measures \(s\) including correlation [45], Euclidean distance [67], cosine similarity [68], and kernels [33] -- like before, we do not differentiate between the equivalent concepts of similarity and distance functions. Naturally, the choice of the underlying similarity measure \(s\) has to suit the geometry of the representations, and has a large impact on the kind of transformations that the representational similarity measures \(m\) are invariant to. If the RSM is unchanged by a transformation, then the representational similarity will not change either. For instance, each of the measures mentioned above, i.e., linear kernels, RBF kernels, cosine similarity, Pearson correlation and Euclidean distance are invariant to orthogonal transformations. In addition, cosine similarity is invariant to scaling, whereas Euclidean distance and RBF kernels are invariant to translations. Even more, Pearson correlation is invariant to both of these classes of transformations. After selecting a suitable similarity measure \(s\), two RSMs \(\mathbf{S},\mathbf{S^{\prime}}\) are compared. One could directly compute their difference and apply some matrix norm \(\|\cdot\|\) to obtain a measure \[m_{\mathrm{Norm}}(\mathbf{R},\mathbf{R^{\prime}})=\|\mathbf{S}-\mathbf{S^{\prime}}\|. \tag{39}\] There are several more RSM-based representational similarity measures, which we will review in the following. #### 3.3.1 Representational Similarity Analysis Kriegeskorte et al. [45] propose Representational Similarity Analysis (RSA) in neuroscience. RSA is a general framework that utilizes RSMs to compare sets of measurements, such as neural representations. In the first step of this framework, RSMs with respect to an inner similarity measure \(s_{\text{in}}\) are computed. Since the RSMs are symmetric, their lower triangles can then be vectorized in a next step to vectors \(\mathsf{v}(\mathbf{S})\in\mathbb{R}^{\frac{N(N-1)}{2}}\). 
Finally, these vectors are compared by an outer similarity function \(s_{\text{out}}\), giving the following general representation of RSA similarity measures: \[m_{\mathrm{RSA}}(\mathbf{R},\mathbf{R^{\prime}})=s_{\text{out}}(\mathsf{v}(\mathbf{S}),\mathsf{v}(\mathbf{S^{\prime}})). \tag{40}\] This framework can be instantiated with various choices for the similarity functions \(s_{\text{in}}\) and \(s_{\text{out}}\). Kriegeskorte et al. [45] use Pearson correlation as inner similarity function \(s_{\text{in}}\) to compute the RSMs, and Spearman correlation as outer similarity function \(s_{\text{out}}\), since these correlation measures are also invariant to scaling and translations. As alternatives, Kriegeskorte et al. [45] further suggest measures such as Euclidean distance or Mahalanobis distance. As noted before, the choice of both inner and outer similarity function determines the kind of transformations that the overall representational similarity measure is invariant to. Further, these functions also determine the range and interpretation of this measure. #### 3.3.2 Centered Kernel Alignment Kornblith et al. [33] propose Centered Kernel Alignment (CKA) [69, 70] to measure representational similarity. CKA uses the Hilbert-Schmidt Independence Criterion (HSIC) [71] to test statistical independence between the RSMs. The similarity measure is defined as: \[m_{\rm{CKA}}(\mathbf{R},\mathbf{R^{\prime}})=\frac{\rm{HSIC}(\mathbf{S},\mathbf{S^{\prime}})}{\sqrt{\rm{HSIC}(\mathbf{S},\mathbf{S})\rm{HSIC}(\mathbf{S^{\prime}},\mathbf{S^{\prime}})}}, \tag{41}\] where \(\rm{HSIC}(\mathbf{S},\mathbf{S^{\prime}})=\frac{1}{(N-1)^{2}}\,\rm{tr}(\mathbf{SHS^{\prime}}\mathbf{H})\), \(\mathbf{H}=\mathbf{I}-\frac{1}{N}\mathbf{1}\mathbf{1}^{\sf{T}}\) is a centering matrix, and \(\mathbf{1}\) is a vector of \(N\) ones. The denominator is introduced to scale CKA between zero and one, where a value of one indicates equivalent representations. Kornblith et al. 
[33] assume centered representations to compute RSMs, that is, all columns of the representation matrix have zero mean. The RSMs are computed with kernel functions. Specifically, Kornblith et al. [33] use the linear kernel and test the RBF kernel without reporting large differences in results. CKA with linear kernel has an alternative formulation that is more efficient if there are more neurons than inputs. CKA is invariant to orthogonal transformations and isotropic scaling, assuming invariant similarity measures for RSM computation. If a kernel with hyperparameters, such as RBF, is used, then the hyperparameters must be selected dependent on the data. To be invariant to isotropic scaling, they select the bandwidth proportional to the median distance within each set of representations. CKA with linear kernel is equivalent to the RV coefficient, a statistical measure to compare data matrices [72, 33]. Further, linear CKA is closely related to CCA and can be seen as an alternative weighting scheme of individual canonical correlations similar to PWCCA. The advantage is that linear CKA is symmetric and does not require a matrix decomposition to be computed [33]. #### 3.3.3 \(G_{\rm{ReLU}}\)-Cka Similar to \(G_{\rm{ReLU}}\)-Procrustes, \(G_{\rm{ReLU}}\)-CKA is a representational similarity measure that is invariant to \(G_{\rm{ReLU}}\) transformations [42] (see Section 3.2.2). \(G_{\rm{ReLU}}\)-CKA can be understood as a model-specific instantiation of CKA. To compute \(G_{\rm{ReLU}}\)-CKA, the representations are mean centered and then column-wise normalized by their column norm: \(\mathbf{\tilde{R}}=\mathbf{R}\mathbf{D}_{\mathbf{R}}^{-1}\), where \(\mathbf{D}_{\mathbf{R}}\) is the diagonal matrix of column norms of \(\mathbf{R}\). Then the RSMs are computed as \(\mathbf{S}_{i,j}=\max_{k}(\mathbf{\tilde{R}}_{i,k}\cdot\mathbf{\tilde{R}}_{j,k})\). 
The final score is computed using the standard CKA formulation, Equation (41), with an unbiased estimator of HSIC [73] in the sense that its expectation matches the value of HSIC if infinite amounts of data were available. Effectively, \(G_{\rm{ReLU}}\)-CKA is CKA with a maximum kernel [42]. \(G_{\rm{ReLU}}\)-CKA is invariant to \(G_{\rm{ReLU}}\), but otherwise inherits the properties of CKA. #### 3.3.4 Riemannian Distance This measure considers the special geometry of symmetric positive definite (SPD) matrices, which lie on a Riemannian manifold [e.g., 74]. Every inner product defined on a Riemannian manifold induces a distance metric that considers the special curvature of these structures. For the manifold of SPD matrices, such a metric is given by \[m_{\rm{Riemann}}(\mathbf{R},\mathbf{R^{\prime}})=\sqrt{\sum_{i=1}^{N}\log^{2}( \lambda_{i})}, \tag{42}\] where \(\lambda_{i}\) is the \(i\)-th eigenvalue of \(\mathbf{S}^{-1}\mathbf{S^{\prime}}\). Shahbazi et al. [46] have proposed this measure using RSMs defined as \(\mathbf{S}=\mathbf{R}\mathbf{R}^{\sf{T}}/D\). This matrix however can only be positive definite if \(D>N\), which limits applicability of this measure. Equivalence is indicated by a value of zero, and larger values indicate dissimilarity. #### 3.3.5 Adaptive Geo-Topological Independence Criterion Lin and Kriegeskorte [47] proposed this measure as an adaptation of distance correlation [75] that disregards the smallest and greatest distances in RSMs. This adaptation is motivated by the notion that short distances are often susceptible to noise, whereas the exact values of the longest distances may be somewhat arbitrary: whether two items are far or very far away from each other may be irrelevant to conclude that these items are not close to each other. Distance correlation is a non-linear correlation measure that tests dependence of two random variables \(X\) and \(Y\) with finite mean. 
In the context of our survey, we consider the individual representations as samples of such random variables. To determine the distance correlation of two representation matrices \(\mathbf{R},\mathbf{R^{\prime}}\), one first computes the RSMs \(\mathbf{S},\mathbf{S^{\prime}}\) using Euclidean distance as similarity function \(s\). Next, these RSMs are double centered, i.e., for each RSM \(\mathbf{S}\) one computes the matrix \(\tilde{\mathbf{S}}\) via \[\tilde{\mathbf{S}}_{i,j}:=\mathbf{S}_{i,j}-\frac{1}{N}\sum_{k=1}^{N}\mathbf{S}_{i,k}-\frac {1}{N}\sum_{k=1}^{N}\mathbf{S}_{k,j}+\frac{1}{N^{2}}\sum_{k=1}^{N}\sum_{n=1}^{N} \mathbf{S}_{k,n}. \tag{43}\] Then the squared sample distance covariance of the RSMs \(\mathbf{S},\mathbf{S^{\prime}}\) can be computed via: \[\mathrm{dCov}^{2}(\mathbf{S},\mathbf{S^{\prime}})=\frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{j=1} ^{N}\tilde{\mathbf{S}}_{i,j}\tilde{\mathbf{S^{\prime}}}_{i,j}. \tag{44}\] Finally, the squared distance correlation is defined in [75] as \[m_{\mathrm{dCor}}^{2}(\mathbf{R},\mathbf{R^{\prime}})=\frac{\mathrm{dCov}^{2}(\mathbf{S}, \mathbf{S^{\prime}})}{\sqrt{\mathrm{dCov}^{2}(\mathbf{S},\mathbf{S})\,\mathrm{dCov}^{2}( \mathbf{S^{\prime}},\mathbf{S^{\prime}})}}. \tag{45}\] A distance correlation of zero would then indicate statistical independence between the representations \(\mathbf{R}\) and \(\mathbf{R^{\prime}}\). Lin and Kriegeskorte [47] modify this procedure by introducing a so-called "geo-topological" transformation function \(g_{l,u}:\mathbb{R}^{N\times N}\longrightarrow\mathbb{R}^{N\times N}\), which is a nonlinear monotonic transformation with two parameters \(u,l\), that will act on the elements of the RSMs. 
Letting \(\mathbf{S}_{\max}:=\max_{i,j}\mathbf{S}_{i,j}\) denote the maximum element of \(\mathbf{S}\), the transformation is defined as \[(g_{l,u}(\mathbf{S}))_{i,j}=\begin{cases}0&\text{if }0\leq\mathbf{S}_{i,j}<l\\ \mathbf{S}_{\max}\cdot\frac{\mathbf{S}_{i,j}-l}{u-l}&\text{if }l\leq\mathbf{S}_{i,j}<u\\ \mathbf{S}_{\max}&\text{if }u\leq\mathbf{S}_{i,j}.\end{cases} \tag{46}\] All distances below the lower threshold \(l\) are set to zero and all distances above the upper threshold \(u\) are set to the maximal distance. In between, the values are linearly interpolated. Letting \(\mathrm{dCov}^{2}(\mathbf{S}):=\mathrm{dCov}^{2}(\mathbf{S},\mathbf{S})\), the Adaptive Geo-Topological Independence Criterion (AGTIC) is defined as \[m_{\mathrm{AGTIC}}^{2}(\mathbf{R},\mathbf{R^{\prime}})=\max_{\tiny\begin{subarray}{ c}l,l^{\prime},u,u^{\prime}\\ u,l\leq u,l^{\prime}\leq u^{\prime}\end{subarray}}\frac{\mathrm{dCov}^{2}(g_{ l,u}(\mathbf{S}),g_{l^{\prime},u^{\prime}}(\mathbf{S^{\prime}}))}{\sqrt{\mathrm{dCov}^{2}(g_{ l,u}(\mathbf{S}))\,\mathrm{dCov}^{2}(g_{l^{\prime},u^{\prime}}(\mathbf{S^{\prime}}))}}. \tag{47}\] Due to the usage of Euclidean distance as similarity function \(s\), AGTIC is invariant to orthogonal transformations and translations. As with distance correlation, AGTIC of zero indicates statistical independence. Lin and Kriegeskorte [47] propose several variations of this measure, e.g., using percentile based cutoffs in the geo-topological transform. #### 3.3.6 Normalized Bures Similarity This measure has been inspired by the Bures distance, that has its roots in quantum information theory [76] and satisfies the properties of a distance metric on the space of positive semi-definite matrices [77]. As Tang et al. [48] apply linear kernel functions to compute the RSMs \(\mathbf{S},\mathbf{S^{\prime}}\), these matrices are by design positive semi-definite, and in consequence, these matrices also have a unique square root. 
Therefore, they can define the normalized Bures similarity as \[m_{\text{NBS}}(\mathbf{R},\mathbf{R^{\prime}})=\frac{\mathrm{tr}(\mathbf{S}^{\frac{1}{2}} \mathbf{S^{\prime}}\mathbf{S}^{\frac{1}{2}})^{\frac{1}{2}}}{\sqrt{\mathrm{tr}(\mathbf{S}) \,\mathrm{tr}(\mathbf{S^{\prime}})}}. \tag{48}\] This measure is bounded in the interval \([0,1]\), with \(m_{\text{NBS}}(\mathbf{R},\mathbf{R^{\prime}})=1\) indicating perfect similarity. Due to the application of linear kernels, it is invariant to orthogonal transformations, and further invariant to isotropic scaling due to the normalization. #### 3.3.7 Representation Topology Divergence The main idea of Representation Topology Divergence (RTD) [49] is to consider representations as graphs \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where each instance \(i\) corresponds to a node \(v_{i}\in\mathcal{V}\) with nodes forming an edge \((v_{i},v_{j})\in\mathcal{E}\) if the corresponding representations have a short distance to each other, and then to apply tools from algebraic topology to quantify differences between these graphs. Specifically, based on the RSMs computed from Euclidean distance, for a given distance threshold \(\alpha>0\) they construct a graph \(\mathcal{G}^{\alpha}(\mathbf{R})\) with adjacency matrix \(\mathbf{A}\) defined as \[\mathbf{A}_{i,j}=\begin{cases}\mathbf{S}_{i,j},&\text{if }\mathbf{S}_{i,j}<\alpha,\\ 0,&\text{else},\end{cases} \tag{49}\] and a union graph \(\mathcal{G}^{\alpha}(\mathbf{R},\mathbf{R}^{\prime})\) with its adjacency matrix \(\mathbf{A}\) defined as \[\mathbf{A}_{i,j}=\begin{cases}\min(\mathbf{S}_{i,j},\mathbf{S}^{\prime}_{i,j})&\text{if }\min(\mathbf{S}_{i,j},\mathbf{S}^{\prime}_{i,j})<\alpha,\\ 0,&\text{else}.\end{cases} \tag{50}\] If the number of connected components in \(\mathcal{G}^{\alpha}(\mathbf{R})\) is different from the number of connected components in \(\mathcal{G}^{\alpha}(\mathbf{R},\mathbf{R}^{\prime})\), this is considered a topological discrepancy. 
For each specific discrepancy that occurs for varying values of \(\alpha\), the shortest corresponding interval (bar) \((\alpha_{1},\alpha_{2})\), for which this discrepancy persists, is collected in a set \(\mathcal{B}(\mathbf{R},\mathbf{R}^{\prime})\) that is denoted as _barcode_. These barcodes are then summarized by the total length of their intervals, denoted as \[b(\mathbf{R},\mathbf{R}^{\prime})=\sum_{(\alpha_{1},\alpha_{2})\in\mathcal{B}(\mathbf{R},\mathbf{R}^{\prime})}\alpha_{2}-\alpha_{1}, \tag{51}\] which ultimately quantifies representational similarity between two models. Their final RTD measure is then constructed by subsampling \(K\) subsets \(\mathcal{I}^{(k)}\subset\{1,\dots,N\}\) of \(n:=|\mathcal{I}^{(k)}|<N\) instances each, and collecting the barcodes derived from representations \(\mathbf{R}^{(k)}=(\mathbf{R}_{i})_{i\in\mathcal{I}^{(k)}}\in\mathbb{R}^{n\times D}\), forming a measure \[RTD(\mathbf{R},\mathbf{R}^{\prime})=\frac{1}{K}\sum_{k=1}^{K}b\big{(}\mathbf{R}^{(k)},\mathbf{R}^{\prime(k)}\big{)}. \tag{52}\] Because \(RTD\) is asymmetric, they ultimately propose to use the following symmetrized variant: \[m_{\text{RTD}}(\mathbf{R},\mathbf{R}^{\prime})=\frac{1}{2}(RTD(\mathbf{R},\mathbf{R}^{\prime})+RTD(\mathbf{R}^{\prime},\mathbf{R})). \tag{53}\] Since each interval (bar) of the barcode represents a topological discrepancy, increasing values of RTD indicate stronger dissimilarity. If the representations are equivalent, each barcode will be an empty set, and thus it will hold that \(m_{\text{RTD}}(\mathbf{R},\mathbf{R}^{\prime})=0\). Barannikov et al. [49] make the RTD invariant to isotropic scaling by normalizing the RSM \(\mathbf{S}\) by the ninetieth percentile of its values. Further, this measure is invariant to orthogonal transformations, since Euclidean distance is used to compute the RSMs. Regarding the choice of parameters, they suggest using \(K=10\) subsets of \(n=500\) instances each as default values. 
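As a concrete instance of the RSM-based measures in this section, linear CKA from Equation (41) reduces to a few lines of linear algebra. The following sketch assumes NumPy; names are illustrative.

```python
import numpy as np

def linear_cka(R, R_prime):
    """Linear CKA (Eq. 41) with linear-kernel RSMs S = R R^T.
    Names are illustrative; follows the HSIC-based definition."""
    N = R.shape[0]
    H = np.eye(N) - np.ones((N, N)) / N            # centering matrix
    S, S_prime = R @ R.T, R_prime @ R_prime.T      # linear-kernel RSMs

    def hsic(A, B):
        return np.trace(A @ H @ B @ H) / (N - 1) ** 2

    return hsic(S, S_prime) / np.sqrt(hsic(S, S) * hsic(S_prime, S_prime))
```

As stated above, the measure is invariant to orthogonal transformations and isotropic scaling of either representation, and equals one for equivalent representations.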
### Neighborhood-Based Measures The measures in this section compare the nearest neighbors of instances in the representation space. Thus, each of these measures first determines the \(k\) nearest neighbors of each instance representation \(\mathbf{R}_{i}\) in the full representation matrix \(\mathbf{R}\) with respect to a given similarity measure \(s\). Letting \(\mathbf{S}\) denote the corresponding RSM of representation \(\mathbf{R}\), and w.l.o.g. assuming that lower values indicate more similar representations, we formally define the set of the \(k\) nearest neighbors of the instance representation \(\mathbf{R}_{i}\) as the set \(\mathcal{N}_{\mathbf{R}}^{k}(i):=\mathcal{N}_{\mathbf{R}}^{k}(i,s)\subset\{j:1\leq j\leq N,j\neq i\}\) with \(|\mathcal{N}_{\mathbf{R}}^{k}(i)|=k\) for which it holds that \(\mathbf{S}_{i,j}<\mathbf{S}_{i,l}\) for all \(j\in\mathcal{N}_{\mathbf{R}}^{k}(i),l\not\in\mathcal{N}_{\mathbf{R}}^{k}(i)\cup\{i\}\). Similar to previous groups of measures, the choice of the similarity function \(s\) directly determines which transformations a measure is invariant to. For all the measures that we introduce in the following, the neighborhood size \(k\) is a parameter that has to be chosen for the application at hand. #### 3.4.1 \(k\)-NN Jaccard Similarity This measure considers how many of the \(k\) nearest neighbors each instance has in common over a given pair of representations. It computes a vector of the instance-wise Jaccard similarities \(\mathbf{v}_{\text{Jac}}^{k}\left(\mathbf{R},\mathbf{R}^{\prime}\right)\), where the \(i\)-th element, \(1\leq i\leq N\), corresponds to the instance representations \(\mathbf{R}_{i},\mathbf{R}^{\prime}_{i}\) and is defined as \[\left(\mathbf{v}_{\text{Jac}}^{k}\left(\mathbf{R},\mathbf{R}^{\prime}\right)\right)_{i}:=\frac{\left|\mathcal{N}_{\mathbf{R}}^{k}(i)\cap\mathcal{N}_{\mathbf{R}^{\prime}}^{k}(i)\right|}{\left|\mathcal{N}_{\mathbf{R}}^{k}(i)\cup\mathcal{N}_{\mathbf{R}^{\prime}}^{k}(i)\right|}. \tag{54}\] Its values are then averaged to obtain the final similarity measure \[m_{\text{Jac}}^{k}(\mathbf{R},\mathbf{R}^{\prime}):=\frac{1}{N}\sum_{i=1}^{N}\left(\mathbf{v}_{\text{Jac}}^{k}\left(\mathbf{R},\mathbf{R}^{\prime}\right)\right)_{i}. \tag{55}\] In practice, cosine similarity is a common choice for a similarity function \(s\) to determine the nearest neighbors of each instance [37, 50]. Hryniowski and Wong [51] use the same approach under the name of nearest neighbor topological similarity with Euclidean distance to compute nearest neighbors. Gwilliam and Shrivastava [4] use this measure under the name of nearest-neighbor graph similarity. Jaccard similarity is bounded in the interval \([0,1]\). A value of zero indicates distinct nearest-neighbor sets, hence dissimilarity, whereas a value of one indicates equivalent representations. #### 3.4.2 Second-Order Cosine Similarity This method has been proposed by Hamilton et al. [14] when analyzing changes in word embeddings over time. It compares the cosine similarities (28) of each instance \(\mathbf{R}_{i}\) to its \(k\) nearest neighbors with the corresponding cosine similarities of \(\mathbf{R}^{\prime}_{i}\) to its nearest neighbors in \(\mathbf{R}^{\prime}\). Formally, the union of the \(k\) nearest neighbors is computed as an ordered set \(\{j_{1},\ldots,j_{K(i)}\}:=\mathcal{N}^{k}_{\mathbf{R}}(i)\cup\mathcal{N}^{k}_{\mathbf{R}^{\prime}}(i)\). Given these neighbors, the cosine similarity RSMs \(\mathbf{S},\mathbf{S}^{\prime}\) of the representations \(\mathbf{R},\mathbf{R}^{\prime}\) are utilized. 
The vector of second-order cosine similarities \(\mathbf{v}^{k}_{\text{2nd-cos}}\left(\mathbf{R},\mathbf{R}^{\prime}\right)\) can then be defined element-wise for \(1\leq i\leq N\) as \[\left(\mathbf{v}^{k}_{\text{2nd-cos}}\left(\mathbf{R},\mathbf{R}^{\prime}\right)\right)_{ i}:=\quad\text{cos-sim}\left(\left(\mathbf{S}_{i,j_{1}},\ldots,\mathbf{S}_{i,j_{K(i)}} \right),\left(\mathbf{S}^{\prime}_{i,j_{1}},\ldots,\mathbf{S}^{\prime}_{i,j_{K(i)}} \right)\right).\] Again, averaging the values of this vector yields the final similarity measure \[m^{k}_{\text{2nd-cos}}(\mathbf{R},\mathbf{R}^{\prime}):=\frac{1}{N}\sum_{i=1}^{N} \left(\mathbf{v}^{k}_{\text{2nd-cos}}\left(\mathbf{R},\mathbf{R}^{\prime}\right)\right)_{ i}. \tag{56}\] This measure is bounded in the interval \([0,1]\), with \(m^{k}_{\text{2nd-cos}}(\mathbf{R},\mathbf{R}^{\prime})=1\) indicating equivalence of \(\mathbf{R}\) and \(\mathbf{R}^{\prime}\). Rather than considering the union of the neighborhoods \(\mathcal{N}^{k}_{\mathbf{R}}(i),\mathcal{N}^{k}_{\mathbf{R}^{\prime}}(i)\), Chen et al. [68] essentially compute this second-order similarity over the intersection of the top-\(k\) neighborhoods. Their approach is based on graph similarity [78]. Another similar approach was presented by Moschella et al. [79] where a random fixed set of reference instances is used instead of neighbors. #### 3.4.3 Rank Similarity The \(k\)-NN Jaccard similarity captures the extent to which two neighborhoods \(\mathcal{N}^{k}_{\mathbf{R}}(i),\mathcal{N}^{k}_{\mathbf{R}^{\prime}}(i)\) overlap, but not the order of the common neighbors with respect to the distance to their reference representations \(\mathbf{R}_{i},\mathbf{R}^{\prime}_{i}\). To also assign stronger weights to closer neighbors, Wang et al. 
[50] determine distance-based ranks \(r_{\mathbf{R}_{i}}(j)\) to all \(j\in\mathcal{N}^{k}_{\mathbf{R}}(i)\), where \(r_{\mathbf{R}_{i}}(j)=n\) if \(\mathbf{R}_{j}\) is the \(n\)-th closest neighbor of \(\mathbf{R}_{i}\) with respect to a given similarity measure \(s\). Based on these ranks, one defines the vector of instance-wise ranking similarity \(\mathbf{v}^{k}_{\mathrm{ranksim}}(\mathbf{R},\mathbf{R}^{\prime})\) as \[\left(\mathbf{v}^{k}_{\mathrm{ranksim}}(\mathbf{R},\mathbf{R}^{\prime})\right)_{i}=\frac{ 1}{(\mathbf{v}_{\mathrm{max}})_{i}}\times\sum_{j\in\mathcal{N}^{k}_{\mathbf{R}}(i) \cap\mathcal{N}^{k}_{\mathbf{R}^{\prime}}(i)}\frac{2}{(1+|r_{\mathbf{R}_{i}}(j)-r_{\bm {R}^{\prime}_{i}}(j)|)(r_{\mathbf{R}_{i}}(j)+r_{\mathbf{R}^{\prime}_{i}}(j))}, \tag{57}\] where \((\mathbf{v}_{\mathrm{max}})_{i}=\sum_{k=1}^{K}\frac{1}{k}\), with \(K=|\mathcal{N}^{k}_{\mathbf{R}}(i)\cap\mathcal{N}^{k}_{\mathbf{R}^{\prime}}(i)|\), is a normalization factor that limits the maximum of the ranking similarity to one. Intuitively, the first factor of the denominator in Equation (57) measures the similarity of the ranks of an instance, whereas the second factor assigns rank-based weights to this similarity, with lower-ranked instances gaining less influence on \(\left(\mathbf{v}^{k}_{\mathrm{ranksim}}\big{(}\mathbf{R},\mathbf{R}^{\prime}\big{)}\right)_{i}\). Based on the instance-wise values, a similarity score for the full representation can be determined by averaging: \[m^{k}_{\mathrm{rank}}(\mathbf{R},\mathbf{R}^{\prime})=\frac{1}{N}\sum_{i=1}^{N}\left( \mathbf{v}^{k}_{\mathrm{ranksim}}(\mathbf{R},\mathbf{R}^{\prime})\right)_{i}. \tag{58}\] It holds that \(m^{k}_{\mathrm{rank}}(\mathbf{R},\mathbf{R}^{\prime})\in(0,1]\), with \(m^{k}_{\mathrm{rank}}(\mathbf{R},\mathbf{R}^{\prime})=1\) indicating perfect similarity. 
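The neighborhood-based measures above are straightforward to sketch. As an example, the mean \(k\)-NN Jaccard similarity of Equations (54) and (55) can be written as follows, here using Euclidean distance to determine neighbors (one possible choice of \(s\); assumes NumPy, names illustrative):

```python
import numpy as np

def knn_sets(R, k):
    """k nearest neighbors of each instance under Euclidean distance."""
    dist = np.linalg.norm(R[:, None, :] - R[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)   # an instance is not its own neighbor
    return [set(np.argsort(row)[:k]) for row in dist]

def knn_jaccard(R, R_prime, k=10):
    """Mean k-NN Jaccard similarity (Eqs. 54 and 55)."""
    nbrs, nbrs_prime = knn_sets(R, k), knn_sets(R_prime, k)
    return float(np.mean([len(a & b) / len(a | b)
                          for a, b in zip(nbrs, nbrs_prime)]))
```

Identical representations yield a score of one; representations with disjoint neighborhood sets yield zero.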
#### 3.4.4 Joint Rank and k-NN Jaccard Similarity Rank similarity has the issue that it is only calculated on the intersection of the \(k\)-nearest neighbor sets in different representations. That means rank similarity might be high, even if the \(k\)-NN sets have almost no overlap. Similarly, Jaccard similarity might be high, but the order of the nearest neighbors might be completely different. Therefore, Wang et al. [50] combine these two approaches to calculate the _embedding stability_, by considering the product of Jaccard and rank similarity. Thus, using the instance vectors defined in Equation (54) and Equation (57), we can define the vector of instance-wise similarities as \[\left(\mathbf{v}_{\mathrm{Jac-Rank}}^{k}(\mathbf{R},\mathbf{R^{\prime}})\right)_{i}=\left( \mathbf{v}_{\mathrm{Jac}}^{k}\left(\mathbf{R},\mathbf{R^{\prime}}\right)\right)_{i}\times \left(\mathbf{v}_{\mathrm{ranksim}}^{k}(\mathbf{R},\mathbf{R^{\prime}})\right)_{i}. \tag{59}\] Overall similarity, considering all instances, is then once again obtained by averaging all instances: \[m_{\mathrm{Jac-Rank}}^{k}(\mathbf{R},\mathbf{R^{\prime}})=\frac{1}{N}\sum_{i=1}^{N} \left(\mathbf{v}_{\mathrm{Jac-Rank}}^{k}(\mathbf{R},\mathbf{R^{\prime}})\right)_{i}. \tag{60}\] By the properties of \(k\)-NN Jaccard similarity and rank similarity, it follows that \(m_{\mathrm{Jac-Rank}}^{k}(\mathbf{R},\mathbf{R^{\prime}})\in[0,1]\), with \(m_{\mathrm{Jac-Rank}}^{k}(\mathbf{R},\mathbf{R^{\prime}})=1\) indicating perfect similarity. ### Descriptive Statistics Measures of this category deviate from all previous measures in a way that they describe statistical properties of either (i) individual representations \(\mathbf{R}\), or (ii) measures of variance in the instance representations \(\mathbf{R}_{i}\) over sets of more than two representations. In case of (i), the similarity scores can be directly compared over pairs or sets of representations. 
For case (ii), one could aggregate or analyze the distribution of the instance-wise variations. While there are numerous statistics that could be used to compare representations, in the following we specifically outline statistics that have already been used to characterize representations in existing literature. #### 3.5.1 Magnitude Wang et al. [50] characterize magnitude as the Euclidean length of instance representations \(\mathbf{R}_{i}\). Based on this intuition, they consider the mean representation \(\bar{\mathbf{R}}:=\frac{1}{N}\sum_{i=1}^{N}\mathbf{R}_{i}\) and define its length as one statistic to characterize an individual representation \(\mathbf{R}\): \[m_{\mathrm{Mag}}(\mathbf{R}):=\|\bar{\mathbf{R}}\|_{2}. \tag{61}\] Aside from aggregating magnitude over all instances, they further propose a measure to quantify the variance of the magnitude of instance-wise representations over multiple models. More precisely, given a set of representations \(\mathcal{R}\), Wang et al. [50] measure the variance in the magnitudes of individual instances \(i\) as \[m_{\mathrm{Var-Mag}}(\mathcal{R},i)=\frac{1}{\max_{\mathbf{R}\in\mathcal{R}}\|\mathbf{R}_{i}\|_{2}-\min_{\mathbf{R}\in\mathcal{R}}\|\mathbf{R}_{i}\|_{2}}\times\sqrt{\frac{1}{|\mathcal{R}|}\sum_{\mathbf{R}\in\mathcal{R}}(\|\mathbf{R}_{i}\|_{2}-\bar{d}_{i})^{2}}, \tag{62}\] where \(\bar{d}_{i}=\frac{1}{|\mathcal{R}|}\sum_{\mathbf{R}\in\mathcal{R}}\|\mathbf{R}_{i}\|_{2}\) is the average magnitude of the representations of instance \(i\) in \(\mathcal{R}\). As magnitude is unchanged by distance-preserving transformation, this statistic is invariant to orthogonal transformation and translation. #### 3.5.2 Concentricity Wang et al. [50] propose concentricity as a measure of the density of representations. 
This measure is also defined on instance level, with the concentricity of instance \(i\) in representation \(\mathbf{R}\) being defined as the cosine similarity (28) of the representation \(\mathbf{R}_{i}\) and the mean representation \(\bar{\mathbf{R}}\): \[\alpha_{i}(\mathbf{R})=\mathrm{cos-sim}(\mathbf{R}_{i},\bar{\mathbf{R}}). \tag{63}\] Similar to magnitude, Wang et al. [50] consider the mean concentricity \[m_{\mathrm{mConc}}(\mathbf{R}):=\frac{1}{N}\sum_{i=1}^{N}\alpha_{i}(\mathbf{R}) \tag{64}\] as a statistic for a single model, and measure the instance-wise variance of concentricity via \[m_{\mathrm{Var-Conc}}(\mathcal{R},i)=\frac{1}{\max_{\mathbf{R}\in\mathcal{R}} \alpha_{i}(\mathbf{R})\,-\min_{\mathbf{R}\in\mathcal{R}}\alpha_{i}(\mathbf{R})}\times\sqrt {\frac{1}{|\mathcal{R}|}\sum_{\mathbf{R}\in\mathcal{R}}(\alpha_{i}(\mathbf{R})-\bar{\alpha}_{i})^{2}}, \tag{65}\] where \(\bar{\alpha}_{i}=\frac{1}{|\mathcal{R}|}\sum_{\mathbf{R}\in\mathcal{R}}\alpha_{i}(\mathbf{R})\) is the average concentricity of instance \(i\) over \(\mathcal{R}\). Concentricity inherits from cosine similarity the invariances to orthogonal transformations and isotropic scaling. #### 3.5.3 Uniformity Another approach to measure density of representations is uniformity [52, 4]. Uniformity measures how close the distribution of individual representations is to a uniform distribution on the unit hypersphere, and is defined as \[m_{\mathrm{uniformity}}(\mathbf{R})=\log\left(\frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{j= 1}^{N}e^{-t\|\mathbf{R}_{i}-\mathbf{R}_{j}\|_{2}^{2}}\right), \tag{66}\] where \(t\) is a hyperparameter that is set to \(t=2\) by Wang et al. [52] and Gwilliam and Shrivastava [4]. Lower (more negative) values indicate more uniformly distributed representations; \(m_{\mathrm{uniformity}}(\mathbf{R})=0\) is only attained when all representations coincide. Uniformity is invariant to orthogonal transformations and translation, as these transformations preserve distances.
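The three statistics above can be sketched in a few lines of NumPy. This is an illustrative implementation (the function names are ours, not from the cited works), assuming a representation is given as an \(N\times D\) matrix with nonzero rows:

```python
import numpy as np

def magnitude(R):
    # m_Mag (Eq. 61): Euclidean length of the mean representation.
    return np.linalg.norm(R.mean(axis=0))

def mean_concentricity(R):
    # m_mConc (Eqs. 63-64): average cosine similarity between each
    # instance representation and the mean representation.
    mean_rep = R.mean(axis=0)
    sims = R @ mean_rep / (np.linalg.norm(R, axis=1) * np.linalg.norm(mean_rep))
    return sims.mean()

def uniformity(R, t=2.0):
    # m_uniformity (Eq. 66): log of the mean Gaussian kernel over all
    # pairs of instance representations (i = j pairs included).
    sq_dists = ((R[:, None, :] - R[None, :, :]) ** 2).sum(axis=-1)
    return np.log(np.exp(-t * sq_dists).mean())
```

Since magnitude only depends on norms and uniformity only on pairwise distances, right-multiplying \(\mathbf{R}\) by an orthogonal matrix leaves both values unchanged, which matches the invariances stated above.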
#### 3.5.4 Tolerance This statistic measures how close representations of semantically similar inputs are [53]. In contrast to all previous measures, it specifically requires a vector of ground-truth labels \(\mathbf{y}\in\mathbb{R}^{N}\). Tolerance is computed as the mean similarity of inputs with the same class: \[m_{\mathrm{tol}}(\mathbf{R})=\frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{j=1}^{N}(\mathbf{R}_{i }^{\mathsf{T}}\mathbf{R}_{j})\cdot\mathbb{1}\{\mathbf{y}_{i}=\mathbf{y}_{j}\}. \tag{67}\] It is assumed that all instance representations are preprocessed to have unit norm, which effectively makes this measure invariant to scaling and the dot product equivalent to cosine similarity. The dot product is also invariant to orthogonal transformation. #### 3.5.5 Modularity Similar to the RTD measure (cf. Section 3.3.7), this measure is based on building a graph from the representations, or more precisely, their representational similarity matrices. It is also related to tolerance, as Lu et al. [54] propose to consider modularity as a measure that quantifies whether semantically similar inputs are close together in the graph, and consequently, the representation space. They specifically suggest connecting each node to its \(k\) nearest neighbors, using cosine similarity as a distance metric. 
Thus, letting \(\mathbf{S}\) denote the corresponding RSM, the adjacency matrix \(\mathbf{A}\in\mathbb{R}^{N\times N}\) of the resulting graph \(\mathcal{G}\) is then defined as \[\mathbf{A}_{i,j}=\begin{cases}\mathbf{S}_{i,j},&\text{if }j\in\mathcal{N}_{\mathbf{R}}^{k}( i),\\ 0,&\text{otherwise}.\end{cases} \tag{68}\] The modularity of the network [80], and in consequence the statistic for \(\mathbf{R}\), is then defined as \[m_{\text{Mod}}(\mathbf{R})=\frac{1}{2W}\sum_{i,j}\left(\mathbf{A}_{i,j}-\frac{d_{i}d_ {j}}{W}\right)\cdot\mathbb{1}\{\mathbf{y}_{i}=\mathbf{y}_{j}\}, \tag{69}\] where \(d_{i}=\sum_{j}\mathbf{A}_{i,j}\) denotes the effective degree of node \(v_{i}\), \(W=\sum_{i,j}\mathbf{A}_{i,j}\) is a normalization factor, and \(\mathbf{y}\) is the vector of ground-truth labels. The maximum modularity is given by 1, and high modularity implies that nodes of the same label are highly connected with each other, with only few connections to nodes of another label. Since the adjacency matrix of the graph is based on cosine similarity, \(m_{\text{Mod}}\) is invariant to orthogonal transformation. #### 3.5.6 Neuron-RSM Modularity A variant of modularity is also used by Lange et al. [55] to describe the structure of representations, or more precisely, the pattern of neuron activations. They also define the adjacency matrix of a representation graph in terms of RSMs, however, in their case they consider RSMs that describe similarity between neuron activations instead of instance representations. Specifically, they propose four different variants of RSMs that either consider pure neuron activations in specific layers, or also gradients with respect to neuron activations at specific layers. In that latter case, one may also consider the modularity based on such RSMs as a hybrid measure that assesses characteristics of representations \(\mathbf{R}\) with respect to functional behavior of a neural network. 
The first RSM that they propose, which does not consider gradients, is defined as \[\mathbf{S}_{k,l}=\frac{1}{N-1}\left|\sum_{i=1}^{N}(\mathbf{R}_{i,k}-\bar{\mathbf{R}}_{-,k} )(\mathbf{R}_{i,l}-\bar{\mathbf{R}}_{-,l})\right|, \tag{70}\] where \(\bar{\mathbf{R}}_{-,k}\) is the mean activation of neuron \(k\). The other three RSMs consider gradients, with the first of three remaining variants considering the gradients of a representational layer with respect to the inputs \(\mathbf{X}_{i}\): \[\mathbf{S}_{k,l}=\frac{1}{N}\left|\sum_{i=1}^{N}(\nabla_{\mathbf{X}_{i}}\mathbf{R}_{i,k})^{ \mathsf{T}}\nabla_{\mathbf{X}_{i}}\mathbf{R}_{i,l}\right|. \tag{71}\] The second gradient-based RSM uses the gradient \(\nabla_{\mathbf{R}_{i}}\mathbf{O}_{i,c}\) of the outputs with respect to the neurons of a representational layer: \[\mathbf{S}_{k,l}=\frac{1}{N}\left|\sum_{i=1}^{N}\sum_{c=1}^{C}\frac{\partial\mathbf{ O}_{i,c}}{\partial\mathbf{R}_{i,k}}\frac{\partial\mathbf{O}_{i,c}}{\partial\mathbf{R}_{i,l} }\right|. \tag{72}\] Finally, the last RSM uses the Hessian of the loss \(\mathcal{L}\) with respect to the neuron activations of a given layer: \[\mathbf{S}_{k,l}=\frac{1}{N}\left|\sum_{i=1}^{N}\frac{\partial^{2}\mathcal{L}}{ \partial\mathbf{R}_{i,k}\partial\mathbf{R}_{i,l}}\right|. \tag{73}\] Once the RSM has been computed, Lange et al. [55] construct the adjacency matrix of the networks they want to compute modularity of via \[\mathbf{A}_{i,j}=\begin{cases}\mathbf{S}_{i,j},&\text{if }i\neq j,\\ 0,&\text{otherwise}.\end{cases} \tag{74}\] Unlike Lu et al. [54], they do not consider hard ground-truth labels to allocate nodes to clusters, but rather determine an optimal soft assignment of \(n\) clusters that maximizes modularity. Specifically, they try to find an optimal cluster assignment matrix \(\mathbf{C}\in\mathbb{R}^{D\times n}\), where each entry \(\mathbf{C}_{j,k}\in[0,1]\) determines the assignment of neuron \(j\in\{1,\dots,D\}\) to cluster \(k\in\{1,\dots,n\}\).
These assignments have to be normalized such that \(\mathbf{C}\mathbf{1}_{n}=\mathbf{1}_{D}\), and the number of clusters \(n\leq D\) of neuron activations is a parameter that is to be optimized as well. Following the definition of modularity-based clustering from Girvan and Newman [81], their neuron modularity is then defined as \[m_{\mathrm{nMod}}(\mathbf{R})=\max_{\mathbf{C}}\mathrm{tr}(\mathbf{C}^{\mathsf{T}}\mathbf{ \tilde{A}}\mathbf{C})-\mathrm{tr}(\mathbf{C}^{\mathsf{T}}\mathbf{\tilde{A}}\mathbf{1}_{D}\mathbf{1}_{D}^{\mathsf{T}}\mathbf{\tilde{A}}\mathbf{C}), \tag{75}\] where \(\mathbf{\tilde{A}}=\frac{1}{\mathbf{1}_{D}^{\mathsf{T}}\mathbf{A}\mathbf{1}_{D}}\mathbf{A}\) is the normalized adjacency matrix. To determine the cluster assignment \(\mathbf{C}\), they provide an approximation method based on Newman's modularity maximization algorithm. Lange et al. [55] also propose normalizing the RSMs before constructing the corresponding graphs, but did not observe large differences in the modularity of the corresponding graphs. Further, they experiment with computing the modularity of untrained models, using their initial weights, and find that the resulting modularity is similar to the modularity of trained models. This implies that these kinds of RSMs are not suitable to study training dynamics. Generally, \(m_{\mathrm{nMod}}\) is invariant to permutations, since these effectively only relabel the nodes in the resulting graph. ## 4 Functional Similarity Measures We now present functional similarity measures. These can be categorized into five main approaches: performance-based, hard prediction-based, soft prediction-based, gradient-based, and stitching-based measures. We show an overview of all measures in Table 2. Both hard prediction and soft prediction measures fundamentally measure agreement of models, as their output is directly compared without an oracle reference such as human labels. We sometimes collectively call them agreement-based measures.
These measures are related to prior literature on ensemble diversity [22, 23, 82] and inter-rater agreement [24, 26, 25]. All measures can easily be used on subsets of inputs, e.g., of specific classes, to gain more detailed insights into functional behavior. ### Performance-Based Measures A popular view on functional similarity is that models are similar if they reach similar performance on some task (e.g., [38, 95, 39, 94]). This approach is easy to implement, as the comparison of models is reduced to comparing two scalar performance scores, such as accuracy. However, this simplification also obfuscates more nuanced differences in functional behavior, which cannot directly be captured with a single number. Most commonly, given some quality function \(q\) that evaluates the performance of a model, the (absolute) difference in performance is used for similarity: \[m_{\mathrm{Perf}}(\mathbf{O},\mathbf{O^{\prime}})=|q(\mathbf{O})-q(\mathbf{O^{\prime}})|. \tag{76}\] While this measure is symmetric, we can also define an asymmetric measure by leaving out the absolute value. Although accuracy is the most commonly used quality function in literature [38, 95, 39, 94], other performance metrics such as F1 score [96] are suitable, too. However, choosing performance metrics that capture relevant aspects of functional behavior is non-trivial, as highlighted in the survey on performance metrics for vision tasks by Reinke et al. [97]. ### Hard Prediction-Based Measures The measures in this section quantify functional similarity by comparing hard predictions on instance level. Each measure in this category reports high similarity for two models if their hard predictions agree on most inputs, regardless of whether these predictions are correct or not. Aspects such as prediction confidence are not considered. In the following, we describe the hard prediction-based measures used in the machine learning literature.
#### 4.2.1 Disagreement Disagreement, also known as churn [7], jitter [8], or Hamming prediction differences [84], is the expected rate of conflicting hard predictions over inputs and models [98, 83]. Due to its simplicity, it is a particularly popular measure for functional similarity. Formally, disagreement between two models is defined as \[m_{\mathrm{Dis}}(\mathbf{O},\mathbf{O^{\prime}})=\frac{1}{N}\sum_{i=1}^{N}\mathbb{1} \,\{\arg\max_{j}\mathbf{O}_{i,j}\neq\arg\max_{j}\mathbf{O^{\prime}}_{i,j}\}. \tag{77}\] The measure is bounded in the interval \([0,1]\), with a disagreement of one indicating completely distinct functional behavior, and a disagreement of zero indicating perfect agreement in hard predictions. In practice, this range is however bounded by model quality, with high disagreement being impossible if the compared models are both very accurate. In that context, Bhojanapalli et al. [89] further present theoretical bounds on disagreement in terms of the confidence in the predictions that is encoded via the soft predictions. They show that disagreement can be expected to be low when the confidence in the predictions is high, or when the soft predictions of the compared models are very similar. _Transferred discrepancy_ used disagreement of linear classifiers trained on intermediate representations as a proxy for representational similarity [99]. 
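As a minimal illustration, disagreement can be computed directly from two output matrices. The sketch below (the function name is ours) resolves argmax ties toward the lowest class index, as NumPy does:

```python
import numpy as np

def disagreement(O, O_prime):
    # m_Dis (Eq. 77): share of the N inputs whose hard (argmax)
    # predictions differ between two output matrices of shape (N, C).
    return float((O.argmax(axis=1) != O_prime.argmax(axis=1)).mean())
```

For example, with two three-instance output matrices that agree only on the first instance, the disagreement is 2/3; comparing a model with itself yields 0.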
| Type | Measure | Groupwise | Blackbox Access | Labels Required | Similarity \(\uparrow\) |
|---|---|:-:|:-:|:-:|:-:|
| Performance | Performance Difference | ✗ | ✓ | ✓ | ✗ |
| Hard Prediction | Disagreement [83, 7, 8, 84] | ✗ | ✓ | ✗ | ✗ |
| | Error-Corrected Disagreement [85] | ✗ | ✓ | ✓ | ✗ |
| | Minmax-normalized Disagreement [9] | ✗ | ✓ | ✓ | ✗ |
| | Cohen’s Kappa [86] | ✗ | ✓ | ✗ | ✓ |
| | Fleiss’ Kappa [87] | ✓ | ✓ | ✗ | ✓ |
| | Ambiguity [37, 88] | ✓ | ✓ | ✗ | ✗ |
| | Discrepancy [88] | ✓ | ✓ | ✗ | ✗ |
| Soft Prediction | Surrogate Churn [89] | ✗ | ✓ | ✗ | ✗ |
| | Jensen-Shannon Divergence [90] | ✗ | ✓ | ✗ | ✗ |
| | Prediction Difference [84] | ✓ | ✓ | ✗ | ✗ |
| | Rashomon Capacity [91] | ✓ | ✓ | ✗ | ✗ |
| Gradient | ModelDiff [92] | ✗ | ✗ | ✓ | ✓ |
| | Adversarial Transferability [93] | ✗ | ✗ | ✓ | ✓ |
| | Saliency Map Similarity [5] | ✗ | ✗ | ✓ | ✓ |
| Stitching | Performance Difference [94, 39, 95] | ✗ | ✗ | ✓ | ✗ |

Table 2: Overview of functional similarity measures.

#### 4.2.2 Error-Corrected Disagreement As the range of possible disagreement values is dependent on the accuracy of the compared models, Fort et al. [85] propose to correct for this influence by dividing disagreement by the error rate \(q_{\mathrm{Err}}\) (Eq. (11)) of one of the models: \[m_{\mathrm{ErrCorDis}}(\mathbf{O},\mathbf{O^{\prime}})=\frac{m_{\mathrm{Dis}}(\mathbf{O}, \mathbf{O^{\prime}})}{q_{\mathrm{Err}}(\mathbf{O})}. \tag{78}\] By design, this measure is not symmetric since the error rates of the outputs \(\mathbf{O},\mathbf{O^{\prime}}\) may vary.
A normalized disagreement of zero indicates perfect agreement, whereas the upper limit depends on the error rate; exact limits are provided by Fort et al. [85], which helps to contextualize the obtained similarity scores. A normalized variant of this measure, which also provides symmetry, has been used by Klabunde and Lemmerich [9]. In their _Min-Max-normalized disagreement_ measure, they relate the obtained disagreement \(m_{\mathrm{Dis}}\) to the minimum and maximum possible disagreement, which are given by \[m_{\mathrm{Dis}}^{(\min)}(\mathbf{O},\mathbf{O^{\prime}})=|q_{\mathrm{Err}}(\mathbf{O})- q_{\mathrm{Err}}(\mathbf{O^{\prime}})|\quad\text{and}\quad m_{\mathrm{Dis}}^{( \max)}(\mathbf{O},\mathbf{O^{\prime}})=\min(q_{\mathrm{Err}}(\mathbf{O})+q_{\mathrm{Err}} (\mathbf{O^{\prime}}),1), \tag{79}\] respectively. Based on these values, their measure is defined as \[m_{\mathrm{MinMaxNormDis}}(\mathbf{O},\mathbf{O^{\prime}})=\frac{m_{\mathrm{Dis}}( \mathbf{O},\mathbf{O^{\prime}})-m_{\mathrm{Dis}}^{(\min)}(\mathbf{O},\mathbf{O^{\prime}})}{ m_{\mathrm{Dis}}^{(\max)}(\mathbf{O},\mathbf{O^{\prime}})-m_{\mathrm{Dis}}^{(\min)}( \mathbf{O},\mathbf{O^{\prime}})}. \tag{80}\] This measure is bounded in the interval \([0,1]\), with \(m_{\mathrm{MinMaxNormDis}}(\mathbf{O},\mathbf{O^{\prime}})=0\) indicating perfect agreement between the models. #### 4.2.3 Chance-Corrected Disagreement Rather than correcting for the accuracy of the models, chance-corrected disagreement measures correct for the rate of agreement that two or more classification models are expected to have by chance. Probably the most prominent measures that follow this rationale are _Cohen's Kappa_ [86] and _Fleiss' Kappa_ [87], which were historically introduced as measures of inter-rater agreement and can also be used to measure the functional similarity of machine learning models [28, 100].
Both measures assume that the outputs that they are comparing are statistically independent, with the main difference being that Cohen's Kappa can only compare a pair of model outputs. Given a pair of models \(f,f^{\prime}\) with corresponding outputs \(\mathbf{O},\mathbf{O^{\prime}}\), and letting \(k_{c}=\sum_{i=1}^{N}\mathbbm{1}\{\arg\max_{j}\mathbf{O}_{i,j}=c\}\) denote the absolute number of times that class \(c\) is predicted by model \(f\), the expected agreement rate of such models is given by \(p_{e}=\frac{1}{N^{2}}\sum_{c=1}^{C}k_{c}k_{c}^{\prime}\). Based on these values, Cohen's Kappa is defined as \[m_{\text{Cohen}}(\mathbf{O},\mathbf{O^{\prime}})=1-\frac{m_{\mathrm{Dis}}(\mathbf{O},\mathbf{ O^{\prime}})}{1-p_{e}}=\frac{p_{o}-p_{e}}{1-p_{e}}, \tag{81}\] where \(p_{o}=1-m_{\mathrm{Dis}}(\mathbf{O},\mathbf{O^{\prime}})\) denotes the observed agreement. A value of \(m_{\text{Cohen}}(\mathbf{O},\mathbf{O^{\prime}})=1\) indicates perfect agreement of the models, whereas a value \(m_{\text{Cohen}}(\mathbf{O},\mathbf{O^{\prime}})<0\) indicates less agreement than expected by chance. Interpreting Cohen's Kappa is non-trivial, as Kappa values are influenced by the accuracy of the models, the number of classes, and class imbalance [101, 102], with the latter issue being quite prevalent in application scenarios. Fleiss' Kappa [87] can be seen as an extension of Cohen's Kappa to settings with multiple classification models. Letting \(T=|\mathcal{O}|\) denote the number of compared models, \(k_{ic}=\sum_{\mathbf{O}\in\mathcal{O}}\mathbbm{1}\{\arg\max_{j}\mathbf{O}_{i,j}=c\}\) the number of times instance \(i\) is predicted as class \(c\), and \(p_{c}=\frac{1}{TN}\sum_{i=1}^{N}k_{ic}\) the share of class \(c\) over all predictions, the expected agreement over all models is given by \(\bar{P}_{e}=\sum_{c=1}^{C}p_{c}^{2}\).
This expected agreement over all models is then related to the actual agreement \(\bar{P}=\frac{1}{N}\sum_{i=1}^{N}P_{i}\), where \(P_{i}=\frac{2}{T(T-1)}\sum_{c=1}^{C}\frac{k_{ic}(k_{ic}-1)}{2}\), \(i\in\{1,\dots,N\}\) denotes the actual instance-wise agreements. Finally, the definition of the measure is similar to Cohen's Kappa: \[m_{\text{Fleiss}}(\mathcal{O})=\frac{\bar{P}-\bar{P}_{e}}{1-\bar{P} _{e}}. \tag{82}\] Similar to Cohen's Kappa, \(m_{\text{Fleiss}}(\mathcal{O})=1\) indicates perfect agreement of outputs, and values lower than zero indicate less agreement than expected by chance. Aside from these two measures, there are several further measures that correct for chance agreement. Cohen [103] proposes a weighted variant of his Kappa measure that assigns weights to different kinds of disagreement; for example, in ordinal classification, disagreement between similar classes may weigh less compared to disagreement of more distinct classes. Conger [104] and Davies and Fleiss [105] propose alternatives to Fleiss' Kappa that relax the assumption of identical marginal prediction distributions to compute expected agreement. Krippendorff's Alpha [106] can be used as a generalization of several agreement measures. #### 4.2.4 Groupwise Disagreement Disagreement cannot identify commonalities across a whole set of models, as pairwise similarity of models does not imply groupwise similarity. The two following measures extend disagreement to identify functional similarity across sets of models. Ambiguity [88], also called _linear prediction overlap_ [4], is the share of instances that receive conflicting predictions by _any_ pair of models out of a given set of models.
Ambiguity is defined formally as follows: \[m_{\text{Ambiguity}}(\mathcal{O})=\frac{1}{N}\sum_{i=1}^{N}\max_{ \begin{subarray}{c}\boldsymbol{O},\boldsymbol{O}^{\prime}\in\mathcal{O}\\ s.t.\ \boldsymbol{O}\not=\boldsymbol{O}^{\prime}\end{subarray}}1\{\arg\max_{j} \boldsymbol{O}_{i,j}\neq\arg\max_{j}\boldsymbol{O}^{\prime}_{i,j}\}. \tag{83}\] If all outputs have perfect agreement, it holds that \(m_{\text{Ambiguity}}(\mathcal{O})=0\). Marx et al. [88] originally proposed ambiguity to measure multiplicity of models with similar performance. Hence, in this formulation, one model was fixed for all comparisons and the maximum was taken over the set of models that have similar loss. The counterpart to ambiguity is the _stable core_ measure proposed by Schumacher et al. [37], which counts the share of instances with consistent predictions, and can be defined as \[m_{\text{StableCore}}(\mathcal{O})=1-m_{\text{Ambiguity}}(\mathcal{O}). \tag{84}\] They also consider a relaxation of the stable core, where an instance was only required to obtain the same prediction by a fixed proportion of models to be considered stable, which they set to 90%. Discrepancy [88] is defined as the maximum disagreement between two classifiers from a bigger set of models: \[m_{\text{Discrepancy}}(\mathcal{O})=\max_{\begin{subarray}{c}\boldsymbol{O}, \boldsymbol{O}^{\prime}\in\mathcal{O}\\ s.t.\ \boldsymbol{O}\not=\boldsymbol{O}^{\prime}\end{subarray}}\frac{1}{N}\sum_{i=1}^{ N}\mathbb{1}\{\arg\max_{j}\boldsymbol{O}_{i,j}\neq\arg\max_{j}\boldsymbol{O}^{ \prime}_{i,j}\}. \tag{85}\] Similar to ambiguity, Marx et al. [88] proposed discrepancy to measure model multiplicity. Again, one model was fixed for all comparisons and the maximum was taken over the set of models that have similar loss. ### Soft Prediction-Based Measures This group of measures specifically compares soft prediction outputs, such as class-wise probabilities or scores from decision functions. 
Intuitively, this provides more nuance to the notion of similarity in outputs, since we can consider differences in confidence of individual predictions. The impact of confidence is specifically exemplified by cases where scores are close to the decision boundary. Even a minimal change in scores may cause a different classification in one case, whereas scores would need to change drastically for a different classification in another case. The following measures are thus particularly sensitive to such cases. #### 4.3.1 Surrogate Churn Bhojanapalli et al. [89] propose _surrogate churn_ (SChurn) as a relaxed version of disagreement, that takes into account the distribution of the soft predictions. For \(\alpha>0\), it is defined as \[m_{\text{SChurn}}^{\alpha}(\boldsymbol{O},\boldsymbol{O^{\prime}})=\frac{1}{2N }\sum_{i=1}^{N}\left\|\left(\frac{\boldsymbol{O}_{i}}{\max_{c}\boldsymbol{O}_{ i,c}}\right)^{\alpha}-\left(\frac{\boldsymbol{O}^{\prime}_{i}}{\max_{c} \boldsymbol{O}^{\prime}_{i,c}}\right)^{\alpha}\right\|_{1}. \tag{86}\] A value \(m_{\text{SChurn}}^{\alpha}(\boldsymbol{O},\boldsymbol{O^{\prime}})=0\) indicates perfect agreement of outputs. The authors show that when \(\alpha\to\infty\), this measure is equivalent to standard disagreement (cf. Sec. 4.2.1), and use \(\alpha=1\) as the default value. #### 4.3.2 Jensen-Shannon Divergence When soft predictions are specifically modelling class probabilities, several divergence measures for probability distributions could be applied to measure the difference between instance-level predictions. A popular choice of measure for that case is Jensen-Shannon Divergence (JSD) [90], which, to measure functional similarity, is applied on every instance and then averaged [100, 85]. 
Thus, letting \(\mathrm{KL}(\cdot\|\cdot)\) denote the Kullback-Leibler divergence [107], and \(\mathbf{M}_{i}=\frac{1}{2}(\mathbf{O}_{i}+\mathbf{O}_{i}^{\prime})\) the instance-wise mixture distribution, this measure is defined as \[m_{\text{JSD}}(\mathbf{O},\mathbf{O^{\prime}})=\frac{1}{2N}\sum_{i=1}^{N}\left[\mathrm{KL}( \mathbf{O}_{i}\|\mathbf{M}_{i})+\mathrm{KL}(\mathbf{O}_{i}^{\prime}\|\mathbf{M}_{i})\right]. \tag{87}\] Equality of outputs is given when \(m_{\text{JSD}}(\mathbf{O},\mathbf{O^{\prime}})=0\), and higher values indicate more dissimilarity. As noted above, several divergence measures could be applied to measure similarity of probability distributions. A comprehensive overview of such divergence measures is given by Cha [108]. #### 4.3.3 Prediction Difference Shamir and Coviello [84] specifically consider differences in predictions over more than two models. Their _prediction difference_ (PD) intuitively quantifies variance in model predictions. Letting \(\mathbf{\bar{O}}=\frac{1}{|\mathcal{O}|}\sum_{\mathbf{O}\in\mathcal{O}}\mathbf{O}\) denote the average output matrix, their standard prediction difference measure aggregates instance-wise deviations from the average output in terms of a \(p\)-norm: \[m_{\text{PD}}^{p}(\mathcal{O})=\frac{1}{N}\sum_{i=1}^{N}\frac{1}{|\mathcal{O} |}\sum_{\mathbf{O}\in\mathcal{O}}\|\mathbf{O}_{i}-\mathbf{\bar{O}}_{i}\|_{p}. \tag{88}\] Shamir and Coviello [84] use \(p=1\) for interpretable differences of probability distributions. \(m_{\text{PD}}^{p}(\mathcal{O})=0\) indicates identical outputs of all models and is therefore rarely achieved in practical settings. Other than that, higher PD indicates higher dissimilarity between the compared models. Next to norm-based prediction difference, Shamir and Coviello [84] further propose measures that relate the variance in the outputs to their average magnitude. That way, differences on low-confidence predictions are penalized more strongly.
Relative prediction difference is defined as \[m_{\text{Rel-PD}}(\mathcal{O})=\frac{1}{N}\sum_{i=1}^{N}\frac{1}{|\mathcal{O} |}\sum_{\mathbf{O}\in\mathcal{O}}\left[\sum_{c=1}^{C}\frac{|\mathbf{O}_{i,c}-\mathbf{ \bar{O}}_{i,c}|}{\mathbf{\bar{O}}_{i,c}}\right]. \tag{89}\] When ground-truth labels \(\mathbf{y}\) are given, Shamir and Coviello [84] further propose to specifically focus on discrepancies in the confidence of the predictions of the true labels \(\mathbf{y}_{i}\). The corresponding measure is defined as \[m_{\text{Rel-True-PD}}(\mathcal{O})=\frac{1}{N}\sum_{i=1}^{N}\frac{1}{| \mathcal{O}|}\sum_{\mathbf{O}\in\mathcal{O}}\left[\frac{|\mathbf{O}_{i,\mathbf{y}_{i}}- \mathbf{\bar{O}}_{i,\mathbf{y}_{i}}|}{\mathbf{\bar{O}}_{i,\mathbf{y}_{i}}}\right]. \tag{90}\] As for the standard prediction difference, both of these variants can only achieve values of zero when all models make identical predictions, and higher values indicate more dissimilarity. #### 4.3.4 Rashomon Capacity Rashomon Capacity (RC) [91] measures multiplicity in predictions on individual instances. It is rooted in information theory and has been proposed to specifically measure differences in distinct models that have very similar loss, so-called Rashomon sets. However, it can also be used to measure similarity of a general set of outputs \(\mathbf{O}\). Formally, letting \(P_{\mathbf{O}}\) denote a probability distribution over the set of outputs \(\mathcal{O}\), and \(\Delta_{C}\) the probability simplex (10), the output spread given this distribution is defined as \[\inf_{\mathbf{p}\in\Delta_{C}}\mathbb{E}_{\mathbf{O}\sim P_{\mathcal{O}}}\mathrm{KL}( \mathbf{O}_{i}\|\mathbf{p}), \tag{91}\] where \(\mathbf{p}\in\Delta_{C}\) is a reference distribution that is optimized to minimize distances to all outputs. 
The maximum spread, or _channel capacity_, over the outputs is then determined over all probability distributions on the outputs: \[\mathrm{Capacity}(\mathcal{O},i)=\sup_{P_{\mathbf{O}}}\inf_{\mathbf{p}\in\Delta_{C}} \mathbb{E}_{\mathbf{O}\sim P_{\mathbf{O}}}\mathrm{KL}(\mathbf{O}_{i}\|\mathbf{p}). \tag{92}\] Finally, the Rashomon Capacity over instance \(i\) is defined as \[m_{\text{RC}}(\mathcal{O},i)=2^{\mathrm{Capacity}(\mathcal{O},i)}. \tag{93}\] To approximate the Rashomon capacity of an instance, Hsu and Calmon [91] suggest using the Blahut-Arimoto algorithm [109, 110]. A similarity measure over all instances can be obtained by aggregation, such as taking the mean value. It holds that \(m_{\text{RC}}(\mathcal{O},i)\in[1,C]\) with \(m_{\text{RC}}(\mathcal{O},i)=1\) if and only if all outputs are identical, and \(m_{\text{RC}}(\mathcal{O},i)=C\) if and only if every class is predicted once with perfect confidence. Further, the measure is monotone, i.e., it holds that \(m_{\text{RC}}(\mathcal{O}^{\prime},i)\leq m_{\text{RC}}(\mathcal{O},i)\) for all \(\mathcal{O}^{\prime}\subseteq\mathcal{O}\). ### Gradient-Based Measures The measures in this section use model gradients to characterize similarity. A core assumption of these methods is that similar models have similar gradients. This means that, for instance, adversarial examples created for one model should lead to similar effects in another model if they are similar. #### 4.4.1 ModelDiff In their _ModelDiff_ measure, Li et al. [92] use adversarial examples from perturbation attacks to characterize decision regions, which can then be compared across two models. Given a model \(f\), they first create adversarial examples \(\mathbf{\tilde{X}}_{i}\) for every input \(\mathbf{X}_{i}\), by adding noise to these inputs that steers the model away from a correct prediction. Such examples can be determined by methods such as projected gradient descent [111].
The difference between the original soft predictions \(\mathbf{O}_{i}=f(\mathbf{X}_{i})\) and the output of the adversarial example \(\tilde{\mathbf{O}}_{i}=f(\mathbf{\tilde{X}}_{i})\) is collected in a _decision distance vector_\(\mathbf{v}_{\mathrm{DDV}}\) (DDV). This difference is computed in terms of cosine similarity (28): \[\big{(}\mathbf{v}_{\mathrm{DDV}}(\mathbf{O},\tilde{\mathbf{O}})\big{)}_{i}=\mathrm{cos-sim} (\mathbf{O}_{i},\tilde{\mathbf{O}}_{i}). \tag{94}\] Finally, the DDVs of different models are compared, with the outputs \(\tilde{\mathbf{O}}_{i}^{\prime}=f^{\prime}(\mathbf{\tilde{X}}_{i})\) being computed from the same adversarial examples \(\mathbf{\tilde{X}}_{i}\), and again using cosine similarity: \[m_{\text{ModelDiff}}(\mathbf{O},\mathbf{O}^{\prime})=\mathrm{cos-sim}(\mathbf{v}_{\mathrm{ DDV}}(\mathbf{O},\tilde{\mathbf{O}}),\mathbf{v}_{\mathrm{DDV}}(\mathbf{O}^{\prime},\tilde{\mathbf{O}}^{ \prime})). \tag{95}\] A similarity score \(m_{\text{ModelDiff}}(\mathbf{O},\mathbf{O}^{\prime})=1\) indicates that both models are equivalent in their outputs. Since this measure uses adversarial examples of only one of the models, it is not symmetric. This asymmetry is rooted in the fact that ModelDiff was developed to identify (unauthorized) model reuse. In this scenario, access to a third-party model might be restricted and thus the generation of adversarial examples could become infeasible. Li et al. [92] also propose an approach to find adversarial examples without white-box access to either model. #### 4.4.2 Adversarial Transferability Similar to ModelDiff, Hwang et al. [93] measure the similarity of networks in terms of the transferability of adversarial attacks: if both models are susceptible to the same adversarial examples, then the networks are considered similar. 
Given two networks \(f,f^{\prime}\), for each input \(\mathbf{X}_{i}\) that is predicted correctly by both networks, a pair of corresponding adversarial examples \(\mathbf{\tilde{X}}_{i}\), \(\mathbf{\tilde{X}}_{i}^{\prime}\) is generated with projected gradient descent [111]. These adversarial examples are then fed into the opposite model, yielding outputs \(\tilde{\mathbf{O}}_{i}=f(\mathbf{\tilde{X}}_{i}^{\prime})\) and \(\tilde{\mathbf{O}}^{\prime}_{i}=f^{\prime}(\mathbf{\tilde{X}}_{i})\), for which it is then determined how often both are incorrect. Thus, given the vector of ground-truth labels \(\mathbf{y}\), and letting \[\mathcal{X}_{\text{true}}:=\{i|\arg\max_{j}\mathbf{O}_{i,j}=\arg\max_{j}\mathbf{O}_{i,j}^{\prime}=\mathbf{y}_{i}\} \tag{96}\] denote the set of instances that were predicted correctly by both models, Hwang et al. [93] define their measure as \[m_{\text{AdvTrans}}(\tilde{\mathbf{O}},\tilde{\mathbf{O}}^{\prime})=\log\bigg{[}\max \bigg{\{}\varepsilon,\frac{100}{2\left|\mathcal{X}_{\text{true}}\right|}\sum_ {i\in\mathcal{X}_{\text{true}}}\big{(}\mathbb{1}(\arg\max_{j}\tilde{\mathbf{O}}_{ i,j}\neq\mathbf{y}_{i})+\mathbb{1}(\arg\max_{j}\tilde{\mathbf{O}}_{i,j}^{\prime}\neq\mathbf{y}_{i})) \bigg{\}}\bigg{]}, \tag{97}\] where \(\varepsilon>0\) is introduced to avoid \(\log(0)\). A value of \(m_{\text{AdvTrans}}(\tilde{\mathbf{O}},\tilde{\mathbf{O}}^{\prime})=\log(100)\) indicates perfect model similarity, whereas \(m_{\text{AdvTrans}}(\tilde{\mathbf{O}},\tilde{\mathbf{O}}^{\prime})=\log(\varepsilon)\) indicates complete disagreement. #### 4.4.3 Cosine Similarity of Saliency Maps When investigating the relationship between model similarity and robustness, among other methods, Jones et al. [5] apply a direct approach to compare models in terms of their gradients. More precisely, they compute the cosine similarity (28) between (vectorized) saliency maps [112]. 
Saliency maps have been introduced in the context of image classification, where they display the influence of each pixel of an input image on the prediction for a specific class. More generally, a saliency map can be defined as the gradient of an individual prediction \(\mathbf{O}_{i,c}\) with respect to its corresponding input \(\mathbf{X}_{i}\). Based on such individual gradients, a similarity score can be computed by aggregation: \[m_{\text{SaliencyMap}}(\mathbf{O},\mathbf{O}^{\prime})=\frac{1}{NC}\sum_{i=1}^{N}\sum _{c=1}^{C}\mathrm{cos-sim}\left(|\nabla_{\mathbf{X}_{i}}\mathbf{O}_{i,c}|,|\nabla_{ \mathbf{X}_{i}}\mathbf{O}_{i,c}^{\prime}|\right), \tag{98}\] where the absolute value \(|\cdot|\) is applied element-wise. A value \(m_{\text{SaliencyMap}}(\mathbf{O},\mathbf{O}^{\prime})=1\) indicates perfect similarity, with lower values indicating stronger differences between models. ### Stitching-Based Measures The intuition behind _stitching_ is that similar models should be similar in their internal processes and, thus, swapping layers between such models should not result in large differences in the outputs if a layer that converts representations is introduced [95, 39, 94]. Given two models \(f,f^{\prime}\), stitching consists of training a _stitching layer_ (or network) \(g\) to convert representations \(\mathbf{R}\) of model \(f\) at layer \(l\) into representations \(\mathbf{R^{\prime}}\) of model \(f^{\prime}\) at some layer \(l^{\prime}\). One then considers the composed model \[\tilde{f}:=f^{\prime(L^{\prime})}\circ\cdots\circ f^{\prime(l^{\prime}+1)}\circ f ^{\prime(l^{\prime})}\circ g\circ f^{(l)}\circ f^{(l-1)}\circ\cdots\circ f^{( 1)}, \tag{99}\] which uses the bottom-most layers of \(f\) and the top-most layers of \(f^{\prime}\), and compares its performance with the original models.
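To make the composition in eq. (99) concrete, a toy numpy sketch with illustrative layer stacks (all shapes, names, and the identity stitching map are hypothetical; in practice the stitched models are trained networks with frozen parameters and \(g\) is a trained stitching layer):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy layer stacks standing in for f and f'
f_layers  = [lambda x, W=rng.normal(size=(4, 5)): x @ W,   # f^(1)
             lambda r: np.maximum(r, 0.0)]                 # f^(2)
f2_layers = [lambda x, W=rng.normal(size=(4, 5)): x @ W,   # f'^(1)
             lambda r: np.maximum(r, 0.0),                 # f'^(2)
             lambda r, W=rng.normal(size=(5, 3)): r @ W]   # f'^(3)

def compose(layers, x):
    for layer in layers:
        x = layer(x)
    return x

def stitched(x, g, l=2, l2=2):
    # eq. (99): bottom-most l layers of f, stitching map g,
    # then the layers of f' above the stitching point l2
    return compose(f2_layers[l2:], g(compose(f_layers[:l], x)))

g = lambda r: r  # identity stitching map; only valid if dimensions match
out = stitched(rng.normal(size=(8, 4)), g)
```

The composed model consumes inputs of \(f\) and produces outputs in the output space of \(f^{\prime}\), which is what allows its performance to be compared with the original models.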
In existing literature, the most prevalent approach for that is to directly compare their performance in terms of a quality function \(q\) such as accuracy [39, 94]. This yields a measure \[m_{\text{stitch}}(\tilde{\mathbf{O}},\mathbf{O^{\prime}})=q(\tilde{\mathbf{O}})-q(\mathbf{O^{ \prime}}), \tag{100}\] where \(\tilde{\mathbf{O}}=\tilde{f}(\mathbf{X})\) is the output of the stitched model. However, we point out that other functional similarity measures may be suitable to identify more fine-grained similarity. Although stitching operates on representations, we classify it as a functional similarity measure because, in the end, one only considers differences in the outputs. Both the design and the placement of the stitching layer are of high importance to obtain proper assessments of model similarity. Regarding design, Bansal et al. [94] generally choose stitching layers such that the architecture of the composed model is consistent with the architectures of the stitched models. For instance, they use a token-wise linear function between transformer blocks to stitch transformers. For CNNs, 1x1 convolutions are generally used in stitching layers [95, 39, 94]. Csiszarik et al. [39] further try orthogonal transformations, linear transformations, and low-rank linear transformations, to directly convert between representations, though SGD-trained stitching layers outperform these options. To train the stitching layers, one typically freezes parameters of the stitched models and only optimizes the weights of the stitching layer via backpropagation, using ground truth labels or the output of \(f^{\prime}\) as soft labels [39, 94, 95]. To aid the optimization, Bansal et al. [94] suggest adding BatchNorm layers before and after the stitching layer. 
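As a self-contained illustration of eq. (100), the following sketch uses a plain linear stitching map fit in closed form and accuracy as the quality function \(q\); all data here is synthetic, and by construction the second model's representations are an exact linear transform of the first's, so the stitching penalty vanishes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative data: representations of f at layer l (R) and of f' at
# layer l' (R2); W_top stands in for the frozen top part of f'.
R = rng.normal(size=(200, 6))
R2 = R @ rng.normal(size=(6, 6))
W_top = rng.normal(size=(6, 3))
y = (R2 @ W_top).argmax(1)  # labels as predicted by f' itself

def accuracy(logits, y):
    return float((logits.argmax(1) == y).mean())

# Closed-form linear stitching layer (least squares), then eq. (100)
T, *_ = np.linalg.lstsq(R, R2, rcond=None)
m_stitch = accuracy((R @ T) @ W_top, y) - accuracy(R2 @ W_top, y)
```

With representations that are only approximately related, the least-squares fit is imperfect and \(m_{\text{stitch}}\) becomes negative, quantifying how much performance is lost by plugging in the bottom of \(f\).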
In case the stitching layer constitutes a simple linear transformation \(\mathbf{T}\), its weights can be directly computed by solving the least squares problem \(\min_{\mathbf{T}}\|\mathbf{R}^{(l)}\mathbf{T}-\mathbf{R^{\prime}}^{(l^{\prime})}\|_{F}\). This objective can be modified with \(L1\) regularization [39] to encourage a sparse stitching, i.e., each neuron of the stitched layer takes in the outputs of only a few neurons of the bottom network. The asymmetry of model stitching allows one to conclude whether parts of one model are better than another (does plugging in the representations of model \(f\) increase performance of the stitched model compared to model \(f^{\prime}\)?), whereas representational similarity measures can only indicate whether they are similar or not. Compared to other measures, model stitching requires training of an additional layer and thus might be more complicated and expensive to implement. ## 5 Meta-Analysis of Similarity Measures In this section, we will summarize existing results regarding the properties of similarity measures and their relationships, to aid the readers in choosing appropriate measures for their application. Much of the analysis of similarity measures in deep learning is focused on representational similarity. However, popular functional similarity measures are used beyond deep learning and, thus, have been analyzed in more general contexts. Hence, to stay within the scope of similarity measures for deep learning, we focus on the results for representational similarity measures, but we point to related work for functional measures [113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123]. ### Correlation between Functional and Representational Measures There has been only little work investigating the relationship between representational and functional similarity over specific kinds of neural networks. Most prominently, Ding et al.
[38] conduct a correlation analysis on BERT language models [124] and ResNet image recognition models [125], where they investigate whether diverging functional behavior can also be depicted through representational similarity measures. For that purpose, they induce functional changes on the given models, such as varying training seeds, decreasing layer depth, removing principal components of representations at certain layers, or applying out-of-distribution inputs, and investigate whether observed changes in accuracy on classification tasks correlate with changes in representational similarity as measured by CKA, PWCCA, and Orthogonal Procrustes. Overall, they observe that on these two neural network types, the Procrustes measure generally correlates well with changes in functional behavior, which conversely does not always hold for CKA and PWCCA. Specifically, they find that, when removing principal components from intermediate representations, CKA is much less sensitive to these changes than orthogonal Procrustes and PWCCA. CKA still indicates high similarity between the original and the modified representation when the accuracy of the model has already dropped by over 15 percent. A similar analysis has been conducted by Hayne et al. [57]. They induce functional changes by deleting neurons in the linear layers of ImageNet-trained CNNs. Aside from using accuracy as a functional measure, they use another performance-based measure based on the rank of the true class in the soft predictions. They report that, on these CNNs, the orthogonal Procrustes measure and CKA correlate more with functional similarity than CCA measures. A key difference in their protocol compared to Ding et al. [38] is that they do not use the same preprocessing of representations. They do not normalize the representations to unit norm, which makes their results not directly comparable. Davari et al.
[126] have specifically explored the connection between CKA and functional similarity measures in more detail, pointing out how this measure is sensitive to manipulations of its input representations that would not affect the functional similarity of the underlying models. As one of their main results, they have shown that manipulating the representation of only a single instance can strongly affect the CKA score. Specifically, they could alter the CKA of two identical representations to almost zero by translating the representation of a single instance in one of the copies, without affecting the separability of the corresponding classes by this translation. Further, they have shown how pairwise CKA scores between layers can be altered to a specified reference with a setup similar to distillation, while leaving functional similarity almost unaffected. This kind of sensitivity of the CKA to manipulations that would not affect functional similarity of models has also been reported by Csiszarik et al. [39]. CKA scores have also been compared with the disagreement of models [49], where CKA correlated with disagreement to a clearly lower degree than RTD. Finally, a number of works [33, 68, 46, 38] have explored the ability of representational similarity measures to match corresponding layers in pairs of models that only differ in their training seed. Since such models and their corresponding layers are typically very similar in terms of their performance [127], they are considered functionally similar in that context. In the experiments, the authors compare the similarities between all combinations of layers \(f^{(l)},f^{\prime(l^{\prime})}\), and consider a measure \(m\) to correctly match layers if the value \(m(\mathbf{R}^{(l)},\mathbf{R}^{\prime(l)})\) indicates higher similarity than \(m(\mathbf{R}^{(l)},\mathbf{R}^{\prime(l^{\prime})})\) for all \(l^{\prime}\neq l\). CKA and a variant of second-order cosine similarity overall outperform CCA-based measures in these experiments.
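The matching criterion used in these experiments can be written compactly; here `S` is a hypothetical matrix of pairwise similarity scores where larger values mean higher similarity:

```python
import numpy as np

def matched_layers(S):
    # S[l, l2] holds the similarity m(R^(l) of f, R^(l2) of f'); layer l
    # counts as correctly matched if the diagonal entry beats every other
    # entry in its row.
    S = np.asarray(S)
    return [int(S[l].argmax()) == l for l in range(S.shape[0])]

S = np.array([[0.9, 0.2, 0.1],
              [0.3, 0.8, 0.4],
              [0.2, 0.7, 0.5]])  # the third layer would be matched incorrectly
```

For distance-like measures, where smaller values mean higher similarity, `argmax` would simply be replaced by `argmin`.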
### Discriminative Abilities of Representational Similarity Measures There are a number of works that have assessed different aspects regarding what kind of representations are considered similar by representational similarity measures. Morcos et al. [1] test the robustness of CCA-based measures to noise in representations. They argue that such measures should identify two representations as similar if they have a fixed shared part, next to a number of dimensions that are random noise. Thus, they test the measures with a varying share of noise dimensions in the overall representations, and find that PWCCA overall is most robust in indicating high similarity, even if half of the dimensions are noise. When the number of noise dimensions was smaller, SVCCA also showed robust behavior, with only mean CCA falling behind. Shahbazi et al. [46] test whether representations obtained by sampling a low number of dimensions from a given baseline still yield high similarity with the original representation, as well as other low-dimensional samples. They specifically compare CKA, Riemannian distance, RSA, and Frobenius norm of the RSM difference on a neuroscience dataset, where the sampled dimensions are varied between 10 and 50. Their findings indicate that Riemannian distance almost always assigns high similarity between a low-dimensional sample and its original representation. The other measures do not assign such high similarities when dimensionality was low, though CKA performed better than RSA and the RSM norm. In higher dimensions, all measures perform equally well. Further, Riemannian distance generally indicated high similarity between two low-dimensional samples from the same baseline. Lin and Kriegeskorte [47] have tested whether AGTIC, dCor, and HSIC, the statistic used in CKA, are able to discriminate between varying distributions of data patterns, such as spirals or circles.
In lower-dimensional representations, AGTIC overall appeared to discriminate better than HSIC and dCor. When dimensions were higher, these measures yielded similar performance at this discrimination task. A similar experiment has been conducted by Barannikov et al. [49], who also use synthetic data patterns to test the ability of RTD to discriminate between topologically different data. Specifically, they generate data points that come from an increasing number of different clusters that are arranged circularly in two-dimensional space. They argue that the similarity between the original data with one cluster and other datasets with more clusters should decrease with an increasing number of clusters. In their results, the rank correlation between the similarity score and the number of clusters in the data was perfect for RTD, whereas CKA and SVCCA had relatively low correlations. Finally, Tang et al. [48] argued that models trained on two similar datasets, such as CIFAR-10 and CIFAR-100, should be more similar than models trained on dissimilar datasets that, for instance, do not contain natural images. In their experiments, they consider the CKA and NBS measures. They find that CKA discriminates better between these types of data. ### Influence of Inputs Another relevant issue that has been studied in literature is the impact that the inputs \(\mathbf{X}\) have on the resulting similarity scores. Specifically when it comes to functional similarity, it is a well-known fact in machine learning research that similarity of outputs is strongly confounded by the accuracy of the models, the number of classes and the class distribution [85, 9, 89, 101, 102]. Similar confounding effects also exist with respect to representational similarity measures, and we will discuss some corresponding results in the following. First, there are strong indications that increasing similarity of inputs also results in increasing similarity of the resulting representations. Cui et al.
[128] specifically point out this effect for the RSA and CKA measures, where they provide examples as to how strongly confounded inputs processed via random neural networks yield higher CKA scores than neural networks that were optimized for the given image recognition task. As a solution to this issue, they propose a regression-based approach to de-confound the inputs. Second, it has been shown that representational similarity measures can be confounded by input features. Dujmovic et al. [129] show this effect for RSA measures. They compare a model trained on standard image data to models trained on images that were modified such that in every image there was a pixel that leaked the class of the image, which allowed these models to learn a shortcut to classification. Depending on where the leaking pixels were placed in the images, the representational similarity between the standard model and the tweaked model varied strongly. Similarly, Jones et al. [5] find that feature co-occurrence in inputs leads to high representational similarity scores by CKA. Different input features may co-occur in the data used to compute representations, but models may use these features to different extents. For example, on a high level, the features "hair" and "eyes" co-occur in images of human faces, but one model may only use the hair for its task, whereas the other model may only use the eyes feature. In their analysis, they show that CKA scores ignore the difference in feature use with an image inversion approach: using data synthetically generated to produce the same representations in one model, similarity to the other model drops drastically as feature co-occurrences are eliminated. Third, there are also indicators that the quantity \(N\) of input instances can influence similarity scores. In that context, Williams et al. [36] study the relation between similarity scores and the ratio \(N/D\) of input quantity over dimensionality of the representations. 
They conduct experiments on variants of the Procrustes measure, from permutation invariance to linear invariance. They find that the invariance class affects the ratio \(N/D\) needed for consistent similarity scores. Invariances that allow for more transformations between representations, such as linear invariance, require a higher ratio than invariances with comparatively few allowed transformations, such as permutation invariance. ## 6 Discussion After presenting both representational and functional similarity measures, as well as results of meta-analyses on these measures, we will now discuss the relationship between these measures, present open research problems, and provide some practical considerations. ### Relationship between Representational Similarity and Functional Similarity As noted in Section 2, representational and functional similarity are two complementary notions, which in combination can allow for nuanced insights into similarity of models. However, to use and interpret the corresponding measures correctly, the relationships between these measures have to be properly contextualized. Functional outputs in classification tasks have a clear and universal semantic, which makes it intuitive to understand when two outputs are similar. In contrast, the semantics of representations may depend on the type of neural network, its activation function, or its objective. As a result, functional similarity measures are generally easier to interpret than representational measures. This allows us to use functional similarity to (partially) validate representational similarity. When functional similarity measures indicate strong dissimilarity of models, there has to be some dissimilarity in the representations of the previous layers, assuming that differences in the final classification layer cannot fully explain the functional difference. The opposite is not true: two functionally similar models may reach their output with dissimilar representations.
At the same time, representational measures may simply not be suited to pick up on representational equivalences that could lead seemingly dissimilar models to the same outputs. Further, if a functional similarity measure indicates strong similarity, this also does not imply that the models are indeed equal in general, from either a representational or a functional perspective. There may be various ways for representations to keep separability of inputs, and if the model was applied to out-of-distribution inputs, the resulting outputs may vary drastically [16]. Finally, strong representational similarity may not imply strong functional similarity either, as functional outputs may be susceptible to noise in representations that would not necessarily have strong effects on representational similarity measures, or because the models may use transformations that are less powerful than the invariances of the measure. In conclusion, a single sound implication can be made: if there is significant functional dissimilarity between two models, a representational similarity measure should also indicate significant dissimilarity between them. This specific relation between functional and representational similarity was already used in the quality evaluation of representational similarity measures by measuring correlations between functional and representational similarity scores, as done by Ding et al. [38]. We recommend this specific approach to evaluate applicability of representational similarity measures, and hope that additional understanding of representational similarity measures will be gained in the future. ### Open Research Problems As can be seen from Section 5, there is a contrast between the numerous proposed measures and the rather small amount of research dedicated to systematically analyzing and comparing the existing measures.
We argue that this exemplifies a significant gap in research, as deeper understanding of the properties of measures and their applicability is necessary to apply them properly. Otherwise, misinterpreted or faulty measurements may mislead future research. This lack of understanding is a particular problem for representational similarity measures as the representational structure is usually difficult to understand, and strongly dependent on factors such as architecture, loss functions, layer depth, and activation functions. Therefore, we consider research on the applicability of representational similarity measures to specific models to be of particular value. As discussed in Section 5, only the works by Ding et al. [38] and Hayne et al. [57] have specifically addressed the issue of applicability of measures for specific language and image recognition models, where overall, Procrustes similarity was recommended and the behavior of CKA was different for differing neural network models. However, even these works only consider a small subset of representational similarity measures, and there are some differences in experimental settings, which impair the generalizability of these findings. As discussed in Section 6.1, we support the approach by the authors to determine applicability of measures, but argue that more research, considering a higher amount of both similarity measures and model types, is necessary. Beyond the problem of determining suitable measures for given model architectures, further understanding of the behavior of the measures is necessary to improve the interpretability of similarity scores. Unless a measure takes on a value that indicates perfect similarity, there is typically no direct implication on whether a given similarity score indicates similarity or dissimilarity of models, as this strongly depends on the context. 
For instance, when analyzing functional similarity, a disagreement of \(m_{\text{Dis}}(\mathbf{O},\mathbf{O^{\prime}})=0.05\) can be considered low in a difficult classification problem with many classes, where no high accuracy is expected, and high in an easy binary classification problem where one expects near-perfect accuracy. While such contextualization is easy to grasp and establish for measures that are as intuitive as disagreement, for more opaque representational similarity measures, there are hardly any known baselines that help to classify a score as indicating similarity or dissimilarity. It is possible to create a baseline for unrelated representations by permutation of the rows of the representation matrices[45, 46], which allows for testing whether a score indicates statistically significant similarity. However, a more general contextualization of similarity scores is still out of reach with this approach. Therefore, we argue that future research should put more emphasis on investigating practical properties of similarity measures, and on establishing baselines that help to interpret similarity scores. There are several aspects that could be considered, for instance: * **Effects of input similarity**: As discussed in Section 5.3, representational similarity tends to be higher if inputs are similar. Are there boundaries of representational similarity in terms of some notion of input similarity? * **Effects of representation perturbation**: It has already been demonstrated that CKA is sensitive to translating a single instance representation. At the same time, it is robust to the removal of principal components. For most other representational similarity measures, such analyses have not been conducted yet. To what extent do changes in a representation affect similarity scores? * **Effects of dimensionality**: As an example, Orthogonal Procrustes scores between random representations change with increasing dimension (see Appendix B). 
What is the range a measure can be expected to take, depending on such contexts? We argue that deeper understanding of such aspects is crucial to properly apply similarity measures and interpret their output scores, which in consequence would strongly benefit the understanding of similarity of neural networks in general. ### Practical Considerations We close this discussion by providing advice for the practical applications of similarity measures. Aside from general recommendations, we will also briefly discuss how functional similarity can be assessed in related learning tasks such as regression or multi-label classification, and what computational challenges exist aside from the computational cost of computing similarity scores. #### 6.3.1 General Recommendations We describe 30 representational and 16 functional similarity measures in this survey. Given this large number of similarity measures, selecting an appropriate measure for a specific context is not trivial, and depends on constraints such as data accessibility, model access, computational resources, and objectives of the analysis. Regarding some of these external factors, Table 1 and Section 4 already provide an overview of basic properties that may affect the applicability of measures in a context at hand. However, there are several more things to consider when trying to measure similarity of neural networks in practice. Most importantly, as discussed in Section 6.1, one should generally consider both representational and functional similarity measures to assess similarity of neural networks in a holistic manner. Due to their generally low cost, high interpretability, and model agnostic semantics, it is also generally advisable to use as many performance-based and prediction-based functional similarity measures as the context, e.g., availability of soft predictions, allows.
Agreement-based measures are to be preferred since they conceptually provide more nuanced information about the similarity of the outputs, but performance-based measures, and the plain performance values of the compared models themselves, should all be utilized as well to provide additional context, as agreement-based measures are confounded by model performance. Gradient-based measures are more expensive and require white-box access to the model, but can provide even more nuanced insights on functional similarity beyond plain agreement. In contrast, the choice of representational similarity measures requires more care. Since the structure of the representation space strongly depends on factors such as the type of the neural network, its loss function, or its activation functions, the chosen representational similarity measure should be compatible with this structure. For instance, nearest-neighbor based measures that apply Euclidean distance to identify nearest neighbors may not be suitable when similarity of representations is modeled in terms of angles. Conversely, angle-based similarity measures such as cosine similarity may not be a good choice when representations are bound to the positive orthant due to activation functions such as ReLU. Thus, if notions of equivalence in the representation space are given, one could consider measures that are invariant to the corresponding class of transformations, or if appropriate distance functions in representation space are known, these could be utilized when applying RSM-based or nearest-neighbor-based measures. More research is required to determine appropriate measures for given neural network designs - as of now, there is only some evidence indicating that orthogonal Procrustes is comparatively well-suited to specific image recognition and language models [38].
Again, the results on representational similarity of a given set of models should be contextualized by considering appropriate baselines such as similarity of random representations, or representations stemming from random inputs, as there are several factors that could potentially confound the observed similarities. Further, sensitivities to noise, as known for the CKA measure [126], have to be considered. Moreover, non-linearity of measures should be taken into account. For example, the widely-used cosine similarity changes non-linearly with respect to the angle between two compared vectors, which may highlight some changes of representation or function, but downplay others. Finally, choosing appropriate inputs may also benefit the understanding of similarity between neural network models. Given that similarity of inputs can confound the resulting similarity scores (see Section 5.3), one may consider using inputs that do not correlate strongly with each other, or using the de-confounding approach for RSM-based measures by Cui et al. [128]. Furthermore, out-of-distribution inputs, if available, might be particularly well-suited to search for differences in the behavior of models that may only behave consistently on the kinds of inputs they were trained on. #### 6.3.2 Functional Similarity for Non-Classification Tasks Although in this survey we focus on functional similarity with respect to classification, many of the functional similarity measures can be used for other tasks. In particular, if a suitable performance measure is given, performance-based measures can be used in any other context. This is also the case for gradient-based and stitching measures if white-box access to the models is given, and, in case of gradient-based measures, adversarial examples can be constructed for the given context. The agreement-based measures that we presented, conversely, are essentially limited to tasks where outputs are assigned discrete labels.
For multi-label classification, the inter-rater agreement measures naturally generalize to this context, while most other measures are not applicable. For regression tasks, some measures can be adapted by comparing the regression outputs with a suitable distance measure. In that context, we would like to point to prior surveys on agreement of continuous outputs [26, 25, 130]. Finally, if the output is structured, e.g., text or image generation, functional similarity becomes less tractable as outputs do not share universally identical semantics as in classification tasks. For example, generated images may have differences that are not perceptible to the human eye, and one would have to reconsider the notion of agreement or similarity in such a context, taking such issues into account. The evaluation of these kinds of models, including comparison of outputs to a human reference, has been studied in prior surveys as well [27, 28]. #### 6.3.3 Computational Challenges Similarity measures are typically used to understand neural networks, e.g., their changes upon modification of the architecture or training data. Because models can be diverse in their representational structure and functional behavior depending on arbitrary factors such as initialization seed [10, 127], multiple comparisons across a population of models are necessary to find generalizable results. Generating this model population represents a significant computational challenge on top of the cost of the comparison itself: training multiple models quickly becomes infeasible when models grow in size. Throughout the paper, we assumed that the models for a comparison are given - here we discuss options to generate a model population. As already mentioned, a naive solution is to retrain models from scratch with different initialization and batch order. This approach usually generates diverse models, but restricts the experiments to relatively small models due to the computational cost.
One option to save computational costs when generating model populations is to fine-tune pre-initialized models with different seeds, as done by McCoy et al. [16] on language models. In the corresponding experiments, this approach yielded a population of models that was still functionally diverse in out-of-distribution tasks. Another alternative to fully training models is sampling models in the weight space neighborhood of a fully trained model, as suggested by Fort et al. [85]. These models are less diverse compared to populations obtained from full retraining, but allow for a much larger population of models as only a single model has to be trained. The diversity can possibly be increased by fine-tuning the sampled models; however, how the result would compare to fully retrained models or to models fine-tuned from a single starting point is, to the best of our knowledge, unknown. Similar to fine-tuning, adversarial weight perturbation can also be used to generate functionally diverse models on specific inputs [91]. In this approach, a base model is fine-tuned with the objective of having a specific prediction for selected inputs. Compared to fine-tuning, this approach may be more efficient if the number of inputs that the model is perturbed for is relatively small. Finally, in some applications it might be possible to utilize resources that were published in prior works. Such resources may include models trained or fine-tuned from different seeds, checkpoints obtained over the course of training, and variations in model architecture [e.g. 131, 16, 132, 133]. Generally, the diversity of compared models should be taken into account when making statements about neural network similarity: similarity estimates may be considered lower bounds on the maximal discrepancy between two models, as the diversity of models typically cannot be fully explored. Hence, the dependency of results on the studied set of models should not be ignored.
At the same time, some models may be (almost) deterministic by design, allowing for reliable results from few model comparisons. ## 7 Conclusion Representational similarity and functional similarity represent two complementing perspectives on analyzing and comparing neural networks. In this work, we provide a comprehensive overview of existing measures for both representational and functional similarity. We provide formal definitions for 46 similarity measures, along with a systematic categorization into different types of measures. In addition, we conduct a meta-analysis of the literature to shed light on some of their salient properties. We specifically identify a lack of research that analyzes properties and applicability of representational similarity measures for specific neural network models in a unified manner. This gap in the literature also affects the quality of the recommendations that one can make about their practical applicability. We argue that additional research is necessary to enable the informed application of similarity measures to better understand similarity of neural network models. We hope our work lays a foundation for our community to engage in more systematic research on the properties, nature and applicability of similarity measures for neural network models. Further, with our categorization and meta-analysis, we believe that our work can assist researchers and practitioners in choosing appropriate measures for their applications at hand, even if we cannot recommend a one-fits-all solution specifically for representational similarity.
2305.08310
Gradient-enhanced physics-informed neural networks based on transfer learning for inverse problems of the variable coefficient differential equations
We propose gradient-enhanced PINNs based on transfer learning (TL-gPINNs) for inverse problems of function coefficient discovery in order to overcome the deficiency of the discrete characterization of the PDE loss in neural networks and improve the accuracy of function feature description, which offers a new angle of view for gPINNs. The TL-gPINN algorithm is applied to infer unknown variable coefficients of various forms (the polynomial, trigonometric function, hyperbolic function and fractional polynomial) and multiple variable coefficients simultaneously, with abundant soliton solutions, for the well-known variable coefficient nonlinear Schr\"{o}dinger equation. Compared with the PINN and gPINN, TL-gPINN yields considerable improvement in accuracy. Moreover, our method leverages the advantage of the transfer learning technique, which can help to mitigate the problem of inefficiency caused by the extra gradient loss terms. Numerical results fully demonstrate the effectiveness of the TL-gPINN method in significant accuracy enhancement, and it also outperforms gPINN in efficiency even when the training data is corrupted with different levels of noise or the hyper-parameters of the neural networks are arbitrarily changed.
Shuning Lin, Yong Chen
2023-05-15T02:36:14Z
http://arxiv.org/abs/2305.08310v1
Gradient-enhanced physics-informed neural networks based on transfer learning for inverse problems of the variable coefficient differential equations ###### Abstract. We propose gradient-enhanced PINNs based on transfer learning (TL-gPINNs) for inverse problems of function coefficient discovery in order to overcome the deficiency of the discrete characterization of the PDE loss in neural networks and improve the accuracy of function feature description, which offers a new angle of view for gPINNs. The TL-gPINN algorithm is applied to infer unknown variable coefficients of various forms (the polynomial, trigonometric function, hyperbolic function and fractional polynomial) and multiple variable coefficients simultaneously, with abundant soliton solutions, for the well-known variable coefficient nonlinear Schrödinger equation. Compared with the PINN and gPINN, TL-gPINN yields considerable improvement in accuracy. Moreover, our method leverages the advantage of the transfer learning technique, which can help to mitigate the problem of inefficiency caused by the extra gradient loss terms. Numerical results fully demonstrate the effectiveness of the TL-gPINN method in significant accuracy enhancement, and it also outperforms gPINN in efficiency even when the training data is corrupted with different levels of noise or the hyper-parameters of the neural networks are arbitrarily changed. Keywords: TL-gPINN; transfer learning; variable coefficients; inverse problem. ## 1. Introduction With the vigorous development of nonlinear science, nonlinear models have been applied in more and more fields [1, 2, 3, 4, 5], such as optical fiber communication, fluid mechanics, biophysics and information science. 
Among them, nonlinear evolution equations, especially those with variable coefficients, are an important class of nonlinear models and have attracted widespread attention [6, 7, 8], since models with variable coefficients are often preferable and better suited to describing real phenomena in many physical and engineering situations. For example, variable-coefficient nonlinear Schrödinger models play an important role in the study of optical fiber systems and Rossby waves [9]. The variable-coefficient higher-order nonlinear Schrödinger equation and the variable-coefficient Hirota equation can be used to describe femtosecond pulse propagation [10] and certain ultrashort optical pulses propagating in a nonlinear inhomogeneous fiber [11], respectively. Besides, many classical methods from the field of integrable systems have been widely used to derive exact solutions of variable-coefficient equations, e.g., the auto-Bäcklund transformation [12, 13], the Riemann-Hilbert method [14], the Hirota bilinear method [15, 16, 17], the Darboux transformation [18, 19, 20], etc. As early as the 1990s, the idea of solving partial differential equations (PDEs) with neural networks was put forward [21]. However, limited by the level of computing technology at that time, it failed to develop further. With the explosive growth of computing resources, there has been renewed interest in neural-network-based numerical methods in recent years. This idea was revived by Raissi, Perdikaris and Karniadakis [22] in 2019, who proposed the physics-informed neural network (PINN) method to solve forward and inverse problems involving nonlinear partial differential equations. Based on the universal approximation theorems for neural networks [23], PINNs can accurately approximate functions with far less data by embedding the underlying physical constraints into the network. 
Due to its high accuracy and efficiency, the PINN method opened up a new approach for numerically solving nonlinear PDEs and immediately set off a research upsurge. On this foundation, numerous variants and extensions targeted at different application scenarios subsequently emerged, such as fPINN [24] for solving fractional PDEs, NN-arbitrary polynomial chaos (NN-aPC) for solving stochastic problems [25], XPINN [26] and FBPINN [27] for multiscale problems, B-PINN [28] for forward and inverse PDE problems with noisy data, and hp-VPINN [29] for rough solutions/input data featuring singularities, steep gradients, and sharp changes. In addition, there have been many attempts to improve the accuracy of the PINN method, such as locally adaptive activation functions with slope recovery [30], residual-based adaptive sampling [31, 32], the gradient-enhanced PINN (gPINN) [33], and PINNs with multi-scale Fourier features [34]. Overall, the PINN framework, a model integrating data and mathematical physics seamlessly, is groundbreaking and has had a significant impact on the field of scientific computing and beyond. Integrable deep learning, a concept first brought forward by Chen, deals with the combination of deep neural networks and integrable systems. In 2020, Li and Chen [35, 36] pioneered the use of the PINN method in the field of integrable systems. Later, the dynamic behavior of the rogue wave solution of the nonlinear Schrödinger equation [37] was reproduced for the first time by a PINN. Abundant localized wave solutions have also been simulated, including the rogue periodic wave solution of the Chen-Lee-Liu equation [38], vector localized waves of the Manakov system [39], data-driven soliton solutions of the nonlocal Hirota equation [40], and so on. 
Then the framework of the PINN method was extended to solve the \((N+1)\)-dimensional initial-boundary value problem with \(2N+1\) hyperplane boundaries.
### Physics-informed neural networks (PINNs)

The first part gives a brief overview of physics-informed neural networks (PINNs), an effective tool in solving forward and inverse problems of partial differential equations. Let's consider the general form of an \((N+1)\)-dimensional partial differential equation with parameters \(\mathbf{\lambda}\) \[f\left(\mathbf{x},t;\frac{\partial u}{\partial x_{1}},\dots,\frac{\partial u}{\partial x_{N}},\frac{\partial u}{\partial t};\frac{\partial^{2}u}{\partial x_{1}^{2}},\dots,\frac{\partial^{2}u}{\partial x_{1}\partial x_{N}},\frac{\partial^{2}u}{\partial x_{1}\partial t};\dots;\mathbf{\lambda}\right)=0,\quad\mathbf{x}=(x_{1},\cdots,x_{N})\in\Omega,\quad t\in[t_{0},t_{1}], \tag{2.1}\] where \(u(\mathbf{x},t)\) is the solution and \(\Omega\) is a subset of \(\mathbb{R}^{N}\). To solve the above PDE with the first kind of boundary condition (Dirichlet boundary condition) \[\begin{cases}u(\mathbf{x},t_{0})=u_{0}(\mathbf{x}),&\forall\mathbf{x}\in\Omega\\ u(\mathbf{x},t)=\mathcal{B}(\mathbf{x},t),&\forall\mathbf{x}\in\partial\Omega,t\in[t_{0},t_{1}],\end{cases} \tag{2.2}\] we construct a neural network of depth \(L\) consisting of one input layer, \(L-1\) hidden layers and one output layer. Suppose that the \(l\)th \((l=0,1,\cdots,L)\) layer has \(N_{l}\) neurons; then the connection between layers is achieved by the following affine transformation \(\mathcal{A}\) and activation function \(\sigma(\cdot)\) \[\mathbf{x}^{l}=\sigma(\mathcal{A}_{l}(\mathbf{x}^{l-1}))=\sigma(\mathbf{w}^{l}\mathbf{x}^{l-1}+\mathbf{b}^{l}), \tag{2.3}\] where \(\mathbf{w}^{l}\in\mathbb{R}^{N_{l}\times N_{l-1}}\) and \(\mathbf{b}^{l}\in\mathbb{R}^{N_{l}}\) denote the weight matrix and bias vector, respectively. 
In particular, the input is \(\mathbf{x}^{0}=(x_{1},\cdots,x_{N},t)\) and the output \(\mathbf{o}(\mathbf{x}^{0},\mathbf{\Theta})\) is given by \[\mathbf{o}(\mathbf{x}^{0},\mathbf{\Theta})=(\mathcal{A}_{L}\circ\sigma\circ\mathcal{A}_{L-1}\circ\cdots\circ\sigma\circ\mathcal{A}_{1})(\mathbf{x}^{0}), \tag{2.4}\] which is used to approximate the solution \(u(\mathbf{x},t)\), and \(\mathbf{\Theta}=\left\{\mathbf{w}^{l},\mathbf{b}^{l}\right\}_{l=1}^{L}\) represents the trainable parameters of the PINN. With the initial-boundary dataset \(\{\mathbf{x}_{u}^{i},t_{u}^{i},u^{i}\}_{i=1}^{N_{u}}\) and the set of collocation points of \(f(\mathbf{x},t)\), denoted by \(\{\mathbf{x}_{f}^{i},t_{f}^{i}\}_{i=1}^{N_{f}}\), the loss function can be defined to measure the difference between the predicted and true values at each iteration \[MSE_{forward}=MSE_{u}+MSE_{f}, \tag{2.5}\] where \[MSE_{u}=\frac{1}{N_{u}}\sum_{i=1}^{N_{u}}|\widehat{u}(\mathbf{x}_{u}^{i},t_{u}^{i})-u^{i}|^{2}, \tag{2.6}\] \[MSE_{f}=\frac{1}{N_{f}}\sum_{i=1}^{N_{f}}|f(\mathbf{x}_{f}^{i},t_{f}^{i})|^{2}. \tag{2.7}\] With regard to the inverse problem, namely the situation where the parameters \(\mathbf{\lambda}\) are unknown, some extra measurements \(\{\mathbf{x}_{in}^{i},t_{in}^{i},u_{in}^{i}\}_{i=1}^{N_{u_{in}}}\) of the internal area should be obtained and utilized to define a new loss function for learning the unknown parameters \(\mathbf{\lambda}\) \[MSE_{inverse}=MSE_{u}+MSE_{f}+MSE_{u_{in}}, \tag{2.8}\] where \[MSE_{u_{in}}=\frac{1}{N_{u_{in}}}\sum_{i=1}^{N_{u_{in}}}|\widehat{u}(\mathbf{x}_{in}^{i},t_{in}^{i})-u_{in}^{i}|^{2}. \tag{2.9}\] Later, a deep learning method, gradient-enhanced physics-informed neural networks (gPINNs) [33], was proposed to improve the accuracy and training efficiency of PINNs by leveraging gradient information of the PDE residual and embedding the gradient into the loss function. 
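As a concrete illustration, the forward map of eq. (2.4) and the data loss \(MSE_u\) of eq. (2.6) can be sketched in a few lines of numpy; the layer sizes, the tanh activation, and the variable names are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

def mlp_forward(x0, weights, biases):
    """Eq. (2.4): alternate affine maps and tanh activations; the final
    layer is a plain affine map with no activation."""
    a = x0
    for W, b in zip(weights[:-1], biases[:-1]):
        a = np.tanh(W @ a + b)
    return weights[-1] @ a + biases[-1]

def mse(pred, target):
    """Mean squared error, as used in MSE_u, MSE_f and MSE_{u_in}."""
    return float(np.mean((pred - target) ** 2))

# toy network approximating u(x, t): input (x, t) -> 4 hidden units -> scalar
rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 2)), rng.standard_normal((1, 4))]
biases = [np.zeros(4), np.zeros(1)]

u_hat = mlp_forward(np.array([0.5, 0.1]), weights, biases)
loss_u = mse(u_hat, np.array([0.0]))  # MSE_u for a single observation
```

In practice the residual loss \(MSE_f\) additionally requires derivatives of the network output, which deep learning frameworks obtain by automatic differentiation.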
The basic idea of gPINNs is to enforce the derivatives of the PDE residual \(f\) to be zero, since \(f(\mathbf{x},t)\) is zero for any \(\mathbf{x}\) and \(t\), i.e., \[\nabla f(\mathbf{x},t)=\left(\frac{\partial f}{\partial x_{1}},\frac{\partial f}{\partial x_{2}},\cdots,\frac{\partial f}{\partial x_{N}},\frac{\partial f}{\partial t}\right)=\mathbf{0},\quad\mathbf{x}\in\Omega,\quad t\in[t_{0},t_{1}]. \tag{2.10}\] Then, based on the set of residual points \(\{\mathbf{x}_{g}^{i},t_{g}^{i}\}_{i=1}^{N_{g}}\) for the derivatives, the loss functions of the forward and inverse problems are defined, respectively, as \[MSE_{forward}^{g}=MSE_{u}+MSE_{f}+MSE_{g}, \tag{2.11}\] \[MSE_{inverse}^{g}=MSE_{u}+MSE_{f}+MSE_{u_{in}}+MSE_{g}, \tag{2.12}\] where \[MSE_{g}=\frac{1}{N_{g}}\left(\sum_{j=1}^{N}\sum_{i=1}^{N_{g}}|\frac{\partial f}{\partial x_{j}}(\mathbf{x}_{g}^{i},t_{g}^{i})|^{2}+\sum_{i=1}^{N_{g}}|\frac{\partial f}{\partial t}(\mathbf{x}_{g}^{i},t_{g}^{i})|^{2}\right). \tag{2.13}\] The set of residual points \(\{\mathbf{x}_{g}^{i},t_{g}^{i}\}_{i=1}^{N_{g}}\) for the derivatives can differ from the set of collocation points \(\{\mathbf{x}_{f}^{i},t_{f}^{i}\}_{i=1}^{N_{f}}\) of \(f(\mathbf{x},t)\), but we usually choose the same set for convenience.

### Gradient-enhanced PINNs based on transfer learning for data-driven variable coefficients

For inverse PDE problems tackled with the aid of PINNs and their variants, existing research has mainly focused on parameter discovery rather than function discovery. Here, we propose gradient-enhanced PINNs based on transfer learning (TL-gPINNs) to infer variable coefficients. \(\bullet\)**Motivation** For the inverse problem of identifying the variable coefficients, we aim to improve the neural network method to enhance prediction precision. 
Consider the \((N+1)\)-dimensional PDE with variable coefficients \(\mathbf{\Lambda}(\mathbf{x},t)\) \[f\left(\mathbf{x},t;\frac{\partial u}{\partial x_{1}},\dots,\frac{\partial u}{\partial x_{N}},\frac{\partial u}{\partial t};\frac{\partial^{2}u}{\partial x_{1}^{2}},\dots,\frac{\partial^{2}u}{\partial x_{1}\partial x_{N}},\frac{\partial^{2}u}{\partial x_{1}\partial t};\dots;\mathbf{\Lambda}\right)=0,\quad\mathbf{x}=(x_{1},\cdots,x_{N})\in\Omega,\quad t\in[t_{0},t_{1}]. \tag{2.14}\] Obviously, neural networks are constrained by the PDE loss \(MSE_{f}=\frac{1}{N_{f}}\sum_{i=1}^{N_{f}}|f(\mathbf{x}_{f}^{i},t_{f}^{i})|^{2}\) to satisfy the equation above. In other words, PINNs enforce the PDE residual \(f\) to be \(0\) so that the data-driven variable coefficients \(\mathbf{\Lambda}(\mathbf{x},t)\) stay close to the exact ones. Since this constraint is characterized by the discrete points \(\{\mathbf{x}_{f}^{i},t_{f}^{i}\}_{i=1}^{N_{f}}\), it can only ensure that the values of the obtained \(\mathbf{\Lambda}(\mathbf{x},t)\) are close to the true ones at these selected points. However, even if the predicted values of the variable coefficients \(\mathbf{\Lambda}(\mathbf{x},t)\) equal the true values at these discrete points, the values of the identified \(\mathbf{\Lambda}(\mathbf{x},t)\) outside the given point set \(\{\mathbf{x}_{f}^{i},t_{f}^{i}\}_{i=1}^{N_{f}}\) have little direct effect on \(MSE_{f}\), and thus the data-driven variable coefficients may depart considerably from the exact ones. Consequently, it is biased and inaccurate to characterize a function (i.e., the unknown variable coefficient here) solely by its values at discrete points. It is therefore necessary to introduce more rigorous constraints that take the partial derivatives of the variable coefficients into account, i.e., to enhance the gradients. 
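To make the gradient-enhancement idea concrete, here is a minimal numpy sketch of the residual term \(MSE_f\) and the gradient term \(MSE_g\) of eq. (2.13) for a scalar residual \(f(x,t)\); the central-difference approximation stands in for the automatic differentiation used in practice, and the function names are illustrative.

```python
import numpy as np

def gradient_enhanced_terms(f, pts, eps=1e-5):
    """Compute MSE_f and the gradient term MSE_g for a residual f(x, t),
    with the partial derivatives df/dx and df/dt approximated by central
    differences at the residual points `pts` (shape (N_g, 2))."""
    x, t = pts[:, 0], pts[:, 1]
    mse_f = np.mean(f(x, t) ** 2)
    fx = (f(x + eps, t) - f(x - eps, t)) / (2 * eps)  # df/dx
    ft = (f(x, t + eps) - f(x, t - eps)) / (2 * eps)  # df/dt
    mse_g = np.mean(fx ** 2) + np.mean(ft ** 2)
    return float(mse_f), float(mse_g)

pts = np.random.default_rng(1).uniform(size=(16, 2))
# residual f = x + t: both partial derivatives are 1, so mse_g is close to 2
mse_f, mse_g = gradient_enhanced_terms(lambda x, t: x + t, pts)
```

A residual that is zero everywhere makes both terms vanish, which is exactly the condition the gPINN loss pushes toward.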
Since partial derivative values of the variable coefficients are unavailable at the collocation points, an approach with a similar effect is to enforce the partial derivatives to satisfy the corresponding equations. Then the residuals of the equations satisfied not only by the variable coefficients but also by their partial derivatives should be taken into account in the design of the loss functions. Specifically, the neural networks enforce the residuals of the equations satisfied by both the variable coefficients and their partial derivatives to be \(0\), requiring their values to approximate the true ones at the discrete points. This happens to coincide with gradient-enhanced PINNs, whose idea stems from the fact that derivatives of a zero-valued function (i.e., the PDE residual \(f\)) should also be \(0\). Note that the equations satisfied by the partial derivatives of the variable coefficients \((\frac{\partial\mathbf{\Lambda}}{\partial t},\frac{\partial\mathbf{\Lambda}}{\partial x_{1}},\frac{\partial\mathbf{\Lambda}}{\partial x_{2}},\cdots)\) can be derived by direct differentiation of the equation satisfied by the variable coefficients \(\mathbf{\Lambda}(\mathbf{x},t)\) themselves. Although our perspective and the aspects of concern differ from those of gPINNs, the implementation method is the same. Furthermore, we improve the original gPINNs by means of transfer learning, since the introduction of additional constraints on the gradients is likely to reduce efficiency. \(\bullet\)**Procedure** By fully leveraging the combined advantages of gradient enhancement and transfer learning, TL-gPINN is proposed. 
The main steps are as follows: Firstly, the traditional PINNs are constructed to obtain the data-driven variable coefficients after defining the following loss function \[MSE_{inverse}=MSE_{u}+MSE_{f}+MSE_{u_{in}}+MSE_{\mathbf{\Lambda}}, \tag{2.15}\] where \[MSE_{u}=\frac{1}{N_{u}}\sum_{i=1}^{N_{u}}|\widehat{u}(\mathbf{x}_{u}^{i},t_{ u}^{i})-u^{i}|^{2}, \tag{2.16}\] \[MSE_{f}=\frac{1}{N_{f}}\sum_{i=1}^{N_{f}}|f(\mathbf{x}_{f}^{i},t_{f}^{i})|^{2}, \tag{2.17}\] \[MSE_{u_{in}}=\frac{1}{N_{u_{in}}}\sum_{i=1}^{N_{u_{in}}}|\widehat{u}(\mathbf{ x}_{in}^{i},t_{in}^{i})-u_{in}^{i}|^{2}, \tag{2.18}\] \[MSE_{\mathbf{\Lambda}}=\frac{1}{N_{\mathbf{\Lambda}}}\sum_{i=1}^{N_{\mathbf{ \Lambda}}}|\widehat{\mathbf{\Lambda}}(\mathbf{x}_{\mathbf{\Lambda}}^{i},t_{ \mathbf{\Lambda}}^{i})-\mathbf{\Lambda}^{i}|^{2}, \tag{2.19}\] and the set \(\{\mathbf{x}_{\mathbf{\Lambda}}^{i},t_{\mathbf{\Lambda}}^{i},\mathbf{\Lambda}^{ i}\}_{i=1}^{N_{\mathbf{\Lambda}}}\) denotes the boundary training data of the variable coefficients \(\mathbf{\Lambda}(\mathbf{x},t)\). In particular, the networks of the solution \(u(\mathbf{x},t)\) and the variable coefficients \(\mathbf{\Lambda}(\mathbf{x},t)\) are kept separate, named the trunk and branch networks respectively, in order to eliminate mutual influence. Moreover, the branch network for the variable coefficients \(\mathbf{\Lambda}(\mathbf{x},t)\) is usually narrower and shallower than the trunk network for the solution \(u(\mathbf{x},t)\), since the functional expression of a variable coefficient is generally simpler. Then, at the end of the iteration process, the weight matrices and bias vectors of PINNs are saved to initialize the gradient-enhanced PINNs, exploiting the advantage of transfer learning.
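The composition of the loss (2.15)–(2.19) can be sketched in plain Python (a minimal sketch of ours with illustrative argument names, not the paper's TensorFlow code; the PDE residuals are assumed to be precomputed):

```python
def mse(pred, target):
    # mean squared error between two equal-length sequences
    return sum((p - q) ** 2 for p, q in zip(pred, target)) / len(pred)

def pinn_inverse_loss(u_b_pred, u_b, f_res, u_in_pred, u_in, lam_b_pred, lam_b):
    """Total inverse-problem loss (2.15): initial-boundary data of u,
    PDE residual, interior data of u, and boundary data of the coefficients."""
    mse_u = mse(u_b_pred, u_b)                          # (2.16)
    mse_f = sum(r ** 2 for r in f_res) / len(f_res)     # (2.17): residual target is 0
    mse_u_in = mse(u_in_pred, u_in)                     # (2.18)
    mse_lam = mse(lam_b_pred, lam_b)                    # (2.19)
    return mse_u + mse_f + mse_u_in + mse_lam
```

For example, `pinn_inverse_loss([1.0], [0.0], [2.0], [0.5], [0.0], [3.0], [1.0])` evaluates to \(1+4+0.25+4=9.25\).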
The mean squared error loss function of gPINNs is given by \[MSE_{inverse}^{g}=MSE_{u}+MSE_{f}+MSE_{u_{in}}+MSE_{\mathbf{\Lambda}}+MSE_{g}, \tag{2.20}\] where \[MSE_{g}=\frac{1}{N_{g}}\left(\sum_{j=1}^{N}\sum_{i=1}^{N_{g}}|\frac{\partial f }{\partial x_{j}}(\mathbf{x}_{g}^{i},t_{g}^{i})|^{2}+\sum_{i=1}^{N_{g}}|\frac{ \partial f}{\partial t}(\mathbf{x}_{g}^{i},t_{g}^{i})|^{2}\right). \tag{2.21}\] Based on the MSE criteria, the parameters of gPINNs are optimized and we finally obtain the data-driven variable coefficients \(\mathbf{\Lambda}(\mathbf{x},t)\). To better understand our new angle of view on gPINNs, we take the inverse problem of identifying the time-varying variable coefficient \(\Lambda(t)\) as an example to illustrate the effect of gradient information on the optimization of the loss (the PDE residual) and the inference of the time-varying variable coefficient. The corresponding sketch map is displayed in Fig. 1. If only the PDE residual term \(MSE_{f}\) is considered, the value of the predicted variable coefficient is driven toward that of the exact one merely at the selected points. With the incorporation of the gradient term \(MSE_{g}\), which reflects the equation information satisfied by the derivative of the variable coefficient, the derivative \(\frac{\partial\hat{\Lambda}(t)}{\partial t}\) of the predicted variable coefficient \(\hat{\Lambda}(t)\) is additionally constrained to approximate \(\frac{\partial\Lambda(t)}{\partial t}\) at the given points. The combined effect is remarkable: it drives the predicted \(\hat{\Lambda}(t)\) toward the exact \(\Lambda(t)\). Our method uses a two-step optimization strategy that gradually increases the difficulty, yielding better results than direct one-step optimization, i.e., the original gPINN method. The workflow of the TL-gPINN method is sketched in Fig. 2.
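In one spatial dimension, the gradient-enhanced term (2.21) can be sketched as follows (a toy illustration of ours, with central finite differences standing in for the automatic differentiation a real gPINN would use; the residual callable `f` is an assumption):

```python
def grad_enhanced_mse(f, pts, h=1e-5):
    """MSE_g of (2.21) with N = 1 spatial dimension: the mean over the
    collocation points of |df/dx|^2 + |df/dt|^2, where the partial
    derivatives of the residual f(x, t) are approximated here by central
    finite differences."""
    total = 0.0
    for x, t in pts:
        f_x = (f(x + h, t) - f(x - h, t)) / (2 * h)
        f_t = (f(x, t + h) - f(x, t - h)) / (2 * h)
        total += f_x ** 2 + f_t ** 2
    return total / len(pts)
```

A residual that is identically zero contributes nothing, while e.g. \(f(x,t)=x+2t\) has \(f_{x}=1\), \(f_{t}=2\) and hence \(MSE_{g}=5\) at every point.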
Partial derivatives of higher orders can, of course, be considered by adding the loss of \(\frac{\partial^{2}f}{\partial x_{i}\partial x_{j}},\frac{\partial^{2}f}{ \partial x_{i}\partial t}\) and so on into the term \(MSE_{g}\). However, excessive constraints may lead to high training costs and low efficiency, which is why the transfer learning technique is introduced here. The difficulty of the inverse problem lies in the scarcity of information about the variable coefficients; meanwhile, any properties or physical laws they obey, which could be exploited for higher accuracy, remain to be discovered. Therefore, we should make full use of the existing information. For example, the equations satisfied by the partial derivatives of the variable coefficients (\(\frac{\partial\mathbf{\Lambda}}{\partial t},\frac{\partial\mathbf{\Lambda}}{\partial x_{1}},\frac{\partial\mathbf{\Lambda}}{\partial x_{2}},\cdots\)) can be derived by differentiating the equation satisfied by the variable coefficients \(\mathbf{\Lambda}(\mathbf{x},t)\) themselves. Thus, the gradient-enhanced PINN can serve as an effective tool to improve the accuracy of the variable coefficients by fully utilizing the information of the equations satisfied by their derivatives. Further, TL-gPINNs improve the accuracy and training efficiency of the original gPINNs. All the codes in this article are based on Python 3.7 and Tensorflow 1.15, and the presented numerical experiments are run on a DELL Precision 7920 Tower computer with 2.10 GHz 8-core Xeon Silver 4110 processor, 64 GB memory and 11 GB NVIDIA GeForce GTX 1080 Ti video card. Figure 1. (Color online) The effect of gPINN compared to PINN on the optimization of loss (PDE residual) and inference of time-varying variable coefficient \(\Lambda(t)\). ## 3.
Applications in the variable coefficient nonlinear Schrödinger equation The nonlinear Schrödinger (NLS) equation, one of the most classical equations in integrable systems, is commonly used in the field of optical fiber communication to describe the propagation of optical solitons [46, 47]. When it comes to inhomogeneous optical fibers, the variable coefficient Schrödinger equation is considered more accurate and realistic than the standard one, since variable coefficients can reflect the inhomogeneities of media and nonuniformities of boundaries [48]. Research on variable coefficient NLS-type models has achieved very fruitful results [49, 50, 51, 52, 53], such as the groundbreaking work of Serkin et al. [54]. Meanwhile, solutions of the variable coefficient NLS-type equations have also been obtained by powerful means, such as the Hirota bilinear method [55, 56], the Darboux transformation [57], the Riemann-Hilbert approach [58] and so on. In this part, we discuss the mathematical model which can be used to describe the optical fiber system or the Rossby waves [9], i.e., the variable coefficient nonlinear Schrödinger (vcNLS) equation \[\mathrm{i}A_{t}+\alpha(t)A_{xx}+\beta(t)A+\gamma(t)|A|^{2}A=0, \tag{3.1}\] where the variable coefficient \(\alpha(t)\) denotes the dispersion effect and \(\gamma(t)\) the Kerr nonlinearity. When inhomogeneities are considered, the varying dispersion and Kerr nonlinearity are of practical importance in the optical-fiber transmission system. Under the assumption that the amplitude \(A(x,t)\) has the transformation \[A(x,t)=\mathrm{e}^{\mathrm{i}\int\beta(t)dt}\frac{h(x,t)}{g(x,t)}, \tag{3.2}\] Figure 2. (Color online) Schematic diagram of the TL-gPINN algorithm.
the one-soliton solution can be derived by the Hirota method [56] \[A(x,t)=\mathrm{e}^{\mathrm{i}\int\beta(t)dt}\frac{\mathrm{e}^{\theta}}{1+\frac{ \gamma(t)}{2\alpha(t)(k+k^{*})^{2}}\mathrm{e}^{\theta+\theta^{*}}}, \tag{3.3}\] where \[\phi(t)=\mathrm{i}\int\alpha(t)k^{2}dt, \tag{3.4}\] \[\theta=kx+\phi(t)+\eta, \tag{3.5}\] \[\theta^{*}=k^{*}x+\phi^{*}(t)+\eta. \tag{3.6}\] Here, \(k\) is a complex constant and \(\eta\) is a real constant. The PINN, gPINN and TL-gPINN methods are applied to infer the unknown time-varying variable coefficients of the vcNLS equation. To avoid repetition, the hyper-parameters of the neural networks used for each case are listed in Table 1, and the other parameters are selected as \(k=1+\mathrm{i},\eta=0\). ### Data-driven discovery of single variable coefficient The general aim is to utilize TL-gPINNs to solve the inverse problem for the discovery of the function coefficient \(\gamma(t)\), and to systematically compare the performance of the three methods (PINNs, gPINNs and TL-gPINNs) under the circumstances that the other two variable coefficients \(\alpha(t)\) and \(\beta(t)\) are already known. Several types of time-varying variable coefficients \(\gamma(t)\) in common use are considered, namely linear, quadratic, sine, hyperbolic tangent and fractional functions: \(\gamma(t)=t,t^{2},\sin(t),\tanh(t),\frac{1}{1+t^{2}}\), respectively. #### 3.1.1. Data-driven discovery of linear variable coefficient \(\gamma(t)\) In this part, we take \(\alpha(t)=\frac{t}{2},\beta(t)=\frac{t}{5}\) and choose \([x_{0},x_{1}]=[-4,4],[t_{0},t_{1}]=[-4,4]\) as the training region. In consideration of the complexity of the structure of the complex-valued solution \(A(x,t)\), we decompose it into the real part \(u(x,t)\) and imaginary part \(v(x,t)\), i.e., \(A(x,t)=u(x,t)+\mathrm{i}v(x,t)\).
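The one-soliton formula (3.3)–(3.6) is straightforward to evaluate numerically. Below is a minimal sketch of ours (not the paper's code), in which the antiderivatives \(\int\beta\,dt\) and \(\phi(t)\) are supplied in closed form by the caller; as a check, for \(\alpha=t/2\), \(\beta=t/5\), \(\gamma=t\), \(k=1+\mathrm{i}\), \(\eta=0\) one has \(\phi(t)=-t^{2}/2\), and the formula reproduces the closed form quoted in Sec. 3.1.1.

```python
import cmath

def one_soliton(x, t, alpha, gamma, int_beta, phi, k=1 + 1j, eta=0.0):
    """Evaluate the general one-soliton formula (3.3)-(3.6); `int_beta` and
    `phi` are the closed-form antiderivatives of beta(t) and i*alpha(t)*k**2."""
    theta = k * x + phi(t) + eta                                     # (3.5)
    theta_s = k.conjugate() * x + complex(phi(t)).conjugate() + eta  # (3.6)
    coef = gamma(t) / (2 * alpha(t) * (k + k.conjugate()) ** 2)
    return cmath.exp(1j * int_beta(t)) * cmath.exp(theta) / (1 + coef * cmath.exp(theta + theta_s))

# Sec. 3.1.1 case: alpha=t/2, beta=t/5, gamma=t  =>  phi(t) = -t**2/2
A_gen = one_soliton(0.3, 1.2,
                    alpha=lambda t: t / 2, gamma=lambda t: t,
                    int_beta=lambda t: t ** 2 / 10, phi=lambda t: -t ** 2 / 2)
# The same point from the closed form of Sec. 3.1.1
x, t = 0.3, 1.2
A_dir = (cmath.exp(1j * t ** 2 / 10) * cmath.exp((1 + 1j) * x - t ** 2 / 2)
         / (1 + cmath.exp(2 * x - t ** 2) / 4))
assert abs(A_gen - A_dir) < 1e-9
```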
After substituting it into the governing equation \[f:=\mathrm{i}A_{t}+\alpha(t)A_{xx}+\beta(t)A+\gamma(t)|A|^{2}A=0, \tag{3.7}\] we have \[f_{u}:=-v_{t}+\alpha(t)u_{xx}+\beta(t)u+\gamma(t)(u^{2}+v^{2})u, \tag{3.8}\] \[f_{v}:=u_{t}+\alpha(t)v_{xx}+\beta(t)v+\gamma(t)(u^{2}+v^{2})v. \tag{3.9}\] Define the loss function of PINNs for the inverse problem as follows: \[MSE_{inverse}=MSE_{A}+MSE_{f}+MSE_{A_{in}}+MSE_{\gamma}, \tag{3.10}\] where \[MSE_{A}=MSE_{u}+MSE_{v},\quad MSE_{f}=MSE_{f_{u}}+MSE_{f_{v}},\quad MSE_{A_ {in}}=MSE_{u_{in}}+MSE_{v_{in}}, \tag{3.11}\] \[MSE_{u}=\frac{1}{N_{A}}\sum_{i=1}^{N_{A}}|\widehat{u}(x_{A}^{i},t_{A}^{i})-u^ {i}|^{2}, \tag{3.12}\] \[MSE_{v}=\frac{1}{N_{A}}\sum_{i=1}^{N_{A}}|\widehat{v}(x_{A}^{i},t_{A}^{i})-v^ {i}|^{2}, \tag{3.13}\] \[MSE_{f_{u}}=\frac{1}{N_{f}}\sum_{i=1}^{N_{f}}|f_{u}(x_{f}^{i},t_{f}^{i})|^{2}, \tag{3.14}\] \begin{table} \begin{tabular}{c|c c c c c c} \hline \multirow{2}{*}{Section} & \multicolumn{4}{c|}{Variable coefficients} & Trunk network & Branch network \\ \cline{2-7} & Fixed & Inferred & Depth & Width & Depth & Width \\ \hline 3.1.1 & \(\alpha(t)=\frac{t}{2},\beta(t)=\frac{t}{5}\) & \(\gamma(t)=t\) & 8 & 40 & 4 & 30 \\ 3.1.2 & \(\alpha(t)=\frac{t}{2},\beta(t)=\frac{t}{5}\) & \(\gamma(t)=t^{2}\) & 8 & 40 & 4 & 30 \\ 3.1.3 & \(\alpha(t)=\sin(t),\beta(t)=\frac{t}{5}\) & \(\gamma(t)=\sin(t)\) & 8 & 40 & 4 & 30 \\ 3.1.4 & \(\alpha(t)=\tanh(t),\beta(t)=\frac{t}{5}\) & \(\gamma(t)=\tanh(t)\) & 8 & 40 & 4 & 30 \\ 3.1.5 & \(\alpha(t)=\frac{1}{2(1+t^{2})},\beta(t)=\frac{t}{5}\) & \(\gamma(t)=\frac{1}{1+t^{2}}\) & 8 & 40 & 4 & 30 \\ 3.2.1 & \(\alpha(t)=\sin(t)\) & \(\beta(t)=\frac{t}{5},\gamma(t)=\sin(t)\) & 8 & 40 & 4 & 30 \\ 3.2.2(1) & - & \(\alpha(t)=\frac{t}{2},\beta(t)=\frac{t}{5},\gamma(t)=t\) & 8 & 40 & 2 & 10 \\ 3.2.2(2) & - & \(\alpha(t)=\frac{1}{2(1+t^{2})},\beta(t)=\frac{t}{5},\gamma(t)=\frac{1}{1+t^{2}}\) & 8 & 40 & 4 & 10 \\ \hline \end{tabular} \end{table} Table 1. 
Hyper-parameters used for each case \[MSE_{f_{v}}=\frac{1}{N_{f}}\sum_{i=1}^{N_{f}}|f_{v}(x_{f}^{i},t_{f}^{i})|^{2}, \tag{3.15}\] \[MSE_{u_{in}}=\frac{1}{N_{A_{in}}}\sum_{i=1}^{N_{A_{in}}}|\widehat{u}(x_{in}^{i}, t_{in}^{i})-u_{in}^{i}|^{2}, \tag{3.16}\] \[MSE_{v_{in}}=\frac{1}{N_{A_{in}}}\sum_{i=1}^{N_{A_{in}}}|\widehat{v}(x_{in}^{i },t_{in}^{i})-v_{in}^{i}|^{2}, \tag{3.17}\] \[MSE_{\gamma}=\frac{1}{N_{\gamma}}\sum_{i=1}^{N_{\gamma}}|\widehat{\gamma}(t_{ \gamma}^{i})-\gamma^{i}|^{2}=|\widehat{\gamma}(t_{0})-\gamma^{0}|^{2}. \tag{3.18}\] Here, \(\{x_{A}^{i},t_{A}^{i},u^{i},v^{i}\}_{i=1}^{N_{A}}\) and \(\{x_{in}^{i},t_{in}^{i},u_{in}^{i},v_{in}^{i}\}_{i=1}^{N_{A_{in}}}\) denote the training datasets consisting of initial-boundary points and internal points, respectively. Correspondingly, \(\{\widehat{u}(x_{A}^{i},t_{A}^{i}),\widehat{v}(x_{A}^{i},t_{A}^{i})\}_{i=1}^{ N_{A}}\) and \(\{\widehat{u}(x_{in}^{i},t_{in}^{i}),\widehat{v}(x_{in}^{i},t_{in}^{i})\}_{i=1} ^{N_{A_{in}}}\) are the predicted values. In order to calculate \(\{f_{u}(x_{f}^{i},t_{f}^{i}),f_{v}(x_{f}^{i},t_{f}^{i})\}_{i=1}^{N_{f}}\), the derivatives of the networks \(u\) and \(v\) with respect to time \(t\) and space \(x\) are obtained by automatic differentiation [59]. Considering that the unknown time-varying variable coefficient \(\gamma(t)\) is independent of space \(x\) and that the objective \(\gamma(t)\) takes the form of a linear function, we take \(N_{\gamma}=1\) and choose \(\{t_{0},\gamma^{0}\}\) as the training data.
Similarly, after additionally embedding the term of gradient-enhanced information into the loss function of PINNs, the mean squared error function of gPINNs is given by \[MSE_{inverse}^{g}=MSE_{A}+MSE_{f}+MSE_{A_{in}}+MSE_{\gamma}+MSE_{g}, \tag{3.19}\] where \[MSE_{g}=MSE_{g_{u}}+MSE_{g_{v}}, \tag{3.20}\] \[MSE_{g_{u}}=\frac{1}{N_{g}}\left(\sum_{i=1}^{N_{g}}|\frac{\partial f_{u}}{ \partial t}(x_{g}^{i},t_{g}^{i})|^{2}\right), \tag{3.21}\] \[MSE_{g_{v}}=\frac{1}{N_{g}}\left(\sum_{i=1}^{N_{g}}|\frac{\partial f_{v}}{ \partial t}(x_{g}^{i},t_{g}^{i})|^{2}\right), \tag{3.22}\] \[\frac{\partial f_{u}}{\partial t}=-v_{tt}+\alpha(t)_{t}u_{xx}+\alpha(t)u_{xxt }+\beta(t)_{t}u+\beta(t)u_{t}+\gamma(t)_{t}(u^{2}+v^{2})u+\gamma(t)(2uu_{t}+2 vv_{t})u+\gamma(t)(u^{2}+v^{2})u_{t}, \tag{3.23}\] \[\frac{\partial f_{v}}{\partial t}=u_{tt}+\alpha(t)_{t}v_{xx}+\alpha(t)v_{xxt }+\beta(t)_{t}v+\beta(t)v_{t}+\gamma(t)_{t}(u^{2}+v^{2})v+\gamma(t)(2uu_{t}+2 vv_{t})v+\gamma(t)(u^{2}+v^{2})v_{t}. \tag{3.24}\] For the time-varying variable coefficient \(\gamma(t)\), the gradient-enhanced effect of \(t\) is solely considered here by adding mean squared errors involving the partial derivatives of the governing functions with respect to time \((\frac{\partial f_{u}}{\partial t}\) and \(\frac{\partial f_{v}}{\partial t})\). Besides, the functions \(f_{u}\) and \(f_{v}\) only reflect the value of variable coefficient itself while \(\frac{\partial f_{u}}{\partial t}\) and \(\frac{\partial f_{v}}{\partial t}\) embody the extra derivative information of \(\gamma(t)\), i.e., the information of equations satisfied by \(\gamma_{t}\). With the aid of the MATLAB software, the spatial region \([-4,4]\) and the temporal region \([-4,4]\) are divided into \(N_{x}=513\) and \(N_{t}=201\) discrete equidistance points, respectively. 
Thus, the reference one-soliton solution \[A(x,t)=\frac{\mathrm{e}^{\frac{\mathrm{i}}{10}t^{2}}\mathrm{e}^{(1+\mathrm{i})x-\frac{t^{2}}{2}}}{1+\frac{\mathrm{e}^{2x-t^{2}}}{4}}, \tag{3.25}\] is discretized into \(513\times 201\) data points in the given spatiotemporal domain. Then \(N_{A}=200\) points are randomly selected from the initial-boundary dataset and \(N_{A_{in}}=2000\) points from the interior point set. By means of the Latin hypercube sampling method [60], \(N_{f}=N_{g}=40000\) collocation points are also sampled. The neural network of the complex-valued solution \(A(x,t)\) (called the trunk network) consists of one input layer, 7 hidden layers with 40 neurons per hidden layer and one output layer. The output layer has 2 neurons to learn the real part \(u(x,t)\) and imaginary part \(v(x,t)\). Given that the functional expression of the variable coefficient is simpler than that of the solution, we construct the branch network, consisting of one input layer, 3 hidden layers with 30 neurons each, as well as one output layer with one neuron, to obtain the data-driven variable coefficient \(\gamma(t)\). The linear activation function is used in the branch network, while the \(tanh\) function is selected as the activation function in the trunk network. Weights of the neural networks are initialized with Xavier initialization [61]. In addition, we apply the L-BFGS algorithm [62] to minimize the loss function by optimizing the parameters of the neural networks.
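As a sanity check (ours, not from the paper's code), one can verify numerically that this reference profile annihilates the vcNLS residual, with \(\mathrm{Re}(f)\) and \(\mathrm{Im}(f)\) playing the roles of \(f_{u}\) and \(f_{v}\) in (3.8)–(3.9); the derivatives are approximated by central finite differences:

```python
import cmath

def A(x, t):
    # One-soliton profile of Sec. 3.1.1 (alpha=t/2, beta=t/5, gamma=t, k=1+i, eta=0)
    theta = (1 + 1j) * x - t ** 2 / 2
    return cmath.exp(1j * t ** 2 / 10) * cmath.exp(theta) / (1 + cmath.exp(2 * x - t ** 2) / 4)

def residual(x, t, h=1e-4):
    """f = i*A_t + alpha(t)*A_xx + beta(t)*A + gamma(t)*|A|^2*A evaluated with
    central finite differences; returns (Re f, Im f) = (f_u, f_v)."""
    A_t = (A(x, t + h) - A(x, t - h)) / (2 * h)
    A_xx = (A(x + h, t) - 2 * A(x, t) + A(x - h, t)) / h ** 2
    f = 1j * A_t + (t / 2) * A_xx + (t / 5) * A(x, t) + t * abs(A(x, t)) ** 2 * A(x, t)
    return f.real, f.imag

for x, t in [(0.5, 1.0), (-1.0, 2.0)]:
    f_u, f_v = residual(x, t)
    assert abs(f_u) < 1e-4 and abs(f_v) < 1e-4   # residual vanishes up to FD error
```

The same check can be repeated for the other coefficient families considered below.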
To evaluate the performance of the three methods (PINNs, gPINNs, TL-gPINNs), we calculate the mean absolute error (\(MAE\)) and relative \(\mathbb{L}_{2}\) error (\(RE\)) of the variable coefficient \(\gamma(t)\) \[MAE_{\gamma}=\frac{1}{N_{t}^{\prime}}\sum_{k=0}^{N_{t}^{\prime}-1}|\widehat{ \gamma}(t_{0}+k\frac{t_{1}-t_{0}}{N_{t}^{\prime}-1})-\gamma^{k}|, \tag{3.26}\] \[RE_{\gamma}=\frac{\sqrt{\sum_{k=0}^{N_{t}^{\prime}-1}|\widehat{\gamma}(t_{0 }+k\frac{t_{1}-t_{0}}{N_{t}^{\prime}-1})-\gamma^{k}|^{2}}}{\sqrt{\sum_{k=0}^{ N_{t}^{\prime}-1}|\gamma^{k}|^{2}}}, \tag{3.27}\] after choosing the corresponding parameter as \(N_{t}^{\prime}=500\). Firstly, the original PINNs are applied. Then, we save the weight matrices and bias vectors of PINNs at the end of the iteration process to initialize the corresponding parameters of gPINNs. After 1862.3727 seconds, the relative \(\mathbb{L}_{2}\) errors of the real part \(u\), the imaginary part \(v\) and the modulus \(|A|\) are 1.331004e-03, 1.407320e-03 and 9.441619e-04, respectively. Besides, the mean absolute error and relative \(\mathbb{L}_{2}\) error of the variable coefficient \(\gamma(t)\) are 1.915842e-05 and 9.511436e-06. Obviously, the training of gPINNs builds on the training results of PINNs instead of starting from scratch, which helps to accelerate convergence to the approximate optimal solution and variable coefficient. Ultimately, the unknown variable coefficient \(\gamma(t)\) is learned simultaneously with the one-soliton solution \(A(x,t)\) by TL-gPINNs. Density diagrams of the data-driven one-soliton solution, comparison between the predicted solution and the exact solution as well as the evolution plots are shown in Fig. 3, which implies that there is little difference between the exact solution and the predicted one. Fig.
4 (a) is a double-coordinate plot, where the solid blue line and the dashed red line, corresponding to the left axis, represent the exact and predicted variable coefficient \(\gamma(t)\) respectively, while the curve of the absolute error is drawn with a black dotted line corresponding to the right axis. The curve of the absolute error exhibits a characteristic of linear variation, and the error is close to \(0\) at the initial moment, since the data of \(\gamma(t)\) at \(t_{0}=-4\) is provided and the linear activation function is selected in the branch network. The predicted 3D plot of the soliton solution with a parabolic shape for the vcNLS equation is shown in Fig. 4 (b). From the above figures, it can be intuitively seen that the experimental results of \(\gamma(t)\) are in good agreement with the theoretical ones. For an intuitive comparison of the prediction accuracy of the different methods, the error reduction rate (\(ERR\)) can be obtained from the mean absolute errors and relative \(\mathbb{L}_{2}\) errors achieved by PINNs and gPINNs (TL-gPINNs) \[ERR_{1}=\frac{MAE_{\gamma}^{PINNs}-MAE_{\gamma}^{new}}{MAE_{\gamma}^{PINNs}}, \tag{3.28}\] \[ERR_{2}=\frac{RE_{\gamma}^{PINNs}-RE_{\gamma}^{new}}{RE_{\gamma}^{PINNs}}, \tag{3.29}\] where 'new' can be replaced by 'gPINNs' and 'TL-gPINNs'. Finally, the contrast in respect of efficiency and accuracy is presented in Table 2, including the elapsed time, mean absolute error, relative \(\mathbb{L}_{2}\) error and error reduction rates. #### 3.1.2 Data-driven discovery of quadratic variable coefficient \(\gamma(t)\). Figure 3: (Color online) One-soliton solution \(A(x,t)\) of the vcNLS equation by TL-gPINNs: (a) The density diagrams and comparison between the predicted solutions and exact solutions at the three temporal snapshots of \(|A(x,t)|\); (b) The error density diagram of \(|A(x,t)|\).
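The error metrics (3.26)–(3.29) are easy to state in code (a short sketch of ours operating on sampled arrays):

```python
import math

def mae(pred, exact):
    # mean absolute error, as in (3.26)
    return sum(abs(p - q) for p, q in zip(pred, exact)) / len(pred)

def rel_l2(pred, exact):
    # relative L2 error, as in (3.27)
    num = math.sqrt(sum((p - q) ** 2 for p, q in zip(pred, exact)))
    return num / math.sqrt(sum(q ** 2 for q in exact))

def err_reduction(e_pinn, e_new):
    # error reduction rate of a new method relative to PINNs, as in (3.28)/(3.29)
    return (e_pinn - e_new) / e_pinn

# The MAE values reported for the linear coefficient reproduce ERR_1 = 44.28%:
assert abs(err_reduction(3.438509e-05, 1.915842e-05) - 0.4428) < 1e-3
```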
Here, we fix \(\alpha(t)=\frac{t^{2}}{2},\beta(t)=\frac{t}{5}\) and our objective function is \(\gamma(t)=t^{2}\), based on the dataset of the corresponding solution \(A(x,t)\) of the variable coefficient nonlinear Schrödinger equation: \[A(x,t)=\frac{\mathrm{e}^{\frac{\mathrm{i}}{10}t^{2}}\mathrm{e}^{(1+\mathrm{i})x-\frac{t^{3}}{3}}}{1+\frac{\mathrm{e}^{2x-\frac{2t^{3}}{3}}}{4}}. \tag{3.30}\] Since the functional form of the target variable coefficient \(\gamma(t)\) is quadratic and no longer linear as in Sec. 3.1.1, we add sampling data for it and change the term measuring the difference between the predicted values and the true values of \(\gamma(t)\) into \[MSE_{\gamma}=\frac{1}{2}\left(|\widehat{\gamma}(t_{0})-\gamma^{0}|^{2}+| \widehat{\gamma}(t_{1})-\gamma^{1}|^{2}\right). \tag{3.31}\] Apart from that, the loss functions of PINNs and gPINNs (TL-gPINNs) are consistent with the previous subsection. Obviously, \(N_{\gamma}=2\) here, and the training region is selected as \([x_{0},x_{1}]\times[t_{0},t_{1}]=[-4,4]\times[-2,2]\). Exploiting the same data discretization method, we divide the spatial region \([x_{0},x_{1}]=[-4,4]\) into \(N_{x}=513\) discrete equidistant points and the time region \([t_{0},t_{1}]=[-2,2]\) into \(N_{t}=201\) discrete equidistant points. The initial-boundary dataset (\(N_{A}=200\)) and the internal point set (\(N_{A_{in}}=2000\)) are sampled randomly from the \(513\times 201\) data points of the solution \(A(x,t)\), and \(N_{f}=N_{g}=40000\) collocation points are extracted via the Latin hypercube sampling method. We firstly initialize the weights of PINNs with Xavier initialization. A 7-hidden-layer feedforward neural network with 40 neurons per hidden layer and a 3-hidden-layer feedforward neural network with 30 neurons per hidden layer are constructed to learn the one-soliton solution and the variable coefficient \(\gamma(t)\) of the vcNLS equation, respectively.
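The Latin hypercube sampling used to draw the collocation points can be sketched in a few lines (a minimal illustration of ours, not the implementation of [60]): each axis of \([0,1]^{d}\) is split into \(N\) equal strata, one sample is drawn per stratum, and the strata are shuffled independently per dimension.

```python
import random

def latin_hypercube(n, dims, seed=0):
    """Draw n points in [0, 1]^dims so that every coordinate axis has exactly
    one sample in each of its n equal strata."""
    rng = random.Random(seed)
    cols = []
    for _ in range(dims):
        col = [(i + rng.random()) / n for i in range(n)]  # one point per stratum
        rng.shuffle(col)                                  # decouple the axes
        cols.append(col)
    return list(zip(*cols))

pts = latin_hypercube(10, 2)
for d in range(2):
    strata = sorted(int(p[d] * 10) for p in pts)
    assert strata == list(range(10))   # one sample per stratum on each axis
```

The unit-cube samples are then rescaled affinely to the actual spatiotemporal domain.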
We use the hyperbolic tangent (tanh) activation function to add nonlinear factors into neural networks. At the end of the iteration process, the parameter data of PINNs is stored and then we use the saved data to fine-tune gPINNs with the same structure by changing the loss function into (3.19). In about 2293.3249 seconds, the data-driven solution of the vcNLS equation is obtained by gPINNs based on transfer learning (TL-gPINNs) and the relative \(\mathbb{L}_{2}\) errors of the real part \(u\), the imaginary part \(v\) and the modulus \(|A|\) are 1.336860e-03, 1.452912e-03 and 8.587186e-04. Simultaneously, the variable coefficient \(\gamma(t)\) is successfully inferred with the mean absolute error of 3.100830e-03 and relative \(\mathbb{L}_{2}\) error of 2.163681e-03. Fig. 5 displays the exact, learned and error density diagrams as well as evolution plots of one-soliton solution at different time points \(t=-1.5,0,1.5\). In Fig. 6, the curve plots of the predicted and the exact variable coefficient \(\gamma(t)\), the absolute error and the predicted 3D graph of the cubic soliton solution for the vcNLS equation are plotted. As can be seen from these diagrams and performance comparison of three methods shown in Table 3, TL-gPINN is capable of correctly identifying the unknown variable coefficient \(\gamma(t)\) and learning the cubic soliton solution with very high accuracy while gPINN doesn't work as expected. \begin{table} \begin{tabular}{c|c|c|c} \hline \hline **ResultsMethod** & PINNs & gPINNs & TL-gPINNs \\ \hline Elapsed time (s) & 302.1032 & 2579.5674 & 1862.3727 \\ \hline \(MAE_{\gamma}\) & 3.438509e-05 & 2.405417e-05 & 1.915842e-05 \\ \hline \(RE_{\gamma}\) & 1.732523e-05 & 1.199509e-05 & 9.511436e-06 \\ \hline \(ERR_{1}\) & - & 30.04\% & 44.28\% \\ \hline \(ERR_{2}\) & - & 30.77\% & 45.10\% \\ \hline \hline \end{tabular} \end{table} Table 2. 
Performance comparison of three methods: the elapsed time, mean absolute errors and relative \(\mathbb{L}_{2}\) errors of the linear variable coefficient \(\gamma(t)\) as well as error reduction rates. Figure 4. (Color online) Results of function discovery for the vcNLS equation by TL-gPINNs: (a) The absolute error and comparison between the predicted and exact variable coefficient \(\gamma(t)\); (b) The three-dimensional plot of the data-driven one-soliton solution \(|A(x,t)|\). #### 3.1.3. Data-driven discovery of sine variable coefficient \(\gamma(t)\) After fixing \(\alpha(t)=\sin(t),\beta(t)=\frac{t}{5}\), we aim to infer the unknown \(\gamma(t)\) in the variable coefficient nonlinear Schrödinger equation based on the solution data: \[A(x,t)=\frac{\mathrm{e}^{\frac{\mathrm{i}}{10}t^{2}}\mathrm{e}^{(1+\mathrm{i})x+2\cos(t)}}{1+\frac{\mathrm{e}^{2x+4\cos(t)}}{8}}. \tag{3.32}\] For simplicity, we confine our sampling and training to a rectangular region \((x,t)\in[-4,4]\times[-5,5]\). To generate a dataset for this example, we choose \(N_{A}=200\) points from the initial-boundary dataset and \(N_{A_{in}}=2000\) points from the interior point set at random after equidistant discretization. In addition, we employ the Latin hypercube sampling method to select \(N_{f}=N_{g}=40000\) collocation points. Similarly, we first establish the fully-connected PINNs with Xavier initialization and proceed by adopting gPINNs with the advantage of transfer learning. The structure of the networks, including the width and depth, activation function, definition of loss functions as well as the optimization algorithm, is the same as in the previous subsection.
Dynamic behaviors of the soliton solution \(A(x,t)\) and variable coefficient \(\gamma(t)\) inferred by TL-gPINNs are shown in Fig. 7 and Fig. 8, which contain the comparison between the predicted solutions and the exact ones, the three-dimensional plots of the predicted \(A(x,t)\) and the curve graph of the variable coefficient \(\gamma(t)\). The absolute error curve of \(\gamma(t)\) is drawn with a black dotted line corresponding to the right coordinate axis in Fig. 8 (a). An empirical inference is that the high-frequency oscillation of the absolute error is caused by the periodic oscillation and the change in concavity and convexity of the variable coefficient. \begin{table} \begin{tabular}{c|c|c|c} \hline \hline **ResultsMethod** & PINNs & gPINNs & TL-gPINNs \\ \hline Elapsed time (s) & 789.8311 & 3133.7038 & 2293.3249 \\ \hline \(MAE_{\gamma}\) & 4.211790e-03 & 1.072530e-02 & 3.100830e-03 \\ \hline \(RE_{\gamma}\) & 3.003559e-03 & 7.299052e-03 & 2.163681e-03 \\ \hline \(ERR_{1}\) & - & -154.65\% & 26.38\% \\ \hline \(ERR_{2}\) & - & -143.01\% & 27.96\% \\ \hline \hline \end{tabular} \end{table} Table 3. Performance comparison of three methods: the elapsed time, mean absolute errors and relative \(\mathbb{L}_{2}\) errors of the quadratic variable coefficient \(\gamma(t)\) as well as error reduction rates. Figure 5. (Color online) One-soliton solution \(A(x,t)\) of the vcNLS equation by TL-gPINNs: (a) The density diagrams and comparison between the predicted solutions and exact solutions at the three temporal snapshots of \(|A(x,t)|\); (b) The error density diagram of \(|A(x,t)|\). Figure 6. (Color online) Results of function discovery for the vcNLS equation by TL-gPINNs: (a) The absolute error and comparison between the predicted and exact variable coefficient \(\gamma(t)\); (b) The three-dimensional plot of the data-driven one-soliton solution \(|A(x,t)|\).
It can be observed that we obtain a soliton solution that is periodic in time, like a sine or cosine function, and that the predicted variable coefficient fits the exact one well, with absolute error less than \(2\times 10^{-3}\). In addition, the results in Table 4 illustrate that both the mean absolute error and relative \(\mathbb{L}_{2}\) error of the variable coefficient \(\gamma(t)\) achieved by TL-gPINNs reach the level of \(10^{-4}\), about one order of magnitude lower than those of PINNs. #### 3.1.4. Data-driven discovery of hyperbolic tangent variable coefficient \(\gamma(t)\) Given \(\alpha(t)=\tanh(t),\beta(t)=\frac{t}{5}\), our goal is to identify the unknown variable parameter \(\gamma(t)\) from the vcNLS equation with remarkable accuracy. \begin{table} \begin{tabular}{c|c|c|c} \hline **ResultsMethod** & PINNs & gPINNs & TL-gPINNs \\ \hline Elapsed time (s) & 729.1269 & 5283.3492 & 4272.9441 \\ \hline \(MAE_{\gamma}\) & 1.463990e-03 & 7.123562e-04 & 4.664226e-04 \\ \hline \(RE_{\gamma}\) & 2.703498e-03 & 1.363048e-03 & 7.559607e-04 \\ \hline \(ERR_{1}\) & - & 51.34\% & 68.14\% \\ \hline \(ERR_{2}\) & - & 49.58\% & 72.04\% \\ \hline \end{tabular} \end{table} Table 4. Performance comparison of three methods: the elapsed time, mean absolute errors and relative \(\mathbb{L}_{2}\) errors of the sine variable coefficient \(\gamma(t)\) as well as error reduction rates. Figure 8. (Color online) Results of function discovery for the vcNLS equation by TL-gPINNs: (a) The absolute error and comparison between the predicted and exact variable coefficient \(\gamma(t)\); (b) The three-dimensional plot of the data-driven one-soliton solution \(|A(x,t)|\). Figure 7. (Color online) One-soliton solution \(A(x,t)\) of the vcNLS equation by TL-gPINNs: (a) The density diagrams and comparison between the predicted solutions and exact solutions at the three temporal snapshots of \(|A(x,t)|\); (b) The error density diagram of \(|A(x,t)|\).
After utilizing the same generation and sampling method of the training data as above, we acquire a training set consisting of \(N_{A}=200\) initial-boundary points, \(N_{A_{in}}=2000\) internal points and a random selection of \(N_{f}=N_{g}=40000\) collocation points in the given spatiotemporal domain \([x_{0},x_{1}]\times[t_{0},t_{1}]=[-2,4]\times[-5,5]\), where the corresponding soliton solution is \[A(x,t)=\frac{\mathrm{e}^{\frac{\mathrm{i}}{10}t^{2}}\mathrm{e}^{(1+\mathrm{i})x-2\ln(\cosh(t))}}{1+\frac{\mathrm{e}^{2x-4\ln(\cosh(t))}}{8}}. \tag{3.33}\] The first step is to construct the conventional PINNs. The architecture of the multi-output neural network consists of one input layer, 7 hidden layers with 40 neurons per hidden layer and one output layer with 2 neurons to learn the real part \(u(x,t)\) and imaginary part \(v(x,t)\) of the soliton solution. A 3-hidden-layer feedforward neural network with 30 neurons per hidden layer is employed to infer the variable parameter \(\gamma(t)\). This process can be regarded as the pre-training of the gPINNs, which helps accelerate the convergence of training. Next, we initialize gPINNs with the saved weights of PINNs. The activation function and optimization algorithm used here are the \(tanh\) function and the L-BFGS optimizer, respectively. By leveraging TL-gPINNs, the data-driven soliton solution \(A(x,t)\) and variable coefficient \(\gamma(t)\) are plotted in Fig. 9 and Fig. 10. For the double coordinate plot in Fig. 10 (a), the black dotted line corresponding to the right coordinate axis represents the absolute error curve, which exhibits a certain degree of symmetry since the variable coefficient itself is centrosymmetric. Empirically speaking, the error increases accordingly when the value of the function to be learned is large or changes greatly. However, the error is relatively small during the period with high slopes, i.e., \(t\in[-2,2]\).
Presumably this is because the introduction of gradient information into the loss function helps capture the features of the variable coefficient where the slope is relatively large. We observe that this V-shaped soliton and the hyperbolic-tangent-type variable coefficient are both accurately inferred. Furthermore, Table 5 gives a brief overview of the accuracy and efficiency of the three methods. Figure 10. (Color online) Results of function discovery for the vcNLS equation by TL-gPINNs: (a) The absolute error and comparison between the predicted and exact variable coefficient \(\gamma(t)\); (b) The three-dimensional plot of the data-driven one-soliton solution \(|A(x,t)|\). Figure 9. (Color online) One-soliton solution \(A(x,t)\) of the vcNLS equation by TL-gPINNs: (a) The density diagrams and comparison between the predicted solutions and exact solutions at the three temporal snapshots of \(|A(x,t)|\); (b) The error density diagram of \(|A(x,t)|\). #### 3.1.5. Data-driven discovery of fractional variable coefficient \(\gamma(t)\) When \(\alpha(t)\), \(\beta(t)\) are fixed as \(\frac{1}{2(1+t^{2})}\), \(\frac{t}{5}\), respectively, and the training of this case is confined to the rectangular region \((x,t)\in[-4,5]\times[-5,5]\), the target is to infer the unknown variable coefficient \(\gamma(t)\) on the basis of the dataset of the corresponding solution \[A(x,t)=\frac{\mathrm{e}^{\frac{\mathrm{i}}{10}t^{2}}\mathrm{e}^{(1+\mathrm{i})x-\arctan(t)}}{1+\frac{(2t^{2}+2)\mathrm{e}^{2x-2\arctan(t)}}{8(t^{2}+1)}}. \tag{3.34}\] Since the sampling method and network structure have already been described at length above, we do not reiterate them here; all details are the same as in the previous subsection. Table 6 summarizes the results of our experiment and compares the performance of PINNs, TL-gPINNs and gPINNs.
A more detailed assessment of the predicted soliton solution \(A(x,t)\) and variable coefficient \(\gamma(t)\) by leveraging TL-gPINNs is presented in Fig. 11 and Fig. 12. Specifically, the comparison between the exact and the predicted solutions at different time points \(t=-3.75,0,3.75\) as well as that between the predicted and exact variable coefficient \(\gamma(t)\) is also displayed. A rule of thumb is that the error is large when the value of the variable coefficient \(\gamma(t)\) is large or \(\gamma(t)\) changes sharply. The change of the absolute error curve plotted with the black dashed line in Fig. 12 (a) is in good agreement with this experiential conclusion to a certain extent. In addition, TL-gPINN is capable of accurately capturing the intricate nonlinear behaviors of the vcNLS equation, including the dynamic behaviors of the solution and the Kerr nonlinearity \(\gamma(t)\). ### Data-driven discovery of multiple variable coefficients We extend the research of data-driven discovery from a single variable coefficient to multiple ones, whose hyper-parameters are outlined in Table 1. For each case discussed here, the L-BFGS algorithm is utilized to optimize the loss functions. #### 3.2.1. Data-driven discovery of two variable coefficients: linear \(\beta(t)\) and sine \(\gamma(t)\) In this part, we use the TL-gPINNs to identify two unknown variable coefficients: linear \(\beta(t)\) and sine \(\gamma(t)\) when the other variable coefficient (\(\alpha(t)=\sin(t)\)) is fixed and the training dataset consisting of initial-boundary data \(\{x_{A}^{i},t_{A}^{i},u^{i},v^{i}\}_{i=1}^{N_{A}}(N_{A}=200)\) and internal data \(\{x_{in}^{i},t_{in}^{i},u^{i},v^{i}\}_{i=1}^{N_{A_{in}}}(N_{A_{in}}=2000)\) is randomly selected.
Then the loss functions of PINNs and gPINNs are redefined as \[MSE_{inverse}=MSE_{A}+MSE_{f}+MSE_{A_{in}}+MSE_{\mathbf{A}}, \tag{3.35}\] \[MSE_{inverse}^{g}=MSE_{A}+MSE_{f}+MSE_{A_{in}}+MSE_{\mathbf{A}}+MSE_{g}, \tag{3.36}\] where \[MSE_{\mathbf{A}}=MSE_{\beta}+MSE_{\gamma}, \tag{3.37}\] \[MSE_{\beta}=|\widehat{\beta}(t_{0})-\beta^{0}|^{2}, \tag{3.38}\] \[MSE_{\gamma}=\frac{1}{2}\left(|\widehat{\gamma}(t_{0})-\gamma^{0}|^{2}+|\widehat{\gamma}(t_{1})-\gamma^{1}|^{2}\right). \tag{3.39}\] The depth and width of the neural networks for inferring the solution and variable coefficients are listed in Table 1. By employing the TL-gPINN method, the data-driven soliton solution and variable coefficients for the vcNLS equation are successfully simulated. Comparison between the predicted and exact variable coefficients \(\beta(t)\) and \(\gamma(t)\) as well as the corresponding absolute errors is displayed in Fig. 13. It can be seen that the absolute error of the linear \(\beta(t)\) is negligible compared with that of the nonlinear \(\gamma(t)\), which exhibits high-frequency oscillation due to the periodic oscillation and the changes in concavity and convexity of the variable coefficient. Table 7 gives a brief overview of the method performance. Figure 12. (Color online) Results of function discovery for the vcNLS equation by TL-gPINNs: (a) The absolute error and comparison between the predicted and exact variable coefficient \(\gamma(t)\); (b) The three-dimensional plot of the data-driven one-soliton solution \(|A(x,t)|\). Figure 13. (Color online) Results of function discovery for the vcNLS equation by TL-gPINNs: (a) Comparison between the predicted and exact variable coefficients \(\beta(t)\) and \(\gamma(t)\); (b) The absolute errors. #### 3.2.2. Data-driven discovery of three variable coefficients Note that all variable coefficients of the vcNLS equation are unknown here.
\(\bullet\)**Linear \(\alpha(t)\), \(\beta(t)\) and \(\gamma(t)\)** For the identification of three linear variable coefficients, the term embodying the training data in the loss functions in Eq. (3.35) and (3.36) needs to be modified: \[MSE_{\mathbf{A}}=MSE_{\alpha}+MSE_{\beta}+MSE_{\gamma}, \tag{3.40}\] \[MSE_{\alpha}=|\widehat{\alpha}(t_{0})-\alpha^{0}|^{2}, \tag{3.41}\] \[MSE_{\beta}=|\widehat{\beta}(t_{0})-\beta^{0}|^{2}, \tag{3.42}\] \[MSE_{\gamma}=|\widehat{\gamma}(t_{0})-\gamma^{0}|^{2}. \tag{3.43}\] With the aid of the same generation and sampling method as above, we obtain the training data (size: \(N_{A}=200,N_{A_{in}}=2000\)) in the given spatiotemporal region \([-4,4]\times[-4,4]\), where the corresponding soliton solution is \[A(x,t)=\frac{\mathrm{e}^{\frac{\mathrm{i}}{6}t^{2}}\mathrm{e}^{(1+\mathrm{i})x-2\ln(\cosh(t))}}{1+\frac{\mathrm{e}^{2x-4\ln(\cosh(t))}}{8}}. \tag{3.44}\] The linear and tanh activation functions are adopted to infer the variable coefficients and the soliton solution separately. Finally, Fig. 14 shows the curve plots of the predicted and the exact variable coefficients as well as the absolute errors obtained by TL-gPINN, and Table 8 summarizes the detailed results of the three methods in terms of prediction accuracy. The change of the absolute error curves here is similar to that in Sec. 3.1.1.
\begin{table} \begin{tabular}{c|c|c c|c} \hline \hline \multicolumn{2}{c|}{Results} & \multicolumn{3}{c|}{Method} \\ \cline{2-5} \multicolumn{2}{c|}{} & PINNs & gPINNs & TL-gPINNs \\ \hline \multirow{2}{*}{\(\beta(t)\)} & \(MAE_{\beta}\) (\(ERR_{1}\)) & 1.246162e-05 & 1.861258e-05 (**-49.36\%**) & 8.680616e-06 **(30.34\%)** \\ \cline{2-5} & \(RE_{\beta}\) (\(ERR_{2}\)) & 2.323916e-05 & 3.888068e-05 **(-67.31\%)** & 1.517259e-05 **(30.34\%)** \\ \hline \multirow{2}{*}{\(\gamma(t)\)} & \(MAE_{\gamma}\) (\(ERR_{1}\)) & 1.412442e-03 & 1.128851e-03 **(-67.31\%)** & 7.726441e-04 **(34.71\%)** \\ \cline{2-5} & \(RE_{\gamma}\) (\(ERR_{2}\)) & 2.712897e-03 & 2.220811e-03 **(18.14\%)** & 1.309694e-03 **(51.72\%)** \\ \hline \hline \end{tabular} \end{table} Table 7. Performance comparison of three methods: mean absolute errors and relative \(\mathbb{L}_{2}\) errors of the variable coefficients \(\beta(t)\) and \(\gamma(t)\) as well as error reduction rates. Figure 14. (Color online) Results of function discovery for the vcNLS equation by TL-gPINNs: (a) Comparison between the predicted and exact variable coefficients \(\alpha(t)\), \(\beta(t)\) and \(\gamma(t)\); (b) The absolute errors. 
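The error reduction rates \(ERR_{1}\) and \(ERR_{2}\) quoted in these tables compare each method against the PINN baseline. Assuming the natural definition \(ERR=(e_{\mathrm{PINN}}-e_{\mathrm{method}})/e_{\mathrm{PINN}}\) (an inference from the reported numbers, not a formula stated in this excerpt), the \(\beta(t)\) entries of Table 7 can be reproduced:

```python
def err_reduction(e_pinn, e_method):
    """Relative error reduction w.r.t. the PINN baseline (negative = worse)."""
    return (e_pinn - e_method) / e_pinn

# MAE of beta(t) from Table 7: PINN, gPINN, TL-gPINN
mae = {"PINNs": 1.246162e-05, "gPINNs": 1.861258e-05, "TL-gPINNs": 8.680616e-06}
for name in ("gPINNs", "TL-gPINNs"):
    rate = 100 * err_reduction(mae["PINNs"], mae[name])
    print(f"{name}: ERR_1 = {rate:.2f}%")  # -49.36% and 30.34%, matching Table 7
```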
\begin{table} \begin{tabular}{c|c|c c|c} \hline \hline \multicolumn{2}{c|}{Results} & \multicolumn{3}{c|}{Method} \\ \cline{2-5} \multicolumn{2}{c|}{} & PINNs & gPINNs & TL-gPINNs \\ \hline \multirow{2}{*}{\(\alpha(t)\)} & \(MAE_{\alpha}\) (\(ERR_{1}\)) & 7.617165e-05 & 1.224185e-04 **(-60.71\%)** & 6.102315e-05 **(19.89\%)** \\ \cline{2-5} & \(RE_{\alpha}\) (\(ERR_{2}\)) & 7.604356e-05 & 1.225268e-04 **(-61.13\%)** & 6.186360e-05 **(18.65\%)** \\ \hline \multirow{2}{*}{\(\beta(t)\)} & \(MAE_{\beta}\) (\(ERR_{1}\)) & 2.193591e-04 & 5.647830e-04 **(-157.47\%)** & 8.066458e-05 **(63.23\%)** \\ \cline{2-5} & \(RE_{\beta}\) (\(ERR_{2}\)) & 5.484737e-04 & 1.412454e-03 **(-157.52\%)** & 2.025123e-04 **(63.08\%)** \\ \hline \multirow{2}{*}{\(\gamma(t)\)} & \(MAE_{\gamma}\) (\(ERR_{1}\)) & 1.086061e-04 & 4.343718e-04 **(-299.95\%)** & 3.701118e-05 **(65.92\%)** \\ \cline{2-5} & \(RE_{\gamma}\) (\(ERR_{2}\)) & 5.447867e-05 & 2.165039e-04 **(-297.41\%)** & 1.842847e-05 **(66.17\%)** \\ \hline \hline \end{tabular} \end{table} Table 8. Performance comparison of three methods: mean absolute errors and relative \(\mathbb{L}_{2}\) errors of three linear variable coefficients as well as error reduction rates. \(\bullet\)**Linear \(\beta(t)\), fractional \(\alpha(t)\) and \(\gamma(t)\)** Based on the initial-boundary data of the soliton solution \[A(x,t)=\frac{\mathrm{e}^{\frac{\mathrm{i}}{10}t^{2}}\mathrm{e}^{(1+\mathrm{i})x-\arctan(t)}}{1+\frac{(2t^{2}+2)\mathrm{e}^{2x-2\arctan(t)}}{8(t^{2}+1)}}, \tag{3.45}\] corresponding to \(\alpha(t)=\frac{1}{2(1+t^{2})},\beta(t)=\frac{t}{5},\gamma(t)=\frac{1}{1+t^{2}}\), we utilize the TL-gPINNs to infer these three unknown variable coefficients.
Here, the loss term of the variable coefficients should be changed into \[MSE_{vc}=MSE_{\alpha}+MSE_{\beta}+MSE_{\gamma}, \tag{3.46}\] where \[MSE_{\beta}=|\widehat{\beta}(t_{0})-\beta^{0}|^{2}, \tag{3.47}\] \[MSE_{\alpha}=\frac{1}{2}\left(|\widehat{\alpha}(t_{0})-\alpha^{0}|^{2}+|\widehat{\alpha}(t_{1})-\alpha^{1}|^{2}\right), \tag{3.48}\] \[MSE_{\gamma}=\frac{1}{2}\left(|\widehat{\gamma}(t_{0})-\gamma^{0}|^{2}+|\widehat{\gamma}(t_{1})-\gamma^{1}|^{2}\right). \tag{3.49}\] Results of function discovery for the vcNLS equation, i.e. comparisons between the predicted and exact variable coefficients \(\alpha(t)\), \(\beta(t)\) and \(\gamma(t)\) as well as their respective absolute errors, are presented in Fig. 15. Similarly, the absolute error of the linear variable coefficient \(\beta(t)\) is negligible compared with those of the nonlinear ones. The variable coefficients \(\alpha(t)\) and \(\gamma(t)\) in the form of fractional polynomials also basically meet the rule of thumb mentioned in Sec. 3.1.5. However, the phenomenon of multiple intersections between the error curves and a more specific feature analysis remain to be further explored in future work. Besides, the running time of PINNs, TL-gPINNs and gPINNs is 242.0248, 320.302 and 892.13 seconds, respectively. The performance comparison of these three methods is shown in Table 9.
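The endpoint-anchoring terms (3.46)-(3.49) penalize the surrogate coefficients only at the temporal boundaries; a small sketch in Python (the endpoint values \(t_{0},t_{1}\) are illustrative assumptions, not taken from the text):

```python
t0, t1 = -5.0, 5.0  # temporal endpoints (illustrative values)

# Exact coefficients of this case
alpha = lambda t: 1 / (2 * (1 + t ** 2))
beta = lambda t: t / 5
gamma = lambda t: 1 / (1 + t ** 2)

def mse_vc(a_hat, b_hat, g_hat):
    """Coefficient loss (3.46)-(3.49): beta anchored at t0 only,
    alpha and gamma anchored at both t0 and t1."""
    mse_b = (b_hat(t0) - beta(t0)) ** 2
    mse_a = 0.5 * ((a_hat(t0) - alpha(t0)) ** 2 + (a_hat(t1) - alpha(t1)) ** 2)
    mse_g = 0.5 * ((g_hat(t0) - gamma(t0)) ** 2 + (g_hat(t1) - gamma(t1)) ** 2)
    return mse_a + mse_b + mse_g

print(mse_vc(alpha, beta, gamma))             # exact surrogate: 0.0
print(mse_vc(alpha, lambda t: t / 4, gamma))  # biased beta: > 0
```

In the actual networks, \(\widehat{\alpha},\widehat{\beta},\widehat{\gamma}\) are the branch-network outputs rather than closed-form functions.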
\begin{table} \begin{tabular}{c|c|c c|c} \hline \hline \multicolumn{2}{c|}{Results} & \multicolumn{3}{c}{Method} \\ \cline{3-5} \multicolumn{2}{c|}{} & PINNs & gPINNs & TL-gPINNs \\ \hline \multirow{2}{*}{\(\alpha(t)\)} & \(MAE_{\alpha}(ERR_{1})\) & 2.536442e-03 & 2.372040e-03 **(6.48\%)** & 2.069355e-03 **(18.42\%)** \\ \cline{2-5} & \(RE_{\alpha}(ERR_{2})\) & 1.855795e-02 & 1.840729e-02 **(0.81\%)** & 1.616184e-02 **(12.91\%)** \\ \hline \multirow{2}{*}{\(\beta(t)\)} & \(MAE_{\beta}(ERR_{1})\) & 3.263991e-05 & 2.116216e-05 **(35.16\%)** & 9.873565e-06 **(69.75\%)** \\ \cline{2-5} & \(RE_{\beta}(ERR_{2})\) & 5.729922e-05 & 3.738890e-05 **(34.75\%)** & 1.751263e-05 **(69.44\%)** \\ \hline \multirow{2}{*}{\(\gamma(t)\)} & \(MAE_{\gamma}(ERR_{1})\) & 5.610852e-03 & 5.163366e-03 **(7.98\%)** & 4.453668e-03 **(20.62\%)** \\ \cline{2-5} & \(RE_{\gamma}(ERR_{2})\) & 2.201791e-02 & 2.034598e-02 **(7.59\%)** & 1.753237e-02 **(20.37\%)** \\ \hline \hline \end{tabular} \end{table} Table 9. Performance comparison of three methods: mean absolute errors and relative \(\mathbb{L}_{2}\) errors of three variable coefficients as well as error reduction rates. Figure 15. (Color online) Results of function discovery for the vcNLS equation by TL-gPINNs: (a) Comparison between the predicted and exact variable coefficients \(\alpha(t)\), \(\beta(t)\) and \(\gamma(t)\); (b) The absolute errors. ### Result analysis According to the performance comparison of the three methods (PINNs, TL-gPINNs and gPINNs) presented in Table 2 - Table 9, TL-gPINNs achieve notably higher accuracy than the other two methods, whether in identifying a single variable coefficient or in inferring multiple ones. Meanwhile, TL-gPINNs can accelerate the convergence of iteration and reduce calculation time, since the technique of transfer learning helps to mitigate the inefficiency caused by the extra gradient loss terms.
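The warm-start mechanism behind TL-gPINN can be illustrated on a toy one-parameter problem: minimize the plain loss first, then continue on an augmented loss from the parameters already found instead of restarting from scratch. This is only a schematic analogue (plain gradient descent rather than L-BFGS, and a scalar rather than network weights):

```python
def gradient_descent(grad, w0, lr=0.05, tol=1e-8, max_steps=10_000):
    """Descend until |grad(w)| < tol; return the iterate and step count."""
    w, steps = w0, 0
    while abs(grad(w)) > tol and steps < max_steps:
        w -= lr * grad(w)
        steps += 1
    return w, steps

target = 2.0
pinn_grad = lambda w: 2 * (w - target)                       # plain loss gradient
gpinn_grad = lambda w: 2 * (w - target) + 2 * (w - target)   # plus a gradient-type term

w_pinn, _ = gradient_descent(pinn_grad, w0=10.0)         # stage 1: PINN from scratch
w_tl, s_warm = gradient_descent(gpinn_grad, w0=w_pinn)   # stage 2: warm-started gPINN
_, s_cold = gradient_descent(gpinn_grad, w0=10.0)        # gPINN trained from scratch
print(s_warm, "<", s_cold, "->", s_warm < s_cold)
```

Here the warm-started second stage needs far fewer iterations than training the augmented objective from scratch, mirroring the efficiency argument above.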
The reason why gPINN does not perform up to expectations here may be that the solution \(A(x,t)\) of the variable coefficient nonlinear Schrödinger equation is complex-valued, and each constraint function in the neural networks has to be decomposed into two parts: the real and imaginary parts. Thus, the loss function itself consists of many constraint terms even without regard to the gradient restriction. When solving such multi-objective optimization problems, the local optimum that the training ultimately converges to is obtained based on the competitive relationship between the various objectives. Therefore, the result is not necessarily better even if more constraints are imposed. Evidently, the experiments show that gPINN has lower prediction accuracy than PINN even at the cost of sacrificing efficiency, especially in Case 3.1.2 (shown in Table 3), Case 3.1.5 (Table 6) and Case 3.2.2 (Table 8). The advantage of the TL-gPINN method lies in that gPINN inherits the saved weight matrices and bias vectors of PINN at the end of the iteration process as its initialization parameters, and thus the subsequent training of gPINN builds on that of PINN by leveraging the transfer learning technique instead of training from scratch. Consequently, TL-gPINN is steadier in precision promotion compared to gPINN, a method which has been proved to be efficient in improving the accuracy of PINN [33]. What's more, the loss curve figures for inferring the linear variable coefficient \(\gamma(t)\) in Sec. 3.1.1 are plotted in Fig. 16 for the sake of more intuitive analysis. Here, only the loss functions corresponding to the real part, i.e. \(MSE_{u},MSE_{f_{u}}\) and \(MSE_{g_{u}}\), are considered; the counterparts of the imaginary part (\(MSE_{v},MSE_{f_{v}}\) and \(MSE_{g_{v}}\)) change in approximately the same way. The values of each loss term at the beginning and end of the iterations are listed in Table 10. As we can see from Fig.
16 (a), \(MSE_{g_{u}}\) fluctuated at a relatively high level and its value at the last iteration is almost the same as that at the beginning, while \(MSE_{u}\) and \(MSE_{f_{u}}\) decreased to 4.078804e-07 and 9.956128e-07 respectively during the training process of PINNs, where the gradient loss \(MSE_{g_{u}}\) makes no contribution to the optimization. In Fig. 16 (b) and Table 10, it is obvious that the values of the loss terms of the gradients \begin{table} \begin{tabular}{c|c|c|c|c|c|c} \hline \multirow{2}{*}{**ResultsMethod**} & \multicolumn{2}{c|}{PINNs} & \multicolumn{2}{c|}{gPINNs} & \multicolumn{2}{c}{TL-gPINNs} \\ \cline{2-7} & The zeroth & The last & The zeroth & The last & The zeroth & The last \\ & iteration & iteration & iteration & iteration & iteration & iteration \\ \hline \(MSE_{u}\) & 6.672856e-02 & 4.078804e-07 & 6.672856e-02 & 1.650002e-07 & 4.078804e-07 & 1.530691e-07 \\ \(MSE_{v}\) & 8.681140e-02 & 4.347213e-07 & 8.681140e-02 & 2.090391e-07 & 4.347213e-07 & 2.113367e-07 \\ \(MSE_{f_{u}}\) & 2.073652e-04 & 9.956128e-07 & 2.073652e-04 & 1.071937e-07 & 9.956128e-07 & 8.313516e-08 \\ \(MSE_{f_{v}}\) & 4.143952e-04 & 1.020429e-06 & 4.143952e-04 & 1.637525e-07 & 1.020429e-06 & 1.001443e-07 \\ \(MSE_{g_{u}}\) & 5.406206e-05 & 2.326252e-05 & 5.406206e-05 & 6.86929e-07 & 2.326252e-05 & 5.017756e-07 \\ \(MSE_{g_{v}}\) & 8.511276e-05 & 3.057861e-05 & 8.511276e-05 & 7.017902e-07 & 3.057861e-05 & 4.487380e-07 \\ \(MSE_{\gamma}\) & 16.521566 & 3.637979e-12 & 16.521566 & 2.273737e-13 & 3.637979e-12 & 2.273737e-13 \\ \hline \end{tabular} \end{table} Table 10. Results of losses at the beginning and end of iteration in inferring the linear variable coefficient \(\gamma(t)\) for the vcNLS equation by three methods. Figure 16. (Color online) Evolution of the loss functions in inferring the linear variable coefficient \(\gamma(t)\) for the vcNLS equation by two methods: (a) PINN; (b) TL-gPINN.
(i.e., \(MSE_{g_{u}}\) and \(MSE_{g_{v}}\)) are larger by several orders of magnitude than those of the other loss terms at the zeroth iteration, when the weight transfer has just been completed. Specifically, the values of \(MSE_{u},MSE_{v},MSE_{f_{u}},MSE_{f_{v}},MSE_{u_{in}},MSE_{v_{in}}\) remain approximately between \(10^{-7}\) and \(10^{-6}\), and that of \(MSE_{\gamma}\) stays at the level of \(10^{-11}\) to \(10^{-10}\), while the values of \(MSE_{g_{u}}\) and \(MSE_{g_{v}}\) are at a high level of \(10^{-5}\) to \(10^{-4}\). It reveals that, from the perspective of gradients, there is still some deviation between the variable coefficients themselves and the ones learned by the PINN method. In other words, the PINN method lacks sufficient attention to gradients and leads to inadequate optimization, which may be an underlying reason why the training of gPINNs can proceed effectively after the weight transfer is finished. Then the values of \(MSE_{g_{u}}\) dropped fairly steadily while \(MSE_{u}\) and \(MSE_{f_{u}}\) showed a downward trend after an initial ascent. Meanwhile, the process of their ascent happens to coincide with the fastest descent of \(MSE_{g_{u}}\), and we deduce that it may be a process of escaping from the local optimum obtained by PINN, where the values of the gradient loss are large although those of the other loss terms are at a fairly low level. With regard to efficiency, gPINNs significantly increase the time cost of training due to the introduction of additional gradient constraints, while TL-gPINNs shorten the training time in contrast to the original gPINNs by taking full advantage of transfer learning. In short, the TL-gPINN method achieves the highest prediction accuracy among the three methods, whether in inferring an unknown single variable coefficient or in identifying multiple ones. However, gPINN shows an unstable performance here and even performs no better than PINN in accuracy in some cases.
For TL-gPINNs, the application of the transfer learning technique contributes to both higher efficiency and greater reliability than the original gPINN. It outperforms the PINNs in accuracy, and the gPINNs in both accuracy and efficiency. Hence, the TL-gPINN method is superior to both PINN and gPINN here. ## 4. Analysis and discussion ### Robustness analysis Numerical results presented in Sec. 3.1 are based on noise-free training data, and here we carry out experiments in which the training data is corrupted with noise to test the robustness of the TL-gPINNs. Specifically, the training data, including the initial-boundary data \(\{x_{A}^{i},t_{A}^{i},u^{i},v^{i}\}_{i=1}^{N_{A}}\), internal data \(\{x_{in}^{i},t_{in}^{i},u^{i},v^{i}\}_{i=1}^{N_{A_{in}}}\) and the data \(\{t_{\gamma}^{i},\gamma^{i}\}_{i=1}^{N_{\gamma}}\) of the variable coefficient \(\gamma(t)\), is corrupted by four different noise levels: 0.5%, 1%, 3% and 5%. Table 11 reports the performance comparison of the three methods in identifying the variable coefficient \(\gamma(t)\) for the vcNLS equation under different noise conditions.
\begin{table} \begin{tabular}{c|c|c|c c c c c} \hline \multicolumn{3}{c|}{\multirow{2}{*}{Results}} & \multicolumn{5}{c}{Correct \(\gamma(t)\)} \\ \cline{4-8} \multicolumn{3}{c|}{} & \(t\) & \(t^{2}\) & \(\sin(t)\) & \(\tanh(t)\) & \(\frac{1}{1+t^{2}}\) \\ \hline \multirow{5}{*}{0.5\% noise} & \multicolumn{2}{c|}{\(MAE_{\gamma}\)} & 1.322670e-04 & 9.615011e-03 & 1.097929e-03 & 3.973503e-03 & 1.009619e-03 \\ \cline{2-8} & \multicolumn{2}{c|}{\(RE_{\gamma}\)} & 6.551441e-05 & 7.959068e-03 & 2.579889e-03 & 5.486678e-03 & 2.968212e-03 \\ \cline{2-8} & \(ERR_{1}\) & TL-gPINNs & 0.00\% & 3.16\% & 49.86\% & 51.81\% & 5.06\% \\ \cline{2-8} & \(ERR_{1}\) & gPINNs & -73.90\% & -90.00\% & -26.63\% & 39.10\% & -10.64\% \\ \cline{2-8} & \(ERR_{2}\) & TL-gPINNs & 0.00\% & 1.55\% & 39.72\% & 50.65\% & 10.76\% \\ \hline \multirow{6}{*}{1\% noise} & \multicolumn{2}{c|}{\(MAE_{\gamma}\)} & 3.642581e-04 & 1.241316e-02 & 3.276918e-03 & 1.005220e-02 & 2.160257e-03 \\ \cline{2-8} & \multicolumn{2}{c|}{\(RE_{\gamma}\)} & 1.825354e-04 & 1.403889e-02 & 8.239684e-03 & 1.449078e-02 & 6.147466e-03 \\ \cline{2-8} & \(ERR_{1}\) & TL-gPINNs & 48.18\% & 14.93\% & 9.34\% & 9.38\% & 13.12\% \\ \cline{2-8} & \(ERR_{1}\) & gPINNs & 27.43\% & -0.74\% & -12.80\% & 11.36\% & 7.65\% \\ \cline{2-8} & \(ERR_{2}\) & TL-gPINNs & 48.11\% & 0.09\% & -2.99\% & 8.76\% & 33.17\% \\ \cline{2-8} & \(ERR_{2}\) & gPINNs & 27.17\% & -2.20\% & -5.96\% & 10.90\% & 1.89\% \\ \hline \multirow{6}{*}{3\% noise} & \multicolumn{2}{c|}{\(MAE_{\gamma}\)} & 3.007159e-04 & 3.890978e-02 & 5.462772e-03 & 8.25448e-03 & 2.393983e-03 \\ \cline{2-8} & \multicolumn{2}{c|}{\(RE_{\gamma}\)} & 1.475556e-04 & 3.570587e-02 & 1.526540e-02 & 1.280577e-02 & 7.430092e-03 \\ \cline{2-8} & \(ERR_{1}\) & TL-gPINNs & 32.08\% & 5.61\% & 25.55\% & 11.41\% & 26.18\% \\ \cline{2-8} & \(ERR_{1}\) & gPINNs & 23.02\% & 6.66\% & 20.13\% & -24.42\% & -27.61\% \\ \cline{2-8} & \(ERR_{2}\) & TL-gPINNs & 32.65\% & 0.68\% & 9.56\% & 9.59\% & 37.61\% \\ \cline{2-8} & \(ERR_{2}\) & gPINNs & 23.34\% & -1.64\% & 7.05\% & -23.97\% & -7.41\% \\ \hline \multirow{4}{*}{5\% noise} & \(ERR_{1}\) & TL-gPINNs & 32.08\% & 5.61\% & 25.55\% & 11.41\% & 26.18\% \\ \cline{2-8} & \(ERR_{1}\) & gPINNs & 23.02\% & 6.66\% & 20.13\% & -24.42\% & -27.61\% \\ \cline{2-8} & \(ERR_{2}\) & TL-gPINNs & 32.65\% & 0.68\% & 9.56\% & 9.59\% & 37.61\% \\ \cline{2-8} & \(ERR_{2}\) & gPINNs & 23.34\% & -1.64\% & 7.05\% & -23.97\% & -7.41\% \\ \hline \end{tabular} \end{table} Table 11. Performance comparison of three methods in identifying the variable coefficient \(\gamma(t)\) for the vcNLS equation under different noise conditions. Table 11 summarizes the results of the numerical experiments under different noise levels; the indexes \(MAE_{\gamma}\) and \(RE_{\gamma}\) listed here are achieved by TL-gPINNs. The detailed results of PINNs and gPINNs are not provided here but are shown in Table 13 in Appendix A due to length limitations. Here, the reason why 0.00\% appears is that TL-gPINNs converge rapidly after merely a few iterations, which means the local optimum obtained by PINNs also belongs to TL-gPINNs, and the training will not continue after initialization with the saved weight data of PINNs.
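One plausible way to produce such corrupted datasets (the exact corruption scheme is not specified in this excerpt, so the scaling convention below is an assumption) is to add zero-mean Gaussian noise whose standard deviation is the given percentage of the clean data's standard deviation:

```python
import math
import random
import statistics

def corrupt(values, level, seed=0):
    """Add zero-mean Gaussian noise with std = level * std of the clean data."""
    rng = random.Random(seed)
    scale = level * statistics.pstdev(values)
    return [v + rng.gauss(0.0, scale) for v in values]

clean = [math.tanh(0.1 * t) for t in range(-50, 51)]  # e.g. samples of a tanh-type gamma(t)
for level in (0.005, 0.01, 0.03, 0.05):
    noisy = corrupt(clean, level)
    realised = statistics.pstdev([n - c for n, c in zip(noisy, clean)])
    print(f"{level:.1%} noise -> realised perturbation std {realised:.2e}")
```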
According to the mean absolute error (\(MAE_{\gamma}\)) and relative \(\mathbb{L}_{2}\) error (\(RE_{\gamma}\)) achieved by TL-gPINNs, different types of the variable coefficient \(\gamma(t)\) can be identified accurately via this method. Evidently, the predictions of the unknown variable coefficient retain good robustness even when the training data is corrupted with different levels of noise. It also turns out that the accuracy of TL-gPINNs does not necessarily deteriorate as the noise intensity increases, and may even improve in some cases. Since the values of \(ERR_{1}\) and \(ERR_{2}\) indicate the degree of prediction accuracy improvement in the sense of the mean absolute error (\(MAE_{\gamma}\)) and relative \(\mathbb{L}_{2}\) error (\(RE_{\gamma}\)) respectively, the results demonstrate that the ability of TL-gPINNs in precision promotion also remains robust to noise. After assessing and comparing \(ERR_{1}\) and \(ERR_{2}\) of the two methods, we observe that the vast majority of experiments by TL-gPINNs outperform those of gPINNs in enhancing the accuracy of inferring the unknown variable coefficient and improving the generalization capability. Meanwhile, the higher efficiency of TL-gPINNs compared with the original gPINNs is a distinct advantage as well. Based on the performance in Sec. 3.1 and Sec. 4.1, regardless of whether the training data is corrupted with noise or not, TL-gPINNs possess the ability to successfully infer the unknown variable coefficient \(\gamma(t)\) with satisfactory accuracy. Taken overall, the TL-gPINNs meet the robustness and computational accuracy standards required in practice. ### Parametric sensitivity analysis The training results of neural networks are influenced by many factors, such as the architecture of the neural networks and the size of the training dataset.
Thus, a parametric sensitivity analysis is conducted here to reveal the effect of these hyper-parameters on predictions of the single nonlinear variable coefficient \(\gamma(t)\). \(\bullet\)**The architecture of neural networks** With regard to the structure of fully-connected neural networks (FNN), the emphasis is put on the number of weighted layers (depth) and the number of neurons per hidden layer (width). We then explore how changes in the width and depth of the branch network for inferring the variable coefficient affect the experimental results. Meanwhile, we mainly investigate the nonlinear variable coefficients \(\gamma(t)\) mentioned in Sec. 3.1, which are more common in practice. For each form of the unknown nonlinear variable coefficient \(\gamma(t)\), two hyper-parameters are changed: depth from 4 to 5 and width from 10 to 50 with step size 10. Finally, heat maps of relative \(\mathbb{L}_{2}\) errors are shown in Fig. 17 in order to display the experimental results more intuitively, and the detailed results are given in Table 14 in Appendix A. The figures in the first, second and third columns are the visualization of relative \(\mathbb{L}_{2}\) errors given by PINNs, TL-gPINNs and gPINNs respectively. The darker the color, the greater the error. For each group of experiments, we compare the performance of the three methods and use a red dotted line to frame the one with the smallest error in the heat maps. Evidently, the color of the heat maps in the second column is the lightest on the whole. Also, TL-gPINNs account for the largest proportion of numerical experiments with the smallest error. Since TL-gPINN inherits its initialization parameters (weights and biases) from PINN, the color depths reflecting the relative \(\mathbb{L}_{2}\) errors of PINN and TL-gPINN are highly correlated in the heat maps of Fig. 17. This may contribute to the stability of TL-gPINN in significant accuracy enhancement.
Numerically, the average (10 runs) relative \(\mathbb{L}_{2}\) errors of the nonlinear variable coefficient \(\gamma(t)\) as well as the error reduction rates of TL-gPINNs and gPINNs are listed in Table 12. It clearly illustrates that our proposed method (TL-gPINN) consistently outperforms the other two (PINN and gPINN). For the numerous cases above, TL-gPINN always performs well and achieves a stable improvement of accuracy under different widths and depths of the branch network for the identification of nonlinear variable coefficients. \(\bullet\)**The size of training dataset** \begin{table} \begin{tabular}{c|c c|c} \hline \multirow{2}{*}{Correct nonlinear \(\gamma(t)\)} & \multicolumn{3}{c}{Relative \(\mathbb{L}_{2}\) errors(\(ERR_{2}\))} \\ \cline{2-4} & PINNs & gPINNs & TL-gPINNs \\ \hline \(t^{2}\) & 5.375348e-03 & 6.562848e-03(**-22.09\%**) & 4.711041e-03(**12.36\%**) \\ \(\sin(t)\) & 2.603825e-03 & 2.651862e-03(**-1.84\%**) & 1.18419e-03(**54.52\%**) \\ \(\tanh(t)\) & 7.727333e-03 & 5.430929e-03(**29.72\%**) & 3.814903e-03(**50.63\%**) \\ \(\frac{1}{1+t^{2}}\) & 8.590022e-03 & 7.668054e-03(**10.73\%**) & 4.530164e-03(**47.26\%**) \\ \hline \end{tabular} \end{table} Table 12. Average performance comparison of three methods in identifying nonlinear variable coefficient \(\gamma(t)\) for the vcNLS equation under different width and depth. The difference between the inverse problem and the forward one lies in the incorporation of some extra measurements \(\{x_{in}^{i},t_{in}^{i},u^{i},v^{i}\}_{i=1}^{N_{A_{in}}}\) of the internal region. Hence, the major consideration is the size of the internal data, i.e. the value of \(N_{A_{in}}\). Considering the randomness involved in sampling and initialization, the setting of the parameter _seed_ in the codes will affect the numerical results. We perform six groups of numerical experiments for each nonlinear variable coefficient \(\gamma(t)\), with the value of \(N_{A_{in}}\) changing from 500 to 3000 with step size 500.
Meanwhile, each group contains five experiments under the condition of different initial seeds to explore the impact of randomness on the results. Here, we are chiefly concerned with the accuracy of the nonlinear \(\gamma(t)\) obtained by TL-gPINNs as well as the error reduction rates of TL-gPINNs and gPINNs compared with PINNs, which are shown in Fig. 18 and Table 15 in Appendix A. In Fig. 18, the orange and blue lines correspond to the mean error reduction rates (\(ERR_{2}\)) of the five numerical experiments using TL-gPINNs and gPINNs respectively, and the shaded regions depict the max-min ranges. It can be concluded from the figures above that TL-gPINN has higher error reduction rates for each case, whether in the average, maximum, or minimum sense. However, \(ERR_{2}\) of gPINNs is even less than 0% in many examples, which means the accuracy of gPINN is reduced rather than improved compared to the traditional PINN method. Furthermore, the size of the shaded area to some extent reflects the stability of the method. Thus, TL-gPINN is apparently more stable and accurate than gPINN based on the error reduction rates of the relative \(\mathbb{L}_{2}\) error (\(ERR_{2}\)) under different sizes of the training dataset. Figure 17. (Color online) Relative \(\mathbb{L}_{2}\) errors of nonlinear variable coefficients \(\gamma(t)\) via three methods under different depth and width: (a) quadratic \(\gamma(t)\); (b) sine \(\gamma(t)\); (c) hyperbolic tangent \(\gamma(t)\); (d) fractional \(\gamma(t)\). ## 5. Conclusion Traditional numerical methods have many limitations in solving inverse problems, especially in dealing with noisy data, complex regions, and high-dimensional problems. Moreover, the inverse problem of function discovery is a relatively underexplored field.
In this paper, in order to overcome the deficiency of the discrete characterization of the PDE loss in neural networks and improve the accuracy of function feature description, we propose gradient-enhanced PINNs based on transfer learning (TL-gPINNs) for inverse problems of inferring unknown variable coefficients, and give a new viewpoint on gPINNs. The TL-gPINN method uses a two-step optimization strategy and gradually increases the difficulty. Firstly, the original PINN is applied to the inverse problem of the variable coefficient equations. Then, for further optimization, gPINN inherits the saved weight matrices and bias vectors of PINN at the end of the iteration process as its initialization parameters, and the introduction of the gradient term contributes to the accuracy enhancement of the variable coefficients. Moreover, the trunk and branch networks are established to infer the solution and variable coefficients separately in order to eliminate mutual influence. The effectiveness of TL-gPINNs is demonstrated in identifying several types of single variable coefficients, including linear, quadratic, sine, hyperbolic tangent and fractional functions, as well as multiple ones for the well-known variable coefficient nonlinear Schrödinger (vcNLS) equation in the field of integrable systems. Meanwhile, the abundant dynamic behaviors of the corresponding soliton solution can be well reproduced. Plenty of numerical experiments are carried out to compare the performance of PINNs, TL-gPINNs and gPINNs. It has been proved by Yu et al. that gPINN learns the unknown parameters more accurately than PINN for inverse problems in many examples, such as the Poisson equation, the diffusion-reaction equation, the Brinkman-Forchheimer model and so on. However, the accuracy of gPINN is reduced rather than improved compared with the standard PINN method in the inverse PDE problems of the vcNLS equation.
Presumably this is because the loss function itself consists of many constraint terms even without regard to the gradient restriction, and thus the result is not necessarily better even if more constraints are imposed when solving such multi-objective optimization problems. What's worse, the computational cost of gPINN is unavoidably higher than that of PINN, since the introduction of additional constraints on gradients gives rise to low efficiency. Consequently, one viable path towards accelerating the convergence of training could come by adopting the technique of transfer learning, and thus the TL-gPINN method is put forward here. Through the comparison among the three methods, TL-gPINN has the highest prediction precision and can improve efficiency compared to gPINN. In other words, TL-gPINNs can successfully infer the unknown variable coefficients with satisfactory accuracy, and outperform the PINNs in accuracy, and the gPINNs in both accuracy and efficiency. Besides, we further conduct robustness analysis and parametric sensitivity analysis. Figure 18. (Color online) Error reduction rates of relative \(\mathbb{L}_{2}\) error (\(ERR_{2}\)) in identifying nonlinear variable coefficient \(\gamma(t)\) for the vcNLS equation achieved by TL-gPINNs and gPINNs compared with PINNs under different number of \(N_{A_{in}}\): (a) quadratic \(\gamma(t)\); (b) sine \(\gamma(t)\); (c) hyperbolic tangent \(\gamma(t)\); (d) fractional \(\gamma(t)\). Numerical results also illustrate that the ability of TL-gPINNs to improve accuracy compared to the standard PINNs and gPINNs remains robust to noise and other hyper-parameters, including the width and depth of the branch network and the size of the training dataset. The TL-gPINN method applied in this paper is universal and can be adapted to the inverse problems of inferring unknown high-dimensional variable coefficients.
In future work, we will strive to propose more targeted improvements on this subject that enhance accuracy without sacrificing efficiency.

## Acknowledgments

The authors would like to thank Zhengwu Miao sincerely for providing support and help. The project is supported by the National Natural Science Foundation of China (No. 12175069 and No. 12235007), the Science and Technology Commission of Shanghai Municipality (No. 21JC1402500 and No. 22DZ2229014) and the Natural Science Foundation of Shanghai (No. 23ZR1418100).
2305.19235
Input State Stability of Gated Graph Neural Networks
In this paper, we aim to find the conditions for input-state stability (ISS) and incremental input-state stability ($\delta$ISS) of Gated Graph Neural Networks (GGNNs). We show that this recurrent version of Graph Neural Networks (GNNs) can be expressed as a dynamical distributed system and, as a consequence, can be analysed using model-based techniques to assess its stability and robustness properties. Then, the stability criteria found can be exploited as constraints during the training process to enforce the internal stability of the neural network. Two distributed control examples, flocking and multi-robot motion control, show that using these conditions increases the performance and robustness of the gated GNNs.
Antonio Marino, Claudio Pacchierotti, Paolo Robuffo Giordano
2023-05-30T17:26:19Z
http://arxiv.org/abs/2305.19235v3
# On Stability of Gated Graph Neural Networks

###### Abstract

In this paper, we aim to find the conditions for input-state stability (ISS) and incremental input-state stability (\(\delta\)ISS) of Gated Graph Neural Networks (GGNNs). We show that this recurrent version of Graph Neural Networks (GNNs) can be expressed as a dynamical distributed system and, as a consequence, can be analysed using model-based techniques to assess its stability and robustness properties. Then, the stability criteria found can be exploited as constraints during the training process to enforce the internal stability of the neural network. Two distributed control examples, flocking and multi-robot motion control, show that using these conditions increases the performance and robustness of the gated GNNs.

distributed control, graph neural network, stability analysis

## I Introduction

Multi-agent systems have been successfully studied in the past few years [1]. With respect to single-agent approaches, coordinated multi-agent systems are expected to collaboratively solve tasks and offer more flexibility, features that make these systems well suited to problems in a variety of disciplines including computer science, electrical engineering, and robotics [2]. The collaborative control of multiple agents must take into account the needs of the group. For multi-robot applications, individual robot motion should be generated by using not only local sensing data, but also knowledge about the group state, usually retrieved through communication with a limited number of (neighboring) team members [3]. Hence, communication is one of the key elements to realize distributed solutions in multi-agent systems. In the last decade, the control community has widely adopted neural networks in data-driven control applications, taking advantage of their superior approximation capabilities [4].
In distributed control, neural networks are particularly convenient since they can approximate distributed policies without the need for cumbersome optimizations and designs. In the literature, we can find multiple examples of data-driven approaches, especially based on reinforcement learning [5, 6], fed by input data such as images [7]. However, these approaches work on local sensing data without communicating with the other agents. Communication is used in distributed machine learning, where the learning process is partitioned over several machines that contribute to the group knowledge [8]. On the contrary, in data-driven distributed control, a recent trend involves using Graph Neural Networks (GNNs) to encode distributed control and planning solutions. GNNs perform prediction and analysis on graphs, a convenient topological representation for different kinds of problems like text classification, protein interface prediction, and social network decisions [9]. For the latter, GNNs are more effective than classical neural network architectures [10]. At the same time, they gave a new perspective on the realization of distributed control. Gama et al. [11] use GNNs to define a distributed LQR controller, casting the linear-quadratic problem as self-supervised learning to find the best GNN-based distributed control, thanks to the natively distributed nature of GNNs. The same authors [12] develop a GNN-based flocking control deployed at large team scales. Further examples can be found in space coverage [13], multi-robot path [14] and motion planning [15], also in obstacle-rich environments [16]. GNNs are also used in approaches for enhancing multi-agent perception [17] or performing distributed active information acquisition [18], translating the multi-robot information-gathering problem into a graph representation and formulating a GNN-based decision maker.
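The incremental stability property the paper targets (\(\delta\)ISS, introduced in the abstract) can be previewed numerically on a toy gated recurrence over a small graph (our own simplification with made-up gains, not the paper's GGNN model or proof): when the state-update map is a contraction, two trajectories driven by the same inputs forget their initial conditions.

```python
import numpy as np

# Toy numerical illustration of deltaISS (our simplification, not the
# paper's construction): a gated recurrent update on a 4-node ring graph,
#   x+ = z * x + (1 - z) * tanh(a * (A_hat @ x) + b * u).
# With gate z in (0,1) and a small enough, the state-to-state map is a
# contraction, so two trajectories driven by the SAME input sequence
# converge to each other regardless of the initial states -- the
# behaviour that deltaISS formalises.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
A_hat = A / A.sum(axis=1, keepdims=True)   # row-normalised adjacency

def step(x, u, a=0.5, b=1.0, z=0.4):
    # Contraction factor in the infinity-norm: z + (1 - z) * a = 0.7 < 1,
    # since ||A_hat||_inf = 1 and |tanh'| <= 1.
    return z * x + (1 - z) * np.tanh(a * (A_hat @ x) + b * u)

rng = np.random.default_rng(1)
x1 = rng.uniform(-5, 5, 4)      # two very different initial states...
x2 = rng.uniform(-5, 5, 4)
for _ in range(200):
    u = rng.uniform(-1, 1, 4)   # ...driven by the same input sequence
    x1, x2 = step(x1, u), step(x2, u)
print(np.max(np.abs(x1 - x2)) < 1e-9)   # prints True
```

Enforcing such contraction-type inequalities on the trained weights is, in spirit, what the stability constraints during training achieve.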
In the recent literature, a very relevant discussion concerns making data-driven methods robust and stable [19]. In this context, many works applied contraction analysis to demonstrate recurrent neural network stability [20], or directly closed-loop stability in continuous learning [21] and adaptive control [22]. Other works have attempted to formulate new neural network models for achieving closed-loop stability, such as [23], where the controller is obtained from a Hamiltonian Deep Neural Network and stability is guaranteed by the compositional properties of port-Hamiltonian systems. Recently, the authors in [24, 25] demonstrated ISS and incremental ISS (\(\delta\)ISS) [26] for LSTMs and GRUs, two of the most popular recurrent neural network models. Inspired by these last results, the goal of this work is to characterize the \(\delta\)ISS properties of the recurrent version of GNNs, i.e., Gated Graph Neural Networks (GGNNs) [27]. These models use gating mechanisms to deploy distributed recurrent models able to reason over temporal and spatial relationships among the agents. The \(\delta\)ISS property is stronger than plain ISS and leads to asymptotic convergence of two state trajectories when their respective input sequences are close, regardless of the initial conditions of the states. Therefore, we directly focus on the incremental stability results, avoiding the complexity of other frameworks, e.g., contraction analysis. To the best of our knowledge, this is the first time ISS and \(\delta\)ISS are proven for GGNNs, whereas previous works have focused on limited stability results like stability to graph perturbations [27, 28]. Instead, this letter considers
2305.14077
Mind the spikes: Benign overfitting of kernels and neural networks in fixed dimension
The success of over-parameterized neural networks trained to near-zero training error has caused great interest in the phenomenon of benign overfitting, where estimators are statistically consistent even though they interpolate noisy training data. While benign overfitting in fixed dimension has been established for some learning methods, current literature suggests that for regression with typical kernel methods and wide neural networks, benign overfitting requires a high-dimensional setting where the dimension grows with the sample size. In this paper, we show that the smoothness of the estimators, and not the dimension, is the key: benign overfitting is possible if and only if the estimator's derivatives are large enough. We generalize existing inconsistency results to non-interpolating models and more kernels to show that benign overfitting with moderate derivatives is impossible in fixed dimension. Conversely, we show that rate-optimal benign overfitting is possible for regression with a sequence of spiky-smooth kernels with large derivatives. Using neural tangent kernels, we translate our results to wide neural networks. We prove that while infinite-width networks do not overfit benignly with the ReLU activation, this can be fixed by adding small high-frequency fluctuations to the activation function. Our experiments verify that such neural networks, while overfitting, can indeed generalize well even on low-dimensional data sets.
Moritz Haas, David Holzmüller, Ulrike von Luxburg, Ingo Steinwart
2023-05-23T13:56:29Z
http://arxiv.org/abs/2305.14077v2
# Mind the spikes: Benign overfitting of kernels and neural networks in fixed dimension

###### Abstract

The success of over-parameterized neural networks trained to near-zero training error has caused great interest in the phenomenon of benign overfitting, where estimators are statistically consistent even though they interpolate noisy training data. While benign overfitting in fixed dimension has been established for some learning methods, current literature suggests that for regression with typical kernel methods and wide neural networks, benign overfitting requires a high-dimensional setting where the dimension grows with the sample size. In this paper, we show that the smoothness of the estimators, and not the dimension, is the key: benign overfitting is possible if and only if the estimator's derivatives are large enough. We generalize existing inconsistency results to non-interpolating models and more kernels to show that benign overfitting with moderate derivatives is impossible in fixed dimension. Conversely, we show that benign overfitting is possible for regression with a sequence of spiky-smooth kernels with large derivatives. Using neural tangent kernels, we translate our results to wide neural networks. We prove that while infinite-width networks do not overfit benignly with the ReLU activation, this can be fixed by adding small high-frequency fluctuations to the activation function. Our experiments verify that such neural networks, while overfitting, can indeed generalize well even on low-dimensional data sets.

## 1 Introduction

While neural networks have shown great practical success, our theoretical understanding of their generalization properties is still limited. A promising line of work considers the phenomenon of benign overfitting, where researchers try to understand when and how models that interpolate noisy training data can generalize (Zhang et al., 2021; Belkin et al., 2018, 2019).
In the high-dimensional regime, where the dimension grows with the number of sample points, consistency of minimum-norm interpolants has been established for linear models and kernel regression (Hastie et al., 2022; Bartlett et al., 2020; Liang and Rakhlin, 2020; Bartlett et al., 2021). In fixed dimension, minimum-norm interpolation with standard kernels is inconsistent (Rakhlin and Zhai, 2019; Buchholz, 2022). In this paper, we shed a differentiated light on benign overfitting with kernels and neural networks. We argue that the dimension-dependent perspective does not capture the full picture of benign overfitting. In particular, we show that harmless interpolation with kernel methods and neural networks is possible, even in small fixed dimension, with adequately designed kernels and activation functions. The key is to properly design estimators of the form'signal+spike'. While minimum-norm criteria have widely been considered a useful inductive bias, we demonstrate that designing unusual norms can resolve the shortcomings of standard norms. For wide neural networks, harmless interpolation can be realized by adding tiny fluctuations to the activation function. Such networks do not require regularization and can simply be trained to overfit (Figure 1). On a technical level, we additionally prove that overfitting in kernel regression can only be consistent if the estimators have large derivatives. Using neural tangent kernels or neural network Gaussian process kernels, we can translate our results from kernel regression to the world of neural networks (Neal, 1996; Jacot et al., 2018). In particular, our results enable the design of activation functions that induce benign overfitting in fixed dimension: the spikes in kernels can be translated into infinitesimal fluctuations that can be added to an activation function to achieve harmless interpolation with neural networks. 
Such small high-frequency oscillations can fit noisy observations without affecting the smooth component too much. Training finite neural networks with gradient descent shows that spiky-smooth activation functions can indeed achieve good generalization even when interpolating small, low-dimensional data sets (Figure 1 b,c). Thanks to new technical contributions, our inconsistency results significantly extend existing ones. We use a novel noise concentration argument (Lemma D.6) to generalize existing inconsistency results on minimum-norm interpolants to the much more realistic regime of overfitting estimators with comparable Sobolev norm scaling, which includes training via gradient flow and gradient descent with "late stopping" as well as low levels of ridge regression. Moreover, a novel connection to eigenvalue concentration results for kernel matrices (Proposition 4) allows us to relax the smoothness assumption and to treat heteroscedastic noise in Theorem 5. Lastly, our Lemma E.1 translates inconsistency results from bounded open subsets of \(\mathbb{R}^{d}\) to the sphere \(\mathbb{S}^{d}\), which leads to results for the neural tangent kernel and neural network Gaussian processes.

## 2 Setup and prerequisites

**General approach.** We consider a general regression problem on \(\mathbb{R}^{d}\) with an arbitrary, fixed dimension \(d\) and analyze kernel-based approaches to solve this problem: kernel ridge regression, kernel gradient flow and gradient descent, minimum-norm interpolation, and more generally, overfitting norm-bounded estimators. We then translate our results to neural networks via the neural network Gaussian process and the neural tangent kernel. Let us now introduce the formal framework.

**Notation.** We denote scalars by lowercase letters \(x\), vectors by bold lowercase letters \(\mathbf{x}\) and matrices by bold uppercase letters \(\mathbf{X}\).
We denote the eigenvalues of \(\mathbf{A}\) as \(\lambda_{1}(\mathbf{A})\geq\ldots\geq\lambda_{n}(\mathbf{A})\) and the Moore-Penrose pseudo-inverse by \(\mathbf{A}^{+}\). We say that a probability distribution \(P\) has lower and upper bounded density if its density \(p\) satisfies \(0<c<p(\mathbf{x})<C\) for suitable constants \(c,C\) and all \(\mathbf{x}\) on a given domain.

**Regression setup.** We consider a data set \(D=((\mathbf{x}_{1},y_{1}),\ldots,(\mathbf{x}_{n},y_{n}))\in(\mathbb{R}^{d}\times\mathbb{R})^{n}\) with i.i.d. samples \((\mathbf{x}_{i},y_{i})\sim P\), written as \(D\sim P^{n}\), where \(P\) is a probability distribution on \(\mathbb{R}^{d}\times\mathbb{R}\). We define \(\mathbf{X}\coloneqq(\mathbf{x}_{1},\ldots,\mathbf{x}_{n})\) and \(\mathbf{y}\coloneqq(y_{1},\ldots,y_{n})^{\top}\in\mathbb{R}^{n}\). Random variables \((\mathbf{x},y)\sim P\) denote test points independent of \(D\), and \(P_{X}\) denotes the probability distribution of \(\mathbf{x}\).

Figure 1: **Spiky-smooth overfitting in 2 dimensions.** **a.** We plot the predicted function for ridgeless kernel regression with the Laplace kernel (blue) versus our spiky-smooth kernel (4) with Laplace components (orange) on \(\mathbb{S}^{1}\). The dashed black line shows the true regression function, black 'x' denote noisy training points. Further details can be found in Section 6.2. **b.** The predicted function of a trained 2-layer neural network with ReLU activation (blue) versus ReLU plus shifted high-frequency sin-function (8) (orange). Using the weights learned with the spiky-smooth activation function in a ReLU network (green) disentangles the spike component from the signal component. **c.** Training error (solid lines) and test error (dashed lines) over the course of training for b, evaluated on \(10^{4}\) test points. The dotted black line shows the optimal test error. The spiky-smooth activation function does not require regularization and can simply be trained to overfit.
The (least squares) _empirical risk_ \(R_{D}\) and _population risk_ \(R_{P}\) of a function \(f:\mathbb{R}^{d}\to\mathbb{R}\) are defined as \[R_{D}(f)\coloneqq\frac{1}{n}\sum_{i=1}^{n}(y_{i}-f(\mathbf{x}_{i}))^{2},\qquad R_{P}(f)\coloneqq\mathbb{E}_{\mathbf{x},y}[(y-f(\mathbf{x}))^{2}]\.\] We assume \(\operatorname{Var}(y|\mathbf{x})<\infty\) for all \(\mathbf{x}\). Then, \(R_{P}\) is minimized by the target function \(f_{P}^{*}(\mathbf{x})=\mathbb{E}[y|\mathbf{x}]\), and the _excess risk_ of a function \(f\) is given by \[R_{P}(f)-R_{P}(f_{P}^{*})=\mathbb{E}_{\mathbf{x}}(f_{P}^{*}(\mathbf{x})-f(\mathbf{x}))^{2}\.\] We call a data-dependent estimator \(f_{D}\) _consistent for \(P\)_ if its excess risk converges to \(0\) in probability, that is, for all \(\varepsilon>0\), \(\lim_{n\to\infty}P^{n}\left(D\in(\mathbb{R}^{d}\times\mathbb{R})^{n}\ |\ \ R_{P}(f_{D})-R_{P}(f_{P}^{*})\geq \varepsilon\right)=0.\) We call \(f_{D}\) _consistent in expectation for \(P\)_ if \(\lim_{n\to\infty}\mathbb{E}_{D}R_{P}(f_{D})-R_{P}(f_{P}^{*})=0\). We call \(f_{D}\) _universally consistent_ if it is consistent for all Borel probability measures \(P\) on \(\mathbb{R}^{d}\times\mathbb{R}\). **Solutions by kernel regression.** Recall that a kernel \(k\) induces a reproducing kernel Hilbert space \(\mathcal{H}_{k}\), abbreviated RKHS (more details in Appendix B). For \(f\in\mathcal{H}_{k}\), we consider the objective \[\mathcal{L}_{\rho}(f)\coloneqq\frac{1}{n}\sum_{i=1}^{n}(y_{i}-f(\mathbf{x}_{i}))^{2}+\rho\|f\|_{\mathcal{H}_{k}}^{2}\] with regularization parameter \(\rho\geq 0\). Denote by \(f_{t,\rho}\) the solution to this problem that is obtained by optimizing \(\mathcal{L}_{\rho}\) in \(\mathcal{H}_{k}\) with gradient flow until time \(t\in[0,\infty]\), using a fixed regularization constant \(\rho>0\), and initializing at \(f=0\in\mathcal{H}_{k}\).
We show in Appendix C.1 that it is given as \[f_{t,\rho}(\mathbf{x})\coloneqq k(\mathbf{x},\mathbf{X})\left(\mathbf{I}_{n}-e^{-\frac{2}{n}t (k(\mathbf{X},\mathbf{X})+\rho n\mathbf{I}_{n})}\right)\left(k(\mathbf{X},\mathbf{X})+\rho n\mathbf{I} _{n}\right)^{-1}\mathbf{y}\, \tag{1}\] where \(k(\mathbf{x},\mathbf{X})\) denotes the row vector \((k(\mathbf{x},\mathbf{x}_{i}))_{i\in[n]}\) and \(k(\mathbf{X},\mathbf{X})=(k(\mathbf{x}_{i},\mathbf{x}_{j}))_{i,j\in[n]}\) the kernel matrix. \(f_{t,\rho}\) elegantly subsumes several popular kernel regression estimators as special cases: (i) classical kernel ridge regression for \(t\to\infty\), (ii) gradient flow on the unregularized objective for \(\rho\searrow 0\), and (iii) kernel "ridgeless" regression \(f_{\infty,0}(\mathbf{x})=k(\mathbf{x},\mathbf{X})k(\mathbf{X},\mathbf{X})^{+}\mathbf{y}\) in the joint limit of \(\rho\to 0\) and \(t\to\infty\). If \(k(\mathbf{X},\mathbf{X})\) is invertible, \(f_{\infty,0}\) is the interpolating function \(f\in\mathcal{H}_{k}\) with the smallest \(\mathcal{H}_{k}\)-norm. **From kernels to neural networks: the neural tangent kernel (NTK) and the neural network Gaussian process (NNGP).** Denote the output of a NN with parameters \(\mathbf{\theta}\) on input \(\mathbf{x}\) by \(f_{\mathbf{\theta}}(\mathbf{x})\). It is known that for suitable random initializations \(\mathbf{\theta}_{0}\), in the infinite-width limit the random initial function \(f_{\mathbf{\theta}_{0}}\) converges in distribution to a Gaussian Process with the so-called Neural Network Gaussian Process (NNGP) kernel (Neal, 1996; Lee et al., 2018; Matthews et al., 2018). In Bayesian inference, the posterior mean function is then of the form \(f_{\infty,\rho}\). With minor modifications (Arora et al., 2019; Zhang et al., 2020), training infinitely wide NNs with gradient flow corresponds to learning the function \(f_{t,0}\) with the neural tangent kernel (NTK) (Jacot et al., 2018; Lee et al., 2019). 
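As a concrete sanity check of Eq. (1), a minimal sketch (our own toy 1-d instance, with an assumed Laplace kernel and made-up data) can evaluate \(f_{t,\rho}\) spectrally, since \(M=k(\mathbf{X},\mathbf{X})+\rho n\mathbf{I}_{n}\) is symmetric and the matrix exponential is diagonal in its eigenbasis:

```python
import numpy as np

# Sketch of the gradient-flow estimator f_{t,rho} of Eq. (1) for a
# Laplace kernel on toy 1-d data. The matrix exponential of the
# symmetric matrix M = K + rho*n*I is applied via eigendecomposition.
def laplace_kernel(X1, X2, gamma=1.0):
    return np.exp(-gamma * np.abs(X1[:, None] - X2[None, :]))

def f_t_rho(x_test, X, y, t, rho, gamma=1.0):
    n = len(X)
    K = laplace_kernel(X, X, gamma)
    evals, V = np.linalg.eigh(K + rho * n * np.eye(n))
    # (I - exp(-2 t M / n)) M^{-1} applied spectrally; evals > 0 since
    # the Laplace kernel matrix is positive definite.
    coef = (1.0 - np.exp(-2.0 * t * evals / n)) / evals
    alpha = V @ (coef * (V.T @ y))
    return laplace_kernel(x_test, X, gamma) @ alpha

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(-1, 1, 20))
y = np.sin(3 * X) + 0.1 * rng.standard_normal(20)

# Large t, rho = 0: ridgeless regression, which interpolates the data.
pred_interp = f_t_rho(X, X, y, t=1e9, rho=0.0)
# t -> infinity with rho > 0: classical kernel ridge regression (shrinks).
pred_ridge = f_t_rho(X, X, y, t=np.inf, rho=0.1)

print(np.max(np.abs(pred_interp - y)) < 1e-4)   # True: interpolation
print(np.max(np.abs(pred_ridge - y)) > 1e-2)    # True: no interpolation
```

The two prints exhibit the limiting cases (iii) and (i) described in the text.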
If only the last layer is trained, the NNGP kernel should be used instead (Daniely et al., 2016). For ReLU activation functions, the RKHS of the infinite-width NNGP and NTK on the sphere \(\mathbb{S}^{d}\) is typically a Sobolev space (Bietti and Bach, 2021; Chen and Xu, 2021), see Appendix B.4.

## 3 Related work

We here provide a short summary of related work. A more detailed account is provided in Appendix A.

**Kernel regression.** With appropriate regularization, kernel ridge regression with typical universal kernels like the Gauss, Matérn, and Laplace kernels is universally consistent (Steinwart and Christmann, 2008, Chapter 9). Optimal rates in Sobolev RKHSs can also be achieved using cross-validation of the regularization \(\rho\) (Steinwart et al., 2009) or early stopping rules (Yao et al., 2007; Raskutti et al., 2014; Wei et al., 2017). In the high-dimensional regime, the class of functions that is learnable with rotation-invariant kernels is quite limited (Donhauser et al., 2021; Ghorbani et al., 2021; Liang et al., 2020).

**Inconsistency results.** Besides Rakhlin and Zhai (2019) and Buchholz (2022), Beaglehole et al. (2022) derive inconsistency results for ridgeless kernel regression given assumptions on the spectral tail in the Fourier basis, and Li et al. (2023) show that polynomial convergence is impossible for common kernels including ReLU NTKs. Mallinar et al. (2022) conjecture inconsistency for interpolation with ReLU NTKs based on their semi-rigorous result, which essentially assumes that the eigenfunctions can be replaced by structureless Gaussian random variables. Lai et al. (2023) show an inconsistency-type result for overfitting two-layer ReLU NNs with \(d=1\), but for fixed inputs \(\mathbf{X}\). They also note that an earlier inconsistency result by Hu et al. (2021) relies on an unproven result.
Mücke and Steinwart (2019) show that global minima of NNs can overfit both benignly and harmfully, but their result does not apply to gradient descent training. Overfitting with typical linear models around the interpolation peak is inconsistent (Ghosh and Belkin, 2022; Holzmüller, 2021).

**Classification.** For binary classification, benign overfitting is a more generic phenomenon than for regression (Muthukumar et al., 2021; Shamir, 2022), and consistency has been shown under linear separability assumptions (Montanari et al., 2019; Chatterji and Long, 2021; Frei et al., 2022), through complexity bounds for reference classes (Cao and Gu, 2019; Chen et al., 2019) or as long as the total variation distance of the class conditionals is sufficiently large and \(f^{*}(\mathbf{x})=\mathbb{E}[y|\mathbf{x}]\) lies in the RKHS with bounded norm (Liang and Recht, 2023). Chapter 8 of Steinwart and Christmann (2008) discusses how the overlap of the two classes may influence learning rates under positive regularization.

## 4 Inconsistency of overfitting with common kernel estimators

We consider a regression problem on \(\mathbb{R}^{d}\) in arbitrary, fixed dimension \(d\) that is solved by kernel regression. In this section, we derive several new results stating that overfitting estimators with moderate Sobolev norm are inconsistent, in a variety of settings. In the next section, we establish the other direction: overfitting estimators can be consistent when we adapt the norm that is minimized.

### Beyond minimum-norm interpolants: general overfitting estimators with bounded norm

Existing generalization bounds often consider the perfect minimum-norm interpolant. This is a rather theoretical construction; estimators obtained by training with gradient descent algorithms merely overfit and, in the best case, approximate interpolants with small norm.
In this section, we extend existing bounds to arbitrary overfitting estimators whose norm does not grow faster than the minimum norm that would be required to interpolate the training data. Before we can state the theorem, we need to establish some technical assumptions.

**Assumptions on the data generating process.** The following assumptions (as in Buchholz (2022)) allow for quite general domains and distributions. They are standard in nonparametric statistics.

* **(D1)** Let \(P_{X}\) be a distribution on a bounded open Lipschitz domain \(\Omega\subseteq\mathbb{R}^{d}\) with lower and upper bounded Lebesgue density. Consider data sets \(D=\{(\mathbf{x}_{1},y_{1}),\ldots,(\mathbf{x}_{n},y_{n})\}\), where \(\mathbf{x}_{i}\sim P_{X}\) i.i.d. and \(y_{i}=f^{*}(\mathbf{x}_{i})+\varepsilon_{i}\), where \(\varepsilon_{i}\) is i.i.d. Gaussian noise with positive variance \(\sigma^{2}>0\) and \(f^{*}\in C_{c}^{\infty}(\Omega)\backslash\{0\}\) denotes a smooth function with compact support.

**Assumptions on the kernel.** Our assumption on the kernel is that its RKHS is equivalent to a Sobolev space. For integers \(s\in\mathbb{N}\), the norm of a Sobolev space \(H^{s}(\Omega)\) can be defined as \[\|f\|_{H^{s}(\Omega)}^{2}:=\sum_{0\leq|\alpha|\leq s}\|D^{\alpha}f\|_{L_{2}(\Omega)}^{2},\] where \(D^{\alpha}\) denotes partial derivatives in multi-index notation for \(\alpha\). It measures the magnitude of derivatives up to some order \(s\). For general \(s>0\), \(H^{s}(\Omega)\) is (equivalent to) an RKHS if and only if \(s>d/2\). For example, Laplace and Matérn kernels (Kanagawa et al., 2018, Example 2.6) have Sobolev RKHSs. The RKHS of the Gaussian kernel \(\mathcal{H}^{\mathrm{Gauss}}\) is contained in every Sobolev space, \(\mathcal{H}^{\mathrm{Gauss}}\subsetneq H^{s}\) for all \(s\geq 0\) (Steinwart and Christmann, 2008, Corollary 4.36). Due to its smoothness, the Gaussian kernel is potentially even more prone to harmful overfitting than Sobolev kernels (Mallinar et al., 2022).
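The "large derivatives" intuition behind this Sobolev norm can be checked numerically (a toy sketch of our own, using \(s=1\) and simple finite differences): adding a narrow bump barely changes the \(L_2\) part, but the derivative part of the \(H^1\) norm blows up as the bump sharpens.

```python
import numpy as np

# Finite-difference H^1 Sobolev norm on a grid: L2 part plus the L2
# norm of the first derivative. A spiky function has a large H^1 norm
# even when its L2 norm is small.
x = np.linspace(0.0, 1.0, 10001)
h = x[1] - x[0]

def h1_norm_sq(f):
    df = np.diff(f) / h                        # first-derivative term
    return np.sum(f ** 2) * h + np.sum(df ** 2) * h

smooth = np.sin(2 * np.pi * x)
vals = []
for width in (1e-1, 1e-2, 1e-3):
    spike = np.exp(-((x - 0.5) / width) ** 2)  # bump of shrinking width
    vals.append(h1_norm_sq(smooth + spike))
print([round(v, 1) for v in vals])             # grows roughly like 1/width
```

The spike's derivative energy scales like \(1/\mathrm{width}\), which is exactly the kind of norm growth that Assumption (N) below rules out and Section 5 exploits.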
We make the following assumption on the kernel:

* **(K)** Let \(k\) be a positive definite kernel function whose RKHS \(\mathcal{H}_{k}\) is equivalent to the Sobolev space \(H^{s}\) for \(s\in(\frac{d}{2},\frac{3d}{4}]\).

Now we are ready to state the main result of this section:

**Theorem 1** (**Overfitting estimators with small norms are inconsistent**).: _Let assumptions (D1) and (K) hold. Let \(c_{\mathrm{fit}}\in(0,1]\) and \(C_{\mathrm{norm}}>0\). Then, there exist \(c>0\) and \(n_{0}\in\mathbb{N}\) such that the following holds for all \(n\geq n_{0}\) with probability \(1-O(1/n)\) over the draw of the data set \(D\) with \(n\) samples: Every function \(f\in\mathcal{H}_{k}\) that satisfies the following two conditions_

* _(O)_ \(\frac{1}{n}\sum_{i=1}^{n}(f(\mathbf{x}_{i})-y_{i})^{2}\leq(1-c_{\mathrm{fit}})\cdot\sigma^{2}\) _(training error of_ \(f\) _is below the Bayes risk),_
* _(N)_ \(\|f\|_{\mathcal{H}_{k}}\leq C_{\mathrm{norm}}\|f_{\infty,0}\|_{\mathcal{H}_{k}}\) _(norm comparable to the minimum-norm interpolant (_1_)),_

_has an excess risk that satisfies_ \[R_{P}(f)-R_{P}(f^{*})\geq c>0. \tag{2}\]

In words: in fixed dimension \(d\), every differentiable function \(f\) that overfits the training data and is not much "spikier" than the minimum RKHS-norm interpolant is inconsistent!

**Proof idea.** Our proof follows a similar approach as Rakhlin and Zhai (2019); Buchholz (2022), and also holds for kernels with adaptive bandwidths. For small bandwidths, \(\|f_{\infty,0}\|_{L_{2}(P_{X})}\) is too small, because \(f_{\infty,0}\) decays to \(0\) between the training points, which shows that purely "spiky" estimators are inconsistent. For all other bandwidths, interpolating \(\Theta(n)\) many noisy labels \(y_{i}\) incurs \(\Theta(1)\) error in an area of volume \(\Omega(1/n)\) around \(\Theta(n)\) data points with high probability, which accumulates to a total error of \(\Omega(1)\). Our observation is that the same logic holds when overfitting by a constant fraction.
Formally, we show that \(f^{*}\) and \(f\) must then be separated by a constant on a constant fraction of training points, with high probability, by using the fact that a constant fraction of the total noise cannot concentrate on fewer than \(\Theta(n)\) noise variables, with high probability (Lemma D.6). The full proof can be found in Appendix D. Assumption (O) is necessary in Theorem 1, because optimally regularized kernel ridge regression fulfills all other assumptions of Theorem 1 while achieving consistency with minimax optimal convergence rates (see Section 3). The necessity of Assumption (N) is demonstrated by Section 5. The following proposition establishes that Theorem 1 covers the entire overfitting regime of the popular (regularized) gradient flow estimators \(f_{t,\rho}\) for all times \(t\in[0,\infty]\) and any regularization \(\rho\geq 0\). The proof in Appendix C.2 also covers gradient descent.

**Proposition 2** (**Popular estimators fulfill the norm bound (N)**).: _Let \(t\in[0,\infty]\) and let \(\rho\geq 0\) be arbitrary. Then \(f_{t,\rho}\) as defined in (1) fulfills Assumption (N) with \(C_{\mathrm{norm}}=1\)._

### Inconsistency of overfitting with neural kernels

We would now like to apply the above results to neural kernels, which would allow us to translate our inconsistency results from the kernel domain to neural networks. However, to achieve this, we need to take one more technical hurdle: the equivalence results for NTKs and NNGPs only hold for probability distributions on the sphere \(\mathbb{S}^{d}\) (detailed summary in Appendix B.4). Lemma E.1 provides the missing technical link: it establishes a smooth correspondence between the respective kernels, Sobolev spaces, and probability distributions. The inconsistency of overfitting with (deep) ReLU NTKs and NNGP kernels then immediately follows from adapting Theorem 1 via Lemma E.1.
**Theorem 3** (**Overfitting with neural network kernels in fixed dimension is inconsistent**).: _Let \(c\in(0,1)\), and let \(P\) be a probability distribution with lower and upper bounded Lebesgue density on an arbitrary spherical cap \(T\coloneqq\{\mathbf{x}\in\mathbb{S}^{d}\mid x_{d+1}<v\}\subseteq\mathbb{S}^{d}\), \(v\in(-1,1)\). Let \(k\) either be_

* _(i) the fully-connected ReLU NTK with \(0\)-initialized biases of any fixed depth \(L\geq 2\), and \(d\geq 2\), or_
* _(ii) the fully-connected ReLU NNGP kernel without biases of any fixed depth \(L\geq 3\), and \(d\geq 6\)._

_Then, if \(f_{t,\rho}\) fulfills Assumption \((O)\) with probability at least \(c\) over the draw of the data set \(D\), \(f_{t,\rho}\) is inconsistent for \(P\)._

Theorem 3 also holds for more general estimators as in Theorem 1, cf. the proof in Appendix E. Mallinar et al. (2022) already observed empirically that overfitting common network architectures yields suboptimal generalization performance on large data sets in fixed dimension. Theorem 3 now provides a rigorous proof for this phenomenon, since sufficiently wide trained neural networks and the corresponding NTKs have a similar generalization behavior (e.g. (Arora et al., 2019, Theorem 3.2)).

### Relaxing smoothness and noise assumptions via spectral concentration bounds

In this section, we consider a different approach to derive lower bounds for the generalization error of overfitting kernel regression: through concentration results for the eigenvalues of kernel matrices. On a high level, we obtain similar results as in the last section. The novelty of this section is on the technical side, and we suggest that non-technical readers skip this section on a first reading.
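At a toy scale, the message of the inconsistency results above can be observed directly (our own small experiment, not one from the paper): in \(d=1\), near-ridgeless Laplace-kernel regression that fits noisy data keeps a noise-level test error, while a regularized fit with the same kernel does much better.

```python
import numpy as np

# Near-interpolation vs. regularisation for kernel ridge regression with
# a Laplace kernel on a 1-d toy problem (illustrative sketch; bandwidth,
# sample size and noise level are made-up choices).
def laplace(X1, X2, gamma=3.0):
    return np.exp(-gamma * np.abs(X1[:, None] - X2[None, :]))

def krr(X, y, x_test, lam):
    n = len(X)
    alpha = np.linalg.solve(laplace(X, X) + lam * n * np.eye(n), y)
    return laplace(x_test, X) @ alpha

rng = np.random.default_rng(0)
f_star = lambda x: np.sin(4 * x)
n = 500
X = rng.uniform(-1, 1, n)
y = f_star(X) + rng.standard_normal(n)      # noise variance sigma^2 = 1
x_test = rng.uniform(-1, 1, 5000)

excess = lambda pred: np.mean((pred - f_star(x_test)) ** 2)
err_interp = excess(krr(X, y, x_test, lam=1e-12))  # near-ridgeless
err_ridge = excess(krr(X, y, x_test, lam=1e-2))    # moderate ridge
print(err_ridge < err_interp)                       # prints True
```

The near-interpolant tracks the noise between samples, so its excess risk stays bounded away from zero, in line with Theorem 1, while the regularized estimator behaves like consistent kernel ridge regression.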
We define the convolution kernel of a given kernel \(k\) as \(k_{*}(\mathbf{x},\mathbf{x}^{\prime})\coloneqq\int k(\mathbf{x},\mathbf{x}^{\prime\prime})k( \mathbf{x}^{\prime\prime},\mathbf{x}^{\prime})\,\mathrm{d}P_{X}(\mathbf{x}^{\prime\prime})\), which is possible whenever \(k(\mathbf{x},\cdot)\in L_{2}(P_{X})\) for all \(\mathbf{x}\). The latter condition is satisfied for bounded kernels. Our starting point is the following new lower bound: **Proposition 4** (**Spectral lower bound**).: _Assume that the kernel matrix \(k(\mathbf{X},\mathbf{X})\) is almost surely positive definite, and that \(\mathrm{Var}(y|\mathbf{x})\geq\sigma^{2}\) for \(P_{X}\)-almost all \(\mathbf{x}\). Then, the expected excess risk satisfies_ \[\mathbb{E}_{D}R_{P}(f_{t,\rho})-R_{P}^{*}\geq\frac{\sigma^{2}}{n}\sum_{i=1}^{ n}\mathbb{E}_{\mathbf{X}}\frac{\lambda_{i}(k_{*}(\mathbf{X},\mathbf{X})/n)\left(1-e^{-2t( \lambda_{i}(k(\mathbf{X},\mathbf{X})/n)+\rho)}\right)^{2}}{(\lambda_{i}(k(\mathbf{X},\mathbf{ X})/n)+\rho)^{2}}. \tag{3}\] Using concentration inequalities for kernel matrices and the relation between the integral operators of \(k\) and \(k_{*}\), it can be seen that for \(t=\infty\) and \(\rho=0\), every term in the sum in Eq. (3) should converge to \(1\) as \(n\to\infty\). However, since the number of terms in the sum increases with \(n\) and the convergence may not be uniform, this is not sufficient to show inconsistency in expectation. Instead, relative concentration bounds that are even stronger than the ones by Valdivia (2018) would be required to show inconsistency in expectation. 
However, by combining multiple weaker bounds and further arguments on kernel equivalences, we can still show inconsistency in expectation for a class of dot-product kernels on the sphere, including certain NTK and NNGP kernels (Appendix B.4): **Theorem 5** (**Inconsistency for Sobolev dot-product kernels on the sphere**).: _Let \(k\) be a dot-product kernel on \(\mathbb{S}^{d}\), i.e., a kernel of the form \(k(\mathbf{x},\mathbf{x}^{\prime})=\kappa(\langle\mathbf{x},\mathbf{x}^{\prime}\rangle)\), such that its RKHS \(\mathcal{H}_{k}\) is equivalent to a Sobolev space \(H^{s}(\mathbb{S}^{d})\), \(s>d/2\). Moreover, let \(P\) be a distribution on \(\mathbb{S}^{d}\times\mathbb{R}\) such that \(P_{X}\) has a lower and upper bounded density w.r.t. the uniform distribution \(\mathcal{U}(\mathbb{S}^{d})\), and such that \(\mathrm{Var}(y|\mathbf{x})\geq\sigma^{2}>0\) for \(P_{X}\)-almost all \(\mathbf{x}\in\mathbb{S}^{d}\). Then, for every \(C>0\), there exists \(c>0\) independent of \(\sigma^{2}\) such that for all \(n\geq 1\), \(t\in(C^{-1}n^{2s/d},\infty]\), and \(\rho\in[0,Cn^{-2s/d})\), the expected excess risk satisfies_ \[\mathbb{E}_{D}R_{P}(f_{t,\rho})-R_{P}^{*}\geq c\sigma^{2}>0\.\] The assumptions of Theorem 5 and Theorem 3 differ in several ways. Theorem 5 applies to arbitrarily high smoothness \(s\) and therefore to ReLU NTKs and NNGPs in arbitrary dimension \(d\). Moreover, it applies to distributions on the whole sphere and allows more general noise distributions. On the flip side, it only shows inconsistency in expectation, which we believe could be extended to inconsistency for Gaussian noise. Moreover, it only applies to functions of the form \(f_{t,\rho}\) but provides an explicit bound on \(t\) and \(\rho\) to get inconsistency. For \(t=\infty\), the bound \(\rho=O(n^{-2s/d})\) appears to be tight, as larger \(\rho\) yield consistency for comparable Sobolev kernels on \(\mathbb{R}^{d}\)(Steinwart et al., 2009, Corollary 3). 
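As a numerical illustration of the regime covered by Theorem 5 (our own toy setup): the Laplace kernel restricted to the sphere is a dot-product kernel whose RKHS is a Sobolev-type space, and its minimum-norm interpolant (\(t=\infty\), \(\rho=0\)) fits noisy labels exactly while the test error stays pinned above the Bayes risk \(\sigma^2\):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sphere(n):                     # uniform points on S^1
    phi = rng.uniform(0, 2 * np.pi, n)
    return np.stack([np.cos(phi), np.sin(phi)], axis=1)

def laplace_kernel(A, B, bw=1.0):
    # exp(-||x - x'|| / bw); restricted to the sphere this is a dot-product kernel
    dist = np.sqrt(np.maximum(2.0 - 2.0 * A @ B.T, 0.0))
    return np.exp(-dist / bw)

n, sigma = 200, 0.5                       # noise variance sigma^2 = 0.25
X = sample_sphere(n)
y = X[:, 0] + sigma * rng.standard_normal(n)          # f*(x) = x_1 plus noise

alpha = np.linalg.solve(laplace_kernel(X, X), y)      # minimum-norm interpolant

X_te = sample_sphere(500)
y_te = X_te[:, 0] + sigma * rng.standard_normal(500)
pred = laplace_kernel(X_te, X) @ alpha

train_err = np.max(np.abs(laplace_kernel(X, X) @ alpha - y))
test_mse = np.mean((pred - y_te) ** 2)
# train_err is ~0 (interpolation), while test_mse stays around or above sigma^2
```

The bandwidth and sample sizes are arbitrary choices for the sketch; the qualitative picture (zero training error, test error not approaching the noise floor from above) matches the inconsistency statements.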
The spectral lower bounds in Theorem F.2 show that our approach can directly benefit from developing better kernel matrix concentration inequalities. Conversely, the investigation of consistent kernel interpolation might provide information about where such concentration inequalities do not hold. ## 5 Consistency via spiky-smooth estimators - even in fixed dimension In Section 4, we have seen that when common kernel estimators overfit, they are inconsistent for many kernels and a wide variety of distributions. We now design consistent interpolating kernel estimators. The key is to violate Assumption (N) and allow for quickly exploding derivatives. ### Almost universal consistency of spiky-smooth ridgeless kernel regression In high dimensional regimes (where the dimension \(d\) is supposed to grow with the number of data points), benign overfitting of linear and kernel regression has been understood by an additive decomposition of the minimum-norm interpolant into a smooth regularized component that is responsible for good generalization, and a spiky component that interpolates the noisy data points while not harming generalization (Bartlett et al., 2021). This inspires us to enforce such a decomposition in arbitrary fixed dimension by adding a sharp kernel spike \(\rho\tilde{k}_{\gamma_{n}}\) to a common kernel \(\tilde{k}\). In this way, we can still generate any Sobolev RKHS (see Appendix G.2). **Definition 6** (Spiky-smooth kernel).: Let \(\tilde{k}\) denote any universal kernel function on \(\mathbb{R}^{d}\). We call it the smooth component. Consider a second, translation invariant kernel \(\tilde{k}_{\gamma}\) of the form \(\tilde{k}_{\gamma}(\mathbf{x},\mathbf{y})=q(\frac{\mathbf{x}-\mathbf{y}}{\gamma})\), for some function \(q:\mathbb{R}^{d}\to\mathbb{R}\). We call it the spiky component.
Then we define the \(\rho\)_-regularized spiky-smooth kernel with spike bandwidth \(\gamma\)_ as \[k_{\rho,\gamma}(\mathbf{x},\mathbf{y})=\tilde{k}(\mathbf{x},\mathbf{y})+\rho\cdot\tilde{k}_{\gamma}(\mathbf{x},\mathbf{y}),\qquad\mathbf{x},\mathbf{y}\in\mathbb{R}^{d}. \tag{4}\] We now show that the minimum-norm interpolant of the spiky-smooth kernel sequence with properly chosen \(\rho_{n},\gamma_{n}\to 0\) is consistent for a large class of distributions, on a space with fixed (possibly small) dimension \(d\). We establish our result under the following assumption (as in Mücke and Steinwart (2019)), which is weaker than our previous Assumption (D1). * (D2) There exists a constant \(\beta_{X}>0\) and a continuous function \(\phi:[0,\infty)\to[0,1]\) with \(\phi(0)=0\) such that the data generating probability distribution satisfies \(P_{X}(B_{t}(\mathbf{x}))\leq\phi(t)=O(t^{\beta_{X}})\) for all \(\mathbf{x}\in\Omega\) and all \(t\geq 0\) (here \(B_{t}(\mathbf{x})\) denotes the Euclidean ball of radius \(t\) around \(\mathbf{x}\)). **Theorem 7** (Consistency of spiky-smooth ridgeless kernel regression).: _Assume that the training set \(D\) consists of \(n\) i.i.d. pairs \((\mathbf{x},y)\sim P\) such that the marginal \(P_{X}\) fulfills (D2) and \(\mathbb{E}y^{2}<\infty\). Let the kernel components satisfy:_ * \(\tilde{k}\) _is a universal kernel, and_ \(\rho_{n}\to 0\) _and_ \(n\rho_{n}^{4}\to\infty\)_._ * \(\tilde{k}_{\gamma_{n}}\) _denotes the Laplace kernel with a sequence of positive bandwidths_ \((\gamma_{n})\) _fulfilling_ \(\gamma_{n}=O\left(n^{-(2+\alpha)/\beta_{X}}/\log(n)\right)\)_, where_ \(\alpha>0\) _is arbitrary._ _Then the minimum-norm interpolant of the \(\rho_{n}\)-regularized spiky-smooth kernel sequence \(k_{n}\coloneqq k_{\rho_{n},\gamma_{n}}\) is consistent for \(P\)._ Proof idea. With sharp spikes \(\gamma\to 0\), it holds that \(\tilde{k}_{\gamma}(\mathbf{X},\mathbf{X})\approx\mathbf{I}_{n}\), with high probability.
Hence, ridgeless kernel regression with the spiky-smooth kernel interpolates the training set while approximating kernel ridge regression with the smooth component \(\tilde{k}\) and regularization \(\rho\). The theorem even holds under much weaker assumptions on the decay behavior of the spike component \(\tilde{k}_{\gamma_{n}}\), including Gaussian and Matern kernels. The full version of the theorem and its proof can be found in Appendix G. It also applies to kernels and distributions on the sphere \(\mathbb{S}^{d}\). Figure 2: The spiky-smooth kernel with Laplace components (orange) consists of a Laplace kernel (blue) plus a Laplace kernel of height \(\rho\) and small bandwidth \(\gamma\). ### From spiky-smooth kernels to spiky-smooth activation functions So far, our discussion revolved around the properties of kernels and whether they lead to estimators that are consistent. We now turn our attention to the neural network side. The big question is whether it is possible to specifically design activation functions that enable benign overfitting in fixed, possibly small dimension. We will see that the answer is yes: similarly to adding sharp spikes to a kernel, we add tiny fluctuations to the activation function. Concretely, we exploit (Simon et al., 2022, Theorem 3.1). It states that any dot-product kernel on the sphere that is a dot-product kernel in every dimension \(d\) can be written as an NNGP kernel or an NTK of two-layer fully-connected networks with a specifically chosen activation function. Further details can be found in Appendix H. **Theorem 8** (**Connecting kernels and activation functions**(Simon et al., 2022)).: _Let \(\kappa:[-1,1]\to\mathbb{R}\) be a function such that \(k_{d}:\mathbb{S}^{d}\times\mathbb{S}^{d}\to\mathbb{R},k_{d}(\mathbf{x},\mathbf{x}^{ \prime})=\kappa(\langle\mathbf{x},\mathbf{x}^{\prime}\rangle)\) is a kernel for every \(d\geq 1\). 
Then, there exist \(b_{i}\geq 0\) with \(\sum_{i=0}^{\infty}b_{i}<\infty\) such that \(\kappa(t)=\sum_{i=0}^{\infty}b_{i}t^{i}\), and for any choice of signs \((s_{i})_{i\in\mathbb{N}}\subseteq\{-1,+1\}\), the kernel \(k_{d}\) can be realized as the NNGP or NTK of a two-layer fully-connected network with activation function_ \[\phi_{NNGP}^{k_{d}}(x)=\sum_{i=0}^{\infty}s_{i}(b_{i})^{1/2}h_{i}(x),\qquad \phi_{NTK}^{k_{d}}(x)=\sum_{i=0}^{\infty}s_{i}\left(\frac{b_{i}}{i+1}\right)^{1/2}h_{i}(x). \tag{5}\] _Here, \(h_{i}\) denotes the \(i\)-th Probabilist's Hermite polynomial normalized such that \(\|h_{i}\|_{L_{2}(\mathcal{N}(0,1))}=1\)._ The following proposition justifies the approach of adding spikes \(\rho^{1/2}\phi^{\tilde{k}_{\gamma}}\) to an activation function to enable harmless interpolation with wide neural networks. Here we state the result for the case of the NTK; an analogous result holds for induced NNGP activation functions. **Proposition 9** (**Additive decomposition of spiky-smooth activation functions**).: _Fix \(\tilde{\gamma},\rho>0\) arbitrary. Let \(k=\tilde{k}+\rho\tilde{k}_{\gamma}\) denote the spiky-smooth kernel where \(\tilde{k}\) and \(\tilde{k}_{\gamma}\) are Gaussian kernels of bandwidth \(\tilde{\gamma}\) and \(\gamma\), respectively. Assume that we choose signs \(\{s_{i}\}_{i\in\mathbb{N}}\) and then the activation functions \(\phi_{NTK}^{k}\), \(\phi_{NTK}^{\tilde{k}}\) and \(\phi_{NTK}^{\tilde{k}_{\gamma}}\) as in Theorem 8. Then, for \(\gamma>0\) small enough, it holds that_ \[\|\phi_{NTK}^{k}-(\phi_{NTK}^{\tilde{k}}+\sqrt{\rho}\cdot\phi_{NTK}^{\tilde{k}_{\gamma}})\|_{L_{2}(\mathcal{N}(0,1))}^{2}\leq 2^{1/2}\rho\gamma^{3/2}\exp\left(-\frac{1}{\gamma}\right)+\frac{4\pi(1+\tilde{\gamma})\gamma}{\tilde{\gamma}}.\] **Proof idea.** When the spikes are sharp enough (\(\gamma\) small enough), the smooth and the spiky component of the activation function are approximately orthogonal in \(L_{2}(\mathcal{N}(0,1))\) (Figure 3c), so that the spiky-smooth activation function can be approximately additively decomposed into the smooth activation component \(\phi^{\tilde{k}}\) and the spike component \(\phi^{\tilde{k}_{\gamma}}\) responsible for interpolation. To motivate why the added spike functions \(\rho^{1/2}\phi^{\tilde{k}_{\gamma}}\) should have small amplitudes, observe that Gaussian activation components \(\phi^{\tilde{k}_{\gamma}}\) satisfy \[\|\phi_{NNGP}^{\tilde{k}_{\gamma}}\|_{L_{2}(\mathcal{N}(0,1))}^{2}=1,\qquad\|\phi_{NTK}^{\tilde{k}_{\gamma}}\|_{L_{2}(\mathcal{N}(0,1))}^{2}=\frac{\gamma}{2}\left(1-\exp\left(-\frac{2}{\gamma}\right)\right). \tag{6}\] Hence, the average amplitude of NNGP spike activation components \(\rho^{1/2}\phi^{\tilde{k}_{\gamma}}\) does not depend on \(\gamma\), while the average amplitude of NTK spike components decays to \(0\) with \(\gamma\to 0\). Since consistency requires the quasi-regularization \(\rho\to 0\), the spiky component of induced NTK as well as NNGP activation functions should vanish for large data sets \(n\to\infty\) to achieve consistency. ## 6 Experiments Now we explore what appropriate spiky-smooth activation functions might look like and whether they indeed enable harmless interpolation for trained networks of finite width on finite data sets. Further experimental results are reported in Appendix I.1. ### What do common activation functions lack in order to achieve harmless interpolation?
To understand which properties we have to introduce into activation functions to enable harmless interpolation, we plot NTK spike components \(\phi^{k_{\gamma}}\) induced by the Gaussian kernel (Figure 3a,b) as well as their Hermite series coefficients (Figure 3c). Remarkably, the spike components \(\phi^{k_{\gamma}}\) approximately correspond to a shifted, high-frequency \(\sin\)-curve, when choosing the signs \(s_{i}\) in (5) to alternate every second \(i\), that is \(s_{i}=+1\) iff \(\lfloor i/2\rfloor\) even (Figure 3a). We empirically determine (Appendix I.6) that the NNGP activation functions are well approximated by the fluctuation function \[\omega_{\text{NNGP}}(x;\gamma)\coloneqq\sqrt{2}\cdot\sin\left(\sqrt{2/\gamma }\cdot x+\pi/4\right)=\sin\left(\sqrt{2/\gamma}\cdot x\right)+\cos\left(\sqrt{ 2/\gamma}\cdot x\right), \tag{7}\] where the last equation follows from the trigonometric addition theorem. For small bandwidths \(\gamma\), the NTK activation functions are increasingly well approximated by \[\omega_{\text{NTK}}(x;\gamma)\coloneqq\sqrt{\gamma}\cdot\sin\left(\sqrt{2/ \gamma}\cdot x+\pi/4\right)=\sqrt{\gamma/2}\left(\sin\left(\sqrt{2/\gamma} \cdot x\right)+\cos\left(\sqrt{2/\gamma}\cdot x\right)\right). \tag{8}\] With decreasing bandwidth \(\gamma\to 0\) the frequency increases, while the amplitude decreases for the NTK and remains constant for the NNGP (see Eq. (6)). Plotting equivalent spike components \(\phi^{k_{\gamma}}\) with different choices of the signs \(s_{i}\) (Figure 3b and Appendix I.5) suggests that harmless interpolation requires activation functions that contain **small high-frequency oscillations** or that **explode at large**\(|x|\), which only affects few neurons. The Hermite series expansion of suitable activation functions should contain **non-negligible weight spread across high-order coefficients** (Figure 3c). While Simon et al. 
(2022) already truncate the Hermite series of induced activation functions at order 5, Figure 3c shows that an accurate approximation of spiky-smooth activation functions requires the truncation index to be larger than \(2/\gamma\). Only a careful implementation allows us to capture the high-order fluctuations in the Hermite series of the spiky activation functions. ### Training neural networks to achieve harmless interpolation in low dimension In Figure 1, we plot the results of (a) ridgeless kernel regression and (b) trained 2-layer neural networks with standard choices of kernels and activation functions (blue) as well as our spiky-smooth alternatives (orange). We trained on 15 points sampled i.i.d. from \(x=(x_{1},x_{2})\sim\mathcal{U}(\mathbb{S}^{1})\) and \(y=x_{1}+\varepsilon\) with \(\varepsilon\sim\mathcal{N}(0,0.25)\). The figure shows that both the Laplace kernel and standard ReLU networks interpolate the training data too smoothly in low dimension, and do not generalize well. However, our spiky-smooth kernel and neural networks with spiky-smooth activation functions achieve close to optimal generalization while interpolating the training data with sharp spikes. We achieve this by using the adjusted activation function with high-frequency oscillations \(x\mapsto\text{ReLU}(x)+\omega_{\text{NTK}}(x;\frac{1}{5000})\) as defined in Eq. (8). With this choice, we avoid activation functions with exploding behavior, which would induce exploding gradients. Other choices of amplitude and frequency in Eq. (8) perform worse.

Figure 3: **a., b.** Gaussian NTK activation components \(\phi^{k_{\gamma}}_{NTK}\) defined via (5) induced by the Gaussian kernel with varying bandwidth \(\gamma\in[0.2,0.1,0.05]\) (the darker, the smaller \(\gamma\)) for **a.** bi-alternating signs \(s_{i}=+1\) iff \(\lfloor i/2\rfloor\) even, and **b.** randomly iid chosen signs \(s_{i}\sim\mathcal{U}(\{-1,+1\})\). **c.** Coefficients of the Hermite series of a Gaussian NTK activation component with varying bandwidth \(\gamma\). Observe peaks at \(2/\gamma\). For reliable approximations of activation functions use a truncation \(\geq 4/\gamma\). The sum of squares of the coefficients follows Eq. (6). Figure I.8 visualizes NNGP activation components.

Over the course of training (Figure 1(c)), the standard ReLU network exhibits harmful overfitting, whereas the NN with a spiky-smooth activation function quickly interpolates the training set with nearly optimal generalization. Training details and hyperparameter choices can be found in Appendix I.1. Although the high-frequency oscillations perturb the gradients, the NN with spiky-smooth activation has a stable training trajectory using gradient descent with a large learning rate of 0.4 or stochastic gradient descent with a learning rate of 0.04. Since our activation function is the sum of two terms, we can additively decompose the network into its ReLU-component and its \(\omega_{\text{NTK}}\)-component. Figure 1(b) and Appendix I.2 demonstrate that our interpretation of the \(\omega_{\text{NTK}}\)-component as 'spiky' is accurate: The oscillations in the hidden neurons induced by \(\omega_{\text{NTK}}\) interfere constructively to interpolate the noise in the training points and regress to 0 between training points. This entails immediate access to the signal component of the trained neural network in the form of its ReLU-component. ## 7 Conclusion Conceptually, our work shows that inconsistency of overfitting is quite a generic phenomenon for regression in fixed dimension. However, particular spiky-smooth estimators enable benign overfitting, even in fixed dimension. We translate the spikes that lead to benign overfitting in kernel regression into infinitesimal fluctuations that can be added to activation functions to consistently interpolate with wide neural networks.
Our experiments verify that neural networks with spiky-smooth activation functions can exhibit benign overfitting even on small, low-dimensional data sets. Technically, our inconsistency results cover many distributions, Sobolev spaces of arbitrary order, and arbitrary RKHS-norm-bounded overfitting estimators. Lemma E.1 serves as a generic tool to extend generalization bounds to the sphere \(\mathbb{S}^{d}\), allowing us to cover (deep) ReLU NTKs and ReLU NNGPs. Future work.While our experiments serve as a promising proof of concept, it remains unclear how to design activation functions that enable harmless interpolation of more complex neural network architectures and data sets. As another interesting insight, our consistent kernel sequence shows that although kernels may have equivalent RKHS (see Appendix G.2), their generalization error can differ arbitrarily much; the constants of the equivalence matter and the narrative that depth does not matter in the NTK regime as in Bietti and Bach (2021) is too simplified. More promisingly, analyses that extend our analysis in the infinite-width limit to a joint scaling of width and depth could help us to understand the influence of depth (Fort et al., 2020; Li et al., 2021; Seleznova and Kutyniok, 2022). ## Acknowledgements Funded by Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC 2075 - 390740016 and EXC 2064/1 - Project 390727645. The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Moritz Haas and David Holzmuller. We want to thank Tizian Wenzel for interesting discussions. We also thank Nadine Grosse, Jens Wirth, and Daniel Winkle for helpful comments on Sobolev spaces.
2304.13431
Implicit Counterfactual Data Augmentation for Deep Neural Networks
Machine-learning models are prone to capturing the spurious correlations between non-causal attributes and classes, with counterfactual data augmentation being a promising direction for breaking these spurious associations. However, explicitly generating counterfactual data is challenging, with the training efficiency declining. Therefore, this study proposes an implicit counterfactual data augmentation (ICDA) method to remove spurious correlations and make stable predictions. Specifically, first, a novel sample-wise augmentation strategy is developed that generates semantically and counterfactually meaningful deep features with distinct augmentation strength for each sample. Second, we derive an easy-to-compute surrogate loss on the augmented feature set when the number of augmented samples becomes infinite. Third, two concrete schemes are proposed, including direct quantification and meta-learning, to derive the key parameters for the robust loss. In addition, ICDA is explained from a regularization aspect, with extensive experiments indicating that our method consistently improves the generalization performance of popular depth networks on multiple typical learning scenarios that require out-of-distribution generalization.
Xiaoling Zhou, Ou Wu
2023-04-26T10:36:40Z
http://arxiv.org/abs/2304.13431v1
# Implicit Counterfactual Data Augmentation for Deep Neural Networks ###### Abstract Machine-learning models are prone to capturing the spurious correlations between non-causal attributes and classes, with counterfactual data augmentation being a promising direction for breaking these spurious associations. However, explicitly generating counterfactual data is challenging, with the training efficiency declining. Therefore, this study proposes an implicit counterfactual data augmentation (ICDA) method to remove spurious correlations and make stable predictions. Specifically, first, a novel sample-wise augmentation strategy is developed that generates semantically and counterfactually meaningful deep features with distinct augmentation strength for each sample. Second, we derive an easy-to-compute surrogate loss on the augmented feature set when the number of augmented samples becomes infinite. Third, two concrete schemes are proposed, including direct quantification and meta-learning, to derive the key parameters for the robust loss. In addition, ICDA is explained from a regularization aspect, with extensive experiments indicating that our method consistently improves the generalization performance of popular depth networks on multiple typical learning scenarios that require out-of-distribution generalization. Counterfactual, implicit augmentation, spurious correlation, meta-learning, regularization, generalization. ## I Introduction Deep learning models are supposed to learn invariances and make stable predictions based on some right causes. However, models trained with empirical risk minimization are prone to learning spurious correlations and suffer from high generalization errors when the training and test distributions do not match [1, 2]. For example, dogs are mostly on the grass in the training set. Thus, a dog in the water can easily be misclassified as a "drake" due to its rare scene context ("water") in the "dog" class, as illustrated in Fig. 1.
A promising solution for improving the models' generalization and robustness is to learn causal models [3], as if a model can concentrate more on the causal correlations but not the spurious associations between non-causal attributes and classes, stable and exact predictions are more likely. Counterfactual augmentation has become popular for causal models because of its clear explanation and being model-agnostic. For instance, Lu et al. [4] and He et al. [5] augmented the data effectively by swapping identity pronouns in texts. Moreover, Chang et al. [6] introduced two new image generation procedures that included counterfactual and factual data augmentations to reduce spuriousness between backgrounds of images and labels, achieving higher accuracy in several challenging datasets. Mao et al. [2] utilized a novel strategy to learn robust representations that steered generative models to manufacture interventions on features caused by confounding factors. Nevertheless, the methods presented above suffer from several shortcomings. Specifically, it is not trivial to explicitly find all confounding factors, and the training efficiency will decline as excess augmented images are involved in training. It should be mentioned that implicit data augmentation settles the inefficiency of explicit augmentation by avoiding the generation of excess samples. ISDA [7] conducts a pioneering study on implicit data augmentation, which is inspired by the observation that the deep features in a network are usually linearized. Then, it translates samples along the semantic directions in the feature space based on an assumed class-wise augmentation distribution. By deriving an upper bound on the expected cross-entropy (CE) loss, ISDA enables optimization of only the upper bound to achieve data augmentation in an efficient way. 
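The surrogate just mentioned has a closed form: for a linear classifier \((W, b)\) on deep features \(a_i\), augmenting each \(a_i\) with infinitely many samples from \(\mathcal{N}(a_i, \lambda\Sigma_{y_i})\) yields an upper bound on the expected cross-entropy that can be evaluated directly (cf. the ISDA paper). A minimal numpy rendering, with our own illustrative variable names:

```python
import numpy as np

def isda_surrogate_loss(feats, labels, W, b, covs, lam):
    """Upper bound on the expected cross-entropy when each deep feature a_i is
    implicitly augmented with infinitely many samples from N(a_i, lam * covs[y_i])."""
    losses = []
    for a, y in zip(feats, labels):
        dw = W - W[y]                       # (C, d): w_j - w_{y_i}
        db = b - b[y]                       # (C,)
        mean_term = dw @ a + db             # shift from moving a_i along semantic directions
        quad_term = 0.5 * lam * np.einsum('cd,de,ce->c', dw, covs[y], dw)
        losses.append(np.log(np.sum(np.exp(mean_term + quad_term))))
    return float(np.mean(losses))
```

Setting \(\lambda=0\) recovers the standard cross-entropy of the unaugmented features, and positive \(\lambda\) with positive semi-definite covariances can only increase the loss, which is what makes it a valid upper bound to optimize.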
Moreover, MetaSAug [8] innovatively applies the idea of ISDA to the long-tailed recognition, which does not modify the augmentation distribution of ISDA but optimizes the covariance matrices using metadata, yielding good performance on imbalanced data. Besides, RISDA [9] constructs novel augmentation distributions of tail classes by mixing the information from relevant categories, thus more effectively enriching samples in tail categories. However, these methods adopt purely class-wise semantic augmentation strategies, and thus samples in the same class have identical augmentation distributions that are inaccurate. Fig. 1(a) illustrates samples in the same class that may be negatively influenced by different attributes (or classes), where an ideal augmentation strategy should consider these sample-wise non-causal attributes. Additionally, these methods adopt the same augmentation strength for each instance, ignoring that inappropriate distributions (e.g., imbalanced label and attribute distributions) also lead Fig. 1: (a): Illustration for images affected by spurious correlations due to rare attributes (e.g., posture, color, and scene context). \(C_{1}\), \(C_{2}\), and \(C_{3}\) are the dog, polar bear, and drake classes, respectively. The solid line connects the sample’s ground-truth class, and the dotted line connects the class with a spurious correlation with the sample. (b): Illustration for attribute imbalance.
2309.03033
Deep Learning for Polycystic Kidney Disease: Utilizing Neural Networks for Accurate and Early Detection through Gene Expression Analysis
With Polycystic Kidney Disease (PKD) potentially leading to fatal complications in patients due to the formation of cysts in kidneys, early detection of PKD is crucial for effective management of the condition. However, the various patient-specific factors that play a role in the diagnosis make it an intricate puzzle for clinicians to solve, leading to possible kidney failure. Therefore, in this study we aim to utilize a deep learning-based approach for early disease detection through gene expression analysis. The devised neural network is able to achieve accurate and robust prediction results for possible PKD in kidneys, thereby improving patient outcomes. Furthermore, by conducting a gene ontology analysis, we were able to predict the top gene processes and functions that PKD may affect.
Kapil Panda, Anirudh Mazumder
2023-09-06T14:22:24Z
http://arxiv.org/abs/2309.03033v2
Deep Learning for Polycystic Kidney Disease (PKD): Utilizing Neural Networks for Accurate and Early Detection through Gene Expression Analysis ###### Abstract With Polycystic Kidney Disease (PKD) potentially leading to fatal complications in patients due to the formation of cysts in kidneys, early detection of PKD is crucial for effective management of the condition. However, the various patient-specific factors that play a role in the diagnosis make it an intricate puzzle for clinicians to solve, leading to possible kidney failure. Therefore, in this study we aim to utilize a deep learning-based approach for early disease detection through gene expression analysis. The devised neural network is able to achieve accurate and robust prediction results for possible PKD in kidneys, thereby improving patient outcomes. Furthermore, by conducting a gene ontology analysis, we were able to predict the top gene processes and functions that PKD may affect. machine learning, artificial intelligence, polycystic kidney disease, deep learning, neural networks ## I Introduction Polycystic Kidney Disease (PKD) is a prevalent, yet under-researched hereditary renal disorder characterized by the formation of numerous cysts in the kidneys [1]. Over time, these cysts can enlarge and disrupt the standard kidney structure, impairing kidney function and thus leading to various complications, such as high blood pressure, kidney stones, and, in severe cases, kidney failure [3]. Even though this disease is regarded as one of the top kidney ailments of patients in the US, it is still relatively unexplored, hence leading to late detection and ineffective care. Therefore, early and accurate diagnosis of PKD is crucial for timely intervention and effective management of the condition to improve patient outcomes; however, the various patient-specific factors that play a role in the diagnosis make it an intricate puzzle for nephrologists and clinicians to solve [2].
In recent years, the progress made in artificial intelligence and machine learning, specifically deep learning, has opened up new possibilities in healthcare for both detection and prediction [4]. Neural networks, a cornerstone of deep learning, have demonstrated exceptional proficiency in image recognition, feature extraction, and classification tasks [5]. With the incredible capabilities of these algorithms and models, healthcare professionals now have powerful tools that can aid in analyzing vast amounts of medical data with unparalleled precision and efficiency, thereby increasing the efficacy of medical prescriptions. However, certain domains have yet to be immersed in the frontiers of AI and ML, with PKD being one such disease that has had no previous research using machine learning done in it[13]. Therefore, in this research we aim to leverage the power of deep learning by utilizing neural networks to aid in PKD detection for accurate and early diagnosis. Utilizing methods such as synthetic data creation and data preprocessing, an MLP algorithm and stacking ensemble were trained to see if they could learn whether or not a patient had PKD. Furthermore, using a gene ontology tool, we found robust results indicating the processes and functions of the gene expressions that the model found highly correlated with PKD to gain deeper insights into the underlying molecular mechanisms that are affected by the disease. ## II Methodology ### _Materials_ The materials that were used for this project were Python for the computation and mice gene expression data that was acquired from [6]. ### _Mice Data Based Algorithm_ #### Ii-A1 Data Preprocessing The algorithm started with the mice-based data getting pre-processed, allowing an algorithm to be used to learn the data. Due to limitations on acquiring human data on PKD, we had to resort to mice genetic data. 
According to [16], almost all of the genes in mice share some functions with the genes in humans, especially in kidneys, allowing for mice to be indicative test subjects on the impact of PKD on humans. Thus, it is not uncommon in the industry for laboratories to use mice data when testing for human diseases. Within this dataset, some of the data was dropped to eliminate all the undefined points in the dataset and then a scaler was used to standardize the data. #### Ii-A2 Machine Learning A Multilayer Perceptron Classifier(MLP) was utilized to learn the data. An MLP was utilized due to their ability to solve nonlinear problems. They are good at learning the relevant features and data, and they stack a layer of their neurons, where each layer is learning different parts of the data, allowing the neural network to get a hierarchical representation of the data. Additionally, MLPs are based on backpropagation, in which they can update the weights of different connections between neurons to minimize the difference between the actual and predicted output values. An MLP was used for this data rather than a stacking ensemble due to its ability to understand the complexity of data better than stacking ensembles. In contrast, stacking ensembles are better at highlighting the diversity of the models, working better than pure individual models. Additionally, the outputs of MLPs are more complex to understand because they are neural networks rather than machine learning algorithms. Lastly, MLPs are better able to capture an understanding of the data than stacking ensembles because MLPs are based on neural networks. In contrast, stacking ensembles are based on smaller, more basic models, all utilized together. #### Ii-B3 Clustering Algorithm The data was clustered based on the probability of getting PKD and the gene expression. 
To cluster, a K-means algorithm was used because of its ability to maximize the similarity of variables within each cluster while also minimizing the similarity of variables in different clusters, which ensures that when it is creating the centroids, they are as distinct as possible from one another, while also being very indicative of what data belongs in each cluster. Additionally, based on the output of the clustering algorithm, it can be further interpreted to see what genes are most indicative of PKD. When the clustering algorithm was created, three centroids were utilized to cluster the data to create specific groupings of which genes had the highest likelihood of getting PKD compared to the other genes. ### _Synthetic Data Based Algorithm_ #### Ii-C1 Data Creation In this study, we also used synthetic data for our model. The utilization of synthetic data in this research is of paramount importance for several reasons [17]. Firstly, it addresses the inherent challenge of limited real-world clinical data on PKD. Synthetic data serves as a crucial supplement, enabling the development and training of machine learning algorithms in the absence of comprehensive clinical datasets. Secondly, synthetic data creation provides an opportunity to control and manipulate various parameters, such as the percentage of patients with PKD, which is challenging with real clinical data. This flexibility allows for comprehensive experimentation and model training under different scenarios, ultimately enhancing the robustness and adaptability of the algorithm. The synthetic data was created through Sklearn's synthetic dataset creation algorithm. Synthetic data is generated by a model rather than using real-world data. This data creation method was utilized for our problem due to a need for real-world data about humans with PKD [7]. 
The dataset was generated with 1000 samples; 5000 features, 100 of which were informative; two classes (has PKD or not); and class weights giving a 20% chance of having PKD. All of these values were kept as controls throughout the research, although the 20% figure could change in future analyses based on further research into the percentage of patients who have PKD. Synthetic data lends itself to the data augmentation needed to test whether a machine learning algorithm can learn to classify PKD from gene expression, since real PKD data is hard to collect and analyze. Using this specific set of parameters, a full synthetic dataset was created on which the machine learning model could be trained and evaluated for its ability to classify whether or not a patient has PKD.

#### II-C2 Data Preprocessing

Data preprocessing was performed to prepare and split the dataset. A scaler was used to standardize the data by removing the mean and scaling to unit variance. Scaling is vital to ensure that the dataset's features are standardized, which is especially useful when dealing with complex features of varying scales and distributions; it also helps machine learning algorithms converge faster and more reliably by minimizing the differences in feature scales. A train-test split then divided the data into 80% training data and 20% testing data.

#### II-C3 Machine Learning

The machine learning algorithm was a stacking ensemble with several parts. The ensemble consisted of three different machine learning algorithms to learn the data: a Support Vector Classifier (SVC), a Random Forest Classifier (RF), and a Gradient Boosting Classifier (GB). The meta-classifier was a Logistic Regression (LR) algorithm.
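The ensemble described above can be sketched with scikit-learn's `StackingClassifier`; the base-model lineup (SVC, RF, GB with an LR meta-classifier) follows the text, while the hyperparameters (all defaults) and the small stand-in dataset are assumptions chosen so the example runs quickly:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (
    GradientBoostingClassifier,
    RandomForestClassifier,
    StackingClassifier,
)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Small synthetic stand-in (the study trains on the full 5000-feature cohort).
X, y = make_classification(n_samples=400, n_features=50, n_informative=10,
                           weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Base learners named in the text: SVC, Random Forest, Gradient Boosting.
base_models = [
    ("svc", SVC(probability=True, random_state=0)),
    ("rf", RandomForestClassifier(random_state=0)),
    ("gb", GradientBoostingClassifier(random_state=0)),
]

# Logistic Regression meta-classifier weighs the base models' predictions.
stack = StackingClassifier(estimators=base_models,
                           final_estimator=LogisticRegression())
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
print(f"held-out accuracy: {acc:.2f}")
```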
An SVC was used for its ability to separate classes and to find complex relationships within the data. An RF was included for its high accuracy and robustness and for its ability to distinguish which features are the most important. A GB was used for its high accuracy and its focus on decreasing error due to bias. The LR meta-classifier assigns weights to the predictions of the base models, producing an aggregation that accounts for each base model's reliability. On top of the independent algorithms, the stacking ensemble combines all of them into a more robust model: it exploits the strengths of each base algorithm to predict whether a patient has PKD from the synthetic data, and the LR meta-classifier reveals which base models have the greatest effect on the ensemble and how their contributions can be weighted to maximize accuracy. Each algorithm makes its own prediction, and the stacking ensemble curates the best response from them, which allows it to achieve high accuracy and to identify which parts of the data are most indicative when predicting.

## III Results

### _Model Performance on Synthetic Data_

The developed machine learning model achieved an accuracy of 78% when run on the synthetic data, indicating a moderate level of success. However, this level of accuracy falls short of what is required of a diagnostic tool intended for PKD detection.
In the context of PKD, where early and accurate diagnosis is crucial for effective intervention and management, a sub-optimal accuracy level like this could lead to misdiagnoses or delayed treatments, potentially worsening patient outcomes.

### _Model Performance on Mice Data_

To address the initial model's performance limitations on synthetic data, a more realistic dataset acquired from mice afflicted with PKD was used. Upon testing the model on this more representative dataset, an accuracy of 92.23% was achieved, signifying a remarkable leap in predictive performance. The substantial improvement in accuracy suggests that the Multilayer Perceptron Classifier (MLP) successfully leveraged the realistic data to capture the disease's intricate patterns. This achievement is particularly significant as it points toward the model's potential to discern more distinct and nuanced data representations, which is crucial when dealing with complex conditions such as PKD.

#### II-B1 Feature Selector

The feature selector shows which features had the highest correlation with whether or not a mouse had PKD. Although the correlation values are relatively low, they definitively identify the features that allow the MLP algorithm to accurately predict whether or not a mouse has PKD. As seen in Figure 2, the algorithm is affected most by the fold changes in gene expression, which show the highest correlation with the computed output, suggesting that fold change can identify a relationship with patients who have PKD. Although fold change is the most indicative feature, its correlation is relatively close to that of the other features, showing that many of these features contribute to the MLP's understanding of the data and its ability to classify whether or not a mouse has PKD.

### _Comparison_

#### II-B1 Dataset

Some key differences potentially lead to the difference in accuracy between the two algorithms.
One significant difference between the datasets of the two algorithms is that the synthetic data is generated rather than measured, so the created data may not be indicative of anything particularly related to PKD, making that pipeline more prone to lowered accuracy than the clinically collected mouse data. On the other hand, the mouse data may not generalize well to humans because of the differences between the two species. This highlights the importance of examining whether synthetic data can be made contextual to humans by building on more readily available resources such as mouse data. Although there are differences between mouse and human data, mouse data is still indicative of human processes: as seen in [15], mice are very good for modelling biological processes in humans because many of their biological processes are similar, and specifically in the context of kidneys, there is evidence that mice have kidney processes similar to those of humans. Due to the lack of adequate human clinical data on polycystic kidney disease (PKD), the research team had to rely heavily on studies of PKD in mouse models as well as synthetic data generated from computational modeling. Without access to sufficient real-world data from human PKD patients to train machine learning algorithms, the use of alternative data sources such as mouse models and synthetic data was vital. The mouse models, while not a perfect analogue, still provided important insights into the biological mechanisms and progression of PKD that could inform the development of AI systems. The synthetic data served as a supplementary training set to "fill in the gaps" where human clinical data was missing or inadequate for properly training the algorithms.
Without the ability to leverage these alternate forms of data, it would have been tremendously difficult to develop AI systems capable of reliably analyzing and understanding the complexities of human PKD cases. The multi-modal approach, combining real human data where available with data from mouse models and synthetic data generation, provided a robust overall training dataset that allowed the research team to make meaningful progress in applying AI to better understand PKD.

Fig. 1: Feature Correlations on Synthetic Data

Fig. 2: Feature Correlations on Mice Data

#### II-B2 Algorithms

The difference in accuracy between the two algorithms may also stem from the models themselves. The MLP is a neural network, potentially allowing it to learn and understand the patterns in the data better, while stacking ensembles are constrained by their multiple, more general, and simpler base models. On the other hand, the stacking ensemble may be more robust, since it layers several methods on top of the stacking-based model itself.

### _Clustered Analysis_

The results of the clustering algorithm can be seen in Figure 3. They show which gene expressions are most likely to be associated with PKD. From this output, we can identify the genes most likely to lead to PKD, allowing us to analyze their expression further using a gene ontology tool.

### _Gene Ontology_

To better understand the genes most indicative of PKD [14], we focused on the top cluster identified by our model, since it showed the highest risk of PKD, and conducted a gene ontology analysis using the GoProcess enrichment tool [8]. A comprehensive analysis of the clustered gene expressions yielded intriguing insights into the molecular underpinnings of PKD, shedding light on potential mechanisms that contribute to the disease's pathogenesis.
As depicted in Figure 4, our investigation revealed a distinct enrichment of gene processes that are particularly affected within the top cluster. Notably, two prominent gene processes, "Positive Regulation of Regulated Secretory Pathway" [9] and "Protein Lipidation" [10], emerged as focal points for understanding the complex interactions driving PKD. Positive regulation of regulated secretory pathways encompasses a cellular phenomenon characterized by the augmentation of a precise and controlled process involving the release of specific molecules from cells. This process is orchestrated in response to intricate signaling cues, highlighting the intricacies of cellular communication. Our findings indicate a noteworthy connection between the PKD-associated genes in this cluster and their potential influence on augmenting the release of specific molecules from cells. This heightened secretion, triggered by the upregulated gene processes, could play a pivotal role in the disease's progression, potentially implicating aberrant cellular communication pathways in the development of cystic structures within the kidneys. Another process our study found to correlate with PKD-related genes within the top cluster was protein lipidation. This process involves attaching lipid molecules, often fatty acids, to proteins. This modification can significantly impact proteins' structure, function, and localization within cells. Our findings suggest that the genes linked to PKD in this cluster may influence the lipidation of proteins, potentially leading to altered cellular functions. The intricate interplay between lipids and proteins is a fundamental aspect of cellular physiology, and its disruption could contribute to the anomalies observed in the context of PKD.

Fig. 3: Clustering Based on Chance of PKD

Furthermore, in this analysis, we aimed to identify the gene functions most affected by PKD in this top cluster.
As seen in Figure 5, we found a distinct enrichment of the gene functions "BH3 Domain Binding" [11] and "Death Domain Binding" [12] when analyzing the gene ontology. BH3 domain binding is a fundamental aspect of cellular regulation, particularly in cell survival and apoptosis. The BH3 domain, an essential structural motif in specific proteins, is central in orchestrating cellular responses to stress signals and external cues. Our findings suggest a profound connection between PKD-associated genes within the identified cluster and their involvement in BH3 domain binding interactions. This observation offers a tantalizing glimpse into the potential regulatory mechanisms that these genes might modulate, possibly influencing the delicate balance between cell survival and programmed cell death. Aberrations in BH3 domain binding interactions might contribute to the abnormal cellular processes associated with PKD, hinting at a potential avenue through which the disease exerts its effects. Similarly, our exploration of death domain binding interactions within the context of PKD uncovers an intricate layer of molecular signaling pathways that might be perturbed in disease progression. The death domain, a distinct protein module implicated in transmitting signals related to cell death and inflammation, is crucial in orchestrating cellular responses to various stimuli. Identifying PKD-associated genes within this cluster as influential in death domain binding interactions signifies their potential involvement in shaping the cellular fate decisions that underpin PKD's development. The disruption of these interactions could contribute to the misregulation of cell survival and inflammatory responses, both of which are pivotal aspects of PKD pathology.

Fig. 4: GOProcess Gene Process Analysis

## IV Discussion

### _Conclusion_

This research devised an effective and accurate prediction model for early detection of Polycystic Kidney Disease with the help of deep learning.
While the initial accuracy of 78% achieved on synthetic data marked a promising starting point, the significant leap to an accuracy of 92.23% when employing the MLP algorithm on real mice data signifies a remarkable advancement in PKD detection. Furthermore, exploring clustering techniques for gene expression data further enriched our understanding of PKD, allowing us to identify gene clusters with higher likelihoods of PKD occurrence. The subsequent gene ontology analysis offered glimpses into specific processes and functions influenced by PKD-associated genes, shedding light on potential pathways and molecular interactions involved in PKD development and progression, as well as possible effects of PKD on the body.

### _Potential Implications_

The findings of this study carry significant implications for the field of Polycystic Kidney Disease (PKD) detection and diagnosis, as well as the broader intersection of artificial intelligence and healthcare. The exploration of machine learning techniques in PKD diagnosis underscores the potential of advanced computational methods to address complex medical challenges. The achieved accuracy of 78% on the synthetic dataset serves as a valuable insight into the initial capabilities of machine learning models in PKD detection. While this accuracy signifies a notable advancement, its limitations are apparent in the context of PKD's critical need for accurate and early diagnosis. However, the remarkable leap in accuracy to 92.23% achieved with the MLP algorithm using the mice dataset presents a promising avenue for improving PKD detection. The success of the MLP algorithm in capturing the nuances of PKD patterns reflects its potential to discern complex data representations, critical for a condition as multifaceted as PKD. This achievement highlights the power of deep learning models in delving deeper into data intricacies, and it encourages further exploration into utilizing neural networks for medical diagnostic tasks.
On the other hand, comparing the performance of the MLP algorithm and the stacking ensemble offers insights into the strengths and limitations of different machine learning approaches. The substantial difference in accuracy raises questions about the fundamental differences between these algorithms in terms of their ability to understand complex relationships within data. This comparison also prompts a deeper examination of the underlying mechanisms that contribute to the MLP's outstanding performance and whether similar capabilities could be integrated into other ensemble-based models. Finally, the study's methodological insights offer considerations for future research directions. The potential limitations of synthetic data and the need to validate models on authentic human data underscore the importance of diverse and representative datasets. The exploration of clustering methods for gene expression data has potential implications beyond PKD diagnosis, providing a means to identify gene clusters that could inform further research into disease mechanisms and potential therapeutic targets. In the context of healthcare, this research showcases the evolving role of artificial intelligence in medical diagnosis. While this study focuses on PKD, the methods and insights can serve as a template for applying machine learning techniques to other complex diseases, where early detection and precise diagnosis are of paramount importance. This research thus contributes to the ongoing dialogue on the integration of AI and healthcare, encouraging multidisciplinary collaborations that can harness technology to enhance patient care, improve diagnostic accuracy, and ultimately advance medical science.

### _Future Work_

While this study has made significant strides in leveraging deep learning for PKD detection, several avenues for future research remain to further enhance the accuracy and applicability of the developed models.
One critical area of improvement lies in the utilization of more authentic and diverse human data for training and testing. The success achieved with the mice dataset demonstrates the potential of the MLP algorithm, but its viability can only be fully analyzed with human-specific data. Acquiring a comprehensive dataset that encompasses a wide range of patient profiles, genetic variations, and disease stages is essential, and we therefore aim to collaborate with medical institutions in the future to gather such datasets and advance this model toward clinical use. Moreover, exploring ways to overcome data scarcity through data augmentation techniques or synthetic data generation methods that accurately simulate human data can provide valuable insights into the robustness of the models.

Fig. 5: GOProcess Gene Function Analysis

Furthermore, this study primarily focused on PKD detection, but the potential of deep learning in predicting disease progression warrants exploration. Developing predictive models that can forecast the progression of PKD and identify patients at higher risk of developing complications could significantly impact patient care and treatment strategies.

## V Acknowledgment

We would like to thank the University of North Texas for providing us with the resources and support to conduct this research. The invaluable guidance and encouragement from our professors and mentors have been instrumental in shaping the direction and scope of this study. We would also like to acknowledge the National Kidney Foundation for their support and inspiration in conducting this project. Finally, we would also like to thank our families for supporting us throughout our research.
2305.17387
Learning from Integral Losses in Physics Informed Neural Networks
This work proposes a solution for the problem of training physics-informed networks under partial integro-differential equations. These equations require an infinite or a large number of neural evaluations to construct a single residual for training. As a result, accurate evaluation may be impractical, and we show that naive approximations at replacing these integrals with unbiased estimates lead to biased loss functions and solutions. To overcome this bias, we investigate three types of potential solutions: the deterministic sampling approaches, the double-sampling trick, and the delayed target method. We consider three classes of PDEs for benchmarking; one defining Poisson problems with singular charges and weak solutions of up to 10 dimensions, another involving weak solutions on electro-magnetic fields and a Maxwell equation, and a third one defining a Smoluchowski coagulation problem. Our numerical results confirm the existence of the aforementioned bias in practice and also show that our proposed delayed target approach can lead to accurate solutions with comparable quality to ones estimated with a large sample size integral. Our implementation is open-source and available at https://github.com/ehsansaleh/btspinn.
Ehsan Saleh, Saba Ghaffari, Timothy Bretl, Luke Olson, Matthew West
2023-05-27T06:46:08Z
http://arxiv.org/abs/2305.17387v2
# Learning from Integral Losses in Physics Informed Neural Networks

###### Abstract

This work proposes a solution for the problem of training physics informed networks under partial integro-differential equations. These equations require an infinite or a large number of neural evaluations to construct a single residual for training. As a result, accurate evaluation may be impractical, and we show that naive approximations at replacing these integrals with unbiased estimates lead to biased loss functions and solutions. To overcome this bias, we investigate three types of solutions: the deterministic sampling approach, the double-sampling trick, and the delayed target method. We consider three classes of PDEs for benchmarking; one defining a Poisson problem with singular charges and weak solutions, another involving weak solutions on electro-magnetic fields and a Maxwell equation, and a third one defining a Smoluchowski coagulation problem. Our numerical results confirm the existence of the aforementioned bias in practice, and also show that our proposed delayed target approach can lead to accurate solutions with comparable quality to ones estimated with a large number of samples. Our implementation is open-source and available at [https://github.com/ehsansaleh/btspinn](https://github.com/ehsansaleh/btspinn).

## 1 Introduction

Physics Informed Neural Networks (PINNs) [23] can be described as solvers of a particular Partial Differential Equation (PDE). Typically, these problems consist of three defining elements. A sampling procedure selects a number of points for learning. Automatic differentiation is then used to evaluate the PDE at these points and define a residual. Finally, a loss function, such as the Mean Squared Error (MSE), is applied to these residuals, and the network learns the true solution by minimizing this loss through back-propagation and stochastic approximation.
These elements form the basis of many methods capable of learning high-dimensional parameters. A wealth of existing work demonstrates the utility of this approach for solving a wide array of applications and PDE forms [16; 25; 15]. One particular problem in this area is the prevalent assumption of our ability to accurately evaluate the PDE residuals for learning. In particular, many partial integro-differential forms may include an integral or a large summation within them. Weak solutions using the divergence or the curl theorems, or the Smoluchowski coagulation equation [28], are two examples of such forms. In such instances, an accurate evaluation of the PDE elements, even at a single point, can become impractical. Naive approximations, such as replacing integrals with unbiased estimates, can result in biased solutions, as we will show later. This work is dedicated to the problem of learning PINNs with loss functions containing a parametrized integral or large summation. To tackle such challenging problems, this paper investigates three learning approaches: the deterministic sampling approach, the double-sampling trick, and the delayed target method. The deterministic sampling approach takes the integration stochasticity away; however, it optimizes a different objective, and deterministic sampling schemes may not scale well to higher sampling dimensions. The double-sampling trick enjoys strong theoretical guarantees, but relies on a particular point sampling process and a squared loss form, and may not be suitable for offline learning. Bootstrapping neural networks with delayed targets has already been studied in other contexts, such as the method of learning from Temporal Differences (TD-learning) in approximate dynamic programming [27]. Time and time again, TD-learning has proven preferable over ordinary MSE loss minimization (known as Bellman residual minimization) [24; 5; 31; 4].
Another example is the recent trend of semi-supervised learning, where teacher-student frameworks result in accuracy improvements of classification models by pseudo-labelling unlabelled examples for training [8; 22; 1; 14]. The main contributions of this work are: (1) we formulate the integral learning problem under a general framework and show the biased nature of standard approximated loss functions; (2) we present three techniques to solve such problems, namely the deterministic sampling approach, the double-sampling trick, and the delayed target method; (3) we detail an effective way of implementation for the delayed target method compared to a naive one; and (4) we compare the efficacy of the proposed solutions using numerical examples on a Poisson problem with singular charges, a Maxwell problem with magnetic fields, and a Smoluchowski coagulation problem.

## 2 Problem Formulation

Consider a typical partial integro-differential equation \[f_{\theta}(x):=\mathbb{E}_{P(x^{\prime}|x)}[g_{\theta}(x^{\prime})]+y. \tag{1}\] The \(f_{\theta}(x)\) and \(g_{\theta}(x^{\prime})\) are parametrized, and \(y\) includes all the non-parametrized terms in the PDE. The right side of the equation serves as the target value for \(f_{\theta}(x)\). For notational simplicity, we assume a fixed value for \(x\) in the remainder of the manuscript. However, all analyses are applicable to randomized \(x\) without any loss of generality. Equation (1) is a general, yet concise, form for expressing partial integro-differential equations. To motivate it, we present three examples.

**Example 1.** The Poisson problem is to solve the system \(\nabla^{2}U=\rho\) for \(U\) given a charge function \(\rho\). This is equivalent to finding a solution for the system \[E=\nabla U,\qquad\nabla\cdot E=\rho. \tag{2}\] A weak solution can be obtained through enforcing the divergence theorem over many volumes: \[\int_{\partial\Omega}E\cdot\hat{n}\quad\mathrm{d}S=\iint_{\Omega}\nabla\cdot E\quad\mathrm{d}V, \tag{3}\] where \(\hat{n}\) is the normal vector perpendicular to \(\mathrm{d}S\). The weak solutions can be preferable over the strong ones when dealing with singular or sparse \(\rho\) charges.

Figure 1: Training with the MSE loss under different sample sizes per-surface (\(N\)). The heatmaps show the analytical solution (left), the low-variance training with \(N=100\) (middle), and the high-variance training with \(N=1\) (right). The smaller the \(N\), the more biased the training objective becomes towards finding smoother solutions. The right panel shows the training curves; the training loss and the integration variance represent \(\hat{\mathcal{L}}_{\theta}(x)\) and \(\mathbb{V}_{P(x^{\prime}|x)}[g_{\theta}(x^{\prime})]\) in Equation (15), respectively. For \(N=1\), the training loss seems to be floored at the same value as the integration variance (i.e., approximately \(0.3\)). However, with \(N=100\), the model produces better solutions, lower training losses, and higher integration variances.

To solve this system, we parametrize \(E(x)\) as the gradient of a neural network predicting the \(U\) potentials. To convert this into the form of Equation (1), we replace the left integral in Equation (3) with an arbitrarily large Riemann sum and write \[\int_{\partial\Omega}E\cdot\hat{n}\quad\mathrm{d}S=\frac{A}{M}\sum_{i=1}^{M}E_{\theta}(x_{i})\cdot\hat{n}_{i}, \tag{4}\] where \(A=\int_{\partial\Omega}1\;\mathrm{d}S\) is the surface area and the \(x_{i}\) samples are uniform on the surface.
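Equation (4)'s Riemann-sum estimate of the flux can be sanity-checked numerically. The sketch below is an assumption-laden toy, not the paper's setup: it places a unit point charge off-center inside a unit sphere, for which the divergence theorem gives a flux of exactly 1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Unit point charge at c; with unit physical constants its field is
# E(x) = (x - c) / (4*pi*|x - c|^3), so the flux through any surface
# enclosing c equals the enclosed charge, here exactly 1.
c = np.array([0.3, 0.0, 0.0])

def E(x):
    d = x - c
    r = np.linalg.norm(d, axis=-1, keepdims=True)
    return d / (4 * np.pi * r**3)

# Uniform samples x_i on the unit sphere (normalized Gaussians); the
# outward normal at x_i on this sphere is x_i itself.
M = 50_000
x = rng.normal(size=(M, 3))
x /= np.linalg.norm(x, axis=1, keepdims=True)
A = 4 * np.pi  # surface area of the unit sphere

# (A / M) * sum_i E(x_i) . n_i, as in Equation (4).
flux = (A / M) * np.sum(np.sum(E(x) * x, axis=1))
print(f"estimated flux: {flux:.3f}  (exact: 1.0)")
```

Because the charge is off-center, the per-sample integrand varies over the sphere, so the estimate carries Monte-Carlo noise that shrinks with \(M\).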
To convert this system into the form of Equation (1), we then define \[x:=x_{1},\qquad f_{\theta}(x):=\frac{A}{M}E_{\theta}(x)\cdot\hat{n}_{1},\qquad x^{\prime}\sim\text{Unif}(\{x_{i}\}_{i=2}^{M}),\] \[g_{\theta}(x_{i}):=-\frac{A(M-1)}{M}E_{\theta}(x_{i})\cdot\hat{n}_{i},\qquad y:=\iint_{\Omega}\rho\,\mathrm{d}V. \tag{5}\]

**Example 2.** In static electromagnetic conditions, one of the Maxwell equations, the Ampere circuital law, is to solve the system \(\nabla\times A=B\), \(\nabla\times B=J\) for \(A\) given the current density \(J\) in 3D space (we assume a unit physical coefficient for simplicity). Here, \(B\) represents the magnetic field and \(A\) denotes the magnetic potential vector. A weak solution for this system can be obtained through enforcing the Stokes theorem over many surfaces: \[\int_{\partial\Omega}\nabla\times A\cdot\mathrm{d}\mathbf{l}=\iint_{\Omega}J\cdot\mathrm{d}S, \tag{6}\] where \(\mathrm{d}S\) is an infinitesimal surface normal vector. Just like the Poisson problem, the weak solutions can be preferable over the strong ones when dealing with singular inputs, and this equation can be converted into the form of Equation (1) similarly.

**Example 3.** The Smoluchowski coagulation equation simulates the evolution of particles into larger ones, and is described as \[\frac{\partial\rho(x,t)}{\partial t}=\int_{0}^{x}K(x-x^{\prime},x^{\prime})\rho(x-x^{\prime},t)\rho(x^{\prime},t)\mathrm{d}x^{\prime}-\int_{0}^{\infty}K(x,x^{\prime})\rho(x,t)\rho(x^{\prime},t)\mathrm{d}x^{\prime}, \tag{7}\] where \(K(x,x^{\prime})\) is the coagulation kernel between two particles of size \(x\) and \(x^{\prime}\). The particle sizes \(x\) and \(x^{\prime}\) can be generalized into vectors, inducing a higher-dimensional PDE to solve.
To solve this problem, we parametrize \(\rho(x,t)\) as the output of a neural network with parameters \(\theta\) and write \[f_{\theta}(x):=\frac{\partial\rho_{\theta}(x,t)}{\partial t}, \qquad g_{\theta}^{(1)}(x^{\prime}):=A_{1}K(x-x^{\prime},x^{\prime})\rho_{ \theta}(x-x^{\prime},t)\rho_{\theta}(x^{\prime},t),\] \[g_{\theta}^{(2)}(x^{\prime}):=A_{2}K(x,x^{\prime})\rho_{\theta}(x,t)\rho_{\theta}(x^{\prime},t). \tag{8}\] The \(x^{\prime}\) values in both \(g_{\theta}^{(1)}\) and \(g_{\theta}^{(2)}\) are sampled from their respective uniform distributions, and \(A_{1}\) and \(A_{2}\) are used to normalize the uniform integrals into expectations. Finally, \(y=0\) and we can define \(g_{\theta}(x^{\prime})\) in a way such that \[\mathbb{E}_{x^{\prime}}[g_{\theta}(x^{\prime})]:=\mathbb{E}_{x^{\prime}}[g_{ \theta}^{(1)}(x^{\prime})]+\mathbb{E}_{x^{\prime}}[g_{\theta}^{(2)}(x^{\prime })]. \tag{9}\] The standard way to solve systems such as Examples 1, 2, and 3 with PINNs, is to minimize the following mean squared error (MSE) loss [23; 9]: \[\mathcal{L}_{\theta}(x):=\big{(}f_{\theta}(x)-\mathbb{E}_{P(x^{\prime}|x)}[g_ {\theta}(x^{\prime})]-y\big{)}^{2}. \tag{10}\] Since computing exact integrals may be impractical, one may contemplate replacing the expectation in Equation (10) with an unbiased estimate, as implemented in NVIDIA's Modulus package [21]. This prompts the following approximate objective: \[\hat{\mathcal{L}}_{\theta}(x):=\mathbb{E}_{\{x^{\prime}_{i}\}_{i=1}^{N}}\bigg{[} \big{(}f_{\theta}(x)-\frac{1}{N}\sum_{i=1}^{N}g_{\theta}(x^{\prime}_{i})-y \big{)}^{2}\bigg{]}. \tag{11}\] We therefore analyze the approximation error: \[\hat{\mathcal{L}}_{\theta}(x)= \mathbb{E}_{\{x^{\prime}_{i}\}}\bigg{[}\bigg{(}\big{(}f_{\theta}( x)-\mathbb{E}_{x^{\prime\prime}}[g_{\theta}(x^{\prime\prime})]-y\big{)}+ \big{(}\mathbb{E}_{x^{\prime\prime}}[g_{\theta}(x^{\prime\prime})]-\frac{1}{N }\sum_{i=1}^{N}g_{\theta}(x^{\prime}_{i})\big{)}\bigg{)}^{2}\bigg{]}. 
\tag{12}\] By decomposing the squared sum, we get \[\hat{\mathcal{L}}_{\theta}(x)=\mathbb{E}_{\{x^{\prime}_{i}\}}\big{[}\big{(}f_{\theta}(x)-\mathbb{E}_{x^{\prime\prime}}[g_{\theta}(x^{\prime\prime})]-y\big{)}^{2}\big{]}+\mathbb{E}_{\{x^{\prime}_{i}\}}\big{[}\big{(}\mathbb{E}_{x^{\prime\prime}}[g_{\theta}(x^{\prime\prime})]-\frac{1}{N}\sum_{i=1}^{N}g_{\theta}(x^{\prime}_{i})\big{)}^{2}\big{]}\] \[-2\mathbb{E}_{\{x^{\prime}_{i}\}}\big{[}\big{(}f_{\theta}(x)-\mathbb{E}_{x^{\prime\prime}}[g_{\theta}(x^{\prime\prime})]-y\big{)}\big{(}\mathbb{E}_{x^{\prime\prime}}[g_{\theta}(x^{\prime\prime})]-\frac{1}{N}\sum_{i=1}^{N}g_{\theta}(x^{\prime}_{i})\big{)}\big{]}. \tag{13}\] Since \(\mathbb{E}_{x^{\prime\prime}}[g_{\theta}(x^{\prime\prime})]=\mathbb{E}_{\{x^{\prime}_{i}\}}[\frac{1}{N}\sum_{i=1}^{N}g_{\theta}(x^{\prime}_{i})]\), the last term in Equation (13) is zero, and we have \[\hat{\mathcal{L}}_{\theta}(x)=\mathcal{L}_{\theta}(x)+\mathbb{V}_{P(\{x^{\prime}_{i}\}|x)}[\frac{1}{N}\sum_{i=1}^{N}g_{\theta}(x^{\prime}_{i})]. \tag{14}\]

Figure 2: Training the same problem as in Figure 1 with delayed targets and \(N=1\). _The top left panel_ shows a diverged training with \(M=100\) in Equation (24). _The lower left panel_ corresponds to \(M=10\), which has a converging training curve even though producing an overly smooth solution. In _the lower right panel_, we set \(\lambda=1\), which allowed setting the simulated \(M\) as \(1000\) while maintaining a stable training loss. In each panel, the left and right heatmaps show the main and the target model predictions, respectively, and the right plots show the training curves. The green curves show the training loss for the delayed target method, and the standard training curves with \(N=1\) and \(100\) are also shown using dotted red and blue lines for comparison, respectively. _The top right panel_ shows an example of deterministic vs. i.i.d. sampling of the surface points \(\{x^{\prime}_{i}\}_{i=1}^{N}\) in the Poisson problem. For each sampled sphere, the surface points and their normal vectors are shown with \(N=100\) samples. With deterministic sampling, the points are evenly spaced to cover the sampling domain.

If the \(x^{\prime}_{i}\) are sampled in an i.i.d. manner, Equation (14) simplifies further to \[\hat{\mathcal{L}}_{\theta}(x)=\mathcal{L}_{\theta}(x)+\frac{1}{N}\mathbb{V}_{P(x^{\prime}|x)}[g_{\theta}(x^{\prime})]. \tag{15}\] The induced excess variance in Equation (15) can bias the optimal solution. As a result, optimizing the approximated loss will prefer smoother solutions over all \(\{x^{\prime}_{i}\}_{i=1}^{N}\) samples. It is worth noting that this bias is mostly harmful due to its parametrized nature; the only link through which this bias can offset the optimal solution is its dependency on \(\theta\). This is in contrast to any non-parametrized stochasticity in the \(y\) term of Equation (10). Non-parametrized terms cannot offset the optimal solutions, since stochastic gradient descent methods are indifferent to them.

## 3 Proposed Solutions

Based on Equation (15), the induced bias in the solution has a direct relationship with the stochasticity of the conditional distribution \(P(x^{\prime}|x)\). If we were to sample the \((x,x^{\prime})\) pairs deterministically, the excess variance in Equation (15) would disappear. However, this condition can be satisfied only by modifying the problem conditions. Next, we introduce three preliminary solutions to this problem: the _deterministic sampling strategy_, the _double-sampling trick_, and the _delayed target method_. **The deterministic sampling strategy**: One approach to eliminating the excess variance term in Equation (14) is to sample the \(\{x^{\prime}_{i}\}_{i=1}^{N}\) set in a way that \(P(\{x^{\prime}_{i}\}|x)\) would be a point mass distribution at a fixed set \(\mathbb{A}_{x}\).
This way, \(P(\{x^{\prime}_{i}\}|x)\) yields a zero excess variance: \(\mathbb{V}[\frac{1}{N}\sum_{i=1}^{N}g_{\theta}(\mathbb{A}_{x}^{(i)})]=0\). This induces the following deterministic loss: \[\hat{\mathcal{L}}_{\theta}^{\text{DET}}(x):=\bigg{(}f_{\theta}(x)-\frac{1}{N}\sum_{i=1}^{N}g_{\theta}(\mathbb{A}_{x}^{(i)})-y\bigg{)}^{2}. \tag{16}\] Although this approach removes the excess variance term in Equation (14) thanks to its deterministic nature, it biases the optimization loss by re-defining it: \(\mathcal{L}_{\theta}(x)\neq\hat{\mathcal{L}}_{\theta}^{\text{DET}}(x)\). The choice of the \(\mathbb{A}_{x}\) set can influence the extent of this discrepancy. One reasonable choice is to evenly space the \(N\) samples to cover the entire sampling domain as uniformly as possible. For a demonstration, Figure 2 shows a number of example sets used for applying the divergence theorem to the Poisson problem. Of course, this sampling strategy can be impractical in high-dimensional spaces, as the number of samples needed to cover the entire sampling domain grows exponentially with the sampling space dimension. **The double-sampling trick**: If we have two independent \(x^{\prime}\) samples, namely \(x^{\prime}_{1}\) and \(x^{\prime}_{2}\), we can replace the objective in Equation (11) with \[\hat{\mathcal{L}}_{\theta}^{\text{DBL}}(x)=\mathbb{E}_{x^{\prime}_{1},x^{\prime}_{2}\sim P(x^{\prime}|x)}\bigg{[}\big{(}f_{\theta}(x)-g_{\theta}(x^{\prime}_{1})-y\big{)}\cdot\big{(}f_{\theta}(x)-g_{\theta}(x^{\prime}_{2})-y\big{)}\bigg{]}. \tag{17}\] It is straightforward to show that \(\hat{\mathcal{L}}_{\theta}^{\text{DBL}}(x)=\mathcal{L}_{\theta}(x)\); the independence of \(g_{\theta}(x^{\prime}_{1})\) and \(g_{\theta}(x^{\prime}_{2})\) removes the induced bias on average. However, this approach requires access to two i.i.d. samples, which may not be plausible in many sampling schemes.
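The bias of Equation (15) and the unbiasedness of the double-sampling objective in Equation (17) can be checked numerically on a toy one-dimensional integrand; the uniform sampling range and the constant residual terms below are illustrative choices, not the paper's experimental settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup for a single collocation point x: f_theta(x) = 2, y = 0,
# g_theta(x') = x' with x' ~ Uniform(0, 2), so E[g] = 1 and Var[g] = 1/3.
f, y, N, trials = 2.0, 0.0, 1, 200_000
true_loss = (f - 1.0 - y) ** 2          # L_theta(x) of Eq. (10) = 1.0
var_g = (2.0 - 0.0) ** 2 / 12.0         # variance of Uniform(0, 2)

# Naive estimator of Eq. (11): square a residual built from an N-sample mean.
xp = rng.uniform(0.0, 2.0, size=(trials, N))
naive = ((f - xp.mean(axis=1) - y) ** 2).mean()

# Double-sampling estimator of Eq. (17): multiply two residuals that use
# independent samples, so the bias term cancels in expectation.
x1 = rng.uniform(0.0, 2.0, size=trials)
x2 = rng.uniform(0.0, 2.0, size=trials)
double = ((f - x1 - y) * (f - x2 - y)).mean()

print(naive)   # close to true_loss + var_g / N = 4/3, the biased value of Eq. (15)
print(double)  # close to true_loss = 1
```

The gap between the two printed values is exactly the excess variance term of Equation (15), which shrinks as \(N\) grows in the naive estimator but is absent from the double-sampling estimator even at \(N=1\).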
In particular, Monte-Carlo samplings used in reinforcement learning do not usually afford the learning method the freedom to choose multiple next samples or to reset to a previous state. Besides reinforcement learning, offline learning using a given collection of samples may make this approach impractical. Figure 3: The results of the deterministic and double sampling techniques on the Poisson problem. The left plots demonstrate the solutions with \(N=1\), while the right plots show the solutions with \(N=100\). The training curves represent the mean squared error to the analytical solution vs. the training epochs. With \(N=1\), the double sampling trick exhibits divergence in training, and the deterministic sampling process yields overly-smooth functions similar to the standard solution in Figure 1. However, with \(N=100\), both the deterministic and double-sampling approaches exhibit improvements. According to the training curves, the delayed target method with \(N=1\) yields the best solutions in this problem. **The delayed target method**: This approach replaces the objective in Equation (11) with \[\mathcal{L}_{\theta}^{\text{DT}}(x)=\mathbb{E}_{P(x^{\prime}|x)}\bigg{[}\big{(}f_{\theta}(x)-g_{\theta^{*}}(x^{\prime})-y\big{)}^{2}\bigg{]}, \tag{18}\] where we have defined \(\theta^{*}:=\operatorname*{arg\,min}_{\tilde{\theta}}\mathcal{L}_{\tilde{\theta}}(x)\). Assuming a complete function approximation set \(\Theta\) (where \(\theta\in\Theta\)), we know that \(\theta^{*}\) satisfies Equation (1) at all points. Therefore, we have \[\nabla_{\theta}\mathcal{L}_{\theta}(x)\big{|}_{\theta=\theta^{*}}=\nabla_{\theta}\mathcal{L}_{\theta}^{\text{DT}}(x)\big{|}_{\theta=\theta^{*}}=0. \tag{19}\] Hence, we can claim \[\theta^{*}=\operatorname*{arg\,min}_{\theta}\mathbb{E}_{x}[\mathcal{L}_{\theta}^{\text{DT}}(x)]=\operatorname*{arg\,min}_{\theta}\mathbb{E}_{x}[\mathcal{L}_{\theta}(x)].
\tag{20}\] In other words, optimizing Equation (18) should yield the same solution as optimizing the true objective \(\mathcal{L}_{\theta}(x)\) in Equation (10). Of course, finding \(\theta^{*}\) is as difficult as solving the original problem. The simplest heuristic replaces \(\theta^{*}\) with a supposedly independent, yet identically valued, version of the latest \(\theta\) named \(\theta^{\text{Target}}\), hence the delayed, detached, and bootstrapped target naming conventions: \[\hat{\mathcal{L}}_{\theta}^{\text{DT}}(x)=\mathbb{E}_{P(\{x^{\prime}_{i}\}|x)}\bigg{[}\big{(}f_{\theta}(x)-\frac{1}{N}\sum_{i=1}^{N}g_{\theta^{\text{Target}}}(x^{\prime}_{i})-y\big{)}^{2}\bigg{]}. \tag{21}\] Our hope would be for this approximation to improve along with \(\theta\) over the course of training. The only practical difference between implementing this approach and minimizing the loss in Equation (10) is to use an incomplete gradient for updating \(\theta\) by detaching the \(g(x^{\prime})\) node from the computational graph in the automatic differentiation software. This naive implementation of the delayed target method can lead to divergence in optimization, as we will show in Section 4 with numerical examples (i.e., Figure 2). Here, we introduce two mitigation factors contributing to the stabilization of such a technique. **Moving target stabilization**: One disadvantage of the aforementioned technique is that it does not define a global optimization objective; even the average target for \(f_{\theta}(x)\) (i.e., \(\mathbb{E}\big{[}g_{\theta^{\text{Target}}}(x^{\prime})\big{]}+y\)) changes throughout the training. Therefore, a naive implementation can risk training instability or even divergence thanks to the moving targets. Figure 4: The solution and performance curves in higher-dimensional Poisson problems. _The left panel_ shows the solution curves for the delayed target (\(N=1\)), the standard (\(N=100\)), and the double-sampling (\(N=100\)) methods.
The top and the bottom rows show 2- and 10-dimensional problems, respectively. In these problems, a single charge is located at the origin, so that the analytical solution is a function of the evaluation point radii \(\|x\|\). The horizontal axis shows the evaluation point radii, and covers 99% of points within the training volumes. _The right chart_ is a performance curve against the problem dimension (lower is better). The MSE values are normalized so that they are comparable across dimensions. These results suggest that (1) higher dimensions make the problem challenging, and (2) delayed targeting with \(N=1\) is comparable to standard trainings with \(N=100\). To alleviate the fast-moving targets issue, prior work suggested fixing the target network for many time-steps [19]. This causes the training trajectory to be divided into a number of episodes, where the target is locally constant and the training is therefore locally stable in each episode. Alternatively, this stabilization can be implemented continuously using Polyak averaging; instead of fixing the target network for a window of \(T\) steps, the target parameters \(\theta^{\text{Target}}\) can be updated slowly with the rule \[\theta^{\text{Target}}\leftarrow\gamma\theta^{\text{Target}}+(1-\gamma)\theta. \tag{22}\] This exponential moving average defines a corresponding stability window of \(T=O(1/(1-\gamma))\). **Prior imposition for highly stochastic targets**: In certain instances, the target \(\frac{1}{N}\sum_{i=1}^{N}g_{\theta^{\text{Target}}}(x^{\prime}_{i})+y\) in Equation (21) can be excessively stochastic, leading to divergence in the training of the delayed target model. In particular, based on the settings defined in Equation (5) for the Poisson problem, we can write \(g_{\theta}(x_{i})=(M-1)f_{\theta}(x_{i})\).
Therefore, we can analyze the target variance as \[\mathbb{V}_{\{x^{\prime}_{i}\}_{i=1}^{N}}\big{[}\frac{1}{N}\sum_{i=1}^{N}g_{\theta^{\text{Target}}}(x^{\prime}_{i})+y\big{]}=\frac{(M-1)^{2}}{N}\mathbb{V}_{\{x^{\prime}_{i}\}_{i=1}^{N}}[f_{\theta^{\text{Target}}}(x^{\prime}_{i})]+\mathbb{V}[y]. \tag{23}\] Ideally, \(M\rightarrow\infty\) in order for Equation (4) to hold. Setting an arbitrarily large \(M\) will lead to unbounded target variances in Equation (23), which can slow down the convergence of the training or result in divergence. In particular, such unbounded variances can cause the main and the target models to drift away from each other, leading to incorrect solutions. One technique to prevent this drift is to impose a Bayesian prior on the main and the target models. Therefore, to discourage this divergence phenomenon, we regularize the delayed target objective of Equation (21) and replace it with \[\hat{\mathcal{L}}_{\theta}^{\text{DTR}}(x):=\hat{\mathcal{L}}_{\theta}^{\text{DT}}(x)+\lambda\cdot(f_{\theta}(x)-f_{\theta^{\text{Target}}}(x))^{2}. \tag{24}\] A formal description of the regularized delayed targeting process is given in Algorithm 1, which covers both the moving target stabilization and the Bayesian prior imposition.

## 4 Experiments

Our first example is a Poisson problem with three unit Dirac-delta charges at \([0,0]\), \([-0.5,-0.5]\), and \([0.5,0.5]\). We also study higher-dimensional Poisson problems with a unit charge at the origin in Figure 4. Our second example looks at finding the magnetic potentials and fields around a current circuit, which defines a singular current density profile \(J\). Finally, to simulate particle evolution dynamics, we consider a Smoluchowski coagulation problem where particles evolve from an initial density. We designed the coagulation kernel \(K\) to induce non-trivial solutions in our solution intervals.
We employ a multi-layer perceptron as our deep neural network, using 64 hidden units in each layer and the \(\tanh\) activation function. We trained our networks using the Adam [12] variant of the stochastic gradient descent algorithm with a learning rate of \(0.001\). For a fair comparison, we afforded each method 1000 point evaluations for each epoch. Due to space limitations, a wealth of ablation studies with more datasets and other experimental details are left to the supplementary material. **The Poisson problem with singular charges**: To show the solution bias, we first train two models: one with \(N=100\) samples per sphere, and another one with only \(N=1\) sample per sphere. These models serve as baselines for later comparisons. Based on Equation (15), the induced solution bias should be lower in the former scenario. Figure 1 shows the solution defined by these models along with the analytical solution and their respective training curves. The model trained with high estimation variance derives an overly smooth solution. We hypothesize that this is due to the excess variance in the loss. This hypothesis is confirmed by matching the training loss and the excess variance curves; the training loss of the model with \(N=1\) is lower bounded by its excess variance, although it successfully finds a solution with smaller excess variance than the \(N=100\) model. An alternative capable of producing similar quality solutions with only \(N=1\) sample would be ideal. To investigate the effect of highly stochastic targets on delayed target models, Figure 2 shows the training results with both \(M=100\) and \(M=10\). The former is unstable, while the latter is stable; this confirms the influence of \(M\) in the convergence behavior of the delayed target trainings. Furthermore, when this divergence happens, a clear drift between the main and the target models can be observed.
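For concreteness, one training step of the regularized delayed-target recipe (the detached target of Equation (21), the Polyak update of Equation (22), and the prior term of Equation (24)) can be sketched in PyTorch as follows; the network shape, \(M\), and hyper-parameter values here are illustrative rather than the exact experimental settings:

```python
import copy
import torch

# Main model rho_theta and its slowly-updated copy rho_{theta_Target}.
model = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))
target = copy.deepcopy(model)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
gamma, lam, M = 0.99, 1.0, 1000.0   # Polyak rate, prior weight, Poisson mixing constant

def train_step(x, x_prime, y):
    f = model(x)                                      # f_theta(x)
    with torch.no_grad():                             # delayed target: the integral term is detached
        g = (M - 1.0) * target(x_prime).mean(dim=0)   # Poisson-style g = (M - 1) f, averaged over N samples
    loss = ((f - g - y) ** 2).mean()                  # Eq. (21), with the target network inside
    loss = loss + lam * ((f - target(x).detach()) ** 2).mean()   # Bayesian prior of Eq. (24)
    opt.zero_grad()
    loss.backward()                                   # incomplete gradient: only f_theta(x) is differentiated
    opt.step()
    with torch.no_grad():                             # Polyak averaging, Eq. (22)
        for p_t, p in zip(target.parameters(), model.parameters()):
            p_t.mul_(gamma).add_((1.0 - gamma) * p)
    return float(loss)
```

With \(\lambda=0\) and \(\gamma\) close to one, this reduces to the naive delayed-target scheme whose divergence is discussed above; the prior term is what keeps the main and target networks from drifting apart.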
Figure 2 shows that imposing the Bayesian prior of Equation (24) can lead to training convergence even with a larger \(M=1000\), which demonstrates the utility of our proposed solution. We also investigated the performance of the deterministic and double-sampling techniques in this problem. Figure 3 shows these results when \(N=1\) and \(N=100\) samples are used for integral estimation. With \(N=1\), the training with the deterministic sampling approach is stable and yields similar results to those seen in Figure 1. The double-sampling trick, on the other hand, exhibits unstable trainings and sub-optimal solutions. We suspect that (a) the singular nature of the analytical solution and (b) the stochasticity profile of the training loss function \(\hat{\mathcal{L}}_{\theta}^{\text{DBL}}(x)\) in Equation (17) are two of the major factors contributing to this outcome. With \(N=100\), both the deterministic and double-sampling trainings yield stable training curves and better solutions. This suggests that both methods can still be considered viable options for training integro-differential PINNs, provided that the specified \(N\) is large enough for these methods to train stably and well. The regularized delayed target training with \(N=1\) sample is also shown in the training curves of Figure 3 for easier comparison. Figure 6: Training results on the Smoluchowski coagulation problem. The top left panel shows the ground truth solution, along with the standard \(N=100\) and \(N=1\) solution heatmaps minimizing the \(\hat{\mathcal{L}}_{\theta}(x)\) in Equation (15). The training loss and the integration variance represent the \(\hat{\mathcal{L}}_{\theta}(x)\) and \(\mathbb{V}_{P(x^{\prime}|x)}[g_{\theta}(x^{\prime})]\) quantities in Equation (15). The top right figure shows the training curve for both of the standard trainings. The bottom left panel shows the delayed target solution heatmaps using \(N=1\) sample with its training curve next to it.
The delayed target method yields better performance than the deterministic or double-sampling approaches in this problem. This may seem to contradict the fact that the double-sampling method enjoys better theoretical guarantees than the delayed target method, since it optimizes a complete gradient. However, our results are consistent with recent findings in off-policy reinforcement learning; even in deterministic environments, where the application of the double-sampling method can be facilitated with a single sample, incomplete gradient methods (e.g., TD-learning) may still be preferable to full gradient methods (e.g., double-sampling) [24, 5, 31, 4]. Intuitively, incomplete gradient methods detach parts of the gradient, depriving the optimizer of full control over the descent direction and helping it avoid over-fitting. In other words, incomplete gradient methods can be viewed as a middle ground between zero-order and first-order optimization, and may be preferable to both. Figure 4 also studies the effect of problem dimensionality on our methods. The results confirm that the problem becomes significantly more difficult in higher dimensions. However, the delayed target models maintain comparable quality to standard trainings with large \(N\). **The Maxwell problem with a rectangular current circuit**: Figure 5 shows the training results for the Maxwell problem. The results suggest that the standard and the deterministic trainings with small \(N\) produce overly smooth solutions. The double-sampling method with small \(N\) improves the solution quality at first, but has difficulty maintaining a stable improvement. However, delayed targeting with small \(N\) seems to produce solutions comparable to the standard training with large \(N\). **The Smoluchowski coagulation problem**: Figure 6 shows the training results for the Smoluchowski coagulation problem.
Similar to the results in Figure 1, the standard training using \(N=1\) sample for computing the residual summations leads to biased and sub-optimal solutions. However, the standard training with \(N=100\) samples suffers less from the effect of bias. The delayed target solution using only \(N=1\) sample produces solution quality comparable to the standard evaluation with \(N=100\) and is not bottlenecked by the integration variance. Figure 7 compares the solution quality for each of the standard and delayed target methods under different problem dimensions. The results suggest that the delayed target solution maintains its quality even in higher-dimensional problems, where the excess variance issue leading to biased solutions may be more pronounced. Figure 7: The solution mean squared error to the ground truth in the 1-, 2-, and 3-dimensional Smoluchowski coagulation problem. The vertical axis shows the solution error, and the horizontal axis shows the training epochs. The standard solutions were trained by the ordinary MSE loss \(\mathcal{L}_{\theta}(x)\) in Equation (10) with \(N=1\) and \(N=100\) samples. The delayed target solution used \(N=1\) sample, yet produced slightly better results than the standard method with \(N=100\).

## 5 Conclusion

In this work, we investigated the problem of learning PINNs in partial integro-differential equations. We presented a general framework for the problem of learning from integral losses, and theoretically showed that naive approximations of the parametrized integrals lead to biased loss functions due to the induced excess variance term in the optimization objective. We confirmed the existence of this issue in numerical simulations. Then, we proposed three solutions to account for this issue. In particular, we detailed a delayed targeting recipe for this class of problems, and verified that it can solve these problems effectively without requiring many samples for integral evaluations.
Our numerical results support the utility of our proposed method on three challenging problems: (1) a Poisson problem with singular charges, (2) an electromagnetic problem under a Maxwell equation, and (3) a Smoluchowski coagulation problem. The limitations of our work include its narrow scope of learning PINNs; this work could have broader applications in other areas of machine learning. Also, future work should consider applications of the proposed method to more problem classes, both in scientific and traditional machine learning areas. Developing adaptive processes for setting each method's hyper-parameters is another worthwhile future endeavor.
2303.16459
GNNBuilder: An Automated Framework for Generic Graph Neural Network Accelerator Generation, Simulation, and Optimization
There are plenty of graph neural network (GNN) accelerators being proposed. However, they highly rely on users' hardware expertise and are usually optimized for one specific GNN model, making them challenging for practical use. Therefore, in this work, we propose GNNBuilder, the first automated, generic, end-to-end GNN accelerator generation framework. It features four advantages: (1) GNNBuilder can automatically generate GNN accelerators for a wide range of GNN models arbitrarily defined by users; (2) GNNBuilder takes standard PyTorch programming interface, introducing zero overhead for algorithm developers; (3) GNNBuilder supports end-to-end code generation, simulation, accelerator optimization, and hardware deployment, realizing a push-button fashion for GNN accelerator design; (4) GNNBuilder is equipped with accurate performance models of its generated accelerator, enabling fast and flexible design space exploration (DSE). In the experiments, first, we show that our accelerator performance model has errors within $36\%$ for latency prediction and $18\%$ for BRAM count prediction. Second, we show that our generated accelerators can outperform CPU by $6.33\times$ and GPU by $6.87\times$. This framework is open-source, and the code is available at https://github.com/sharc-lab/gnn-builder.
Stefan Abi-Karam, Cong Hao
2023-03-29T05:08:21Z
http://arxiv.org/abs/2303.16459v2
GNNBuilder: An Automated Framework for Generic Graph Neural Network Accelerator Generation, Simulation, and Optimization ###### Abstract There are plenty of graph neural network (GNN) accelerators being proposed. However, they highly rely on users' hardware expertise and are usually optimized for one specific GNN model, making them challenging for practical use. Therefore, in this work, we propose GNNBuilder, the first automated, generic, end-to-end GNN accelerator generation framework. It features four advantages: (1) GNNBuilder can automatically generate GNN accelerators for a wide range of GNN models arbitrarily defined by users; (2) GNNBuilder takes standard PyTorch programming interface, introducing zero overhead for algorithm developers; (3) GNNBuilder supports end-to-end code generation, simulation, accelerator optimization, and hardware deployment, realizing a push-button fashion for GNN accelerator design; (4) GNNBuilder is equipped with accurate performance models of its generated accelerator, enabling fast and flexible design space exploration (DSE). In the experiments, first, we show that our accelerator performance model has errors within \(36\%\) for latency prediction and \(18\%\) for BRAM count prediction. Second, we show that our generated accelerators can outperform CPU by \(6.33\times\) and GPU by \(6.87\times\). This framework is open-source, and the code is available at [https://anonymous.dopen.science/r/gnn-builder-83B4/](https://anonymous.dopen.science/r/gnn-builder-83B4/). ## I Introduction Graph Neural Networks (GNNs) are a powerful and popular tool for solving learning tasks where the data can be represented as a graph. 
Among different applications, GNNs can be used for node-level, edge-level, and graph-level tasks, such as drug discovery [1], recommender systems [2], social network analysis [3], traffic forecasting [4], electronic health records analysis [5], scene graph understanding [6], electronic design automation [7], natural language processing [8], autonomous driving [9], and high-energy physics [10]. Among these applications, some have real-time constraints for GNN inference and require hardware acceleration. One example is autonomous driving systems that use GNNs to process LIDAR point cloud data [11]. Another prominent example is in high-energy physics, where GNNs are used for real-time particle detection [12] and jet tagging [13], which must be processed within several nanoseconds. Given the acceleration needs for GNN inference, many GNN accelerators have been proposed. Examples include the earliest ASIC accelerators proposed by Auten et al. [14], HyGCN [15], and EnGN [16], as well as more recent accelerators such as AWB-GCN [17], BoostGCN [18], I-GCN [19], GCNAX [20], Rubik [21], and GraphACT [22]. Among them, Rubik and GraphACT aim to accelerate GCN training using ASIC and FPGA, respectively. Despite the great success of GNN accelerators, there are still significant **limitations**. First, _existing GNN accelerators are model-specific but not generic_. Specifically, most GNN accelerators focus on only one or two of the most popular GNN models, such as Graph Convolution Network (GCN) [23] or GraphSage [24], and provide fixed accelerator structures, fixed GNN layer types, activations, and other design choices that are specific to the implemented model. These accelerators are not generic and _cannot handle advanced GNNs such as anisotropic GNNs, GNNs with edge embeddings, or complicated aggregation functions_ [25, 26, 27].
The fundamental reason is that most existing GNN accelerators simplify GNN computations to be a sequence of general or sparse matrix multiplications, which _does not hold true_ for those advanced GNNs. Second, _most of the accelerators are hard-coded and require extensive hardware expertise to adapt to new GNN models_. There are no existing tools that can generate GNN accelerators automatically, optimally, and without any hardware knowledge. There are only two existing works that can support automated accelerator generation: DeepBurning-GL [28] and HP-GNN [29]. DeepBurning-GL targets inference acceleration but is limited to a fixed GCN or GraphSAGE model. HP-GNN targets training acceleration but not real-time inference. Moreover, HP-GNN proposes its own model API and lacks the flexibility to support a wide range of GNN architectures and different features. Table I summarizes the limitations of DeepBurning-GL and HP-GNN. Therefore, researchers and practitioners cannot explore the best GNN model for their target applications in software and easily deploy their application-specific models to hardware for acceleration. Motivated by the existing limitations of GNN accelerator designs and tools, we propose GNNBuilder, a generic, feature-rich, and extensible framework for end-to-end GNN accelerator generation, simulation, optimization, and deployment on FPGAs with bitstreams. To be generic, we follow the _message passing mechanism_ of GNN models, which can express almost all types of GNN models at the theoretical formulation level, as stated by a recent work [30]. To be extensible, we directly take _standard PyTorch_ as the programming language, which allows programmers to design their own GNN models freely and can be directly used for training. * **Generic: wide range of GNN model support**. Our proposed framework, GNNBuilder, offers a wide range of support for GNN models through an explicit message passing approach.
In addition to supporting state-of-the-art models such as GCN, GIN, GraphSAGE, and PNA, GNNBuilder allows for the customization of various features such as layer type, activation, quantization, aggregation, and pooling. This level of customization is not offered in HP-GNN, as summarized in Table I. * **Extensibility: Interoperability with PyTorch**. GNNBuilder is the first work that allows users to define their model architectures freely in native PyTorch using a parameterizable GNNModel PyTorch module. This allows users to seamlessly integrate accelerator design as part of existing deep learning workflows. Therefore, GNNBuilder not only supports standard GNNs (as listed in Table II) but can extend to almost all customized GNN models supported in PyTorch Geometric. * **Support for node-level and graph-level tasks + node and edge input features**. Our _GNNBuilder_ supports node-level and graph-level task outputs, as well as node-level and edge-level feature inputs. This allows GNNBuilder to support a wide range of acceleration applications, including drug screening, high-energy physics, and point cloud processing. * **Accelerator Design Space Exploration (DSE) and Optimization**. _GNNBuilder_ provides tools to help designers automatically select the best configurations for the generated accelerator, such as hardware parallelism, resource allocation, and fixed-point precision, instead of manual exploration. This automated DSE can find, in seconds as opposed to days, configurations that achieve the best latency under fixed resource constraints, with a trade-off in model accuracy. * **Open-source Python API with end-to-end workflow**. Our GNNBuilder provides an open-source Python library with APIs that allow users to define their own models from development to deployment in a push-button fashion with zero hardware expertise required.
It is an end-to-end workflow including: hardware-compatible simulation, testbench build and execution, automated hardware code generation and synthesis, and deployment on FPGA with host code. * **Superior performance against CPU and GPU**. GNNBuilder generates high-performance accelerators on FPGA that outperform PyTorch Geometric CPU and GPU baselines on various datasets by **6.33\(\times\)** and **6.87\(\times\)**, respectively. ## II Related Work and Motivations ### _Related Work_ #### II-A1 GNN Accelerators and Graph Accelerators The increasing use of Graph Neural Networks (GNNs) in real-time and large-data applications in the research community and industry has led to numerous GNN accelerator studies. A recent survey [31] provides an overview of GNN accelerators for CPU, GPU, ASIC, FPGA, and heterogeneous platforms. Some specific GNN accelerators include Auten et al. [32], HyGCN [15], AWB-GCN [17], EnGN [16], GRIP [33], GCNAX [20], Rubik [21], GraphACT [22], Boost-GCN [18], and I-GCN [19]. These accelerators explore different implementations and model-specific design choices to achieve speedups in GNN inference and training. More recent accelerators, such as GenGNN [34] and FlowGNN [35], also adopt a GNN model-agnostic approach for inference acceleration without sacrificing performance. #### II-A2 GNN Accelerator Automation Some existing works explore the automated generation of hardware accelerators for GNNs. One key work is DeepBurning-GL [28], which is focused on generating GNN inference accelerators for CPU-FPGA systems such as Xilinx's Alveo U50. However, this work only supports a fixed subset of GCN-based architectures. Another work, HP-GNN [29], also targets acceleration but for GNN training on CPU-FPGA platforms. HP-GNN also supports a subset of GCN-based and GraphSAGE-based architectures.
### _Limitations_ #### II-B1 GNN Accelerators Existing GNN acceleration approaches primarily focus on fixed model architectures for inference and often support only isotropic models, which allows them to leverage sparse matrix multiplication acceleration techniques. These approaches generally implement GCN or GIN architectures by simplifying computations with sparse matrix multiplications (SpMM) and general matrix multiplications (GEMM). However, advanced GNNs cannot be reduced to mere matrix multiplications and require specialized graph preprocessing and model computation patterns that are not easily generalizable to anisotropic models. The limitations of these approaches stem from their focus on optimization techniques that hinder generalization to more advanced GNN architectures. In contrast, recent works such as GenGNN and FlowGNN propose hardware architectures that can accommodate advanced model architectures with anisotropic message passing support by adopting an explicit message passing hardware dataflow. This offers a more flexible solution for a broader range of GNN models. #### II-B2 GNN Accelerator Automation Current accelerator automation approaches, as shown in Table I, have limitations in generalizing to advanced GNN architectures. DeepBurning-GL and HP-GNN allow end-to-end code generation but are limited to GCN and GraphSAGE models. They lack support for anisotropic GNNs like PNA, expressive GNNs such as GIN, and features like mean and variance neighbor pooling, arbitrary activation functions, skip connections, sum/mean/max global pooling, and MLP prediction heads. Additionally, they do not offer simple fixed-point quantization or code generation for fixed-point and floating-point testbenches, which are essential for rapid debugging. These limitations restrict the applicability of existing frameworks for researchers and practitioners working with diverse GNN models.
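The distinction between the SpMM formulation and explicit message passing can be made concrete on a toy graph; this sketch (with made-up features and an isotropic message function) shows that the two views coincide for GCN-style aggregation, while anisotropic models only fit the message-passing form:

```python
import numpy as np

rng = np.random.default_rng(0)
edges = [(0, 1), (1, 0), (1, 2), (2, 3)]           # toy directed edge list (src, dst)
X = rng.random((4, 8)).astype(np.float32)           # node features
W = rng.random((8, 8)).astype(np.float32)           # layer weight matrix

# (a) Isotropic GCN-style layer as SpMM + GEMM: out = A @ X @ W.
A = np.zeros((4, 4), dtype=np.float32)
for s, d in edges:
    A[d, s] = 1.0
out_spmm = A @ X @ W

# (b) Identical result via explicit message passing: one message per edge,
# summed at the destination node. An anisotropic model would simply let
# `message` depend on both endpoints (or on edge features), which the SpMM
# form above cannot express.
def message(x_src, x_dst):
    return x_src                                    # isotropic: ignores the destination

out_mp = np.zeros_like(X)
for s, d in edges:
    out_mp[d] += message(X[s], X[d])
out_mp = out_mp @ W

assert np.allclose(out_spmm, out_mp, atol=1e-4)
```

Replacing `message` with, say, `np.concatenate([x_src, x_dst])` fed through a small MLP is the kind of anisotropic computation that breaks the SpMM reduction but remains a one-line change in the message-passing view.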
## III GNNBuilder Framework Overview

### _GNNBuilder Components_

GNNBuilder aims to provide users with a streamlined process to design, implement, validate, and optimize GNN models, transitioning from standard PyTorch models to FPGA bitstreams. As depicted in Fig. 1, GNNBuilder consists of five components:

**Compiler front-end** parses the native PyTorch GNN model definition, including the number of GNN layers, layer type, activation type, data precision, pooling type, aggregation type, and MLP definition.

**Code generator** builds upon a library of pre-defined hardware accelerator templates that adopt the message passing mechanism, making it compatible with various GNN types. We generate High-Level Synthesis (HLS) code targeting FPGAs, supported by the Xilinx Vitis HLS tool [36].

**Design space exploration and performance model** enables automated DSE for accelerator generation, encompassing hardware parallelism, resource allocation, and quantization (data precision).

**Simulation and testbench** facilitates transparent hardware-compatible simulation using automatically generated testbenches, ensuring the correctness of accelerator functionality. It also generates plain C++ code for "true" quantization simulation, accurately reflecting on-FPGA quantization accuracy.

**Hardware synthesis and deployment** automatically generates hardware synthesis scripts, synthesizes FPGA bitstreams, and produces host code for executing the bitstream.

Table II presents the representative GNNs supported by our framework. Although these are examples, GNNBuilder can flexibly accommodate a wide range of customized GNN models, including residual and skip connections, arbitrary quantization, aggregation functions, graph attention, activation, global pooling, and MLP heads. Such user-defined features can be naturally expressed using PyTorch, granting GNNBuilder exceptional extensibility.
### _Programming Model and User APIs_

Table III showcases the user APIs provided by GNNBuilder, and Listing 1 illustrates an example of the user interface for a customized GNN model. To begin, a user defines a GNNModel instance, incorporating an MLP and a xxxxConv_GNNB module (e.g., PNAConv_GNNB). GNNBuilder offers wrapper classes for each graph convolution layer, enabling the user to specify parallelism factors p_in and p_out. The higher-level GNNModel supports arguments for defining architecture parameters and separate parallelism factors for the GNN head (gnn_p_in, gnn_p_hidden, gnn_p_out) and the MLP head (p_in, p_hidden, p_out). The user can train and manipulate the GNNModel instance as a standard PyTorch module.

A user can then define a GNNBuilder Project instance. The Project class has several arguments to define build paths, the GNNModel model instance, the PyTorch Geometric dataset for the model task, max_nodes and max_edges, numerical precision, and the average number of nodes, edges, and node in-degree for synthesis runtime estimation. After creating a Project instance, the user can call the code generation functions to produce the model kernel HLS code, the kernel testbench code and data, the testbench makefile, and the Vitis HLS build script. Post code generation, the user can call build_and_run_testbench() to build and execute the testbench, and run_vitis_hls_synthesis() to execute the Vitis HLS synthesis process. These execution scripts also return data for the testbench runtime, mean absolute error (MAE), and synthesis latency / resource usage.

```
import torch.nn as nn
from torch_geometric.datasets import MoleculeNet

import gnnbuilder as gnnb

dataset = MoleculeNet(root="./tmp/MoleculeNet", name="hiv")

model = gnnb.GNNModel(
    graph_input_feature_dim=dataset.num_features,
    graph_input_edge_dim=dataset.num_edge_features,
    gnn_hidden_dim=16,
    gnn_num_layers=2,
    gnn_output_dim=8,
    gnn_conv=gnnb.SAGEConv_GNNB,
    gnn_activation=nn.ReLU,
    gnn_skip_connection=True,
    global_pooling=gnnb.GlobalPooling(["add", "mean", "max"]),
    mlp_head=gnnb.MLP(
        in_dim=8 * 3, out_dim=dataset.num_classes, hidden_dim=8,
        hidden_layers=3, activation=nn.ReLU, p_in=8, p_hidden=4, p_out=1,
    ),
    output_activation=None,
    gnn_p_in=1,
    gnn_p_hidden=8,
    gnn_p_out=4,
)

MAX_NODES = 600
MAX_EDGES = 600
num_nodes_avg, num_edges_avg = gnnb.compute_average_nodes_and_edges(dataset)
degree_avg = gnnb.utils.compute_average_degree(dataset)

proj = gnnb.Project(
    "gnn_model",
    model,
    "classification_integer",
    VITIS_HLS_PATH,
    BUILD_DIR,
    dataset=dataset,
    max_nodes=MAX_NODES,
    max_edges=MAX_EDGES,
    num_nodes_guess=num_nodes_avg,
    num_edges_guess=num_edges_avg,
    degree_guess=degree_avg,
    fpx=FPX(32, 16),
    fpga_part="xcu280-fsvh2892-2L-e",
    n_jobs=32,
)

proj.gen_hw_model()
proj.gen_testbench()
proj.gen_makefile()
proj.gen_vitis_hls_tcl_script()
proj.gen_makefile_vitis()

tb_data = proj.build_and_run_testbench()
print(tb_data)
synth_data = proj.run_vitis_hls_synthesis()
print(synth_data)
```

Listing 1: Example usage of GNNBuilder Framework.

## IV GNNBuilder Model Architecture

Each GNN model in the GNNBuilder framework is based on a parameterized GNNModel (subclass of torch.nn.Module) architecture, designed to work seamlessly within the PyTorch ecosystem. GNNBuilder supports node-, edge-, and graph-level tasks using a simple linear model architecture (Fig. 2). The **GNN Backbone** consists of graph convolution layers, activations, and skip connections, with customizable parameters. Supported GNNConv layers include GCN, GraphSAGE, GIN, and PNA. For edge and node tasks, users can remove the pooling and MLP head. The **Global Graph Pooling** module aggregates node embeddings using sum, mean, or max pooling. The **MLP Head** transforms the pooled output for the specified task, with customizable input/output embedding sizes, hidden layers, and activation functions.
These models are defined using existing PyTorch and PyTorch Geometric layers, with user-provided keyword arguments for customization. The template-based compiler matches components from a GNNModel class and its parameters to code templates in the HLS code generation output.

## V Accelerator Architecture

Our accelerator implementation adopts an explicit message-passing architecture that implements dataflow optimization within the GNN Backbone, individual GNN Conv. layers, and the MLP head. This minimizes latency while implementing efficient streaming data movement using FIFO streams rather than memory buffers. This is the main optimization that shows the best performance gains.

Fig. 1: Workflow of the GNNBuilder framework.

### _Message Passing and Graph Convolution Kernels_

Inspired by GenGNN [34] and FlowGNN [35], we adopt an explicit message-passing architecture for graph convolution layers, allowing us to support GNN layers like PNA, which are not compatible with traditional SpMM accelerator approaches. For each node, the operations illustrated in Figure 3 are performed. We first gather the node's neighbor indices using the neighbor table and offset table. Then, we iterate through each neighbor index to load its associated embedding from the input node embedding table, transform the embedding with \(\phi(\cdot)\), and aggregate it with a partial aggregation. After processing all neighbors, we finalize the partial aggregation, combine it with the current node embedding, and transform it with the apply function \(\gamma(\cdot)\). The resulting computed embedding is then written to the output node embedding table. The functions \(\phi(\cdot)\), \(\gamma(\cdot)\), and the aggregation(s) used depend on the specific layer being implemented. Kernels for GCN, GraphSAGE, GIN, and PNA layers are included in the initial GNNBuilder kernel library.
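The per-node operations described above can be sketched in plain Python. This is only an illustrative software model of the dataflow, not the HLS kernel: `phi`, `gamma`, and a sum aggregation stand in for the layer-specific transform, apply function, and partial aggregation.

```python
def message_passing_layer(node_emb, neighbor_table, offset_table, phi, gamma):
    """One graph-convolution pass over all nodes.

    node_emb: list of feature vectors (one per node)
    neighbor_table: flat list of neighbor indices, grouped per node
    offset_table: offset_table[i] is the start of node i's neighbor block
    phi / gamma: per-neighbor transform and node apply function
    """
    num_nodes = len(node_emb)
    dim = len(node_emb[0])
    out = []
    for i in range(num_nodes):
        # Gather this node's block of neighbor indices.
        start = offset_table[i]
        end = offset_table[i + 1] if i + 1 < len(offset_table) else len(neighbor_table)
        agg = [0.0] * dim  # running partial aggregation (sum)
        for j in neighbor_table[start:end]:
            msg = phi(node_emb[j])  # transform the neighbor embedding
            agg = [a + m for a, m in zip(agg, msg)]
        # Combine with the current node embedding via the apply function.
        out.append(gamma(node_emb[i], agg))
    return out


# Toy graph: node 0 neighbors {1, 2}; nodes 1 and 2 each neighbor {0}.
node_emb = [[1.0], [2.0], [3.0]]
neighbor_table = [1, 2, 0, 0]
offset_table = [0, 2, 3]
out = message_passing_layer(
    node_emb, neighbor_table, offset_table,
    phi=lambda h: h,                      # identity transform
    gamma=lambda h, a: [h[0] + a[0]],     # self + aggregated neighbors
)
# out == [[6.0], [3.0], [4.0]]
```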
Developing custom kernels is possible by contributing the appropriate hardware kernel code for the layer of interest to GNNBuilder's kernel template library, as well as providing a matching GNNConv class that links the kernel in GNNBuilder's Python library. This can be accomplished through a pull request with minimal effort, and the rest of our framework remains agnostic to the specific types a user intends to add support for.

### _Other Components_

_Graph Data_: In the model kernel, buffers depend on the number of nodes (num_nodes) or edges (num_edges), with buffer sizes set to an upper bound determined by the MAX_NODES and MAX_EDGES parameters in a GNNBuilder Project instance. Model kernels require input graphs in **CO**ordinate (COO) format with an input node feature table. The COO matrix is a MAX_EDGES \(\times\) 2 integer array, while the input node feature table is a MAX_NODES \(\times\) input_dim fixed-point datatype array. In-degree and out-degree buffers have a size of MAX_NODES. Additionally, a neighbor table stores each node's neighbors, and a neighbor offset table indexes each node's block of neighbors, sized MAX_EDGES and MAX_NODES, respectively.

_Degree + Neighbor Table Computation_: Before model computation, the degree table of the input graph must be calculated. Node degrees are used by various graph convolutions for normalization purposes. Since these values are only known at runtime, the in-degree and out-degree tables need to be computed on-the-fly in the accelerator for each input graph. The COO format of input graphs allows for computation within the bounds of num_edges. Subsequently, the neighbor table and neighbor offset table are computed simultaneously using two loops: one iterating over num_edges and the other over num_nodes.

_Partial Aggregations_: To efficiently aggregate neighbor embeddings with constant memory (\(O(1)\) space complexity), GNNBuilder defines single-pass algorithms for aggregation that avoid the need for buffering all neighbor embeddings in an intermediate buffer, which would consume substantial BRAMs. GNNBuilder supports sum, min, max, mean, variance, and standard deviation aggregations. Each aggregation is associated with a data structure for storing partial and final aggregation data. For variance, Welford's one-pass algorithm [37] is used to compute variance efficiently.

Fig. 2: The GNNBuilder model architecture for graph-level tasks.

Fig. 3: The high-level hardware kernel architecture for GNNConv layers.

_Linear Layer_: GNNBuilder implements tiled matrix multiplication for linear layers, enabling hardware parallelization. The parallelization factor for each linear layer is controlled by the BLOCK_SIZE_IN and BLOCK_SIZE_OUT template arguments for the linear kernel function. These arguments determine the partition factors for input, weight, and bias arrays, thus controlling the parallelism of the multiply-accumulate (MAC) operations.

_Global Pooling_: GNNBuilder supports sum, mean, and max global graph pooling, aggregating node embeddings across all nodes into a single embedding of the same size. Multiple pooling methods can be combined using concatenation.

_Activations_: GNNBuilder supports ReLU, Sigmoid, Tanh, and GELU [38] activations, implemented using fixed-point math functions from the Vitis HLS fixed-point math library.

## VI Accelerator Generation and Implementation

Automated kernel generation is a key advantage of GNNBuilder, allowing seamless conversion of software models defined using PyTorch into hardware accelerators. This approach reduces development friction by eliminating the need for customized APIs. GNNBuilder efficiently generates code through dynamic introspection of software model objects, combined with a template-based compiler and a pre-defined kernel library.
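The single-pass partial aggregation from Section V can be made concrete with Welford's online algorithm: streaming the neighbor values once while keeping only a constant-size (count, mean, M2) state. This is a plain-Python illustration (population variance assumed), not the fixed-point HLS kernel.

```python
def welford_aggregate(values):
    """Stream values once, keeping only (count, mean, M2) as O(1) state."""
    count, mean, m2 = 0, 0.0, 0.0
    for x in values:
        count += 1
        delta = x - mean
        mean += delta / count
        m2 += delta * (x - mean)  # uses the *updated* mean
    variance = m2 / count if count else 0.0  # population variance
    return mean, variance


mean, var = welford_aggregate([1.0, 2.0, 3.0, 4.0])
# mean == 2.5, var == 1.25, with no buffer of the streamed values
```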
### _Kernel Code Generation_

GNNBuilder is built on a template-based compiler that generates C++ HLS code for the top-level model kernel and its associated header directly from a PyTorch model. This enables conditional and loop control flows for template blocks, useful for features like skip-connections, double-buffer array selection, and mapping layer kernel calls in the correct order with accurate input/output sizes. The parameterized structure of the GNNModel allows GNNBuilder to match appropriate function calls to corresponding kernels from the C++ header-only template library. This approach is extensible, enabling users to add support for other layers, aggregations, or activations by creating associated kernels in the template library and updating the Jinja template.

### _Hardware Simulation and Verification Testbenches_

GNNBuilder allows designers to generate and build C++ testbenches for their models, facilitating rapid testing of fixed-point quantizations without synthesizing designs. The testbench code, model parameters, dataset graphs, true outputs, and PyTorch model outputs are exported as binary files. During runtime, the testbench reads these files, loads weights into the model kernel, evaluates the kernel on all dataset inputs, and compares the output to the PyTorch model outputs. The testbench calculates verification metrics, such as the mean absolute error between the PyTorch-generated model output and the kernel output, and the averaged kernel runtime. These values are written to text files during runtime. For fixed-point models, the testbench ensures accurate fixed-point representations of input graphs and model parameters. By utilizing the Vitis HLS fixed-point library [36], functional equivalence with hardware modeling is maintained. The floating-point data from PyTorch is cast to the user-specified fixed-point format in the testbench.
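To make the template idea concrete, here is a minimal stdlib stand-in for the template-based compiler. GNNBuilder itself uses Jinja templates over a C++ kernel library; the `*_layer<...>` call signature below is hypothetical and only illustrates emitting an ordered sequence of kernel calls with accurate input/output sizes from model parameters.

```python
from string import Template

# Hypothetical C++ kernel-call template (illustration only).
CALL_TMPL = Template("${conv}_layer<${in_dim}, ${out_dim}>(emb_in, emb_out);")


def emit_backbone(conv, dims):
    """Emit one kernel call per layer; dims=[16, 32, 8] -> two layers."""
    lines = []
    for in_dim, out_dim in zip(dims, dims[1:]):
        lines.append(CALL_TMPL.substitute(conv=conv, in_dim=in_dim, out_dim=out_dim))
    return "\n".join(lines)


code = emit_backbone("gcn", [16, 32, 8])
# gcn_layer<16, 32>(emb_in, emb_out);
# gcn_layer<32, 8>(emb_in, emb_out);
```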
### _Hardware Deployment on FPGA_

Using the run_vitis_hls_synthesis() function, users can build a synthesized accelerator (Verilog RTL code) and execute the implementation flow to generate Vivado IP blocks (.zip) or Vitis kernels (.xo) for hardware deployment. This streamlines the workflow from software model to fully implemented design. GNNBuilder supports implementing Vitis kernels on platforms like Alveo U50 and Alveo U280, including full bitstream generation (.xclbin) and a host code testbench for on-chip graph dataset evaluation. This testbench, similar to the C++ testbench, uses Xilinx's runtime library, XRT, and OpenCL for FPGA interfacing from the host CPU.

## VII Performance Model and Design Space Exploration

### _FPGA Model Implementation_

We implemented our models on the Xilinx Alveo U280 FPGA accelerator at 300 MHz using the Vitis HLS [36] and Vitis [39] tools from Xilinx. Our framework directly provides the generated HLS code to the synthesis tools, accompanied by suitable build scripts.

### _Hardware Performance Model_

To assess the effectiveness of runtime modeling in DSE, we examine direct-fit models for latency and BRAM prediction, comparing them to Vitis HLS-reported post-synthesis values. We focus on BRAM usage for resource modeling, as it is the primary constraint-violating resource. The direct-fit latency and BRAM models are random forest regressors, fitted on datasets of model configurations and their post-synthesis values. Empirical testing showed that random forests outperformed linear/polynomial models, support vector machines, and gradient boosting tree models in avoiding overfitting. These direct-fit models necessitate a pre-synthesized design database. As the number of possible configurations is too large for brute-force exploration, sparsely sampling the design space enables fitted models to interpolate between sampled designs, providing accurate estimates for unseen configurations.
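The fitted estimators are later scored with the mean absolute percent error (MAPE) between true post-synthesis metrics and predictions. A minimal reference implementation (plain-Python sketch, not GNNBuilder's evaluation code; it assumes no true value is zero):

```python
def mape(y_true, y_pred):
    """Mean absolute percent error between true and predicted metrics."""
    assert len(y_true) == len(y_pred) and y_true, "need equal, non-empty inputs"
    # Division by y_true assumes all true metrics are non-zero.
    return 100.0 * sum(abs(t - p) / abs(t) for t, p in zip(y_true, y_pred)) / len(y_true)


# Predicting 110 and 180 for true latencies 100 and 200 -> 10% MAPE.
err = mape([100.0, 200.0], [110.0, 180.0])
```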
### _Design Space Exploration_

Instead of requiring users to run HLS builds for each design configuration or create datasets, we provide serialized trained versions of the direct-fit models described earlier, making it feasible to perform brute-force or random sampling of the model configuration space. Evaluating these direct-fit models takes milliseconds, compared to minutes for HLS synthesis. This reduction in performance-prediction runtime enables users to develop intelligent co-design tools for real-time optimization, paving the way for train-time model sparsity, quantization, and neural architecture search, among other possibilities.

## VIII Experimental Setup

### _Hardware Performance Model_

The accuracy of the runtime and BRAM models is assessed against a database of 400 synthesized designs, randomly sampled from a configuration space of model parameters (see Listing 2). For the fitted models, a random forest regressor with 10 estimators is used. The models are evaluated using the mean absolute percent error (MAPE) between the true post-synthesis metrics and predicted metrics. To examine overfitting, a 5-fold cross-validation (CV) is conducted, averaging the test MAPE for each fold to obtain the final cross-validation MAPE.

```
QM9_DATASET = QM9(root="./tmp/QM9").index_select(list(range(1000)))
DATASET_IN_DIM = QM9_DATASET.num_features
DATASET_OUT_DIM = QM9_DATASET[0].y.ravel().shape[0]

MEDIAN_NODES, MEDIAN_EDGES = compute_median_nodes_and_edges(
    QM9_DATASET, round_val=True)
MEDIAN_DEGREE = compute_median_degree(QM9_DATASET)

MAX_NODES = 600
MAX_EDGES = 600

CONVS = ["gcn", "gin", "pna", "sage"]
GNN_HIDDEN_DIM = [64, 128, 256]
GNN_OUT_DIM = [64, 128, 256]
GNN_NUM_LAYERS = [1, 2, 3, 4]
GNN_SKIP_CONNECTIONS = [True, False]
MLP_HIDDEN_DIM = [64, 128, 256]
MLP_NUM_LAYERS = [1, 2, 3, 4]
GNN_P_HIDDEN = [2, 4, 8]
GNN_P_OUT = [2, 4, 8]
MLP_P_IN = [2, 4, 8]
MLP_P_HIDDEN = [2, 4, 8]
```

Listing 2: Design Space used for Hardware Performance Model Dataset

### _Accelerator Performance Evaluation_

This section evaluates various model architecture configurations across multiple datasets, comparing the proposed hardware implementations:

* **PyG-CPU**: A PyTorch Geometric CPU model
* **PyG-GPU**: A PyTorch Geometric GPU model
* **CPP-CPU**: A C++ floating-point CPU model
* **FPGA-Base**: Proposed hardware model without parallelism
* **FPGA-Parallel**: Proposed hardware model with parallelism

Performance is analyzed using a fixed GNN model with varying GNNConv layers (GCN, GraphSAGE, GIN, and PNA), across graph-level task datasets such as QM9, ESOL, FreeSolv, Lipophilicity, and HIV from MoleculeNet [1]. The CPU models are evaluated on an Intel Xeon Gold 6226R, while the GPU models are assessed on an NVIDIA RTX A6000. The hardware models (FPGA-Base and FPGA-Parallel) are implemented as described in Section VII-A. Each baseline is evaluated on a batch size of 1, with the runtimes for CPU and GPU implementations computed by averaging the runtime of the first 1000 graphs of each dataset (or the complete dataset if it contains fewer than 1000 graphs). FPGA implementations' runtimes are obtained from the worst-case estimate provided by Vitis HLS after synthesis. The architecture configuration in Listing 3 is used for all models.
The FPGA-Parallel implementations employ different parallelism factors for GCN, SAGE, and GIN models (gnn_p_in=1, gnn_p_hidden=16, gnn_p_out=8, p_in=8, p_hidden=8, p_out=1), while PNA models use gnn_p_hidden=8 and gnn_p_out=8. These models utilize <16, 10> bit fixed-point data representations. FPGA-Base implementations have parallel factors set to \(1\) and implement node features using <32, 16> bit fixed-point types.

```
model = gnnb.GNNModel(
    graph_input_feature_dim=dim_in,
    graph_input_edge_dim=0,
    gnn_hidden_dim=128,
    gnn_num_layers=6,
    gnn_output_dim=64,
    gnn_conv=conv,
    gnn_activation=nn.ReLU,
    gnn_skip_connection=True,
    global_pooling=gnnb.GlobalPooling(["add", "mean", "max"]),
    mlp_head=MLP(
        in_dim=64 * 3, out_dim=dim_out, hidden_dim=64,
        hidden_layers=4, activation=nn.ReLU,
    ),
    output_activation=None,
)
```

Listing 3: Model Arch. for Benchmark

## IX Results

### _Analytical Performance Model_

The results of fitting the latency and BRAM models on our database of generated designs are illustrated in Figure 4. The direct-fit latency model achieved a CV MAPE of approximately 36%, while the direct-fit BRAM model obtained a CV MAPE of approximately 17%. Figure 4 demonstrates that the direct-fit model consistently predicts the true value with few outliers. These findings indicate that directly fitting models on a design database, which sparsely samples the design configuration space, is an effective and straightforward approach for performance modeling in GNNBuilder, enabling rapid Design Space Exploration (DSE).

### _DSE Exploration_

To exemplify the speedup of direct-fit models over standard evaluation with the HLS tool, we also analyze the performance-estimate compute time for all 400 model configurations used to train the direct-fit models. We present the results in Figure 5, which can be viewed as a timeline of runs. All model calls for the direct-fit models finish in under a second, while all Vitis HLS synthesis runs finish in under two days.
An average direct-fit model call takes \(1.7\) ms, while an average Vitis HLS synthesis run takes \(9.4\) minutes. This difference of around 6 orders of magnitude emphasizes the real-time performance estimation offered by direct-fit models.

### _Accelerator Performance Evaluation_

The performance results for the proposed accelerator hardware framework in comparison to other implementations are shown in Figure 6, Figure 7, and Table IV. The values in Table IV indicate the speedup factors of the FPGA-Parallel implementation for the latency values averaged across datasets. For all cases, there is at least a 6\(\times\) speedup of the parallelized FPGA implementation over the PyG CPU, PyG GPU, and C++ CPU implementations. Across all models, there is a geometric mean speedup of **6.33\(\times\)** over PyG-CPU and **6.87\(\times\)** over PyG-GPU. The resource usage also shows headroom in BRAM and DSP utilization across models, indicating that there is room for increased parallelism and higher speedups.

Fig. 4: Comparison of latency prediction models with true post-synthesis latency and BRAM usage reported from Vitis HLS.

Fig. 5: Cumulative runtime for evaluating 400 designs to predict model latency and BRAM usage. The x-axis represents time going forward from left to right, and each point represents a performance estimate which has finished computing.

Fig. 6: GNN model runtime across a range of architectures, datasets, and implementations (y-axis in log-scale).

Fig. 7: Resource usage of FPGA-Base and FPGA-Parallel model implementations.

## X Conclusion

In this paper, we introduced **GNNBuilder**, a versatile end-to-end GNN accelerator generation framework with a user-friendly Python API. GNNBuilder supports a wide range of expressive GNNs, seamlessly integrates with PyTorch modules, and offers unique features uncommon in other inference accelerators.
We demonstrated its capabilities in generating hardware kernels, testbenches, running testbenches on PyTorch Geometric datasets, and launching Vitis HLS synthesis kernels. Our framework also enables efficient DSE and outperforms CPU and GPU implementations by exploiting hardware parallelism. Future work involves optimizing graph convolution kernels, exploring intelligent DSE search methods, train-time co-design, and expanding our kernel template library to accommodate more graph convolution kernels such as GAT [40] and other emerging GNN architectures. The initial software framework is available at [https://anonymous.4open.science/r/gnn-builder-83B4/](https://anonymous.4open.science/r/gnn-builder-83B4/) for both software and hardware practitioners.
2310.04171
Dynamic Relation-Attentive Graph Neural Networks for Fraud Detection
Fraud detection aims to discover fraudsters deceiving other users by, for example, leaving fake reviews or making abnormal transactions. Graph-based fraud detection methods consider this task as a classification problem with two classes: frauds or normal. We address this problem using Graph Neural Networks (GNNs) by proposing a dynamic relation-attentive aggregation mechanism. Based on the observation that many real-world graphs include different types of relations, we propose to learn a node representation per relation and aggregate the node representations using a learnable attention function that assigns a different attention coefficient to each relation. Furthermore, we combine the node representations from different layers to consider both the local and global structures of a target node, which is beneficial to improving the performance of fraud detection on graphs with heterophily. By employing dynamic graph attention in all the aggregation processes, our method adaptively computes the attention coefficients for each node. Experimental results show that our method, DRAG, outperforms state-of-the-art fraud detection methods on real-world benchmark datasets.
Heehyeon Kim, Jinhyeok Choi, Joyce Jiyoung Whang
2023-10-06T11:41:38Z
http://arxiv.org/abs/2310.04171v3
# Dynamic Relation-Attentive Graph Neural Networks for Fraud Detection ###### Abstract Fraud detection aims to discover fraudsters deceiving other users by, for example, leaving fake reviews or making abnormal transactions. Graph-based fraud detection methods consider this task as a classification problem with two classes: frauds or normal. We address this problem using Graph Neural Networks (GNNs) by proposing a dynamic relation-attentive aggregation mechanism. Based on the observation that many real-world graphs include different types of relations, we propose to learn a node representation per relation and aggregate the node representations using a learnable attention function that assigns a different attention coefficient to each relation. Furthermore, we combine the node representations from different layers to consider both the local and global structures of a target node, which is beneficial to improving the performance of fraud detection on graphs with heterophily. By employing dynamic graph attention in all the aggregation processes, our method adaptively computes the attention coefficients for each node. Experimental results show that our method, DRAG, outperforms state-of-the-art fraud detection methods on real-world benchmark datasets. fraud detection, graph anomaly detection, graph neural networks, relation attentive, dynamic attention ## I Introduction Graph-based fraud detection methods, also called graph anomaly detection methods, represent objects that should be determined to be fraud or benign as nodes and make edges between them [1, 2]. For example, in YelpChi benchmark dataset [3], nodes are reviews and edges are created based on three different factors: (1) whether the reviews were written by the same user, (2) whether the reviews were written in the same month, and (3) whether the reviews had the same star rating. Each of these three factors can be considered as a different relation since their semantics are distinct [4]. 
Several recently proposed fraud detection methods distinguish different relations in computing node representations [5, 6, 7, 8, 9, 10]. For example, CARE-GNN [11] uses a relation-aware neighbor aggregator, and BWGNN [12] performs a propagation process for each relation and applies a maximum pooling. Also, FRAUDRE [13] learns the fraud-aware graph convolution model under each relation. In general, these relation-aware approaches have shown superior performance over the methods that ignore relations and consider all edges equally [14, 15]. In this paper, we propose DRAG, a Dynamic Relation-Attentive Graph neural network (Figure 1), which decomposes the original graph by relations to learn a node representation per relation along with a self-transformation, resulting in multiple representations for each node. We consider the self-loop used in the self-transformation as another relation. At each layer, DRAG aggregates the multiple representations for each node with different learnable attention weights for the relations. The final representation is computed by aggregating representations from different layers, where not only the last layer's representation but also intermediate layers' representations are taken into account. In all these processes, we employ a dynamic graph attention mechanism [16] to let DRAG have various distributions of attention coefficients, which can differ for each node. Experimental results on real-world datasets show that DRAG outperforms eight different baseline methods. Our implementations and datasets are available at [https://github.com/bdi-lab/DRAG](https://github.com/bdi-lab/DRAG). **Related Work:** In the original GAT [17], the static attention problem has been identified: a phenomenon in which all nodes have a fixed ranking of attention coefficients. To resolve this issue, GATv2 [16] has been proposed, introducing a dynamic attention that swaps the order of operations of applying a linear projection layer and the non-linear function.
On the other hand, WIRGAT [18] has been proposed to compute relation-wise representations for each node using a static attention; it simply sums over the node representations. Different from DRAG, none of these methods considers dynamic relation-attentive and layer-attentive aggregations. Heterophily in graphs has been considered a challenging issue in graph-based fraud detection [19, 20, 21] since nodes are connected to other nodes belonging to a different class. DRAG alleviates this issue by learning attention coefficients, which weigh the importance of each neighbor in computing the representation of the target node. More importantly, these attention coefficients can vary depending on each target node since we utilize the dynamic attention mechanism.

## II Problem Definition

Let us consider an undirected multi-relation graph \(G=(\mathcal{V},\mathcal{R},\mathcal{E},\mathbf{X})\) where \(\mathcal{V}\) is a set of nodes, \(\mathcal{R}\) is a set of relations, \(|\mathcal{V}|=n\), \(|\mathcal{R}|=m\), \(\mathcal{E}=\{(v_{i},r_{k},v_{j})|v_{i}\in\mathcal{V},r_{k}\in\mathcal{R},v_{j}\in\mathcal{V}\}\), \(\mathbf{X}\in\mathbb{R}^{n\times d}\) is a set of features for the nodes, and \(d\) is the dimension of each feature vector. There can be multiple edges between a pair of nodes if they have different relations. For example, given \(v_{i}\) and \(v_{j}\), \(\mathcal{E}\) can include both \((v_{i},r_{k},v_{j})\) and \((v_{i},r_{k^{\prime}},v_{j})\). Each node is labeled as a normal node or a fraudulent node. Let \(\mathbf{y}\in\{0,1\}^{n}\) denote a vector for the labels such that \(y_{i}=1\) if the \(i\)-th node is a fraud and \(y_{i}=0\) otherwise.

## III DRAG: Dynamic Relation-Attentive Graph Neural Network

We describe DRAG, which computes node representations using relation-wise and layer-wise dynamic attention mechanisms.
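As a small illustration of this setup, the edge triples \((v_i, r_k, v_j)\) of an undirected multi-relation graph can be organized into per-relation neighborhood sets. The relation names below are hypothetical toy values, and the self-loops that DRAG later adds per relation are omitted.

```python
from collections import defaultdict


def neighborhood_sets(edges):
    """Map (node, relation) -> set of neighbors under that relation."""
    nbrs = defaultdict(set)
    for vi, rk, vj in edges:
        # Undirected: each triple contributes to both endpoints' sets.
        nbrs[(vi, rk)].add(vj)
        nbrs[(vj, rk)].add(vi)
    return nbrs


# Toy multi-relation graph: the same node pair may be linked by
# several relations, each yielding its own neighborhood set.
edges = [(0, "same_user", 1), (0, "same_month", 1), (1, "same_user", 2)]
N = neighborhood_sets(edges)
# N[(1, "same_user")] == {0, 2}; N[(0, "same_month")] == {1}
```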
### _Node Representation per Relation and Self-Transformation_

We propose learning multiple node representations for each node by computing a representation per relation and a self-transformation. To appropriately compute an attention coefficient [17], we add a self-loop to each node for all relations. The neighborhood set of \(v_{i}\) for \(r_{k}\) is defined by \(\mathcal{N}_{ik}\coloneqq\{v_{j}|(v_{j},r_{k},v_{i})\in\mathcal{E}\}\) where \(i=1,\cdots,n\) and \(k=1,\cdots,m\). The node representation of \(v_{i}\) for \(r_{k}\) at the \(l\)-th layer is denoted by \(\mathbf{h}_{i,k}^{(l)}\in\mathbb{R}^{d^{\prime}}\) where \(d^{\prime}\) is the dimension of the node representation. Let \(\mathbf{h}_{i}^{(l)}\in\mathbb{R}^{d^{\prime}}\) be the node representation of \(v_{i}\) at the \(l\)-th layer. We compute \(\mathbf{h}_{i}^{(0)}=\text{MLP}(\mathbf{x}_{i})\) for \(v_{i}\in\mathcal{V}\) where MLP is a multi-layer perceptron. Using the dynamic multi-head attention [16] with \(N_{\alpha}\) heads, we compute the node representation for \(r_{k}\) at each head as follows: \[\mathbf{h}_{i,k}^{(l)}=\sigma\left(\sum_{v_{j}\in\mathcal{N}_{ik}}\alpha_{ijk}^{(l)}\mathbf{P}_{k}^{(l)}\mathbf{h}_{j}^{(l)}\right) \tag{1}\] where \(\mathbf{P}_{k}^{(l)}\in\mathbb{R}^{(d^{\prime}/N_{\alpha})\times d^{\prime}}\) and \(\alpha_{ijk}^{(l)}\) is computed by \[\alpha_{ijk}^{(l)}=\frac{\text{exp}\left(\mathbf{a}_{k}^{(l)}\sigma\left(\mathbf{W}_{k}^{(l)}[\mathbf{h}_{i}^{(l)}\ ||\ \mathbf{h}_{j}^{(l)}]\right)\right)}{\sum_{v_{j^{\prime}}\in\mathcal{N}_{ik}}\text{exp}\left(\mathbf{a}_{k}^{(l)}\sigma\left(\mathbf{W}_{k}^{(l)}[\mathbf{h}_{i}^{(l)}\ ||\ \mathbf{h}_{j^{\prime}}^{(l)}]\right)\right)} \tag{2}\] where \(||\) denotes a vertical concatenation, \(\mathbf{W}_{k}^{(l)}\in\mathbb{R}^{d^{\prime}\times 2d^{\prime}}\), \(\mathbf{a}_{k}^{(l)}\in\mathbb{R}^{d^{\prime}}\), \(\sigma(\cdot)\) is a non-linear function, \(l=0,1,\cdots,L\), and \(L\) is the total number of layers.
To aggregate the outputs from \(N_{\alpha}\) heads, we concatenate the resulting representations from different heads [22]; as a result, we have \(\mathbf{h}_{i,k}^{(l)}\in\mathbb{R}^{d^{\prime}}\). We apply the same concatenation strategy to aggregate results from multi-head outputs in all following attention mechanisms. By computing a node representation per relation using (1), we have \(m\) representations for each node. We also compute the \((m+1)\)-th representation for a node by considering a self-transformation: \(\mathbf{h}_{i,m+1}^{(l)}=\text{MLP}\left(\mathbf{h}_{i}^{(l)}\right)\) where \(\mathbf{h}_{i,m+1}^{(l)}\in\mathbb{R}^{d^{\prime}}\). ### _Relation-Attentive Aggregation_ For each node, we have \((m+1)\) representations at each layer as described above. DRAG aggregates these representations using a dynamic attention with \(N_{\beta}\) attention heads: \[\mathbf{h}_{i}^{(l+1)}=\sigma\left(\sum_{k=1}^{m+1}\beta_{ik}^{(l)}\overline{\mathbf{P}}^{(l)}\mathbf{h}_{i,k}^{(l)}\right) \tag{3}\] where \(\overline{\mathbf{P}}^{(l)}\in\mathbb{R}^{(d^{\prime}/N_{\beta})\times d^{\prime}}\) and \(\beta_{ik}^{(l)}\) is computed by \[\beta_{ik}^{(l)}=\frac{\text{exp}\left(\overline{\mathbf{a}}^{(l)}\sigma\left(\overline{\mathbf{W}}^{(l)}[\mathbf{h}_{i}^{(l)}\ ||\ \mathbf{h}_{i,k}^{(l)}]\right)\right)}{\sum_{k^{\prime}=1}^{m+1}\text{exp}\left(\overline{\mathbf{a}}^{(l)}\sigma\left(\overline{\mathbf{W}}^{(l)}[\mathbf{h}_{i}^{(l)}\ ||\ \mathbf{h}_{i,k^{\prime}}^{(l)}]\right)\right)} \tag{4}\] where \(\overline{\mathbf{W}}^{(l)}\in\mathbb{R}^{d^{\prime}\times 2d^{\prime}}\), and \(\overline{\mathbf{a}}^{(l)}\in\mathbb{R}^{d^{\prime}}\) for \(l=0,1,\cdots,L\). The attention coefficient \(\beta_{ik}^{(l)}\) indicates the importance of the \(k\)-th relation for computing the representation of \(v_{i}\) at the \(l\)-th layer. This attention coefficient can differ depending on nodes and layers. Fig. 1: Overview of DRAG.
A node representation is computed by each relation and by a self-transformation. Using learnable attention weights, the node representations are aggregated over the relations. The final node representation is learned by aggregating the representations from different layers and used to predict whether the node is a fraud. ### _Aggregation with Multiple Layers_ It has been known that, when solving a node classification problem under heterophily [19], it is helpful to explicitly consider the local and global neighbors by combining intermediate representations [23]. To leverage this property, we aggregate representations from different layers: \[\mathbf{h}_{i}^{(L+1)}=\sigma\left(\sum_{l=0}^{L}\boldsymbol{\gamma}_{il} \widetilde{\boldsymbol{P}}\mathbf{h}_{i}^{(l)}\right) \tag{5}\] with \(N_{\gamma}\) heads, where \(\widetilde{\boldsymbol{P}}\in\mathbb{R}^{(d^{\prime}/N_{\gamma})\times d^{ \prime}}\), \(\mathbf{h}_{i}^{(L+1)}\in\mathbb{R}^{d^{\prime}}\) is the final representation of \(v_{i}\), and \(\boldsymbol{\gamma}_{il}\) is computed by \[\boldsymbol{\gamma}_{il}=\frac{\text{exp}\left(\widetilde{\mathbf{a}}\ \sigma\left(\widetilde{\boldsymbol{W}}[\mathbf{x}_{i}\ ||\ \mathbf{h}_{i}^{(l)}]\right)\right)}{\sum_{l^{\prime}=0}^{L}\text{exp} \left(\widetilde{\mathbf{a}}\ \sigma\left(\widetilde{\boldsymbol{W}}[\mathbf{x}_{i}\ ||\ \mathbf{h}_{i}^{(l^{\prime})}]\right)\right)} \tag{6}\] where \(\widetilde{\boldsymbol{W}}\in\mathbb{R}^{d^{\prime}\times(d+d^{\prime})}\) and \(\widetilde{\mathbf{a}}\in\mathbb{R}^{d^{\prime}}\). Note that the attention coefficient \(\boldsymbol{\gamma}_{il}\) is learned to imply the importance of the \(l\)-th layer's representation for computing the final representation of node \(v_{i}\). ### _Loss Function of_ DRAG Using the final representation of each node, we predict the node label by computing \(\widehat{y}_{i}=\text{MLP}\left(\mathbf{h}_{i}^{(L+1)}\right)\), where \(\widehat{y}_{i}\) indicates \(v_{i}\)'s probability of being a fraud. 
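The layer-attentive aggregation of Eqs. (5)-(6) can be sketched in a few lines of pure Python; this is an illustrative fragment with toy weights of our own choosing (single head, identity projection \(\widetilde{P}\), outer nonlinearity omitted for brevity):

```python
import math

def score(x_i, h_l, W, a):
    # a . LeakyReLU(W [x_i || h^(l)]) -- the scoring inside Eq. (6),
    # which conditions on the raw input feature x_i of the node.
    z = [sum(w * v for w, v in zip(row, x_i + h_l)) for row in W]
    z = [u if u > 0 else 0.2 * u for u in z]
    return sum(ai * zi for ai, zi in zip(a, z))

x_i = [1.0, 0.5]                                      # raw feature of v_i
layer_reps = [[0.2, 0.1], [0.9, -0.3], [0.4, 0.4]]    # h_i^(0), h_i^(1), h_i^(2); L = 2
W = [[0.1, 0.2, -0.1, 0.3], [0.4, -0.2, 0.2, 0.1]]    # toy weights
a = [0.5, 1.0]

raw = [score(x_i, h, W, a) for h in layer_reps]
mx = max(raw)
exps = [math.exp(s - mx) for s in raw]
gammas = [e / sum(exps) for e in exps]                # Eq. (6): softmax over layers
final = [sum(g * h[d] for g, h in zip(gammas, layer_reps))
         for d in range(len(x_i))]                    # Eq. (5), identity projection
```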
The loss function of DRAG is defined by \(\mathcal{L}_{\text{DRAG}}=-\sum_{v_{i}\in\mathcal{S}}\left(y_{i}\,\text{log}\left(\widehat{y}_{i}\right)+(1-y_{i})\,\text{log}\left(1-\widehat{y}_{i}\right)\right)\) where \(\mathcal{S}\subset\mathcal{V}\) is the training set. ## IV Experiments We compare the performance of DRAG with state-of-the-art fraud detection methods using real-world datasets. ### _Datasets and Experimental Setup_ To test the performance of a fraud detection method, we assume that we observe \(p\%\) of the node labels and use them as a training set. Following a conventional setting [12], we divide the remaining nodes into a validation set and a test set with a ratio of 1:2. We use two well-known real-world datasets, YelpChi [3] and Amazon [24, 25]. In these datasets, there exist three different relations as shown in Table II, where ALL indicates the total number of edges disregarding relations. Note that multiple edges between two nodes resulting from different relations are counted as one edge in ALL. In Amazon, we found that there are 2,104 duplicated nodes that do not have unique features. Since these nodes cause a data leakage issue, we removed them.1 Footnote 1: In Amazon, the features include the feedback summary length, the entropy of ratings, the length of the username, and the ratio of helpful votes [25]; the combination of these features should distinguish a node. It is very unlikely that multiple nodes share identical features, and thus we believe the duplicated nodes should be removed.
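A small sketch of computing a cross-entropy training objective of this form over the labeled set \(\mathcal{S}\) (we show the standard two-term binary cross-entropy; labels and predictions below are illustrative values of our own):

```python
import math

def bce_loss(y_true, y_pred, eps=1e-12):
    # Binary cross-entropy summed over the labeled training nodes in S;
    # eps guards against log(0).
    return sum(-(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))
               for y, p in zip(y_true, y_pred))

y_true = [1, 0, 0, 1]            # 1 = fraud, 0 = normal
y_pred = [0.9, 0.2, 0.1, 0.7]    # MLP outputs on the final node representations
loss = bce_loss(y_true, y_pred)
```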
We consider the following eight different baseline methods: MLP, GraphSAGE [26], GAT [17], GATv2 [16], FRAUDRE [13], CARE-GNN [11], PC-GNN [27], and BWGNN [12]. We use the official implementations of these methods and the hyperparameters provided in the code or in the original paper. In DRAG, we search the hyperparameters by considering the learning rate in \(\{0.01,0.001\}\), the weight decay in \(\{0.001,0.0001\}\), \(L\in\{1,2,3\}\), and the number of heads in \(\{2,8\}\). We fix the batch size to 1,024 and \(d^{\prime}=64\) for all experiments. For all methods, we set the dimension of the final node representation to 64 and the maximum number of epochs to 1,000 for fair comparisons.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline & \#Nodes & \#Frauds & Relation & \#Edges & \(d\) \\ \hline \multirow{4}{*}{YelpChi} & \multirow{4}{*}{45,954} & \multirow{4}{*}{6,677 (14.53\%)} & R-U-R & 49,315 & \multirow{4}{*}{32} \\ & & & R-T-R & 573,616 & \\ & & & R-S-R & 3,402,743 & \\ & & & ALL & 3,846,979 & \\ \hline \multirow{4}{*}{Amazon} & \multirow{4}{*}{9,840} & \multirow{4}{*}{821 (8.34\%)} & U-P-U & 150,917 & \multirow{4}{*}{25} \\ & & & U-S-U & 2,979,223 & \\ & & & U-V-U & 838,682 & \\ & & & ALL & 3,627,491 & \\ \hline \hline \end{tabular} \end{table} TABLE II: Dataset statistics.

\begin{table} \begin{tabular}{c c|c c|c c|c c} \hline \hline & & \multicolumn{2}{c|}{1\%} & \multicolumn{2}{c|}{10\%} & \multicolumn{2}{c}{40\%} \\ & & F1-macro & AUC & F1-macro & AUC & F1-macro & AUC \\ \hline \multirow{10}{*}{YelpChi} & MLP & 0.6150\(\pm\)0.0072 & 0.7253\(\pm\)0.0098 & 0.6720\(\pm\)0.0069 & 0.8010\(\pm\)0.0053 & 0.7140\(\pm\)0.0048 & 0.8489\(\pm\)0.0064 \\ & GraphSAGE & 0.6269\(\pm\)0.0107 & 0.7363\(\pm\)0.0145 & 0.6971\(\pm\)0.0074 & 0.8293\(\pm\)0.0058 & 0.7456\(\pm\)0.0078 & 0.8800\(\pm\)0.0061 \\ & GAT & 0.6183\(\pm\)0.0133 & 0.7188\(\pm\)0.0176 & 0.6763\(\pm\)0.0089 & 0.7970\(\pm\)0.0118 & 0.7190\(\pm\)0.0085 & 0.8492\(\pm\)0.0087 \\ & GATv2 & 0.6283\(\pm\)0.0109 & 0.7366\(\pm\)0.0129 & 0.6938\(\pm\)0.0077 & 0.8204\(\pm\)0.0060 & 0.7524\(\pm\)0.0098 & 0.8804\(\pm\)0.0067 \\ & FRAUDRE & 0.5868\(\pm\)0.0208 & 0.7232\(\pm\)0.0182 & 0.6236\(\pm\)0.0178 & 0.7773\(\pm\)0.0104 & 0.6367\(\pm\)0.0136 & 0.8107\(\pm\)0.0197 \\ & CARE-GNN & 0.6151\(\pm\)0.0119 & 0.7290\(\pm\)0.0133 & 0.6859\(\pm\)0.0302 & 0.8223\(\pm\)0.0223 & 0.6943\(\pm\)0.0150 & 0.8316\(\pm\)0.0113 \\ & PC-GNN & 0.6335\(\pm\)0.0154 & 0.7412\(\pm\)0.0184 & 0.6950\(\pm\)0.0112 & 0.8239\(\pm\)0.0093 & 0.7202\(\pm\)0.0125 & 0.8495\(\pm\)0.0138 \\ & BWGNN-Homo & 0.5797\(\pm\)0.0183 & 0.7016\(\pm\)0.0213 & 0.6316\(\pm\)0.0280 & 0.7772\(\pm\)0.0173 & 0.6425\(\pm\)0.0604 & 0.8515\(\pm\)0.0103 \\ & BWGNN-Hetero & 0.6558\(\pm\)0.0118 & 0.7764\(\pm\)0.0196 & 0.7137\(\pm\)0.0197 & 0.8455\(\pm\)0.0146 & 0.7176\(\pm\)0.0705 & 0.9026\(\pm\)0.0105 \\ & DRAG (Ours) & **0.6884\(\pm\)0.0094** & **0.8279\(\pm\)0.0100** & **0.7462\(\pm\)0.0076** & **0.8833\(\pm\)0.0056** & **0.7988\(\pm\)0.0067** & **0.9233\(\pm\)0.0053** \\ \hline \multirow{3}{*}{Amazon} & MLP & **0.9069\(\pm\)0.0084** & 0.9120\(\pm\)0.0241 & **0.9044\(\pm\)0.0083** & 0.9524\(\pm\)0.0101 & 0.9114\(\pm\)0.0073 & 0.9695\(\pm\)0.0038 \\ & GraphSAGE & 0.8999\(\pm\)0.0108 & 0.9095\(\pm\)0.0193 & 0.8966\(\pm\)0.0095 & **0.9549\(\pm\)0.0092** & 0.9123\(\pm\)0.0065 & **0.9741\(\pm\)0.0031** \\ & GAT & 0.8685\(\pm\)0.0303 & 0.9126\(\pm\)0.0177 & 0.8874\(\pm\)0.0115 & 0.9475\(\pm\)0.0127 & 0.9023\(\pm\)0.0071 & 0.9640\(\pm\)0.0144 \\ \hline \hline \end{tabular} \end{table} TABLE I: Fraud detection performance.

### _Fraud Detection Performance_ In Table I, we show the fraud detection performance of the methods with different \(p\in\{1,10,40\}\) in terms of two standard metrics, F1-macro and AUC. F1-macro is the unweighted mean of the F1 scores of the two classes. AUC is the area under the ROC curve, representing the true positive rate against the false positive rate at various thresholds. We repeat all experiments ten times and report the average and the standard deviation.
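Both metrics can be computed from scratch; a small self-contained sketch (the labels and scores below are toy values of our own):

```python
def f1_macro(y_true, y_pred):
    # Unweighted mean of the per-class F1 scores (classes 0 and 1).
    f1s = []
    for c in (0, 1):
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        f1s.append(2 * tp / (2 * tp + fp + fn) if tp else 0.0)
    return sum(f1s) / 2

def auc(y_true, scores):
    # Rank formulation of AUC: the probability that a random fraud node
    # receives a higher score than a random normal node (ties count 0.5).
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y_true = [1, 0, 0, 1, 0]
scores = [0.8, 0.3, 0.4, 0.9, 0.1]           # predicted fraud probabilities
preds = [1 if s >= 0.5 else 0 for s in scores]
```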
The best performance is boldfaced, and the second-best performance is underlined. In YelpChi, we see that DRAG significantly outperforms the baseline methods in all settings; DRAG shows the best performance in terms of both F1-macro and AUC with three different ratios of labels. In Amazon, DRAG shows comparable performance to the best-performing method. ### _Qualitative Analysis & Ablation Studies_ As described in Section III-B, DRAG learns \(\beta_{ik}^{(l)}\) which indicates the importance of \(r_{k}\) to \(v_{i}\) at the \(l\)-th layer. Figure 2 shows the kernel density estimate plots of \(\beta_{ik}^{(l)}\) values at each layer in YelpChi with \(L=2\) and Amazon with \(L=3\). We see that each relation has a distinguishing distribution of the attention coefficients, which also differs between layers. Also, DRAG learns \(\gamma_{il}\) indicating the importance of the \(l\)-th layer representation for \(v_{i}\) as described in Section III-C. Figure 3 shows the distributions of \(\gamma_{il}\). While Layer-2 tends to have a large importance in YelpChi, Layer-0 has a relatively large importance in Amazon. In Figure 2 and Figure 3, we see that the attention coefficient values are not concentrated on specific values, and some of their distributions are multimodal. This shows that our dynamic graph attention mechanism works as expected, resulting in various attention coefficients depending on target nodes. We conduct ablation studies by disregarding relation types (w/o rel. types), by removing the layer aggregation (w/o layer agg.), and by utilizing only a single layer instead of multiple layers (w/ single layer) in DRAG as presented in Table III. We see that the performance of DRAG significantly degrades by dropping either the relation-attentive aggregation or the layer-attentive aggregation, which indicates that both of these play a critical role in DRAG. 
In addition, we observe that considering higher-order neighbors via multiple layers helps increase fraud detection performance, mainly when enough labels are provided.

Fig. 2: The kernel density estimate plots of \(\beta_{ik}^{(l)}\) attention coefficients at each layer in YelpChi (\(L=2\)) and Amazon (\(L=3\)).

Fig. 3: \(\gamma_{il}\) attention coefficients in YelpChi and Amazon.

\begin{table} \begin{tabular}{c|c c c} \hline & 1\% & 10\% & 40\% \\ \hline DRAG & 0.8279\(\pm\)0.0100 & 0.8833\(\pm\)0.0056 & 0.9233\(\pm\)0.0053 \\ w/o rel. types & 0.7200\(\pm\)0.0134 & 0.8079\(\pm\)0.0134 & 0.8716\(\pm\)0.0054 \\ w/o layer agg. & 0.7153\(\pm\)0.1349 & 0.8377\(\pm\)0.1128 & 0.8775\(\pm\)0.1260 \\ w/ single layer & 0.8214\(\pm\)0.0085 & 0.8790\(\pm\)0.0085 & 0.9076\(\pm\)0.0087 \\ \hline \end{tabular} \end{table} TABLE III: Ablation studies of DRAG. AUC scores on YelpChi using different percentages of labels are reported.

## V Conclusion & Future Work We propose a dynamic attention-based fraud detection method, performing relation-wise and layer-wise attentive aggregations. The learnable attention coefficients allow DRAG to concentrate more on neighbors, relations, and layers beneficial for predicting the label of the target node. By dynamically adapting the attention coefficients for individual nodes, this attention mechanism is especially effective in fraud detection on graphs with heterophily. In future work, we will investigate our attention mechanisms from a theoretical point of view. Specifically, we will explore how the attention coefficients are learned under a specific heterophily property. Moreover, we plan to extend our approach to more complex relational graphs [28, 29]. Also, we will extend DRAG to handle evolving graphs where new nodes appear and new edges are formed over time [30].
2307.10246
Deep Neural Networks and Brain Alignment: Brain Encoding and Decoding (Survey)
Can we obtain insights about the brain using AI models? How is the information in deep learning models related to brain recordings? Can we improve AI models with the help of brain recordings? Such questions can be tackled by studying brain recordings like functional magnetic resonance imaging (fMRI). As a first step, the neuroscience community has contributed several large cognitive neuroscience datasets related to passive reading/listening/viewing of concept words, narratives, pictures, and movies. Encoding and decoding models using these datasets have also been proposed in the past two decades. These models serve as additional tools for basic cognitive science and neuroscience research. Encoding models aim at generating fMRI brain representations given a stimulus automatically. They have several practical applications in evaluating and diagnosing neurological conditions and thus may also help design therapies for brain damage. Decoding models solve the inverse problem of reconstructing the stimuli given the fMRI. They are useful for designing brain-machine or brain-computer interfaces. Inspired by the effectiveness of deep learning models for natural language processing, computer vision, and speech, several neural encoding and decoding models have been recently proposed. In this survey, we will first discuss popular representations of language, vision and speech stimuli, and present a summary of neuroscience datasets. Further, we will review popular deep learning based encoding and decoding architectures and note their benefits and limitations. Finally, we will conclude with a summary and discussion about future trends. Given the large amount of recently published work in the computational cognitive neuroscience (CCN) community, we believe that this survey enables an entry point for DNN researchers to diversify into CCN research.
Subba Reddy Oota, Zijiao Chen, Manish Gupta, Raju S. Bapi, Gael Jobard, Frederic Alexandre, Xavier Hinaut
2023-07-17T06:54:36Z
http://arxiv.org/abs/2307.10246v2
# Deep Neural Networks and Brain Alignment: Brain Encoding and Decoding (Survey) ###### Abstract How does the brain represent different modes of information? Can we design a system that automatically understands what the user is thinking? Such questions can be answered by studying brain recordings like functional magnetic resonance imaging (fMRI). As a first step, the neuroscience community has contributed several large cognitive neuroscience datasets related to passive reading/listening/viewing of concept words, narratives, pictures and movies. Encoding and decoding models using these datasets have also been proposed in the past two decades. These models serve as additional tools for basic research in cognitive science and neuroscience. Encoding models aim at generating fMRI brain representations given a stimulus automatically. They have several practical applications in evaluating and diagnosing neurological conditions and thus also help design therapies for brain damage. Decoding models solve the inverse problem of reconstructing the stimuli given the fMRI. They are useful for designing brain-machine or brain-computer interfaces. Inspired by the effectiveness of deep learning models for natural language processing, computer vision, and speech, recently several neural encoding and decoding models have been proposed. In this survey, we will first discuss popular representations of language, vision and speech stimuli, and present a summary of neuroscience datasets. Further, we will review popular deep learning based encoding and decoding architectures and note their benefits and limitations. Finally, we will conclude with a brief summary and discussion about future trends. Given the large amount of recently published work in the 'computational cognitive neuroscience' community, we believe that this survey nicely organizes the plethora of work and presents it as a coherent story. 
## 1 Introduction Neuroscience is the field of science that studies the structure and function of the nervous system of different species. It involves answering interesting questions like the following1. (1) How learning occurs during adolescence, and how it differs from the way adults learn and form memories. (2) Which specific cells in the brain (and what connections they form with other cells) have a role in how memories are formed? (3) How animals cancel out irrelevant information arriving from the senses and focus only on information that matters. (4) How do humans make decisions? (5) How humans develop speech and learn languages. Neuroscientists study diverse topics that help us understand how the brain and nervous system work. Footnote 1: [https://zuckermaninstitute.columbia.edu/file/5184/download?token=qzId8yR](https://zuckermaninstitute.columbia.edu/file/5184/download?token=qzId8yR) **Motivation:** The central aim of neuroscience is to unravel how the brain represents information and processes it to carry out various tasks (visual, linguistic, auditory, etc.). Deep neural networks (DNN) offer a computational medium to capture the unprecedented complexity and richness of brain activity. _Encoding_ and _decoding_, stated as computational problems, succinctly encapsulate this puzzle. While previous surveys systematically explore brain encoding and decoding studies with respect to language only [1, 1], this survey summarizes the latest efforts in how DNNs begin to solve these problems and thereby illuminate the computations that the otherwise unreachable brain accomplishes effortlessly. **Brain encoding and decoding**: Two main tasks studied in cognitive neuroscience are brain encoding and brain decoding, as shown in Figure 1. Encoding is the process of learning the mapping \(e\) from the stimuli \(S\) to the neural activation \(F\). The mapping can be learned using feature engineering or deep learning.
On the other hand, decoding constitutes learning the mapping \(d\), which predicts stimuli \(S\) back from the brain activation \(F\). However, in most cases, brain decoding aims at predicting a stimulus representation \(R\) rather than actually reconstructing \(S\). In both cases, the first step is to learn a semantic representation \(R\) of the stimuli \(S\) at train time. Next, for encoding, a regression function \(e:R\to F\) is trained. For decoding, a function \(d:F\to R\) is trained. These functions \(e\) and \(d\) can then be used at test time to process new stimuli and brain activations, respectively. **Techniques for recording brain activations**: Popular techniques for recording brain activations include single Micro Electrode (ME), Micro-Electrode Array (MEA), Electro-Cortico-Graphy (ECoG), Positron Emission Tomography (PET), functional MRI (fMRI), Magneto-encephalography (MEG), Electro-encephalography (EEG) and Near-Infrared Spectroscopy (NIRS). These techniques differ in the spatial and temporal resolution of their neural recordings. fMRI enables high spatial but low temporal resolution. Hence, it is well suited for examining which parts of the brain handle critical functions. fMRI takes 1-4 seconds to complete a scan, which is far slower than the speed at which humans can process language. On the other hand, both MEG and EEG have high temporal but low spatial resolution. They can preserve rich syntactic information [1] but cannot be used for source analysis. fNIRS is a compromise option: its temporal resolution is better than fMRI's, and its spatial resolution is better than EEG's. However, this balance between spatial and temporal resolution may not compensate for the loss in both. **Stimulus Representations**: Neuroscience datasets contain stimuli across various modalities: text, visual, audio, video and other multimodal forms. Representations differ based on modality.
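As a concrete (if drastically simplified) illustration of these mappings, ridge regression — the workhorse for both \(e\) and \(d\) in this literature — can be sketched in one dimension; the stimulus feature and voxel values below are toy data of our own (real encoders regress many stimulus features onto many voxels):

```python
def fit_ridge_1d(x, y, lam=1.0):
    # Closed-form ridge solution for one feature: w = (x . y) / (x . x + lambda).
    return sum(a * b for a, b in zip(x, y)) / (sum(a * a for a in x) + lam)

R = [0.0, 1.0, 2.0, 3.0]      # one stimulus-representation feature per stimulus
F = [0.1, 0.9, 2.1, 2.9]      # activity of one voxel for the same stimuli
w_enc = fit_ridge_1d(R, F)    # encoder e: R -> F
w_dec = fit_ridge_1d(F, R)    # decoder d: F -> R, same machinery reversed
pred_F = [w_enc * r for r in R]   # predicted voxel activity for new stimuli
```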
Older methods for _text-based stimulus representation_ include text corpus co-occurrence counts, topic models, syntactic features, and discourse features. In recent times, both semantic and experiential attribute models have been explored for text-based stimuli. Semantic representation models include distributed word embeddings, sentence representation models, recurrent neural networks (RNNs), and Transformer-based language models. Experiential attribute models represent words in terms of human ratings of their degree of association with different attributes of experience, typically on a scale of 0-6 or binary. Older methods for _visual stimulus representation_ used visual field filter banks and Gabor wavelet pyramids for visual stimuli, but recent methods use models like ImageNet-pretrained convolutional neural networks (CNNs) and concept recognition methods. For _audio stimuli_, phoneme rate and the presence of phonemes have been leveraged, besides deep learning models like SoundNet. Finally, for multimodal stimulus representations, researchers have used both early fusion and late fusion deep learning methods. In early fusion methods, information across modalities is combined in the early steps of processing, while in late fusion, the combination is performed only at the end. We discuss stimulus representation methods in detail in Sec. 2. **Naturalistic Neuroscience Datasets**: Several neuroscience datasets have been proposed across modalities (see Figure 2). These datasets differ in terms of the following criteria: (1) Method for recording activations: fMRI, EEG, MEG, etc. (2) Repetition time (TR), i.e., the sampling rate. (3) Characteristics of fixation points: location, color, shape. (4) Form of stimuli presentation: text, video, audio, images, or other multimodal forms. (5) Task that the participant performs during recording sessions: question answering, property generation, rating quality, etc. (6) Time given to participants for the task, e.g., 1 minute to list properties.
(7) Demography of participants: males/females, sighted/blind, etc. (8) Number of times the response to stimuli was recorded. (9) Natural language associated with the stimuli. We discuss details of proposed datasets in Sec. 3. **Brain Encoding**: Other than using the standard stimulus representation architectures, the brain encoding literature has focused on studying a few important aspects: (1) Which models lead to better predictive accuracy across modalities? (2) How can we disentangle the contributions of syntax and semantics from language model representations to the alignment between brain recordings and language models? (3) Why do some representations lead to better brain predictions? How are deep learning models and brains aligned in terms of their information processing pipelines? (4) Does joint encoding of task and stimulus representations help? We discuss these details of encoding methods in Sec. 5.

Figure 1: Computational Cognitive Neuroscience of Brain Encoding and Decoding: Datasets & Stimulus Representations
(b) Test feature decodability: "Does neural data Y contain information about features X?" (c) Build accurate models of brain data: The aim is to enable simulation of neuroscience experiments. (2) Interpretability. In this area, the work is around the following questions. (a) Examine individual features: Which features contribute most to neural activity? (b) Test correspondences between representational spaces: "CNNs vs ventral visual stream" or "Two text representations". (c) Interpret feature sets: Do features X, generated by a known process, accurately describe the space of neural responses Y? Do voxels respond to a single feature or exhibit mixed selectivity? (d) How does the mapping relate to other models or theories of brain function? We discuss some of these questions in Sections 5 and 6.

Figure 2: Representative Samples of Naturalistic Brain Dataset: (LEFT) Brain activity recorded when subjects are reading and listening to the same narrative [1], and (RIGHT) example naturalistic image stimuli from various public repositories: BOLD5000 [13], SSFMRI [1], and VIM-1 [14].

Figure 3: Alignment between deep learning systems and human brains [12].

## 2 Stimulus Representations In this section, we discuss types of stimulus representations that have been proposed in the literature across different modalities: text, visual, audio, video and other multimodal stimuli. **Text Stimulus Representations**: Older methods for text-based stimuli representation include text corpus co-occurrence counts [16, 15, 17], topic models [15], syntactic features and discourse features [23]. In recent times, for text-based stimuli, both semantic models as well as experiential attribute models have been explored. Semantic representation models include word embedding methods [15, 16, 17, 18, 19, 20], sentence representation models (see Figure 4) [23, 24, 25], RNNs [10, 26] and Transformer methods [16, 17, 18, 19, 20, 21, 22].
Popular word embedding methods include textual (i.e., Word2Vec, fastText, and GloVe), linguistic (i.e., dependency), conceptual (i.e., RWSGwn and ConceptNet), contextual (i.e., ELMo). Popular sentence embedding models include average, max, content of avg and max, SIF, fairseq, skip, GenSen, InferSent, ELMo, BERT, RoBERTa, USE, QuickThoughts and GPT-2. Transformer-based methods include pretrained BERT with various NLU tasks, finetuned BERT, Transformer-XL, GPT-2, BART, BigBird, LED, and LongT5. Experiential attribute models represent words in terms of human ratings of their degree of association with different attributes of experience, typically on a scale of 0-6 [1, 1, 19, 17] or binary [1, 20]. **Visual Stimulus Representations**: For visual stimuli, older methods used visual field filter bank [14] and Gabor wavelet pyramid [15, 16]. Recent methods use models like CNNs [23, 24, 25, 26] and concept recognition models [1]. **Audio Stimuli Representations**: For audio stimuli, phoneme rate and presence of phonemes have been leveraged [17]. Recently, authors in [20] used features from an audio deep learning model called SoundNet for audio stimuli representation. **Multimodal Stimulus Representations**: To jointly model the information from multimodal stimuli, recently, various multimodal representations have been used. These include processing videos using audio-image representations like VGG+SoundNet [20] or using image+text combination models like GloVe+VGG and ELMo+VGG in [26]. Recently, the usage of multimodal text+vision models like CLIP, LXMERT, and VisualBERT was proposed in [1]. ## 3 Naturalistic Neuroscience Datasets We discuss the popular text, visual, audio, video and other multimodal neuroscience datasets that have been proposed in the literature. Table 1 shows a detailed overview of brain recording type, language, stimulus, number of subjects (\(|\)S\(|\)) and the task across datasets of different modalities. Figure 2 shows examples from a few datasets. 
**Text Datasets**: These datasets are created by presenting words, sentences, passages or chapters as stimuli. Some of the text datasets include Harry Potter Story [20], ZUCO EEG [12] and datasets proposed in [1, 19, 20, 21]. In [19, 20], participants were asked to verbally enumerate in one minute the properties (features) that describe the entities the words refer to. There were four groups of participants: 5 sighted individuals were presented with a pictorial form of the nouns, 5 sighted individuals with a verbal-visual (i.e., written Italian words) form, 5 sighted individuals with a verbal auditory (i.e., spoken Italian words) form, and 5 congenitally blind with a verbal auditory form. Data proposed by [1] contains 70 Italian words taken from seven taxonomic categories (abstract, attribute, communication, event/action, person/social role, location, object/tool) in the law and music domains. The word list contains concrete as well as abstract words. The ZUCO dataset [12] contains sentences for which brain recordings were obtained for 3 tasks: normal reading of movie reviews, normal reading of Wikipedia sentences and task-specific reading of Wikipedia sentences. For this dataset curation, sentences were presented to the subjects in a naturalistic reading scenario. A complete sentence is presented on the screen. Subjects read each sentence at their own speed, i.e., the reader determines for how long each word is fixated and which word to fixate next.

Figure 4: Language Model

**Visual Datasets**: Older visual datasets were based on binary visual patterns [13]. Recent datasets contain natural images. Examples include Vim-1 [11], BOLD5000 [15], Algonauts [10], NSD [14], Things-data [1], and the dataset proposed in [1]. BOLD5000 includes \(\sim\)20 hours of MRI scans per each of the four participants. 4,916 unique images were used as stimuli from 3 image sources.
Algonauts contains two sets of training data, each consisting of an image set and brain activity in RDM format (for fMRI and MEG). Training set 1 has 92 silhouette object images, and training set 2 has 118 object images with natural backgrounds. Testing data consists of 78 images of objects on natural backgrounds. Most of the visual datasets involve passive viewing, but the dataset in [1] involved the participant doing the one-back repetition detection task. **Audio Datasets**: Most of the proposed audio datasets are in English [11, 1, 12], while there is one [11] on Italian. The participants were involved in a variety of tasks while their brain activations were measured: Property generation [11, 12], passive listening [11, 12], question answering [1] and imagining themselves personally experiencing common scenarios [13]. In the last one, participants underwent fMRI as they reimagined the scenarios (e.g., resting, reading, writing, bathing, etc.) when prompted by standardized cues. Narratives [14] used 17 different stories as stimuli. Across subjects, it is 6.4 days worth of recordings. **Video Datasets**: Recently, video neuroscience datasets have also been proposed. 
These include BBC's Doctor Who [15], Japanese Ads [15], Pippi Langkous [1] and Algonauts [1]. Japanese Ads data contains two sets of movies provided by NTT DATA Corp: web and TV ads. There are also four types of cognitive labels associated with the movie datasets: scene descriptions, impression ratings, ad effectiveness indices, and ad preference votes.

\begin{table} \begin{tabular}{|p{11.4pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline \multicolumn{2}{|c|}{**Dataset**} & **Authors** & **Type** & **Lang.** & **Stimulus** & **S** & **Task** \\ \hline \multirow{10}{*}{} & Harry Potter & [14] & fMRI & English & Reading Chapter 9 of Harry Potter and the Sorcerer's Stone & 9 & Story understanding \\ \cline{2-7} & & & fMRI, MEG & & & & \\ \cline{2-7} & & [11] & fMRI & Italian & Verbal, pictorial or auditory presentation of 40 concrete nouns, four times & 20 & Property Generation \\ \cline{2-7} & & [11] & fMRI & Italian & Reading 70 concrete and abstract nouns from the law and music domains, five times & 7 & Imagining a situation with noun \\ \cline{2-7} & ZuCo & [11] & fMRI & English & Reading 1107 sentences with 21,629 words from movie reviews & 12 & Rate movie quality \\ \cline{2-7} & 240 Sentences with Content Words & [1] & fMRI & English & Reading 240 active voice sentences describing everyday situations & 14 & Passive reading \\ \cline{2-7} & BCCW-EEG & [14] & EEG & Japanese & Reading 20 newspaper articles for \(\sim\)30-40 minutes & 40 & Passive reading \\ \cline{2-7} & Subset Moth Radio Hour & [15] & fMRI & English & Reading 11 stories & 9 & Passive reading and Listening \\ \cline{2-7} & & [11] & fMRI & - & Viewing rotating wedges (8 times), expanding/contracting rings (8 times), rotating 36 Gabor filters (4 times), grad (36 times) & 9 & Passive viewing \\ \cline{2-7} & Vim-1 & [11] & fMRI & - & Viewing sequences of 1870 natural photos & 2 & Passive viewing \\ \cline{2-7} & Generic Object Decoder & [11] & fMRI & - & Viewing 1,200 images from 150 object categories; 50 images from 50 & 5 & Repetition detection \\ \cline{2-7} & BOLD5000 & [15] & fMRI & - & Viewing 5254 images depicting real-world scenes & 4 & Passive viewing \\ \cline{2-7} & Algonauts & [14] & fMRI & - & Viewing 92 silhouette object images and 118 images of objects on natural backgrounds & 15 & Passive viewing \\ \cline{2-7} & NSD & [14] & fMRI & - & Viewing 7300 natural images & 8 & Passive viewing \\ \cline{2-7} & THINGS & [11] & fMRI & - & Viewing 31188 natural images & 8 & Passive viewing \\ \cline{2-7} & THINGS & [11] & fMRI & Italian & Verbal, pictorial or auditory presentation of 40 concrete nouns, 4 times & 20 & Property Generation \\ \cline{2-7} & The Moth Radio Hour & [11] & fMRI & English & Listening eleven 10-minute stories & 7 & Passive Listening \\ \cline{2-7} & The Moth Radio Hour & [11] & EEG & English & Listening Chapter of Alice's Adventures in Wonderland (2,129 words in 84 sentences) as read by Kristen McQuillan & 33 & Question answering \\ \cline{2-7} & & [11] & fMRI & English & Listening one of 20 scenario names, 5 times & 26 & Imagine personal experiences \\ \cline{2-7} & Narratives & [14] & fMRI & English & Listening 27 diverse naturalistic spoken stories. 891 functional scans & 345 & Passive Listening \\ \cline{2-7} & Natural Stories & [14] & fMRI & English & Listening Moth-Radio-Hour naturalistic spoken stories. & 19 & Passive Listening \\ \cline{2-7} & The Little Prince & [14] & fMRI & English & Listening audiobook for about 100 minutes. & 112 & Passive Listening \\ \cline{2-7} & MEG-MASC & [11] & MEG & English & Listening two hours of naturalistic stories. 208 MEG sensors & 1 & Passive listening \\ \cline{2-7} & BBC's Doctor Who & [15] & fMRI & English & Viewing spatiotemporal visual and auditory videos (30 episodes). 120.8 whole-brain volumes (\(\sim\)23h) of single-presentation data, and 11 min of repeated narrative short episodes. 22 repetitions & 1 & Passive viewing \\ \cline{2-7} & Japanese Ads & [15] & fMRI & Japanese & Viewing 368 web and 2427 TV Japanese ad movies (15-30s). 7200 train and 1200 test fMRIs for web; fMRIs & & \\ \cline{2-7} & Pippi Langkous & [14] & ECoG & Swedish & Viewing 30 \(\sim\) 35 excerpts of a feature film (in total, 6.5 min long), edited & 37 & Passive viewing \\ \cline{2-7} & Algonauts & [14] & fMRI & English & Viewing 1000 short video clips (3 sec each) & 10 & Passive viewing \\ \cline{2-7} & Natural Stories & [11] & fMRI & English & Watching natural short movie clips & 5 & Passive viewing \\ \cline{2-7} & Natural Short Clips & [11] & fMRI & English & Watching 170 natural short video clips & 10 & Passive viewing \\ \cline{2-7} & 60 Concrete Nouns & [14] & fMRI & English & Viewing 60 different word-picture pairs from 12 categories, 6 times each & 9 & Passive viewing \\ \cline{2-7} & & [14] & MEG & English & Reading 60 concrete nouns along with line drawings. 20 questions per noun & 9 & Question answering \\ \cline{2-7} & & [14] & MEG & English & Reading concrete nouns (audiovisual word and picture stimuli: bunny, bear, hark, dog, mouth, food, hand, and nose; 12 times repeated) & 24 & Passive viewing and listening \\ \cline{2-7} & & [14] & fMRI & English & Viewing 180 Words with Picture, Sentences, word clouds; reading 96 text passages; 72 passages. 3 times repeated. & 16 & Passive viewing and reading \\ \cline{2-7} & & [14] & fMRI & Chinese & Viewing and listening 50 concrete nouns from 10 semantic categories. & 7 & Passive viewing and listening \\ \cline{2-7} & Neuromod & [1] & fMRI & English & Watching TV series (Friends, MovieID) & 6 & Passive viewing and listening \\ \hline \end{tabular} \end{table} Table 1: Naturalistic Neuroscience Datasets
Algonauts 2021 contains fMRIs from 10 human subjects who watched over 1,000 short (3 sec) video clips. **Other Multimodal Datasets**: Finally, beyond the video datasets, datasets have also been proposed with other kinds of multimodality. These datasets are audiovisual [1, 2], words associated with line drawings [15, 16], or pictures along with sentences and word clouds [12]. These datasets have been collected using a variety of methods like fMRI [15, 12], MEG [10] and fNIRS [14, 15]. Specifically, in [16], subjects were asked to perform a QA task while their brain activity was recorded using MEG. Subjects were first presented with a question (e.g., "Is it manmade?"), followed by 60 concrete nouns along with their line drawings, in a random order. For all other datasets, subjects performed passive viewing and/or listening.

## 4 Evaluation Metrics

Two metrics are popularly used to evaluate brain encoding models: 2V2 accuracy [13, 14] and Pearson correlation [17], as shown in Figure 5. They are defined as follows. Given a subject and a brain region, let \(N\) be the number of samples. Let \(\{Y_{i}\}_{i=1}^{N}\) and \(\{\hat{Y}_{i}\}_{i=1}^{N}\) denote the actual and predicted voxel value vectors for the \(i^{th}\) sample. Thus, \(Y\in R^{N\times V}\) and \(\hat{Y}\in R^{N\times V}\), where \(V\) is the number of voxels in that region.

**2V2 Accuracy** is computed as \(\frac{1}{\binom{N}{2}}\sum_{i=1}^{N-1}\sum_{j=i+1}^{N}I[\{\text{cosD}(Y_{i},\hat{Y}_{i})+\text{cosD}(Y_{j},\hat{Y}_{j})\}<\{\text{cosD}(Y_{i},\hat{Y}_{j})+\text{cosD}(Y_{j},\hat{Y}_{i})\}]\), where cosD is the cosine distance function and \(I[c]\) is an indicator function such that \(I[c]=1\) if \(c\) is true and 0 otherwise. The higher the 2V2 accuracy, the better.

**Pearson Correlation** is computed as \(\text{PC}=\frac{1}{N}\sum_{i=1}^{N}\text{corr}[Y_{i},\hat{Y}_{i}]\), where corr is the Pearson correlation function.

Brain decoding methods are evaluated using popular metrics like pairwise and rank accuracy [12, 16].
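As a concrete reference, the two encoding metrics above can be written in a few lines of NumPy. This is a minimal sketch; the function names and synthetic array shapes are illustrative, not taken from the survey:

```python
import numpy as np

def cosine_distance(a, b):
    """Cosine distance cosD between two voxel vectors."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def two_v_two_accuracy(Y, Y_hat):
    """2V2 accuracy: fraction of sample pairs (i, j) for which the matched
    predictions are closer to their own targets than the swapped ones.
    Y, Y_hat: arrays of shape (N samples, V voxels)."""
    N = Y.shape[0]
    correct, total = 0, 0
    for i in range(N - 1):
        for j in range(i + 1, N):
            matched = cosine_distance(Y[i], Y_hat[i]) + cosine_distance(Y[j], Y_hat[j])
            swapped = cosine_distance(Y[i], Y_hat[j]) + cosine_distance(Y[j], Y_hat[i])
            correct += int(matched < swapped)
            total += 1
    return correct / total

def mean_pearson(Y, Y_hat):
    """PC: average Pearson correlation between actual and predicted voxel vectors."""
    return np.mean([np.corrcoef(Y[i], Y_hat[i])[0, 1] for i in range(Y.shape[0])])
```

A perfect model scores 1.0 on both metrics, while chance-level 2V2 accuracy is 0.5.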
Other metrics used for brain decoding evaluation include the R\({}^{2}\) score, mean squared error, and representational similarity matrices [1, 15].

**Pairwise Accuracy**: To measure pairwise accuracy, the first step is to predict all the test stimulus vector representations using a trained decoder model. Let S = [S\({}_{0}\), S\({}_{1}\), \(\cdots\), S\({}_{n}\)] and \(\hat{S}\) = [\(\hat{S}_{0}\), \(\hat{S}_{1}\), \(\cdots\), \(\hat{S}_{n}\)] denote the "true" (stimuli-derived) and predicted stimulus representations for \(n\) test instances, respectively. Given a pair \((i,j)\) such that \(0\leq i,j\leq n\), the score is 1 if \(corr\)(S\({}_{i}\), \(\hat{S}_{i}\)) + \(corr\)(S\({}_{j}\), \(\hat{S}_{j}\)) > \(corr\)(S\({}_{i}\), \(\hat{S}_{j}\)) + \(corr\)(S\({}_{j}\), \(\hat{S}_{i}\)), and 0 otherwise, where \(corr\) denotes the Pearson correlation. The final pairwise matching accuracy per participant is the average of scores across all pairs of test instances.

**Rank Accuracy**: For computing rank accuracy, each decoded vector is first compared to all the "true" stimuli-derived semantic vectors, which are ranked by their correlation with it. The classification performance reflects the rank \(r\) of the stimuli-derived vector for the correct word/picture/stimulus: \(1-\frac{r-1}{\#\text{instances}-1}\). The final accuracy value for each participant is the average rank accuracy across all instances.

## 5 Brain Encoding

Encoding is the learning of the mapping from the stimulus domain to the neural activation. The quest in brain encoding is to "reverse engineer" the algorithms that the brain uses for sensation, perception, and higher-level cognition. Recent breakthroughs in applied NLP enable reverse engineering the language function of the brain. Similarly, pioneering results have been obtained in reverse engineering the function of the ventral visual stream in object recognition, founded on the advances and remarkable success of deep CNNs. The overall schema for building a brain encoder is shown in Figure 6.
Figure 5: Evaluation Metrics for Brain Encoding and Decoding. (LEFT) Pearson Correlation, (MIDDLE) 2V2 Accuracy [13], and (RIGHT) Pairwise Accuracy.

Initial studies on brain encoding focused on smaller datasets and a single modality of brain responses. Early models used word representations [13]. Rich contextual representations derived from RNNs such as LSTMs resulted in superior encoding models of narratives [17, 16]. Recent efforts are aimed at utilizing the internal representations extracted from transformer-based language models such as ELMo, BERT, and GPT-2 for learning encoding models of brain activation [11, 12]. High-grain details such as lexical, compositional, syntactic, and semantic representations of narratives are factorized from transformer-based models and utilized for training encoding models. The resulting models are better able to disentangle the corresponding brain responses in fMRI [12]. Finally, it has been found that models that integrate task and stimulus representations have significantly higher prediction performance than models that do not account for task semantics [13, 14]. Similarly, in vision, early models focused on independent models of visual processing (object classification) using CNNs [23]. Recent efforts in visual encoding models focus on using richer visual representations derived from a variety of computer vision tasks [24]. Instead of feed-forward deep CNN models, using shallow recurrence enabled better capture of temporal dynamics in visual encoding models [15, 16]. Table 2 summarizes various encoding models proposed in the literature for textual, audio, visual, and multimodal stimuli. Figure 7 classifies the encoding literature along stimulus domains such as vision, auditory, multimodal, and language, and the corresponding tasks in each domain.
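Nearly all of the encoding models discussed in this section share the same backbone: a voxel-wise ridge regression from stimulus features to brain responses. A minimal self-contained sketch of that backbone on synthetic data (the shapes, noise level, and variable names are illustrative assumptions, not values from any of the cited studies):

```python
import numpy as np

def fit_ridge(X, Y, alpha=1.0):
    """Closed-form ridge regression mapping stimulus features X (n x d)
    to voxel responses Y (n x v): W = (X'X + alpha*I)^{-1} X'Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

# Synthetic stand-in for "stimulus features -> fMRI voxels" (illustrative only).
rng = np.random.default_rng(0)
X_train = rng.standard_normal((200, 16))    # e.g. word-embedding features per TR
W_true = rng.standard_normal((16, 50))      # unknown feature-to-voxel mapping
Y_train = X_train @ W_true + 0.1 * rng.standard_normal((200, 50))

W = fit_ridge(X_train, Y_train, alpha=1.0)  # learned encoder weights
X_test = rng.standard_normal((20, 16))
Y_pred = X_test @ W                         # predicted voxel responses
```

The predictions `Y_pred` would then be scored against held-out recordings with the 2V2 or Pearson metrics from Section 4; in practice `alpha` is tuned per voxel by cross-validation.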
**Linguistic Encoding**: A number of previous works have investigated the alignment between pretrained language models and brain recordings of people comprehending language. Huth et al. [10] have been able to identify brain ROIs (Regions of Interest) that respond to words with similar meanings and have thus built a "semantic atlas" of how the human brain organizes language. Many studies have shown accurate results in mapping brain activity using neural distributed word embeddings for linguistic stimuli [11, 12, 13, 14]. Unlike earlier models where each word is represented as an independent vector in an embedding space, [13] built encoding models using rich contextual representations derived from an LSTM language model in a story listening task. With these contextual representations, they demonstrated a dissociation in brain activation: auditory cortex (AC) and Broca's area for shorter contexts, versus the left temporo-parietal junction (TPJ) for longer contexts. [10] presents the first multimodal framework for evaluating six types of word embeddings (Word2Vec, WordNet2Vec, GloVe, FastText, ELMo, and BERT) on 15 datasets, including eye-tracking, EEG and fMRI signals recorded during language processing. With the recent advances in contextual representations in NLP, a few studies incorporated them in relating sentence embeddings with brain activity patterns [24, 11, 12]. More recently, researchers have begun to study the alignment of language regions of the brain with the layers of language models and found that the best alignment was achieved in the middle layers of these models [13, 14, 15]. Schrimpf et al. [16] examined the relationship between 43 diverse state-of-the-art language models and measurements of neural activity during language comprehension. They also studied the behavioral signatures of human language processing in the form of self-paced reading times and a range of linguistic functions assessed via standard engineering tasks from NLP.
They found that Transformer-based models perform better than RNNs or word-level embedding models. Larger-capacity models perform better than smaller models. Models initialized with random weights (prior to training) perform surprisingly similarly in neural predictivity to fully trained models, suggesting that network architecture contributes as much as, or more than, experience-dependent learning to a model's match to the brain. Antonello et al. [16] proposed a "language representation embedding space" and demonstrated the effectiveness of features from this embedding in predicting fMRI responses to linguistic stimuli.

Figure 6: Schema for Brain Encoding

**Disentangling the Syntax and Semantics**: The representations of transformer models like BERT and GPT-2 have been shown to linearly map onto brain activity during language comprehension. Several studies have attempted to disentangle the contributions of different types of information from word representations to the alignment between brain recordings and language models. Wang et al. (2020) proposed a two-channel variational autoencoder model to dissociate sentences into semantic and syntactic representations and separately associate them with brain imaging data to find feature-correlated brain regions. To separate out each syntactic feature, Zhang et al. (2022) proposed a feature elimination method called Mean Vector Null space Projection. Compared with word representations, word syntactic features (parts-of-speech, named entities, semantic roles, dependencies) seem to be distributed across brain networks instead of in a single local brain region. From the previous two studies, we do not know whether all or any of these representations effectively drive the linear mapping between language models (LMs) and the brain. Toneva et al.
(2022) presented an approach to disentangle supra-word meaning from lexical meaning in language models and showed that supra-word meaning is predictive of fMRI recordings in two language regions (anterior and posterior temporal lobes). Caucheteux et al. (2021) proposed a taxonomy to factorize the high-dimensional activations of language models into four combinatorial classes: lexical, compositional, syntactic, and semantic representations. They found that (1) compositional representations recruit a more widespread cortical network than lexical ones, encompassing the bilateral temporal, parietal and prefrontal cortices; and (2) contrary to previous claims, syntax and semantics are not associated with separate modules but instead appear to share a common and distributed neural substrate. While previous works studied syntactic processing as captured through complexity measures (syntactic surprisal, node count, word length, and word frequency), very few have studied the syntactic representations themselves. Studying syntactic representations using fMRI is difficult because: (1) representing syntactic structure in an embedding space is a non-trivial computational problem, and (2) the fMRI signal is noisy. To overcome these limitations, Reddy et al. [2021] proposed syntactic structure embeddings that encode the syntactic information inherent in natural text that subjects read in the scanner.

Table 2: Summary of Representative Brain Encoding Studies

| Stimuli | Authors | Type | Lang. | Stimulus Representations | \|S\| | Dataset | Model |
|---|---|---|---|---|---|---|---|
| Text | [Jain and Huth, 2018] | fMRI | English | LSTM | 6 | Subset Moth Radio Hour | Ridge |
| | [Toneva and Wehbe, 2019] | fMRI/MEG | English | ELMo, BERT, Transformer-XL | 9 | Harry Potter | Ridge |
| | [Toneva et al., 2020] | MEG | English | BERT | 9 | Question answering | Ridge |
| | [Schrimpf et al., 2021] | fMRI/ECoG | English | 43 language models (e.g., GloVe, ELMo, BERT, GPT-2, XLNet) | 20 | Neural architecture of language | Ridge |
| | [Gauthier and Levy, 2019] | fMRI | English | BERT fine-tuned on NLP tasks (sentiment, natural language inference), scrambled language model | 7 | Imagining a situation with the noun | Ridge |
| | [Deniz et al., 2019] | fMRI | English | GloVe | 9 | Subset Moth Radio Hour | Ridge |
| | [Jain et al., 2020] | fMRI | English | LSTM | 6 | Subset Moth Radio Hour | Ridge |
| | [Caucheteux et al., 2021] | fMRI | English | GPT-2, basic syntax features | 345 | Narratives | Ridge |
| | [Antonello et al., 2021] | fMRI | English | GloVe, BERT, GPT-2, machine translation, POS tasks | 6 | Moth Radio Hour | Ridge |
| | [Reddy and Wehbe, 2021] | fMRI | English | Constituency trees, basic syntax features and BERT | 8 | Harry Potter | Ridge |
| | [Goldstein et al., 2022] | ECoG | English | GloVe, GPT-2 next word, pre-onset, post-onset | 8 | – | Ridge |
| | [Oota et al., 2022] | fMRI | English | BERT and GLUE tasks | 82 | Pereira & Narratives | Ridge |
| | [Oota et al., 2022] | fMRI | English | LSTM, ELMo, Longformer | 82 | Narratives | Ridge |
| | [Merlin and Toneva, 2022] | fMRI | English | BERT, next-word prediction, multi-word semantics | 8 | Harry Potter | Ridge |
| | [Toneva et al., 2022] | fMRI/MEG | English | ELMo, BERT, context residuals | 8 | Harry Potter | Ridge |
| | [Aw and Toneva, 2022] | fMRI | English | BART, Longformer, Long-T5, BigBird, and corresponding Booksum models | 8 | Passive reading | Ridge |
| | [Zhang et al., 2022] | fMRI | English | Node count | 19, 12 | Zhang | Ridge |
| | [Oota et al., 2023a] | fMRI | English | Constituency trees, dependency trees, basic syntax features and BERT | 82 | Narratives | Ridge |
| | [Oota et al., 2023b] | MEG | English | Basic syntax features, GloVe and BERT | 8 | MEG-MASC | Ridge |
| | [Tuckute et al., 2023] | fMRI | English | BERT-Large, GPT-2 XL | 12 | Reading sentences | Ridge |
| | [Kauf et al., 2023] | fMRI | English | BERT-Large, GPT-2 XL | 12 | Pereira | Ridge |
| | [Singh et al., 2023] | fMRI | English | BERT-Large, GPT-2 XL, text perturbations | 5 | Pereira | Ridge |
| Visual | [Wang et al., 2019] | fMRI | – | 21 downstream vision tasks | 4 | BOLD5000 | Ridge |
| | [Kubilius et al., 2019] | fMRI | – | CNN models (AlexNet, ResNet, DenseNet) | 7 | Algonauts | Ridge |
| | [Dwivedi et al., 2021] | fMRI | – | 21 downstream vision tasks | 4 | BOLD5000 | Ridge |
| | [Oota et al., 2022] | fMRI | – | CNN models (AlexNet) | 4 | BOLD5000 | Ridge |
| | [Conwell et al., 2023] | fMRI | – | CNN models (AlexNet) | 4 | BOLD5000 | Ridge |
| Audio | [Millet et al., 2022] | fMRI | English | Wav2Vec2.0 | 345 | Narratives | Ridge |
| | [Vaidya et al., 2022] | fMRI | English | APC, AST, Wav2Vec2.0, and HuBERT | 7 | Moth Radio Hour | Ridge |
| | [Tuckute et al., 2022] | fMRI | English | 19 speech models (e.g., DeepSpeech, Wav2Vec2.0, VQ-VAE) | 19 | Passive listening | Ridge |
| | [Oota et al., 2023c] | fMRI | English | 5 basic and 25 deep-learning-based speech models (TERA, CPC, APC, Wav2Vec2.0, HuBERT, DistilHuBERT) | 6 | Moth Radio Hour | Ridge |
| | [Oota et al., 2023d] | fMRI | English | Wav2Vec2.0 and SUPERB tasks | 82 | Narratives | Ridge |
| Multimodal | [Dong and Toneva, 2023] | fMRI | English | Merlot Reserve | 5 | Neuromod | Ridge |
| | [Popham et al., 2021] | fMRI | English | 985-D semantic vectors | 5 | Moth Radio Hour & Short Movie | Ridge |
| | [Oota et al., 2022d] | fMRI | English | CLIP, VisualBERT, LXMERT, CNNs and BERT | 5, 82 | Pereira & Narratives | Ridge |
| | [Lu et al., 2022] | fMRI | English | BriVL | 5 | Pereira & Short Movie Clips | Ridge |
| | [Tang et al., 2023] | fMRI | English | BridgeTower | 5 | Moth Radio Hour & Short Movie | Ridge |

Figure 7: Brain Encoding Survey Tree
The results reveal that syntactic structure-based features explain additional variance in the brain activity of various parts of the language system, even after controlling for complexity metrics that capture processing load. Toneva et al. [2021] further examined whether the representations obtained from a language model align with different language processing regions in similar or different ways. **Linguistic properties in LMs and brains**: Understanding the reasons behind the observed similarities between language comprehension in LMs and brains can lead to more insights into both systems. Several works [Schwartz et al. 2019; Kumar et al. 2022; Aw and Toneva, 2022; Merlin and Toneva, 2022; Oota et al. 2022b] have found that using a fine-tuned BERT leads to improved brain predictions. However, it is not clear what type of information in the fine-tuned BERT model led to the improvement. It is unclear whether and how the two systems align in their information processing pipelines. Aw and Toneva [2022] used four pre-trained large language models (BART, Longformer Encoder-Decoder, BigBird, and LongT5) and also trained them to improve their narrative understanding, using the method detailed in Figure 8. However, it is not understood whether prediction of the next word is necessary for the observed brain alignment or simply sufficient, and whether there are other shared mechanisms or types of information that are similarly important. Merlin and Toneva [2022] proposed two perturbations to pretrained language models that, when used together, can control for the effects of next-word prediction and word-level semantics on the alignment with brain recordings. Specifically, they find that improvements in alignment with brain recordings in two language processing regions, the Inferior Frontal Gyrus (IFG) and the Angular Gyrus (AG), are due to next-word prediction and word-level semantics.
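Alignment studies of this kind typically fit a separate ridge encoder per model layer (or per model variant) and compare held-out prediction quality. A toy sketch of that procedure follows; the "layers" and voxel responses are synthetic stand-ins, not real recordings, and only the comparison loop mirrors the literature:

```python
import numpy as np

def ridge_fit(X, Y, alpha=10.0):
    """Closed-form ridge regression from features X to responses Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

def encoding_score(X_tr, Y_tr, X_te, Y_te, alpha=10.0):
    """Mean per-voxel Pearson correlation between held-out predictions and truth."""
    W = ridge_fit(X_tr, Y_tr, alpha)
    P = X_te @ W
    return float(np.mean([np.corrcoef(Y_te[:, v], P[:, v])[0, 1]
                          for v in range(Y_te.shape[1])]))

# Toy "model layers": only the middle one carries the signal the voxels track.
rng = np.random.default_rng(3)
n, d, v = 240, 12, 30
signal = rng.standard_normal((n, d))
layers = [0.3 * signal + rng.standard_normal((n, d)),  # early layer: weak signal
          signal + 0.1 * rng.standard_normal((n, d)),  # middle layer: strong signal
          0.3 * signal + rng.standard_normal((n, d))]  # late layer: weak signal
Y = signal @ rng.standard_normal((d, v)) + 0.2 * rng.standard_normal((n, v))

tr, te = slice(0, 200), slice(200, 240)
scores = [encoding_score(L[tr], Y[tr], L[te], Y[te]) for L in layers]
best = int(np.argmax(scores))  # layer whose encoder predicts held-out voxels best
```

In real studies the per-layer scores are computed voxel-wise and compared statistically, which is how claims like "middle layers align best" are established.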
However, what linguistic information actually underlies the observed alignment between brains and language models is not clear. Recently, Oota et al. [2022e] tested the effect of a range of linguistic properties (surface, syntactic, and semantic) and found that the elimination of each linguistic property results in a significant decrease in brain alignment across all layers of BERT. **Visual Encoding**: CNNs are currently the best class of models of the neural mechanisms of visual processing [Du et al. 2020; Beliy et al. 2019; Oota et al. 2019; Nishida et al. 2020]. How can we push these deep CNN models to capture brain processing even more stringently? Continued architectural optimization on ImageNet alone no longer seems like a viable option. Kubilius et al. [2019] proposed CORnet, a shallow recurrent anatomical network that follows neuroanatomy more closely than standard CNNs, and achieved state-of-the-art results on the Brain-Score benchmark. It has four computational areas, conceptualized as analogous to the ventral visual areas V1, V2, V4, and IT, and a linear category decoder that maps from the population of neurons in the model's last visual area to its behavioral choices. Despite the effectiveness of CNNs, it is difficult to draw specific inferences about neural information processing using CNN-derived representations from a generic object-classification CNN. Hence, Wang et al. [2019] built encoding models with individual feature spaces obtained from 21 computer vision tasks. One of the main findings is that features from 3D tasks, compared to those from 2D tasks, predict a distinct part of visual cortex. **Auditory Encoding**: Speech stimuli have mostly been represented using encodings of text transcriptions [Huth et al. 2016] or using basic features like phoneme rate and the sum of squared FFT coefficients [Pandey et al. 2022]. Text transcription-based methods ignore the raw audio-sensory information completely.
The basic speech feature engineering approach misses the benefits of transfer learning from rigorously pretrained speech DL models. Recently, several researchers have used popular deep learning models such as APC [Chung et al. 2020], Wav2Vec2.0 [Baevski et al. 2020], HuBERT [Hsu et al. 2021], and Data2Vec [Baevski et al. 2022] for encoding speech stimuli. Millet et al. [2022] used the self-supervised model Wav2Vec2.0 to learn latent representations of the speech waveform similar to those of the human brain. They find that the functional hierarchy of its transformer layers aligns with the cortical hierarchy of speech in the brain, and reveals the whole-brain organisation of speech processing with unprecedented clarity: the first transformer layers map onto the low-level auditory cortices (A1 and A2), while the deeper layers map onto brain regions associated with higher-level processes (e.g., STS and IFG). Vaidya et al. [2022] present the first systematic study to bridge the gap between four recent self-supervised speech representation methods (APC, Wav2Vec, Wav2Vec2.0, and HuBERT) and computational models of the human auditory system. Similar to [Millet et al. 2022], they find that self-supervised speech models are the best models of auditory areas. Lower layers best modeled low-level areas, and upper-middle layers were most predictive of phonetic and semantic areas, with layer representations following the accepted hierarchy of speech processing. Tuckute et al.
[2022] analyzed 19 different speech models and found that while some audio models derived in engineering contexts (with applications ranging from speech recognition and speech enhancement to audio captioning and audio source separation) produce poor predictions of auditory cortical responses, many task-optimized audio deep learning models outpredict a standard spectrotemporal model of the auditory cortex and exhibit hierarchical layer-region correspondence with the auditory cortex. **Multimodal Brain Encoding**: Multimodal stimuli can be best encoded using recently proposed deep-learning-based multimodal models. Oota et al. [2022d] experimented with multimodal models like Contrastive Language-Image Pre-training (CLIP), Learning Cross-Modality Encoder Representations from Transformers (LXMERT), and VisualBERT, and found VisualBERT to be the best. Similarly, Wang et al. [2022] find that multimodal models like CLIP better predict neural responses in visual cortex, since image captions typically contain the most semantically relevant information in an image for humans. Dong and Toneva [2023] present a systematic approach to probe a multimodal video Transformer model by leveraging neuroscientific evidence of multimodal information processing in the brain. The authors find that intermediate layers of a multimodal video transformer are better at predicting multimodal brain activity than other layers, indicating that the intermediate layers encode the most brain-related properties of the video stimuli. Recently, [10] investigated a multimodal Transformer as the encoder architecture to extract aligned concept representations for narrative stories and movies to model fMRI responses to naturalistic stories and movies, respectively.
Since language and vision rely on similar concept representations, the authors perform a cross-modal experiment testing how well language encoding models can predict movie-fMRI responses from narrative story features (story \(\rightarrow\) movie) and how well vision encoding models can predict narrative-story-fMRI responses from movie features (movie \(\rightarrow\) story). Overall, the authors find that cross-modality performance was higher for features extracted from multimodal transformers than for linearly aligned features extracted from unimodal transformers.

## 6 Brain Decoding

Decoding is the learning of the mapping from neural activations back to the stimulus domain. Figure 9 depicts the typical workflow for building an image/language decoder.

**Decoder Architectures**: In most cases, the stimulus representation is decoded using typical ridge regression models trained on each voxel and its 26 neighbors in 3D to predict each dimension of the stimulus representation. Also, decoding is usually performed using only the most informative voxels [17]. In some cases, a fully connected layer [1] or a multi-layered perceptron [20] has been used. In some studies, when decoding is modeled as multi-class classification, Gaussian Naive Bayes [21, 22] and SVMs [14] have also been used. Figure 10 summarizes the literature on decoding solutions proposed in the vision, auditory, and language domains.

**Decoding task settings**: The most common setting is to decode a vector representation from stimuli of a single mode (visual, text, or audio). Initial brain decoding experiments studied the recovery of simple concrete nouns and verbs from fMRI brain activity [23], where the subject watches either a picture or a word. Sun et al. [24] used several sentence representation models to associate brain activities with sentence stimuli and found InferSent to perform the best.
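A minimal version of such a decoder, ridge regression over the most informative voxels evaluated with the rank-accuracy metric from Section 4, can be sketched as follows. All data here are synthetic, and the correlation-based voxel-selection heuristic is just one simple choice among many used in practice:

```python
import numpy as np

def select_informative_voxels(Y, S, k):
    """Score each voxel by its maximum absolute correlation with any
    stimulus dimension on training data; keep the top-k voxels."""
    scores = [max(abs(np.corrcoef(Y[:, v], S[:, d])[0, 1])
                  for d in range(S.shape[1]))
              for v in range(Y.shape[1])]
    return np.argsort(scores)[::-1][:k]

def fit_decoder(Y, S, alpha=1.0):
    """Ridge decoder: voxel responses Y (n x k) -> stimulus vectors S (n x ds)."""
    k = Y.shape[1]
    return np.linalg.solve(Y.T @ Y + alpha * np.eye(k), Y.T @ S)

def rank_accuracy(S_true, S_hat):
    """Average of 1 - (r-1)/(n-1), where r is the rank of the correct
    stimulus vector when all are sorted by correlation with the decoded one."""
    n = len(S_true)
    accs = []
    for i in range(n):
        corrs = np.array([np.corrcoef(S_true[j], S_hat[i])[0, 1] for j in range(n)])
        r = 1 + int(np.sum(corrs > corrs[i]))
        accs.append(1.0 - (r - 1) / (n - 1))
    return float(np.mean(accs))

# Synthetic session: 8 "informative" voxels track the stimulus, 40 are noise.
rng = np.random.default_rng(0)
S_train = rng.standard_normal((200, 8))
Y_train = np.hstack([S_train + 0.1 * rng.standard_normal((200, 8)),
                     rng.standard_normal((200, 40))])
sel = select_informative_voxels(Y_train, S_train, k=8)
W = fit_decoder(Y_train[:, sel], S_train)
```

Chance-level rank accuracy is 0.5; a decoder that always ranks the correct stimulus first scores 1.0.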
Figure 8: Comparison of brain recordings with language models trained on web corpora (LEFT) and language models trained on book stories (RIGHT) [20].

More work has focused on decoding text passages instead of individual words [20]. Some studies have focused on decoding from multimodal stimuli, where the goal is still to decode the text representation vector. For example, Pereira et al. [20] trained the decoder on imaging data of individual concepts and showed that it can decode semantic vector representations from imaging data of sentences about a wide variety of both concrete and abstract topics from two separate datasets. Further, Oota et al. [2022c] propose two novel brain decoding setups: (1) multi-view decoding (MVD) and (2) cross-view decoding (CVD). In MVD, the goal is to build an MV decoder that can take brain recordings for any view as input and predict the concept. In CVD, the goal is to train a model which takes brain recordings for one view as input and decodes a semantic vector representation of another view. Specifically, they study practically useful CVD tasks like image captioning, image tagging, keyword extraction, and sentence formation. To better understand the application of Transformer models for decoding, Gauthier et al.
[2019] fine-tuned a pre-trained BERT on a variety of NLU tasks, asking which lead to improvements in brain-decoding performance. They find that tasks which produce syntax-light representations yield significant improvements in brain-decoding performance. Toneva et al. (2019) study how representations of various Transformer models differ across layer depth, context length, and attention type. Some studies have attempted to reconstruct words (Affolter et al., 2020), continuous language (Tang et al., 2022), images (Du et al., 2020; Beliy et al., 2019; Fang et al., 2020; Lin et al., 2022), speech (Defossez et al., 2022) or question-answer speech dialogues (Moses et al., 2019), rather than just predicting a semantic vector representation. Lastly, some studies have focused on reconstructing personal imagined experiences (Berezutskaya et al., 2020) or on application-based decoding, such as using brain activity scanned during a picture-based mechanical engineering task to predict individuals' physics/engineering exam results (Cetron et al., 2019), or detecting whether current thoughts are detailed, correspond to the past or future, or are verbal or in images (Smallwood and Schooler, 2015). Table 3 aggregates the brain decoding literature along different stimulus domains such as textual, visual, and audio.

Table 3: Summary of Representative Brain Decoding Studies

| Stimuli | Authors | Type | Lang. | Stimulus Representations | \|S\| | Dataset | Model |
|---|---|---|---|---|---|---|---|
| Text | [Pereira et al., 2018] | fMRI | English | Word2Vec, GloVe, BERT | 17 | Pereira | Ridge |
| | [Wang et al., 2020] | fMRI | English | BERT, RoBERTa | 6 | Pereira | Ridge |
| | [Oota et al., 2022c] | fMRI | English | GloVe, BERT, RoBERTa | 17 | Pereira | Ridge |
| | [Tang et al., 2022] | fMRI | English | GPT, fine-tuned GPT on Reddit comments and autobiographical stories | 7 | Moth Radio Hour | Ridge |
| Visual | [Beliy et al., 2019] | fMRI | – | End-to-end encoder-decoder, decoder-encoder, AlexNet | – | – | – |
| | [Takagi and Nishimoto, 2022] | fMRI | – | Latent diffusion model, CLIP | 4 | NSD | Ridge |
| | [Ozcelik and VanRullen, 2023] | fMRI | – | VDVAE, latent diffusion model | 7 | NSD | – |
| | [Chen et al., 2023] | fMRI | – | Latent diffusion model, CLIP | 3 | HCP fMRI-Video-Dataset | – |
| Audio | [Defossez et al., 2022] | MEG/EEG | English | Mel spectrogram, Wav2Vec2.0 | 169 | MEG-MASC | Ridge, CLIP |
| | [Gwilliams et al., 2022] | MEG | English | Phonemes | 7 | MEG-MASC | – |

Figure 9: Schema for Brain Decoding. LEFT: Image decoder [Smith et al., 2011], RIGHT: Language decoder [Wang et al., 2019]

Figure 10: Brain Decoding Survey Tree

## 7 Conclusion, Limitations, and Future Trends

**Conclusion** In this paper, we surveyed important datasets, stimulus representations, and brain encoding and decoding methods across different modalities. A glimpse of how deep learning solutions throw light on putative brain computations is given.

**Limitations** Naturalistic datasets of passive reading/listening offer ecologically realistic settings for investigating brain function.
However, the lack of a task (as in a controlled psycholinguistic experiment) that probes the participant's understanding of the narrative limits the inferences that can be made about what the participant's brain is actually engaged in while passively following the stimuli. This becomes even more important when multilingual, multiscriptal participants process stimuli in an L2 language or script: it is unclear whether the brain activity reflects the processing of L2 or the active suppression of L1 while focusing on L2 (Malik-Moraleda et al., 2022).

**Future Trends** Some of the future areas of work in this field are as follows: (1) While there is work on text, understanding the similarity in information processing between visual/speech/multimodal models and natural brain systems remains an open area. (2) Decoding to actual multimodal stimuli seems feasible thanks to recent advances in generation using deep learning models. (3) A deeper understanding of the degree to which damage to different parts of the human brain could lead to the degradation of cognitive skills. (4) How can we train artificial neural networks in novel self-supervised ways such that they compose word meanings or comprehend images and speech like a human brain? (5) How can we leverage improved neuroscience understanding to suggest changes in proposed artificial neural network architectures to make them more robust and accurate? We hope that this survey motivates research along the above directions.
2304.04005
A new transformation for embedded convolutional neural network approach toward real-time servo motor overload fault-detection
Overloading in DC servo motors is a major concern in industries, as many companies face the problem of finding expert operators, and human monitoring may not be an effective solution. Therefore, this paper proposes an embedded Artificial Intelligence (AI) approach using a Convolutional Neural Network (CNN) with a new transformation to extract faults from real-time input signals without human interference. Our main purpose is to extract as many features as possible from the input signal to achieve a relaxed dataset that results in an effective but compact network, providing real-time fault detection even on a low-memory microcontroller. Besides the fault-detection method, a synchronous dual-motor system is also proposed to take action in faulty events. To fulfill this intention, a one-dimensional input signal from the output current of each DC servo motor is monitored and transformed into a 3D stack of data, and then the CNN is implemented on the processor to detect any fault corresponding to overloading. The experimental setup achieves 99.9997% accuracy during testing for a model with nearly 8000 parameters. In addition, the proposed dual-motor system achieves overload reduction, provides a fault-tolerant system, and is shown to consume less energy.
Seyed Mohammad Hossein Abedy Nejad, Mohammad Amin Behzadi, Abdolrahim Taheri
2023-04-08T13:36:33Z
http://arxiv.org/abs/2304.04005v1
A new transformation for embedded convolutional neural network approach toward real-time servo motor overload fault-detection

###### Abstract

Overloading in DC servo motors is a major concern in industries, as many companies face the problem of finding expert operators, and human monitoring may not be an effective solution. Therefore, this paper proposes an embedded Artificial Intelligence (AI) approach using a Convolutional Neural Network (CNN) with a new transformation to extract faults from real-time input signals without human interference. Our main purpose is to extract as many features as possible from the input signal to achieve a relaxed dataset that results in an effective but compact network, providing real-time fault detection even on a low-memory microcontroller. Besides the fault-detection method, a synchronous dual-motor system is also proposed to take action in faulty events. To fulfill this intention, a one-dimensional input signal from the output current of each DC servo motor is monitored and transformed into a 3D stack of data, and then the CNN is implemented on the processor to detect any fault corresponding to overloading. The experimental setup achieves 99.9997% accuracy during testing for a model with nearly 8000 parameters. In addition, the proposed dual-motor system achieves overload reduction, provides a fault-tolerant system, and is shown to consume less energy.

Embedded AI, CNN, signal transformation, real-time fault-detection, dual-motor fault-tolerance

## 1 Introduction

DC servo motors are commonly used in household applications, industry, and military systems. They are chosen for tasks such as elevators, cars, and trains [1-4]. These tasks might last anywhere from a few seconds to a year or more. The important point is that DC servo motors have to work properly until the given task is done.
Any other scenario may cause damage to the whole system which contains these actuators, or even lead to fatality. A common problem that can stop these motors from operating is overloading. This might happen due to environmental issues or, more commonly, a torque higher than the motor can overcome, and it eventually results in malfunctioning and bad timing. Furthermore, available commercial motors in this situation will continue, unstoppably, to consume as much current as possible to overcome the load, and this will lead to a temperature rise which causes unsafe situations and can damage even the contiguous equipment [5, 6]. To avoid this, motor operation must be monitored, but it is a burden for a human to detect overload faults, distinguish sudden changes, and measure the temperature rise. And this is not the only problem: studies have shown that many countries such as the United States, Hong Kong, and European countries struggle with a shortage of skilled operators. Radonjić et al. [2] pointed to these problems and proposed IoT monitoring; however, this system is capable of enhancement, and therefore a new approach toward automation is required to solve the mentioned problems. Thanks to many researchers in fault detection, fault diagnosis, and fault tolerance, many tools and solutions are available today for fault detection: explicit detection using classic control theory, or implicit detection using methods such as fuzzy logic and the adaptive neuro-fuzzy inference system (ANFIS), genetic algorithms, Bayesian classifiers, support vector machines (SVM), and classical machine learning (ML) methods in general, as well as artificial neural networks (ANN) and convolutional neural networks (CNN), or deep learning (DL) methods in general [8, 9, 10, 11, 12]. In the vast ocean of AI research, Lei et al. [13] review ML applications,
and Avci et al. [14] review structural fault detection using classical ML methodologies; these works can be considered a good introduction. In addition, Sabih et al. [15] used image classification for fault detection, which is similar to our intention. Chen et al. [16] address mechanical fault extraction using the continuous wavelet transform and a CNN, providing good insight into signal preprocessing and applying CNNs; Brito et al. [17] choose an unsupervised AI approach to detect faults in rotary equipment, and notably implement Shapley Additive Explanations (SHAP) to interpret the classification results of unsupervised models. Ribeiro Junior et al. [18] carry out signal processing and fault extraction using a one-dimensional input and an ANN model. Jia et al. [19] recommend a supervised deep-learning model for fault detection in the presence of a large dataset. Furthermore, it is clear that implicit fault detection using AI has many advantages over other methods; to name some, it reduces cost remarkably, and with many open-source applications and models available on the internet, implementing a new model is fast and straightforward. For our objective, two different approaches are available: unsupervised learning and supervised learning [20, 21, 22, 23, 24]. In the first approach, faults do not need to be identified; AI algorithms such as anomaly detection or k-means clustering can extract faults automatically, as a faulty signal has different characteristics. However, since it is simple to gather an acceptable amount of identified faulty and normal signals, we use supervised learning algorithms.
Figure 1: Monitored signal during actuation without fault

Also, another reason can be given for our choice: although properties like the average, standard deviation, and other statistical quantities help to improve our insight into the dataset, the behavior of the fault (technically, the average and other properties of even healthy signals) is not the same for different tasks; hence it was not possible for us to employ methods such as anomaly detection. A significant subset of supervised learning methods is deep learning, which contains models similar to the neural network; the artificial neural network (ANN) is a common model in deep learning, and understanding its functionality and construction is no longer complex thanks to available open-source tools. However, for more complex purposes like computer vision or image processing, where the input data is a three-dimensional or higher-order matrix, a different neural network architecture is required. Convolutional neural networks (CNNs) are state-of-the-art methods that can detect small- to large-scale features in given inputs; for example, they can detect borders, shapes, parts of a human face, gender, age, animals, vehicle parts, and so on from given images. They can be used to detect faults, but in real-world problems we mostly expect signals rather than images, and signals are one-dimensional entities in most cases. So a common approach is to build an ANN model with one-dimensional input to detect faults in monitored signals; however, ANN models are not the most efficient method, at least in our case of overload detection (signal processing), as the nature of the signals varies over different speeds and target positions. Sequence models are another available option; for example, Shi et al. [25] apply LSTM sequential networks to planetary gearbox fault detection. These models receive input through a chain of connected computational units called a sequence, and data can flow in one or two directions.
These models are the best choice for one-dimensional data processing such as natural language processing and speech recognition; however, their complexity and high computational cost, which constrain hardware implementation, are drawbacks that prevent us from employing them for our purpose [26, 27]. Hence, in this paper overloading fault detection is investigated by converting the monitored signal to three-dimensional data whose features can be extracted precisely by a compact CNN model. Therefore, an efficient and mobile model with low computational cost is obtained, which can be implemented in a low-power microcontroller such as an Arduino. In addition to the fault-detection implementation, a synchronous dual-motor system can be used to increase the fault tolerance of the system; it was shown [28] that this system has the advantage of reducing energy consumption (this will be discussed in Section V). The rest of the paper is organized as follows: in the next section, the overloading fault and the intuition behind fault detection are discussed; in the third section, a compact CNN model is presented; in Section IV, the data acquisition method is described; experimental implementation and results are then provided in Section VI; finally, conclusions are drawn in Section VII.

Figure 2: Three channels of data during multiple faulty events

The first step to establish our fault-detection model is to understand overloading faults in monitored signals, since labeled data and identified faults are essential for training the model with our supervised approach; hence, in this section we inspect the collected signals to extract faults.
As illustrated in Figure 3, during an overloading event the output current of the motor increases and fluctuates over a range until the shutdown command; this pattern is repeated for the same overloading event, but with different scales over different speeds. The important features to extract are the fluctuation range, the amount of fluctuation, the current speed, the target position, the slope of the sudden rise, the rate of the sudden rise within the sampling interval, the size of the dissipated charge, and the amount of temperature rise; checking all of these parameters helps an operator detect faulty signals. The major concern is that many of the mentioned parameters must be checked solely by monitoring the output signal, which is a burden for the operator; moreover, distinguishing faulty signals from the common fluctuations during normal actuation, shown in Figure 1, makes the situation difficult even for a skilled technician. Since our CNN model needs to detect faults, an accurately characterized faulty signal is needed for our dataset. To make this signal interpretable, one idea is to transform the signals to obtain a smooth and interpretable trend. Many solutions are available [29, 30]; the one which meets our intention is to integrate the signal over a specific range of time. As shown in Figure 4, the simple integration transformation results in a valuable trend that provides many specifications such as the target position, the reference speed, and the charge flow, but the important point is that the sudden jump in the resulting trend is clear and detectable, even more so than in the raw signal.
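As a rough illustration of this integration idea (synthetic signal and illustrative parameters, not the authors' data): integrating a noisy current window averages out the fluctuations while turning a sustained overload rise into a clear change of slope.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.001                           # sampling interval (illustrative)

# Synthetic motor current: noisy baseline, then a sustained overload rise.
current = 0.5 + 0.05 * rng.standard_normal(1024)
current[600:] += 1.0                 # simulated overload event

# Running integral (cumulative charge): noise averages out,
# but the overload shows up as a clear change of slope.
integral = np.cumsum(current) * dt

# The slope before vs. after the event is far easier to threshold
# than the raw fluctuating samples.
slope_before = (integral[599] - integral[0]) / (599 * dt)
slope_after = (integral[-1] - integral[600]) / ((1023 - 600) * dt)
print(slope_before, slope_after)
```

Here `slope_before` hovers around the healthy baseline current, while `slope_after` roughly triples, so a simple slope comparison on the integrated trend already separates the two regimes.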
So it can be seen that signals transformed by integration provide even more features for extracting faults than the original signals, and are compatible with our needs; subsequently, these two channels of data can be combined to detect a fault. Up to here, two channels of data for a specific time range have been obtained; if a similar third channel is obtained, it is possible to construct a three-dimensional array similar to an image with RGB channels, and therefore a CNN model can be implemented to extract the overloading fault with respect to more details than a one-dimensional signal allows. Differentiation is chosen as the third channel to reach the required three-dimensional data. The reason behind this choice is that differentiation extracts changes in the trend remarkably well. Now that the idea is clarified: the original signal is used together with its integral and differential transforms (Figure 4) to construct our image; the CNN model is then trained to extract faults from the resulting image. Finally, we expect to achieve a dataset that simplifies the classification process, as we can enhance and augment the data; hence a compact, low-parameter model can be obtained, which enables real-time embedded implementation.

Figure 3: Faulty signal acquired during monitoring over a short range of time

Figure 4: Faulty signal with integral and differential transforms

Since a way to obtain three channels from a one-dimensional signal has been found, it is now possible to employ a CNN [31, 32]. This model can extract and classify data using a pipeline of connected convolutional layers. Each convolutional layer filters the input data with properties like padding [33] and stride [34], then passes the result through an activation function [35], and may also implement pooling [36]. One of the well-known CNN structures is the ResNet model [37, 38]. This model benefits from skip connections (Figure 5).
This skip connection allows building a deeper yet compact network and facilitates information flow through the model. The building block of the ResNet can be expressed as:

\[Y = \mathcal{F}\left(\left(\omega^{l+2} X^{l+2} + b^{l+2}\right) + X^{l}\right) \tag{1}\]

where \(Y\) is the output of the ResNet block, \(\omega\) and \(b\) are the trainable parameters of the network, \(X\) is the input given to the block, and \(l\) represents the block number in the network. The important issue is to determine how to put these convolutional layers together. A compact ResNet model called toy_resnet is chosen for its simplicity and compactness, meaning fewer parameters but more efficiency in comparison to other models. The model summary is given in Table 1.

## IV Data acquisition

For experimental deployment and evaluation of the proposed model, we first need to learn the CNN model's parameters during training, accompanied by validation data. As different models may be obtained during training and validation, testing data is also used to choose the best model. To collect data, the output current (ground current) is simply monitored through the analog input of the microcontroller without the need for any extra sensor. This approach is shown in Figure 6. The ground voltage of the motor is read by the analog input of the microcontroller across an electrical resistance; then, using Equation (2), the output current can be obtained. This data is gathered at discrete time intervals and mapped by default by the microcontroller. The data is stored in a stack of length 1024, which is reshaped and normalized to a 32×32 array; simultaneously, numerical integration and differentiation are computed into similar 1024-length stacks.
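A minimal NumPy sketch of this three-channel stacking, assuming a 1024-sample current window (the per-channel min-max normalization is our assumption; the paper does not spell out its exact scaling):

```python
import numpy as np

def signal_to_image(window, dt=0.001):
    """Turn a 1024-sample current window into a 32x32x3 'image':
    channel 0 = raw signal, 1 = running integral, 2 = derivative."""
    window = np.asarray(window, dtype=np.float64)
    assert window.shape == (1024,)
    integral = np.cumsum(window) * dt
    derivative = np.gradient(window, dt)

    def norm(x):  # min-max normalize each channel to [0, 1]
        span = x.max() - x.min()
        return (x - x.min()) / span if span > 0 else np.zeros_like(x)

    channels = [norm(c).reshape(32, 32) for c in (window, integral, derivative)]
    return np.stack(channels, axis=-1)   # shape (32, 32, 3), like an RGB image

img = signal_to_image(np.sin(np.linspace(0, 8 * np.pi, 1024)))
print(img.shape)  # (32, 32, 3)
```

The resulting array has exactly the shape a small image classifier expects, so standard 2D CNN tooling can be reused unchanged.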
For data acquisition purposes, a simulated overload is asserted, and each time three stacks of data are stored with a label expressing the fault; correspondingly, three stacks of data for the normal operation of the motor are stored too. The system can now be trained and the best CNN model chosen for classifying faulty versus normal operation.

\[I=\frac{V}{R} \tag{2}\]

Table 1: Model summary of the toy_resnet

| Layer (type) | Param # | Connected to |
| --- | --- | --- |
| img (InputLayer) | 0 | - |
| conv2d_7 (Conv2D) | 224 | img |
| conv2d_8 (Conv2D) | 1168 | conv2d_7 |
| max_pooling2d_1 (MaxPooling2D) | 0 | conv2d_8 |
| conv2d_9 (Conv2D) | 1160 | max_pooling2d_1 |
| conv2d_10 (Conv2D) | 0 | conv2d_9 |
| add_2 (Add) | 1160 | conv2d_10, max_pooling2d_1 |
| conv2d_11 (Conv2D) | 1168 | add_2 |
| conv2d_12 (Conv2D) | 0 | conv2d_11 |

Figure 5: Skip connection example; this connection provides data from previous layers and helps to build a deeper network

Figure 6: Schematic of monitoring the ground current of a DC servo motor

## V Fault-tolerance and Redundant Dual-motor system

After fault detection, a consequent action fires. For overload purposes, in traditional implementations this action is confined to prompting an alert; the dual-motor system, however, is capable of intrinsically reducing energy consumption and consumes even less energy than a single-motor system. This is an advantage besides redundancy. As redundancy always helps to achieve a fault-tolerant system, a redundant motor can compensate for any fault or failure that occurs in the primary motor. Therefore, in the absence of dimensional restrictions in the overall system design and of cost limitations, a dual-motor setup is a remarkable way of bringing fault tolerance to the system.
In the rest of this section, the natural energy-reduction behavior of the dual-motor system is discussed. It can easily be justified using the motor efficiency map: as shown in Figure 7, the efficiency of a working motor depends on the current torque and speed, and there is an extremum point that leads to the most efficient operation of the motor and therefore to less energy consumption; selecting a motor for any purpose with respect to this map and this point leads to optimal operation. It can also be inferred that working at higher torque diminishes the efficiency of the motor, but if a second motor is added and the torque is split, this results in better efficiency for the whole system; it is shown in the results section that this helps the system increase the overall energy-saving rate and achieve energy consumption lower than a single-motor system.

## VI Experimental setup and results

As inferred from Table 1, the CNN model consists of nearly 8000 parameters. Therefore, even typical low-power microcontrollers can afford to store these parameters and then process inputs chunk by chunk. Chunked data processing is used only when the model is implemented on the microcontroller, due to hardware limitations, but it also speeds up computation. As discussed, our CNN model can be implemented directly on microcontrollers like the Arduino, and many free libraries are available, such as TensorFlow Lite or Yolo light. Alternatively, it is possible to use a serialized connection to pass and receive data between the microcontroller and an external processor which contains the CNN model. Both methods were employed and will be discussed. In the first step, training and testing data were extracted by the method introduced in the previous section, and eventually a dataset containing 18,000 images, labeled 0 for no-fault and 1 for faulty, was obtained.
This dataset was divided into 15,000 images as a training set, with 2,000 and 1,000 images for validation and testing respectively.

Figure 7: Efficiency map of a servo motor

Figure 8: Extracted images from one-dimensional signals; the CNN model is trained on this dataset and used to detect faults

Figure 9: Schematic of the proposed procedure for overloading fault detection

Then the toy-ResNet model is employed to classify our dataset; the model was built using Python and the Keras framework, and the results are provided in Table 2, Figure 10, and Figure 11. Since the fault-detection model is now prepared, the next step is to implement the model in the hardware layer. Two methods were chosen for implementation:

* Direct implementation (first scenario). In the very first attempt (Figure 12), we implemented a synchronous dual-motor system and injected the CNN model's parameters directly into the Arduino using the TensorFlow Lite library. The proposed system carries two microcontrollers, each of which delegates action to its corresponding actuator and, in addition, monitors the other actuator. This redundancy is helpful in a faulty event: if one of the motors malfunctions, the other one can compensate and let the operation continue. As discussed, the overall system containing primary and secondary motors consumes less energy in comparison to a single operating motor in the healthy situation; this is shown in Figure 14.

* External processor (second scenario). The alternative procedure (Figure 13) is to use an external processor which contains our CNN model; this processor receives the three channels of data through a serialized connection from the microcontroller and determines whether overloading is detected.

Figure 11: Accuracy trend of the model training

Figure 12: Real-time overloading fault detection via the synchronous dual-motor system
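To make the skip connection of Eq. (1) concrete, here is a small NumPy forward-pass sketch of one residual block (a deliberate simplification: per-channel linear maps stand in for the 3x3 Conv2D layers of the actual toy_resnet in Table 1, and ReLU plays the role of F):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, b1, w2, b2):
    """y = relu((w2 @ h + b2) + x) with h = relu(w1 @ x + b1).
    The '+ x' term is the skip connection: the block only has to
    learn a residual correction on top of its own input."""
    h = relu(w1 @ x + b1)
    return relu((w2 @ h + b2) + x)   # skip connection adds the block input

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
b1, b2 = np.zeros(8), np.zeros(8)

# With zero weights the block reduces to relu(x): information still
# flows through the skip path even if the convolutions learn nothing.
y_identity = residual_block(x, np.zeros((8, 8)), b1, np.zeros((8, 8)), b2)
print(np.allclose(y_identity, relu(x)))  # True
```

This is why skip connections let the network be deeper without blocking gradient and information flow: the identity path is always available.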
## VII Conclusion

In this research, the feasibility of converting one-dimensional signals to three-dimensional data was investigated in order to access more features, so that simple and shallow models can be employed and implemented at low cost. Overloading fault detection was chosen as a case study, and it was found that the output current of a DC servo motor is a good benchmark for overload fault detection. Then, through integration and differentiation transformations, it was possible to extract three channels similar to the RGB channels of an image. In the next step, the ResNet model was used to classify faulty and normal signals. Finally, the model was implemented both directly on a low-power microcontroller and on an external processor; testing was consistent with the expected results. It was inferred that a much simpler dataset could be built by adding two more dimensions to our data; therefore, a low-parameter model could easily classify the dataset and efficiently detect a fault. Comparing the two scenarios, the direct implementation was a burden to deploy but resulted in a faster response, since serialized data transport was eliminated; however, monitoring could not be established properly. The second scenario was easy to deploy and allowed good monitoring, but this implementation introduced a delay in the system due to data exchange. Finally, it was concluded that the proposed dual-motor system can reduce energy consumption and help to achieve a fault-tolerant system. From all observations, we recommend using the proposed transformation, which is similar to PID (proportional, integral, derivative) in control theory, for data augmentation and signal transformation. This transformation can also be considered as a special convolutional layer, and in future work we aim to show that the optimal convolutional layer coincides with the proposed PID transformation.
Figure 13: Fault detection using an external processor; in this scenario signals are transferred via a serialized connection from the microcontroller

Figure 14: Energy consumption of the single-motor and dual-motor systems; about 3% energy is saved
2307.09065
Learning Adaptive Neighborhoods for Graph Neural Networks
Graph convolutional networks (GCNs) enable end-to-end learning on graph structured data. However, many works assume a given graph structure. When the input graph is noisy or unavailable, one approach is to construct or learn a latent graph structure. These methods typically fix the choice of node degree for the entire graph, which is suboptimal. Instead, we propose a novel end-to-end differentiable graph generator which builds graph topologies where each node selects both its neighborhood and its size. Our module can be readily integrated into existing pipelines involving graph convolution operations, replacing the predetermined or existing adjacency matrix with one that is learned, and optimized, as part of the general objective. As such it is applicable to any GCN. We integrate our module into trajectory prediction, point cloud classification and node classification pipelines resulting in improved accuracy over other structure-learning methods across a wide range of datasets and GCN backbones.
Avishkar Saha, Oscar Mendez, Chris Russell, Richard Bowden
2023-07-18T08:37:25Z
http://arxiv.org/abs/2307.09065v1
# Learning Adaptive Neighborhoods for Graph Neural Networks

###### Abstract

Graph convolutional networks (GCNs) enable end-to-end learning on graph structured data. However, many works assume a given graph structure. When the input graph is noisy or unavailable, one approach is to construct or learn a latent graph structure. These methods typically fix the choice of node degree for the entire graph, which is suboptimal. Instead, we propose a novel end-to-end differentiable graph generator which builds graph topologies where each node selects both its neighborhood and its size. Our module can be readily integrated into existing pipelines involving graph convolution operations, replacing the predetermined or existing adjacency matrix with one that is learned, and optimized, as part of the general objective. As such it is applicable to any GCN. We integrate our module into trajectory prediction, point cloud classification and node classification pipelines resulting in improved accuracy over other structure-learning methods across a wide range of datasets and GCN backbones. We will release the code.

## 1 Introduction

The success of Graph Neural Networks (GNNs) [6, 1, 24] has led to a surge in graph-based representation learning. GNNs provide an efficient framework to learn from graph-structured data, making them widely applicable where data can be represented as a relation or interaction system. They have been effectively applied in a wide range of tasks [25], [33] including particle physics [4] and protein science [10]. In a GNN, each node iteratively updates its state by interacting with its neighbors, typically through message passing. However, a fundamental limitation of such architectures is the assumption that the underlying graph is provided. While node or edge features may be updated during message passing, the graph topology remains fixed, and its choice may be suboptimal for various reasons.
For instance, when classifying nodes on a citation network, an edge connecting nodes of different classes can diminish classification accuracy. These edges can degrade performance by propagating irrelevant information across the graph. When no graph is explicitly provided, such domain knowledge can be exploited to learn structures optimized for the task at hand [8, 3, 9, 7]. However, in tasks where knowledge of the optimal graph structure is unknown, one common practice is to generate a \(k\)-nearest neighbor (\(k\)-NN) graph. In such cases, \(k\) is a hyperparameter and tuned to find the model with the best performance. For many applications, fixing \(k\) is overly restrictive as the optimal choice of \(k\) may vary for each node in the graph. While there has been an emergence of approaches which learn the graph structure for use in downstream GNNs [43, 13, 15], all of them treat the node degree \(k\) as a fixed hyperparameter. We propose a general differentiable graph-generator (DGG) module for learning graph topology with or without an initial edge structure. Rather than learning graphs with fixed node degrees \(k\), our module generates local topologies with an adaptive neighborhood size. This module can be placed within any graph convolutional network, and jointly optimized with the rest of the network's parameters, learning topologies which favor the downstream task without hyper-parameter selection or indeed any additional training signal. The primary contributions of this paper are as follows: 1. We propose a novel, differentiable graph-generator (DGG) module which jointly optimizes both the neighborhood size, and the edges that should belong to each neighborhood. Note a key limitation of existing approaches [43, 15, 13, 8, 3, 7, 37] is their inability to learn neighborhood sizes. 2. 
Our DGG module is directly integrable into any pipeline involving graph convolutions, where either the given adjacency matrix is noisy, or unavailable and must be determined heuristically. In both cases, our DGG generates the adjacency matrix as part of the GNN training and can be trained end-to-end to optimize performance on the downstream task. Should a good graph structure be known, the generated adjacency matrix can be learned to remain close to it while optimizing performance. 3. To demonstrate the power of the approach, we integrate our DGG into a range of SOTA pipelines -- without modification -- across different datasets in trajectory prediction, point cloud classification and node classification and show improvements in model accuracy. ## 2 Related work **Graph Representation Learning:** GNNs [1] are a broad class of neural architectures for modelling data which can be represented as a set of nodes and relations (edges). Most use message-passing to build node representations by aggregating neighborhood information. A common formulation is the Graph Convolution Network (GCNs) which generalizes the convolution operation to graphs [16, 5, 38, 11]. More recently, the Graph Attention Network (GAT) [35] utilizes a self-attention mechanism to aggregate neighborhood information. However, these works assumed that the underlying graph structure is fixed in advance, with the graph convolutions learning features that describe pre-existing nodes and edges. In contrast, we simultaneously learn the graph structure while using our generated adjacency matrix in downstream graph convolutions. The generated graph topology of our module is jointly optimized alongside other network parameters with feedback signals from the downstream task. **Graph Structure Learning:** In many applications, the optimal graph is unknown, and a graph is constructed before training a GNN. One question to ask is: "Why isn't a fully-connected graph suitable?" 
Constructing adjacency matrices weighted by distance or even an attention mechanism [35] over a fully-connected graph incorporates many task-irrelevant edges, even if their weights are small. While an attention mechanism can zero these out -- i.e., discover a subgraph within the complete graph -- discovering this subgraph is challenging given the combinatorial complexity of graphs. A common remedy is to sparsify a complete graph by selecting the \(k\)-nearest neighbors (\(k\)-NN). Although this can prevent the propagation of irrelevant information between nodes, the topology of the constructed graph may have no relation to the downstream task. Not only can irrelevant edges still exist, but pairs of relevant nodes may remain unconnected, which can lead GCNs to learn representations with poor generalization [43]. To overcome this, recent works constructed bespoke frameworks which learn the graph's adjacency matrix for specific tasks. For instance, in human pose estimation, some methods [31, 20] treat the elements of the adjacency matrix as a set of learnable weights. However, as each element is treated as a learnable parameter, the learned adjacency matrix is unlinked to the representation space and can only be used in tasks where there is a known correspondence between training and test nodes. This is not the case for many vision and graph tasks. Others [15, 7, 17] employed variational inference frameworks to sample the entire adjacency matrix. Franceschi _et al._ [9] jointly learned the graph structure and the parameters of a GCN by approximately solving a bilevel program. NodeFormer [37] and IDGL [3] instead learned latent topologies using multi-head attention [34]. There are two key differences between these methods and ours. First, we simplify optimization by factorizing the adjacency matrix distribution from which we sample the neighborhood for each node, as opposed to sampling the entire matrix. 
Second, these methods are bespoke frameworks specifically designed for node and graph classification. They leverage knowledge of the task in their loss functions, such as graph smoothness and sparsity [3]. As these methods are tailored to graph-based tasks only, they cannot be dropped into any GCN without modification, limiting their applicability to non-graph tasks like vision. In contrast, our module is both GCN and task-agnostic, and can be integrated into any GCN pipeline and trained using the downstream task loss. In contrast to the bespoke frameworks above, recent methods [43, 21, 13] took a more module-based approach similar to ours. As these approaches learned the graph structure entirely from the downstream task loss, there is less domain knowledge to leverage compared to methods constructed for specific tasks. Consequently, sparsity is often induced through a \(k\)-NN graph. Here, \(k\) is a scalar hyperparameter selected to control the learned graph's node degree. Unlike these works, we generate neighborhoods of varying size by learning a distribution over the edges _and_ over the node degree \(k\). Each node samples its top-\(k\) neighbors (where \(k\) is now a continuous variable), allowing it to individually select its neighborhood and the edges that should belong to it, in a differentiable manner. Additionally, a known 'ideal' graph structure can be used as intermediate supervision to further constrain the latent space. ## 3 Method Here, we provide details of our differentiable graph generation (DGG) module. We begin with notation and the statistical learning framework guiding its design, before describing the module, and how it is combined with graph convolutional backbone architectures. **Notation** We represent a graph of \(N\) nodes as \(G=(V,E)\): where \(V\) is the set of nodes or vertices, and \(E\) the edge set. 
A graph's structure can be described by its adjacency matrix \(A\), with \(a_{ij}=1\) if an edge connects nodes \(i\) and \(j\) and \(a_{ij}=0\) otherwise. This binary adjacency matrix \(A\) is directed, and potentially asymmetrical. **Problem definition** We reformulate the baseline prediction task, which assumes a fixed graph, as an adaptive variant where the graph is learned. Typically, such baseline tasks make learned predictions \(Y\) given a set of input features \(X\) and a graph structure \(A\) of node degree \(k\): \[Y=Q_{\phi}(X,A(k)), \tag{1}\] where \(Q_{\phi}\) is an end-to-end neural network parameterized by learnable weights \(\phi\). These formulations require a predetermined graph structure \(A(k)\), typically based on a choice of node degree \(k\), and take \(A(k)\) as additional input to the model. In contrast, we _learn_ both \(A\) and \(k\) in an end-to-end manner, and use them to make predictions \(Y\). As graphs are inherently binary, with edges either present or absent, they are not directly optimizable using gradient descent. Instead, we consider a distribution of graphs, \(\mathcal{G}\), which then induces a distribution of labels, \(\mathcal{Y}\), in the downstream task. This distribution takes the factorized form: \[P(Y|X)=\sum_{A\in\mathcal{G}}\sum_{k\in\mathbb{N}^{|V|}}Q_{\phi}(X,A)P(A|X,k)P(k|X), \tag{2}\] where \(P(k|X)\) is the distribution of node degree \(k\) given \(X\) (i.e., the choice of \(k\) in \(k\)-NN), \(P(A|X,k)\) the distribution of graph structures \(A\) conditioned on the learned \(k\) and input \(X\), and \(P(Y|X)\) is the downstream distribution of labels conditioned on data \(X\). For clarity, the adjacency \(A\) represents a subgraph of a complete graph over \(X\), and \(k\) is a multidimensional variable controlling the number of top-\(k\) neighbors for each node individually. 
To avoid learning individual probabilities for each possible graph \(A\) in an exponential state space, we further assume that \(P(A|X,k)\) has a factorized distribution where each neighborhood is sampled independently, i.e. \(P(A|X,k)=\prod_{i\in V}P(a_{i}|X,k)\). We model the distributions over adjacencies \(A\) and \(k\) with tractable functions: \[P(Y|X)\approx\sum_{A}\sum_{k}Q_{\phi}(X,A)Q_{\theta}(A|X,k)Q_{\rho}(k|X), \tag{3}\] where \(Q_{\theta}\) and \(Q_{\rho}\) are functions parameterized by \(\theta\) and \(\rho\) to approximate \(P(A|X,k)\) and \(P(k|X)\), respectively. In Fig. 1, we illustrate the functions of our method compared to the typical prediction task in Eq. 1. Using this formulation, we train the entire system end-to-end to minimize the expected loss when sampling \(Y\). This can be efficiently performed using stochastic gradient descent. In the forward pass, we first sample a subgraph/set of nodes \(X\) from the space of datapoints, and conditioning on \(X\) we sample \(A\) and compute the associated label \(Y\). When computing the gradient step, we update \(Q_{\phi}(X,A)\) as normal and update the distributions using two standard reparametrization tricks: one for discrete variables [12] such that \(Q_{\theta}(A|X,k)\) can generate differentiable graph samples \(A^{\prime}\), and another for continuous variables [14] of \(k^{\prime}\) drawn from \(Q_{\rho}(k|X)\): \[\begin{split} P(Y|X)\approx\sum_{A^{\prime}}\sum_{k^{\prime}}Q_{ \phi}(X,A^{\prime}),\\ \text{where }A^{\prime}\sim Q_{\theta}(A|X,k^{\prime})\text{ and }k^{ \prime}\sim Q_{\rho}(k|X).\end{split} \tag{4}\] As both the graph structure \(A^{\prime}\) and variable \(k^{\prime}\) samplers are differentiable, our DGG module can be readily integrated into pipelines involving graph convolutions and jointly trained end-to-end. 
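Both samplers in Eq. 4 rest on standard reparameterization tricks. As an illustration only, they can be sketched in plain Python below; scalar lists stand in for tensors, the function names are ours, and a real implementation would use differentiable tensor ops so gradients flow through the samples:

```python
import math
import random

def gumbel_noise():
    # g ~ Gumbel(0, 1) via inverse transform sampling: g = -log(-log(u)).
    u = random.random()
    return -math.log(-math.log(u + 1e-20) + 1e-20)

def sample_edges(log_probs, tau=1.0):
    # Discrete reparameterization (Gumbel-Softmax): perturb each
    # log-probability with independent Gumbel noise, then apply a
    # temperature-scaled softmax over one node's candidate neighbors.
    pert = [(lp + gumbel_noise()) / tau for lp in log_probs]
    m = max(pert)  # subtract max for numerical stability
    exps = [math.exp(p - m) for p in pert]
    total = sum(exps)
    return [e / total for e in exps]

def sample_degree(mu, sigma):
    # Continuous reparameterization: k' = mu + eps * sigma, eps ~ N(0, 1).
    return mu + random.gauss(0.0, 1.0) * sigma
```

Because the noise enters additively and the remaining operations are smooth, both samples are differentiable with respect to the learned parameters (the logits, \(\mu\), and \(\sigma\)), which is what allows the whole pipeline to be trained with stochastic gradient descent.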
### Differentiable Graph Generation Our differentiable graph-generator (DGG) takes a set of nodes \(V=\{v_{1},...,v_{N}\}\) with \(d\)-dimensional features \(\mathbf{X}\in\mathbb{R}^{N\times d}\) and generates a (potentially) asymmetric adjacency matrix \(\mathbf{A}\in\mathbb{R}^{N\times N}\). This adjacency matrix can be used directly in any downstream graph convolution operation (see Module Instantiation below). As illustrated by Fig. 2, the DGG module consists of four components: 1. **Node encoding:** this component projects the input node features \(\mathbf{X}\in\mathbb{R}^{N\times d}\) to a latent representation \(\mathbf{\hat{X}}\in\mathbb{R}^{N\times d^{\prime}}\), which forms the primary representation space of the model. 2. **Edge ranking**: this takes the latent node features \(\mathbf{\hat{X}}\in\mathbb{R}^{N\times d^{\prime}}\) and generates a matrix representing a stochastic ordering of edges \(\mathbf{E}\in\mathbb{R}^{N\times N}\) drawn from a learned distribution over the edge-probabilities (\(A^{\prime}\sim Q_{\theta}(A|X,k^{\prime})\) from Eq. 4). 3. **Degree estimation**: this component estimates the number of neighbors each individual node is connected to. It takes as input the latent node features \(\mathbf{\hat{X}}\in\mathbb{R}^{N\times d^{\prime}}\) and generates random samples \(k\in\mathbb{R}^{N}\) drawn from a learned distribution over the node degree (\(k^{\prime}\sim Q_{\rho}(k|X)\) from Eq. 4). 4. **Differentiable top-\(k\) edge selector**: takes \(k\) and the edge-samples \(e\) and performs a soft thresholding that probabilistically selects the most important elements, Figure 1: (Left) A typical prediction task using graphs \(Y=Q_{\phi}(X,A)\) where \(A\) and \(k\) are predetermined. (Right) Our reformulation \(P(Y|X)\approx\sum_{A}\sum_{k}Q_{\phi}(X,A)Q_{\theta}(A|X,k)Q_{\rho}(k|X)\) which learns a distribution over \(A\) and \(k\) alongside the downstream task. 
Figure 2: Our differentiable graph-generator (DGG) takes input nodes \(\mathbf{X}\) and generates an adjacency matrix \(\mathbf{A}\). It consists of: (1) **Degree-estimator**: generates samples of \(k_{i}\) for each node, (2) **Edge-ranker**: generates edge samples \(\mathbf{e}_{i}\) for each node and (3) **Top-k selector**: takes \(k_{i}\) and edge samples \(\mathbf{e}_{i}\) and selects top-k elements in a differentiable manner to output a final adjacency \(\mathbf{A}\). based on the output of the Edge-ranking to output an adjacency matrix \(\mathbf{A}\in\mathbb{R}^{N\times N}\). We now explain these steps in more detail: **Node encoding** We construct a single latent space from the input node features, and use it for edge ranking and degree estimation. We first map input node features \(\mathbf{X}\in\mathbb{R}^{N\times d}\) into latent features \(\mathbf{\hat{X}}\in\mathbb{R}^{N\times d^{\prime}}\) using a multi-layer perceptron (MLP) \(N_{\phi}\) with weights \(\phi\): \(\mathbf{\hat{X}}=N_{\phi}(\mathbf{X})\). These latent features form the input for the rest of the DGG. Furthermore, they are output by the DGG and passed to the GCN downstream to prevent vanishing gradients. **Edge ranking** The edge ranking returns an implicit distribution of edge orderings, from which we sample the neighborhood for each node. For each node \(v_{i}\), it draws a set of scores \(\mathbf{e}_{i}=\{e_{ij}\}_{j}^{N}\) quantifying its relevance to all nodes \(v_{j}\in V\), including itself. To generate differentiable edge samples \(\mathbf{e}_{i}\), we use the Gumbel-Softmax [12]. Before locally scoring each edge embedding \(e_{ij}\in\mathbf{e}_{i}\) for node \(v_{i}\), we implement a global stage which constructs edge embeddings with both local and global dependencies: 1. 
Using latent node features \(\hat{\mathbf{x}}_{i}\in\hat{\mathbf{X}}\), determine local edge embeddings \(\hat{\mathbf{c}}_{ij}\in\mathbb{R}^{d^{\prime}}\) by passing each pair of node features through an MLP \(l_{\phi}\): \(\hat{\mathbf{c}}_{ij}=l_{\phi}(\hat{\mathbf{x}}_{i},\hat{\mathbf{x}}_{j})\). These embeddings now form a complete graph \(\mathcal{G}\) over the nodes, with each edge attributed \(\hat{\mathbf{c}}_{ij}\). 2. As each edge embedding \(\hat{\mathbf{c}}_{ij}\in\mathbf{C}\) is calculated independently of the others, we refine it to account for its dependencies on adjacent edges. We do this through edge-to-edge message passing. However, we avoid computing dependencies between all edges of the complete graph for two reasons: first, some edges may not have any common nodes, so passing messages between them could propagate irrelevant information, and secondly, it could be prohibitively expensive. To restrict message-passing between adjacent edges only, we first compute the adjoint graph \(\mathcal{H}\) of the complete graph \(\mathcal{G}\). In the adjoint \(\mathcal{H}\), each edge is associated with a node, and two nodes are connected if and only if their corresponding edges in \(\mathcal{G}\) have a node in common. The adjoint's adjacency \(A^{\mathcal{H}}\) can be calculated using its incidence matrix \(L\), \(A^{\mathcal{H}}=L^{T}L-2I\). In the adjoint, each node embedding \(\hat{\mathbf{c}}_{i}\) is then updated by aggregating its neighboring node embeddings \(\hat{\mathbf{c}}_{j}\) and passing the result through an MLP \(h_{\phi}\): \[\hat{\mathbf{c}}_{i}^{\prime}=\sum_{j\in\mathcal{N}(i)}h_{\phi}(\hat{\mathbf{c}}_{i}\parallel\hat{\mathbf{c}}_{i}-\hat{\mathbf{c}}_{j})\] (5) Having computed edge embeddings \(\mathbf{C}\in\mathbb{R}^{N\times N\times d^{\prime}}\) with global dependencies, we rank these edges for each node. Without loss of generality, we focus on a single node \(v_{i}\in V\), with latent features \(\hat{\mathbf{x}}_{i}\in\mathbb{R}^{d^{\prime}}\). 
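Before turning to the per-node ranking, the adjoint construction in step 2 can be sanity-checked with a short sketch. Plain-Python lists stand in for tensors here, and the helper names are ours:

```python
def incidence_matrix(num_nodes, edges):
    # L[v][e] = 1 if node v is an endpoint of (undirected) edge e.
    L = [[0] * len(edges) for _ in range(num_nodes)]
    for e, (u, v) in enumerate(edges):
        L[u][e] = 1
        L[v][e] = 1
    return L

def adjoint_adjacency(num_nodes, edges):
    # A^H = L^T L - 2I: edges of G become nodes of the adjoint H,
    # connected iff the corresponding edges share an endpoint in G.
    # The diagonal of L^T L is 2 (every edge has two endpoints), so
    # subtracting 2I removes self-loops.
    L = incidence_matrix(num_nodes, edges)
    E = len(edges)
    AH = [[0] * E for _ in range(E)]
    for i in range(E):
        for j in range(E):
            dot = sum(L[v][i] * L[v][j] for v in range(num_nodes))
            AH[i][j] = dot - (2 if i == j else 0)
    return AH
```

On a path graph 0-1-2-3, for example, the adjoint connects the first two edges (they share node 1) but not the first and last (no common endpoint).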
We implement the approximation function \(Q_{\theta}(A|X,k)\) of the Edge-ranker as follows: 1. Using edge embeddings \(\hat{\mathbf{c}}_{ij}\in\mathbb{R}^{d^{\prime}}\), calculate edge probabilities \(\mathbf{p}_{i}\in\mathbb{R}^{N}\) for node \(v_{i}\) using an MLP \(m_{\theta}\): \[\mathbf{p}_{i}=\{m_{\theta}(\hat{\mathbf{c}}_{ij})|\forall j\in N\}.\] (6) Each element \(p_{ij}\in\mathbf{p}_{i}\) represents a similarity measure between the latent features of node \(v_{i}\) and \(v_{j}\). In practice, any distance measure can be used here. 2. Using Gumbel-Softmax over the edge probabilities \(\mathbf{p}_{i}\in\mathbb{R}^{N}\), we generate differentiable samples \(\mathbf{e}_{i}\in\mathbb{R}^{N}\) with Gumbel noise \(g\): \[\mathbf{e}_{i}=\left\{\frac{\exp((\log(p_{ij})+g_{ij})/\tau)}{\sum_{j}\exp((\log(p_{ij})+g_{ij})/\tau)}\Big{|}\forall j\in N\right\},\] (7) \[g_{ij}\sim\mathrm{Gumbel}(0,1)\] where \(\tau\) is a temperature hyperparameter controlling the interpolation between a discrete one-hot categorical distribution and a continuous categorical density. When \(\tau\to 0\), the edge energies \(e_{ij}\in\mathbf{e}_{i}\) approach a degenerate distribution. A low temperature \(\tau\) is important for inducing sparsity, but given the exponential function, it results in a single element in \(\mathbf{e}_{i}\) being given much more weighting than the rest, i.e., it approaches a one-hot argmax over \(\mathbf{e}_{i}\). As we want a variable number of edges to be given higher importance and others to be close to zero, we select a higher temperature and use the top-\(k\) selection procedure (detailed below) to induce sparsity. This additionally avoids the high-variance gradients induced by lower temperatures. **Degree estimation** A key limitation of existing graph generation methods [13, 15, 43] is their use of a fixed node degree \(k\) across the entire graph. This can be suboptimal as mentioned previously. 
In our approach, rather than fixing \(k\) for the entire graph, we sample it per node from a learned distribution. Focusing on a single node as before, the approximation function \(Q_{\rho}(k|X)\) of the Degree-estimator works as follows: 1. We approximate the distribution of latent node features \(\hat{\mathbf{x}}_{i}\in\mathbb{R}^{d^{\prime}}\) following a VAE-like formulation [14]. We encode its mean \(\mathbf{\mu}_{i}\in\mathbb{R}^{d}\) and variance \(\mathbf{\sigma}_{i}\in\mathbb{R}^{d}\) using two MLPs \(M_{\rho}\) and \(S_{\rho}\), and then reparametrize with noise \(\epsilon\) to obtain latent variable \(\mathbf{z}_{i}\in\mathbb{R}^{d}\): \[\begin{split}\mathbf{\mu}_{i},\mathbf{\sigma}_{i}&=M_{\rho} (\hat{\mathbf{x}}_{i}),S_{\rho}(\hat{\mathbf{x}}_{i}),\\ \mathbf{z}_{i}&=\mathbf{\mu}_{i}+\mathbf{\epsilon}_{i}\mathbf{ \sigma}_{i},\epsilon_{i}\sim\mathcal{N}(0,1).\end{split}\] (8) 2. Finally, we decode each latent variable \(\mathbf{z}_{i}\in\mathbb{R}^{d}\) into a scalar with another MLP \(D_{\rho}\) and add the L1-norm of the edge samples \(\mathbf{h}_{i}=||\mathbf{e}_{i}||_{1}\), yielding \(k_{i}\in\mathbb{R}\), a continuous relaxation of the neighborhood size for node \(v_{i}\): \[k_{i}=D_{\rho}(\mathbf{z}_{i})+\mathbf{h}_{i}.\] (9) Since \(\mathbf{h}_{i}\) is a summation of a node's edge probabilities, it can be understood as representing an initial estimate of the node degree which is then improved by combining with a second node representation \(\mathbf{z}_{i}\) based entirely on the node's features. Using the edge samples to estimate the node degree links these representation spaces back to the primary latent space of node features \(\hat{\mathbf{X}}\). **Top-\(k\) Edge-Selector** Having sampled edge weights and node degrees \(k\), this function selects the top-\(k\) edges for each node. The top-\(k\) operation, i.e. 
finding the indices corresponding to the \(k\) largest elements in a set of values, is a piecewise constant function and cannot be directly used in gradient-based optimization. Previous work [40] framed the top-\(k\) operation as an optimal transport problem, providing a smoothed top-\(k\) approximator. However, as their function is only defined for discrete values of \(k\) it cannot be optimized with gradient descent. As an alternative that is differentiable with respect to \(k\), we relax the discrete constraint on \(k\), and instead use it to control the \(x\)-axis value of the inflection point on a smoothed-Heaviside function (Fig. 3). For a node \(v_{i}\in V\), of smoothed degree \(k_{i}\in\mathbb{R}\) and edges \(\mathbf{e}_{i}\in\mathbb{R}^{N}\), our Top-\(k\) Edge Selector outputs an adjacency vector \(\mathbf{a}_{i}\in\mathbb{R}^{N}\) where the \(k\) largest elements from \(\mathbf{e}_{i}\) are close to \(1\), and the rest close to \(0\). Focusing on a single node \(v_{i}\) as before, the implementation is as follows: 1. Draw 1D input points \(\mathbf{d}_{i}=\{1,...,N\}\) where \(N\) is the number of nodes in \(V\). 2. Pass \(\mathbf{d}_{i}\) through a hyperbolic tangent (tanh) which serves as a smooth approximation of the Heaviside function: \[\mathbf{h}_{i}=1-0.5*\left\{1+\tanh(\lambda^{-1}d_{i}-\lambda^{-1}k_{i}) \right\},\] (10) here \(\lambda>0\) is a temperature parameter controlling the gradient of the function's inflection point. As \(\lambda\to 0\), the smooth function approaches the Heaviside step function. The first-\(k\) values in \(\mathbf{h}_{i}=\{h_{ij}\}_{j}^{N}\) will now be closer to 1, while the rest closer to 0. 3. Finally, for each node \(i\) we sort its edge-energies \(\mathbf{e}_{i}=\{e_{ij}\}_{j}^{N}\) in descending order, multiply by \(\mathbf{h}_{i}=\{h_{ij}\}_{j}^{N}\) and then restore the original order to obtain the final adjacency vector \(\mathbf{a}_{i}=\{a_{ij}\}_{j}^{N}\). 
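The three steps above can be sketched in a few lines of plain Python. The helper names are ours, and a real implementation would use tensor ops so gradients flow through the continuous degree \(k_{i}\):

```python
import math

def smooth_topk_mask(N, k, lam=0.1):
    # Smoothed Heaviside gate over rank positions d = 1..N:
    # weight near 1 when d < k, near 0 when d > k; lam sets the
    # sharpness of the inflection at d = k.
    return [1.0 - 0.5 * (1.0 + math.tanh((d - k) / lam))
            for d in range(1, N + 1)]

def select_topk(edge_scores, k, lam=0.1):
    # Sort one node's edge scores descending, gate them with the
    # smooth mask, then restore the original ordering to obtain
    # the adjacency vector a_i.
    order = sorted(range(len(edge_scores)), key=lambda j: -edge_scores[j])
    mask = smooth_topk_mask(len(edge_scores), k, lam)
    a = [0.0] * len(edge_scores)
    for rank, j in enumerate(order):
        a[j] = edge_scores[j] * mask[rank]
    return a
```

With a sharp temperature (small `lam`) and, say, \(k_{i}=2.5\), the two highest-scoring edges survive nearly unchanged while the rest are driven toward zero, yet the gate remains differentiable in \(k_{i}\).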
Stacking \(\mathbf{a}_{i}\) over all nodes \(v_{i}\in V\) creates the final adjacency matrix \(\mathbf{A}\in\mathbb{R}^{N\times N}\). **Symmetric adjacency matrix** If the adjacency matrix \(A\) must be symmetric, this can be enforced by replacing it with \(A_{sym}\) where: \(\mathbf{A}_{sym}=(\mathbf{A}+\mathbf{A}^{T})/2\). **Straight through Top-\(k\) Edge Selector** To make our final adjacency matrix \(\mathbf{A}\in\mathbb{R}^{N\times N}\) discrete, we follow the trick used in the Straight-Through Gumbel Softmax [12]: we output the discretized version of \(\mathbf{A}\) in the forward pass and the continuous version in the backwards pass. For the discretized version in the forward pass, we replace the smooth-Heaviside function in Eq. 10 with a step function. **Module Instantiation:** The DGG module can be easily combined with any graph convolution operation. A typical graph convolution [16] is defined as follows: \(\mathbf{X}^{\prime}=\hat{\mathbf{D}}^{-1/2}\hat{\mathbf{A}}\hat{\mathbf{D}}^{ -1/2}X\mathbf{\Theta}\). Here, \(\hat{\mathbf{A}}=\mathbf{A}+\mathbf{I}\) denotes the adjacency matrix with inserted self-loops, \(\hat{\mathbf{D}}\) its diagonal degree matrix and \(\mathbf{\Theta}\) its weights. To use this graph convolution with the DGG, we simply use our module to generate the adjacency matrix \(\hat{\mathbf{A}}\). ## 4 Experiments We evaluate our DGG on node classification, point cloud classification and trajectory prediction. We chose these tasks as they demonstrate the wide applicability of our module: (1) graphs for node classification require models that can generate edge structures from noisy input graphs, (2) point cloud classification tasks have no input graph structures and (3) trajectory prediction additionally requires models which can handle a variable number of nodes per batch. We compare against state-of-the-art structure learning methods in each domain. 
As far as we know, our structure-learning approach is the only one that can be easily applied without modification to any GCN pipeline in such a range of tasks. ### Node classification Beginning with node classification, we conduct ablations examining the behavior of different parts of the DGG, followed by comparisons to other state-of-the-art structure learning approaches. In the supplementary we include experiments investigating the effect of the DGG on downstream models under the addition of noisy edges to input graphs. We perform these experiments under both transductive and inductive scenarios, as well as semi-supervised and fully-supervised settings. **Datasets** In the transductive setting, we evaluate on three citation benchmark datasets Cora, Citeseer and Pubmed [26] introduced by [41]. In an inductive setting, we evaluate on Reddit [42] and PPI [11]. Further dataset details can be found in the supplementary. **Baselines and Implementation** As our DGG is a GCN-agnostic module that can be integrated alongside any graph convolution operation, we compare its performance to both other GCN-agnostic approaches and bespoke structure-learning architectures. To compare against other GCN-agnostic methods, we integrate our DGG into four representative GCN backbones: GCN [16], GraphSage [11], GAT [35] and GCNII [2]. On these backbones, we compare against other GCN-agnostic structure learning methods: DropEdge [29], NeuralSparse [43], PTDNet [21]. Then we compare against bespoke architectures IDGL [3], LDS [9], SLAPS [8], NodeFormer [37] and VGCN [7]. To make our comparison fair against these bespoke architectures which learn the structure specifically for node classification, we integrate our DGG into a GCN backbone that is comparable to the bespoke architecture in design. Please see the supplementary for implementation details. **Training details** A node classification model partitions the latent space of node embeddings into separate classes. 
However, when message-passing, there is one phenomenon of the input graph that can limit classification accuracy: two nodes with different classes but similar features and an edge connecting them. Classifying these nodes is challenging as their feature similarity can be compounded by passing messages between them. The goal of the DGG is to move such nodes apart in the latent space such that there is no edge and communication between them. However, traversing the loss landscape from the initial random initialization of the network to one where the model is able to discriminate between these nodes can take several iterations using only the downstream classification loss. To speed up training, we add an intermediate loss to further partition the latent space. We do this by supervising the adjacency matrix generated by the DGG to remove all edges between classes and only maintain those within a class. We then anneal this loss over the training cycle, eventually leaving only the downstream classification loss. We provide more details in the supplementary. #### 4.1.1 Ablations In Table 1, we explore the effect of disabling different components of our DGG module when integrated into a GCN [16] for node classification: 1. _DGG without Degree Estimation and Differentiable Top-\(k\) Edge Selection_ -- we remove the Degree Estimator and instead fix \(k\) to select the top-\(k\) stochastically ordered edges. 2. _DGG with deterministic Edge Ranking_ -- we remove the noise in Eq. 7 of the Edge Ranker. 3. _DGG with deterministic Degree Estimation_ -- we remove the noise in Eq. 8 of the Degree Estimator. We perform these on Cora [41] and omit the annealed intermediate loss during training. Table 1 shows the benefit of learning a distribution over the node degree. When learning it deterministically, the accuracy decreases by 0.5%. This becomes significantly worse when the node degree is fixed for the entire graph rather than learned per node. 
Note also the sensitivity with respect to the choice of \(k\). A fixed node degree of \(k=10\) or \(k=1\) reduces accuracy by almost 30% compared to \(k=5\). This is due to the graph convolution operation: as it has no adaptive weighting mechanism for a node's neighborhood, each of the neighbors is given the same weight. Naturally, this leads to information sharing between unrelated nodes, reducing the quality of node representation after message-passing. In contrast, by learning a distribution over the node degree we are able to select only the most relevant neighbors, even though these are then weighted equally in the graph convolution. Finally, the inclusion of noise in any of the DGG components does increase accuracy, but only by approximately 0.5% -- demonstrating both its benefit and the robustness of the DGG without it. #### 4.1.2 Results **Comparison to GCN-agnostic modules** In Table 2 we compare against GCN-agnostic structure learning methods. For fair comparison, we present two versions of our method: DGG-wl trained with the downstream loss only and DGG* trained with both the downstream and intermediate loss. DGG improves performance across all baselines and datasets. Against other approaches, DGG-wl generally outperforms the state-of-the-art NeuralSparse and PTDNet-wl (both trained with only the downstream loss). This can be attributed to our method for modelling sparsity, which explicitly lets each node select the size of its neighborhood based on the downstream training signal. 
This training signal helps partition the node representation space, while the estimated node-degree additionally prevents communication \begin{table} \begin{tabular}{c c} \hline Model & Accuracy \\ \hline Fixed node degree, k = \(\{1,5,10,100\}\) & \(\{49.7,78.9,55.0,37.0\}\) \\ With deterministic Edge Ranking and Degree Estimation & 82.4 \\ With deterministic Edge Ranking & 82.7 \\ With deterministic Degree Estimation & 82.8 \\ DGG & **83.2** \\ \hline \end{tabular} \end{table} Table 1: Ablation study. DGG integrated into a GCN for node classification on Cora [41]. Figure 3: The differentiable Top-\(k\) Edge Selector. This component uses the node degree \(k_{i}\) output by the Degree Estimator to control the inflection point on a smooth-Heaviside function and uses it to select the top edges from \(\mathbf{e}_{i}\) output by the Edge Ranker. This produces an adjacency vector \(\mathbf{a}_{i}\) for each node, and stacking \(\mathbf{a}_{i}\) across all nodes produces the final adjacency matrix \(\mathbf{A}\). between distant nodes. Although PTDNet-wl does this implicitly through its attention mechanism, discovering this sparse subgraph of the input graph is challenging given its complexity. NeuralSparse on the other hand selects \(k\) for its entire generated subgraph, which is both suboptimal and requires additional hyperparameter tuning. Comparing methods which enforce additional constraints on the adjacency matrix, DGG* demonstrates larger accuracy gains than PTDNet*. PTDNet* regularizes its adjacency matrix to be of low-rank, as previous work [30] has shown that the rank of an adjacency matrix can reflect the number of clusters. This regularizer reasons about the graph's topology globally. While this may aid generalization, the accuracy difference may then be attributed to our intermediate loss providing stronger signals to discriminate between nodes with similar features but different classes (and therefore remove the edges between them). 
Furthermore, their regularizer uses the sum of the top-\(k\) singular values during training, where \(k\) again is a hyperparameter tuned to each dataset individually. Our method requires no additional parameters to be chosen. Finally in Table 3 we compare the low-rank constraint of PTDNet with our intermediate annealed loss. Our intermediate loss ('DGG-wl + int. loss') outperforms the low-rank constraint ('DGG-wl + low rank'). However, using both constraints ('DGG-wl + int. loss + low rank') increases classification accuracy further, suggesting the edges removed by both methods are complementary. **Comparison with bespoke architectures** In Table 4 we compare against bespoke architectures specifically designed for node classification. As each of these methods uses different experiment settings, we train our DGG-integrated architecture separately for each. See the supplementary for details on each setting and reasons for our choice of backbone. Our performance gains here can generally be attributed to two factors: (1) our intermediate loss on the adjacency matrix and (2) our adjacency matrix factorizations where we learn the neighborhood for each node. Our intermediate loss particularly benefits from the experimental settings adopted by the other methods as they use larger training splits involving half the validation graph. Additionally, constructing the adjacency matrix by learning nodewise neighborhoods restricts the graph search space, making optimization easier. However, we note that some of these other methods are designed for node-classification on graphs which are orders of magnitude larger than Cora and Citeseer. In such cases, factorizing the adjacency per node, as we do, may be infeasible. ### Trajectory Prediction We consider four datasets covering a range of scenarios from basketball to crowded urban environments. On each, we integrate our DGG into a SOTA GCN trajectory prediction pipeline and compare results to another task-agnostic structure learning approach, DGM [13]. 
**Datasets** We evaluate on four trajectory prediction benchmarks:

1. ETH [27] and UCY [18] -- 5 subsets of widely used real-world pedestrian trajectories.
2. STATS SportVU [32] -- multiple NBA seasons tracking the trajectories of basketball players over a game.
3. Stanford Drone Dataset (SDD) [28] -- top-down scenes across multiple areas at Stanford University.

Further details on these datasets can be found in the supplementary. **Baselines and Implementation** We integrate our DGG module into two state-of-the-art trajectory prediction pipelines: Social-STGCNN [22] and DAGNet [23]. Our DGG is placed within both networks to generate the adjacency matrix on the fly and forms part of their forward and backward passes. Please see the supplementary for implementation details. **Evaluation metrics.** Model performance is measured with Average Displacement Error (ADE) and Final Displacement Error (FDE). ADE measures the average Euclidean distance along the entire predicted trajectory, while the FDE is that of the last timestep only. **Results** In Table 5, the integration of our DGG into Social-STGCNN reduces ADE/FDE compared to both the baseline and the integration of DGM. In Tables 5 and 6 we demonstrate similar gains over DGM when integrated into DAGNet. First, this shows the benefit of inducing sparsity when message-passing over a distance-weighted adjacency matrix like Social-STGCNN or even an attention mechanism like DAGNet. The larger error reduction of our DGG compared to DGM may be attributed to DGM's use of a fixed node-degree \(k\) across its learned graph. While this can prevent the propagation of irrelevant information across the graph in some cases, in others it might limit the context available to certain nodes. We provide qualitative analysis in the supplementary. ### Point Cloud Classification We evaluate on another vision task of point cloud classification for models which use GCNs.
This task differs from the previous two as predictions are made for the entire graph as opposed to node-wise. As with our trajectory prediction experiments, we integrate our DGG into SOTA classification architectures and compare against the other task-agnostic graph-learning module DGM [13]. **Datasets** We evaluate on ModelNet40 [39], consisting of CAD models for a variety of object categories. **Baselines and Implementation** We integrate our DGG into a SOTA ResGCN [19] and DGCNN [36]. Both models use a \(k\)-NN sampling scheme to construct their graphs. We simply replace this sampler with our DGG and keep the rest of the network and training protocol the same. **Results** Our results in Table 7 demonstrate the benefits of learning an adaptive neighborhood size across the latent graph. DGM [13] learns a fully-connected latent graph and then imposes a fixed node degree of \(k=20\) across it (i.e. selecting the top 20 neighbors for each node). This marginally improves upon the baselines ResGCN [19] and DGCNN [36], which both also use fixed node-degrees \(k\). In contrast, we learn a distribution over the node degree from which we sample each node's neighborhood size. As shown in Table 7, the node degree varies in our models with a standard deviation of around 5-7 across both baselines. Our accuracy gains over the baseline and DGM can be attributed to this variance in neighborhood sizes across the graph. These gains can be understood when viewing an input point cloud as a composition of object parts. Building semantic representations for different parts may naturally require varying amounts of contextual points. For instance, the wheels of a car might be identifiable with a smaller neighborhood than the car's body. This may suggest why an adaptive neighborhood size is helpful in this case.
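The contrast with fixed-\(k\) samplers can be made concrete with a toy sketch (not the authors' implementation) in which each node's neighborhood size is drawn from a learned per-node distribution, here assumed Gaussian, and clamped to a valid range:

```python
import numpy as np

def sample_node_degrees(mean, std, k_min=1, k_max=40, seed=None):
    """Toy illustration: draw each node's neighborhood size from a
    learned distribution (here a Gaussian with per-node mean), instead
    of imposing one fixed degree k across the whole graph as in k-NN
    graph construction. All names and ranges are illustrative."""
    rng = np.random.default_rng(seed)
    k = rng.normal(loc=mean, scale=std)
    return np.clip(np.rint(k), k_min, k_max).astype(int)
```

A graph built this way has a spread of node degrees (a nonzero standard deviation), which is the property Table 7 reports for the DGG-integrated models, as opposed to the zero-variance degree of the \(k\)-NN baselines.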
## 5 Conclusion We have presented a novel approach for learning graph topologies, and shown how it obtains state-of-the-art performance across multiple baselines and datasets for trajectory prediction, point cloud classification and node classification. The principal advantage of our approach is that it can be combined with any existing graph convolution layer, under the presence of noisy, incomplete or unavailable edge structures. ## Acknowledgements This project was supported by the EPSRC project ROSSINI (EP/S016317/1) and studentship 2327211 (EP/T517616/1).

\begin{table} \begin{tabular}{l l c c c c c c} \hline \hline & & \multicolumn{2}{c}{**Original**} & \multicolumn{2}{c}{**DGM [13] Gain (\%)**} & \multicolumn{2}{c}{**DGG Gain (\%)**} \\ **Split** & **Team** & ADE & FDE & ADE & FDE & ADE & FDE \\ \hline & ATK & -- & 4.29 & -0.4 & -0.2 & **6.7** & **5.1** \\ & DEF & 2.09 & 2.97 & -0.5 & -0.1 & **9.7** & **6.4** \\ \hline & ATK & 2.03 & 3.98 & 0.1 & 0.1 & **7.2** & **8.2** \\ & DEF & 1.53 & 3.07 & 0.2 & 0.3 & **21.4** & **19.1** \\ \hline 40-10 & ATK & 0.81 & 1.71 & 1.3 & 0.9 & **15.5** & **17.0** \\ & DEF & 0.72 & 1.49 & 0.8 & 0.8 & **10.9** & **16.2** \\ \hline Mean & -- & 1.65 & 2.92 & 0.3 & 0.3 & **11.9** & **12.0** \\ \hline \hline \end{tabular} \end{table} Table 6: ADE/FDE metrics on the SportVU Basketball dataset using DAGNet. For DGM [13], \(k=3\).

\begin{table} \begin{tabular}{l|l|c c|c} \hline \hline **Baseline** & **Method** & **Mean degree** & **S.D. degree** & **Accuracy** \\ \hline ResGCN [19] & Original & 9 & 0 & 93.3 \\ & DGM [13] & 20 & 0 & 93.5 \\ & DGG & 14.8 & 7.4 & **94.4** \\ \hline DGCNN [36] & Original & 40 & 0 & 92.9 \\ & DGM [13] & 20 & 0 & 93.3 \\ & DGG & 19.3 & 5.2 & **93.8** \\ \hline \hline \end{tabular} \end{table} Table 7: Point Cloud classification on ModelNet40 with our module and DGM [13] integrated into two different point cloud labelling architectures.
2306.08734
WavPool: A New Block for Deep Neural Networks
Modern deep neural networks comprise many operational layers, such as dense or convolutional layers, which are often collected into blocks. In this work, we introduce a new, wavelet-transform-based network architecture that we call the multi-resolution perceptron: by adding a pooling layer, we create a new network block, the WavPool. The first step of the multi-resolution perceptron is transforming the data into its multi-resolution decomposition form by convolving the input data with filters of fixed coefficients but increasing size. Following image processing techniques, we are able to make scale and spatial information simultaneously accessible to the network without increasing the size of the data vector. WavPool outperforms a similar multilayer perceptron while using fewer parameters, and outperforms a comparable convolutional neural network by ~ 10% on relative accuracy on CIFAR-10.
Samuel D. McDermott, M. Voetberg, Brian Nord
2023-06-14T20:35:01Z
http://arxiv.org/abs/2306.08734v1
# WavPool: A New Block for Deep Neural Networks ###### Abstract Modern deep neural networks comprise many operational layers, such as dense or convolutional layers, which are often collected into blocks. In this work, we introduce a new, wavelet-transform-based network architecture that we call the _multi-resolution perceptron_: by adding a pooling layer, we create a new network block, the _WavPool_. The first step of the multi-resolution perceptron is transforming the data into its multi-resolution decomposition form by convolving the input data with filters of fixed coefficients but increasing size. Following image processing techniques, we are able to make scale and spatial information simultaneously accessible to the network without increasing the size of the data vector. WavPool outperforms a similar multilayer perceptron while using fewer parameters, and outperforms a comparable convolutional neural network by \(\sim 10\%\) on relative accuracy on CIFAR-10. 2 Footnote 2: FERMILAB-CONF-23-278-CSAID ## 1 Introduction Discrete signals have the potential to represent an amount of information that scales exponentially with the number of bits. Despite this fact, modern computational methods have demonstrated an ability to abstract patterns from data with far less than an exponential degree of complexity. Sometimes, the only independent variable underlying these patterns is their location - e.g., position in a sentence provides nearly all of the information necessary for natural language processing. In other signals, patterns are present that are not apparent as a function of the given variables but are presented in the variables of a conjugate space. Frequency and time are conjugate to one another, which is why Fourier analysis is useful in analyzing sounds. Spatial scale and position are conjugate to one another, which is why spherical harmonic analysis is useful in analyzing the cosmic microwave background (Peebles and Yu, 1970; Bond and Efstathiou, 1984). 
Most data, however, are encoded in signals that are not organized in any single variable, instead exhibiting regularities of both position and scale that are not simply accessible from spatial or scale data alone. This suggests a need for a type of signal processing that remains simultaneously sensitive to conjugate pairs of variables. Many methods have been used to extract information and discover patterns in challengingly complex data sets. For example, matched filters have a long and successful history in the field of signal processing Turin (1960). Recently, deep learning has upended long-held notions of what is possible in pattern recognition, with significant implications for science and society. For example, the multi-layer perceptron (MLP) Rosenblatt (1958) gave the first hint of how stacking many layers of matrix arithmetic between layers of nonlinearity could produce adaptable and flexible learning systems Cybenko (1989); Hornik et al. (1989). Architectures that further generalized these techniques, like convolutional neural networks (CNNs) LeCun et al. (1998) and the transformer Vaswani et al. (2017), have proven transformative in extending deep learning techniques to two-dimensional images and text, respectively. Nevertheless, traditional architectures require significant complexity (e.g., multiple stacked convolutional layers) to identify data features. Wavelets are mathematical tools that can be used to provide simultaneous sensitivity to conjugate data - e.g., position and size or time and frequency. Therefore, they have the potential to provide unique insights on data that are not purely characterized by spatial or scale information. Wavelet families comprise functions of finite spatial extent that are related to one another through scaling and translational transformations. If a family provides a complete, orthonormal basis of functions, then convolving the family with a set of data will provide a decomposition of the data. 
Wavelets were first discovered in the early 20th century, but many of their more remarkable properties were not formalized, generalized, or fully explored until the 1980s and 1990s (Daubechies, 1992; Mallat, 1989; Meyer, 1993; Daubechies, 1996). Methods based on the wavelet transform have since played a foundational role in many fields. For example, they undergird modern computational methods like the JPEG image compression algorithm (Skodras et al., 2001). The wavelet transform and related functions like the Laplacian pyramid technique have been used in combination with deep learning techniques to achieve state-of-the-art performance on various image-related tasks - image reconstruction with autoencoders (Chen et al., 2017) or more generally convolutional layers (Lai et al., 2017; Liu et al., 2018), image classification (Fujieda et al., 2018), representation learning (Yao et al., 2022), and learnable wavelets for denoising with reinforcement learning Michau et al. (2022). In this work, we combine wavelets that have compact support (Mallat, 1989) with deep learning operations in a new method. We use Daubechies wavelets (Daubechies, 1992) to decompose data into components on different scales without increasing the size of the data vector. The wavelet decomposition facilitates training on features of different size and scale in parallel across all length scales spanned by the input. This is not possible for networks comprised of filters of fixed size, as in a CNN. By construction, this will contain the same information content as the original, non-decomposed data without increasing the size of the data vector. However, spatial and scale information about the data will be made simultaneously accessible to the neural network. We train a series of MLPs in parallel on the decomposed images. We refer to these multiple, parallel MLPs as a multi-resolution perceptron (MRP). We use a pooling layer to extract a classifier from the MRP. 
This trainable block is called "WavPool." ## 2 Theory of Multi-Resolution Decomposition In this section, we provide a review of some basic results in wavelet methods. The reader who is familiar with this formalism can skip to Sec. 3. Consider an input signal \(S\) defined on a D-dimensional grid. In this work, we will use a wavelet transform to partition this data into its multi-resolution decomposition (MRD). The MRD of \(S\) is a set of transformed signals, \[\mathcal{M}_{S}=\{C(S),W^{a}_{L}(S),\ldots,W^{a}_{1}(S)\}, \tag{1}\] where \(a\) is an index that depends on the dimensionality of \(S\); \(L\) is the number of levels of the decomposition, which depends on the wavelet that is used for the decomposition; \(W(S)\) are the _details_ of \(S\), which are the features of \(S\) at a particular spatial scale; and \(C(S)\) is the _smoothest view_ of \(S\), which is its global average for the Haar wavelet, but is a non-trivial matrix for other wavelets. The size of a signal \([S]\) is equal to the sum of the sizes of all of the constituent signals \(\zeta\) that comprise its MRD: \([S]=\sum_{\zeta\in\mathcal{M}_{S}}[\zeta]\). For a D-dimensional signal, the index \(a\) in Eq. (1) takes on \(2^{\mathrm{D}}-1\) different values. For example, for a 2-dimensional signal, \(a\) takes on three values, corresponding to the horizontal, vertical, and diagonal details at each level, as described in more detail presently. The number of levels \(L\) is set by the size of the signal and the _number of vanishing moments_\(n_{v}\) of the wavelet used in the decomposition: \[L=\lfloor\log_{2}[S]\rfloor-n_{v}+1. \tag{2}\] The wavelets used in this work, first described by Daubechies [1], are indexed by the number of vanishing moments they have, meaning the degree of the polynomial function that they can set to zero: the Daubechies-\(n_{v}\) wavelet will set a polynomial of degree \(n_{v}-1\) to zero. We now describe the components of the MRD. 
This requires two functions: a _smoothing wavelet_ \(\phi\) and a _differencing wavelet_ \(\psi\). The differencing wavelet is given by reversing the smoothing wavelet while simultaneously alternating the parity. If \(\phi\) has \(N\) entries indexed by \(i\), then the \(i^{\rm th}\) entry of \(\psi\) is \(\psi_{i}=(-1)^{i}\phi_{N-i}\). In Eq. (1), \(C(S)\) is the smoothest view of \(S\): this is also the mean of \(S\), and it is obtained by \[C(S)=\underbrace{\phi\circ(\phi\circ(\cdots\circ S))}_{L\ {\rm times}}, \tag{3}\] where \(\circ\) is the convolution operation and \(L\) is the maximum number of times \(\phi\) can be applied, which is the same as the number of levels (see Eq. (2)). Each of the products in the convolution reduces the size of the input, so that each level of the MRD is a different size. The convolution procedure is applied iteratively to the smoothed images until a single number \(C(S)\) remains. One can generalize the smoothest view \(C(S)\) from Eq. (3) to signals that have been partially smoothed to level \(\ell\): \[C_{\ell}(S)=\underbrace{\phi\circ(\phi\circ(\cdots\circ S))}_{\ell\ {\rm times}}, \tag{4}\] where \(\ell\) is the index of the spatial scale - the \(C_{\ell}(S)\) for larger values of \(\ell\) have been increasingly smoothed. The \(W^{a}_{\ell}(S)\) from Eq. (1) are the _details_ of \(S\). They are given by applying a differencing wavelet \(\psi^{a}\) on the partially smoothed images: \[W^{a}_{\ell+1}(S)=\psi^{a}\circ C_{\ell}(S)=\psi^{a}\circ(\underbrace{\phi\circ(\phi\circ(\cdots\circ S))}_{\ell\ {\rm times}}). \tag{5}\] \(W^{a}_{\ell}(S)\) represents features of \(S\) that exist at different spatial scales and exist for different spatial symmetries (as indexed by \(a\)). For larger values of \(\ell\), \(W^{a}_{\ell}(S)\) represents features of \(S\) that have larger spatial extent.
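Eqs. (3)-(5) are easiest to see in one dimension with the Haar wavelet. A small NumPy sketch that iterates the stride-2 smoothing and differencing convolutions:

```python
import numpy as np

PHI = np.array([1.0, 1.0]) / np.sqrt(2)   # smoothing wavelet phi
PSI = np.array([1.0, -1.0]) / np.sqrt(2)  # differencing wavelet psi

def haar_level_1d(s):
    """One level: convolve s with phi and psi at stride two (Eqs. (4)-(5))."""
    pairs = s.reshape(-1, 2)
    return pairs @ PHI, pairs @ PSI  # smoothed view, details

def mrd_1d(s):
    """Iterate the smoothing until a single number remains (Eq. (3)),
    collecting the details at each level along the way (Eq. (1))."""
    details = []
    while s.size > 1:
        s, w = haar_level_1d(s)
        details.append(w)
    return s.item(), details  # smoothest view C(S) and [W_1, ..., W_L]
```

For a length-\(2^L\) input, the detail vectors have lengths \(2^{L-1},\ldots,1\), and together with the final scalar they carry exactly \(2^L\) numbers; since the transform is orthonormal, the total energy of the input is also preserved.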
The essential feature of the MRD is that different levels are of different sizes and contain distinct information about different spatial scales. This feature enables partitioning of the data to provide access to different information than is provided by the original data. In this work, we will exclusively use the Daubechies-1 wavelet (\(n_{v}=1\)), also known as the Haar wavelet. For the specific case of the one-dimensional Haar wavelet, the smoothing and differencing wavelets are \[\phi_{1,1}=\frac{1}{\sqrt{2}}\left(\begin{array}{cc}1&1\end{array}\right), \qquad\psi_{1,1}=\frac{1}{\sqrt{2}}\left(\begin{array}{cc}1&-1\end{array} \right), \tag{6}\] which are convolved over the input signal \(S\) with stride two. Therefore, the \(W_{\ell}(S)\) details are half as long as the \(C_{\ell-1}(S)\) smooth images from which they are obtained. For two-dimensional inputs, the Haar wavelets are \[\begin{array}{ll}\phi_{1,2}=\phi_{1,1}\otimes\phi_{1,1},&\psi^{v}_{1,2}=\psi _{1,1}\otimes\phi_{1,1},\\ \psi^{h}_{1,2}=\phi_{1,1}\otimes\psi_{1,1},&\psi^{d}_{1,2}=\psi_{1,1}\otimes \psi_{1,1},\end{array} \tag{7}\] where \(\otimes\) is the outer product and \(a=v,h,d\) designate the vertical, horizontal, and diagonal differences that are possible in two dimensions. For an MRD of a two-dimensional image, the two-dimensional Haar smoothing wavelet \(\phi_{1,2}\) is a \(2\times 2\) matrix, all of whose entries are \(1/2\). The two-dimensional Haar differencing wavelets \(\psi^{a}_{1,2}\) are also \(2\times 2\) matrices. Thus, \(C_{\ell}(S)\) and the \(W^{a}_{\ell}(S)\) each contain one-quarter as many entries as the \(C_{\ell-1}(S)\) from which they are obtained. The Haar wavelet also has the property that the sums of the details at level \(\ell\) equal the difference of the smoothed images at levels \(\ell-1\) and \(\ell\), when the signal at level \(\ell\) has been expanded by repeating each entry.
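The outer-product construction in Eq. (7) is easy to verify directly. A short NumPy check that the four 2D Haar filters form an orthonormal basis of \(2\times 2\) blocks, which is why each level of the decomposition loses no information:

```python
import numpy as np

phi1 = np.array([1.0, 1.0]) / np.sqrt(2)
psi1 = np.array([1.0, -1.0]) / np.sqrt(2)

# Eq. (7): smoothing, vertical, horizontal, and diagonal 2D filters.
phi2 = np.outer(phi1, phi1)
psi_v = np.outer(psi1, phi1)
psi_h = np.outer(phi1, psi1)
psi_d = np.outer(psi1, psi1)

filters = [phi2, psi_v, psi_h, psi_d]
# Pairwise Frobenius inner products; the identity matrix means the
# filters are an orthonormal basis for 2x2 blocks of the input.
gram = np.array([[np.sum(f * g) for g in filters] for f in filters])
```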
This is reminiscent of the Laplacian pyramid algorithm [1], except that the MRD generates these details constructively and provides orientation information in dimensions larger than one. We provide worked examples of the 2-dimensional Haar decomposition in Apps. B.1 and B.2. Importantly, the contents of the MRD, the set \(\mathcal{M}_{S}\), form a _partition_ of \(S\), such that \(S\) can be reconstructed via \[S=\phi^{-1}C(S)+\sum_{\ell=1}^{L}\sum_{a}(\psi^{-1})^{a}W_{\ell}^{a}(S), \tag{8}\] where \(\phi^{-1}\) and \(\psi^{-1}\) are the "reconstruction" or "inverse" wavelets, respectively. The numerical values of these depend on the conventions adopted for the MRD. We show a concrete example in the appendix where the inverse wavelets are identical to the wavelets used in the MRD. One implication of Eq. (8) is that \(\mathcal{M}_{S}\) and \(S\) are two fully equivalent, lossless representations of exactly the same information. This can be confirmed by counting degrees of freedom. For a one-dimensional input \(S\) of length \(2^{L}\) that we have decomposed with the Daubechies-1 wavelet, \(W_{\ell}(S)\) has length \(2^{L-\ell}\), and \(\mathcal{M}_{S}\) has length \(1+\sum_{\ell=0}^{L-1}2^{\ell}=2^{L}\), where the leading \(1\) comes from \(C(S)\) and the remaining terms come from the \(W_{\ell}(S)\). Despite retaining all the information of the original image, the MRD \(\mathcal{M}_{S}\) has rearranged this data in a novel way according to scale and, in dimensions larger than one, orientation. The MRD reorganizes the information contained in the input so as to enable direct access to information that is encoded in any way other than purely through local spatial correlations. Heuristically, the arrangement of an MRD is similar to a telescoping series. This makes the MRD a convenient set of inputs to a DL network if we want that network to learn - in a size-agnostic way - aspects of data aside from those encoded purely through spatial locality.
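The partition property of Eq. (8) and the degree-of-freedom count can be checked numerically. The paper computes its decompositions with PyWavelets; here is a self-contained NumPy sketch of a full 2D Haar MRD and its exact inverse (the Haar case where the inverse wavelets equal the analysis wavelets):

```python
import numpy as np

PHI = np.array([1.0, 1.0]) / np.sqrt(2)
PSI = np.array([1.0, -1.0]) / np.sqrt(2)
# The four orthonormal 2D Haar filters of Eq. (7), keyed by role.
FILTERS = {'c': np.outer(PHI, PHI), 'v': np.outer(PSI, PHI),
           'h': np.outer(PHI, PSI), 'd': np.outer(PSI, PSI)}

def _blocks(S):
    """View S as a grid of non-overlapping 2x2 blocks (stride-2 conv)."""
    h, w = S.shape
    return S.reshape(h // 2, 2, w // 2, 2).transpose(0, 2, 1, 3)

def decompose(S):
    """Full Haar MRD of a square 2^L x 2^L image: details per level
    plus the smoothest view C(S)."""
    levels = []
    while S.size > 1:
        coef = {k: np.einsum('ijab,ab->ij', _blocks(S), f)
                for k, f in FILTERS.items()}
        levels.append({k: coef[k] for k in 'vhd'})
        S = coef['c']
    return levels, S  # [W_1, ..., W_L] and the 1x1 smoothest view

def reconstruct(levels, C):
    """Invert the MRD as in Eq. (8); because the filters are orthonormal,
    each 2x2 block is the sum of coefficient * filter over the four roles."""
    S = C
    for lvl in reversed(levels):
        parts = {'c': S, **lvl}
        b = sum(parts[k][..., None, None] * FILTERS[k] for k in parts)
        h, w = S.shape
        S = b.transpose(0, 2, 1, 3).reshape(2 * h, 2 * w)
    return S
```

For an \(8\times 8\) input this yields three levels of details plus a single smoothest-view entry, \(1 + 3(16+4+1) = 64\) numbers in total, and reconstruction is exact to floating-point precision.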
This stands in contrast to a CNN, which encodes spatial information through filters of fixed, predetermined size. In this work, we will henceforth only use the Haar wavelet applied to input signals \(S\) with two dimensions - e.g., images. This fixes the index \(a\) to take on three values - corresponding to vertical, horizontal, and diagonal differences, as in Eq. (7). For notational convenience, we will drop the explicit dependence on \(S\) in the average and the details. We summarize many of the technical terms we use in this work in App. A. We also connect our terminology to alternate terminology found throughout the literature. ## 3 New Method: The WavPool Block We introduce the _WavPool_ block to incorporate the wavelet decomposition into a neural network context. The WavPool takes in the levels of the MRD on an input image and applies a dense layer to each component of the transform. We refer to the output of this collection of many dense layers as the multi-resolution perceptron (MRP). The _Pool_ of WavPool is derived from the fact that we apply a maximum pooling operation to the MRP. The pooling is critical to capture the different layers of granularity of the transform and allow the training process to determine the importance of different decomposition components. The fundamental _MicroWav_ layer is shown in Fig. 1.

Figure 1: The MicroWav operation. The three-tailed operation applies the wavelet decomposition at a defined level \(\ell\) and assigns each of the three details to their own dense layer of size \(N_{\ell}\).

Each MicroWav component, which makes up the first part of the WavPool, operates on a specific level \(\ell\) of the MRD. Each image detail \(W^{a}_{\ell}\) is fed into one of three independent dense layers. The output of each level \(\ell\) has its own hidden size, \(N_{\ell}\). After performing experiments for an optimized network, as discussed in Sec. 4, we find that hidden layers of different sizes lead to better performance.
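A minimal PyTorch sketch of this unit follows; Fig. 1 fixes only the three-dense-layer structure, so the activation and interface here are illustrative assumptions:

```python
import torch
import torch.nn as nn

class MicroWav(nn.Module):
    """Sketch of a MicroWav unit: one dense layer of width `hidden` per
    oriented detail (vertical, horizontal, diagonal) at MRD level `level`.
    `in_side` is the side length of the original square input."""
    def __init__(self, level, in_side, hidden):
        super().__init__()
        # The detail at level l of a Haar MRD has side in_side / 2**level.
        detail_dim = (in_side // 2 ** level) ** 2
        self.dense = nn.ModuleList(
            nn.Linear(detail_dim, hidden) for _ in range(3))

    def forward(self, details):
        # details: three (batch, detail_dim) tensors, one per orientation.
        out = [torch.relu(f(w)) for f, w in zip(self.dense, details)]
        return torch.stack(out, dim=1)  # shape (batch, 3, hidden)
```

For a \(28\times 28\) MNIST image at level \(\ell=1\), each flattened detail has \(14^{2}=196\) entries.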
To turn the output of the MicroWav layers into a classifier, we first collect the dense outputs corresponding to all \(L\) levels of the MRD. We call this combined output a multi-resolution perceptron (MRP): it is inspired by the perceptron Rosenblatt (1958), but the data has been processed by the MRD. We wish to train on all of the content of the MRP, but we also want to allow the layers to have different sizes to accommodate the different information content of each layer of the MRD. We fix the hidden layer sizes to scale inversely with \(\ell\), such that \(\ell\times N_{\ell}\) is constant. This has multiple implications. First, this scaling reduces the number of trainable parameters in the network because higher MicroWav levels have fewer outputs. Second, because this scaling is sub-exponential, the higher MicroWav levels are smaller. This helps the network avoid over-fitting to higher levels of the decomposition, which would otherwise have an unbalanced node-to-input feature ratio. Finally, this ensures that the MRD is not inverted at the end of the network, as it is in previous embeddings of wavelets in neural networks. Instead, this choice enforces _sparsity_ inside the WavPool, effectively compressing the lower levels of the decomposition. To enable a pooling operation on the differently sized outputs of the MicroWav layers, we apply a padding operation to the output of each of these sub-blocks. This ensures a uniform shape across the block, such that the shape of the stacked output of the MRP is \(L\times 3\times N_{1}\). The resulting block is compressed by a pooling operation. We name the entire network the _WavPool_. The complete WavPool diagram is shown in Fig. 2. In Tab. 1, we estimate the time complexity of components of the WavPool. The WavPool block itself derives most of its time complexity from the dense layers that compose the majority of its calculations. 
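The pad-stack-pool assembly described above can be sketched in PyTorch as follows; the pooling kernel size and the use of zero padding are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def wavpool_features(mrp_outputs, kernel=(2, 2, 2)):
    """Sketch of the WavPool head (Fig. 2): pad each level's
    (batch, 3, N_l) MicroWav output up to the widest width N_1, stack
    the levels into a (batch, L, 3, N_1) block, and compress it with
    3D max pooling. The flattened result feeds the final dense layer."""
    n1 = max(t.shape[-1] for t in mrp_outputs)
    padded = [F.pad(t, (0, n1 - t.shape[-1])) for t in mrp_outputs]
    stacked = torch.stack(padded, dim=1).unsqueeze(1)  # (batch, 1, L, 3, N_1)
    return F.max_pool3d(stacked, kernel).flatten(1)
```

With hidden sizes chosen so that \(\ell\times N_{\ell}\) is constant, e.g. \(N_{\ell}=48/\ell\) for \(L=3\) levels, the stacked block is \(3\times 3\times 48\) per sample before pooling.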
The Haar wavelet itself has linear time complexity with the size of its input \(n\), and the operations within MicroWav are sequentially applied. Thus, the MicroWav has time complexity of \(O(n)+O(\sum_{t=0}^{T}N_{t}\times n^{2})\). Within the MicroWav, \(T=3\), so this complexity can be simplified to \(O(n+3n^{3})\), or \(O(n^{3})\). This further applies to the full block. The stacking operation takes place in constant time, and the 3D Pooling operation is an operation \(O(n^{3})\).

Figure 2: Architectural diagram of the WavPool block. The block comprises individual hidden-layer triplets ("MicroWav"; Fig. 1) that account for each level of the MRD, which are then fed into a pooling layer. This produces an output with shape \(L\times 3\times\max(N_{\ell})\). A 3D Pooling operation is applied to this output, followed by a dense layer. The breakdown of parameter and time complexity of each point in the network can be seen in Table 1.

As all these operations are linearly applied, this produces a block with time complexity of \(O(H)+O(\sum_{t=0}^{T}N_{t}\times H^{2})+O(L)+O(\prod_{i=0}^{3}k_{i})+O(m\times N_{t}^{2})\), or more compactly, \(O(n^{3})\). The matrix multiplication required to compute dense layers is also \(O(n^{3})\). Therefore, the MLP we compare to has a time complexity of \(O(2n^{3})\), putting the complexity on par with the WavPool. Convolutional networks are more dependent on the size of their filters and number of channels, which is described as \(O(m_{in}\times m_{out}\times\prod_{i=0}^{D}k_{i}^{2})\) He and Sun (2014), where \(m\) refers to the channels in the convolution and \(k_{D}\) is the kernel size in each dimension. Because \(k_{i}\) has a maximum size of \(n\) (i.e., the kernel cannot exceed the size of the input data), in the case of our 2D experiment we constrain the complexity of the CNN to be \(O(m_{in}\times m_{out}\times n^{4})\) at a maximum.
In some circumstances (e.g., with very large kernels or with many in and out channels), the CNN is more time-intensive than the maximum-complexity form of the WavPool. ## 4 Experiments: Comparing WavPool, MLP, and CNN To investigate the efficacy of WavPool networks, we conduct experiments that compare the WavPool network, a two-hidden-layer dense network, and a CNN. The CNN comprises two convolutional layers with a batch norm layer between them. We test the networks on multiple datasets: MNIST LeCun et al. (2010), Fashion MNIST Xiao et al. (2017), and CIFAR-10 Krizhevsky (2009). In our experiments, CIFAR-10 was transformed into greyscale to constrain the problem to two-dimensional space. We perform three trials on each data set with each network. For each trial, we changed the initialization weights and the subset of each dataset used by setting a different random seed. For all experiments, we use 4000 training images and 2000 validation images. Each trial is run for a maximum of 120 epochs, though subject to early stopping if the validation loss stalled after five epochs of patience. Networks are built with PyTorch Paszke et al. (2019). The wavelet-based networks are built with PyWavelet Lee et al. (2019) as a calculation method for the wavelet decomposition. All calculations were performed on a 10-core CPU. First, we perform these experiments with non-optimized parameters, for the purpose of showing how each block performs without an in-depth parameter exploration period. The parameters are chosen by leaving most parameters as the default parameters of PyTorch. The results of the non-optimized experiments can be found in Tab. 2. These values are the average of the scores obtained on the validation sets of the three different runs. Values without error bars did not have significant variation between runs.
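The stopping rule described above (halt once validation loss has stalled for five epochs) can be sketched as a small helper; the class name and interface are our own, not the paper's code:

```python
class EarlyStopping:
    """Stop training once validation loss has failed to improve for
    `patience` consecutive epochs."""
    def __init__(self, patience=5):
        self.patience = patience
        self.best = float('inf')
        self.stalled = 0

    def should_stop(self, val_loss):
        if val_loss < self.best:
            self.best, self.stalled = val_loss, 0  # new best: reset counter
        else:
            self.stalled += 1
        return self.stalled >= self.patience  # True => halt training
```

Calling `should_stop` once per epoch with the current validation loss reproduces the "five epochs of patience" behavior when `patience=5`.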
Next, we use Bayesian hyperparameter optimization methods Nogueira (2014, 2015) to find the best possible networks for each architecture and task combination. This was done three times with the goal of demonstrating the best capabilities of each network. This includes optimizing the learning rate, hidden layer sizes, and optimizer for all networks, and additionally the convolution kernel size and number of hidden channels for the CNN. The exact range of the optimization space can be found in App. D. The quality of a network is assessed after an abbreviated training period of 20 epochs, based on the F1 score on the validation set. The final network is then re-initialized and allowed to run for up to 120 epochs subject to early stopping as in the non-optimized case. The results of the experiments on optimized networks are shown in Tab. 3. We also provide a relative comparison of performance on these metrics in Tab. 4. We show the results of these experiments in Fig. 3. The solid lines show the mean training performance, the shaded regions show the envelope of the performance across the three runs, and the dotted lines show the results from the best-performing validation run. \begin{table} \begin{tabular}{l l l} Step & Output Size & Time Complexity \\ \hline MicroWav (single \(\ell\)) & \(1\times 3\times N_{\ell}\) & \(O(n+3n^{3})\) \\ Padding/Stacking & \(L\times 3\times N_{1}\) & \(O(L)\) \\ 3D Pooling & \((L-k_{1}+1)\times(4-k_{1})\times(N_{1}-k_{2}+1)\) & \(O(n^{3})\) \\ Dense Output & \(m\) & \(O(n^{2}m)\) \\ \end{tabular} \end{table} Table 1: The time complexity and resulting matrix size for each step in the WavPool operation. \(N_{\ell}\) is the size of the output of the dense network that has been applied to each _detail_ \(W_{\ell}\); \(k_{n}\) is the selected kernel size, a hyperparameter for the shape of the pooling operation; and \(m\) is the selected output size.
WavPool requires significantly more trainable parameters than a single-channel CNN block with comparable architecture and complexity. This is attributed to the MicroWav's use of dense layers as a means of holding trained component information. Reduction in the size of these dense layers is still feasible, as no operations (e.g., weight-pruning or quantization) have been applied to these dense layers in the MicroWav blocks, nor have they been individually tuned for ideal size given the size of the wavelet transform output. Because many signals are sparse in the wavelet representation Mallat (2008), further optimization of the WavPool network is an interesting and promising topic for future work. Nevertheless, the results show that the WavPool is less information-hungry than the CNN. For the MNIST and Fashion MNIST tasks, the number of parameters in the WavPool only increases by 2% from the non-optimized to the optimized version, while the optimized versions of the CNN require many more parameters than the non-optimized version. For WavPool, the number of parameters does not significantly increase for the Fashion-MNIST and MNIST tasks. The number of parameters decreases for CIFAR-10, while increasing predictive power significantly. For both non-optimized and optimized experiments, the WavPool outperforms the MLP and the CNN with respect to accuracy and AUC ROC for almost all classification problems. The exception is that the MLP outperforms WavPool at the sub-percent level on Fashion-MNIST in the non-optimized case. Despite its relatively high complexity (e.g., more independent layers) compared to a basic MLP, the WavPool contains at least \(10^{4}\) fewer trainable parameters in all cases. Trainable weights in the network are the ones in each MicroWav layer (see Fig. 1) and on the final dense layer of the WavPool (see Fig. 2). This is shown by the increase in accuracy across the different blocks tested. While the blocks were exposed to the same amount of data in identical training schemes, the WavPool showed a \(\sim 10\%\) increase in accuracy over a standard MLP and a CNN block for tests on CIFAR-10. CIFAR-10 is itself a real image dataset, so the correlation of the pixels within a given image is both local and non-local Krizhevsky (2009). The decomposition of the input applied by WavPool is lossless, yet contains non-local and local information broken into different views of the data.

\begin{table} \begin{tabular}{l l l l l} Network & Task & Parameters & ROC AUC & Accuracy \\ \hline \hline \multirow{3}{*}{WavPool} & MNIST & 182192 & **0.953\(\pm\)0.001** & **0.916\(\pm\)0.002** \\ & Fashion-MNIST & 182192 & 0.907\(\pm\)0.001 & 0.832\(\pm\)0.001 \\ & CIFAR & 235964 & **0.629\(\pm\)0.004** & **0.331\(\pm\)0.007** \\ \hline \multirow{3}{*}{MLP Block} & MNIST & 209890 & 0.942 & 0.895\(\pm\)0.001 \\ & Fashion-MNIST & 209890 & **0.91\(\pm\)0.001** & **0.839\(\pm\)0.002** \\ & CIFAR & 273250 & 0.604\(\pm\)0.002 & 0.288\(\pm\)0.003 \\ \hline \multirow{3}{*}{CNN Block} & MNIST & **6806** & 0.95 & 0.91 \\ & Fashion-MNIST & **6806** & 0.898 & 0.815 \\ \cline{1-1} & CIFAR & **9046** & 0.603\(\pm\)0.003 & 0.285\(\pm\)0.006 \\ \end{tabular} \end{table} Table 2: Parameter count and validation score metrics for non-optimized networks. The network hyperparameters were arbitrarily chosen, along with the width of the networks/kernel size. The receiver operating characteristic (ROC) AUC and accuracy values for each network are obtained from a validation set. Without tuning, the WavPool performs the best on two out of the three tasks. The MLP narrowly outperforms WavPool on the Fashion-MNIST task.

Figure 3: Performance of optimized networks on Fashion MNIST and CIFAR-10. The error bars show the variation of three trials with different network initialization and different dataset subsets. Validation for a single trial is represented with a dashed line.
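The parameter counts compared above reduce to simple arithmetic; the sketch below (with illustrative layer sizes, not the papers' exact architectures) shows why dense layers dominate the totals relative to convolutions.

```python
def dense_params(n_in, n_out):
    # weights (n_in * n_out) plus one bias per output unit
    return n_in * n_out + n_out

def conv2d_params(in_ch, out_ch, k):
    # one k x k kernel per (input, output) channel pair, plus one bias
    # per output channel
    return in_ch * out_ch * k * k + out_ch

# A dense layer mapping a flattened 28x28 image to 128 units:
d = dense_params(28 * 28, 128)   # 100480 parameters
# A conv layer with 16 output channels and a 3x3 kernel on 1 input channel:
c = conv2d_params(1, 16, 3)      # 160 parameters
```

A dense layer's cost scales with the product of its input and output sizes, while a convolution's cost is independent of the image size, which is why the MicroWav dense layers carry most of WavPool's parameters.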
The initial decomposition allows different parts of the network to learn information independently. ## 5 Conclusion In this work, we demonstrate the functionality and efficacy of a multi-resolution decomposition (MRD) within a neural network. Specifically, we decompose data using a Haar wavelet and train classifiers on this stacked, decomposed data. On each level of the transformed data, we apply a dense layer, which we refer to as a MicroWav unit. By training all of the MicroWav units in parallel, we create a multi-resolution perceptron (MRP), on which we use pooling to create the WavPool block. We observe better performance with this classifier relative to a comparable CNN on MNIST, Fashion-MNIST, and CIFAR-10 datasets. It is notable that WavPool has the capacity to outperform a CNN with comparable architectural size and complexity. CNNs abstract scale information from an image using fixed filter sizes and dynamic, trainable weights. The abstraction comes at the expense of increasing the size of the data vector in some implementation scenarios. In contrast, the MRP uses filters with fixed values, but varying in kernel size/spatial extent, to partition the data into components of varying size without increasing the size of the input data vector. We conclude that the multi-resolution aspect of the MRP makes information from all scales of the image readily available to the classifier. In addition, on multichannel data, we expect that WavPool will have better scaling properties while retaining its high performance. The MRP also holds promise for a wider range of applications. Similar techniques in combination with CNNs have shown promising, near-state-of-the-art performance on a variety of tasks Chen et al. (2017); Lai et al. (2017); Liu et al. (2018); Fujeda et al. (2018); Yao et al. (2022). Supplementing CNNs with MRPs or MicroWav-like blocks in these architectures is a promising route for future investigation. 
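A minimal sketch of the single-level 2-D Haar decomposition underlying the MRD (written here in an average/difference convention; other normalizations differ only by constant factors):

```python
def haar2d_level(img):
    """One level of a 2-D Haar decomposition. Returns the approximation
    and horizontal/vertical/diagonal detail sub-bands, each half the size
    of the input; side lengths must be even."""
    h, w = len(img), len(img[0])
    LL = [[0.0] * (w // 2) for _ in range(h // 2)]
    LH = [[0.0] * (w // 2) for _ in range(h // 2)]
    HL = [[0.0] * (w // 2) for _ in range(h // 2)]
    HH = [[0.0] * (w // 2) for _ in range(h // 2)]
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            LL[i // 2][j // 2] = (a + b + c + d) / 4   # local average
            LH[i // 2][j // 2] = (a - b + c - d) / 4   # horizontal detail
            HL[i // 2][j // 2] = (a + b - c - d) / 4   # vertical detail
            HH[i // 2][j // 2] = (a - b - c + d) / 4   # diagonal detail
    return LL, LH, HL, HH

# Recursing on LL yields the multi-level pyramid that per-level dense
# layers (the MicroWav units) would consume.
LL, LH, HL, HH = haar2d_level([[1.0, 2.0], [3.0, 4.0]])
```

Note how the three detail sub-bands separate local structure by orientation while the approximation carries the non-local, coarse-scale content.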
\begin{table} \begin{tabular}{l l l l l} Network & Task & Parameters & ROC AUC & Accuracy \\ \hline \hline \multirow{3}{*}{WavPool} & MNIST & 186722 & **0.978\(\pm\)0.001** & **0.96\(\pm\)0.001** \\ & Fashion-MNIST & 186722 & **0.925** & **0.865** \\ & CIFAR & 201672 & **0.658** & **0.386\(\pm\)0.001** \\ \hline \multirow{3}{*}{MLP Block} & MNIST & 216250 & 0.947 & 0.905 \\ & Fashion-MNIST & 217045 & 0.913\(\pm\)0.001 & 0.843\(\pm\)0.001 \\ & CIFAR & 215290 & 0.599 & 0.279 \\ \hline \multirow{3}{*}{CNN Block} & MNIST & **96375** & 0.977\(\pm\)0.002 & 0.959\(\pm\)0.003 \\ & Fashion-MNIST & **154154** & 0.887\(\pm\)0.012 & 0.797\(\pm\)0.02 \\ \cline{1-1} & CIFAR-10 & **125175** & 0.642\(\pm\)0.01 & 0.352\(\pm\)0.019 \\ \end{tabular} \end{table} Table 3: The same networks and datasets from Table 2, but optimized using a Bayesian Optimization scheme.

There is also potential for this method in Quantum Neural Network contexts. For example, the Haar wavelet is an operation requiring only the values 1 and -1 (as seen in equations 6 and 7), so it can be easily implemented in the schema required for quantum neural networks, because the operations involved in the Haar wavelet mimic those used on qubits Kwak et al. (2021).

## Acknowledgments

We acknowledge the Deep Skies Lab as a community of multi-domain experts and collaborators who've facilitated an environment of open discussion, idea-generation, and collaboration. This community was important for the development of this project. We thank Aleksandra Ciprijanovic, Andrew Hearin, and Shubhendu Trivedi for comments on the manuscript. This manuscript has been authored by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics.
2306.06719
Proteinoid microspheres as proto-neural networks
Proteinoids, also known as thermal proteins, possess a fascinating ability to generate microspheres that exhibit electrical spikes resembling the action potentials of neurons. These spiking microspheres, referred to as protoneurons, hold the potential to assemble into proto-nano-brains. In our study, we investigate the feasibility of utilizing a promising electrochemical technique called differential pulse voltammetry (DPV) to interface with proteinoid nano-brains. We evaluate DPV's suitability by examining critical parameters such as selectivity, sensitivity, and linearity of the electrochemical responses. The research systematically explores the influence of various operational factors, including pulse width, pulse amplitude, scan rate, and scan time. Encouragingly, our findings indicate that DPV exhibits significant potential as an efficient electrochemical interface for proteinoid nano-brains. This technology opens up new avenues for developing artificial neural networks with broad applications across diverse fields of research.
Panagiotis Mougkogiannis, Andrew Adamatzky
2023-06-11T16:38:18Z
http://arxiv.org/abs/2306.06719v1
# Proteinoid microspheres as proto-neural networks

###### Abstract

Proteinoids, also known as thermal proteins, possess a fascinating ability to generate microspheres that exhibit electrical spikes resembling the action potentials of neurons. These spiking microspheres, referred to as protoneurons, hold the potential to assemble into proto-nano-brains. In our study, we investigate the feasibility of utilizing a promising electrochemical technique called differential pulse voltammetry (DPV) to interface with proteinoid nano-brains. We evaluate DPV's suitability by examining critical parameters such as selectivity, sensitivity, and linearity of the electrochemical responses. The research systematically explores the influence of various operational factors, including pulse width, pulse amplitude, scan rate, and scan time. Encouragingly, our findings indicate that DPV exhibits significant potential as an efficient electrochemical interface for proteinoid nano-brains. This technology opens up new avenues for developing artificial neural networks with broad applications across diverse fields of research.

keywords: thermal proteins, proteinoids, microspheres, unconventional computing

## 1 Introduction

While there are numerous prototypes of organic electronic devices [1; 2; 3; 4], very few, if any, demonstrate substantial degrees of stability or bio-compatibility [5]. This is why we propose to explore thermal proteins [6], a unique class of organic devices, as a substrate and architecture for future non-silicon massive parallel computers. Proteinoids, also known as thermal proteins, are derived by subjecting amino acids to high temperatures until they reach their melting point, leading to polymerization and the formation of polymeric chains [6]. This polymerization process occurs in the absence of solvents, initiators, or catalysts, under an inert atmosphere, typically at temperatures ranging from 160 to 200 °C.
Specifically, the tri-functional amino acids, such as glutamic acid, aspartic acid, or lysine, undergo cyclisation at elevated temperatures and serve as solvents and initiators for the polymerization of other amino acids [7; 6]. The intriguing capacity of proteinoid microspheres to generate action-potential-like spikes and spike trains has led to their consideration as analogs of proto-neurons, which are neuron-like cells that function without metabolic processes [8; 9; 10]. We explore the concept of nano brains (PNBs) [11; 12] to evaluate the feasibility of proteinoid microspheres for the physical imitation of artificial neural networks (ANNs) [13]. Namely, we aim to imitate neuronal responses to external stimuli [14; 15; 16; 17; 18] in PNBs. We use differential pulse voltammetry (DPV) for assessing the capabilities of PNBs for pattern recognition. Comparative analyses of DPV and alternative approaches are conducted, and the current research status in this field is reviewed. The implications of the findings for subsequent research are also discussed. In this study, we employ techniques from the fields of electrochemical neuroscience, artificial neural networks (ANNs), and pattern recognition to analyze the spikes generated by PNBs. Specifically, we utilize differential pulse voltammetry (DPV) to measure the electrical signals produced by PNBs. Through this analysis, we aim to understand the behavior of PNBs and evaluate their capability for pattern recognition. Our objective is to review existing literature that explores the relationship between PNBs and ANNs, identify any gaps or limitations in the current understanding, and propose a research methodology to investigate the potential of PNBs in the field of pattern recognition [19; 20; 21]. We now turn to the neural networking capabilities of biological neurons.
A group of researchers at NIST (National Institute of Standards and Technology) has made significant advancements in this area by developing an artificial neuron that exhibits an astonishing firing rate of 100 billion times per second [13]. This remarkable speed surpasses the firing rate of a human brain cell by approximately tenfold. The research article highlights the use of niobium nitride, a superconducting material, in the artificial neuron. This material allows the neuron to switch between two distinct electrical resistance states when exposed to magnetic fields. The article discusses the possibilities and challenges associated with creating "neuromorphic" hardware that emulates the complex functioning of the human brain [13]. In their research, Wan et al. presented a breakthrough in the field of artificial neurons by showcasing the functionality of an artificial sensory neuron capable of gathering optic and pressure data from photodetectors and pressure sensors, respectively [19]. This neuron can transmit the combined information through an ionic cable and integrate it into post-synaptic currents using a synaptic transistor. The study highlights the significance of synchronizing two sensory cues, as it activates the artificial sensory neuron at different levels, enabling the control of skeletal myotubes and a robotic hand. Furthermore, the research demonstrates that the artificial sensory neuron enhances recognition capabilities for combined visual and haptic cues through the simulation of a multi-transparency pattern recognition task [19]. In their study, Boddy et al. employ artificial neural networks (ANNs) to effectively identify and classify marine phytoplankton using flow cytometry data, showcasing the capability of ANNs in recognizing patterns in biological data [20]. 
The article provides an overview of the structure and training process of three types of ANNs: backpropagation (multilayer perceptron), radial basis function (RBF), and learning vector quantization. These ANNs utilize supervised learning techniques and are well-suited for biological identification purposes. Additionally, the study highlights the effectiveness of Kohonen self-organizing maps (SOM) and adaptive resonance theory (ART) as classification methods [20]. In their research, Syed and colleagues introduce a groundbreaking concept that goes beyond the traditional fixed feedforward operation commonly found in contemporary artificial neural networks [22]. The study presents a novel class of synthetic neurons capable of adapting their functionality in response to feedback signals from neighboring neurons. These synthetic neurons demonstrate the ability to emulate complex brain functions, including spike frequency adaptation, spike timing-dependent plasticity, short-term memory, and chaotic dynamics [22]. Baluska et al. explore the evolutionary perspective of biomolecular structures and processes that contribute to the emergence and maintenance of cellular consciousness [11]. The proposition suggests that subcellular components, such as actin cytoskeletons and membranes, play a crucial role in nano-intentionality. This is attributed to the inherent structural adaptability of individual biomolecules, extending beyond cellular boundaries [11]. The present paper focuses on exploring the capabilities of proteinoid nano brains (PNBs) in processing signals obtained from a differential pulse voltammetry (DPV) electrode and their potential for pattern recognition, drawing inspiration from artificial neural networks (ANNs) [23; 24; 22]. The objective of this study is to investigate the ability of PNBs to detect spikes induced by DPV signals. We aim to assess the responsiveness of PNBs to DPV signals and their capacity to generate ANNs for pattern recognition purposes.
Experimental results are presented, evaluating the pattern recognition performance of PNBs using DPV signals. The paper concludes by discussing the implications of the findings and providing recommendations for future research.

## 2 Methods

High-purity amino acids, including L-Phenylalanine, L-Aspartic acid, L-Histidine, L-Glutamic acid, and L-Lysine (Fig. 1), were acquired from Sigma Aldrich with a purity exceeding 98%. The synthesis of proteinoids followed previously established methods [25]. The structural analysis of the proteinoids was conducted using scanning electron microscopy (SEM) with the FEI Quanta 650 equipment. Characterization of the proteinoids was performed using FT-IR spectroscopy [25]. To measure the electrical activity of the proteinoids, iridium-coated stainless steel sub-dermal needle electrodes (Spes Medica S.r.l., Italy) and a high-resolution data logger equipped with a 24-bit A/D converter (ADC-24, Pico Technology, UK) were used. The electrodes were configured in pairs to measure the potential difference between them, with an inter-electrode distance of approximately 10 mm. Electrical activity was recorded at a sampling rate of one sample per second. The data logger recorded multiple measurements (typically up to 600 per second) and stored the mean value for analysis. Differential pulse voltammetry (DPV) measurements were taken with the Zimmer & Peacock Anapot EIS, which provides users with the flexibility to define measurement parameters for conducting DPV experiments. In order to perform a DPV measurement, several key parameters need to be specified, as follows: the equilibrium time is set to 100 s; the potential scan starts at -8 V and ends at 8 V; the potential step size is set to 0.001 V; the pulse amplitude is set to 0.2 V; the pulse width is specified as 0.08 s; and the scan rate is set to 0.001 V/s.
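Under the assumption that the staircase program is built directly from the parameters listed above, the following sketch generates the base/pulse potential pairs of a DPV scan. The dictionary and function names are ours for illustration, not the instrument's API.

```python
# Hypothetical parameter set built from the values quoted in the text.
DPV = {
    "E_start": -8.0,    # V
    "E_end": 8.0,       # V
    "E_step": 0.001,    # V
    "E_pulse": 0.2,     # V
    "t_pulse": 0.08,    # s
    "scan_rate": 0.001, # V/s
}

def dpv_potential_program(p):
    """Yield (base, pulsed) potential pairs for each staircase step.
    In DPV the current is sampled just before the pulse and at its end,
    and the difference is plotted against the base potential."""
    n_steps = int(round((p["E_end"] - p["E_start"]) / p["E_step"]))
    for k in range(n_steps + 1):
        base = p["E_start"] + k * p["E_step"]
        yield base, base + p["E_pulse"]

steps = list(dpv_potential_program(DPV))   # 16001 staircase steps
```

With a 1 mV step over a 16 V window, the scan comprises 16001 base potentials, each briefly raised by the 0.2 V pulse amplitude.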
During the measurement process, the Anapot EIS applies brief pulses to the working electrode in small steps. It measures the current response twice in each step, capturing the current values before and after the pulse. This process is repeated until every phase of the potential scan is completed. By precisely controlling the measurement parameters and acquiring current response data at different potentials, the Anapot EIS enables comprehensive analysis and characterization of samples through differential pulse voltammetry.

## 3 Results

Scanning electron microscopy revealed the complex proteinoid molecular network tuned to 1337 nm porosity (Fig. 2). The network of molecules observed in our experiments appears to show some morphological similarity to neural cultures [27]. The results of Figure 3 suggest that proteinoids behave as electrical semiconductors, likely due to their amino acid chain structure. This type of electrical activity is reminiscent of that observed in neurons within the nervous system [28, 29, 30, 31]. Although the level of electrical activity is lower than that of neurons, the similarity in its behaviour is intriguing. Despite this, the results of Figure 3 suggest that the electrical nature of proteinoids may exhibit properties similar to what is observed in biological cells [32, 33, 21]. The data from Fig. 3 are further supported by our previous research showing that proteinoids can synchronize electrical activity [25, 21].

Figure 1: Chemical structures of L-Glu:L-Asp, L-Glu:L-Phe:L-His, L-Lys:L-Phe:L-Glu, L-Arg,L-Glu, L-Asp:L-Pro, L-Glu:L-Asp:L-Phe.

Figure 2: Porosity of proteinoids. This graph shows the average porosity (3.7982 um) of proteinoids, represented by a depth map, binary segmentation, pore space segmentation, and pore size distribution in um [26].

Figure 3: DPV measurements of 12 different proteinoids. The height of each peak is proportional to the number of microspheres present in the sample.
Based on the data presented in Table 2, it can be observed that the proteinoids exhibited a considerable range in the quantity of spikes they generated. The presence of this phenomenon was demonstrated through the fluctuating quantity of spikes that were detected in the voltage-current graphs. The findings indicate that the mean number of spikes observed was 385.8, with a range spanning from 8 spikes for the L-Glu:L-Asp:L-Pro sample to 900 spikes for the L-Phe sample. The time duration metrics of the previously mentioned spikes varied from 20.71 seconds for L-Phe to 2541 seconds for L-Glu:L-Asp:L-Pro. The data indicates that the proteinoids exhibited a mean inter-spike interval of 425.30 seconds. The combination of L-Glu, L-Asp, and L-Pro in a ratio of L-Glu:L-Asp:L-Pro resulted in a lower number of spikes (8) compared to other combinations. Additionally, the mean inter-spike interval (2541.00 sec) for this combination was considerably higher than that of the other combinations. Figure 4 depicts the relationship between pre-synaptic and post-synaptic neurons and their impact on proteinoid activity. The terms presynaptic and postsynaptic refer to the two sides of a synapse, which is the point of connection between two neurons or between a neuron and a target cell. The presynaptic neuron releases neurotransmitters, which are chemical messengers facilitating intercellular communication.

Figure 4: The presented colour map depicts the PSI and PPI values of various proteinoids. PSI, or post-synaptic index, quantifies the chemical or functional potency of inter-neuronal connections within a network. PPI stands for post postsynaptic index. It quantifies the efficacy of inter-neuronal connections in a given network. Darker colours of blue indicate elevated PSI values, whereas lighter colours of green indicate elevated PPI values. The map illustrates the correlation between post-synaptic and pre-synaptic neurons and their influence on proteinoid function.
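For most rows of Table 2, the spiking-frequency column follows directly from the mean inter-spike interval (the reciprocal, expressed in millihertz); a quick cross-check is shown below. A couple of rows deviate from this relation, presumably due to rounding or a different estimator.

```python
def spiking_frequency_mHz(mean_isi_s):
    """Mean spiking frequency in millihertz from the mean
    inter-spike interval in seconds: f = 1 / ISI."""
    return 1000.0 / mean_isi_s

# Cross-checking two rows of Table 2:
f_glu_asp = spiking_frequency_mHz(22.24)   # table reports 44.97 mHz
f_phe = spiking_frequency_mHz(12.32)       # table reports 81.15 mHz
```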
The postsynaptic neuron receives and responds to the neurotransmitter by either firing or not firing an action potential [34]. The code in Fig. 4 assumes that proteinoid samples can form microspheres that function as primitive neuronal analogues, and that the measured potential values correspond to the electrical activity of these microspheres. Moreover, the code utilises temporal coding to convert current values (in \(\mu\)A) to binary values, indicating the presence or absence of a spike at a particular time for each proteinoid microsphere. A spike denotes an abrupt rise in the electrical potential across the membrane of a neuron or a proteinoid microsphere, signifying its activation or firing. The ANN assigns 10 proteinoid microspheres to 10 neurons and employs synaptic weights to depict their interconnections. Synaptic weights represent the relative strength or influence of one neuron on another. The synaptic weights are initialised randomly, without any pre-existing knowledge or learning. Fig. 4 employs post-synaptic and pre-synaptic indices for neuron labelling in the ANN. The post-synaptic neuron receives signals from another neuron, while the pre-synaptic neuron sends signals to another neuron. The algorithm employs heatmaps to depict the synaptic weights of individual samples. Each cell in the heatmap represents a combination of post-synaptic and pre-synaptic neurons, and the colour denotes the synaptic weight value. Initially, the synaptic weights of a 10-neuron network were randomly initialised within the range of -1 to 1, as presented in Table 1. Furthermore, the way in which the input is converted into an output by the activation function (as depicted in Figure 5) may be understood as a depiction of the information exchange among neurons in the nervous system, wherein a greater input would yield a correspondingly higher output.
Furthermore, the temporal codes may be regarded as a depiction of the action potential within the nervous system, in which the potential must reach a threshold before the temporal codes are activated and an output is produced. Proteinoids can thus be used to make interpretations and analogies of the nervous system based on the following mathematical relationship.

\begin{table} \begin{tabular}{|r|r|r|r|r|r|r|r|r|r|} \hline \(-1.0\) & \(1.0\) & \(1.0\) & \(1.0\) & \(-1.0\) & \(1.0\) & \(-1.0\) & \(1.0\) & \(-1.0\) & \(1.0\) \\ \hline \(1.0\) & \(-1.0\) & \(1.0\) & \(1.0\) & \(1.0\) & \(-1.0\) & \(1.0\) & \(1.0\) & \(-1.0\) & \(1.0\) \\ \hline \(1.0\) & \(-1.0\) & \(-0.4\) & \(-1.0\) & \(-0.8\) & \(-1.0\) & \(-1.0\) & \(1.0\) & \(-1.0\) \\ \hline \(-1.0\) & \(-1.0\) & \(-1.0\) & \(-1.0\) & \(-1.0\) & \(-1.0\) & \(-1.0\) & \(-1.0\) & \(1.0\) & \(1.0\) \\ \hline \(-1.0\) & \(-1.0\) & \(0.2\) & \(-1.0\) & \(-1.0\) & \(-1.0\) & \(1.0\) & \(-1.0\) & \(1.0\) & \(1.0\) \\ \hline \(-1.0\) & \(1.0\) & \(0.5\) & \(1.0\) & \(1.0\) & \(1.0\) & \(-1.0\) & \(1.0\) & \(-1.0\) & \(0.7\) \\ \hline \(1.0\) & \(-0.5\) & \(-0.6\) & \(0.7\) & \(1.0\) & \(1.0\) & \(-0.7\) & \(1.0\) & \(1.0\) & \(-1.0\) \\ \hline \(-1.0\) & \(1.0\) & \(1.0\) & \(-0.9\) & \(-1.0\) & \(-1.0\) & \(1.0\) & \(-1.0\) & \(1.0\) & \(1.0\) \\ \hline \(1.0\) & \(-1.0\) & \(1.0\) & \(1.0\) & \(-1.0\) & \(-1.0\) & \(1.0\) & \(-1.0\) & \(1.0\) & \(-1.0\) \\ \hline \(0.3\) & \(1.0\) & \(-1.0\) & \(-1.0\) & \(-0.2\) & \(-1.0\) & \(0.1\) & \(-1.0\) & \(1.0\) & \(-1.0\) \\ \hline \end{tabular} \end{table} Table 1: Initial values of W matrix for a temporal coding neural network with 10 neurons and random weights.
Let \(t\in\mathbb{R}^{n}\) be a vector of time values in seconds, \(p\in\mathbb{R}^{n\times 12}\) be a matrix of potential values in volts for 12 samples, \(N\in\mathbb{N}\) be the number of neurons in the ANN, \(T\in\mathbb{R}^{+}\) be the time window for temporal coding in seconds, \(\theta\in\mathbb{R}^{+}\) be the threshold for spike detection in volts, \(c\in\{0,1\}^{N\times n}\) be a matrix of temporal codes for each neuron over time, and \(W\in[-1,1]^{N\times N}\) be a matrix of synaptic weights between neurons. Then, for each sample \(i=1,\ldots,12\) we have

\[c_{i,j}=\mathbf{1}_{[p_{j,i}>\theta]},\qquad j=1,\ldots,n,\]

where \(\mathbf{1}_{[p_{j,i}>\theta]}\) is an indicator function that returns 1 if \(p_{j,i}\) is greater than \(\theta\), and 0 otherwise. Neurons are distinguished by their unique temporal coding, input parameters, and synaptic weights. When the temporal code \((c_{\cdot,j})\) exceeds the threshold parameter \((\theta)\), the neuron representation fires an action potential through its axonal connections, similar to a real neuron. The proteinoid neurons bear resemblance to the neurons in an actual nervous system. The synaptic weights \((W)\) in a neural network are similar to the synapses in a biological nervous system, as they govern the potency of the link between axons and dendrites. Figures 3 and 5 differ in stimulation level. Figure 3 utilises DPV, whereas Figure 5 employs a power source that supplies a stable voltage through the proteinoid solution. According to the findings of the current research, proteinoids are capable of interpreting and responding to various forms of stimulation. When stimulated with DPV (Figure 3), the proteinoid solution unexpectedly produced oscillating signals as if it were a nervous system analogue. Again unexpectedly, when the proteinoid solution was stimulated with a stable power source (Figure 5), discrete signals were produced. Similar to a nervous system, the proteinoid solution was able to interpret and respond to various forms of stimulation, as indicated by the results.
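The indicator-function rule that produces the temporal codes amounts to a one-line thresholding of each potential trace; a minimal sketch is given below. The trace values are toy numbers, and 0.0005 is the spike-detection threshold quoted for the Table 2 measurements (there expressed in uA).

```python
def temporal_code(potentials, theta):
    """Binarize a measured trace into a spike train:
    1 where the value exceeds the threshold theta, else 0."""
    return [1 if p > theta else 0 for p in potentials]

# Toy trace thresholded at 0.0005:
trace = [0.0001, 0.0008, 0.0002, 0.0009, 0.0004]
code = temporal_code(trace, 0.0005)   # -> [0, 1, 0, 1, 0]
```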
In addition, it appears that the stability of the power source may affect the modulation of the response. This phenomenon can be attributed to the proteinoids' ability to distinguish between the DPV and the stabilised power source, and to react accordingly. The results of this study indicate that proteinoid microspheres demonstrate an association between molecular properties and firing rates as presented in Figure 6. The firing rate increases significantly with increases in molecular weight and peptide length. This correlation between structural parameters and electrical activity alludes to the possibility of proteinoid microspheres acting as analogs of neurons and forming the basis of a primitive nervous system. The firing rate of proteinoid microspheres can be used as an indicator of their ability to replicate the functions of a neuron, such as transmitting information. This provides evidence for the potential use of proteinoid microspheres as substrates for artificial neural networks. The strength of the linear model suggests that further research should focus on a deeper understanding of the underlying mechanism that leads to the correlation between molecular parameters and firing rate. Moreover, the linear model could be used to predict the firing rates of proteinoid microspheres for improved design of artificial neural networks. The linear model that best fits our data is represented by the following equation. 
\begin{table} \begin{tabular}{|c|c|c|c|} \hline Proteinoid & Number & Mean & Frequency \\ & of & Inter-Spike & of \\ & Spikes & Interval (s) & Spiking (mHz) \\ \hline \hline L-Glu:L-Asp & 726 & 22.24 & 44.97 \\ L-Glu:L-Asp:L-Phe & 359 & 50.48 & 19.80 \\ L-Lys:L-Phe:L-Glu & 210 & 85.75 & 11.66 \\ L-Glu:L-Phe:L-His & 382 & 42.21 & 23.69 \\ L-Glu:L-Phe:PLLA & 555 & 32.71 & 30.57 \\ L-Lys:L-Phe:L-His:PLLA & 195 & 77.29 & 12.94 \\ L-Glu:L-Arg & 29 & 544.68 & 48.28 \\ L-Asp & 779 & 20.71 & 36.26 \\ L-Phe:L-Lys & 28 & 666.11 & 1.50 \\ L-Glu:L-Asp:L-Pro & 8 & 2541.00 & 0.39 \\ L-Phe & 900 & 12.32 & 81.15 \\ L-Glu:L-Phe & 12 & 1412.55 & 0.71 \\ \hline \end{tabular} \end{table} Table 2: Proteinoid spike characteristics: number of spikes, mean inter-spike intervals (sec), and frequency of spiking (mHz). The series of measurements obtained for these proteinoids showed a threshold of spiking at 0.0005 uA with a minimum peak-distance of 5 sec.

Figure 5: The potential of the proteinoid L-Glu:L-Phe when electrically stimulated reveals a temporal code that can be seen in the plots of initial weight and pre- and post-synaptic indices over time.

Figure 6: L-Phe:L-Lys exhibited the highest firing rate at 1008.2875 Hz. L-Phenylalanine exhibits the lowest firing rate at 591.6153 Hz. The firing rates (in Hz) for the proteinoids ‘L-Glu:L-Asp’, ‘L-Glu:L-Asp:L-Phe’, ‘L-Lys:L-Phe:L-Glu’, ‘L-Glu:L-Phe:L-His’, ‘L-Glu:L-Phe:PLLA’, ‘L-Lys:L-Phe:L-His:PLLA’, ‘L-Glu:L-Arg’, ‘L-Asp’, ‘L-Phe:L-Lys’, ‘L-Glu:L-Asp:L-Pro’, ‘L-Phe’, and ‘L-Glu:L-Phe’ are 535.4877, 436.2721, 542.9443, 567.0562, 498.2888, 650.4798, 732.9516, 529.072, 768.2345, 617.3223, 491.5065, and 665.2995, respectively.

Figure 7: For 12 distinct proteinoid microspheres, a QSAR model was used to predict the mean firing rates in Hz, peptide length, and molecular weight in g/mol.

Figure 7 shows the scatter plot of the firing rate versus the molecular weight and the peptide length, along with the regression plane of the model:
\[f(x,y) =p_{00}+p_{10}x+p_{01}y\] \[+p_{20}x^{2}+p_{11}xy+p_{02}y^{2}\] \[+p_{21}x^{2}y+p_{12}xy^{2}+p_{03}y^{3}\] Coefficients (with 95% confidence bounds): \[p_{00} =2349\,\left[-3560,8258\right]\] \[p_{10} =-12.08\,\left[-45.24,21.07\right]\] \[p_{01} =-1770\,\left[-1.172\times 10^{4},8182\right]\] \[p_{20} =-0.1149\,\left[-0.6961,0.4664\right]\] \[p_{11} =48.49\,\left[-128.7,225.7\right]\] \[p_{02} =-2545\,\left[-1.132\times 10^{4},6227\right]\] \[p_{21} =0.04667\,\left[-0.1628,0.2561\right]\] \[p_{12} =-17.24\,\left[-79.99,45.5\right]\] \[p_{03} =1151\,\left[-2675,4977\right]\] The present research indicates that proteinoid microspheres exhibiting higher mean firing rates, predicted QSAR, and % deviations are more effective in transmitting signals than those with lower values. The observed phenomenon can be attributed to the increased capacity of the larger microspheres to accommodate a higher quantity of proteinoids, leading to a greater number of active neurons. Higher QSAR values suggest that microspheres are more likely to initiate a neuronal cascade, which is crucial for effective signal transmission. Mean firing rate is a crucial parameter for assessing the efficacy of a neuron in signal transmission, representing the average number of firings within a specified time frame. QSAR prediction refers to the anticipated capacity of a neuron to activate, derived from experimental data. 
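The fitted surface can be evaluated directly from the reported coefficients; in the sketch below we assume \(x\) is molecular weight (g/mol) and \(y\) is peptide length, matching the axes of Figure 7 (this pairing is our reading, not stated explicitly in the text).

```python
# Coefficients of the fitted surface reported above.
COEF = {
    "p00": 2349, "p10": -12.08, "p01": -1770,
    "p20": -0.1149, "p11": 48.49, "p02": -2545,
    "p21": 0.04667, "p12": -17.24, "p03": 1151,
}

def firing_rate(x, y, c=COEF):
    """Evaluate f(x, y) for molecular weight x and peptide length y."""
    return (c["p00"] + c["p10"] * x + c["p01"] * y
            + c["p20"] * x ** 2 + c["p11"] * x * y + c["p02"] * y ** 2
            + c["p21"] * x ** 2 * y + c["p12"] * x * y ** 2 + c["p03"] * y ** 3)
```

Such a closed-form predictor is what would let firing rates be estimated for new proteinoid compositions before synthesis, as the text suggests.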
The % deviation represents the disparity between the anticipated outcome and the observed outcome of the experiment. Proteinoid microspheres have potential as a substrate for developing artificial brains and unconventional computing devices due to their ability to generate and transmit electrical activity and react to external stimuli, as reported by certain sources [35]. They can form programmable networks through pores and tubes. This study of proteinoid oscillations provides insights into their molecular dynamics and intermolecular interactions.

\begin{table} \begin{tabular}{l c c} \hline Sample & Mean firing rate (Hz) & Predicted QSAR (Hz) \\ \hline L-Glu:L-Asp & 535.4877 & 536.0542 \\ L-Glu:L-Asp:L-Phe & 436.2721 & 492.8753 \\ L-Lys:L-Phe:L-Glu & 542.9443 & 563.6253 \\ L-Glu:L-Phe:L-His & 567.0562 & 521.7084 \\ L-Glu:L-Phe:PLLA & 498.2888 & 551.9483 \\ L-Lys:L-Phe:L-His:PLLA & 650.4798 & -901.3635 \\ L-Glu:L-Arg & 732.9516 & -2041.8 \\ L-Asp & 529.072 & 723.4966 \\ L-Phe:L-Lys & 768.2345 & -2619.1 \\ L-Glu:L-Asp:L-Pro & 617.3223 & 1345.4 \\ L-Phe & 491.5065 & 471.338 \\ L-Glu:L-Phe & 665.2995 & -1084.7 \\ \hline \end{tabular} \end{table} Table 3: Mean firing rate and predicted QSAR of different proteinoid microsphere samples.

## 4 Discussion

The findings of this paper shed light on the potential functions of proteinoids in neuronal circuitry, ranging from providing structure and format for electrical signals to acting as mediators in the transmission of physiological information. It is now possible to investigate communication in biological compounds through electrical oscillations and compare the results with those observed in more complex biological systems. The discussion section will delve deeper into the potential impact of this work on understanding the function of proteinoids in neuronal signaling and its implications for ongoing research into electrical communication in living organisms.
Recent research suggests a correlation between proteinoid oscillations and communication, similar to the correlation discovered by Adamatzky et al. [36] in their investigation of oscillations in fungi. Communication between microspheres is crucial for the development and evolution of complex systems like unconventional computing and autonomous robotics. The nervous system of proteinoid microspheres and their analogues provide insights into their interactions. Communication between microspheres primarily occurs through direct contact, allowing signal transmission through excitability. This involves the spheres coming into contact through surface tension or mechanical pressure (piezoelectricity). While this approach is reliable, chemical differences among the microspheres may hinder it. Electrical coupling is the most common method of communication between microspheres. It enables the transmission of binary information, such as digitally encoded data packets, using electrical signals. Capacitors, inductors, and transistors serve as connecting points between the microspheres. Magnetic coupling is another form of communication, where signals are exchanged through magnetic fields (ferroelectricity). This method utilizes an inductive current generated by one microsphere and received by the other. Optical coupling relies on the transmission of light for information exchange between microspheres [37]. Light pulses are transmitted from one microsphere to another, allowing for relatively high data transfer rates. Communication among microspheres can be categorized into two distinct categories: information exchange and control. Information exchange involves the transmission and reception of data, while control refers to the transmission and reception of commands. Microspheres engage in information exchange, interaction, and resource sharing to facilitate the advancement of complex systems and procedures. 
Figure 8 provides insights into potential interpretations and analogies of a proteinoid microsphere nervous system. It offers an understanding of microspheres' interactions and network organization similar to biological nervous systems. The figure presents two distinct architectures showcasing the potential functions of proteinoid microspheres. The first architecture depicts spiking networks composed of leaky integrate-and-fire neurons that receive an external input force \(F_{in}(t)\) and produce an output \(F_{out}(t)\) via synapses \(W\). This architecture resembles the nervous system of advanced organisms, as the input and output signals exhibit similar behavior and generation patterns to those found in a typical nervous system. The second architecture utilizes continuous-variable networks to process an input from an external force \(F_{in}(t)\) and an internal output \(F_{out}(t)\) to produce the corresponding output. Continuous variables are employed instead of binary states of neurons, allowing for a wider range of interpretations and analogies of the nervous system. This network architecture provides a more realistic representation of the nervous system and its associated functions. Proteinoid microspheres have the potential to function as protoneural networks, as shown in Figure 8 with its two distinct architectures. Proteinoid microspheres serve as the fundamental units of the network, enabling basic communication and potential capacity for simple computations. As the network expands, it can develop intricate architectures that leverage the inherent connectivity of interconnected molecules, enabling significantly advanced functions and capabilities. This distinguishes proteinoid microsphere networks from conventional computing architectures that rely on external wiring for communication.

Figure 8: Network architectures of proteinoid microspheres. a) Spiking network. A network of \(N\) recurrently connected leaky integrate-and-fire neurons (green circles) receives an input \(F_{in}(t)\) (grey circle) through synapses \(U\), and generates an output \(F_{out}(t)\) (red circle) through synapses \(W\). b) Continuous-variable network. A network of \(\tilde{N}\) recurrently connected "rate" units (blue circles) receives inputs \(F_{in}(t)\) and \(F_{out}(t)\) through synapses \(\tilde{U}\) and \(u\), respectively [38].

Proteinoid microspheres and biological neural networks share similarities in structure and function. Both consist of interconnected units capable of processing information and exhibiting adaptive behavior. Proteinoid microspheres are artificially synthesized structures made up of chemically bonded amino acids created in a laboratory setting. In contrast, neural networks are complex biological systems composed of individual neurons that work together in a coordinated manner. Proteinoid microspheres exhibit a less complex architecture compared to neural networks, with each node accountable for a single function, while neurons can process multiple inputs and perform various roles, such as transmitting signals between neurons or serving as synapses. Proteinoid microspheres have limited behavioral capabilities, primarily focused on simple tasks like self-repair and shape adaptation due to their inherent lack of complexity. In contrast, biological neural networks possess advanced capabilities such as memory formation, decision-making, and learning. While both proteinoid microspheres and biological neural networks can process information, the intricate and adaptable nature of biological neural networks sets them apart from proteinoid microspheres [39; 40; 41; 42]. The results obtained from detecting spikes using differential pulse voltammetry (DPV) indicate that proteinoid microspheres could have had a significant impact on the emergence of life. Proteinoid microspheres possess the property of self-assembly, allowing for the aggregation of essential elements necessary for the genesis of protocells and the formation of intricate structures.
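The spiking architecture of Figure 8a can be illustrated with a single leaky integrate-and-fire unit. The sketch below is a textbook LIF simulation; all parameter values (time constant, threshold, drive levels) are our own illustrative choices, not taken from the paper.

```python
def lif_spike_times(i_in, t_max=1.0, dt=1e-4, tau=0.02, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: tau * dV/dt = -V + i_in; spike and reset at v_th."""
    v, spikes = 0.0, []
    for step in range(int(t_max / dt)):
        v += dt / tau * (-v + i_in)   # Euler step of the leak + drive
        if v >= v_th:                 # threshold crossing -> emit a spike
            spikes.append(step * dt)
            v = v_reset
    return spikes

# A drive above threshold produces sustained firing; a sub-threshold drive, none.
print(len(lif_spike_times(1.5)), len(lif_spike_times(0.5)))
```

With a supra-threshold drive the unit fires periodically (roughly every \(\tau\ln 3\) seconds for the values chosen here), which is the mean-firing-rate behavior discussed earlier for proteinoid samples.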
Additionally, the microspheres have the ability to retain and convey data, a crucial prerequisite for the origin of biological existence. The collective capabilities of proteinoid microspheres may have facilitated the emergence and development of primitive cells during the initial phases of life. Proteinoid microspheres offer a fresh perspective for advancing our understanding of neural circuits. As researchers delve deeper into the system's adaptability, it is anticipated that proteinoids will unlock new insights in currently unexplored domains. These discoveries have the potential to pave the way for improved treatments for neurological disorders and advancements in medical technology and unconventional computing. ## 5 Conclusion The findings of the study highlight the promising compatibility between differential pulse voltammetry and proteinoid nano-brains, opening up a new avenue for exploring these unique systems. The results suggest that utilizing differential pulse voltammetry as a tool can greatly contribute to understanding the functionality of proteinoid nano-brains, offering valuable insights into their behavior and potential applications. Further research in this area could unlock a deeper comprehension of these nano-brains and their potential role in the development of intelligent machines, potentially revolutionizing the field of artificial intelligence. ## Acknowledgement The research was supported by EPSRC Grant EP/W010887/1 "Computing with proteinoids". The authors are grateful to David Paton for helping with SEM imaging and to Neil Phillips for helping with instruments.
2310.03187
Synthesis of Data-Driven Nonlinear State Observers using Lipschitz-Bounded Neural Networks
This paper focuses on the model-free synthesis of state observers for nonlinear autonomous systems without knowing the governing equations. Specifically, the Kazantzis-Kravaris/Luenberger (KKL) observer structure is leveraged, where the outputs are fed into a linear time-invariant (LTI) system to obtain the observer states, which can be viewed as the states nonlinearly transformed by an immersion mapping, and a neural network is used to approximate the inverse of the nonlinear immersion and estimate the states. In view of the possible existence of noises in output measurements, this work proposes to impose an upper bound on the Lipschitz constant of the neural network for robust and safe observation. A relation that bounds the generalization loss of state observation according to the Lipschitz constant, as well as the $H_2$-norm of the LTI part in the KKL observer, is established, thus reducing the model-free observer synthesis problem to that of Lipschitz-bounded neural network training, for which a direct parameterization technique is used. The proposed approach is demonstrated on a chaotic Lorenz system.
Wentao Tang
2023-10-04T22:19:53Z
http://arxiv.org/abs/2310.03187v1
# Synthesis of Data-Driven Nonlinear State Observers using Lipschitz-Bounded Neural Networks ###### Abstract This paper focuses on the _model-free_ synthesis of state observers for nonlinear autonomous systems without knowing the governing equations. Specifically, the Kazantzis-Kravaris/Luenberger (KKL) observer structure is leveraged, where the outputs are fed into a linear time-invariant (LTI) system to obtain the observer states, which can be viewed as the states nonlinearly transformed by an immersion mapping, and a neural network is used to approximate the inverse of the nonlinear immersion and estimate the states. In view of the possible existence of noises in output measurements, this work proposes to impose an upper bound on the Lipschitz constant of the neural network for robust and safe observation. A relation that bounds the generalization loss of state observation according to the Lipschitz constant, as well as the \(H_{2}\)-norm of the LTI part in the KKL observer, is established, thus reducing the model-free observer synthesis problem to that of Lipschitz-bounded neural network training, for which a direct parameterization technique is used. The proposed approach is demonstrated on a chaotic Lorenz system. ## I Introduction For nonlinear systems that arise from realistic engineering applications such as transport-reaction processes, modern control theory relies on _state-space representations_ for their modeling, analysis, and control [1, 2, 3]. Recent advances in nonlinear control have highlighted the role of data-driven (machine learning) techniques in identifying governing equations or underlying dynamical structures [4, 5, 6], analyzing system and control-theoretic properties [7, 8], and synthesizing model-free controllers [9, 10, 11]. 
In these efforts, it is often assumed that the _state_ information is available for analysis or control; for example, in the reinforcement learning (RL) literature, it is common to apply stochastic first-order optimization to learn a value (cost) function or \(Q\) function based on temporal actions and state measurements. In many (if not most) control engineering applications, such as in chemical processes, however, it is more likely that the states are not measurable. Hence, for nonlinear control in a state-space framework, a _state observer_ is necessary, whereby the states are estimated based on input and output history [12]. A recent review on model-based approaches to synthesize state observers is found in Bernard, Andrieu, and Astolfi [13]. A classical form of state observer for linear systems is known as the Luenberger observer [14], which is an auxiliary linear time-invariant (LTI) system that uses the plant outputs as inputs and returns state estimates. The observer states are in fact a linear transform of the plant states [15]. The idea was extended to nonlinear systems in the seminal work of Kazantzis and Kravaris [16]. Their Kazantzis-Kravaris/Luenberger (KKL) observer (as named in Andrieu and Praly [17]) still uses an LTI system to convert plant outputs to observer states, which turn out to be the plant states transformed via a nonlinear immersion. Thus, the observer synthesis problem reduces to the determination of this nonlinear immersion and its inverse, via solving (model-based) partial differential equations (PDEs). Such a KKL observer was extended from autonomous to actuated systems in [18], where the LTI part is replaced by an input-affine one with an additional nonlinear drift term associated with the actuated inputs. This paper focuses on the _synthesis of a KKL observer_ in a _model-free_ manner, without assuming prior knowledge of the plant dynamics.
This is motivated by two reasons: (i) many nonlinear systems that involve complex kinetic or kinematic mechanisms are often hard to model accurately, and (ii) it can be challenging to solve the associated PDEs, especially in high-dimensional state spaces (in fact, there may not be well-posed boundary conditions). In recent years, there have been several works that pioneered the use of neural networks in the observer problem. For example, Ramos et al. [19] first trained neural networks to approximate the inverse immersion map to reconstruct the actual states from observer states. Then, the optimization of pole placement was considered along with the training of the inverse immersion in [20]. Niazi et al. [21] used physics-informed neural networks (PINNs) to approach a surrogate solution to solve the PDEs. Miao and Gatsis [22] formulated a dynamic optimization problem to minimize the accumulated squared state observation error, whereby the optimality condition, obtained through calculus of variations, results in neural ODEs. It is commonly known that neural networks, when over-parameterized with large widths and depths, may suffer from a deteriorated capability of generalization. It has also been argued that neural networks can be fragile to adversarial attacks on the training data and thus must be equipped with self-defense mechanisms that warrant robustness [23, 24]. In particular, controlling the Lipschitz constant of the mapping specified by the neural network has been studied as a promising approach [25, 26, 27]. However, in these works, estimating and minimizing the Lipschitz constant requires the use of semidefinite programming routines, which have high complexity when the number of neurons is large.
An alternative way, called _direct parameterization_, as recently proposed in Wang and Manchester [28], is to translate the Lipschitz bound constraint into a special architecture of the neural layers, thus allowing the use of typical back-propagation (BP) to train the network in an unconstrained way. Hence, in this work, the Wang-Manchester direct parameterization is adopted to train Lipschitz-bounded neural networks in a KKL state observer for any unknown nonlinear autonomous system. The paper establishes a relation between the generalized observation error and the Lipschitz bound of the neural network as well as the \(H_{2}\)-norm of the LTI observer dynamics, under a typical white noise assumption on the plant outputs. Hence, by varying the Lipschitz bound, the optimal observer can be synthesized. ## II Preliminaries We consider a nonlinear autonomous system: \[\dot{x}(t)=f(x(t)),\quad y(t)=h(x(t)) \tag{1}\] where \(x(t)\in\mathcal{X}\subseteq\mathbb{R}^{n}\) is the vector of states and \(y(t)\in\mathbb{R}^{m}\) represents the outputs. For simplicity, we will consider \(m=1\). It is assumed that \(f\) and \(h\) are smooth on \(\mathcal{X}\) to guarantee existence and uniqueness of solutions, but are unknown, precluding model-based synthesis. ### _KKL Observer_ For nonlinear systems, the KKL observer generalizes the notion of Luenberger observers, which were restricted to linear systems [14], providing a generic method for state observation with mild assumptions to guarantee existence. Specifically, the KKL observer for (1) is expressed as \[\dot{z}(t)=Az(t)+By(t),\quad\hat{x}(t)=T^{\dagger}(z(t)). \tag{2}\] Here the observer states \(z\in\mathbb{R}^{n_{z}}\) follow an LTI dynamics.
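The LTI part of the observer (2) is straightforward to simulate once \((A,B)\) are fixed; only the map \(T^{\dagger}\) needs to be learned. The sketch below is our own minimal illustration (the dimensions, the stable diagonal \(A\), and the toy output signal are all our choices, not from the paper): it integrates \(\dot z = Az + By\) with a forward-Euler step.

```python
import numpy as np

# Toy LTI filter z' = A z + B y, the linear part of the KKL observer (2).
# A, B and the output signal y(t) below are illustrative placeholders.
A = -np.diag([3.0, 2.0, 1.0])          # stable (all eigenvalues in the open left half-plane)
B = np.ones(3)

assert np.all(np.linalg.eigvals(A).real < 0)  # sanity check on stability

def filter_output(y, dt=1e-3):
    """Euler integration of z' = A z + B y, starting from z(0) = 0."""
    z = np.zeros(3)
    zs = []
    for yk in y:
        z = z + dt * (A @ z + B * yk)
        zs.append(z.copy())
    return np.array(zs)

t = np.arange(0.0, 10.0, 1e-3)
zs = filter_output(np.sin(t))          # z would then feed the learned map T^dagger
print(zs.shape)
```

Because \(A\) is stable and the output signal is bounded, the filtered observer states remain bounded; in the full observer these states are passed through \(T^{\dagger}\) to recover \(\hat{x}\).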
The matrices \(A\) and \(B\) are chosen under the requirements of (i) controllability of the pair \((A,B)\), (ii) Hurwitz property of \(A\), and (iii) sufficiently high dimension of \(z\) (\(n_{z}\)), which should be at least \(n+1\) if \((A,B)\) is complex [17] and at least \(2n+1\) if \((A,B)\) is real [29]. The mapping from the observer states \(z\) to the state estimates \(\hat{x}\) is a static one, \(T^{\dagger}\), which is the left-pseudoinverse of a nonlinear immersion \(T\) (i.e., a differentiable injection satisfying \(T^{\dagger}\circ T=\mathsf{id}\)). This immersion \(T\) should satisfy the following PDE: \[\frac{\partial T}{\partial x}(x)f(x)=AT(x)+Bh(x),\quad\forall x\in\mathcal{X}, \tag{3}\] where \(\partial T/\partial x\) denotes the Jacobian matrix of \(T\). It can be easily verified that under the above PDE, \(dT(x)/dt=AT(x)+By\), and thus \(z-T(x)\) has an exponentially decaying dynamics, as \(A\) is Hurwitz. The conditions for the existence of a KKL observer, namely the solution to its defining PDE (3), have been established based on the condition of backward distinguishability. Below, we denote the solution to the ODEs \(\dot{x}=f(x)\) at time \(t\) with initial condition \(x(0)=\xi\) as \(\Phi_{t}(\xi)\). For any open set \(\mathcal{O}\) in \(\mathcal{X}\), denote the backward time instant after which the solution does not escape this region by \(\varsigma_{\mathcal{O}}(\xi)=\inf\{t|\Phi_{t}(\xi)\in\mathcal{O}\}\). Also denote \(\mathcal{O}+\epsilon:=\{\xi+\eta|\xi\in\mathcal{O},\|\eta\|<\epsilon\}\). **Definition 1** (Backward distinguishability).: _The system (1) is \((\mathcal{O},\epsilon)\)-backward distinguishable if for any distinct \(\xi,\xi^{\prime}\in\mathcal{X}\) there exists a negative \(t>\varsigma_{\mathcal{O}+\epsilon}(\xi)\wedge\varsigma_{\mathcal{O}+\epsilon}(\xi^{\prime})\) such that \(h(\Phi_{t}(\xi))\neq h(\Phi_{t}(\xi^{\prime}))\)._ **Fact 1** (Existence of KKL observer, cf. Brivadis et al.
[29]).: _Assume that there is an open \(\mathcal{O}\subseteq\bar{\mathcal{X}}\) and a positive constant \(\epsilon\) such that the system (1) is \((\mathcal{O},\epsilon)\)-backward distinguishable. Then there exists a constant \(\rho>0\) such that for all but a Lebesgue-zero-measure set of \((A,B)\in\mathbb{R}^{(2n+1)\times(2n+1)}\times\mathbb{R}^{(2n+1)}\), if \(A+\rho I\) is Hurwitz, then there exists an immersion \(T:\mathcal{O}\rightarrow\mathbb{R}^{(2n+1)}\) solving the PDEs (3)._ The above theorem clarifies that as long as the spectrum of \(A\) is restricted to the left of \(-\rho+i\mathbb{R}\), the LTI dynamics in the KKL observer can be almost arbitrarily assigned. Once \((A,B)\) are chosen, the remaining question for synthesizing a KKL observer (2) is to numerically determine the solution. In view of the computational challenge in directly solving the PDEs (3) and the recent trend of handling the problem by neural approaches [19, 20, 21], this work will seek to approximate \(T^{\dagger}\) by a neural network. Yet, instead of using a vanilla multi-layer perceptron architecture, a Lipschitz-bounded neural network will be adopted, which safeguards the generalization performance of state observation, as discussed in §III. This overall idea is illustrated in Fig. 1. ### _Lipschitz-Bounded Neural Networks_ Consider a \(\nu\)-layer neural network \(\hat{x}=S(z,\theta)\) with all parameters denoted as a single vector \(\theta\). Without loss of generality, assume that the activation function (applied element-wise to vectors) is \(\sigma:\mathbb{R}\rightarrow\mathbb{R}\), with slope bounded in \([0,1]\) (in this work, rectified linear units (ReLU) are used to prevent gradient decay in BP training).
The neural network then can be expressed as \[\begin{split}& z^{\ell+1}=\sigma(W^{\ell}z^{\ell}+b^{\ell}),\ \ \ell=0,\ldots,\nu-1\\ & z^{0}=z,\quad\hat{x}=W^{\nu}z^{\nu}+b^{\nu}.\end{split} \tag{4}\] where \(W^{0},\ldots,W^{\nu}\) are the weight matrices and \(b^{0},\ldots,b^{\nu}\) are the biases. In total, there are \(\nu\) activation layers inserted between \(\nu+1\) fully connected layers. \(z\) represents the inputs to the neural network and \(\hat{x}\) is the output vector, as we will use such a neural network to approximate the \(T^{\dagger}\) mapping in the KKL observer.

Fig. 1: KKL observer with a Lipschitz-bounded neural network to be trained.

Given a neural network with fixed parameters \(\theta=(W^{0},b^{0},\ldots,W^{\nu},b^{\nu})\), a rough estimate of the Lipschitz constant of \(S\) can be obviously obtained as \[L_{S}(\theta)=\prod_{\ell=0}^{\nu}\|W^{\ell}\|_{2}, \tag{5}\] where \(\|\cdot\|_{2}\) for a matrix refers to its operator norm induced by the \(\ell_{2}\)-norm of vectors, i.e., its largest singular value. To reduce the conservativeness, Fazlyab et al. [25] leverage the control-theoretic tool of integral quadratic constraints to formulate the Lipschitz bound condition as a linear matrix inequality, thus estimating the Lipschitz constants and training Lipschitz-bounded neural networks through solving semidefinite programming problems [27]. The pertinent matrix size, however, scales proportionally with the total number of neurons, which results in high computational complexity unless the neural network is very small. The recent work of Wang and Manchester [28] proposed a _direct parameterization_ approach to accommodate the Lipschitz bound by a special design of the neural network architecture instead of imposing extra parameter constraints. By this approach, the training of neural networks is an unconstrained optimization problem and is thus amenable to the typical, computationally lightweight back-propagation (BP) routine.
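The crude bound (5) is trivial to compute: it is just the product of the layer-wise spectral norms. The sketch below is our own illustration (the weight matrices are random placeholders, not from the paper); as noted above, this estimate is generally more conservative than SDP-based estimates.

```python
import numpy as np

def naive_lipschitz_bound(weights):
    """Product of the spectral norms of the weight matrices, as in (5)."""
    return float(np.prod([np.linalg.norm(W, ord=2) for W in weights]))

# Illustrative weights of a 2-hidden-layer network (random placeholders).
rng = np.random.default_rng(0)
Ws = [rng.normal(size=(8, 4)), rng.normal(size=(8, 8)), rng.normal(size=(3, 8))]
print(naive_lipschitz_bound(Ws))
```

For slope-restricted activations in \([0,1]\), this product upper-bounds the Lipschitz constant of (4), but it ignores how the layers compose, which is the source of the conservativeness discussed above.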
Wang-Manchester direct parameterization is conceptually related to, and arguably motivated by, the theory of controller parameterization [30, 31]. **Definition 2** (\(1\)-Lipschitz sandwich layer, cf. [28]).: _Given parameters \(X\in\mathbb{R}^{d\times d}\), \(Y\in\mathbb{R}^{c\times d}\), \(s\in\mathbb{R}^{d}\), and \(b\in\mathbb{R}^{d}\), a \(1\)-Lipschitz sandwich layer is defined as such a mapping \(\Xi:\mathbb{R}^{c}\rightarrow\mathbb{R}^{d}\) that maps any \(h\in\mathbb{R}^{c}\) into a \(\Xi(h;X,Y,s,b)\in\mathbb{R}^{d}\) according to the following formulas:_ \[Z =X-X^{\top}+Y^{\top}Y,\ \Psi_{s}=\mathrm{diag}(e^{s}) \tag{6}\] \[M_{X,Y} =\left[(I+Z)^{-1}(I-Z)\right]^{\top},\] \[N_{X,Y} =\left[-2Y(I+Z)^{-1}\right]^{\top},\] \[\Xi(h) =\sqrt{2}M_{X,Y}^{\top}\Psi_{s}\sigma(\sqrt{2}\Psi_{s}^{-1}N_{X,Y}h+b).\] It turns out that the Lipschitz constant of the above-defined sandwich layer is guaranteed to be upper bounded by \(1\) [28, Theorem 3.3]. The mapping from the input \(h\) to the output \(\Xi(h)\) can be regarded as comprising an activation layer in the midst of two fully connected layers with related parameters. The operation from \((X,Y)\) to \((M,N)\) is known as the _Cayley transform_. The structure and the parameters of a sandwich layer are shown in Fig. 2. Thus, by stacking a number of such sandwich layers after a scaling by \(\sqrt{\gamma}\) and before a non-activated half-sandwich layer (meaning a layer containing only the terms in the parentheses of \(\Xi\) as in Equation (6)), a neural network with Lipschitz bound \(\gamma\) can be obtained, for any provided \(\gamma>0\).
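Definition 2 is concrete enough to implement directly. The following NumPy sketch is our own (the dimensions, random parameters, and the empirical check are illustrative choices; only the formulas come from (6)): it builds the Cayley transform and verifies on random input pairs that the layer contracts distances, consistent with the Lipschitz bound of [28, Theorem 3.3].

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda v: np.maximum(v, 0.0)   # slope-restricted activation in [0, 1]

def sandwich(h, X, Y, s, b):
    """1-Lipschitz sandwich layer of Definition 2, mapping R^c -> R^d."""
    d = X.shape[0]
    Z = X - X.T + Y.T @ Y                      # skew part plus Y^T Y, d x d
    W = np.linalg.inv(np.eye(d) + Z)           # (I + Z)^{-1}, always invertible
    M = (W @ (np.eye(d) - Z)).T                # M_{X,Y} via the Cayley transform
    N = (-2.0 * Y @ W).T                       # N_{X,Y}, d x c
    psi = np.exp(s)                            # diagonal of Psi_s
    return np.sqrt(2.0) * (M.T * psi) @ relu(np.sqrt(2.0) * (N @ h) / psi + b)

c, d = 3, 4                                    # illustrative input/output widths
X, Y = rng.normal(size=(d, d)), rng.normal(size=(c, d))
s, b = rng.normal(size=d), rng.normal(size=d)

# Empirical Lipschitz check on random input pairs.
ratios = []
for _ in range(200):
    h1, h2 = rng.normal(size=c), rng.normal(size=c)
    num = np.linalg.norm(sandwich(h1, X, Y, s, b) - sandwich(h2, X, Y, s, b))
    ratios.append(num / np.linalg.norm(h1 - h2))
print(max(ratios))  # empirically bounded by 1, consistent with the theorem
```

Note that \(I+Z\) is always invertible since its symmetric part is \(I+Y^{\top}Y\succ 0\), so the parameters \((X,Y,s,b)\) are genuinely free, which is what makes unconstrained BP training possible.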
**Definition 3** (Wang-Manchester network).: _In this work, we refer by Wang-Manchester network, \(S(\cdot|\theta)\), to a neural network with the following architecture:_ \[h^{0} =\sqrt{\gamma}z; \tag{7}\] \[h^{\ell+1} =\Xi(h^{\ell};X^{\ell},Y^{\ell},s^{\ell},b^{\ell}),\ \ell=0,1,\ldots,\nu-1;\] \[\hat{x} =\sqrt{\gamma}N_{X^{\nu},Y^{\nu}}h^{\nu}+b^{\nu}.\] _Here the parameters include_ \[\theta=\{X^{\ell},Y^{\ell},s^{\ell},b^{\ell}\}_{\ell=0}^{\nu-1}\cup\{X^{\nu},Y^{\nu},b^{\nu}\}\] _which can be trained in an unconstrained way using back-propagation. The inputs and outputs of the network are \(z\) and \(\hat{x}\), respectively._ The above-defined Wang-Manchester network satisfies \(\|S(\cdot|\theta)\|_{\mathrm{Lip}}\leq\gamma\). In this work, the network is defined and trained with data using PyTorch (version 2.0.1) on Google Colaboratory, which allows auto-differentiation of a user-defined loss function with respect to the neural network parameters, so that the parameters can be iteratively updated. ## III Analysis on the Generalized Loss Here we shall provide a justification for requiring a Lipschitz bound on the neural network. We will make the following standing assumptions on the training data collection procedure for subsequent analysis. **Assumption 1** (Ergodicity).: _Assume that a sample trajectory is collected from the system, whose initial state is sampled from a probability distribution \(\mathcal{F}\) on \(\mathcal{X}\). The distribution \(\mathcal{F}\) is time-invariant (i.e., an eigenmeasure of the Perron-Frobenius operator), so that any point of the trajectory comes from \(\mathcal{F}\)._ Suppose that the LTI dynamics of the KKL observer, \((A,B)\), is fixed. Then the observer states can be simulated from this linear dynamics. **Assumption 2** (Noisy measurements).: _Assume that the input signal for this LTI system is not the noise-free measurement \(y=h(x)\), but instead contains a white noise of unknown covariance \(\sigma^{2}\).
In other words, the simulation from \(y\) to \(z\) is_ \[\begin{split}&\dot{z}=Az+By+w,\quad\mathbb{E}[w(t)]=0,\,\forall t\in\mathbb{R}\\ &\mathbb{E}[w(t)w(s)]=\delta(t-s)\sigma^{2},\,\forall t,s\in\mathbb{R}.\end{split} \tag{8}\] In this way, the collected sample, denoted as \(\{(x(t_{i}),z(t_{i}))\}_{i=1}^{m}=\{(x_{i},z_{i})\}_{i=1}^{m}\), in fact satisfies the following relation: \[z_{i}=\bar{z}_{i}+v_{i},\quad v_{i}=\int_{-\infty}^{t_{i}}g(\tau)w(t_{i}-\tau)d\tau. \tag{9}\] Here \(g(\tau)\) is the impulse response of the LTI system \((A,B)\); \(\bar{z}_{i}\) is the value of \(z(t_{i})\) that would otherwise be obtained if there were no white noise in the output measurements. **Assumption 3** (Sufficient decay).: _After a sufficiently long time \(t_{\epsilon}\), \(\|z-T(x)\|\leq\epsilon\) for a prescribed small \(\epsilon>0\). Here \(T\) is the nonlinear immersion map specified by (3)._ Then, \(\|\bar{z}_{i}-T(x_{i})\|\leq\epsilon\). Thus, we may write \[z_{i}=T(x_{i})+v_{i}+v_{i}^{\prime},\quad\|v_{i}^{\prime}\|\leq\epsilon. \tag{10}\] Now we suppose that the sample \(\{(x_{i},z_{i})\}_{i=1}^{m}\) is used to train a neural network \(S(\cdot|\theta)\), which gives the state observations \(\hat{x}_{i}=S(z_{i}|\theta)\), and that the resulting empirical loss, defined as the average squared observation error, is \[\hat{R}_{S}(\theta):=\frac{1}{m}\sum_{i=1}^{m}\|\hat{x}_{i}-x_{i}\|^{2}. \tag{11}\] Then we get \[\hat{R}_{S}(\theta)=\frac{1}{m}\sum_{i=1}^{m}\left\|S\left(T(x_{i})+v_{i}+v_{i}^{\prime}|\theta\right)-x_{i}\right\|^{2}. \tag{12}\] **Assumption 4**.: _Assume that the probability distribution \(\mathcal{F}\) is supported on a compact set, i.e., if \(x\sim\mathcal{F}\), then \(x\) is almost surely bounded._ It follows that both \(S(\cdot|\theta)\) and \(T\) should be Lipschitz continuous. Denote their Lipschitz constants as \(L_{S}(\theta)\) and \(L_{T}\), respectively.

Fig. 2: A sandwich layer and its parameters.
We have \[\|S\left(T(x_{i})+v_{i}+v_{i}^{\prime}|\theta\right)-S\left(T(x_{i})|\theta\right)\|\leq L_{S}(\theta)L_{T}(\|v_{i}\|+\epsilon). \tag{13}\] Denote by \(D\) the essential upper bound of \(\|x\|\) on the distribution \(\mathcal{F}\). As such, without loss of generality, let \(S(T(0))=0\). Then \(\|x-S(T(x))\|\leq(L_{S}(\theta)L_{T}+1)D\) almost surely. Combining the above two equations, we further get \[\frac{1}{m}\sum_{i=1}^{m}\|x_{i}-S(T(x_{i})|\theta)\|^{2}\leq \hat{R}_{S}(\theta) \tag{14}\] \[\quad+\frac{1}{m}\sum_{i=1}^{m}L_{S}(\theta)L_{T}(L_{S}(\theta)L_{T}+1)D(\|v_{i}\|+\epsilon)\] \[\quad+\frac{1}{m}\sum_{i=1}^{m}L_{S}^{2}(\theta)L_{T}^{2}(\|v_{i}\|+\epsilon)^{2}.\] That is, \[\frac{1}{m}\sum_{i=1}^{m}\|x_{i}-S(T(x_{i})|\theta)\|^{2}\leq \hat{R}_{S}(\theta) \tag{15}\] \[\quad+\frac{1}{m}\sum_{i=1}^{m}(L_{S}(\theta)L_{T}+1)^{2}\left(D+\|v_{i}\|+\epsilon\right)(\|v_{i}\|+\epsilon).\] The left-hand side gives an estimation of the empirical loss when observing the states from perfect output measurements (namely when \(z_{i}=T(x_{i})\), \(i=1,\ldots,m\)). Further expanding the last term and applying the Cauchy-Schwarz inequality, we have \[\frac{1}{m}\sum_{i=1}^{m}\|x_{i}-S(T(x_{i})|\theta)\|^{2}\leq\hat{R}_{S}(\theta)+(L_{S}(\theta)L_{T}+1)^{2}\times \tag{16}\] \[\quad\left[\frac{1}{m}\sum_{i=1}^{m}\|v_{i}\|^{2}+(D+2\epsilon)\left(\frac{1}{m}\sum_{i=1}^{m}\|v_{i}\|^{2}\right)^{1/2}+(D+\epsilon)\epsilon\right].\] Given that \(v_{i}\) is the response of the LTI system \((A,B)\) to a white noise of covariance \(\sigma^{2}\), \(\mathbb{E}(\|v_{i}\|^{2})=h^{2}\sigma^{2}\), where \(h\) is the \(H_{2}\)-norm of the (Hurwitz-stable) system \((A,B)\). Therefore, \[\mathbb{E}\left(\frac{1}{m}\sum_{i=1}^{m}\|v_{i}\|^{2}\right)=h^{2}\sigma^{2}. \tag{17}\] Let \(\alpha\) be a small positive number.
With confidence \(1-\alpha/2\), a conservative estimation for its upper bound can be found according to Markov inequality: \[\frac{1}{m}\sum_{i=1}^{m}\|v_{i}\|^{2}\leq\frac{1}{1-\alpha/2}h^{2}\sigma^{2}. \tag{18}\] Therefore, \[\frac{1}{m}\sum_{i=1}^{m}\|x_{i}-S(T(x_{i})|\theta)\|^{2}\leq\hat {R}_{S}(\theta)+(L_{S}(\theta)L_{T}+1)^{2}\times \tag{19}\] \[\quad\left[\frac{h^{2}\sigma^{2}}{1-\alpha/2}+(D+2\epsilon)\frac{ h\sigma}{\sqrt{1-\alpha/2}}+(D+\epsilon)\epsilon\right].\] Finally, we note that for \(x\sim\mathcal{F}\), now that \(\|x-S(T(x)|\theta)\|\leq(L_{S}(\theta)L_{T}+1)D\) almost surely, by Hoeffding's inequality, for any \(\varepsilon>0\), \[\mathbb{P}\bigg{(}\bigg{|}\frac{1}{m}\sum_{i=1}^{m}\|x_{i}-S(T(x_{ i})|\theta)\|^{2}-\mathbb{E}\left(\|x-S(T(x))\|^{2}\right)\bigg{|} \tag{20}\] \[\geq(L_{S}(\theta)L_{T}+1)^{2}D^{2}\varepsilon\bigg{)}\leq 2\exp \left(-2m\varepsilon^{2}\right).\] Thus, with confidence \(1-\alpha/2\), we have \[\left|\frac{1}{m}\sum_{i=1}^{m}\|x_{i}-S(T(x_{i})|\theta)\|^{2}- \mathbb{E}\left(\|x-S(T(x)|\theta)\|^{2}\right)\bigg{|}\right. \tag{21}\] \[\quad<(L_{S}(\theta)L_{T}+1)^{2}D^{2}\sqrt{\frac{\ln(4/\alpha)}{2m }}.\] Combining (19) and (21), we have the following theorem. **Theorem 1**.: _Under the afore-mentioned assumptions, the generalization loss, defined as_ \[R_{S}(\theta)=\mathbb{E}\left(\|x-S(T(x)|\theta)\|^{2}\right), \tag{22}\] _is related to the empirical loss as defined in (11) by_ \[R_{S}(\theta)<\hat{R}_{S}(\theta)+(L_{S}(\theta)L_{T}+1)^{2}\Delta(h,\sigma, \alpha,\epsilon). \tag{23}\] _with confidence \(1-\alpha\) (\(\alpha\in(0,1)\)). Here_ \[\Delta(h,\sigma,\alpha,\epsilon)= D^{2}\sqrt{\frac{\ln(4/\alpha)}{2m}}+\frac{h^{2}\sigma^{2}}{1- \alpha/2} \tag{24}\] \[+(D+2\epsilon)\frac{h\sigma}{\sqrt{1-\alpha/2}}+(D+\epsilon)\epsilon.\] The theorem shows that the Lipschitz constant of the neural network trained plays an important role in the generalized performance of the resulting state observer. 
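The terms of the slack \(\Delta\) in (24) are easy to tabulate numerically. The sketch below evaluates \(\Delta(h,\sigma,\alpha,\epsilon)\) and the right-hand side of (23) for a few Lipschitz constants; all parameter values (\(D\), \(m\), \(L_T\), the noise level, the empirical loss) are our own illustrative choices, not from the paper.

```python
import math

def Delta(h, sigma, alpha, eps, D=1.0, m=2000):
    """Evaluate the slack term (24) of Theorem 1 (D, m are illustrative defaults)."""
    return (D**2 * math.sqrt(math.log(4.0 / alpha) / (2.0 * m))
            + h**2 * sigma**2 / (1.0 - alpha / 2.0)
            + (D + 2.0 * eps) * h * sigma / math.sqrt(1.0 - alpha / 2.0)
            + (D + eps) * eps)

def rhs(emp_loss, L_S, L_T, h, sigma, alpha, eps):
    """Right-hand side of the generalization bound (23)."""
    return emp_loss + (L_S * L_T + 1.0)**2 * Delta(h, sigma, alpha, eps)

# A larger Lipschitz constant amplifies the slack term, as the theorem indicates.
for L_S in (1.0, 10.0, 100.0):
    print(L_S, rhs(0.01, L_S, 1.0, 1.0, 0.1, 0.05, 0.01))
```

Tabulating the bound this way makes the qualitative message of the theorem visible: the \((L_S L_T + 1)^2\) prefactor dominates once the Lipschitz constant is large relative to the noise-dependent terms.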
The effect of \(L_{S}(\theta)\) is mainly that of amplifying the first and third terms on the right-hand side of (24), supposing that \(\sigma\) and \(\epsilon\) are small enough. These two terms respectively arise from (i) the overall upper bound of the observation error \(\|x-S(T(x)|\theta)\|\), which acts as a coefficient before the Hoeffding term \(\sqrt{\ln(4/\alpha)/2m}\), and (ii) the effect of noisy measurements on the observer states. **Remark 1**.: _It is noted that the performance bound stated in the above theorem can be conservative. The conclusion that \(L_{S}(\theta)\) amplifies the generalization error and measurement noise should be considered as qualitative. The theorem also does not suggest a tractable algorithm to optimize the selection of \((A,B)\) along with the neural network \(S(\cdot|\theta)\), as the dependence of \(L_{T}\) on \((A,B)\) is highly implicit. Hence, this paper does not consider the problem of simultaneously training \((A,B)\) and the neural network._ ## IV Case Study Let us consider a Lorenz system in a 3-dimensional state space with chaotic behavior. The equations are written as: \[\dot{x}_{1} =10(x_{2}-x_{1}), \tag{25}\] \[\dot{x}_{2} =x_{1}(28-10x_{3})-x_{2},\] \[\dot{x}_{3} =10x_{1}x_{2}-(8/3)x_{3}.\] Suppose that the measurement used for state observation is \(y=x_{2}\), which is corrupted by a white noise. We assign different values to the variance of the measurement noise and investigate how the resulting neural network should be chosen differently. To simulate the process, we use a sampling time of \(0.01\). For the LTI part of the KKL observer, \(A=-\mathrm{diag}(8,4,2,1)\) and \(B=[1,1,1,1]^{\top}\) are chosen. At the beginning of the observer simulation, \(z(0)=0\) is set as the initial condition; we simulate the dynamics until \(t=500\) and randomly collect \(m=2000\) time instants between \(t=20\) and \(t=500\) as the training data. Consider first the case with noiseless measurement (\(\sigma=0\)).
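The data-collection step of the case study can be sketched as follows. This is our own minimal reimplementation, not the authors' code: the system (25) is integrated with a simple forward-Euler step (the authors' integration scheme is not specified), its output \(y=x_2\) drives the LTI filter with \(A=-\mathrm{diag}(8,4,2,1)\) and \(B=[1,1,1,1]^{\top}\), and the pairs \((x(t_i), z(t_i))\) form the training sample; we use a shortened horizon here for speed.

```python
import numpy as np

def lorenz(x):
    """Rescaled Lorenz dynamics (25)."""
    return np.array([10.0 * (x[1] - x[0]),
                     x[0] * (28.0 - 10.0 * x[2]) - x[1],
                     10.0 * x[0] * x[1] - (8.0 / 3.0) * x[2]])

A = -np.diag([8.0, 4.0, 2.0, 1.0])
B = np.ones(4)

def simulate(t_max=50.0, dt=0.01, sigma=0.0, seed=0):
    """Euler-integrate plant and observer; returns sampled (x, z) trajectories."""
    rng = np.random.default_rng(seed)
    x, z = np.array([0.1, 0.1, 0.1]), np.zeros(4)
    xs, zs = [], []
    for _ in range(int(t_max / dt)):
        y = x[1] + sigma * rng.normal()        # (possibly noisy) output y = x2
        z = z + dt * (A @ z + B * y)           # LTI part of the KKL observer
        x = x + dt * lorenz(x)                 # plant step
        xs.append(x.copy()); zs.append(z.copy())
    return np.array(xs), np.array(zs)

xs, zs = simulate()
print(xs.shape, zs.shape)   # plant states and observer states, sampled at dt
```

A neural network approximating \(T^{\dagger}\) would then be trained on pairs of rows of `zs` (inputs) and `xs` (targets), discarding an initial transient so that Assumption 3 holds.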
The sample \(\{(x_{i},z_{i})\}_{i=1}^{2000}\) is plotted in Fig. 3, which shows that the data points are representative of the forward-invariant set of the system, and that the observer states \(z_{i}\) indeed capture the structure of the Lorenz attractor in a \(4\)-dimensional space. We then train the Wang-Manchester network using a randomly selected \(80\%\) of the sample under the mean-squared loss metric, and validate on the remaining \(20\%\) of sample points. The stochastic gradient descent (SGD) algorithm with a learning rate of \(10^{-3}\) is used for optimization. The number of epochs is empirically tuned to \(300\). The neural network has \(2\) hidden layers, each containing \(8\) neurons, resulting in \(292\) parameters to train in total. After training, the Lipschitz constant is evaluated a posteriori via the semidefinite programming approach of Fazlyab et al. [25] using cvxpy, which costs approximately \(1.5\) seconds (for a randomly initialized network). Varying the prior bound on the Lipschitz constant, the resulting training loss, validation loss, and the posterior Lipschitz bound obtained under the same training conditions are illustrated in Fig. 4. The following observations can be made from these results. * As anticipated, as the prior bound on the Lipschitz constant increases, the Lipschitz constant of the trained neural network becomes higher. The Lipschitz constants estimated a posteriori are lower than the prior bound for the Wang-Manchester network, validating the direct parameterization approach to constraining the slope. On the other hand, the actual posterior Lipschitz constant lags increasingly far behind the prior bound; for example, when the prior bound is \(1000\), \(L_{S}\) after training does not exceed \(300\). This indicates that even for the training objective alone, there is a "resistance" to pursuing the maximally possible Lipschitz constant.
* When the Lipschitz bound is small, relaxing the restriction on \(L_{S}\) is beneficial for decreasing the training loss as well as the validation loss, showing that the Lipschitz bound is a bottleneck causing underfitting. When \(L_{S}\) is high enough, such underfitting no longer exists; instead, overfitting will appear, with rising training and validation losses. The overfitting phenomenon is more significant when the noise is large. Thus, there should be optimal values to be set as the Lipschitz bound. * Depending on the noise magnitude, the deviation of the posterior Lipschitz constant from the prior bound and the emergence of overfitting occur at different threshold values of the Lipschitz bound. Thus, the Lipschitz bound to be used for neural network training should be tuned differently as the noise intensity varies. For example, at \(\sigma=1\), a suitable choice can be \(\gamma=100\), whereas at \(\sigma=5\) and \(\sigma=10\), \(\gamma\) can be chosen as \(30\) and \(10\), respectively.

Fig. 3: Sample collected from the Lorenz system.

Now suppose that at the observer design stage, the Wang-Manchester network is trained on simulated data from a perfect digital twin of the true dynamics, i.e., \(\sigma=0\); yet, when applying the trained network to observe the states of the physical system, the environment is noisy. In Fig. 5, the resulting loss (mean squared state observation error) is plotted against varying prior Lipschitz bounds under multiple values of the environment noise magnitude. It is seen that when the noise is low, roughly speaking, increasing \(L_{S}\) leads to a monotonic decrease in the observation error within a large range. On the other hand, when the environment is highly noisy (e.g., when \(\sigma\geq 3\)), the Lipschitz bound has a severe effect on the generalization loss, and since the achievable performance is restrictive, the fine-tuning of the Lipschitz bound as a hyperparameter becomes critical.
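As a rough, easily computed alternative to the SDP-based estimate of Fazlyab et al. used above, the product of layer spectral norms gives a (generally much looser) upper bound on an MLP's Lipschitz constant. The sketch below uses random toy weights and is not the Wang-Manchester direct parameterization; it also shows how uniformly rescaling the layers enforces a prior bound \(\gamma\):

```python
import numpy as np

def lipschitz_upper_bound(weights):
    """Product of layer spectral norms: a crude upper bound on the Lipschitz
    constant of an MLP with 1-Lipschitz activations (e.g. ReLU, tanh)."""
    bound = 1.0
    for W in weights:
        bound *= np.linalg.norm(W, ord=2)   # largest singular value of the layer
    return bound

rng = np.random.default_rng(0)
# Toy 4-8-8-3 MLP matching the z -> x map dimensions; weights are random stand-ins.
weights = [rng.standard_normal((8, 4)),
           rng.standard_normal((8, 8)),
           rng.standard_normal((3, 8))]
L_naive = lipschitz_upper_bound(weights)

# Rescaling every layer by (gamma / L_naive)^(1/n_layers) enforces the prior bound.
gamma = 10.0
scale = (gamma / L_naive) ** (1 / len(weights))
scaled = [W * scale for W in weights]
L_scaled = lipschitz_upper_bound(scaled)
```

The SDP approach is tighter because it accounts for how activation slopes compose across layers, which is why the posterior estimates in Fig. 4 fall well below naive products like this one.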
Finally, the performance of the state observer is examined. Consider using the network trained with noiseless simulation data under the prior Lipschitz bound \(L_{S}=10\), and applying it to environments with noise \(\sigma=0.1\), \(0.3\), \(1.0\), \(3.0\). The trajectories of the three components of the states estimated by the observer are plotted against the true states in Fig. 6, within a time horizon of \(10\) time units. Naturally, when \(\sigma\) is low, the state estimates track the true states well and capture the trends in the correct directions; as \(\sigma\) increases, the accuracy is lowered and the signals constructed by the observer are more noisy, occasionally yielding incorrect directions of evolution (e.g., on \(3<t<4\) or \(8<t<9\), where the states swing between the two lobes of the Lorenz attractor). Overall, the state estimates mollify the true state trajectories, which is due to the structure of our KKL observer: a linear filter (LTI system) as the state dynamics and a Lipschitz-bounded neural network as the static output map.

## V Conclusions and Discussions

This work leverages the recent tools of Lipschitz-bounded neural networks for the synthesis of nonlinear state observers in a model-free setting. The observer, which has a Kazantzis-Kravaris structure, turns out to have a provable generalization performance that is related to the Lipschitz constant of the trained neural network (which represents the mapping from the observer states to the plant states). As such, by varying the Lipschitz bound and re-training the neural network, the optimal training result can yield the minimum generalization error in state observation. The importance of bounding the Lipschitz constant has been demonstrated by a numerical case study on the Lorenz system.

Fig. 4: Loss and Lipschitz constants under different prior Lipschitz bounds.
(Blue wedges: training loss, blue circles: validation loss, green circles: prior Lipschitz bound; green wedges: posterior Lipschitz bound.)

Fig. 5: Errors of noiselessly trained observers in noisy environments.

We implicitly assumed here that a simulator of the dynamics is available, so that the true state trajectories can be used to train the neural network. However, such ground truth for supervised learning may not actually exist in real applications, i.e., only inputs and outputs are recorded, yet a state observation mechanism is still needed or desired for feedback control. To this end, the author's recent work [32] proposed a data-driven KKL observer by appending a kernel dimensionality reduction scheme to the LTI dynamics, thus obtaining estimates that are diffeomorphic to the states. Moreover, the current approach is still restricted to autonomous systems. For control purposes, it should be further extended to non-autonomous ones, where the Bernard-Andrieu observer structure [18] is anticipated. Finally, the application of such data-driven state observers to learning control-relevant properties of nonlinear dynamical systems and controller synthesis [33, 34] is undergoing active research.
2305.05740
Message Passing Neural Networks for Traffic Forecasting
A road network, in the context of traffic forecasting, is typically modeled as a graph where the nodes are sensors that measure traffic metrics (such as speed) at that location. Traffic forecasting is interesting because it is complex as the future speed of a road is dependent on a number of different factors. Therefore, to properly forecast traffic, we need a model that is capable of capturing all these different factors. A factor that is missing from the existing works is the node interactions factor. Existing works fail to capture the inter-node interactions because none are using the message-passing flavor of GNN, which is the one best suited to capture node interactions. This paper presents a plausible scenario in road traffic where node interactions are important and argues that the most appropriate GNN flavor to capture node interactions is message-passing. Results from real-world data show the superiority of the message-passing flavor for traffic forecasting. An additional experiment using synthetic data shows that the message-passing flavor can capture inter-node interaction better than other flavors.
Arian Prabowo, Hao Xue, Wei Shao, Piotr Koniusz, Flora D. Salim
2023-05-09T19:33:52Z
http://arxiv.org/abs/2305.05740v1
# Message Passing Neural Networks for Traffic Forecasting

###### Abstract

A road network, in the context of traffic forecasting, is typically modeled as a graph where the nodes are sensors that measure traffic metrics (such as speed) at that location. Traffic forecasting is interesting because it is complex as the future speed of a road is dependent on a number of different factors. Therefore, to properly forecast traffic, we need a model that is capable of capturing all these different factors. A factor that is missing from the existing works is the node interactions factor. Existing works fail to capture the inter-node interactions because none are using the message-passing flavor of GNN, which is the one best suited to capture node interactions. This paper presents a plausible scenario in road traffic where node interactions are important and argues that the most appropriate GNN flavor to capture node interactions is message-passing. Results from real-world data show the superiority of the message-passing flavor for traffic forecasting. An additional experiment using synthetic data shows that the message-passing flavor can capture inter-node interaction better than other flavors.

Keywords: Traffic Forecasting, Graph Neural Network.

## 1 Introduction

Traffic forecasting is the task of predicting future traffic measurements (e.g. volume, speed, etc.) in a road network based on historical data. It is an interesting task from the perspective of both the broader societal impact of its application, as well as from a theoretical perspective. As a part of intelligent transport systems, better traffic forecasting can lead to more efficient designs and management of transportation networks. The downstream impacts include decreases in energy usage and reductions of the negative economic and psychological impacts of congestion. Recently, traffic forecasting has also gathered the interest of the research community [11].
From a theoretical perspective, traffic forecasting is interesting because it is complex: the future speed of a road depends on a number of different factors. Therefore, to properly forecast traffic, we need a model that is capable of capturing all these different factors. An example of one such factor is each node's unique periodical pattern. To capture this behavior, AGCRN [3] learned a unique set of weights for each node. Another factor is the past speed of neighboring nodes. To capture information from the neighboring nodes, many models used Graph Neural Networks (GNNs), which come in three flavors: convolutional, attentional, and message-passing [4]. For example, DCRNN[16], STGCN[24], Graph WaveNet[23], MTGNN[22], and AGCRN[3] used different variations of convolutional GNNs, while ASTGCN[9] and GMAN[25] used different variations of attentional GNNs. However, in this paper, we argue that there is another factor that is missing from the existing works, and that is the node interactions factor. Existing works fail to capture the inter-node interactions because none are using the message-passing flavor of GNN, which is the one best suited to capture node interactions. Our contributions are as follows:

* We present a plausible scenario in road traffic where node interactions are important.
* We argue that the most appropriate GNN flavor to capture node interactions is message-passing.
* We perform several experiments using real-world data to show the superiority of the message-passing flavor for traffic forecasting.
* We perform an additional experiment using synthetic data to show that the message-passing flavor can capture inter-node interactions better than other flavors.

In the next section, we are going to formally describe the traffic forecasting task, node interactions in the context of traffic, and GNN flavors in the context of node interactions.
Then, we briefly review the existing works on traffic forecasting from the perspective of GNN flavors. Next, we describe the models we use in our experiments. These are: Graph WaveNet [23], the backbone architecture for our experiment; diffusion convolution [16], a part of Graph WaveNet, representing the convolutional flavor; Graph Attention Network (GAT) [21], representing the attentional flavor; and Message Passing Neural Network (MPNN) [8], representing the message-passing flavor. We then present and describe the results of two sets of experiments; one set on real-world data and another on synthetic data. Finally, we discuss the ethical implications of our work and present our conclusion.

## 2 Preliminary

### Traffic Forecasting Problem Definition

A traffic dataset is defined as a tensor \(\chi\in\mathbb{R}^{D\times N\times L}\), where \(D\) is the number of different traffic metrics (such as speed, flow, and density) at a particular node, \(N\) is the number of nodes, and \(L\) is the number of timesteps. The dataset can be accompanied with adjacency matrices \(A\in\mathbb{R}^{B\times N\times N}\), where \(B\) is the number of different adjacency matrices. Since the edges can be weighted (usually by the inverse of distance), the adjacency matrix contains real values instead of binary ones. Moreover, because there is more than one way to construct adjacency matrices, some works use multiple adjacency matrices. The traffic forecasting task is to perform a multi-step forecast of the near-future traffic metrics \(\chi[:,:,l+L_{FH}:l+L_{FH}+L_{FW}]\) based on the recent past traffic metrics \(\chi[:,:,l-L_{OW}:l]\) and the adjacency matrix \(\mathbf{A}\). The typical value is 12 timesteps (1 hour) for both the observation and the forecasting windows.
This is formalized as follows: \[\chi[:,:,l+L_{FH}:l+L_{FH}+L_{FW}]=f(\chi[:,:,l-L_{OW}:l],\mathbf{A}) \tag{1}\] where \(l\) is the forecasting time, \(L_{OW}\) is the observation window, \(L_{FH}\) is the forecasting horizon, and \(L_{FW}\) is the forecasting window. Figure 3 in the Supplementary Material shows the problem definition visually.

### Node interactions

In this subsection, we are going to present a simplistic but illustrative scenario where node interaction is an important factor in road traffic. Consider a subgraph consisting of two nodes, \(u\) and \(v\), and a directed edge \((u,v)\), representing a one-way street. Each node has a corresponding binary traffic state \(x_{u}^{t}\) and \(x_{v}^{t}\) at time \(t\), with the options being: congested or free-flow. The binary function \(x_{v}^{t+1}(x_{u}^{t},x_{v}^{t})\) represents the future traffic at node \(v\) at time \(t+1\). Generally, we can decompose this function into three different terms: \(x_{v}^{t+1}(x_{u}^{t},x_{v}^{t})=f(x_{u}^{t})+g(x_{v}^{t})+h(x_{u}^{t},x_{v}^{t})\) where \(f(x_{u}^{t})\) is the term representing the influence of the neighboring nodes (in this case, there is only one), \(g(x_{v}^{t})\) is the term representing the influence of node \(v\) onto itself, and \(h(x_{u}^{t},x_{v}^{t})\) is the term representing the node interactions. (Note that the idea is that \(h(\cdot)\) is whatever is left from \(f(\cdot)\) and \(g(\cdot)\).) Now, let's imagine a situation where the timestep is relatively short compared to the clearing of congestion at a node, but sufficiently long for congestion from node \(u\) to be able to reach node \(v\) in a timestep. Then the future of node \(v\) can be modeled with an AND function: \(x_{v}^{t+1}(x_{u}^{t},x_{v}^{t})=x_{u}^{t}\wedge x_{v}^{t}\) (with free-flow encoded as \(1\) and congested as \(0\): node \(v\) remains free-flowing only if both nodes are free-flowing). This behavior cannot be expressed as a combination of \(f(\cdot)\) and \(g(\cdot)\) alone. Thus, we show that it is plausible that a scenario in traffic exists where the node interaction \(h(\cdot)\) is important.
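This impossibility can be checked directly. Writing \(f\) and \(g\) as lookup tables over \(\{0,1\}\), an exact additive decomposition of AND would have to solve the linear system below; the best least-squares fit still misses every truth-table entry by \(1/4\):

```python
import numpy as np

# Unknowns s = (f(0), f(1), g(0), g(1)); one equation f(x_u) + g(x_v) = AND(x_u, x_v)
# per input pair (x_u, x_v) in {0,1}^2.
A = np.array([
    [1, 0, 1, 0],   # f(0) + g(0) = AND(0, 0) = 0
    [1, 0, 0, 1],   # f(0) + g(1) = AND(0, 1) = 0
    [0, 1, 1, 0],   # f(1) + g(0) = AND(1, 0) = 0
    [0, 1, 0, 1],   # f(1) + g(1) = AND(1, 1) = 1
], dtype=float)
b = np.array([0.0, 0.0, 0.0, 1.0])

s, _, rank, _ = np.linalg.lstsq(A, b, rcond=None)
err = np.abs(A @ s - b).max()   # best achievable max error is 1/4, not 0
```

The system is rank-deficient (rank 3): every additive model satisfies \(f(0){+}g(0){+}f(1){+}g(1)=f(0){+}g(1){+}f(1){+}g(0)\), a balance the AND truth table violates, so no exact decomposition exists.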
Thus, we show that it is plausible that a scenario in traffic exists where the node interaction \(h(\cdot)\) is important. ### GNN flavors One popular taxonomy of GNNs is the three flavors[4]: convolutional, attentional, and message-passing, visualized in Figure 1. The idea behind all GNN layers is to update a node's latent features based on its neighborhood (including itself). They all have the general form of: \[\mathbf{h}_{u}^{i+1}=\boldsymbol{\phi}(\mathbf{h}_{u}^{i},\mathcal{H}_{u}^{i }\left(\left\{\mathbf{h}_{v}^{i}|v\in u\cup\mathcal{N}_{u}\right\}\right)) \tag{2}\] where \(\mathbf{\phi}(\cdot)\) is a learnable non-linear transformation such as Multi-Layer Perceptron (MLP), \(\mathbf{h}_{u}^{i}\) is the latent feature of node \(u\) at layer \(i^{\text{th}}\), \(\mathcal{H}_{u}^{i}(\cdot)\) is a function that acts on the neighbourhood, and \(\mathcal{N}_{u}\) is a set containing all the neighbours of node \(u\). The key difference between the flavors is in their choice of \(\mathcal{H}_{u}^{i}(\cdot)\) that decides what the central node \(u\) is allowed to interact with. In a convolutional GNN layer, \[\mathcal{H}_{u}^{i}=\bigoplus_{\mathcal{N}_{u}}c_{uv}\mathbf{\psi}(\mathbf{h}_{v}^ {i}) \tag{3}\] where \(\mathbf{\psi}(\cdot)\) are learnable non-linear transformations such as MLP, \(\bigoplus\) is a permutation-invariant aggregator such as mean-pooling, max-pooling, or summation, and \(c_{uv}\) is a learnable constant. Here, the central node \(u\) is only allowed to interact with the aggregate of the neighborhood. There are no node interactions. In an attentional GNN layer, \[\mathcal{H}_{u}^{i}=\bigoplus_{\mathcal{N}_{u}}\alpha(\mathbf{h}_{u}^{i}, \mathbf{h}_{v}^{i})\mathbf{\psi}(\mathbf{h}_{v}^{i}) \tag{4}\] where \(\alpha(\cdot,\cdot)\) is a scalar function representing attention mechanism [21]. It dynamically modulates the contribution of each of the node neighbours. 
Since the scalar attention is calculated from the node interactions \(\alpha(\mathbf{h}_{u}^{i},\mathbf{h}_{v}^{i})\), when combined with the multi-head technique, the linear combinations of the different heads can approximate the full inter-node interactions. In a message-passing GNN layer, \[\mathcal{H}_{u}^{i}=\bigoplus_{\mathcal{N}_{u}}\mathbf{\psi}(\mathbf{h}_{u}^{i}\,\|\,\mathbf{h}_{v}^{i}) \tag{5}\] where \(\|\) is the binary concatenation operator. Here, \(\mathcal{H}_{u}^{i}(\cdot)\) captures the node interactions before the aggregation. This ensures that all the important information arising from the interactions between pairs of nodes is preserved. Thus, in tasks where it is important to capture interactions between nodes, such as traffic, message-passing is the most appropriate flavor.

Figure 1: Visualization of the three flavors of GNN [4]. The convolutional flavor lacks direct interactions between the center node and its neighbors. The attentional flavor uses the center and neighbor node interactions to dynamically adjust the attention weights of the neighboring nodes. Only the message-passing flavor allows direct interactions between the center and the neighboring nodes.

## 3 Related Works

Road traffic has spatial and temporal elements. However, in this section, we are only going to focus on how the models proposed in the existing works propagate spatial information, and the GNN flavors used. This information is summarized in Table 1. For a more thorough review of this topic, see the survey [18] and the benchmark paper [11]. Earlier work such as LSTNet [14] did not model any spatial dependencies.

### Convolutional

STGCN [24] focused on making multi-hop spatial information propagation efficient. To that end, they chose to use Chebyshev polynomials to approximate the spectral convolution kernels [6]. At about the same time, DCRNN [16] argued that a weighted and directed graph is a better model of the traffic network.
To that end, they formulated a spatial convolution operation, called diffusion convolution, that works on weighted and directed graphs. They also showed that, when a graph is unweighted and non-directed, the diffusion convolution becomes equivalent to the spectral convolution. Graph WaveNet [23] also used the diffusion convolution proposed in DCRNN [16]. However, it argued that there are hidden spatial dependencies that are not captured by the adjacency matrix constructed from the physical road network. Instead, it proposed the construction of self-adjacency matrices that learn the hidden spatial structure from data. In the absence of information about the spatial connectivity, these self-adjacency matrices can be used alone.

\begin{table} \begin{tabular}{l l c c c} \hline \hline & Model & Conv. & Att. & M-P \\ \hline IJCAI 2018 & STGCN[24] & ✓ & & \\ ICLR 2018 & DCRNN[16] & ✓ & & \\ IJCAI 2019 & Graph WaveNet[23] & ✓ & & \\ AAAI 2019 & ASTGCN[9] & & ✓ & \\ AAAI 2020 & GMAN[25] & & ✓ & \\ SIGKDD 2020 & MTGNN[22] & ✓ & & \\ NeurIPS 2020 & AGCRN[3] & ✓ & & \\ PAKDD 2021 & AGCAN[15] & & ✓ & \\ PAKDD 2022 & T\({}^{2}\)-GNN[17] & ✓ & & \\ & **Ours** & & & ✓ \\ \hline \hline \end{tabular} \end{table} Table 1: GNN flavors of existing works

MTGNN [22] extended this idea of learning the graph structure from the data to any multivariate time series, instead of just traffic. Regarding the propagation of spatial information, they used a multi-hop spatial Graph Convolutional Network (GCN) [13]. Furthermore, they argued that it is important to dynamically adjust the contributions of neighborhoods with different numbers of hops. This was done by using a gating mechanism. AGCRN [3] argued that every node has a unique behavior, and thus, there should be a different set of weights for every node. To prevent over-fitting and the explosion in the number of parameters, they proposed matrix factorization of the GCN parameters.
T\({}^{2}\)-GNN [17] decomposed the dynamic spatial correlations in a traffic network into seasonal static and acyclic dynamic components. Because the two components have different dynamic spatial correlations, they used two parallel GCN towers, each containing a dynamic graph generation layer.

### Attentional

ASTGCN [9] pointed out the lack of the attentional flavor in the literature at the time. Spatially, they proposed an attention mask over each of the Chebyshev terms. GMAN [25] argued that the inter-node correlations change over the typical forecasting window (1 hour). Thus, to capture these fast changes, they used multi-head attention [20] to make the graph convolution steps dynamic. AGCAN [15] observed that adjacent nodes do not always have similar behavior due to differences in road attributes (like road width). On the other hand, non-adjacent nodes could have very similar traffic patterns, also due to road attribute similarities (like points-of-interest). To this end, they proposed a hierarchical spatial attention module.

### Message-passing

To the best of our knowledge, we are the first to apply the message-passing flavor to the traffic forecasting task, as shown in Table 1. MPNN [8] was first introduced as a generalization of several different GNNs. The first implementation was in the domain of quantum chemistry, which has remained a popular application domain in recent years. For example, DimeNet [7] added a way for the model to access information regarding the angle between bonds in a molecule.

## 4 Methods

To evaluate the impact of different flavors of GNN for traffic forecasting, we first pick the current state-of-the-art architecture as the backbone. Based on a recent benchmark [11], we picked Graph WaveNet [23] as the backbone as it remains state-of-the-art. Its usage as a backbone has remained popular in recent works [17]. Next, we replaced the GNN module of Graph WaveNet with other flavors of GNN, as shown in Figure 2.
In particular, we picked Graph Attention Network (GAT) [21] and Message Passing Neural Network (MPNN) [8] as representatives of the attentional and message-passing flavors respectively, to replace the diffusion convolution [16] in Graph WaveNet. This way, we can attribute any changes in performance to the different flavors alone. For the rest of this section, we describe diffusion convolution, GAT, and MPNN. A short summary of our Graph WaveNet backbone can be found in the Supplementary Material B.

### Diffusion convolution with self-adaptive adjacency matrix

DCRNN [16] introduced diffusion convolution for directed graphs. Graph WaveNet [23] added a self-adaptive adjacency matrix term into the diffusion convolution. The combined version is as follows: \[\mathbf{h}^{[l+1]}=\mathbf{B}+\sum_{k=0}^{K}\mathbf{P}_{f}^{k}\mathbf{h}^{[l]}\mathbf{W}_{k,1}+\mathbf{P}_{b}^{k}\mathbf{h}^{[l]}\mathbf{W}_{k,2}+\tilde{\mathbf{A}}^{k}\mathbf{h}^{[l]}\mathbf{W}_{k,3} \tag{6}\] [16] where \(\mathbf{h}^{[l]}\) and \(\mathbf{h}^{[l+1]}\) are the input and output of the \(l^{\text{th}}\) module, \(\mathbf{P}_{f}^{k}\) and \(\mathbf{P}_{b}^{k}\) are the forward and backward normalized adjacency matrices respectively, to the power of \(k\), \(\mathbf{W}_{k,1}\), \(\mathbf{W}_{k,2}\), \(\mathbf{W}_{k,3}\), and \(\mathbf{B}\) are the learnable parameters, \(K\) is the number of hops, and \(\tilde{\mathbf{A}}\) is the self-adaptive adjacency matrix. The adjacency matrices are normalized as follows: \(\mathbf{P}_{f}=\mathbf{A}/\text{rowsum}(\mathbf{A})\) and \(\mathbf{P}_{b}=\mathbf{A}^{T}/\text{rowsum}(\mathbf{A}^{T})\). The self-adaptive adjacency matrix is defined as follows: \(\tilde{\mathbf{A}}=\text{SoftMax}(\text{ReLU}(\mathbf{E}_{1}\mathbf{E}_{2}^{T}))\) where \(\mathbf{E}_{1},\mathbf{E}_{2}\in\mathbb{R}^{N\times c}\) are learnable parameters, \(N\) is the number of nodes, and \(c\) is the size of the latent feature.
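The self-adaptive adjacency construction and one propagation term of (6) can be sketched as follows; the dimensions are toy values and the matrices are random stand-ins for trained parameters:

```python
import numpy as np

def softmax_rows(M):
    # Row-wise softmax, so each row of the resulting adjacency sums to 1
    e = np.exp(M - M.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
N, c, d = 5, 4, 8                      # nodes, embedding size c, latent size (toy values)
E1 = rng.standard_normal((N, c))       # learnable node embeddings E1, E2
E2 = rng.standard_normal((N, c))

# Self-adaptive adjacency: SoftMax(ReLU(E1 @ E2.T))
A_tilde = softmax_rows(np.maximum(E1 @ E2.T, 0.0))

h = rng.standard_normal((N, d))        # node latent features h^[l]
W = rng.standard_normal((d, d)) * 0.1  # stand-in for W_{k,3}

# One propagation term of (6): A_tilde^k h W, with k = 1
h_next = A_tilde @ h @ W
```

Because no physical adjacency enters this term, the learned `A_tilde` can discover spatial dependencies absent from the road-network graph, which is what lets Graph WaveNet run without any predefined adjacency matrix.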
### Graph Attention Network (GAT)

Based on the success of the attention [2] and multi-headed self-attention [20] mechanisms, GAT [21] extended these to graph-structured data. We picked GAT as the representative of the attentional flavor. We used this PyTorch implementation: github.com/Diego999/pyGAT.

Figure 2: Simplified architecture diagram of the Graph WaveNet backbone. The number of layers is only illustrative. G-TCN is a gated temporal convolutional network layer [19]. \(\bigoplus\) is a summation operation. In the original paper [23], the GNN of choice is diffusion convolution (DC) [16]. In this paper, we replace the DC with GAT [21] or MPNN [8].

The forward function of GAT is: \[\tilde{\mathbf{h}}_{i}^{[l]}=\mathop{\bigg{\|}}_{m=1}^{M}\sigma\bigg{(}\sum_{j\in\mathcal{N}_{i}}\alpha_{ij}^{m}\mathbf{W}^{m}\mathbf{h}_{j}^{[l]}\bigg{)} \tag{7}\] where \(\|\) denotes concatenation over the \(M\) attention heads, \(\alpha_{ij}^{m}\) is the attention coefficient of head \(m\) between nodes \(i\) and \(j\), \(\mathbf{W}^{m}\) is a learnable weight matrix, and \(\sigma(\cdot)\) is a non-linearity.

#### 5.1.2 Experimental Setup

We used Optuna [1] for hyperparameter optimization with a budget of 20 runs for each model. Following the original setup, we used Adam as the optimizer, and Mean Absolute Error (MAE) as the loss function. We split the dataset into training/validation/testing sets with a ratio of 7:1:2. We use a standard scaler on the input.

#### 5.1.3 Results

The results of our experiments are compared with the results from a recent benchmark [11] in Table 2. Firstly, not including results from our experiments, Graph WaveNet outperforms all the other models more often than not. In METR-LA, GAT WaveNet significantly outperformed the original Graph WaveNet, and yet was still outperformed by MP WaveNet. We attribute these improvements to GAT's capability to dynamically adjust the edge weights and MPNN's capability to capture node interactions.
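For concreteness, the message-passing aggregation (5) behind MP WaveNet can be sketched as below; the single linear-plus-ReLU map standing in for the MLP \(\mathbf{\psi}\), and the toy graph, are illustrative assumptions:

```python
import numpy as np

def mpnn_layer(h, adj, W1, b1):
    """One message-passing aggregation (5): sum over neighbours v of
    psi(h_u || h_v), with psi reduced to one linear + ReLU layer."""
    N, d = h.shape
    out = np.zeros((N, W1.shape[0]))
    for u in range(N):
        for v in np.nonzero(adj[u])[0]:
            msg = np.concatenate([h[u], h[v]])        # pairwise input h_u || h_v
            out[u] += np.maximum(W1 @ msg + b1, 0.0)  # psi(.), summed over N_u
    return out

rng = np.random.default_rng(0)
N, d = 4, 3
adj = np.array([[0, 1, 1, 0],          # toy undirected 4-node graph
                [1, 0, 0, 1],
                [1, 0, 0, 1],
                [0, 1, 1, 0]])
h = rng.standard_normal((N, d))
W1 = rng.standard_normal((8, 2 * d)) * 0.5
b1 = np.zeros(8)
out = mpnn_layer(h, adj, W1, b1)
```

The key point is that \(\mathbf{\psi}\) sees the concatenated pair \((\mathbf{h}_u, \mathbf{h}_v)\) before any aggregation, so pairwise (node-interaction) terms survive; in the convolutional flavor, each neighbour is transformed in isolation and such terms are lost.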
### Synthetic data: Root Mean Square of neighbour pairs in a Graph (RMSG)

To show that the improvements of MP WaveNet were due to its capacity for capturing complex inter-node interactions, we tested the different flavors of GNN on a synthetic graph attribute regression task: Root Mean Square of neighbour pairs in a Graph (RMSG). The motivation behind this task is to obtain simple non-linear interactions between pairs of neighbours.

#### 5.2.1 Task and Dataset

We constructed a graph with 100 nodes. 10% of all possible edges are connected; the edges are undirected and unweighted. The node feature is a single scalar distributed as follows: \(x_{i}=\mathcal{U}_{[-2,2]}\), where \(x_{i}\) is the attribute of node \(i\), and \(\mathcal{U}_{[-2,2]}\) is a continuous uniform distribution with lower and upper bounds of \(-2\) and \(2\) respectively. The node label is the root mean square of the product of the features between a node and its neighbors: \(y_{i}=\sqrt{\frac{1}{n}\sum_{j\in\mathcal{N}}(x_{i}x_{j})^{2}}\), where \(y_{i}\) is the label of node \(i\), and \(\mathcal{N}\) is a set containing all the nodes adjacent to node \(i\). The label is designed to have a simple interaction, namely multiplication, followed by a simple non-linearity, namely a quadratic. We used the root mean square as the aggregation, as we also use it as the loss function. This ensures that the behavior the GNN needs to capture is simply the non-linear interactions, as the aggregation is trivial.

\begin{table} \begin{tabular}{l|c c c|c c c|c c c} \hline \multirow{2}{*}{Model} & \multicolumn{3}{c|}{RMSE} & \multicolumn{3}{c|}{MAE} & \multicolumn{3}{c}{MAPE (\%)} \\ & 15 mins & 30 mins & 60 mins & 15 mins & 30 mins & 60 mins & 15 mins & 30 mins & 60 mins \\ \hline HistoricalAverage & 14.737 & 14.737 & 14.736 & 11.013 & 11.010 & 11.005 & 23.34 & 23.34 & 23.33 \\ CopyLastSteps & 14.215 & 14.214 & 14.214 & 6.799 & 6.799 & 6.798 & 16.73 & 16.73 & 16.72 \\ LSTNet & 8.067 & 10.181 & 11.890 & 3.914 & 5.129 & 6.335 & 9.27 & 12.22 & 15.38 \\ STGCN & 7.918 & 9.948 & 11.813 & 3.469 & 4.263 & 5.079 & 8.57 & 10.70 & 13.09 \\ DCRNN & _7.509_ & 9.543 & 11.854 & 3.261 & 4.021 & 5.080 & 8.00 & 10.12 & 13.08 \\ Graph WaveNet & 7.512 & _9.445_ & _11.485_ & _3.204_ & _3.922_ & _4.848_ & _7.62_ & _9.52_ & _11.93_ \\ ASTGCN & 7.977 & 10.042 & 12.092 & 3.624 & 4.514 & 5.776 & 9.13 & 11.57 & 14.85 \\ GMAN & 8.869 & 9.917 & 11.910 & 4.139 & 4.517 & 5.475 & 10.88 & 11.77 & 14.10 \\ MTGNN & 7.707 & 9.625 & 11.624 & 3.277 & 3.999 & 4.867 & 8.02 & 10.00 & 12.17 \\ AGCRN & 7.558 & 9.499 & 11.502 & 3.292 & 4.016 & 4.901 & 8.17 & 10.16 & 12.43 \\ GAT WaveNet* & 5.303 & 6.388 & 7.595 & 2.759 & 3.173 & 3.668 & 7.14 & 8.58 & 10.12 \\ MP WaveNet* & **5.206** & **6.247** & **7.412** & **2.712** & **3.093** & **3.556** & **6.84** & **8.23** & **10.00** \\ \hline \end{tabular} \end{table} Table 2: Performance comparison using real-world data. The results of our experiments are denoted with an asterisk (*); the rest are baselines taken from a 2021 benchmark [11]. The 15/30/60 mins column headings refer to the forecasting horizon. RMSE, MAE, and MAPE are error metrics, lower is better. The best number in the relevant group is in **bold**, the second best is underlined, and the third best is _italicized_.

#### 5.2.2 Experimental Setup

A new graph is generated for each new data point, with new features and a new adjacency matrix. The loss function is RMSE. Each GNN flavor is trained on \(1,048,576=2^{20}\) datapoints, validated on \(104,857\) datapoints, and tested on \(2^{20}\) datapoints. The hyperparameters are optimized using Optuna [1] with a budget of \(20\) runs.
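The RMSG data generation described above can be sketched as follows (the function and parameter names are ours, not from a released implementation):

```python
import numpy as np

def make_rmsg_sample(n_nodes=100, p_edge=0.1, rng=None):
    """One RMSG datapoint: random undirected unweighted graph, uniform node
    features x_i ~ U[-2, 2], and labels y_i = sqrt(mean_j (x_i * x_j)^2)
    over the neighbours j of node i."""
    if rng is None:
        rng = np.random.default_rng()
    # Sample each undirected edge independently with probability p_edge
    A = rng.random((n_nodes, n_nodes)) < p_edge
    A = np.triu(A, 1)
    A = A | A.T                                  # symmetric, no self-loops
    x = rng.uniform(-2.0, 2.0, size=n_nodes)
    y = np.zeros(n_nodes)
    for i in range(n_nodes):
        nbrs = np.nonzero(A[i])[0]
        if len(nbrs) > 0:
            y[i] = np.sqrt(np.mean((x[i] * x[nbrs]) ** 2))
    return A, x, y

A, x, y = make_rmsg_sample(rng=np.random.default_rng(0))
```

Calling this once per training step reproduces the "new graph for each new data point" protocol, so no model can memorize a fixed topology.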
The performance is based on the average and standard deviation of five runs. At every run, a new random seed is used to generate a new graph and initialize the model. To evaluate the performance, we used three metrics: Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and Coefficient of Determination (\(R^{2}\)). They are defined in the Supplementary Material D.

#### 5.2.3 Models

Four different models are compared: the _average_ model as the baseline, as well as the three GNN flavor models. In the _average_ model, the label is the same for the entire graph, and it is the average of the labels in the training set: \(\mathbf{h}_{n}=\frac{1}{N}\sum_{n=1}^{N}\mathbf{y}_{n}\). As representatives of the convolutional, attentional, and message-passing flavors, we pick the GCN [12], GAT [21], and MPNN [8] models, respectively. We pick GCN [12] because it is the undirected version of the diffusion convolution in Graph WaveNet. To adjust the models for the RMSG task, we make the following changes: In GCN and GAT, we remove the typical softmax layer at the end because this is a regression task. We added an MLP with one hidden layer as the encoder and decoder. We also added self-loops by adding an identity matrix to the adjacency matrix.

#### 5.2.4 Results

The result summary of five runs on each model is shown in Table 3. We use the _average_ model metrics as the baseline. As expected, its \(R^{2}\) is practically zero. Note that this number can be negative when the sum of squared errors in the numerator of equation 12 is bigger than the total variance of the data in the denominator. GCN did not manage to learn. This is shown by the fact that it has similar RMSE and MAE to the _average_ model. It also has an \(R^{2}\) that is very close to zero. Since past literature [12] has shown that GCN can deal with non-linearity, this result shows that GCN failed to capture node interactions.
GAT performed better, with an \(R^{2}\) of about 0.88. It also has significantly lower RMSE and MAE than both GCN and the baseline _average_ model. However, GAT's performance is less reliable, as shown by the high standard deviation across all metrics. We attribute these improvements to the attentional flavor's capability to use node interactions to dynamically adjust the weight of each node's contribution. Only MPNN managed to learn the RMSG task completely, with RMSE and MAE approximately equal to zero and \(R^{2}\) approximately equal to one. The results of this set of experiments on the synthetic data show that there exists a situation involving node interactions where MPNN, representing the message-passing flavor, is the only flavor capable of fully learning the underlying patterns in the data. ## 6 Conclusion Among the many factors that influence future speeds in a road network, we argued that node interaction is a plausible factor. Moreover, among the three different flavors of GNN, we also argued that message-passing is the most appropriate flavor for capturing node interaction. Our experiments on real-world data show the superiority of the message-passing flavor. We also conducted additional experiments on the RMSG task to contrast the capabilities of the three GNN flavors with respect to capturing node interactions, concluding that message-passing is superior, not only in terms of losses and metrics but also in capturing the distribution.
2305.03319
HiPool: Modeling Long Documents Using Graph Neural Networks
Encoding long sequences in Natural Language Processing (NLP) is a challenging problem. Though recent pretraining language models achieve satisfying performances in many NLP tasks, they are still restricted by a pre-defined maximum length, making them challenging to be extended to longer sequences. So some recent works utilize hierarchies to model long sequences. However, most of them apply sequential models for upper hierarchies, suffering from long dependency issues. In this paper, we alleviate these issues through a graph-based method. We first chunk the sequence with a fixed length to model the sentence-level information. We then leverage graphs to model intra- and cross-sentence correlations with a new attention mechanism. Additionally, due to limited standard benchmarks for long document classification (LDC), we propose a new challenging benchmark, totaling six datasets with up to 53k samples and 4034 average tokens' length. Evaluation shows our model surpasses competitive baselines by 2.6% in F1 score, and 4.8% on the longest sequence dataset. Our method is shown to outperform hierarchical sequential models with better performance and scalability, especially for longer sequences.
Irene Li, Aosong Feng, Dragomir Radev, Rex Ying
2023-05-05T06:58:24Z
http://arxiv.org/abs/2305.03319v2
# HiPool: Modeling Long Documents Using Graph Neural Networks ###### Abstract Encoding long sequences in Natural Language Processing (NLP) is a challenging problem. Though recent pretraining language models achieve satisfying performances in many NLP tasks, they are still restricted by a pre-defined maximum length, making them challenging to be extended to longer sequences. So some recent works utilize hierarchies to model long sequences. However, most of them apply sequential models for upper hierarchies, suffering from long dependency issues. In this paper, we alleviate these issues through a graph-based method. We first chunk the sequence with a fixed length to model the sentence-level information. We then leverage graphs to model intra- and cross-sentence correlations with a new attention mechanism. Additionally, due to limited standard benchmarks for long document classification (LDC), we propose a new challenging benchmark, totaling six datasets with up to 53k samples and 4034 average tokens' length. Evaluation shows our model surpasses competitive baselines by 2.6% in F1 score, and 4.8% on the longest sequence dataset. Our method is shown to outperform hierarchical sequential models with better performance and scalability, especially for longer sequences. ## 1 Introduction Transformer-based models like BERT Vaswani et al. (2017) and RoBERTa Zhuang et al. (2021) have achieved satisfying results in many Natural Language Processing (NLP) tasks thanks to large-scale pretraining Vaswani et al. (2017). However, they usually have a fixed length limit, due to the quadratic complexity of the dense self-attention mechanism, making it challenging to encode long sequences. One way to solve this problem is to adapt Transformers to accommodate longer inputs and optimize the attention from BERT Feng et al. (2022); Jaszczur et al. (2021). BigBird Zaheer et al. 
(2020) applies sparse attention that combines random, global, and sliding window attention in a long sequence, reducing the quadratic dependency of full attention to linear. Similarly, Longformer Beltagy et al. (2020) applies an efficient self-attention with dilated windows that scale linearly to the window length. Both models can take up to 4096 input tokens. Though it is possible to train even larger models for longer sequences, they are restricted by a pre-defined maximum length with poor scalability. More importantly, they fail to capture high-level structures, such as relations among sentences or paragraphs, which are essential to improving NLP system performance Zhang et al. (2018); Zhu et al. (2019). Another way is to apply a hierarchical structure to process adjustable input lengths with chunking representations for scalability on long sequences. Hi-Transformer Wu et al. (2021) encodes both sentence-level and document-level representations using Transformers. ToBERT Pappagari et al. (2019) applies a similar approach that stacks a sentence-level Transformer over a pretrained BERT model. While most of the existing work models upper-level hierarchy using _sequential structures_, such as multiple layers of LSTMs Hochreiter and Schmidhuber (1997) or Transformers, this may still bring the long dependency issue when the sequence gets longer. To alleviate this, we investigate graph modeling as a novel hierarchy for upper levels. Besides, we also consider inter-hierarchy relationships using a new attention mechanism. Our key insight is to replace the sequence-based model with a hierarchical attentional graph for long documents. We first apply a basic pretrained language model, BERT or RoBERTa, to encode local representation on document chunks with a fixed length. The number of chunks could be extended for longer sequences for better scalability. Different from other works, we apply a graph neural network (GNN) Zhou et al. 
(2018) to model the upper-level hierarchy to aggregate local sentence information. This is to alleviate the long dependency issue of the sequential model. Moreover, within such a graph structure, we propose a new heterogeneous attention mechanism to consider intra- and cross- sentence-level correlations. Our contributions are two-fold: 1) We propose HiPool with multi-level hierarchies for long sequence tasks with a novel inter-hierarchy graph attention structure. Such heterogeneous graph attention is shown to outperform hierarchical sequential models with better performance and scalability, especially for longer sequences; 2) We benchmark the LDC (long document classification) task with better scaled and length-extended datasets. Evaluation shows that HiPool surpasses competitive baselines by 2.6% in F1 score, and 4.8% on the longest sequence dataset. Code is available at [https://github.com/IreneZihuiLi/HiPool](https://github.com/IreneZihuiLi/HiPool). ## 2 Model We introduce the HiPool (**H**ierarchical **P**ooling) model for long document classification, illustrated in Fig. 1. It consists of an overlapping sequence encoder, a HiPool graph encoder, and a linear layer. **Overlapping Sequence Encoder**. Given the input document \(S\), we first chunk the document into a number of shorter pieces with a fixed length \(L\), and we set the overlapping window size to be \(L_{olp}\). Overlapping encoding makes it possible for a chunk to carry information from its adjacent chunks but not isolated, differentiating our model from other hierarchical ones. Then each chunk is encoded with a pretrained Transformer model, i.e., BERT or RoBERTa; we choose the CLS token representation as the input to our HiPool layer: \(X=\mathrm{BERT}(S)\). **HiPool Graph Encoder**. We apply a graph neural network to encode incoming word-level information. Such a model has shown its potential in some NLP tasks (Li et al., 2022, 2021). 
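The overlapping chunking step described above can be sketched as below. This is a simplified token-level version with hypothetical names: `chunk_len` and `overlap` play the roles of \(L\) and \(L_{olp}\), and the per-chunk BERT/RoBERTa encoding of each chunk's CLS token is omitted.

```python
def chunk_with_overlap(token_ids, chunk_len, overlap):
    """Split a token sequence into fixed-length chunks whose start positions
    are (chunk_len - overlap) apart, so adjacent chunks share `overlap`
    tokens. The last chunk may be shorter than chunk_len."""
    assert 0 <= overlap < chunk_len
    stride = chunk_len - overlap
    chunks = []
    for start in range(0, max(len(token_ids) - overlap, 1), stride):
        chunks.append(token_ids[start:start + chunk_len])
    return chunks
```

Because each chunk carries `overlap` tokens from its neighbor, no chunk is encoded in isolation, which is the property that distinguishes this encoder from plain non-overlapping chunking.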
We construct a graph, defined by \(G(V,E)\), where \(V\) is a set of nodes, and \(E\) is a set of node connections. There are two node types: \(n\)_low-level nodes_ and \(m\)_high-level nodes_, and typically \(m<n\). In our experiment, we set \(m=n/p\), and \(p\geq 0\). The feedforward operation goes from low- to high-level nodes. In layer \(l\), low-level nodes are inputs from the previous layer \(l-1\), while high-level nodes at layer \(l\) are computed based on low-level ones. Moreover, these high-level nodes will be the input to the next layer \(l+1\), becoming the low-level nodes in that layer. We consider \(X\) the low-level nodes in the first HiPool layer, as shown in the figure. In each HiPool layer, given node representation \(H^{l}\) and adjacency matrix \(A^{l}\) at layer \(l\), the task is to obtain \(H^{l+1}\): \[H^{l+1}=\mathrm{HiPool}(H^{l},A^{l}). \tag{1}\] Inspired by DiffPool (Ying et al., 2018), we conduct a clustering method to aggregate information. We assign node clusters with a fixed pattern based on their position. For example, adjacent low-level neighbors should map to the same high-level clustering node. So we first define a clustering adjacency matrix \(A_{self}\in\mathds{R}^{n\times m}\) that maps \(n\) nodes to \(m\) nodes, indicating the relations from low- to high- level nodes, marked as black arrows in the figure. Note that our approach allows overlapping, in which some nodes may belong to two clusters. We set the clustering sliding window to be \(2p\), with a stride to be \(p\). In the figure, we show the case of \(p=2\). We denote interactions between low-level nodes by the adjacency matrix \(A^{l}\),1 and we model it using a chain graph, according to the natural order of the document.2 Footnote 1: We eliminated the subscript of \(A_{low}\) for simplicity, and this also makes Eq. 1 more generalized as other GNNs. Footnote 2: We tested with a complete graph and BigBird attention structures but found little differences. 
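One plausible construction of the clustering matrix \(A_{self}\) described above (windows of size \(2p\) with stride \(p\), so a node can fall in two overlapping clusters) is sketched below; the exact window placement is our assumption, not taken from the paper's code.

```python
import numpy as np

def clustering_matrix(n, p):
    """Build A_self in R^{n x m}, mapping n low-level nodes to
    m = ceil(n / p) high-level cluster nodes. Each cluster c covers the
    window of nodes [c*p, c*p + 2p), so windows of size 2p with stride p
    overlap and a node may belong to two clusters."""
    m = int(np.ceil(n / p))
    A_self = np.zeros((n, m))
    for c in range(m):
        start = c * p
        A_self[start:start + 2 * p, c] = 1.0
    return A_self
```

With \(p=2\) (the case shown in the figure), nodes 2 and 3 belong to both cluster 0 and cluster 1, matching the stated overlap behavior.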
Then, the relations between high-level nodes \(A^{l}_{high}\) and their node representations \(H^{l}_{high}\) are computed: \[\begin{split} A^{l}_{high}&=A^{T}_{self}A^{l}A_{ self},\\ H^{l}_{high}&=A_{self}H^{l}.\end{split} \tag{2}\] Figure 1: HiPool model illustration. It consists of a sequence encoder, HiPool graph encoder and a linear layer. Besides, for each high-level node, to strengthen the connections across different clusters, we propose an attention mechanism to obtain cross-sentence information. We propose a new edge type that connects external cluster low-level nodes to each high-level node, and the adjacency matrix is simply \(A_{cross}=1-A_{self}\), marked by green in the figure. We update \(H^{l}_{high}\) as the following: \[W_{score} =H^{l}_{self}W_{atten}(H^{l})^{T}, \tag{3}\] \[W_{score} =W_{score}A^{T}_{cross},\] \[H^{l}_{high} \gets W_{score}H^{l}+H^{l}_{high},\] where \(W_{atten}\) is trainable, and \(W_{score}\) is a scoring matrix. We then apply a GNN to obtain \(H^{l+1}\). For example, a graph convolution network (GCN) Kipf and Welling (2016): \[H^{l+1}=\text{GCN}(H^{l}_{high},A^{l}_{high}). \tag{4}\] We run our experiments with two layers, and apply a sum aggregator to achieve document embeddings. More HiPool layers are also possible. **Linear Layer**. Finally, a linear layer is connected and cross-entropy loss is applied during training. ## 3 Experiments ### LDC Benchmark The LDC benchmark contains six datasets. We first choose four widely-used public datasets. **Hyperpartisan** (HYP) Kiesel et al. (2019) and **20News-Groups** (20NG) Lang (1995) are both news text datasets with different scales. **IMDB**Maas et al. (2011) is a movie review dataset for sentiment classification. **ILDC**Malik et al. (2021) is a large corpus of legal cases annotated with binary court decisions ("accepted"and "rejected"). **Limitation and new datasets**. 
However, 20News-Groups and IMDB cannot test the limits of models in encoding long documents, since their average document length is still relatively small; whereas Hyperpartisan only contains 645 examples and is thus prone to overfitting and not representative. ILDC is large and contains long texts, but it is mainly in the legal domain. Therefore, to enrich the evaluation scenarios, we select and propose two new benchmarks with longer documents based on an existing large-scale corpus, Amazon product reviews He and McAuley (2016), to conduct long document classification. **Amazon-512** (A-512) contains all reviews that are longer than 512 words from the _Electronics_ category; **Amazon-2048** (A-2048) contains 10,000 randomly sampled reviews that are longer than 2048 words from the _Books_ category. We randomly split 8/1/1 as train/dev/test sets for both datasets. The proposed datasets enable us to draw statistically significant conclusions on model performance as sequence lengths increase, as demonstrated in Table 1. ### Evaluation **Hyperparameters**. We list details in Appendix C. **Baselines**. We select four pretrained models: BERT Devlin et al. (2019), RoBERTa Zhuang et al. (2021), BigBird Zaheer et al. (2020) and Longformer Beltagy et al. (2020). We also compare with a hierarchical Transformer model, ToBERT Pappagari et al. (2019). Hi-Transformer Wu et al. (2021) could not be reproduced as there is no code available. We evaluate two variations of our HiPool method by changing the sequence encoder model: HiPool-BERT and HiPool-RoBERTa. We report the Micro-F1 score in Tab. 2. **Main Results**. Among the pretrained models, Longformer and BigBird perform better than BERT and RoBERTa. ToBERT can only surpass BERT as it is a hierarchical model that applies BERT as its text encoder. On average, HiPool-BERT improves significantly on BERT by 5.9% and on ToBERT by 3%.
Compared to ToBERT, the superior performance of HiPool can be explained by the fact that the sentence-level representations in ToBERT fail to capture cross-sentence information. HiPool surpasses the baselines on A-512, A-2048 and ILDC, which contain longer sequences. Notably, the best model, HiPool-RoBERTa, outperforms BigBird by 4.8% on ILDC. While our model applies a basic pretrained text encoder (whose maximum length is 512), it can still surpass larger pretrained language models (whose maximum length is 4096). Although HiPool is worse on HYP and IMDB, we note that HYP only has 65 examples in testing and is prone to overfitting. We further show that even in IMDB, \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & **HYP** & **20NG** & **IMDB** & **A-512** & **A-2048** & **ILDC** \\ \hline Mean & 741.44 & 587.56 & 301.14 & 879.62 & 2,915.03 & 4,039.85 \\ Max & 5,368 & 144,592 & 3,152 & 17,988 & 14,120 & 501,091 \\ Min & 21 & 37 & 10 & 512 & 2,048 & 53 \\ Med. & 547 & 360 & 225 & 725 & 2,505 & 2,663 \\ 95pt. & 2,030 & 1,229 & 771 & 1,696 & 5,216 & 11,416 \\ \hline Total & 645 & 18,846 & 50,000 & 53,471 & 10,000 & 34,816 \\ Class & 2 & 20 & 2 & 5 & 5 & 2 \\ \hline \hline \end{tabular} \end{table} Table 1: Dataset statistics on LDC benchmark. Med. is the median value. 95pt. indicates 95th percentile. Class indicates the number of classes.
Besides, we also look at multiple graph settings: Aggr-mean is to use a mean aggregator to obtain the final document representation; Aggr-std is to use a feature-wise standard deviation aggregator; finally, Aggr-pcp applies Principal Neighbourhood Aggregation (PNA) (Corso et al., 2020). We report results on Amazon-2048 in Tab. 3, as it has the longest sequence on average. An observation is that applying aggregators are better than simpler structures, while keeping a graph is still a better choice. HiPool also considers attention in message passing, so it is doing even better. We also test other variations in Appendix B. ### Ablation Study **Effect of input length**. To better understand the effect of input length, in Fig. 2, we present an ablation study on the Amazon-2048 and ILDC, and compare three models: BigBird, Longformer, and HiPool. In general, the models benefit from longer input sequences in both datasets. Interestingly, when sequence is larger than 2048, Longformer and Bigbird could not improve and they are limited in maximum lengths. In contrast, as the input sequence gets longer, HiPool steadily improves, showing its ability to encode long documents in a hierarchical structure. **Model component**. Next, we look at how each component of HiPool affects performance. As shown in Tab. 
4, we first take the best model setting, HiPool-RoBERTa, and compare it with the following settings: 1) w/o RoBERTa replaces RoBERTa with BERT, so the model becomes HiPool-BERT; 2) w/o HiPool removes the proposed HiPool module and replaces it with a simple CNN (Kim, 2014); 3) w/o Overlapping removes the overlapping word encoding. We can see that removing the HiPool layer leads to a significant drop, indicating the importance of the proposed method. Moreover, the HiPool framework can work with many pretrained language models, as applying RoBERTa improves on BERT. A complete result table can be found in the Appendix. \begin{table} \begin{tabular}{l|c c c|c} \hline \hline & **A-512** & **A-2048** & **ILDC** & **Avg.** \\ \hline HiPool-RoBERTa & **0.690** & **0.648** & **0.685** & **0.674** \\ w/o RoBERTa & 0.660 & 0.612 & 0.651 & 0.641 \\ w/o HiPool & 0.601 & 0.578 & 0.620 & 0.600 \\ w/o Overlapping & 0.587 & 0.560 & 0.611 & 0.586 \\ \hline \hline \end{tabular} \end{table} Table 4: The effect of the sequence encoding layer, HiPool layer and overlapping modules. \begin{table} \begin{tabular}{l|c c c c c|c|c} \hline \hline & **HYP** & **20NG** & **IMDB** & **A-512** & **A-2048** & **ILDC** & **Avg.** \\ \hline BERT & 0.857 & 0.853 & 0.913 & 0.592 & 0.503 & 0.556 & 0.712 \\ RoBERTa & 0.874 & 0.857 & 0.953 & 0.650 & 0.579 & 0.560 & 0.745 \\ BigBird & 0.922 & 0.823 & 0.952 & 0.674 & 0.636 & 0.637 & 0.774 \\ Longformer & **0.938** & 0.863 & **0.957** & 0.673 & 0.612 & 0.562 & 0.768 \\ ToBERT & 0.862 & 0.901 & 0.924 & 0.587 & 0.560 & 0.611 & 0.741 \\ \hline HiPool-BERT & 0.865\(\pm\)0.030 & **0.908\(\pm\)0.005** & 0.931\(\pm\)0.001 & 0.660\(\pm\)0.009 & 0.612\(\pm\)0.011 & 0.651\(\pm\)0.010 & 0.771 \\ HiPool-RoBERTa & 0.886\(\pm\)0.018 & 0.904\(\pm\)0.001 & 0.948\(\pm\)0.001 & **0.690\(\pm\)0.007** & **0.648\(\pm\)0.017** & **0.685\(\pm\)0.018** & **0.794** \\ \hline \hline \end{tabular} \end{table} Table 2: Main evaluation results on the LDC benchmark. We underscore the best average of the baselines, and bold the best overall models. \begin{table} \begin{tabular}{l|c|l|c} \hline \hline **Hierarchy** & **F1** & **Hierarchy** & **F1** \\ \hline _Sequential_ & & _Graph_ & \\ Simple & 0.618 & Aggr-mean & 0.621 \\ CNN & 0.608 & Aggr-std & 0.620 \\ Trans. & 0.560 & Aggr-pna & 0.633 \\ & & HiPool & **0.648** \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of multiple hierarchies. Figure 2: Ablation study on the input text length. (X-axis shows input length.) ## 4 Conclusion In this paper, we proposed a hierarchical framework for long document classification. The evaluation shows our model surpasses competitive baselines. ## 5 Limitations and Potential Risks **Limitations** The model we proposed is specifically for classification, though it could be extended to other NLP tasks by changing the high-level task-specific layer. Besides, in the evaluation, we focused on English corpora. We plan to test on other languages in the future. **Potential Risks** We make our code publicly available so that everyone can access it. As the model is a classification model, it does not generate risky content. Users should also note that the classification predictions may not be perfectly correct. ## 6 Acknowledgements This paper is dedicated to the memory of Professor Dragomir Radev, who passed away while this paper was being peer-reviewed.
2301.01947
StitchNet: Composing Neural Networks from Pre-Trained Fragments
We propose StitchNet, a novel neural network creation paradigm that stitches together fragments (one or more consecutive network layers) from multiple pre-trained neural networks. StitchNet allows the creation of high-performing neural networks without the large compute and data requirements needed under traditional model creation processes via backpropagation training. We leverage Centered Kernel Alignment (CKA) as a compatibility measure to efficiently guide the selection of these fragments in composing a network for a given task tailored to specific accuracy needs and computing resource constraints. We then show that these fragments can be stitched together to create neural networks with accuracy comparable to that of traditionally trained networks at a fraction of computing resource and data requirements. Finally, we explore a novel on-the-fly personalized model creation and inference application enabled by this new paradigm. The code is available at https://github.com/steerapi/stitchnet.
Surat Teerapittayanon, Marcus Comiter, Brad McDanel, H. T. Kung
2023-01-05T08:02:30Z
http://arxiv.org/abs/2301.01947v3
# StitchNet: Composing Neural Networks from Pre-Trained Fragments ###### Abstract We propose StitchNet, a novel neural network creation paradigm that stitches together fragments (one or more consecutive network layers) from multiple pre-trained neural networks. StitchNet allows the creation of high-performing neural networks without the large compute and data requirements needed under traditional model creation processes via back-propagation training. We leverage Centered Kernel Alignment (CKA) as a compatibility measure to efficiently guide the selection of these fragments in composing a network for a given task tailored to specific accuracy needs and computing resource constraints. We then show that these fragments can be stitched together to create neural networks with comparable accuracy to traditionally trained networks at a fraction of computing resource and data requirements. Finally, we explore a novel on-the-fly personalized model creation and inference application enabled by this new paradigm. ## 1 Introduction AI models have become increasingly more complex to support additional functionality, multiple modalities, and higher accuracy. While the increased complexity has improved model utility and performance, it has imposed significant model training costs. Therefore, training complex models is often infeasible for resource limited environments such as those at the cloud edge. In response to these challenges, in this paper we propose a new paradigm for creating neural networks: rather than training networks from scratch or retraining them, we create neural networks through composition by _stitching together_ fragments of existing pre-trained neural networks. A fragment is one or more consecutive layers of a neural network. We call the resulting neural network composed of one or more fragments a "StitchNet" (Figure 1). 
By significantly reducing the amount of computation and data resources needed for creating neural networks, StitchNets enable an entirely new set of applications, such as rapid generation of personalized neural networks at the edge. StitchNet's model creation is fundamentally different from today's predominant backpropagation-based method for creating neural networks. Given a dataset and a task as input, the traditional training method uses backpropagation with stochastic gradient descent (SGD) or other optimization algorithms to adjust the weights of the neural network. This training process iterates through the full dataset multiple times, and therefore requires compute resources that scale with the amount of data and the complexity of the network. Training large models this way also requires substantial amounts of data. While successful, this traditional paradigm for model creation is not without its limitations. Creating complex neural networks without access to large amounts of data and compute resources is a growing challenge of increasing significance, especially in resource-constrained edge environments. In the extreme case (e.g., for very large language and computer vision models), only a few companies with access to unrivaled amounts of data and compute resources are able to create such models. StitchNets solve this problem by creating new neural networks using fragments of already existing neural networks. The new approach takes advantage of the growing number of neural networks that already exist, having been trained previously by many groups and companies. Figure 1: Overview of the StitchNet approach. Existing networks (left) are cut into fragments (middle), which are composed into StitchNets (right) created for a particular task. No retraining is needed in this process.
StitchNets enable the efficient reuse of the learned knowledge resident in those pre-trained networks, which has been distilled from large amounts of data, rather than having to relearn it over and over again for new tasks as we do with traditional model creation paradigms. StitchNet's ability to reuse existing pre-trained fragments, rather than recreating them from scratch or re-training for every task, will help accelerate the growth and application of neural networks for solving more and more complex tasks. However, composing these existing fragments into a coherent and high-performing neural network is non-trivial. To reuse the knowledge of pre-trained neural network fragments, we need a way to 1) measure the compatibility between any two fragments, and 2) compose compatible fragments together. In the past, Centered Kernel Alignment (CKA) [1, 10, 11] has been used to measure similarity between neural network representations. We leverage CKA to assess the compatibility of any two fragments from any neural networks and compose new neural networks from fragments of existing pre-trained neural networks to create high-performing networks customized for specific tasks without the costs of traditional model creation methods. The CKA score is used to reduce the search space for identifying compatible fragments and guide the fragment selection process. We present empirical validations on benchmark datasets, comparing the performance of StitchNets to that of the original pre-trained neural networks. We demonstrate that StitchNets achieve comparable or higher accuracy on personalized tasks compared with off-the-shelf networks, and have significantly lower computational and data requirements than creating networks from scratch or through retraining. Our contributions are: * The StitchNet paradigm: a novel neural network creation method that enables a new set of applications.
* A novel use of Centered Kernel Alignment (CKA) in assessing the compatibility of any two fragments for their composition. * A technique to compose compatible fragments together for both linear and convolutional layers. * A feasibility demonstration of StitchNets for efficient on-the-fly personalized neural network creation and inference. ## 2 Composing Fragments The core mechanism to create StitchNets is to identify reusable fragments from a pool of existing networks and compose them into a coherent neural network model capable of performing a given task. To this end, we need a way to determine how compatible any two candidate fragments are with each other. In previous work, [10] present centered kernel alignment (CKA) [11, 12] as a way to measure similarity between neural network representations. Rather than looking at the neural network as a whole, we adopt CKA as a measure of compatibility between any two _fragments_ of any neural networks. In this section, we first define CKA as a way to measure how compatible any two fragments are with one another and therefore their ability to be composed. Using CKA, we then present a technique to stitch different fragments together. Finally, we describe the algorithm to generate StitchNets. ### Centered Kernel Alignment (CKA) Let \(\mathbf{X}\in\mathbb{R}^{p\times n}\) be the outputs of a fragment \(F_{A}\) of model \(A\) and \(\mathbf{Y}\in\mathbb{R}^{q\times n}\) be the inputs of a fragment \(F_{B}\) of model \(B\) on the same dataset \(\mathbf{D}\), where \(n\) is the number of samples in the dataset, \(p\) is the output dimension of \(F_{A}\), and \(q\) is the input dimension of \(F_{B}\). Let \(\mathbf{K}_{ij}=k(\mathbf{x}_{i},\mathbf{x}_{j})\) and \(\mathbf{M}_{ij}=m(\mathbf{y}_{i},\mathbf{y}_{j})\), where \(k\) and \(m\) are any two kernels.
We define the compatibility score \(\text{CKA}(\mathbf{X},\mathbf{Y})\) of fragment \(F_{A}\) and fragment \(F_{B}\) as \[\text{CKA}(\mathbf{X},\mathbf{Y})=\frac{\text{HSIC}(\mathbf{K},\mathbf{M})}{ \sqrt{\text{HSIC}(\mathbf{K},\mathbf{K})\,\text{HSIC}(\mathbf{M},\mathbf{M})}},\] where \(\text{HSIC}\) is the Hilbert-Schmidt Independence Criterion [1] defined as \[\text{HSIC}(\mathbf{K},\mathbf{M})=\frac{1}{(n-1)^{2}}\text{tr}(\mathbf{K}\, \mathbf{H}\,\mathbf{M}\,\mathbf{H}),\] where \(\mathbf{H}\) is the centering matrix \(\mathbf{H}_{n}=\mathbf{I}_{n}-\frac{1}{n}\mathbf{1}\mathbf{1}^{T}\) and tr is the trace. For linear kernels, \(k(\mathbf{x},\mathbf{y})=m(\mathbf{x},\mathbf{y})=\mathbf{x}^{T}\,\mathbf{y}\), \(\text{HSIC}\) becomes \(\text{HSIC}(\mathbf{X},\mathbf{Y})=\|\text{cov}(\mathbf{X}^{T}\,\mathbf{X}, \mathbf{Y}^{T}\,\mathbf{Y})\|_{F}^{2}\), where cov is the covariance function, and \(\text{CKA}(\mathbf{X},\mathbf{Y})\) becomes \[\frac{\|\text{cov}(\mathbf{X}^{T}\,\mathbf{X},\mathbf{Y}^{T}\,\mathbf{Y})\|_{F }^{2}}{\sqrt{\|\text{cov}(\mathbf{X}^{T}\,\mathbf{X},\mathbf{X}^{T}\,\mathbf{X}) \|_{F}^{2}\|\text{cov}(\mathbf{Y}^{T}\,\mathbf{Y},\mathbf{Y}^{T}\,\mathbf{Y}) \|_{F}^{2}}}. \tag{1}\] We use this function (Eq. 1) as a measurement of how compatible any two fragments are, given a target dataset. To reduce memory usage for a large target dataset, CKA can be approximated by averaging over minibatches as presented in [13]. ### Stitching Fragments Once we have determined compatible fragments, the next step in creating a StitchNet is to _stitch_ the two fragments together. To do so, we find a projection tensor \(\mathbf{A}\) that projects the output space of one fragment to the input space of the other fragment we are composing. We now describe this. Without loss of generality, we assume the output and input tensors are 2D tensors, where the first dimension is the sample dimension.
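Before moving on, the linear-kernel CKA of Eq. 1 can be sketched in NumPy as follows. We center the features first, as is standard for linear CKA; the function name and this preprocessing choice are ours.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear-kernel CKA between two representations.

    X: (n, p) and Y: (n, q) arrays whose rows are the same n samples
    (i.e., the paper's (features x samples) tensors, transposed).
    Returns a scalar in [0, 1]; 1 means the representations agree up to
    rotation and isotropic scaling.
    """
    X = X - X.mean(axis=0, keepdims=True)  # center each feature
    Y = Y - Y.mean(axis=0, keepdims=True)
    hsic_xy = np.linalg.norm(Y.T @ X, "fro") ** 2
    hsic_xx = np.linalg.norm(X.T @ X, "fro")
    hsic_yy = np.linalg.norm(Y.T @ Y, "fro")
    return float(hsic_xy / (hsic_xx * hsic_yy))
```

The score is invariant to rescaling either representation, which is what makes it usable as a compatibility measure across fragments with different magnitudes.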
If the tensors are not 2D tensors, we first flatten all other dimensions with the exception of the sample dimension. We use Einstein summation notation, where \(i\) represents the sample dimension, \(j\) the output dimension of the incoming fragment, and \(k\) the input dimension of the outgoing fragment. Given an output tensor \(\mathbf{X}_{ij}\) of the incoming fragment and an input tensor \(\mathbf{Y}_{ik}\) of the outgoing fragment, we seek to find \(\mathbf{A}\) such that \(\mathbf{Y}_{ik}=\mathbf{A}_{kj}\,\mathbf{X}_{ij}\). We can then solve for \(\mathbf{A}\) using the Moore-Penrose pseudoinverse: \[\mathbf{A}_{kj}=\mathbf{Y}_{ik}\,\mathbf{X}_{ij}^{T}(\mathbf{X}_{ij}\,\mathbf{X }_{ij}^{T})^{-1}. \tag{2}\] Once \(\mathbf{A}\) is found, we fuse \(\mathbf{A}\) with the weight of the first layer of the outgoing fragment. For linear layers, we simply do the following: \[\mathbf{W}_{lk}^{\prime}=\mathbf{W}_{lj}\,\mathbf{A}_{kj}, \tag{3}\] where \(l\) is the dimension of the output feature of the outgoing layer. For convolutional layers, we first upsample or downsample the spatial dimensions so that they match, and then adjust the weight along the input channel dimension as follows: \[\mathbf{W}_{okmn}^{\prime}=\mathbf{W}_{ojmn}\mathbf{A}_{kj}, \tag{4}\] where \(o\) is the output channel dimension, \(j\) is the original input channel dimension, \(k\) is the new input channel dimension, and \(m\) and \(n\) are the spatial dimensions. For stitching a convolutional layer with an output tensor \(\mathbf{X}\) and a linear layer with an input tensor \(\mathbf{Y}\), we first apply adaptive average pooling so that the spatial dimension is 1x1 and flatten \(\mathbf{X}\) into a 2D tensor. Then, we follow Eq. 2 and Eq. 3 to find \(\mathbf{A}\) and fuse it with the \(\mathbf{W}\) of the linear layer.
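The stitching step for two linear layers (Eqs. 2 and 3) can be sketched as below. We use a least-squares solve, which coincides with the Moore-Penrose formula when \(\mathbf{X}\mathbf{X}^{T}\) is invertible; the variable and function names are ours.

```python
import numpy as np

def stitch_linear(X, Y, W):
    """Stitch two fragments at a linear layer.

    X: (p, n) outputs of the incoming fragment on the stitching dataset.
    Y: (q, n) inputs the outgoing fragment expects on the same dataset.
    W: (l, q) weight of the outgoing fragment's first linear layer.
    Returns the fused weight W' = W A, where A solves Y ~= A X
    (least squares in place of A = Y X^T (X X^T)^{-1}).
    """
    # Solve A X = Y for A, i.e. X^T A^T = Y^T in least-squares form.
    A = np.linalg.lstsq(X.T, Y.T, rcond=None)[0].T  # (q, p)
    return W @ A                                    # (l, p)
```

When the projection is exact, the fused layer applied to the incoming fragment's outputs reproduces what the outgoing layer would compute on its expected inputs: \(W'X = W(AX) = WY\).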
### StitchNet Generation

```
Input:  fragment pool P = {F_ij}, where F_ij is the fragment of
        network i ending in layer j, and N_ij denotes network i in P
        up to layer j; target dataset D with M samples; span K;
        threshold T; maximum number of fragments L; result list R of
        StitchNets and their associated scores; current StitchNet Q;
        current score s
Output: resulting list R of StitchNets and their associated scores

if Q is empty then
    {F_ij} = select starting fragments in P
    for F_ij in {F_ij} do
        StitchNet(P, D, K, T, L, R, F_ij, 1)
    return R
if the number of fragments in Q >= L then
    return R
{F_ij} = select K middle or terminating fragments in P
for F_ij in {F_ij} do
    X = Q(D); Y = N_ij(D)
    s_n = s * CKA(X, Y)              // see Section 2.1
    if s_n >= T then
        Q' = Stitch(Q, F_ij, X, Y)   // see Section 2.2
        if F_ij is a terminating fragment then
            R.append({Q', s_n})
        else
            StitchNet(P, D, K, T, L, R, Q', s_n)
return R
```
**Algorithm 1** StitchNet(\(\mathbf{P}\), \(\mathbf{D}\), \(K\), \(T\), \(L\), \(\mathbf{R}\), \(Q\), \(s\))

We now describe the main algorithm for creating StitchNet networks ("StitchNets" for short), shown in Algorithm 1. A StitchNet network is created by joining a set of pre-trained network fragments drawn from a pool \(\mathbf{P}=\{F_{ij}\}\). We use the notation \(F_{ij}\) to denote a fragment of neural network \(i\) up to its \(j\)-th layer, and the notation \(N_{ij}\) to denote the computation performed by the portion of the neural network from which the fragment was taken.
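The recursion in Algorithm 1 can be illustrated with a toy Python version in which fragments are just named entries in a pool and a user-supplied `score_fn` stands in for the CKA computation (all names here are illustrative, not from a released implementation):

```python
def generate_stitchnets(pool, score_fn, K=2, T=0.5, L=4):
    """Enumerate candidate StitchNets from `pool`, a dict mapping
    fragment name -> kind ('start', 'middle', or 'term').
    `score_fn(path, frag)` plays the role of CKA(X, Y) for appending
    `frag` to the partial network `path`. Returns (path, score) pairs."""
    results = []

    def recurse(path, score):
        if len(path) >= L:
            return
        candidates = [f for f, kind in pool.items()
                      if kind in ('middle', 'term') and f not in path]
        # consider only the K most compatible candidates (the "span")
        candidates.sort(key=lambda f: score_fn(path, f), reverse=True)
        for frag in candidates[:K]:
            s_n = score * score_fn(path, frag)
            if s_n < T:
                continue  # reject: accumulated compatibility too low
            if pool[frag] == 'term':
                results.append((path + [frag], s_n))
            else:
                recurse(path + [frag], s_n)

    for frag, kind in pool.items():
        if kind == 'start':
            recurse([frag], 1.0)
    return results
```

Multiplying scores along the path means the threshold \(T\) prunes entire subtrees early, which is what keeps the \(K^{L}\) search tractable in practice.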
Other than the fragment pool \(\mathbf{P}\) and the creation process hyperparameters (\(K,T,L\)), the only other input to the StitchNet creation process is a dataset \(\mathbf{D}\) for which the StitchNet will be optimized. We now describe the creation of the pool of network fragments \(\mathbf{P}\) derived from a set of pre-trained off-the-shelf networks. These pre-trained networks are divided into three types of fragments: _starting_ fragments, whose input is the original network input; _terminating_ fragments, whose output is the original network output; and _middle_ fragments, which are neither starting nor terminating fragments. The first step in the StitchNet creation process is to choose the set of starting fragments. This could include all starting fragments in \(\mathbf{P}\), or a subset based on certain criteria, e.g., the smallest, biggest, or closest starting fragment. Once a set of starting fragments is selected, a StitchNet is built on top of each starting fragment with an initial score of 1. First, a set of \(K\) candidate fragments is selected from \(\mathbf{P}\). These fragments can be selected based on CKA scores (i.e., the \(K\) fragments with the highest CKA scores), the number of parameters of the fragments (i.e., the \(K\) fragments with the fewest parameters in \(\mathbf{P}\)), proximity (i.e., the \(K\) fragments with the least latency in \(\mathbf{P}\) in a distributed fragments setting), or other selection methods. For each of the candidate fragments, we then compute two intermediate neural network computations. First, we pass the dataset \(\mathbf{D}\) through the candidate StitchNet in its current form, resulting in value \(\mathbf{X}\). Second, we pass the same dataset \(\mathbf{D}\) through the neural network from which the candidate fragment \(F_{ij}\) was selected, resulting in value \(\mathbf{Y}=N_{ij}(\mathbf{D})\).
After running these computations, we produce CKA(\(\mathbf{X},\mathbf{Y}\)) as in Section 2.1. We then multiply the CKA with the current score \(s\) to obtain the new current score \(s_{n}\). If \(s_{n}\) is still greater than a set threshold \(T\), the candidate fragment is selected and the process continues recursively. Otherwise, the candidate fragment is rejected. The threshold can be set to balance the amount of exploration allowed per available compute resources. This process continues until a terminating fragment is selected, the maximum number of fragments \(L\) is reached, or all recursive paths are exhausted. At this point, the completed StitchNets and their associated scores \(\mathbf{R}\) are returned for user selection.

## 3 Results

We now demonstrate that StitchNets can perform comparably with traditionally trained networks but with significantly reduced computational and data requirements at both inference and creation time. Through these characteristics, StitchNets enable the immediate on-the-fly creation of neural networks for personalized tasks without traditional training.

### Fragment pool

To form the fragment pool \(\mathbf{P}\), we take five off-the-shelf networks pre-trained on the ImageNet-1K dataset [4] from Torchvision [10]: _alexnet_, _densenet121_, _mobilenet_v3_small_, _resnet50_ and _vgg16_ with IMAGENET1K_V1 weights. These pre-trained networks are cut into fragments at each convolution and linear layer that has a single input. As shown in Figure 2, there are 8 fragments for _alexnet_, 5 fragments for _densenet121_, 13 fragments for _mobilenet_v3_small_, 6 fragments for _resnet50_ and 16 fragments for _vgg16_. This results in a fragment pool \(\mathbf{P}\) of 48 fragments: 5 starting fragments, 38 middle fragments, and 5 terminating fragments. We use this fragment pool in all experiments in this paper.

### Dataset

The dataset used to evaluate StitchNets in this paper is the "Dogs vs. Cats" dataset [11].
This dataset includes 25,000 training images of dogs and cats, and we use an 80:20 train:test split. We map ImageNet-1K class labels into cat and dog labels (class IDs 281-285 and 151-250, respectively). To form the target dataset \(\mathbf{D}\) for use in the stitching process of Algorithm 1, we randomly select \(M\) samples from the training set. We choose this task because it is characteristic of the type of task for which StitchNets would be used: a user needs a custom classifier for a particular task and desired set of classes.

### StitchNet Generation

We generate StitchNets with Algorithm 1 using the fragment pool and the dataset described in Sections 3.1 and 3.2. We set \(K=2\), \(T=0.5\) and \(L=16\). The number of samples \(M\) in \(\mathbf{D}\) used for the stitching process is 32. Given these hyperparameters, a total of 89 StitchNets are generated. We evaluate them on the test set of completely unseen test samples. Summary statistics for the generated StitchNets are shown in Figure 3, including accuracy (3a), number of fragments per StitchNet (3b), CKA score (3c), and number of parameters per StitchNet (3d).

### Reduction in Inference Computation

We now demonstrate how StitchNets significantly reduce inference-time computational requirements over traditional neural network training paradigms by studying StitchNet accuracy as a function of parameters. Figure 4 shows the resulting accuracy of the generated StitchNets as a function of the overall CKA score for each StitchNet, with the number of parameters (proportional to marker size) as a proxy for inference-time computation cost. We find that a number of StitchNets outperform the pre-trained networks while realizing significant computational savings. For example, StitchNet27 (denoted by a green star) achieves an accuracy of 0.86 with 3.59M parameters, compared with the 0.70 accuracy of the pre-trained alexnet with 61.10M parameters.
Therefore, StitchNet achieves a 22.8% increase in accuracy with a 94.1% reduction in the number of parameters for the given task compared with the pre-trained alexnet. This crystallizes one of the core benefits of StitchNets: without any training, the method can discover networks that are personalized for the task, outperform the original pre-trained networks, and do so while significantly reducing inference-time compute requirements. This is due to the fact that these pre-trained networks are not trained to focus on these two specific classes, while our StitchNets are stitched together specifically for the task. In the next section, we will see that very little data is required for the stitching process. Additionally, we compare the StitchNets with the various off-the-shelf models, denoted by triangles. We find that the StitchNet generation process creates many different StitchNets that outperform the off-the-shelf models, many of which do so at reduced computational cost. Figure 5 shows the composition of some of these high-performing StitchNets, demonstrating the diversity in fragment use, ordering, and architectures. We also validate the effectiveness of using CKA to guide the stitching procedure. We find that StitchNets with a high CKA score also have high accuracy, especially those above 0.9. This shows that CKA can be used as a proxy to measure compatibility between connecting fragments.1

Figure 2: Five pre-trained networks are fragmented into a fragment pool \(\mathbf{P}\). These fragments will be stitched together to form StitchNets.

Footnote 1: Note that there exist high-accuracy StitchNets with a low overall CKA score. This is because neural networks are robust and highly redundant, able to tolerate a certain amount of error while still giving quality predictions (see Section 4.1).

### Reduction in Network Creation Computation

We now demonstrate that StitchNets can be created without significant data and computation requirements.
Specifically, we compare StitchNet21 (generated in Figure 5 on the target dataset of \(M=32\) samples) with fine-tuning the same five off-the-shelf networks (retraining them using the training portion of the dataset of Section 3.2). For fine-tuning, we replace and train only the last layer of the pre-trained network using Stochastic Gradient Descent (SGD) with batch size 32, learning rate \(0.001\) and momentum \(0.9\). The results shown are averaged over 10 runs. For ease of comparison, we normalize the computation cost in terms of the number of samples processed through a neural network. In practice, fine-tuning requires backpropagation, which incurs additional computation per sample compared to StitchNet generation. Figure 6 compares the accuracy of StitchNet21 (denoted by the red star) with the traditionally fine-tuned networks as a function of the number of training samples processed. For a given accuracy target, StitchNets process a substantially smaller number of data samples than traditionally fine-tuned networks. Specifically, to reach an accuracy of 0.95, fine-tuning of _alexnet_, _densenet121_, and _mobilenet_v3_small_ requires processing more than 320 samples, while StitchNet requires only the 32 samples used to stitch the fragments together (realizing over a 90% reduction). Therefore, only a small number of training samples and little computation are required for StitchNet to achieve comparable accuracy. This demonstrates that StitchNets effectively reuse the information already captured in the fragments to bootstrap network creation. This allows for personalization of tasks and on-the-fly training without substantial data requirements.

### Ensembles

We now discuss the ability to ensemble generated StitchNets to improve performance. StitchNet and ensembling methods are complementary. The StitchNet generation algorithm produces a set of candidate models.
While a user can select a single StitchNet to use at inference time, because the StitchNet generation procedure finds such efficient models, we can also take advantage of the pool of StitchNets and ensemble several of them while still realizing substantial computational savings. We pick 10 random models from the StitchNets generated in Section 3.3 with overall CKA \(>0.8\). We sort these models based on their overall CKA scores from high to low, and then ensemble them by averaging their predicted probabilities. The results are shown in Figure 7. The ensemble often achieves higher accuracy than the individual models. As a result, this ensembling method can reduce variance in performance when on-the-fly network creation and inference (as discussed in Section 4.3) is used and there is no time for full selection of a final single StitchNet. Instead, the user can select a reasonably small subset of high-performing StitchNets, which even in aggregate can be significantly smaller than a single traditionally trained network.

## 4 Discussion

We now discuss the intuition behind StitchNets, examine their complexity and relation to related methods, introduce new applications they enable, and discuss their limitations.

### Why do StitchNets work?

We first discuss why we are able to reuse existing fragments of networks to create new neural networks without retraining. One core reason for this is that neural networks tend to learn fundamental and universal features. Studies (Li et al., 2015; Lu et al., 2018; Morcos, Raghu, and Bengio, 2018; Wang et al., 2018; Lenc and Vedaldi, 2015; Kornblith et al., 2019; Tang et al., 2020) have shown that neural networks learn fundamental features such as edges for different tasks. Since these learned features are fundamental, they should be reusable rather than relearned.

Figure 3: Histogram of (a) accuracy, (b) # fragments, (c) CKA score, (d) # parameters in the generated batch of StitchNets.
The challenge, however, is that although these features may be universal, they may not be compatible with one another "out of the box." Therefore, we require the stitching process introduced in Section 2.2 to project the fragments into a compatible space. Beyond this reuse of universal features and compatibility transformations, StitchNets are also enabled by the fact that neural networks are fundamentally robust. Due to the non-linear activations and built-in redundancies, neural networks tolerate certain amounts of error. As such, the fragments need not be perfectly compatible individually to produce a network that in aggregate operates at a high level of performance.

### Complexity Comparison

We now compare the complexity of the traditional training process using backpropagation with the StitchNet generation process. Traditional training complexity is \(O(ndp)\), where \(n\) is the number of parameters in the network, \(p\) is the number of epochs used to train, and \(d\) is the size of the dataset. StitchNet generation complexity is \(O(nqm)+O(K^{L})\). The first term \(nqm\) is the evaluation cost of the target dataset of size \(q\) on \(m\) networks in the pool, where \(q\ll d\) and \(n\) is the number of parameters in the network (assuming all networks have the same number of parameters).

Figure 4: Accuracy vs. the overall CKA score on "Dogs vs. Cats." \(cka\) is the overall CKA score, \(acc\) is the accuracy. The best StitchNet (\(acc\)=0.95) performs 12% better than the best pre-trained model(s) (densenet121 and resnet50 with \(acc\)=0.85).

Figure 5: Examples of generated StitchNets.

Figure 6: Accuracy vs. the number of training samples processed (i.e., data and computation required). StitchNets require only a fraction of the computation of traditional training methods to achieve comparable performance.

Figure 7: Accuracy of the ensemble models. Ensembling groups of StitchNets can reduce individual model variance.
The second term \(K^{L}\) is the search cost, where \(K\) is the span value we search at each level and \(L\) is the maximum depth to search. Using a high threshold cutoff \(T\) on the overall CKA score keeps the search cost \(K^{L}\) small. Therefore, for a reasonable setting of the hyperparameters (\(K,T,L\)) in Algorithm 1, StitchNets realize substantial computation gains over traditional training methods since \(q\ll d\) and \(m\ll p\).

### On-the-fly network creation and inference

We now discuss a new family of applications and use cases that are enabled by StitchNets: on-the-fly neural network creation and inference. In this application, we use a batch of images on which we want to perform a task (e.g., classification or detection) as our target dataset in the StitchNet generation process. With only a minor modification to the StitchNet algorithm to additionally return task results, the StitchNet generation process can return the inference outputs along with the generated StitchNets. We now describe how this can be used in practice. Imagine a world where fragments of pre-trained neural networks for different tasks are indexed and distributed on the Internet. Any compatible fragment can be found and composed quickly to form a new neural network for a certain task. Now, imagine we want to create a neural network for classifying local cats and dogs with only a few hundred unlabeled images. Without StitchNets, we either need to train a network from scratch (which may fail due to our limited amount of training data), or find an existing pre-trained neural network, label the dataset, and fine-tune the network. If the existing pre-trained network is too big or too slow for our use, we would then have to train a new one from scratch. With a limited amount of unlabeled data, this task seems impossible. With StitchNet, we can instead generate a set of candidate StitchNets with the small target dataset of unlabeled local cat and dog images.
These StitchNets are created from the pool of existing neural network fragments that have been indexed and distributed on the Internet. The proper fragments can be identified with search criteria (e.g., the terminating fragment should contain cat and dog classes, the depth of the network should be less than 5 for computational efficiency reasons, etc.). With little computation, we will generate StitchNets capable of detecting and classifying local cats and dogs.

### Limitations

One limitation is that the target task needs to be a subset (or a composition) of the terminating fragment tasks in the fragment pool. Additionally, while a large pool of networks and fragments can lead to higher applicability and quality of StitchNets, it can also lead to high search costs. Indexing large quantities of neural networks to form the fragment pool will require novel search methods. We see this as analogous to indexing web pages on the World Wide Web, suggesting a "Google for Fragments." Much like web search needed to index written content, large amounts of neural network "content" need to be indexed in order for their value to be unlocked. Early indexing efforts can tag fragments based on dataset characteristics, computational characteristics, etc. More advanced efforts can look at inward and outward connections of each fragment to determine its rank in results. Once a narrowed set of fragments is coarsely identified, the efficient procedure introduced in this paper can generate the StitchNets. Future work will address these types of complementary methods (indexing and distribution) that will enable StitchNets to operate at scale.

## 5 Related Work

Transfer learning (or fine-tuning) Pan and Yang (2009); Weiss et al. (2016) is the current predominant way to adapt existing neural networks to target tasks. Unsupervised domain adaptation is related, where the existing network is adapted using an unlabeled target dataset.
StitchNets work similarly by stitching fragments using an unlabeled target dataset to create a neural network for the target task. Most work Wang and Deng (2018); Zhang et al. (2018); Tzeng et al. (2014); Kumar et al. (2018); Shu et al. (2018); Ben-David et al. (2010); Saito et al. (2017) focuses on retraining the network, while StitchNet does not require any training. StitchNets take advantage of the assumption that the fragments have shareable representations. This assumption helps explain why fragments can be stitched together into a coherent high-performing network: dissimilar yet complementary fragments, once projected into a similar space, are compatible with one another. Several existing works, including Li et al. (2015); Mehrer et al. (2018); Lu et al. (2018); Morcos et al. (2018); Kornblith et al. (2019); Tang et al. (2020), have studied this shareable representation assumption. Gygli et al. (2021) reuse network components by adding regularization at training time so that networks produce directly compatible features. StitchNet, however, focuses on creating neural networks without training and is therefore more generally applicable. Lenc and Vedaldi (2015) combine network components by adding a stitching layer and training the recombined network with a supervised loss for several epochs. StitchNet adds a parameter-less stitching mechanism and therefore does not require any retraining. Instead, weights are adapted to be compatible with the method introduced in Section 2.2.

## 6 Conclusion

StitchNet is a new paradigm that can leverage a growing global library of neural networks to fundamentally change the way networks are created. By reusing fragments of these networks to efficiently compose new networks for a given task, StitchNet addresses two of the most fundamental issues limiting neural network creation and use: large data and computation requirements.
StitchNet does this by leveraging Centered Kernel Alignment (CKA) as a compatibility measure that guides the selection of neural network fragments, tailored to specific accuracy needs and computing resource constraints. Our work has shown that neural networks can be efficiently created from compatible neural network fragments of different models, at a fraction of the computing and data requirements, with comparable accuracy. We also introduced the on-the-fly efficient neural network creation and inference application that is unlocked by this method.
2302.07866
Do Deep Neural Networks Capture Compositionality in Arithmetic Reasoning?
Compositionality is a pivotal property of symbolic reasoning. However, how well recent neural models capture compositionality remains underexplored in the symbolic reasoning tasks. This study empirically addresses this question by systematically examining recently published pre-trained seq2seq models with a carefully controlled dataset of multi-hop arithmetic symbolic reasoning. We introduce a skill tree on compositionality in arithmetic symbolic reasoning that defines the hierarchical levels of complexity along with three compositionality dimensions: systematicity, productivity, and substitutivity. Our experiments revealed that among the three types of composition, the models struggled most with systematicity, performing poorly even with relatively simple compositions. That difficulty was not resolved even after training the models with intermediate reasoning steps.
Keito Kudo, Yoichi Aoki, Tatsuki Kuribayashi, Ana Brassard, Masashi Yoshikawa, Keisuke Sakaguchi, Kentaro Inui
2023-02-15T18:59:04Z
http://arxiv.org/abs/2302.07866v1
# Do Deep Neural Networks Capture Compositionality in Arithmetic Reasoning?

###### Abstract

Compositionality is a pivotal property of symbolic reasoning. However, how well recent neural models capture compositionality remains underexplored in the symbolic reasoning tasks. This study empirically addresses this question by systematically examining recently published pre-trained seq2seq models with a carefully controlled dataset of multi-hop arithmetic symbolic reasoning. We introduce a _skill tree_ on compositionality in arithmetic symbolic reasoning that defines the hierarchical levels of complexity along with three compositionality dimensions: systematicity, productivity, and substitutivity. Our experiments revealed that among the three types of composition, the models struggled most with systematicity, performing poorly even with relatively simple compositions. That difficulty was not resolved even after training the models with intermediate reasoning steps.1

Footnote 1: Our code and data are available at [https://github.com/keitokudo/dentaku_skill_tree](https://github.com/keitokudo/dentaku_skill_tree).

## 1 Introduction

Integrating symbolic reasoning capabilities into neural models has been a crucial goal of artificial intelligence Marcus (2003); d'Avila Garcez and Lamb (2020). With this in mind, many researchers have investigated how well modern neural models achieve symbolic reasoning Lake and Baroni (2018). However, recent studies have reported conflicting results on this; some suggest that neural models can solve complex multi-hop reasoning Clark et al. (2020), while others claim that models struggle even with performing simple symbolic operations Qian et al. (2022). As a step toward further understanding neural models' symbolic reasoning ability, this study systematically analyzes recently published pre-trained seq2seq models using a carefully controlled dataset of multi-hop arithmetic symbolic reasoning.
Specifically, our study empirically evaluates the models' ability to generalize the _compositionality_ underlying arithmetic reasoning, where we explore three dimensions of compositionality: (i) systematicity, (ii) productivity, and (iii) substitutivity, as illustrated in Figure 1. Capturing compositionality is crucial in performing symbolic reasoning since compositionality is a pivotal property of generalizability over training instances. To systematically explore the models' composition ability, we introduce a _skill tree on compositionality_ that defines the hierarchical levels of complexity in arithmetic symbolic reasoning, as illustrated in Figure 2. Using this hierarchy as a lens, we identify the limitations of neural seq2seq models in capturing the compositionality in arithmetic symbolic reasoning. Our major findings can be summarized as follows:

* Among the three types of composition, the models struggled most with **systematicity**, performing poorly even with relatively simple compositions.
* The major difficulty in systematicity was in the access to intermediate information that is not stated in the input but produced during reasoning.
* Capturing systematicity remained hard for the models trained with the information of the intermediate reasoning steps.

Figure 1: Three dimensions of compositionality in arithmetic symbolic reasoning

## 2 Skill tree in arithmetic reasoning

We take arithmetic reasoning as the domain for our exploration because it allows us to synthesize questions systematically, as we show in this paper, which helps examine a model's composition ability in a controlled manner. Furthermore, the arithmetic reasoning ability of neural models has gained much attention, as modern large language models still struggle with such problems (Rae et al., 2021). Specifically, we use multi-hop arithmetic reasoning problems as follows:

_Question:_ A=1, B=2, C=A+2, C=?
_Answer:_ 3

Here, the value assigned to the variable C is asked.
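For concreteness, a problem in this format can be parsed and solved with a few lines of Python. This is our own illustrative evaluator, not the authors' data-generation code, and it assumes infix operators and that assignments appear before the variables they reference are used (the paper allows arbitrary equation order):

```python
OPS = {'max': max, 'min': min,
       '+': lambda a, b: a + b, '-': lambda a, b: a - b}

def eval_expr(expr, env):
    """Evaluate an expression such as '1+2', 'A-2', or 'A max B'."""
    for op in ('max', 'min', '+', '-'):
        if op in expr:
            lhs, rhs = expr.split(op, 1)
            return OPS[op](eval_expr(lhs, env), eval_expr(rhs, env))
    expr = expr.strip()
    return env[expr] if expr in env else int(expr)

def solve(problem):
    """Answer a multi-hop problem such as 'A=1, B=2, C=A+2, C=?'."""
    env = {}
    *assignments, query = [p.strip() for p in problem.split(',')]
    for eq in assignments:
        var, expr = eq.split('=')
        env[var.strip()] = eval_expr(expr, env)
    return env[query.rstrip('=?').strip()]
```

A deterministic solver like this is what makes it cheap to label arbitrarily many synthesized questions when building controlled datasets of this kind.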
### Compositionality in multi-hop symbolic reasoning

In this study, we specifically focused on three dimensions: systematicity, productivity, and substitutivity (Hupkes et al., 2020). According to these, we evaluate how well neural models achieve compositional generalization.

**Systematicity** refers to the ability of combining known different concepts into a more complex concept, i.e., structural composition. To evaluate this ability in models, we first trained with several types of primitive operations (e.g., addition; A=1+2,A=? and selection; A=1,B=2,B=?). Then, we measured the performance in solving problems consisting of combinations of primitives (e.g., A=1+2,B=2+3,B=?).

**Productivity** refers to the ability to solve longer/complex problems based on shorter/simpler ones. To evaluate this ability in models, we first trained with a short version of a formula (e.g., A=1+2,B=2+3,B=?). Then, we measured the performance in solving longer problems (e.g., A=1+2,B=2+3,C=3+4,C=?).

**Substitutivity** refers to the ability to keep the performance even if a particular constituent in a problem is replaced with another (unseen) constituent (i.e., lexical composition). To evaluate this ability in models, we conduct several experiments changing the variable characters between training and test (e.g., train with A=1+2,A=?; then evaluate with \(\boldsymbol{\alpha}\)=1+2, \(\boldsymbol{\alpha}\)=?).

### Dataset configurations

Typical symbolic reasoning (e.g., procedural programming, assembly language) consists of at least three primitive symbol manipulations: assignment (a=2), arithmetic operation (1+2), and reference (a=?). With this in mind, our dataset is generated by combining the following five basic formulas: (i) A=1 (assignment), (ii) A=B (reference & assignment), (iii) A=1+2 (arithmetic operation & assignment), (iv) A=B+2 (arithmetic operation & assignment & reference), (v) A=? (reference). The detailed properties are explained in Section 3.
### Skill tree evaluations

We preliminarily observed that compositionally generalizing complex multi-hop arithmetic reasoning was difficult for neural seq2seq learners (the 1,2,6\(\rightarrow\)9 setting in Section 4). Building on this fact, this study asks what type of composition made it hard for the neural models. To answer this, we designed a _skill tree on compositionality_ that organizes the (hierarchical) complexity levels of symbolic reasoning.2 Evaluating the models on problems with different complexity of composition in a step-wise manner, we elucidate the exact weaknesses of neural seq2seq models in multi-hop symbolic reasoning.

Footnote 2: The term “skill tree” refers to a visualization method of step-by-step learning in the field of pedagogy (Tondello and Nacke, 2019), distinct from the “tree” in graph theory.

Specifically, we designed ten versions of symbolic reasoning problems. The hierarchical relationship of their task levels is illustrated in Figure 2; in this skill tree, each vertex, i.e., domain, corresponds to a task setting with a different complexity, and edges represent the hierarchical complexity levels. By adequately selecting a particular combination of training and test domains, we evaluated the compositional generalization ability of the models from various perspectives. Here, the arithmetic expressions used in the test domain are a combination of those in the training domains, creating a semi-order relationship in the skill tree. For example, using the settings 1 (A=1+2,A=?) and 2 (A=1,B=2,B=?) as a training set, and 4 (A=1+2,B=2+3,B=?) as a test set, one can evaluate the model’s systematicity generalization towards the arithmetic operations (a+b) and assignments (A=\(i\),B=\(j\),B=?).

Figure 2: Skill tree to evaluate compositional generalization. The data format of primitive operations is gray and others (complex formulas composed of combinations of primitive operations) are blue.
## 3 Experimental settings

### Data

**Dataset:** In each experimental setting, we refer to the training domains as \(\mathcal{D}_{\mathrm{train}}=\{d_{\mathrm{train}1},\cdots,d_{\mathrm{train}k}\}\) and the test domain as \(d_{\mathrm{test}}\). Each domain has 100,000 training instances and 3,200 test instances; these are randomly generated, and there are no overlapping instances. When the training domain consists of multiple domains, we use the union of the training data in \(\mathcal{D}_{\mathrm{train}}\). In addition, when the training domain is not a primitive operation (1, 2, 3, and 6 in Figure 2), we further add the primitive operation data related to the training domain (Appendix A) into the training data.

**Arithmetic expressions:** As introduced in Section 2, the input is a sequence of arithmetic expressions. Formally, each expression is in the format of a=\(n\) or a=\(n\){+,-,max,min}\(m\), except that the final expression asks for the number assigned to a specified variable (b=?). Here, a and b are members of a variable name set \(\Sigma\); \(n\) and \(m\) are members of the variable name set or number set \(\Sigma\cup\mathcal{N}\). Specifically, \(\Sigma\) consists of 21 characters, and \(\mathcal{N}\) consists of the integers from 0 to 99. The symbol = indicates that the result of the right-hand side is assigned to the variable on the left-hand side. The operations (+, -, max, and min) correspond to arithmetic addition, subtraction, max (returning the larger of its left and right numbers), and min (returning the smaller of its left and right numbers). The questions are designed so that the answer is unique, and, depending on the problem set-up, may include mathematical expressions that are not directly related to the final answer, i.e., distractors. The order of the equations is arbitrary; the first equation should not necessarily be calculated first.
**Substitutivity test:** In each experimental setting, we evaluate the substitutivity generalization performance of the model in the situation where the variable names are replaced with unseen ones, e.g., training with a=1+2,a=?; then evaluating with \(\alpha\)=2+4, \(\alpha\)=?. In this setting, we replaced each variable name in the test set with one of five letters that do not overlap with those used in training.

### Training and test

**Training:** The training stops when the accuracy on the validation dataset does not increase for five successive epochs, or when the validation accuracy reaches 100%. The checkpoint with the highest accuracy on the validation dataset is used for evaluation. Note that across the experiments, the accuracy in the training domain reached at least 99.5%; this indicates that the primitive operations were learnable for the models. Detailed settings for training are described in Appendix A.

**Evaluation metrics:** The accuracy is calculated on the test data in the test domain \(d_{\mathrm{test}}\). Here, we used two metrics: (i) zero-shot accuracy (ZA) and (ii) the weighted average of accuracies (WA), which measures the efficiency of learning (Talmor et al., 2020). In measuring WA, a model was further trained using the training set in the test domain \(d_{\mathrm{test}}\); then, the weighted average of the accuracies at every update was calculated (details are in Appendix B).

| Task | Type | base ZA | base WA | large ZA | large WA | x-large ZA | x-large WA |
|---|---|---|---|---|---|---|---|
| 1,2\(\rightarrow\)4 | sys. | 42.0 | 82.1 | 35.2 | 89.8 | 51.7 | 96.7 |
| | +subst. | 39.1 | 80.4 | 32.8 | 89.4 | 50.7 | 96.7 |
| 2,3\(\rightarrow\)5 | sys. | 33.6 | 75.4 | 32.1 | 85.6 | 35.9 | 94.7 |
| | +subst. | 33.6 | 77.0 | 31.5 | 87.2 | 36.5 | 94.9 |
| 2,3,6\(\rightarrow\)8 | sys. | 40.8 | 76.5 | 39.1 | 87.7 | 40.5 | 94.6 |
| | +subst. | 39.3 | 74.7 | 37.5 | 86.4 | 39.2 | 94.9 |
| 2,3,6\(\rightarrow\)7 | sys. | 56.5 | 79.7 | 51.0 | 83.7 | 58.1 | 94.2 |
| | +subst. | 57.6 | 80.2 | 51.1 | 85.8 | 56.7 | 95.1 |
| 1,2,6\(\rightarrow\)9 | sys. | 24.1 | 28.2 | 23.1 | 29.3 | 27.5 | 32.6 |
| | +subst. | 25.8 | 28.0 | 24.1 | 31.7 | 28.5 | 34.7 |
| 7,8\(\rightarrow\)10 | sys. | 23.6 | 21.3 | 25.3 | 25.9 | 22.3 | 28.2 |
| | +subst. | 22.3 | 21.6 | 24.4 | 26.4 | 23.0 | 30.2 |
| 1\(\rightarrow\)3 | prod. | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 |
| | +subst. | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 |
| 4\(\rightarrow\)5 | prod. | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 |
| | +subst. | 99.9 | 99.0 | 100.0 | 100.0 | 100.0 | 99.9 |
| 9\(\rightarrow\)10 | prod. | 57.0 | 59.3 | 61.7 | 63.8 | 60.6 | 62.7 |
| | +subst. | 58.4 | 60.9 | 62.2 | 64.1 | 59.5 | 64.5 |

Table 1: Average accuracies over experiments with 2 different seeds. The “Task” column shows the (train\(\rightarrow\)test) domains corresponding to the skill tree (Figure 2). The “Type” column shows the targeted compositionality type in each setting; here, “sys.,” “prod.,” and “subst.” denote the systematicity, productivity, and substitutivity generalizations, respectively.

### Models

We used three different sizes (base, large, and x-large) of T5 (Raffel et al., 2020), which is a widely used pre-trained seq2seq model in numerical reasoning tasks (Pal and Baral, 2021; Chung et al., 2022; Yang et al., 2021). Note that we began our training from the pre-trained parameters. We also evaluated BART variants and randomly initialized models in Appendix C.

## 4 Experiments and results

We adopted nine combinations of training and test domains, as shown in the first column of Table 1 (training domains\(\rightarrow\)test domain).
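The substitutivity probe described above amounts to a token-level renaming of the test problems. The sketch below is illustrative, not the authors' code: the function names are our own, we assume uppercase single-letter variable names (so operator tokens like `max` are untouched), and the plain mean used for the learning-efficiency metric is only a stand-in, since the paper's exact WA weighting is given in its Appendix B.

```python
import re

def rename_variables(problem, mapping):
    """Substitutivity probe: swap variable names for unseen ones (e.g. A -> P).

    Substitution happens in a single pass, so chained renamings
    (A -> B while B -> C) cannot interfere with each other.
    """
    pattern = re.compile("|".join(sorted(mapping, key=len, reverse=True)))
    return [pattern.sub(lambda m: mapping[m.group(0)], line) for line in problem]

def weighted_average_accuracy(accuracies):
    """Stand-in for WA: aggregate the accuracies measured at every update
    during fine-tuning on the test domain (here an unweighted mean)."""
    return sum(accuracies) / len(accuracies)
```

A model that has learned the task structure, rather than memorized specific variable names, should be unaffected by such a renaming.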
Six of them test systematicity generalization and the other three test productivity generalization. In each setting, we further tested the substitutivity generalization ability using the test domain data with a variable name set (e.g., \(\boldsymbol{\alpha}\) instead of A) different from that used in the training domain. Table 1 shows the overall results. We observed the following four trends:

* Systematicity generalization was more difficult than productivity generalization.
* Even in the simple composition (the 1,2\(\rightarrow\)4 setting), the models struggle to generalize from zero or few examples.
* Models achieved substitutivity generalization.
* Model size did not incur a substantial performance difference.

We identified that the systematicity generalization over references and arithmetic operations (setting \(2,3\to 5\); from A=1,B=2,C=3,C=? and A=1+2,A=? to A=1+2,B=2+3,C=4+5,C=?) was a simple setting, yet difficult to solve (refer to Appendix D for results on other tasks). To better understand why neural models struggle with this setting, we decomposed the complexity of this setting and analyzed the model performance. Note that Kim and Linzen (2020) also suggested that neural models lack systematicity generalization ability in the context of semantic parsing; our results corroborate their findings in the context of arithmetic multi-hop reasoning.

**Is this difficulty specific to _arithmetic_ symbolic reasoning?** We experimented with the same setting except that the four arithmetic operations are replaced with string operations (join, reverseJoin, strSub, and stackJoin; details are in Appendix E.1).
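Of the four string operations, only `join` is fully specified in the text (copying the operands, e.g. 12+34\(\rightarrow\)1234); the others are defined in the paper's Appendix E.1, so the `reverse_join` semantics below are a guess and `strSub`/`stackJoin` are omitted. The `is_copy_only` helper captures the contrast the paper draws: string outputs can be assembled purely from characters present in the input, while arithmetic must emit tokens absent from it.

```python
def join(a: str, b: str) -> str:
    """String 'addition': concatenate the operands (12 join 34 -> '1234')."""
    return a + b

def reverse_join(a: str, b: str) -> str:
    """Guessed semantics: concatenate in the opposite order (12, 34 -> '3412')."""
    return b + a

def is_copy_only(inputs, output):
    """True if every output character already appears somewhere in the input,
    i.e., the answer can be produced by selective copying alone."""
    available = "".join(inputs)
    return all(ch in available for ch in output)
```

This distinction is exactly why the string variant isolates composition from knowledge access: `is_copy_only` holds for string operations but fails for arithmetic, where the result token (e.g., `3` for `1+2`) is new information.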
The notable difference between arithmetic and string operations is that the string operations can be achieved by only copying selected elements of the input (e.g., 12+34=1234), while arithmetic operations require the models to access the arithmetic knowledge stored in their internals (e.g., 1+2=3) and generate new information not stated in the input context (e.g., 3). Larger models tended to overcome the weakness in composition with string operations (e.g., an accuracy of 86.9 in the zero-shot evaluation with the x-large model), while they still struggled with arithmetic operations. This suggests that the major difficulty in systematicity lies in **the access to arithmetic knowledge (e.g., 1+1=2)**.

**Does scratchpad training alleviate the difficulty?** Existing studies suggested that showing the intermediate steps (scratchpad-style training/inference) improves the multi-hop reasoning ability of neural models (Wei et al., 2022). We tested whether such explicit generation of intermediate information alleviates the difficulty observed in the previous analysis. Specifically, we trained models to produce intermediate steps (e.g., A=1+2,B=2+3,B=? with target B=2+3,B=5; details are in Appendix E.2). The accuracy was calculated by the exact match of the answer and the intermediate steps (the steps are designed to be uniquely determined). The performance gain from explicating the intermediate steps was limited (Table 2), at least with our T5-based models. This shows that, in our carefully controlled setting, merely employing scratchpad-style generation is not substantially effective.

## 5 Analysis

We conduct a more in-depth analysis of compositional generalization difficulties from another perspective: the complexity of arithmetic expressions.
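The scratchpad target construction can be sketched as follows. This handles only the single-hop addition case from the example above (input A=1+2,B=2+3,B=? with target B=2+3,B=5); the full trace format lives in the paper's Appendix E.2, so the function name and format are an approximation.

```python
def scratchpad_target(problem):
    """Build a scratchpad-style target: restate the equation that answers
    the query, then its result, e.g. ['A=1+2','B=2+3','B=?'] -> 'B=2+3,B=5'.

    Only single-hop '+' problems are covered in this sketch.
    """
    *eqs, query = problem
    var = query.split("=")[0]
    for line in eqs:
        lhs, rhs = line.split("=", 1)
        if lhs == var:
            left, _, right = rhs.partition("+")
            return f"{line},{var}={int(left) + int(right)}"
    raise ValueError("query variable never assigned")
```

Training on such targets makes the composition explicit in the output sequence, which is precisely the intervention whose (limited) effect is reported in Table 2.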
| Setting | base ZA | base WA | large ZA | large WA | x-large ZA | x-large WA |
|---|---|---|---|---|---|---|
| 2,3\(\rightarrow\)5 | 33.6 | 75.4 | 32.1 | 85.6 | 35.9 | 94.7 |
| String | 37.3 | 94.1 | **66.1** | 98.4 | **86.9** | 99.3 |
| Steps | 26.2 | 82.1 | 36.1 | 89.4 | 33.7 | 96.4 |

Table 2: Ablation study with the 2,3\(\rightarrow\)5 (vanilla) setting. “String” refers to the setting where string operations are used instead of arithmetic operations. “Steps” denotes the setting generating intermediate steps.

Specifically, for each pair of training and test domains listed in Table 1 (e.g., 1,2\(\rightarrow\)4), we quantified the increase in the complexity of the arithmetic formulas from several aspects, e.g., how much the number of variables in a formula increased in the test domain (setting 4) compared to the training domains (settings 1 and 2). In particular, we focused on the increase in the number of variables (\(\Delta\)#variables), numbers (\(\Delta\)#numbers), operations (\(\Delta\)#operations), and references (\(\Delta\)#references) from the training to the test domains. Here, “#references” denotes the number of accesses to variables on the right-hand side of equations. For example, \(\Delta\)#references is 1 if the training data format is A=\(n\), B=A+\(m\) and the test format is A=\(n\), B=A+\(m\), C=B+\(l\). We then identified which dimension strongly relates to the compositional generalization difficulty by analyzing the macro trends between the increase in formula complexity and the difficulty of generalization across the experimental settings. Table 3 shows Spearman’s rank correlation coefficient between each complexity dimension and the test-domain accuracy.
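The four \(\Delta\) quantities can be computed mechanically from the equation strings. A minimal sketch under our own assumptions (uppercase single-letter variable names, query lines like B=? excluded from the counts; function names are illustrative):

```python
import re

def complexity(formulas):
    """Count the four complexity dimensions over a list of equations."""
    text = ",".join(formulas)
    variables = set(re.findall(r"[A-Z]", text))
    numbers = re.findall(r"\d+", text)
    operations = re.findall(r"\+|-|max|min", text)
    # a 'reference' is a variable occurring on the right-hand side of an equation
    references = [tok for line in formulas
                  for tok in re.findall(r"[A-Z]", line.split("=", 1)[1])]
    return {"#variables": len(variables), "#numbers": len(numbers),
            "#operations": len(operations), "#references": len(references)}

def deltas(train_formulas, test_formulas):
    """Train-to-test increase along each dimension (the Delta quantities)."""
    ctr, cte = complexity(train_formulas), complexity(test_formulas)
    return {k: cte[k] - ctr[k] for k in ctr}
```

On the worked example from the text (train A=\(n\), B=A+\(m\); test A=\(n\), B=A+\(m\), C=B+\(l\)), the reference count rises by exactly one. The resulting per-setting deltas can then be correlated with the accuracies (e.g., via `scipy.stats.spearmanr`) to reproduce the style of analysis in Table 3.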
We found a notable negative correlation for \(\Delta\)#references; that is, the more references the test domain has compared to the training domain, the more difficult compositional generalization becomes (the cases of the 1,2,6\(\rightarrow\)9 and 9\(\rightarrow\)10 settings). Simply put, this reveals the difficulty of compositional generalization involving _multi-hop_ reasoning, i.e., retaining the result of a calculation and accessing it again for another calculation.

## 6 Related work

The compositional generalization ability of neural models and arithmetic multi-hop reasoning problems have typically been studied separately; this study merges these two directions. As for compositional generalization analysis, several studies analyzed neural models using datasets such as SCAN (Lake and Baroni, 2018), COGS (Kim and Linzen, 2020), and CFQ (Keysers et al., 2020). These mainly focused on compositionality in the context of semantic parsing; the ability to compose symbol manipulations (e.g., multi-hop arithmetic reasoning) is typically out of focus. As for arithmetic reasoning, neural models' abilities have typically been analyzed using benchmarks such as DROP (Dua et al., 2019). It has recently been reported that such datasets contain superficial cues (Al-Negheimish et al., 2021), which makes it unclear how much arithmetic reasoning neural models actually achieve; our study, using a carefully controlled dataset, pinpoints the exact weaknesses of neural models in this context.

## 7 Conclusion

In this study, we have empirically investigated the arithmetic multi-hop reasoning ability of modern neural models through the lens of compositional generalization. To systematically analyze the models' abilities, we have defined a skill tree that organizes the (hierarchical) complexity levels of a multi-hop symbolic reasoning dataset. Our experiments have revealed that the major weakness lies in systematicity, even with relatively simple compositions.
Through the ablation studies, we have also found that the difficulty in systematicity is pronounced when accessing knowledge that is not written in the input but stored in the models. Furthermore, even when models are trained with intermediate steps that explicate the composition, they struggle to capture systematicity. We also identified the difficulty of multi-hop reasoning in compositional generalization. These findings highlight the exact weaknesses of neural models and encourage studies to overcome such limitations.

| Complexity dimension | base ZA | base WA | large ZA | large WA | x-large ZA | x-large WA | Average |
|---|---|---|---|---|---|---|---|
| \(\Delta\)#variables | 0.098 | -0.098 | 0.488 | -0.293 | -0.098 | -0.488 | -0.065 |
| \(\Delta\)#numbers | 0.059 | 0.265 | -0.088 | 0.647 | 0.206 | 0.677 | 0.294 |
| \(\Delta\)#operations | -0.507 | -0.338 | -0.338 | 0.169 | -0.338 | 0.169 | -0.197 |
| \(\Delta\)#references | -0.655 | -0.655 | -0.393 | -0.655 | -0.655 | -0.655 | **-0.611** |

Table 3: Spearman’s rank correlation coefficient between the increase in training–test arithmetic complexity and the compositional generalization performance (accuracy) across the nine settings listed in Table 1. A negative score indicates that the greater the training–test discrepancy along a dimension, the more difficult compositional generalization is. In the case that there are multiple training domains, the maximum value among them is used.

### Limitations

In this work, we explored neural networks' ability to capture compositionality in symbolic arithmetic reasoning in the hope that it may lead to future improvements in more general reasoning. However, arithmetic reasoning may not necessarily generalize to natural language tasks.
Furthermore, we explored several aspects of multi-hop arithmetic reasoning, but these were chosen from a relatively human-centric perspective, and models may suffer from other, unforeseen difficulties. Finally, while we found several patterns in how model performance degrades, it is difficult to aggregate them into a full picture of what a model can and cannot do. Further experiments are needed to gain a more complete understanding of model performance.

## Acknowledgements

We thank the four anonymous reviewers who provided valuable feedback. We would also like to thank the members of the Tohoku NLP Group for their cooperation in conducting this research. This work was supported by JST CREST JPMJCR20D2 and JSPS KAKENHI Grant Numbers JP22H00524 and 21K21343.
2310.10879
BLoad: Enhancing Neural Network Training with Efficient Sequential Data Handling
The increasing complexity of modern deep neural network models and the expanding sizes of datasets necessitate the development of optimized and scalable training methods. In this white paper, we addressed the challenge of efficiently training neural network models using sequences of varying sizes. To address this challenge, we propose a novel training scheme that enables efficient distributed data-parallel training on sequences of different sizes with minimal overhead. By using this scheme we were able to reduce the padding amount by more than 100$x$ while not deleting a single frame, resulting in an overall increased performance on both training time and Recall in our experiments.
Raphael Ruschel, A. S. M. Iftekhar, B. S. Manjunath, Suya You
2023-10-16T23:14:56Z
http://arxiv.org/abs/2310.10879v2
# BLoad: Enhancing Neural Network Training with Efficient Sequential Data Handling ###### Abstract The increasing complexity of modern deep neural network models and the expanding sizes of datasets necessitate the development of optimized and scalable training methods. In this white paper, we addressed the challenge of efficiently training neural network models using sequences of varying sizes. To address this challenge, we propose a novel training scheme that enables efficient distributed data-parallel training on sequences of different sizes with minimal overhead. By using this scheme we were able to reduce the padding amount by more than 100\(\times\) while not deleting a single frame, resulting in an overall increased performance on both training time and Recall in our experiments. distributed, training, machine learning, multi-GPU ## I Introduction The increasing complexity of modern deep neural network models and the expanding sizes of datasets necessitate the development of optimized and scalable training methods. Neural networks are commonly trained using multiple GPUs either within a single machine or distributed across a cluster of nodes. Traditional distributed training schemes, such as distributed data-parallel (DDP) [1], have been widely employed. While this scheme is popular, it struggles with data sequences of varied lengths, like videos of different durations. To address this challenge, we propose a novel training scheme that enables efficient DDP training on sequences of different sizes with minimal overhead. ## II Problem Statement and Current Limitations We consider a dataset \(\mathcal{D}\) comprising \(N\) samples, where each sample \(S_{i\in N}\) represents a video with dimensions \(H\times W\times T\). Here, \(H\) and \(W\) denote the height and width of each frame, respectively, while \(T\) represents the duration of the video. 
Our objective is to train a deep neural network model efficiently using a DDP scheme while accommodating varying values of \(H\), \(W\), and \(T\) for each sample \(S_{i}\). While our primary focus is on videos, we expect our method to be applicable to other data types like audio and text. Using PyTorch's Distributed Data-Parallel with datasets of varying lengths can lead to stalled training without any error message. The root of this issue is in the gradient synchronization step. Here, each GPU process collects gradients from other processes to compute an average gradient, which updates the model. If sequences differ in size, each process gets varying sample counts, potentially causing a deadlock as processes await others indefinitely, unable to calculate the gradient. To illustrate the problem, consider the sample dataset from figure 1. It has 8 sequences with lengths varying from 2 to 6 frames. Initiating a DDP training with a batch size of 2 using random sampling can produce situations like that in figure 2. Here, GPU 1 handles two videos, each 2 frames long, while GPU 2 manages two videos, each 6 frames long. After just 2 iterations, GPU 1 completes its batch, leaving it idle, as GPU 2 continues processing. New data is only retrieved once all GPUs finish their batches. Consequently, when GPU 2 tries to gather gradients from GPU 1, it faces an indefinite wait since GPU 1 has no gradient to return. A common strategy to resolve this issue involves padding each sample to match the duration \(T_{max}\) of the longest sequence in the dataset (as illustrated in figure 3). While this method solves the stalling problem, it becomes highly inefficient when \(T_{max}\) is significantly larger than the average sequence length, resulting in substantial padding and unnecessary computations during training. 
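The naive fix described next (padding everything to the longest sequence) can be written as a simple collate step. In this sketch, plain Python lists of frame IDs stand in for \(H\times W\) frames, and the function name and mask output are our own additions.

```python
def pad_batch(sequences, pad_value=0):
    """Pad every sequence to the length of the longest one so that all
    DDP ranks perform the same number of iterations (avoiding the deadlock).
    Returns the padded sequences plus a validity mask (1 = real frame).
    Wasteful when T_max greatly exceeds the average sequence length."""
    t_max = max(len(s) for s in sequences)
    padded = [list(s) + [pad_value] * (t_max - len(s)) for s in sequences]
    mask = [[1] * len(s) + [0] * (t_max - len(s)) for s in sequences]
    return padded, mask
```

The mask lets the loss ignore padded positions, but every padded frame still costs a forward pass, which is the inefficiency the paper's method targets.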
Another strategy entails breaking down each data sample into smaller chunks of size \(H\times W\times T_{block}\) and treating each smaller block as an individual sample, as employed in [2, 3]. While this approach resolves the synchronization issues, it cannot be applied to train neural networks that incorporate feedback, such as DDS [4]. Breaking down the original data sample into smaller pieces destroys the temporal relationships inherent in the original sequence, as shown in figure 4.

Fig. 1: Sample dataset with 8 videos of varying length - Each \(V_{i}\) represents an individual video, and each yellow square represents a frame.

## III Methods

Our proposed method builds upon the padding strategy but significantly reduces wasteful computations. We create blocks of size \(T_{max}\) by concatenating randomly sampled sequences with length \(T_{i}\leq T_{max}\). If we cannot construct a video of size exactly equal to \(T_{max}\), we build a block as close to \(T_{max}\) as possible and then pad it with \(0\)'s to fill the block. Figure 5 shows an example of our proposed solution. Additionally, we create a table containing the starting index of each new video within each particular block. This table can be useful when training a recurrent network, such as the DDS architecture shown in Figure 6, where some information (\(oE_{t-1}\)) from iteration \(n-1\) is used at iteration \(n\). Knowing where a new sequence starts enables resetting/discarding the information from the previous iteration, as it belongs to a different sequence, thus correctly maintaining the temporal dependency of the data inside each block. For a more technical insight into our method, we provide a pseudocode outline in Figure 7.

## IV Experiments & Results

Following [4], we perform experiments on the Action Genome dataset.
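The block-construction step can be sketched as a greedy first-fit packing. This is our reading of the method rather than the authors' code: the paper samples sequences randomly, while the sketch below packs longest-first to stay deterministic, and it assumes every sequence fits within \(T_{max}\). It returns both the padded blocks and the per-block table of sequence start indices used to reset recurrent state at sequence boundaries.

```python
def bload_pack(sequences, t_max, pad_value=0):
    """Pack whole sequences into fixed-size blocks of length t_max,
    padding only the leftover tail of each block, and record where each
    sequence starts inside its block (assumes len(seq) <= t_max)."""
    order = sorted(range(len(sequences)), key=lambda i: -len(sequences[i]))
    blocks, starts = [], []
    for i in order:
        seq = sequences[i]
        for block, idx in zip(blocks, starts):
            if len(block) + len(seq) <= t_max:  # first block with enough room
                idx.append(len(block))
                block.extend(seq)
                break
        else:  # no existing block fits: open a new one
            starts.append([0])
            blocks.append(list(seq))
    for block in blocks:  # pad only the leftovers
        block.extend([pad_value] * (t_max - len(block)))
    return blocks, starts
```

During training, a recurrent model would consult `starts` to drop the carried-over state whenever the current time step coincides with the beginning of a new sequence inside the block.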
This dataset is extensively used in Scene-Graph Detection problems and contains \(7,464\) videos with \(166,785\) frames in the training set and \(1,737\) videos with \(54,371\) frames in the test set. To evaluate the differences between the sampling strategies, we retrain DDS using each strategy mentioned earlier and report the amount of padding added, the number of frames deleted, the time per epoch, and the performance on the recall@20 metric. The Action Genome dataset is apt for these experiments, given its wide range of sequence lengths, from brief 3-frame snippets to sequences as long as 94 frames. Our experiments were conducted on a machine with 8 NVIDIA A100 GPUs with 40GB of memory each and are reported in Table I.

Fig. 2: Deadlock situation when each GPU receives sequences of different lengths. In this situation, after the third iteration, GPU 1 will not have any gradients to report, causing GPU 2 to wait without any error message.

Fig. 3: Naive padding solution - Every sequence on the dataset is padded to match the length of the largest sequence, generally by adding 0's or repeating the last entry of the sequence.

Fig. 4: Sampling solution, where each sequence is trimmed to match a smaller size, usually the length of the average entry in the dataset. In this approach, one sequence might be broken into several smaller portions, which won't allow the training of models with long temporal support.

Fig. 5: Our proposed padding approach - **BLoad** (as in block load) - aims to construct sequences of size \(T_{max}\) using shorter sequences as building blocks.

From the table, it is evident that the naive padding solution results in over \(500k\) padding frames, almost \(4\times\) the original data size. This rendered the training so inefficient that we chose not to complete it for performance evaluation. Interestingly, with the sampling strategy, despite discarding nearly \(2/3\) of the data, we achieved results comparable to or even surpassing several established models such as [5].
We attribute this to the dataset's high frame correlation, leading to marginal gains with added frames. We haven't delved deeper into this observation as it falls outside this manuscript's scope. Our proposed block pad strategy offers clear advantages. It combines zero frame removal with minimal padding, reducing unnecessary computations and enhancing performance. ## V Conclusion In this white paper, we addressed the challenge of efficiently training neural network models using sequences of varying sizes. We proposed a novel training scheme that combines elements of padding and distributed data parallelism to achieve optimal results. By padding sequences with videos of appropriate lengths and employing a table of starting indices, our method reduces wasteful computations while preserving temporal relationships. The proposed approach opens up new possibilities for training models on diverse data types, such as videos, audio, and text, with varying sequence lengths. In future research, we can delve into the method's applicability to different modalities and test its efficacy across various deep learning challenges.
2303.08459
Forecasting Intraday Power Output by a Set of PV Systems using Recurrent Neural Networks and Physical Covariates
Accurate intraday forecasts of the power output by PhotoVoltaic (PV) systems are critical to improve the operation of energy distribution grids. We describe a neural autoregressive model that aims to perform such intraday forecasts. We build upon a physical, deterministic PV performance model, the output of which is used as covariates in the context of the neural model. In addition, our application data relates to a geographically distributed set of PV systems. We address all PV sites with a single neural model, which embeds the information about the PV site in specific covariates. We use a scale-free approach which relies on the explicit modeling of seasonal effects. Our proposal repurposes a model initially used in the retail sector and discloses a novel truncated Gaussian output distribution. An ablation study and a comparison to alternative architectures from the literature shows that the components in the best performing proposed model variant work synergistically to reach a skill score of 15.72% with respect to the physical model, used as a baseline.
Pierrick Bruneau, David Fiorelli, Christian Braun, Daniel Koster
2023-03-15T09:03:58Z
http://arxiv.org/abs/2303.08459v3
Hybrid-Physical Probabilistic Forecasting for a Set of Photovoltaic Systems using Recurrent Neural Networks ###### Abstract Accurate intra-day forecasts of the power output by PhotoVoltaic (PV) systems are critical to improve the operation of energy distribution grids. We describe a hybrid-physical model, which aims at improving deterministic intra-day forecasts, issued by a PV performance model fed by Numerical Weather Predictions (NWP), by using them as covariates in the context of an autoregressive recurrent neural model. Our proposal repurposes a neural model initially used in the retail sector, and discloses a novel truncated Gaussian output distribution. We experimentally compare many model variants to alternatives from the literature, and an ablation study shows that the components in the best performing variant work synergistically to reach a skill score of 7.54% with respect to the NWP-driven PV performance model baseline. ## 1 Introduction Grids of PV systems have become an essential component of modern and future energy distribution systems. However, due to weather conditions, the magnitude of PV power production fluctuates, while the supply to consumers must be adapted to the demand at each point in time. Distribution system operators (DSOs) have increasing and specific requirements for PV power forecasts. Indeed, fluctuating renewables can cause operational issues (e.g., grid congestion), which call for active grid operation. In this context, _intra-day_ forecasts of PV power (i.e., forecasts issued at a fixed time of day for the whole day to come) are critical to facilitate operations. Also, many forecasting models issue point forecasts but hardly characterize the uncertainty attached to them, even though such information can be critical for a DSO to quantify and mitigate risks in an optimal way.
In [10], PV power production is forecasted using a deterministic PV performance model, which involves regional solar irradiance forecasts issued by a Numerical Weather Prediction (NWP) service as inputs. The underlying hypothesis is that solar irradiance is fairly smooth over limited regional areas, and the production curve specific to a PV system will be mainly influenced by how it converts this solar energy to PV power according to its specifications. In [11], the authors of the present paper introduced a model which performs intra-day probabilistic forecasts of PV power production. It combines the PV performance model above with a model based on Long-Short-Term Memory (LSTM) cells [14]. Combining a model based on a set of physical equations with a statistical model is coined a _hybrid-physical_ approach [1]. For training and evaluation, it uses real data provided by Electrois, a DSO in Luxembourg. Results show that this new model improves the baseline performance, while coping with local effects such as PV system shading. The former paper rather targets solar energy specialists, with few details unveiled about how the Machine Learning model that is the cornerstone of the approach has been designed and trained. The present paper aims at filling this gap by providing entirely new material focusing on this complementary view. Specifically, the purpose of the present paper is to focus on neural time series forecasting aspects in the context of this application. The specific contributions of the present work mainly focus on the design of a model architecture and a training procedure which meet the operational needs of a DSO. Our proposal is based on an existing LSTM-based model [15], which we present in a concise and effective way in order to make this work self-contained. In addition, we design a novel truncated Gaussian output component, which we plug into the LSTM-based model.
In Section 2, we give a structured survey of the related work, which positions the problem faced by a DSO and motivates which existing work could be reused or repurposed to suit our needs. After describing our model proposal in Section 3, we provide a thorough experimental evaluation in Section 4. Several variants of our model proposal are compared to alternative models from the literature, and an ablation study allows us to emphasize the specific contribution of each of its components. Finally, we recall some qualitative results to underline how local effects, tainting the PV performance model, are mitigated using our approach.

## 2 Related Work

In Section 2.1, we review seminal PV power forecasting methods. Then in Section 2.2, we survey time series forecasting as addressed in the literature on Machine Learning (ML), from the perspective of its repurposing potential for the PV power forecasting application. Section 2.3 reviews existing hybrid-physical models which relate the most closely to our proposal. Finally, Section 2.4 focuses on the peculiarities in terms of forecasting structure and validation which come with ML approaches to time series forecasting.

### PV Power Forecasting

Most approaches in PV power forecasting model the conversion chain from solar irradiance to electrical power in a PV system. Thus they follow a two-step approach: first, forecasting the solar irradiance, then converting this irradiance to PV power forecasts [1, 1]. The most common way to forecast solar irradiance relies on NWP systems such as the European Centre for Medium-Range Weather Forecasts (ECMWF) Ensemble Prediction System [2]. Every day, it issues hourly regional forecasts for a range of meteorological variables (including solar irradiance) for the 10 days to come, thus including the intra-day range. Intra-day may be defined as up to 6h forecast horizons in the literature [1]. In this paper, we deviate from this definition by considering intra-day as the next 24h, starting at midnight of the same day.

To improve NWP forecasts, or to avoid having to rely on such systems, solar irradiance can also be forecasted using statistical methods and ML models. The simplest include persistence models, which are often adjusted using clear sky models [11]. [20] also review various ML techniques which have been employed for this purpose, e.g., AutoRegressive (AR) models, Feed-Forward Networks (FFN) and k-Nearest Neighbors. In this range of contributions, [12] address the intra-day hourly prediction of solar irradiance using an ensemble of FFN models. Specifically, they implement rolling forecasts by specializing each model to a current time and a prediction horizon.

PV power forecasting is reviewed in detail by [1]. In this landscape, several approaches aim at directly modelling the series of PV power values, without having to rely on solar irradiance forecasts. [10] propose short-term forecasts (<30 min) which exploit the cross-correlation of PV measurements in a grid of PV systems. They hypothesize that clouds casting over a given PV system have a lagged influence on other systems downwind. They optimize the associated time lag and cloud motion vector. [1] also consider a spatially distributed set of PV panels. They directly use PV power values, without converting proxy information such as solar irradiance or cloud density. Similarly to [10], they focus on correlations among stations to help account for intermittency due to clouds. They report that intra-day forecasts are useful for an energy trading strategy, while hour-ahead and intra-hour forecasts serve for managing demand response. [14] present AR approaches to PV power forecasting. They focus on forecasting one and two hours ahead, where NWP models tend to under-perform.
Several models are compared, among which are persistence, linear models such as AutoRegressive Integrated Moving Average (ARIMA) [15], and FFN. They found that FFN perform best, with improvements brought by the optimization of FFN parameters, input selection and structure using a Genetic Algorithm. They conjecture that binning data according to the associated season, and learning per-bin models, should improve forecasting ability overall, even if this approach has recently been criticized [16].

### ML approaches for time series forecasting

Among other related work, Section 2.1 surveyed some contributions which involved ML methods to forecast solar irradiance and PV power production. In this section, we generalize this view by surveying recent work in time series forecasting at large. Methods in this section were generally not applied to the application context considered in the present paper, but could be repurposed _a priori_. Besides neural and ARIMA models, seminal ways to forecast time series include the Croston method, an exponential smoothing model dedicated to intermittent demand forecasting [17]. It is notably used as a baseline method in [16], along with the Innovation State-Space Model (ISSM) [18]. Modern, so-called _deep_ neural network architectures exploit the structure of data: sequential, in the case of the Long-Short-Term Memory (LSTM) [19] and the Gated Recurrent Unit (GRU) [12]; 2D or 3D, in the case of convolutional networks. Even though recurrent models such as the LSTM may appear outdated, they remain popular in the recent literature thanks to improvements to training and inference procedures carried by modern toolboxes such as Tensorflow [1] or MXNet [15], as well as the encoder-decoder mechanism, in which a context sequence is encoded and conditions the prediction of the target sequence.
It was initially codified for the GRU, then transferred to the LSTM, leading to continued usage in recent contributions [16, 17]. These models contrast with the seminal Feed-Forward Network (FFN), in which all layers are fully connected to the previous and next layers, up to the activation layer [11]. Salinas et al. propose DeepAR, which implements flexible forecasting for univariate time series [17]. Formally, it defines a _context_ interval (a chunk of past values) and a _prediction_ interval (the set of values to be forecasted), and the model is optimized end-to-end w.r.t. prediction interval forecasts. This contrasts with linear models such as ARIMA, which are optimized for one-step-ahead forecasts. Also, instead of point forecasts, DeepAR predicts model parameters, which can be used to compute sample paths and empirical quantiles, which can be highly valuable in our context. In this case, a family of probability distributions has to be chosen so as to fit the time series at hand. The model was initially aimed at retail business applications, but it can be adapted to other types of data simply by changing the family of the output probability distribution. It is based on the LSTM model architecture. The model supports the adjunction of covariates, i.e., time series which are available for both the context and prediction intervals at forecast time. By repeating the same value over whole intervals, static covariates are also supported. Such flexibility meets all the requirements of our hybrid-physical approach (i.e., NWP covariates and PV system descriptive features). In retail applications, input data may have highly variable magnitude (e.g., depending on item popularity or time in the year). The authors observe an approximate power law between item magnitude and frequency (i.e., values are less likely as they get large, and reciprocally).
They claim that grouping items to learn group-specific models, or performing group-specific normalizations, as previously done in the solar and PV power forecasting literature [14, 15], are not good strategies in such a case. They propose a simple alternative scheme, where samples are scaled by an item-dependent factor computed using the context interval. [18] propose a time series classification model inspired by Inception-v4 [16]. Essentially, they transfer the multi-scale pattern extraction capability of convolutional neural networks to 1D data such as time series. A convolutional encoder is tested by Wen et al. in the context of their multi-horizon quantile forecaster [21]. Instead of forecasting probabilistic model parameters, this model directly forecasts quantiles in a non-parametric fashion. However, it suffers from the quantile crossing problem: forecasted values may have ranks inconsistent with the quantile they are attached to. [11] is another alternative to DeepAR. Similarly to [21], it does not rely on probability distribution outputs, and implements conditional quantile functions using regression splines instead. Spline parameters are fit using a neural network directly minimizing the Continuous Ranked Probability Score (CRPS), which is then used as a loss function. This results in a more flexible output distribution, and an alternative to other flexible schemes (e.g., the mixture of distributions in the context of [17]). However, it currently lacks a publicly available implementation. Multivariate forecasting consists in modelling and forecasting multiple time series simultaneously, by contrast with univariate forecasting. The seminal way to achieve this is with the Vector AutoRegression (VAR) model, an extension of the linear autoregressive model to multiple variables [14].
As this model has a hard time dealing with many variables (e.g., items in the retail domain), neural network-based models such as DeepVAR were designed as an alternative [10]. It can be thought of as a multivariate extension of [11]. DeepVAR models interactions between time series, e.g., as resulting from causality or combined effects. It uses a Copula model, which models interactions between time series and elegantly copes with time series of varying magnitude, alleviating the need for an explicit scaling mechanism. A single multivariate state variable underlying a LSTM model is used for all time series. Empirical cumulative distributions serve to compute sample paths and quantiles. Only static covariates (i.e., constant over the context and prediction intervals) were considered in this paper. They define a low-rank parametrization, which opens the possibility of dealing with a very large number of time series. Some prior work involved deep learning in the context of PV power forecasting. For example, [1] consider one-hour-ahead forecasts (instead of intra-day, as aimed at in this paper). They used a single LSTM layer without the encoder-decoder mechanism. Also, they consider point forecasts. Data for two PV sites is used for the experiments, with roughly the same power magnitude for both sites (approx. 3.5kW). Models are trained for each site separately. Alternatively, in this paper we address all sites with a single model in a scale-free approach, addressing an arbitrary number of sites with little to no model size overhead. Finally, the locations associated with the datasets are subject to a dry climate, which is simpler to predict [2]. Our application testbed is a temperate area, subject to frequent and abrupt changes on a daily basis, and therefore much more challenging to predict.
[14] also address PV power forecasting, decorrelating scale-free forecasts obtained with a LSTM from seasonal effects modelled separately using time correlation features and partial daily pattern prediction. However, they focus on forecasts aggregated at a daily scale, whereas we consider hourly data in this paper. In addition, our approach is end-to-end, with no independent modelling of seasonal effects.

### Hybrid-physical approaches

[1] focus on wind speed and power forecasting. They consider hourly forecasts for 72 hours ahead, using Numerical Weather Prediction (NWP) forecasts as additional inputs, thus proposing an early combination of observation data with NWP covariates. The recurrent model used then (diagonal recurrent neural networks [12]) was superseded by alternative models such as LSTM [10] and GRU [15] in the recent literature, as seen in the previous section. Another early work is proposed by [13], who combine NWP covariates and neural networks for hourly and daily solar irradiance forecasting. [17] present a regional PV power forecasting system, with hourly resolution up to 72h ahead. Their approach combines clustering and numerical optimization, and it is compared to regression methods such as ElasticNet [18], SARIMAX [19], or Random Forests [19]. The CRPS metric is used for evaluation. Their approach is not autoregressive; rather, they directly predict future PV power from solar irradiance and temperature forecasts obtained from a proprietary system which refines NWP forecasts according to local conditions. Alternatively, our approach tries to combine the benefits of using NWP forecasts with an autoregressive model of the PV power observations.

### Forecast structure and validation

Figure 1 distinguishes _regular_ forecasts from _rolling_ forecasts, which are the two main strategies to consider for extracting fixed-size blocks from time series.
For simplicity, the figure considers hourly forecasts and 24-hour context and prediction intervals, but the definition is straightforward to generalize. In brief, two consecutive regular forecasts are offset by the size of the prediction interval, whereas rolling forecasts are offset by the frequency of the time series (hourly, in Figure 1). In other words, with 24-hour prediction and context intervals, regular forecasts happen at the same time every day, whereas rolling forecasts are issued every hour for the whole prediction interval. In this process, forecasts beyond the next hour are refreshed every hour. Works such as [11] consider regular forecasts, as the forecast time is tied to the availability of NWP data. As we use similar NWP covariates, regular forecasts are a requirement in our work too. Alternatively, [12] address rolling forecasts by having a distinct model for each possible starting time in the day. Let us note that some models (e.g., [13]) allow encoding seasonal information on the predicted time steps (e.g., hour in day, day in week) as covariates. Therefore, they can be used interchangeably with regular and rolling forecasts, provided an adapted training set is available. Time series forecasting models are typically evaluated using a variant of the Root-Mean-Square Error (RMSE) metric. When quantiles can be computed, the Continuous Ranked Probability Score (CRPS) rates the compatibility of an observation with a set of quantiles [10]. This metric is generally used for evaluating models which output forecasting quantiles [10, 11, 12]. [12] discuss the problem of cross-validation, and more generally validation, in the context of time series forecasting. Original formulations of cross-validation methods often assume that data items are independent. They cannot be used out of the box with time series, as the sequential structure of the latter invalidates the underlying hypotheses.
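Returning to Figure 1, the offset rule distinguishing regular from rolling forecasts can be sketched in a few lines. This is a minimal illustration; the function and parameter names are ours, not taken from any library:

```python
import numpy as np

def sample_offsets(n_steps, context=24, prediction=24, rolling=False):
    """Return the start index of each (context, prediction) block.
    Consecutive regular forecasts are offset by the prediction size;
    rolling forecasts are offset by the series frequency (one step)."""
    step = 1 if rolling else prediction
    last_start = n_steps - (context + prediction)
    return list(range(0, last_start + 1, step))

# One week of hourly data: one regular sample per day, one rolling per hour.
regular = sample_offsets(24 * 7)
rolling = sample_offsets(24 * 7, rolling=True)
```

With 24-hour intervals, `regular` yields one block per day while `rolling` yields one per hour, matching the two strategies of Figure 1.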
Figure 1: Distinction between regular and rolling forecasts. \(t_{0}\) denotes the present time step in the context of a given data sample.

They recognize that the seminal way to validate models with time series is to train a model using the first \(t_{0}\) values, and validate using the last \(T-t_{0}\) values. However, this approach is hardly compatible with cross-validation schemes, and yields weak test error estimates. By virtue of the bias-variance tradeoff [1], this issue has moderate impact on models with strong bias such as linear models. However, over-parametrized models, such as most neural models presented in Section 2.2 (e.g., [12, 12, 13, 14]), can be significantly affected, and exhibit a strong tendency to overfit (even if recent theory shows that, with careful consideration, this problem can be correctly addressed [15]). For mitigation, [1] recommend blocked cross-validation, in which time series segments of size \(\tau\ll T\) are used for independent training and validation, for model selection and test error computation. As we also use deep learning as a building block in our approach, we carefully consider these recommendations in our experimental design (see Section 4).

## 3 Model Description

The survey of related work in Section 2 led us to choose DeepAR [12] as a framework to develop our hybrid-physical implementation. We adapted the official implementation of the model [1] to suit our needs. Another model which offers the relevant flexibility, as well as a public implementation, is the model by [17]. We compare to this model in our experimental section. Alternatively, PV sites could have been considered as dimensions in a multivariate forecasting problem, thus possibly forecasting all sites at a given time at once using DeepVAR [12]. However, its limitation to static covariates prevents us from implementing the projected hybrid-physical approach.
Also, it is unclear how new PV sites, as well as missing values (quite common, as PV sites may witness independent breakdowns and interruptions of measurements or data communication), can be handled with such multivariate modelling. Instead, we choose to model the PV site using covariates.

### DeepAR model

In the remainder of the paper, for clarity of the derivations, scalar variables are represented in normal font, vector variables in bold font, and matrix variables in capital bold font. This section essentially paraphrases [12], but ensures the present paper is self-contained, while introducing the necessary formalism. Let us assume we have a data set of \(N\) univariate time series, each with fixed size \(T\). Each observed time series in \(\{\mathbf{z}_{n}\}_{n\in 1,\ldots,N}\) may relate to an item in store in the retail context, or to a distinct PV system in the context addressed in this paper. \(t_{0}\in[1\ldots T]\) denotes the present time, i.e., the latest time point for which we assume \(z_{n,t}\) is known when issuing the forecast. \([1\ldots t_{0}]\) is then the _context_ interval, and \([t_{0}+1\ldots T]\) is the _prediction_ interval. The goal of the model is to forecast \(\mathbf{z}_{n,t_{0}+1:T}=[z_{n,t_{0}+1},\ldots,z_{n,T}]\) with the knowledge of \(\mathbf{z}_{n,0:t_{0}}=[z_{n,0},\ldots,z_{n,t_{0}}]\). We also consider a set of covariates \(\mathbf{X}_{n,1:T}=[\mathbf{x}_{n,1},\ldots,\mathbf{x}_{n,T}]\) which are known for all \(t\in 1,\ldots,T\) at time \(t_{0}\).
In this context, the model is defined by the following product of likelihood factors, also summarized in Figure 2:

\[Q_{\Theta}=\prod_{n=1}^{N}\prod_{t=t_{0}+1}^{T}q_{\Theta}(z_{n,t}|\mathbf{z}_{n,1:t-1},\mathbf{X}_{n,1:T})=\prod_{n=1}^{N}\prod_{t=t_{0}+1}^{T}p(z_{n,t}|\theta(\mathbf{h}_{n,t},\Theta)) \tag{1}\]

The model is both autoregressive and recurrent, as the state variable

\[\mathbf{h}_{n,t}=\Theta(\mathbf{h}_{n,t-1},z_{n,t-1},\mathbf{x}_{n,t}) \tag{2}\]

is obtained from the LSTM model \(\Theta\), in which the state variable and observation of the previous time step are both reinjected. The model also depends on a parametrized function \(\theta\), which learns the mapping between the state variable \(\mathbf{h}\) and the parameters of the probability distribution \(p\). In effect, as seen in Figure 2, at training time observations are injected as \(z_{n,t-1}\) in Equation (2). However, at test time actual observations are not available for the prediction interval, so we sample \(\tilde{z}_{n,t}\sim p\) and inject these samples as proxy observations. Doing so yields sample paths, which can be repeated and serve to compute empirical quantiles over the prediction interval, instead of simple point estimates. In this paper, when point estimates \(\hat{z}_{n,t}\) are needed, we take them as the empirical median of a set of sample paths. In Figure 2, we can see that the LSTM _encodes_ the context interval into \(\mathbf{h}\), which is then _decoded_ for the prediction interval. The same LSTM model \(\Theta\) is used for encoding and decoding. The negative log of expression (1) is used as the loss function for training all parameters of the model in an end-to-end fashion.

Figure 2: Illustration of the DeepAR model. Observed variables are represented as shaded boxes, and latent variables as blank boxes. For the context interval, \(z\) variables are always known. For the prediction interval, the model behaves differently at training and test time. At test time, \(\tilde{z}\) variables are sampled according to \(p\), forming sample paths. Plain lines represent dependencies between random variables, and the dashed line highlights the reinjected sample.

The form of function \(\theta\) depends on the probabilistic model in expression (1): for example, if \(p\) is chosen as a Gaussian, appropriate functions would be:

\[\theta_{\mu}(\mathbf{h}_{n,t}) =\mathbf{w}_{\mu}\mathbf{h}_{n,t}+b_{\mu}\]
\[\theta_{\sigma}(\mathbf{h}_{n,t}) =\log(1+\exp(\mathbf{w}_{\sigma}\mathbf{h}_{n,t}+b_{\sigma})) \tag{3}\]

We note that the softplus function in (3) ensures \(\sigma\) is mapped to a positive real number. Among possible probabilistic models and mapping functions, the official DeepAR implementation [1], used in the experiments for this paper, features Gaussian, Student, negative binomial, and mixture distributions. A mixture distribution composes several distributions of the same nature using mixture weights, which have their own dedicated \(\theta\) function.

### Positive Gaussian likelihood model

As PV power measurements are bound to be non-negative real numbers, a contribution of this paper is to allow the Gaussian distribution to be truncated from below at 0, referred to as the _positive Gaussian_ distribution in the remainder of this paper. Formally, this yields:

\[p(z_{n,t}|\theta_{\mu},\theta_{\sigma})=\frac{1}{\theta_{\sigma}\sqrt{2\pi}}\frac{\exp\left(-\frac{1}{2}\frac{(z_{n,t}-\theta_{\mu})^{2}}{\theta_{\sigma}^{2}}\right)}{1-\Phi(-\frac{\theta_{\mu}}{\theta_{\sigma}})} \tag{4}\]

with \(\Phi\) the cumulative distribution function of the standard Gaussian (i.e., with mean 0 and standard deviation 1). Besides adapting the loss function (see Equation (1)) to this new probability distribution function, the same \(\theta_{\sigma}\) function as for the Gaussian distribution can be used.
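Sampling from this truncated distribution can be sketched as follows, anticipating the inverse-CDF scheme formalized next in Equation (5). This is a minimal sketch assuming scalar parameters; `NormalDist` from Python's standard library supplies \(\Phi\) and \(\Phi^{-1}\), and the function names are ours:

```python
import numpy as np
from statistics import NormalDist

_std = NormalDist()  # standard Gaussian: provides cdf (Phi) and inv_cdf

def softplus(a):
    # softplus mapping, as used to keep distribution parameters positive
    return np.log1p(np.exp(a))

def sample_positive_gaussian(mu, sigma, u):
    """Inverse-CDF sampling from a Gaussian truncated below at 0:
    map a uniform u in [0, 1) into the admissible CDF range [Phi(-mu/sigma), 1),
    then invert the standard Gaussian CDF and rescale."""
    a = _std.cdf(-mu / sigma)
    return _std.inv_cdf(a + u * (1.0 - a)) * sigma + mu

# Illustrative parameters: mu = 1, sigma = 2, 1000 uniform draws.
rng = np.random.default_rng(0)
samples = np.array([sample_positive_gaussian(1.0, 2.0, u)
                    for u in rng.uniform(size=1000)])
```

By construction every sample is non-negative, and the truncation shifts the sample mean above the untruncated mean \(\mu\).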
To make sure the range of \(\theta_{\mu}\) is also positive, for the positive Gaussian we use:

\[\theta_{\mu}(\mathbf{h}_{n,t})=\log(1+\exp(\mathbf{w}_{\mu}\mathbf{h}_{n,t}+b_{\mu}))\]

From an application of the Smirnov transformation [1] to the case at hand, samples from a positive Gaussian distribution can be obtained as:

\[\tilde{z}=\Phi^{-1}\Bigg{(}\Phi\big{(}-\frac{\theta_{\mu}}{\theta_{\sigma}}\big{)}+\tilde{u}\bigg{(}1-\Phi\big{(}-\frac{\theta_{\mu}}{\theta_{\sigma}}\big{)}\bigg{)}\Bigg{)}\theta_{\sigma}+\theta_{\mu} \tag{5}\]

where \(\tilde{u}\) is a uniform sample in \([0,1]\).

## 4 Experiments

### Data

Section 3 presented the forecasting model underlying our experiments in general terms, but here we recall that we focus on a specific application and its peculiarities: forecasting the power output of a set of PV systems. The variable to forecast (\(z\) in Section 3) is the average power output of a PV system during the last hour, in Watts. As hypothesized in Section 2.4, it is thus an hourly time series. For our experiments, we used data recorded by 119 PV systems located in Luxembourg between 01/01/2020 and 31/12/2021. They are distributed over a relatively small (4 \(\times\) 4 km) area. These PV systems are managed by Electris, a DSO in Luxembourg which collaborated with the authors of this paper in the context of a funded research project. Besides PV power measurements, each time step is associated with intra-day, day-ahead, and 2-days-ahead predictions by the physical PV performance model described in [13], which uses ECMWF NWP solar irradiance forecasts as input (referred to as _24h_, _48h_, and _72h_ NWP forecasts, respectively, in the remainder of the paper). We use these three forecast series as covariates (**X** in Section 3), as they are available beforehand for the prediction interval. The model also supports the adjunction of _static_ covariates, which are constant for a given time series.
Relating to Section 3, we note that this simply amounts to setting the associated \(\mathbf{x}_{n,1:T}\) to a constant. In the context of the present work, we consider a _system ID_ categorical feature, which is simply the system ID converted to a categorical feature with 119 modalities. We also consider _system description_ continuous features. Among the set of descriptors provided by system vendors and the characteristics of their setup, we retain the following features, as they are expected to influence PV power curves and magnitude: the _exposition_ of the system (in degrees), its _inclination_ (in degrees), its nominal _power_ (in Watts) and its _calibration factor_ (unitless, tied to the system's on-site setup). As the DeepAR implementation expects normally distributed features, we standardize features so that they have zero mean and unit standard deviation. As the nominal power of our PV systems varies over a large range (from 1.4kW to 247kW), a scaling scheme is necessary to properly handle measured values. As mentioned in Section 2, we address this as implemented in DeepAR, by dividing all measurements in a given sample by \(\frac{1}{t_{0}}\sum_{1}^{t_{0}}|z_{t}|\). Also, as NWP covariates are expected to be distributed similarly to their associated measurements, they are normalized likewise.

### Loss function and metrics

The sum of negative log-likelihoods of observations \(z_{n,t}\) (at the top of Figure 2) is used as a loss function to fit all model parameters \(\{\mathbf{w}_{\mu},\mathbf{w}_{\sigma},\Theta\}\) in an end-to-end fashion. As commonly done in the literature (see Section 2.2), we use a fixed size for the context and prediction intervals in our experiments. As we are interested in intra-day regular forecasts (see Section 2.4), with hourly data this means that the prediction interval has size 24.
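The per-sample scaling described above (dividing each sample and its NWP covariates by the mean absolute value of the context interval) can be sketched as follows. This is a minimal illustration; the guard for an all-zero context is our addition, not part of the described scheme:

```python
import numpy as np

def scale_sample(z, x_nwp, t0):
    """Divide measurements by the mean absolute value of the context
    interval z[:t0]; NWP covariates are scaled by the same factor."""
    nu = np.abs(z[:t0]).mean()
    nu = nu if nu > 0 else 1.0  # guard (ours): all-zero context, e.g. night only
    return z / nu, x_nwp / nu, nu

# Toy hourly sample: 4 context steps, 2 prediction steps.
z = np.array([0.0, 2.0, 4.0, 2.0, 0.0, 3.0])
x = np.array([0.0, 1.0, 3.0, 2.0, 1.0, 2.0])
z_scaled, x_scaled, nu = scale_sample(z, x, t0=4)  # nu = mean(|0,2,4,2|) = 2
```

After scaling, samples from a 1.4kW and a 247kW system live on comparable scales, which is the point of the scheme.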
The midnight run of the ECMWF NWP forecasts for the 3 days to come is broadcast each day early in the morning, before sunrise, while PV power is still obviously zero. We assume the associated covariates are instantly available then. For simplicity, we thus choose midnight as the reference time in the day for the regular forecasts (i.e., \(t_{0}\) in Section 3). In this context, the 24h, 48h and 72h NWP covariates associated with predicted time step \(t_{0}+h\) will have been issued at time steps \(t_{0}\), \(t_{0}-24\) and \(t_{0}-48\), respectively. As the maximal horizon of the collected NWP forecasts is 72h, and we previously set the intra-day prediction interval to 24h, as a rule of thumb we used 48h as the context interval, so that a training sample covers 72h. In practice, preliminary tests showed that using a larger context interval would not bring visible improvements, and using a multiple of the prediction interval size facilitates the creation of train and test data sets. To measure model performance, RMSE-based metrics are common in energy utility companies, notably as they penalize large errors [16]. We used the normalized Root-Mean-Square Error (nRMSE) defined as:

\[\text{nRMSE}(\hat{\mathbf{Z}},\mathbf{Z})=\sqrt{\frac{1}{N}\sum_{n=1}^{N}\frac{\frac{1}{T-t_{0}}\sum_{t=t_{0}+1}^{T}(\hat{z}_{nt}-z_{nt})^{2}}{P_{n}^{2}}} \tag{6}\]

with \(P_{n}\) the nominal power of PV system \(n\), \(\hat{\mathbf{Z}}=\{\hat{z}_{nt}\}\) the estimated point forecast, and \(\mathbf{Z}=\{z_{nt}\}\) the observed power. This nRMSE allows measuring the performance of a point estimate forecast in such a way that PV systems with larger nominal power do not dominate the error metric. This is a field requirement, as PV systems have private owners, who have to be treated equally, irrespective of the nominal power of their system. In practice, nRMSE can be interpreted as a percentage of the PV system nominal power. To evaluate the performance of a proposed system w.r.t. a reference, the _skill score_ is derived from RMSE metrics as:

\[\text{Skill score}=1-\frac{\text{nRMSE}(\hat{\mathbf{Z}},\mathbf{Z})}{\text{nRMSE}(\hat{\mathbf{Z}}_{\text{ref}},\mathbf{Z})} \tag{7}\]

As presented in Section 3, the models trained in the context of this work output prediction quantiles. We use the median as the point estimate forecast; in addition, we compute the CRPS metric, commonly used in related work [17, 18, 19], which rates the quality of the prediction quantiles as a whole:

\[\text{CRPS}(F^{-1},\mathbf{Z}) =\frac{1}{N(T-t_{0})}\sum_{n=1}^{N}\sum_{t=t_{0}+1}^{T}\int_{0}^{1}2\Lambda_{\alpha}(F^{-1}(\alpha),z_{nt})\,d\alpha \tag{8}\]
\[\Lambda_{\alpha}(F^{-1}(\alpha),z_{nt}) =(\alpha-\mathcal{I}_{[z_{nt}<F^{-1}(\alpha)]})(z_{nt}-F^{-1}(\alpha))\]

with \(F^{-1}\) the quantile function of the predictor (which returns the quantile level in Watts associated with a probability \(\alpha\in]0,1[\)), \(\Lambda_{\alpha}\) the _quantile loss_, and \(\mathcal{I}_{[c]}\) the indicator function associated with logical clause \(c\). As discussed in Section 3, the quantile function is estimated empirically using a set of sample paths \(\{\hat{\mathbf{z}}_{n}\}\). Intuitively, the quantile loss gets larger when observations are far from the median, as measured by distribution quantiles. This also penalizes models which are excessively confident, and rewards models which better estimate the expected accuracy of their point forecast. In our experiments, we use 100 paths per sample, from which empirical quantiles are computed. PV power is naturally zero at night time. Therefore, including these time steps in metric computation would bias nRMSE and CRPS towards 0. To prevent this, we exclude nightly time steps, by limiting the terms accounted for in Equations (6) and (8) to time steps where the value averaged over the whole data set is significantly different from zero.
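Equations (6)-(8) can be computed directly from arrays of forecasts and observations. A sketch (function names and array-shape conventions are ours; the CRPS integral is approximated by averaging the quantile loss over a grid of levels estimated from sample paths):

```python
import numpy as np

def nrmse(z_hat, z, p_nom):
    """Equation (6): per-system squared errors normalized by nominal
    power P_n. z_hat, z: (N, H) arrays; p_nom: (N,) nominal powers."""
    per_system = ((z_hat - z) ** 2).mean(axis=1) / p_nom ** 2
    return np.sqrt(per_system.mean())

def skill_score(z_hat, z_ref, z, p_nom):
    """Equation (7): improvement over a reference forecast (e.g. 24h NWP)."""
    return 1.0 - nrmse(z_hat, z, p_nom) / nrmse(z_ref, z, p_nom)

def crps_from_paths(paths, z):
    """Empirical CRPS for one series: average quantile loss over levels,
    with quantiles estimated from sample paths (paths: (S, H), z: (H,))."""
    alphas = (np.arange(1, paths.shape[0] + 1) - 0.5) / paths.shape[0]
    q = np.quantile(paths, alphas, axis=0)        # (S, H) empirical quantiles
    indicator = (z[None, :] < q).astype(float)
    loss = (alphas[:, None] - indicator) * (z[None, :] - q)
    return 2.0 * loss.mean()
```

A perfect point forecast yields nRMSE 0 (hence skill score 1 against any imperfect reference), and sample paths collapsing onto the observations yield CRPS 0.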
### Validation scheme

Following recommendations by [17], we create training samples by cutting the data set into fixed-size 72h segments, with \(t_{0}\) in each segment being midnight, 24h before the end of the segment. Assuming we extracted the segment at the beginning of the time series, we then shift the offset 24h forward, so that the next segment includes the previous prediction interval in its context interval (see, e.g., top of Figure 1). As we treat all PV systems as independent time series, this results in \(O(CD)\) series, with \(C\) the number of PV systems and \(D\) the number of values \(t_{0}\) can take in the original time series. Our number of PV systems and temporal collection bounds would yield 86989 samples. However, PV systems may exhibit missing or erroneous measurements for several reasons (e.g., power outage, bad manipulation, faulty sensor). Figure 3 summarizes how missing values are distributed in the data set. The l.h.s. of Figure 3 shows that missing values are not uniformly distributed across PV systems. Approximately one third of the systems have no missing value, another third have a bit less than 20% of missing values, and the last third between 25% and 50%. The under-representation of this last third can be problematic. The r.h.s. of Figure 3 shows that these missing values are not evenly distributed in time: this indicates that a group of systems may have been offline for a contiguous time frame during late winter and spring. Actually, most missing values are linked to systems started later than the others in year 2020. The associated periods are therefore under-represented, but we note that any month has at most 30% missing data. In the remainder, we consider that this bias remains in a range which makes uniform sampling w.r.t. time acceptable for building training batches.
In order to facilitate processing, and as sample cuts are aligned with day frames, we detect and exclude days matching one of the following patterns: more than two consecutive missing values, measurements blocked at a constant value, or aberrant values (found by visual inspection). This results in 67666 valid day frames. The PV systems are distributed over a relatively small area in Luxembourg: it is therefore expected that prediction intervals for different systems but the same absolute time attached to \(t_{0}\) will be highly correlated. In order to validate this intuition, we computed all \(D\) intra-day correlation matrices between systems in our data set. Specifically, we defined intra-day time steps (excluding nightly time steps) as observations and PV systems as variables, resulting in \(D\frac{C(C-1)}{2}\) distinct correlation values. We observe that the median of the distribution of these correlation values is 0.95, which confirms a very high correlation between systems for a given day. As a consequence, sampling training, validation and test sets uniformly from the \(O(CD)\) series would result in _data leakage_, i.e., the model would be able to overfit without harming the test error, as identical samples (up to scale) would be scattered across training, validation and test sets.

Figure 3: _l.h.s._: Proportion of missing values per system. _r.h.s._: Proportion of missing data per associated month.

Let us note an unexpected positive benefit of this strong intra-day correlation: the under-representation of some systems is then much less problematic. The only remaining issue would pertain to estimating the parameters associated with the static categorical modalities of these systems, when using system ID static covariates. We hypothesise that at least 50% of represented day frames is sufficient to perform this estimation.
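The per-day correlation analysis described above (intra-day time steps as observations, PV systems as variables) can be sketched as follows, with toy data standing in for the real measurements:

```python
import numpy as np

def median_intraday_correlation(power):
    """power: (D, C, H) array of D days, C systems, H intra-day hours.
    Collects the C(C-1)/2 pairwise correlations for each day and
    returns the median over all D * C(C-1)/2 values."""
    corrs = []
    for day in power:                      # day: (C, H)
        r = np.corrcoef(day)               # (C, C) correlation matrix
        iu = np.triu_indices_from(r, k=1)  # distinct pairs above the diagonal
        corrs.extend(r[iu])
    return np.median(corrs)

# Toy data: three systems tracking the same daily profile up to scale,
# plus small noise, over four days (illustrative only).
rng = np.random.default_rng(0)
profile = np.sin(np.linspace(0, np.pi, 12))
power = np.stack([np.stack([s * profile + rng.normal(0, 0.01, 12)
                            for s in (1.0, 2.0, 5.0)])
                  for _ in range(4)])
med = median_intraday_correlation(power)
```

Scaled copies of a shared profile correlate near 1, mirroring the 0.95 median observed on the real data.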
To prevent the data leakage problem, we first group the time series by the absolute time attached to their respective \(t_{0}\) (hence grouping samples over all PV systems for a given \(t_{0}\)), and sample 60% of the \(D\) time steps as the training set. We use a static temporal pattern, in order to ensure that each month and season is represented fairly evenly. Validation and test sets are uniformly sampled as halves of the remaining 40%, irrespective of the PV system, which is not an issue as the goal of the validation error is to be an estimate of the test error, provided parameters are not explicitly fitted on the former. The validation set is used to implement early stopping, and to select the model before it starts to overfit. The test set serves to compute the metrics described in Section 4.2. To choose the cut between validation and test, and ensure the validation error is a fair proxy of the test error, we resample cuts until the RMSE and nRMSE between ground truth and intra-day NWP forecasts (which are considered as constant and known in advance, and are the most relevant baseline to compare to) are equal up to a small threshold. Using the cutting procedure defined so far, we obtain 40670 training, 13498 validation and 13498 test samples.

### Hyper-parameters

The models were trained using the Adam optimizer with learning rate \(10^{-3}\) and batch size 64, for 200 epochs. Samples are reshuffled at the beginning of each epoch. In the end, we implement a form of early stopping, by selecting the model with the best validation error. DeepAR uses 2 LSTM layers by default; we stick to this parametrization. Two free parameters remain: the LSTM hidden layer size, and the number of components when a mixture distribution output is used. Figure 4 shows the results of a hyper-parameter search over these parameters, using the Gaussian distribution as mixture components.
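The day-wise grouping that prevents leakage can be sketched as follows. This is a simplified sketch: the paper uses a static temporal pattern for the 60% training days, whereas here days are shuffled at random for brevity, and all names are ours:

```python
import numpy as np

def split_by_day(t0_values, train_frac=0.6, seed=0):
    """Group samples by their absolute t0 (one group per day, covering all
    systems), then split whole days: 60% train, 20% validation, 20% test.
    Splitting entire days prevents leakage between highly correlated
    same-day samples from different PV systems."""
    days = np.unique(t0_values)
    rng = np.random.default_rng(seed)
    rng.shuffle(days)
    n_train = int(train_frac * len(days))
    n_val = (len(days) - n_train) // 2
    masks = {
        "train": np.isin(t0_values, days[:n_train]),
        "val": np.isin(t0_values, days[n_train:n_train + n_val]),
    }
    masks["test"] = ~(masks["train"] | masks["val"])
    return masks

# Toy example: 10 days, 3 systems each, i.e. 30 samples.
t0 = np.repeat(np.arange(10), 3)
m = split_by_day(t0)
```

Every sample lands in exactly one split, and all samples sharing a \(t_{0}\) land in the same split, which is the property that blocks the leakage described above.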
Median results from 6 independently trained models are displayed in the graphs, along with \(\pm 1\) standard deviation error bars. First, we determine the optimal LSTM hidden layer size using a single-component mixture, on the l.h.s. of Figure 4. We see that beyond 100 units, the validation performance is only marginally improved: we therefore retain 100 as the hidden layer size for all our experiments. The r.h.s. of Figure 4 shows the validation performance w.r.t. the number of mixture components using this layer size. Using 2 components yields the best results, so we retain this mixture size for the remainder of the experiments.

### Model comparison strategy

Besides these hyper-parameters, a large search space of models results from using a given distribution output (Gaussian, Student, positive Gaussian), one or two mixture components, and whether or not to use NWP, system ID, or system description covariates. Instead of comparing all possible combinations, we opted for a stepwise approach. We first compared 2-component mixtures of Gaussian, Student, and positive Gaussian components, with NWP covariates, but without system static covariates (models 3 to 5 in Table 1). We chose this configuration as a middle ground, as using the NWP features implements the hybrid-physical approach advocated in this paper. This first comparison highlighted that the Student distribution output yields significantly inferior results in terms of nRMSE. We thus exclude the Student distribution from further tested combinations. This middle ground can be compared to the alternative quantile regression architecture proposed by [23], with access to equivalent input information (model 13). Then we performed an ablation study of our middle ground. First, we tested the consequence of using a single component instead of a mixture (models 1 and 2). We also evaluated performance without NWP covariates, i.e., implementing purely autoregressive models (models 10 and 11).
The latter are compared to an FFN model (model 12), which forecasts \(\mathbf{z}_{t_{0}+1:T}\) as a function of \(\mathbf{z}_{1:t_{0}}\) without any form of autoregression or covariate support, so that the value of using LSTM cells is evaluated _per se_. Finally, we evaluate the improvement brought to the middle-ground configuration by using the system ID (models 6 and 8) and system description features (models 7 and 9). The system ID is also used for model 11, as a way to estimate the best performance that can be reached without NWP features (i.e. using a mixture of the best distribution output _a posteriori_). This can be of practical interest, as access to ECMWF solar irradiance forecasts is free for research purposes, but requires a subscription for industrial applications.

### Results and interpretation

Results are given in Table 1. In [13], skill scores are computed using a 24h persistence model, adjusted according to the clear-sky PV power for the day under consideration. This is a common baseline in the solar energy domain [10]. In this paper, we rather consider the NWP covariates as the baseline against which our results have to be evaluated. We therefore use the 24h NWP covariates as \(\hat{\mathbf{Z}}_{\text{ref}}\) in Equation (7) when computing the skill scores presented in Table 1. This is a stronger baseline for skill score computation, as it was shown to significantly outperform persistence forecasts [15]. In this experimental section, this means that a model with a skill score lower than 0 is not able to beat the covariates it is given among its inputs.

Figure 4: Validation (orange) and test (blue) nRMSE curves for variable LSTM layer sizes (_l.h.s._) and numbers of mixture components (_r.h.s._).

Focusing first on the nRMSE metric through skill scores, we see that the middle-ground models with Gaussian and positive Gaussian components yield some improvement, with skill scores of 1.04% and 2.23%, respectively.
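Concretely, a skill score in this convention is just the relative nRMSE improvement over the reference forecast. Assuming Equation (7) follows this standard definition (which is consistent with the table's values, e.g. \(100\times(1-9.436/9.651)\approx 2.23\) for model 5), it can be computed as:

```python
import numpy as np

def nrmse(y_true, y_pred, p_nominal):
    """RMSE normalized by the nominal power, in percent (one common convention)."""
    err = np.asarray(y_true) - np.asarray(y_pred)
    return 100.0 * np.sqrt(np.mean(err ** 2)) / p_nominal

def skill_score(nrmse_model, nrmse_ref):
    """Skill w.r.t. a reference forecast, in percent: positive values mean the
    model beats the reference (here, the 24h NWP-based PV performance model)."""
    return 100.0 * (1.0 - nrmse_model / nrmse_ref)

# Model 5 vs the PV performance model baseline (values from Table 1):
print(round(skill_score(9.436, 9.651), 2))  # 2.23
```

Under this definition, a negative skill score means the model fails to beat the very covariates it receives among its inputs.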
On the other hand, using the Student distribution yields a skill score of -0.53%: it is not even able to do better than simply copying part of its inputs. This motivated pruning this option beyond the middle ground. We also note that the quantile regression method of [17] (model 13) obtains a skill score of -7.17%, which justifies our choice of DeepAR as the framework for our proposal. Using a single Gaussian does not match the baseline either. On the other hand, a single positive Gaussian component yields a skill score of 0.50%. Using a mixture of distributions therefore contributes to improving the performance.

Then, combining the system ID covariate with the mixture of Gaussian and positive Gaussian components yields skill scores of 2.16% and 7.54%, respectively. We see that an important performance gap results from the joint usage of the system ID covariate and the positive Gaussian component. Using the system description covariates instead, these scores drop to 1.89% and 5.12%. Using the system ID is therefore the best option, but the system description features have the advantage of generalizing to new systems without having to fully retrain the model, which can be useful in production conditions. We note that the improvement brought by using the system ID as a covariate provides anecdotal evidence supporting the hypothesis formulated in Section 4.3 regarding the imbalance of system representation in the dataset. Using the system description covariates is beneficial (e.g., it increases the skill score by 2.89 points with the mixture of positive Gaussian components), but to a lesser extent than using the system ID. We can relate this to the fact that they do not fully reflect some local effects which already taint the PV performance model (see Section 1). When NWP covariates are ignored, the performance of the models is very significantly degraded.
The mixtures of Gaussian and positive Gaussian components obtain skill scores of -26.3% and -23.0%, respectively. Both models still do better than the alternative FFN architecture (-28.4%), but the fallback consisting of not using NWP features comes at a high cost in terms of performance. We note that the gap between models 10 and 11 (2.7%) is not as large as the gap between their counterparts using NWP features (models 3 and 8, 6.5%). This confirms a synergy between the elements composing the best-performing model (i.e. mixture of positive Gaussian components, NWP covariates, and system ID).

| **ID** | **Output** | **NWP** | **Static** | **nRMSE (%)** | **Skill (%)** | **CRPS (-)** |
|:---:|:---|:---:|:---|:---|:---:|:---|
| – | PV perf. model (baseline) | Yes | – | 9.651 | – | – |
| 1 | Gaussian (single) | Yes | – | 9.697 (±0.035) | -0.48 | 0.639 (±0.004) |
| 2 | Positive (single) | Yes | – | 9.603 (±0.041) | 0.50 | 0.650 (±0.015) |
| 3 | Gaussian (mixture) | Yes | – | 9.551 (±0.089) | 1.04 | 0.626 (±0.002) |
| 4 | Student (mixture) | Yes | – | 9.702 (±0.023) | -0.53 | 0.629 (±0.004) |
| 5 | Positive (mixture) | Yes | – | 9.436 (±0.049) | 2.23 | 0.618 (±0.005) |
| 6 | Gaussian (mixture) | Yes | System ID | 9.443 (±0.090) | 2.16 | 0.605 (±0.004) |
| 7 | Gaussian (mixture) | Yes | System descr. | 9.469 (±0.070) | 1.89 | 0.620 (±0.004) |
| 8 | Positive (mixture) | Yes | System ID | **8.923 (±0.044)** | **7.54** | **0.579 (±0.007)** |
| 9 | Positive (mixture) | Yes | System descr. | 9.157 (±0.041) | 5.12 | 0.596 (±0.004) |
| 10 | Gaussian (mixture) | No | – | 12.185 (±0.055) | -26.3 | 0.831 (±0.008) |
| 11 | Positive (mixture) | No | System ID | 11.873 (±0.036) | -23.0 | 0.811 (±0.006) |
| 12 | FFN | No | – | 12.392 (±0.089) | -28.4 | 0.879 (±0.017) |
| 13 | Wen et al. | Yes | – | 10.343 (±0.070) | -7.17 | 0.686 (±0.009) |

Table 1: nRMSE and CRPS test metrics for the range of compared models. The results of the best-performing model are bold-faced. Median results and standard deviations are estimated from 6 models trained independently.

Figure 5 illustrates the relationship between the nRMSE and CRPS metrics. We see that they are strongly correlated, so in the context of our experiments, models with the best skill scores generally yield the best prediction intervals. Models significantly below the linear regression fit tend to provide better prediction intervals. This is the case for model 4 (mixture of Student components). However, its CRPS is still not as good as that of the other middle-ground models (3 and 5). On the other hand, model 2 (single positive Gaussian component) has a significantly degraded CRPS. All other models are quite close to the regression line. This includes the models based on the mixture of positive Gaussian components: using a mixture seems to fix the discrepancy observed with model 2.

As alternative designs, we considered using unscaled NWP features (i.e. not scaling them along with the \(\mathbf{z}\) values as described in Section 4.1), and using the weighted sampling scheme described in [13], which samples training examples with frequency proportional to the magnitude of the \(\mathbf{z}\) values.
We did not report results with these alternative designs, as they brought a systematic degradation of the performance. We note that the poor performance of weighted sampling w.r.t. the nRMSE metric is expected, as the latter balances the leverage of systems with large nominal power. We also tried using both static covariates (i.e., system ID and description) simultaneously, but this led to a slight degradation compared to the respective model using only the system ID covariate. This is expected, as the system ID alone already encodes system diversity. In addition, as we already saw above, the system description features may reflect the system setup in a biased way, even though using them is better than using no covariates at all.

Figure 5: Illustration of the relationship between nRMSE and CRPS metrics. The line is the result of a linear regression of the points in the graph. Glyph shapes and colors recall characteristics of the respective models. For better legibility, outlying models (i.e. those not using ECMWF covariates) are excluded, even though they were also used for fitting the linear regression.

### Discussion and qualitative examples

In the previous section, we evaluated our proposed models using global metrics. In this section, we aim at providing more detailed insight into our results by analyzing model performance at the system level. The displayed examples were obtained using the best-performing model identified in the previous section (i.e. model 8). First, we compute per-sample nRMSE metrics for the test set, group them by their associated system ID, and rank the systems according to the difference between model 8 and baseline nRMSE. In other words, the higher a system is in this ranking, the more DeepAR outperforms the PV performance model for this system. For all but 7 of the 118 systems, model 8 performs better than the baseline.

As a worst-case example, we first consider system 115, which comes last in this ranking. On the l.h.s. of Figure 6, we display the sample of this system associated with the lowest nRMSE under the DeepAR model. This is a typical clear-day example, where the prediction is fairly easy for the neural model. We note that for this instance, the forecasts stick more closely to the observations curve than the baseline. The confidence intervals are naturally tight, reflecting the high confidence of the neural model in this case. On the r.h.s. of Figure 6, for the same system, we display a sample for which the difference between the two models is among the largest. In this case, DeepAR is not able to keep up with the sudden peak of PV power. The 24h NWP covariates did carry some information about this peak, but this information was not used by model 8, which acted conservatively with respect to the observations in the context interval.

Figure 6: Two test samples associated to system 115. Confidence bounds are displayed for the prediction interval using green shades. The observations and NWP forecasts of the context interval are prepended.

In Figure 7, we consider samples from system 44. This system is the second best of our ranking, and has been identified as problematic for the PV performance model because of a double-pitched roof, not reflected by the system description features [13]. On both sides of Figure 7, the systematic shift of the PV performance model is clearly visible. We also see that model 8 is able to completely ignore this shift and come up with sensible forecasts. The figure also shows how the confidence interval is tighter when the PV production curve is straightforward to forecast (l.h.s.), and broader when the day ahead is harder to forecast (r.h.s.).

Figure 7: Two test samples associated to system 44. Confidence bounds are displayed for the prediction interval using green shades. The observations and NWP forecasts of the context interval are prepended.

## 5 Conclusion

In the end, we are able to improve the power forecasts obtained from an already strong PV performance model.
By comparing many model variants, our experiments highlight the best working configuration, which uses the PV performance model forecasts as covariates, a mixture of positive Gaussians as output distribution, and a static categorical covariate reflecting the associated system ID. The positive Gaussian output makes it possible to deal effectively with the _bell-shaped_ data profile typical of solar energy applications, and the system ID feature allows the model to capture local effects which previously went unnoticed with the PV performance model alone.

In future work, we plan to refine and explore novel neural model designs. For example, quantile regression methods more recent than [14] will be explored. We will also further investigate how to deal with novel systems being added to the grid without having to retrain the full model. We saw that using system description features is an effective fallback, but these features do not account for local effects such as a double-pitched roof, so they remain suboptimal. We will also consider longer prediction intervals (e.g. day-ahead and 2 days ahead). Despite their visible success, the models trained for this work tended to overfit and relied critically on early stopping. This is mostly due to the measures taken to prevent data leakage: when segmenting 2 years of data at the day scale, training and test sets are unlikely to be identically distributed. We addressed this problem in the most straightforward and conservative way, but it seems related to the domain shift problem characterized by the domain adaptation literature [15]. Adapting contributions from this area to the peculiarities of our application is left for future work.

## Acknowledgements

This work was supported by the Luxembourg National Research Fund (FNR) in the framework of the FNR BRIDGES Project _CombiCast_ (BRIDGES18/IS/12705349/Combi-Cast).
Furthermore, the authors would like to thank their partner Electirs (a brand of Hoffmann Freres Energie et Bois s.a r.l.) for their trust, the very supportive partnership throughout the whole project duration, and their contribution to the common project, financially as well as in terms of manpower and data.
2310.10448
A Geometric Insight into Equivariant Message Passing Neural Networks on Riemannian Manifolds
This work proposes a geometric insight into equivariant message passing on Riemannian manifolds. As previously proposed, numerical features on Riemannian manifolds are represented as coordinate-independent feature fields on the manifold. To any coordinate-independent feature field on a manifold comes attached an equivariant embedding of the principal bundle to the space of numerical features. We argue that the metric this embedding induces on the numerical feature space should optimally preserve the principal bundle's original metric. This optimality criterion leads to the minimization of a twisted form of the Polyakov action with respect to the graph of this embedding, yielding an equivariant diffusion process on the associated vector bundle. We obtain a message passing scheme on the manifold by discretizing the diffusion equation flow for a fixed time step. We propose a higher-order equivariant diffusion process equivalent to diffusion on the cartesian product of the base manifold. The discretization of the higher-order diffusion process on a graph yields a new general class of equivariant GNN, generalizing the ACE and MACE formalism to data on Riemannian manifolds.
Ilyes Batatia
2023-10-16T14:31:13Z
http://arxiv.org/abs/2310.10448v1
# A Geometric Insight into Equivariant Message Passing Neural Networks on Riemannian Manifolds

###### Abstract

This work proposes a geometric insight into equivariant message passing on Riemannian manifolds. As previously proposed, numerical features on Riemannian manifolds are represented as coordinate-independent feature fields on the manifold. To any coordinate-independent feature field on a manifold comes attached an equivariant embedding of the principal bundle to the space of numerical features. We argue that the metric this embedding induces on the numerical feature space should optimally preserve the principal bundle's original metric. This optimality criterion leads to the minimization of a twisted form of the Polyakov action with respect to the graph of this embedding, yielding an equivariant diffusion process on the associated vector bundle. We obtain a message passing scheme on the manifold by discretizing the diffusion equation flow for a fixed time step. We propose a higher-order equivariant diffusion process equivalent to diffusion on the cartesian product of the base manifold. The discretization of the higher-order diffusion process on a graph yields a new general class of equivariant GNN, generalizing the ACE and MACE formalism to data on Riemannian manifolds.
Machine Learning, Gaussian Networks, Riemannian Manifolds
This work extends equivariant message passing to data on Riemannian manifolds. Our approach demonstrates how, without non-linear activations, one can derive equivariant message passing from the minimization of a specific functional: the twisted Polyakov action. The specialization to Euclidean spaces recovers well-known equivariant message passing architectures. This functional can be seen as measuring how accurately the feature field embeds the geometry of the manifold, along with the action of a given group on it, into a vector space.
The update rule of the architecture comes from the discretization of the diffusion process resulting from the minimization of a twisted form of this functional. We propose a higher-order equivariant diffusion process equivalent to diffusion on the Cartesian product of the base manifold. The discretization of the higher-order diffusion process on a graph yields a new general class of equivariant GNNs closely related to the MACE (Multi-Atomic Cluster Expansion) formalism. While the ideas developed in this paper could interest the community, they are far from mature and complete. This paper is intended to spark interest and hopefully lead to a complete understanding of the interplay between modern geometric machine learning architectures and the associated functional they minimize. In the first chapter, we present the Message Passing Neural Networks (MPNNs) formalism and make the equivariance constraint explicit. We explain the connection between MPNNs and diffusion through the discretization of Beltrami flow. In the second chapter, we construct an equivariant feature diffusion to apply the analogy between MPNNs and diffusion processes to equivariant MPNNs. We first represent coordinate-independent feature fields on Riemannian manifolds as sections of associated vector bundles. To any coordinate-independent feature field comes a canonically attached equivariant embedding of the principal bundle to the space of numerical features. Minimizing the Polyakov action with respect to the graph of this embedding yields a flow on the associated bundle that is equivalent to a diffusion process. By discretizing this flow for a fixed time step, one obtains a message passing on the manifold. To realize a higher-order message passing on the manifold, we consider equivariant feature diffusion on the Cartesian product of the manifold.
Finally, we obtain a discrete version of the message passing on the graph by embedding a graph into a Riemannian manifold. We show that such a formulation is very closely related to the MACE architecture proposed for the representation of molecules in GNNs in the case where the manifold is \(\mathbb{R}^{3}\) and the group is \(SE(3)\).

## 2 Background

### Equivariant Message Passing Neural Networks

Message Passing Neural Networks (MPNNs) (Gilmer et al., 2017) are a general class of Graph Neural Networks that parametrize a mapping from labelled graphs to a vector space. In MPNN frameworks, each node is labelled with a state updated via successive message passing between neighbours. After a fixed number of iterations, a readout function maps the state to the space of real numbers. Equivariant MPNNs emerge if the states of the nodes carry additional vector features along with an action of a group on them. In this case, one seeks mappings that preserve the action of a given group.

**Node states.** Let \(\Gamma=(\mathcal{V}=\{1,...,n\},\mathcal{E})\) be a graph with \(\mathcal{V}\) and \(\mathcal{E}\) denoting the nodes and edges respectively. The state of node \(i\), \(\sigma_{i}^{(t)}\), is composed of two properties: \[\sigma_{i}^{(t)}=(r_{i},h_{i}^{(t)}) \tag{1}\] with \(r_{i}\) the positional attribute of the node and \(h_{i}^{(t)}\) a learnable feature of node \(i\). These learnable features are updated after each round of message passing, with rounds indexed by \(t\).
**Message passing and update.** During each round of message passing, the features \(h_{i}^{(t)}\) on each node are updated based on aggregated messages \(m_{i}^{(t)}\) derived from the states of the nodes in the neighbourhood of \(i\), denoted by \(\mathcal{N}(i)\): \[m_{i}^{(t)}=\bigoplus_{j\in\mathcal{N}(i)}M_{t}(\sigma_{j}^{(t)},\sigma_{i}^{ (t)}) \tag{2}\] where \(\bigoplus_{j\in\mathcal{N}(i)}\) is any permutation-invariant operation over the neighbours of node \(i\) and \(M_{t}\) is a learnable function acting on the states of nodes \(i\) and \(j\). The messages are used to update the features of node \(i\) with a learnable update function \(U_{t}\): \[\sigma_{i}^{(t+1)}=(r_{i},h_{i}^{(t+1)})=(r_{i},U_{t}(m_{i}^{(t)})) \tag{3}\]

**Readout.** After \(T\) iterations of message passing and update, in the readout phase, the states of the nodes are mapped to the output \(y_{i}\) by a learnable function \(R\): \[y_{i}=R(\sigma_{i}^{(T)}) \tag{4}\]

**Equivariance.** One can ask for an MPNN to be equivariant with respect to the action of a group \(G\). We will consider the case where \(G\) is a reductive Lie group and \(V\) is a representation of \(G\) with action \(\rho\). Formally, a message as a generic function of the states \((\sigma_{i_{1}},...,\sigma_{i_{n}})\) is said to be \((\rho,G)\)-equivariant if it respects the constraint: \[m^{(t)}(g\cdot(\sigma_{i_{1}}^{(t)},...,\sigma_{i_{n}}^{(t)}))=\rho(g)\cdot m^{( t)}(\sigma_{i_{1}}^{(t)},...,\sigma_{i_{n}}^{(t)})\quad\forall g\in G \tag{5}\] where \(g\cdot(\sigma_{i_{1}}^{(t)},...,\sigma_{i_{n}}^{(t)})\) denotes an action of \(G\) over the states such that \[g\cdot\sigma_{i}^{(t)}=(g\cdot r_{i},\rho_{h}(g)h_{i}^{(t)}) \tag{6}\] with \(\rho_{h}(g)\) denoting the action of \(g\) on \(h\). In practice, the equivariance constraint of equation (5) imposes constraints on the admissible operations \(M_{t}\), \(\bigoplus\), \(U_{t}\), \(R_{t}\).
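The constraint in equation (5) can be checked numerically. The sketch below is a toy example (not an architecture from the paper): the radial function `phi` and the inner-product weighting of invariant features are arbitrary choices, made only so that the resulting message is rotation-equivariant by construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(d):
    # toy radial function of the pairwise distance (a stand-in for a learnable MLP)
    return np.exp(-d)

def message(r, h, i, neighbors):
    # m_i = sum_j (r_i - r_j) * phi(||r_i - r_j||) * <h_i, h_j>
    # directions rotate with the input, distances and <h_i, h_j> are invariant,
    # so the message satisfies eq. (5) for rotations
    m = np.zeros(3)
    for j in neighbors:
        diff = r[i] - r[j]
        m += diff * phi(np.linalg.norm(diff)) * float(h[i] @ h[j])
    return m

n = 5
r = rng.normal(size=(n, 3))          # positional attributes r_i
h = rng.normal(size=(n, 4))          # invariant feature vectors h_i
neighbors = [j for j in range(n) if j != 0]

# random rotation R in SO(3) from the QR decomposition of a Gaussian matrix
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R = Q * np.sign(np.linalg.det(Q))    # force det = +1

m = message(r, h, 0, neighbors)
m_rot = message(r @ R.T, h, 0, neighbors)   # act on all positions first
assert np.allclose(m_rot, R @ m)     # eq. (5): m(g . sigma) = rho(g) . m(sigma)
```

Acting with the group before or after the message gives the same result, which is exactly the commuting-diagram form of equation (5).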
### Diffusion process and message passing

Recently, a graph Beltrami flow was introduced (Chamberlain et al., 2021) by analogy with diffusion processes on images, and related to a subclass of MPNNs referred to as attentional graph neural networks. The graph Beltrami flow evolves the features as: \[\frac{\partial h_{i}^{(t)}}{\partial t}=\sum_{j:i\to j\in\mathcal{E}}a( \sigma_{j}^{(t)},\sigma_{i}^{(t)})(h_{j}^{(t)}-h_{i}^{(t)}) \tag{7}\] The solution of this equation for long times minimizes a discretized version of the Polyakov action, which is equivalent to finding an optimal embedding. Discretizing with a time step of 1, one obtains: \[h_{i}^{(t+1)}=h_{i}^{(t)}+\sum_{j:i\to j\in\mathcal{E}}a(\sigma_{j}^{(t)}, \sigma_{i}^{(t)})(h_{j}^{(t)}-h_{i}^{(t)}) \tag{8}\] This update formula corresponds precisely to the attentional flavour of MPNNs, and the attention weights \(a(\sigma_{j}^{(t)},\sigma_{i}^{(t)})\) can be understood as anisotropic diffusion coefficients. More general flavours are possible by studying non-linear evolution equations of the type \(\frac{\partial h_{i}^{(t)}}{\partial t}=\Phi(\{\sigma_{j}\}_{j\in\mathcal{N} (i)})\). In the next section, we will show that equivariant message passing can be achieved as an equivariant diffusion process resulting from a twisted form of the Polyakov action. Other analogies between GNNs and energy minimization have been proposed in (Giovanni et al., 2022).

## 3 A geometric insight into equivariant message passing

To apply the rich analogy between diffusion processes and equivariant message passing, one must represent message passing as a diffusion that respects the underlying symmetry imposed by the structure group. For this purpose, we will define an equivariant feature diffusion process on a Riemannian manifold. This will rely on defining equivariant features as sections of an associated bundle as proposed in (Cohen et al., 2019; Weiler et al., 2021).
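A minimal numerical sketch of the discretized flow in equation (8) follows. The attention function and the added step size `tau` are hypothetical choices (equation (8) uses a unit step); the per-node normalization keeps the update a convex combination, so repeated application smooths the features.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
# path graph 0-1-2-...-5 as a toy stand-in for a general graph
nbrs = {i: [] for i in range(n)}
for u, v in [(i, i + 1) for i in range(n - 1)]:
    nbrs[u].append(v); nbrs[v].append(u)

h = rng.normal(size=(n, 2))

def attention(hi, hj):
    # toy attention weight, normalized per node inside step()
    return np.exp(-np.linalg.norm(hi - hj))

def step(h, tau=0.5):
    # h_i^{t+1} = h_i^t + tau * sum_j a_ij (h_j^t - h_i^t), cf. eq. (8)
    out = h.copy()
    for i in range(n):
        w = np.array([attention(h[i], h[j]) for j in nbrs[i]])
        a = w / w.sum()                      # normalized diffusion coefficients
        for aij, j in zip(a, nbrs[i]):
            out[i] += tau * aij * (h[j] - h[i])
    return out

spread0 = np.ptp(h, axis=0).sum()
for _ in range(50):
    h = step(h)
spread = np.ptp(h, axis=0).sum()
assert spread < spread0    # the diffusion contracts the feature spread
```

As the text notes, the weights `a` act as anisotropic diffusion coefficients: regions where features already agree diffuse faster under this particular (toy) attention.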
A general Laplacian will carry the diffusion process that updates the features on the associated bundle. This diffusion process results from minimizing a twisted form of the Polyakov action that preserves equivariance. The equivariant feature diffusion process is thus related to finding an optimal mapping between the principal bundle of a manifold and a given irreducible representation of the structure group.

### Equivariant feature fields as sections of associated bundles

We aim to construct a geometric notion of equivariant features on the manifold as outlined in (Weiler et al., 2021; Cohen et al., 2019). We will review some fundamentals of differential geometry for this construction. For more mathematical details, see (Kobayashi and Nomizu, 1963). As manifolds do not come with a preferential reference frame (gauge), features must be defined up to an arbitrary choice of coordinates. It is natural to ask for the network to be independent of this choice of coordinates, as proposed in (Weiler et al., 2021). This naturally amounts to requiring the network to be equivariant under gauge transformations, i.e. local changes of reference frame. Let \((\mathcal{M},\eta_{\mathcal{M}})\) be a Riemannian manifold of dimension \(\mathfrak{m}\). We will start by making the notion of coordinate independence more specific. To each point \(x\in\mathcal{M}\), one can attach a tangent space \(T_{x}\mathcal{M}\) which locally looks like \(\mathbb{R}^{\mathfrak{m}}\). Each tangent space \(T_{x}\mathcal{M}\) is isomorphic to \(\mathbb{R}^{\mathfrak{m}}\); however, there is no canonical isomorphism. A preferred choice of local isomorphism is called a gauge (Naber, 1997). **Definition 3.1** (Gauge).: Let \(x\in\mathcal{M}\) and \(U^{A}\) be a neighborhood of \(x\). A gauge is defined as a smooth and invertible map: \[\psi_{x}^{A}:T_{x}\mathcal{M}\rightarrow\mathbb{R}^{\mathfrak{m}} \tag{9}\] that specifies a preferred choice of isomorphism between \(T_{x}\mathcal{M}\) and \(\mathbb{R}^{\mathfrak{m}}\).
Gauges coordinatize tangent spaces only in local neighbourhoods, and in all generality they cannot be extended to the full manifold while preserving their smoothness requirement. One can consider a collection of local neighbourhoods and their corresponding gauges, called an atlas: \[\mathcal{A}=\{(U^{X},\psi^{X})\}_{X\in\mathcal{X}} \tag{10}\] such that \(\bigcup_{X\in\mathcal{X}}U^{X}=\mathcal{M}\). On intersections \(U^{A}\cap U^{B}\neq\varnothing\), the different gauges \(\psi_{x}^{A},\psi_{x}^{B}\) are stitched together using smooth transition functions taking values in the structure group \(G\). From now on, we will consider \(G\) to be a reductive Lie group. **Definition 3.2** (Transition functions).: A transition function (see Figure 1) between intersections \(U^{A}\cap U^{B}\neq\varnothing\) is a map: \[g^{BA}:U^{A}\cap U^{B}\to G,x\mapsto\psi^{B}_{x}\circ(\psi^{A}_{x})^{-1} \tag{11}\] Therefore one observes that a single coordinate-free tangent vector \(v\in T_{x}M\) is represented by two vectors \(v^{A},v^{B}\in\mathbb{R}^{\mathfrak{m}}\): \[v^{A}=g^{AB}v^{B},\quad g^{AB}\in G \tag{12}\] depending on the gauges \(\psi^{A},\psi^{B}\). We see that coordinate independence is closely related to the action of the structure group on the manifold that defines the transition maps between gauges. In practical implementations, we aim for equivariant message passing to diffuse feature fields, relative to some local reference frame, that take values in some vector space \(V\) of dimension \(c\). Let \(\psi^{A}\) be a local gauge on a neighborhood \(U^{A}\) of \(M\). Relative to this gauge, a local feature field is defined as: \[f^{A}:U^{A}\to V \tag{13}\] assigning a \(c\)-dimensional feature vector \(f^{A}(x)\) (corresponding to \(c\) channels) to each point \(x\in M\). According to some other gauge \(\psi^{B}\) on \(U^{B}\), one measures \(f^{B}\).
As we require the global feature field \(h\) to be coordinate independent, the different local feature fields must transform consistently. Because gauge transformations are elements of \(G\), the local feature fields will transform according to some linear representation of \(G\) on \(V\): \[\rho:G\to Aut(V)=GL(c) \tag{14}\] and we have the local feature fields transforming as: \[f^{B}(x)=\rho(g^{BA}_{x})f^{A}(x),x\in U^{A}\cap U^{B} \tag{15}\] The type of representation models different types of features. Scalar fields transform according to the trivial representation \(\rho(g)=\mathbf{1},\forall g\in G\), whose local numerical values are invariant under gauge transformations. More generally, feature fields that transform according to irreducible representations of \(G\) are of great practical importance in physics. As \(G\) is a reductive Lie group, any finite-dimensional representation can be decomposed as a sum of irreducible representations. We will therefore restrict ourselves to the case where \(V\) is a finite-dimensional representation. To properly define the diffusion of equivariant feature fields, we must give a geometric interpretation of coordinate-independent feature fields. Global coordinate-independent feature fields are sections of an associated vector bundle. Let \(G\) be a reductive Lie group and \(V\) a finite-dimensional representation of \(G\) of dimension \(c\), and denote by \(\rho\) the linear representation of \(G\) on \(V\). Fiber bundles allow for a global description of fields over a manifold and can thus represent feature fields. **Definition 3.3** (Fiber bundle).: A fiber bundle is a quadruplet \((E,\pi,M,F)\) with \(E\) the total space, \(M\) the base space and \(F\) the typical fiber, and a smooth surjective map \(\pi:E\to M\). A fiber bundle is locally trivial, as \(\forall x\in M\) there exists a neighbourhood \(U\) of \(x\) and a diffeomorphism \(\phi:U\times F\rightarrow\pi^{-1}(U)\) such that \(\pi\circ\phi(x,f)=x,\forall f\in F\).
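The transformation rule (15) can be made concrete in the simplest non-trivial case: a single 2-channel "vector" feature with \(G=SO(2)\). This is a minimal sketch, not from the paper; the angle and feature values are arbitrary.

```python
import numpy as np

def rho(theta):
    # representation of SO(2) acting on a 2-channel vector feature
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

theta = 0.7                      # transition g^{BA} at a point x
fA = np.array([1.5, -0.3])       # numerical feature in gauge psi^A

fB = rho(theta) @ fA             # eq. (15): f^B(x) = rho(g^{BA}) f^A(x)

# the underlying coordinate-free feature is unchanged: the norm agrees
# and the inverse transition recovers the original coefficients
assert np.isclose(np.linalg.norm(fB), np.linalg.norm(fA))
assert np.allclose(rho(-theta) @ fB, fA)

# a scalar field (trivial representation rho(g) = 1) is gauge-invariant
sA = 2.0
sB = 1.0 * sA
assert sB == sA
```

Only the numerical coefficients change between gauges; the geometric object they describe does not, which is the content of coordinate independence.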
The tuple \((\phi,U)\) is called a local trivialization. To include more specific mathematical structures on the typical fiber, one can define subtypes of bundles. An important example is vector bundles, where \(F\) is a vector space; this is crucial as it formalizes the concept of parameterizing a vector space by a manifold. **Definition 3.4** (Vector bundle).: A vector bundle is a triplet \((E,\pi,M)\) with \(E,M\) two manifolds, and \(\pi:E\to M\) such that the preimage \(\pi^{-1}(x)\) of \(x\in M\) has the structure of a vector space. To combine group structure with differential geometry, one needs to construct a principal bundle \(P\), which locally looks like the product of \(M\) with a structure group \(G\) and such that the transition maps are isomorphisms (Warner, 1983). **Definition 3.5** (Principal bundle).: A principal bundle is a quadruplet \((P,\pi,M,G)\) where \(P\) and \(M\) are two manifolds, \(G\) a Lie group, \(\pi:P\to M\) is a surjective map such that \(\pi^{-1}(x)\) is diffeomorphic to \(G\), and there is an action \(\cdot\) of \(G\) on \(P\) such that:

* \(\pi(p\cdot g)=\pi(p)\) for \(p\in\pi^{-1}(x)\) and \(g\in G\)
* the restriction \(G\times\pi^{-1}(x)\rightarrow\pi^{-1}(x)\) is free and transitive.

We call \(M\) the base manifold, \(P\) the total space and \(G\) the structure group of the principal bundle. When no confusion can be made, \((P,\pi,M,G)\) is called \(P\). The disjoint union of bases of \(T_{x}M,x\in M\) that are equivalent by the action of \(G\) forms the total space of a principal bundle \(P_{G}(TM)\) over \(M\), called the bundle of \(G\)-frames of \(TM\). From now on, we will write \(P\) for \(P_{G}(TM)\).

Figure 1: Relationship between gauge maps and transition functions.

In the following, we will construct associated feature vector bundles with feature coefficients in \(V\) as typical fibers. Under gauge transformations, these fibers are acted on by the group linear representation \(\rho:G\to Aut(V)\).
These feature vector bundles are constructed as quotients. **Definition 3.6** (Associated bundle).: Let \((P,\pi,M,G)\) be a principal bundle and \(\rho\) a linear representation of \(G\) on \(V\). We define \(E=(P\times V)/G\); a point of \(E\) is of the form \[[p,h]=\{(p\cdot g,\rho(g^{-1})h),g\in G\} \tag{16}\] where \(p\in P\) and \(h\in V\). Let \(\pi_{E}:E\to M\) be given by \(\pi_{E}[p,h]=\pi(p)\). Then \((E,\pi_{E},M)\) forms a vector bundle called an associated vector bundle to \(P\), denoted \(P\times_{(\rho,G)}V\). The construction of \(E\) is equivalent to coordinate-independent feature vectors on \(M\): features \(f(x)\in E\) are expressed relative to arbitrary frames in \(P\). **Definition 3.7** (Coordinate-free feature field).: Coordinate-free feature fields are defined as smooth global sections \(f\in\Gamma(E)\), that is, smooth maps \(f:M\to E\) such that \(\pi_{E}\circ f=id_{M}\). **Lemma 3.1**.: The coordinate-free feature fields \(f\in\Gamma(E)\) and \((\rho,G)\)-equivariant functions \(h\) on \(P\) are in one-to-one correspondence, such that one can attach to any \(h\) a canonical coordinate-free feature field \(f_{h}\). Proof.: Define a \((\rho,G)\)-equivariant function \(h:P\to V\) from the principal bundle to the representation \(V\). Let \(p_{1},p_{2}\in\pi^{-1}(x)\) and define \(f_{h}^{1}(x)=[p_{1},h(p_{1})]\) and \(f_{h}^{2}(x)=[p_{2},h(p_{2})]\). As \(p_{1},p_{2}\in\pi^{-1}(x)\), there exists a \(g\in G\) such that \(p_{1}=p_{2}\cdot g\). By \((\rho,G)\)-equivariance of \(h\) and by the definition of a point of the associated bundle (16), \[f_{h}^{1}(x)=[p_{1},h(p_{1})]=[p_{2}\cdot g,h(p_{2}\cdot g)]= \tag{17}\] \[[p_{2}\cdot g,\rho(g^{-1})h(p_{2})]=[p_{2},h(p_{2})]=f_{h}^{2}(x) \tag{18}\] so \(f_{h}(x)\) is independent of the choice of \(p\in\pi^{-1}(x)\). Thus for any \(p\in\pi^{-1}(x)\), \(\pi_{E}\circ f_{h}(x)=x\), and \(f_{h}\in\Gamma(E)\). Conversely, for \(f\in\Gamma(E)\), we put \(h_{f}(p)=v\) such that \(f\circ\pi(p)=[p,v]\).
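The equivalence class (16) can be checked numerically in the simplest setting of a frame bundle over a 2-manifold with \(G=SO(2)\) and the standard representation: two representatives \((p,v)\) and \((p\cdot g,\rho(g^{-1})v)\) determine the same geometric vector \(p(v)\). A minimal sketch (the frame, coefficients, and angle are all arbitrary test data):

```python
import numpy as np

rng = np.random.default_rng(2)

# a frame p in the fiber over x: a basis of T_x M, stored as an invertible 2x2 matrix
p = rng.normal(size=(2, 2)) + 2 * np.eye(2)
v = rng.normal(size=2)                      # feature coefficients in V = R^2

theta = 1.1
g = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # structure group element

# the second representative of the class [p, v] from eq. (16)
p2, v2 = p @ g, np.linalg.inv(g) @ v        # (p . g, rho(g^{-1}) v)

# both representatives describe the same geometric vector: p(v) = (p.g)(rho(g^{-1}) v)
assert np.allclose(p @ v, p2 @ v2)
```

This is the computation behind the cancellation \((p\cdot g)(\rho(g^{-1})h)=p(h)\) used implicitly in the proof of Lemma 3.1.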
Then \(h_{f}\) is \((\rho,G)\)-equivariant. Therefore the coordinate-free feature fields naturally generate a corresponding equivariant map. We will denote by \(f_{h}\) the coordinate-free feature map canonically associated to the equivariant function \(h\). **Definition 3.8** (Canonical association function).: We will write \(\gamma\) for the map associating to an equivariant function \(h\in C^{\infty}(P,V)^{(\rho,G)}\) the section \(f_{h}\) of \(E\) such that \(f_{h}=[p,h(p)]\): \[\gamma:C^{\infty}(P,V)^{(\rho,G)}\rightarrow\Gamma(E),h\mapsto f_{h} \tag{19}\] On a local neighborhood \(U^{A}\) of \(M\), the coordinate-free feature field is trivializable in an arbitrary local frame by the action of a gauge \(\psi_{E,x}^{A}\) into numerical features \(f_{h}^{A}\): \[f_{h}^{A}(x)=\psi_{E,x}^{A}(f_{h}(x)),x\in U^{A} \tag{20}\] A different choice of trivialization on a different neighbourhood \(U^{B}\) yields different numerical features, related by the action of the linear representation on \(V\): \[f_{h}^{B}(x)=\rho(g^{BA})\psi_{E,x}^{A}(f_{h}(x)) \tag{21}\] In all generality, the features consist of multiple independent feature fields on the same base space transforming according to different representations. The whole space is defined as the Whitney sum \(\bigoplus_{i}E_{i}\). A common practice in equivariant message passing is the construction of feature fields from a set of finite-dimensional irreducible representations with highest weight \(\lambda\).

### Diffusion of feature fields and Polyakov Action

This section considers a coordinate-free feature field \(f_{h}\in\Gamma(E)\), introduced in the previous section, over a single finite-dimensional representation \(V\) of \(G\). We argue that an implicit regularization for the feature field is to realize an optimal embedding of \(M\) in \(E\), finding the optimal featurization of \(M\) into the representation \(V\) that respects the underlying structure imposed by the group \(G\).
This optimality constraint applies to its corresponding equivariant function \(h\). In order to make the notion of optimality more precise, we will introduce the Polyakov action, which measures the energy of an embedding and will be our optimality criterion. We will assume that \(G\) is a simply connected compact Lie group with Lie algebra \(\mathfrak{g}\). For locally compact reductive Lie groups, most of the following discussion can be applied identically by considering the exposition on the maximal compact subgroup of the universal cover of the complexification of \(G\), as they share the same finite-dimensional representations. One can induce a Riemannian metric \(u\) on \(P\) using the Riemannian metric on \(\mathcal{M}\), \(\eta_{\mathcal{M}}\), and the invariant metric on \(G\), \(\eta_{G}\). To do so, consider the tangent space of \(P\) at the point \(p\in P\), \(T_{p}P\). It can be decomposed into a sum of two subspaces: \(V_{p}P\), called the vertical space, which is the kernel of the pushforward \(\pi_{*}:T_{p}P\to T_{\pi(p)}M\), and the horizontal space \(H_{p}P\), which is the complementary subspace. One can define naturally a connection on \(P\) by looking at the following map \(\phi_{p}:\mathfrak{g}\to V_{p}P\): \[\phi_{p}(X)=\frac{d}{dt}(p\cdot\exp(tX))|_{t=0},\quad X\in\mathfrak{g} \tag{22}\] In the case where \(G\) is a compact simply connected Lie group, this map is invertible, and we call \(\phi_{p}^{-1}\) the inverse. A similar map can be formed if the group has a finite number of connected components, by multiplying the map by a discrete subgroup \(\mathbf{H}\subset G\) containing a representative from each connected component. From this map, one can define an inner product on \(P\).
Let \(dp_{1}=v_{1}+h_{1}\) and \(dp_{2}=v_{2}+h_{2}\) be two vectors in \(T_{p}P\); the inner product is defined as \[(dp_{1},dp_{2})=\eta_{\mathcal{M}}(\pi_{*}(h_{1}),\pi_{*}(h_{2}))+\eta_{G}(\phi_{p}^{- 1}(v_{1}),\phi_{p}^{-1}(v_{2})) \tag{23}\] The metric \(u\) on \(P\) is the metric induced by this inner product. Moreover, we assume that \(P\times V\) is equipped with a Riemannian metric \(v=u\oplus\kappa\mathbf{I}_{d}\), for an arbitrary positive number \(\kappa\) multiplying the identity of \(V\). The graph of \(h\), given by \(\varphi_{h}:P\to P\times V\), realizes an embedding of \(P\) into \(P\times V\). This embedding induces a natural metric on \(P\) given by \(\gamma_{\mu,\nu}=\frac{\partial\varphi_{h}^{i}}{\partial x_{\mu}}\frac{ \partial\varphi_{h}^{j}}{\partial x_{\nu}}v_{i,j}\). We will now make our optimality criterion precise. **Definition 3.9** (Optimality of the feature fields).: A feature field \(f_{h}\in\Gamma(E)\) is optimal if the induced metric associated with the embedding \(\varphi_{h}\) of the principal bundle into the numerical feature space \(V\) optimally preserves the metric on the principal bundle. A natural way to think about this optimality condition is that one wishes to construct features in a vector space that preserve as much as possible the geometry of the initial manifold. To measure the energy of \(\varphi_{h}\), and correlatively of \(f_{h}\), we study its Polyakov action. **Definition 3.10** (Polyakov action).: The Polyakov action of the embedding \(\varphi_{h}\) is defined as \[S[\varphi_{h},u,v]=\int_{P}u^{\mu,\nu}\frac{\partial\varphi_{h}^{i}}{\partial x _{\mu}}\frac{\partial\varphi_{h}^{j}}{\partial x_{\nu}}v_{i,j}dP \tag{24}\] Minimizing the Polyakov action with respect to the embedding \(\varphi_{h}\), for given metrics \(u\) and \(v\), is equivalent to finding the optimal embedding \(\varphi_{h,opt}\) of \(P\) in \(P\times V\). The minimization of the preceding action amounts to solving the Euler-Lagrange equations.
One obtains a gradient descent flow in the form of a heat equation on \(h:P\to V\): \[\frac{\partial h_{t}}{\partial t}=\Delta^{P}h \tag{25}\] where \(\Delta^{P}\) corresponds to the Laplace-Beltrami operator on the principal bundle \(P\). However, this flow equation is not guaranteed to preserve equivariance. To preserve the equivariance of the initial condition \(h^{(0)}\), (Batard & Sochen, 2012) proposed introducing a twisted version of the Polyakov action, resulting in a gradient descent flow. The additional term in the action is expressed via a scalar product \(\langle\cdot,\cdot\rangle\) on \(V\). **Definition 3.11** (Casimir Operator).: Let \(\mathfrak{g}\) be the Lie algebra of \(G\). Let \((\mathfrak{g}_{1},...,\mathfrak{g}_{n})\) be an orthonormal basis of \(\mathfrak{g}\) and \(d\rho:\mathfrak{g}\to GL(V)\) the representation of \(\mathfrak{g}\) induced by \(\rho\). Let \((\mathfrak{g}^{1},...,\mathfrak{g}^{n})\) be the dual basis of \(\mathfrak{g}\) with respect to the Killing form. The Casimir operator \(Cas\in GL(V)\) is defined as: \[Cas=\sum_{i}^{n}d\rho(\mathfrak{g}^{i})d\rho(\mathfrak{g}_{i}) \tag{26}\] The twisted Polyakov action is given by adding a term to the Polyakov action: \[S[\varphi_{h},u,v]=\int_{P}u^{\mu,\nu}\frac{\partial\varphi_{h}^{i}}{\partial x _{\mu}}\frac{\partial\varphi_{h}^{j}}{\partial x_{\nu}}v_{i,j}+\frac{1}{2} \langle\text{Cas}\cdot h,h\rangle dP \tag{27}\] Minimizing this action with respect to \(\varphi_{h}\), one obtains the following heat equation: \[\frac{\partial h^{t}}{\partial t}=-(\Delta^{P}\otimes\mathbb{I}+\mathbb{I} \otimes Cas)h^{t} \tag{28}\] which one can rewrite as: \[\frac{\partial h^{t}}{\partial t}=\Delta^{E}h^{t} \tag{29}\] where we define \(\Delta^{E}=-(\Delta^{P}\otimes\mathbb{I}+\mathbb{I}\otimes Cas)\), the generalized Laplacian on the associated bundle \(E\). In the special case where \(V\) is an irreducible representation, \(Cas=\lambda\mathbb{I}\) and \(\Delta^{E}=-(\Delta^{P}\otimes\mathbb{I}+\lambda\mathbb{I})\).
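Definition 3.11 can be verified concretely for \(G=SO(3)\): on the spin-\(l\) irreducible representation, the Casimir operator is \(l(l+1)\mathbb{I}\), consistent with the remark that \(Cas=\lambda\mathbb{I}\) on irreducibles. A sketch for \(l=1\) using the standard angular-momentum matrices (a textbook construction, not code from the paper):

```python
import numpy as np

# spin-1 (l = 1) irreducible representation of so(3), basis |1>, |0>, |-1>
Jz = np.diag([1.0, 0.0, -1.0]).astype(complex)
Jp = np.sqrt(2) * np.array([[0, 1, 0],
                            [0, 0, 1],
                            [0, 0, 0]], dtype=complex)   # raising operator
Jm = Jp.conj().T                                          # lowering operator
Jx = (Jp + Jm) / 2
Jy = (Jp - Jm) / 2j

# Casimir operator: sum of squares over an (orthonormal) basis of the algebra
Cas = Jx @ Jx + Jy @ Jy + Jz @ Jz

# on an irreducible representation, Cas = lambda * I; here lambda = l(l+1) = 2
assert np.allclose(Cas, 2 * np.eye(3))
```

In the twisted heat equation (28), this constant \(\lambda\) simply adds a uniform decay \(e^{-t\lambda}\) to the features in each irreducible channel.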
**Definition 3.12** (Equivariant Feature Diffusion Process).: Let \(h^{(0)}:P\to V\) be an initial equivariant feature map. Let \(\Delta^{E}\) be the generalized Laplacian on the associated bundle \(E\). We define an Equivariant Feature Diffusion Process as follows: \[h^{(T)}=h^{(0)}+\int_{0}^{T}\Delta^{E}h^{(t)}dt\quad y=\mathbf{R}(h^{(T)}) \tag{30}\] The feature map \(h^{(T)}\) will be the optimal equivariant map from \(P\) to the irreducible representation \(V\) of \(G\) for \(T\) sufficiently long. Furthermore, \(\mathbf{R}\) is a learnable readout function. From the equivariant diffusion process, we can construct a smooth coordinate-independent feature field using the feature field canonically attached to any equivariant function on \(P\): \[f_{h}^{(t)}(x)=[p,h^{(t)}(p)],p\in\pi^{-1}(x) \tag{31}\]

### Equivariant feature propagators on reductive groups

The diffusion process on the associated bundle \(E\) can be understood in terms of the scalar heat kernel on \(P\), as detailed in (Berline et al., 1992). First, we observe the following remark: **Remark.** Let \(\Delta^{E}=-(\Delta^{P}\otimes\mathbb{I}+\mathbb{I}\otimes Cas)\); then for some parameter \(t\), \[e^{-t\Delta^{E}}=e^{-tCas}e^{-t\Delta^{P}}\] We have the following equality for \(p_{1}\in P\): \[(e^{-t\Delta^{E}}h)(p_{1})=\int_{E}\langle p_{1}|e^{-t\Delta^{E}}|p_{2}\rangle h (p_{2})dp_{2} \tag{32}\] where \(\langle p_{1}|e^{-t\Delta^{E}}|p_{2}\rangle\) is the Schwartz kernel of \(e^{-t\Delta^{E}}\) in Dirac notation. **Proposition 3.1** (Getzler-Vergne-Berline).: Let \(p_{1}\) be a point of the principal bundle \(P\) of a locally compact group \(G\) and \(\Delta^{E}\) the generalized Laplacian over the associated bundle \(E\). Let \(p_{2}\) be a chosen representative in \(\pi^{-1}(x)\), for \(x\in\mathcal{M}\).
Then for any function \(h:P\to V\), and \(|dx|\) a Riemannian density, \[(e^{-t\Delta^{E}}h)(p_{1})=e^{-tCas}\int_{G}\int_{\mathcal{M}}\langle p_{1}|e^{-t\Delta^{P}}|p_{2}g\rangle\rho(g)^{-1}h(p_{2})dgdx \tag{33}\] In the case where \(G\) is a non-compact reductive Lie group, the first integral is taken over the maximal compact subgroup of the complexification of \(G\), referred to as \(K_{\mathbb{C}}\), following the Weyl unitary trick. **Definition 3.13**.: Let \(e^{-t\Delta^{E}}\) be the equivariant propagator of the diffusion such that : \[h^{(t^{\prime})}=e^{-(t^{\prime}-t)\Delta^{E}}h^{(t)} \tag{34}\] **Definition 3.14**.: For any \(p_{1},p_{2}\in P\), denote \[\langle p_{1}|e^{-(t^{\prime}-t)\Delta^{P}}|p_{2}\rangle=k_{t^{\prime}-t}(p_{1},p_{2}) \tag{35}\] the heat kernel of \(\Delta^{P}\), such that one can rewrite the propagator of equation (34) as \[h^{(t^{\prime})}=e^{-(t^{\prime}-t)\Delta^{E}}h^{(t)}= \tag{36}\] \[e^{-tCas}\int_{\mathcal{M}}\int_{G}k_{t^{\prime}-t}(p_{1},p_{2}g)\rho(g)^{-1}h^{(t)}(p_{2})dgdx \tag{37}\] **Definition 3.15** (GM - Message passing).: Let \(h^{(0)}:P\to V\) be an initial equivariant feature map. Let \(\Delta^{E}\) be the generalized Laplacian on the associated bundle \(E\). We define the message operation in a GM - Message passing as : \[m^{(T)}(p_{1})=e^{-TCas}\int_{\mathcal{M}}\int_{G}k_{T}(p_{1},p_{2}g)\rho(g)^{-1}h^{(0)}(p_{2})dgdx \tag{38}\] \[h^{(T)}(p_{1})=U^{(T)}(m^{(T)}(p_{1})) \tag{39}\] with \(U^{(T)}\) a learnable equivariant function. The usual equivariant updates are linear combinations from the vector space \(C^{\infty}(P,V)^{(\rho,G)}\). **Remark.** GM-Message passing networks are gauge equivariant. A weight-sharing constraint needs to be preserved for the method to also be equivariant to diffeomorphisms of the manifold. GM-Message passing networks are indeed diffeomorphism-preserving maps, as the heat kernel is shared across the manifold. 
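To make the GM message of Definition 3.15 concrete, the sketch below discretizes it: the integral over \(G\) becomes a sum over the finite rotation group \(C_{4}\) acting on the plane, the integral over \(\mathcal{M}\) a sum over sample points, and \(k_{T}\) a Gaussian heat kernel; the Casimir factor \(e^{-TCas}\), a scalar for irreducible \(V\), is absorbed into the kernel. The point set, feature dimension, and kernel width are illustrative assumptions, not from the paper.

```python
import numpy as np

# Discrete sketch of the GM message: sum over the finite rotation group C4
# in place of the integral over G, a sum over sample points in place of the
# integral over M, and a Gaussian heat kernel k_T.
T = 0.5
rots = [np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
        for a in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)]   # C4

def heat_kernel(a, b, t=T):
    return np.exp(-np.sum((a - b) ** 2) / (4.0 * t))

def gm_message(xs, hs):
    """m_i = (1/|G|) sum_g rho(g)^-1 sum_j k_T(x_i, g x_j) h_j."""
    m = np.zeros_like(hs)
    for i in range(len(xs)):
        for R in rots:              # group sum, with rho(g) = R and rho(g)^-1 = R^T
            for j in range(len(xs)):
                m[i] += R.T @ (heat_kernel(xs[i], xs[j] @ R.T) * hs[j])
    return m / len(rots)

rng = np.random.default_rng(1)
xs = rng.normal(size=(5, 2))        # sample points of M = R^2
hs = rng.normal(size=(5, 2))        # vector features in V = R^2

# Equivariance check: rotating positions and features rotates the messages.
Q = rots[1]                         # a 90-degree rotation
m = gm_message(xs, hs)
m_rot = gm_message(xs @ Q.T, hs @ Q.T)
assert np.allclose(m_rot, m @ Q.T)
```

The final check verifies the equivariance property stated in the Remark: applying a rotation to positions and features rotates the resulting messages accordingly.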
As this diffusion process has low correlation order (only pairwise interactions), it can be inefficient at modeling highly correlated interactions. Therefore, it is convenient to construct a higher-order GM-Message passing on a Cartesian product of manifolds. First, define \(P^{n}\), the principal bundle on the base manifold \(\mathcal{M}^{n}\). **Definition 3.16** (Higher order GM - Message passing).: Let \(\tilde{h}^{(0)}:P^{n}\to V^{n}\) be an initial equivariant feature map. Let \(E^{n}\) be the vector bundle associated to \(P^{n}\) and \(\Delta^{E^{n}}\) the generalized Laplacian on the associated bundle \(E^{n}\). Let \(dx^{n}=\prod_{\xi=1}^{n}dx_{\xi}\) be a volume element on the manifold \(\mathcal{M}^{n}\). We define a Higher order GM - Message passing as : \[m^{(T)}(p_{1})= \tag{40}\] \[e^{-TCas}\int_{\mathcal{M}^{n}}\int_{G}\tilde{k}_{T}(p_{1},p_{2}g,..,p_{n}g)\rho(g)^{-1}\tilde{h}^{(0)}(p_{2},...,p_{n})dgdx^{n}\] Assume the following structure on the diffusion kernel and feature function, \[\tilde{k}_{T}(p_{1},p_{2}g,..,p_{n}g)=\prod_{\xi=1}^{n}k_{T}(p_{1},p_{\xi}g) \tag{41}\] \[\tilde{h}^{(0)}(p_{2}g,...,p_{n}g)=\rho(g)^{n-1}\prod_{\xi=1}^{n}h^{(0)}(p_{\xi}g) \tag{42}\] We can rewrite the Higher-order GM-Message Passing of equation (40) as : \[m^{(T)}(p_{1})= \tag{43}\] \[e^{-TCas}\int_{G}\rho(g)^{-1}dg\prod_{\xi=1}^{n}\int_{\mathcal{M}}k_{T}(p_{1},p_{\xi}g)h^{(0)}(p_{\xi})dx_{\xi}\] \[h^{(T)}(p_{1})=U^{(T)}(m^{(T)}(p_{1})) \tag{44}\] The propagated equivariant map is projected back to numerical features using the canonically attached feature field and the gauge, \[f_{h}^{A,(T)}(x)=\psi_{E}^{A}\circ\gamma(h^{(T)}(p)),p\in\pi^{-1}(x) \tag{45}\] ### Propagation beyond Diffusion Equivariant diffusion allows for an insightful interpretation regarding the minimization of the Polyakov action. It also provides theorems and techniques to understand the diffusion kernel geometrically. However, diffusion poses severe limitations. 
The update function should be linear to preserve the original heat equation, which is not always the case in message-passing neural networks. However, this corresponds to the case of MACE (Batatia et al., 2022b), which uses linear updates. We allow GM - Message passing in all generality to have a non-linear update function. The nonlinearity should be of a gate form to avoid breaking equivariance (Weiler et al., 2021). We leave the analysis of the impact of non-linear updates on the process to future work. ### Equivariant Message Passing over graphs In the previous section, we introduced a method to update the coordinate-free feature field by applying an equivariant diffusion process to the canonically attached equivariant map. In this section, we apply this formalism to discretized manifolds. We now consider \(\mathcal{G}=(\mathcal{V}=\{1,...,n\},\mathcal{E})\) a graph with \(\mathcal{V}\) and \(\mathcal{E}\) denoting nodes and edges respectively. Let \((\mathcal{M},\eta_{\mathcal{M}})\) be a Riemannian manifold of dimension \(\mathfrak{m}\). The topology induced on the graph by the metric \(\eta_{\mathcal{M}}\) is defined by the set \(\mathcal{E}=\{(i,j)\in\mathcal{V}\times\mathcal{V}\,|\,\eta_{\mathcal{M}}(r_{i},r_{j})\leq r_{c}\}\) of neighboring points. By fixing one point \(i\), one can construct from the set \(\mathcal{E}\) the neighbors of \(i\), defined as \(\mathcal{N}(i)=\{j\in\mathcal{V}\,|\,\eta_{\mathcal{M}}(r_{i},r_{j})\leq r_{c},j\neq i\}\). We now derive the discrete analogue of GM-Message passing, which amounts to finding the optimal equivariant map \(h:P\to V\) from information on a discretized manifold represented by the graph \(\mathcal{G}\). **Definition 3.17** (State function).: We define \((P,\pi,\mathcal{M},G)\) the principal bundle of \(\mathcal{M}\). 
Let \(\sigma:\mathcal{V}\to\mathcal{M}\times V\) be the state function of the nodes (atoms) \(\mathcal{V}\) such that \(\sigma_{i}=(r_{i},f_{h,i})\), with \(r_{i}\) the positions and \(f_{h,i}\) the canonically attached coordinate-independent feature field attached to \(h:P\to V\). We call \(\sigma^{A}\) the state relative to a gauge \(\psi^{A}\) on \(\mathcal{M}\). **Definition 3.18** (Equivariant Message Passing).: Let \(h^{(0)}:P\to V\) be an initial equivariant feature map. Let \(\Delta^{E}\) be the generalized Laplacian on the associated bundle \(E\) and \(k_{t}\) the heat kernel of \(\Delta^{E}\). We define the message operation of Equivariant - Message passing on \(\mathcal{G}\) : \[m^{(t)}(p_{i})=e^{-tCas}\int_{G}\rho(g)^{-1}dg\prod_{\xi=1}^{n}\sum_{j\in\mathcal{N}(i)}k_{t}(p_{i},p_{j}g)h^{(t-1)}(p_{j}) \tag{46}\] The feature map \(h^{(T)}\) will be the optimal equivariant map from \(P\) to the irreducible representation \(V\) of \(G\) for sufficiently large \(T\) over the graph \(\mathcal{G}\). We identify this operation as the pooling operation of message passing, \[\bigoplus_{j\in\mathcal{N}(i)}M_{t}(\sigma_{j}^{(t)},\sigma_{i}^{(t)})=\\ e^{-tCas}\int_{G}\rho(g)^{-1}dg\prod_{\xi=1}^{n}\sum_{j\in\mathcal{N}(i)}k_{t}(p_{i},p_{j}g)h^{(t-1)}(p_{j}) \tag{47}\] The kernel function can in all generality be learnable and depend on the iteration \(t\), for instance as a neural network. The update function \(U_{t}\) can be a linear or non-linear neural network, shifting from classical diffusion to a partial differential equation evolution. Let us assume that \(\mathcal{M}=\mathbb{R}^{3}\), \(G=SE(3)\), and \(V\) is the irreducible representation of scalars invariant to \(SE(3)\), such that \(\rho(g)=I,\forall g\in G\). If one expands the heat kernel into a spherical series, one recovers the equations of the MACE architecture (Batatia et al., 2022b;a). Let \(k(p,q)\) be a translationally invariant kernel. 
Then \(\forall t\in\mathbb{R}^{3},k(p+t,q+t)=k(p,q)=k(0,q-p)=\tilde{k}(q-p)\). Thus we see that \(\tilde{k}\) and \(k\) contain the same information. _Spherical expansion of the kernel._ First, consider the heat kernel on the sphere \(S^{2}\). By Mercer's theorem, any kernel \(k(p-q)\in L(S^{2})\) can be expanded in terms of spherical harmonics. Extending this to \(\mathbb{R}^{3}\), any kernel \(k(p-q)\in L(\mathbb{R}^{3})\) can be expanded as \[k(p-q)=\sum_{n=0}^{+\infty}\sum_{l=0}^{+\infty}\sum_{m=-l}^{l}R^{(n)}(\mathbf{p}-\mathbf{q})c_{lm}Y_{m}^{l}(\hat{p}-\hat{q}) \tag{48}\] where \(Y_{m}^{l}\) are spherical harmonics of order \(l,m\). By truncating the expansion at a maximal \(l\) value and injecting it into the previous message passing equation, one recovers the exact equation for the MACE messages if the time step is constant, as the Casimir term just becomes a constant re-scaling. Figure 3: The state function labels the set of nodes \(\mathcal{V}\) by a tuple of positional features in \(\mathcal{M}\) and the numerical feature field in the vector space \(V\). ## 4 Discussion In this work, we have introduced a new geometric interpretation of message passing based on differential geometry and diffusion that led to the formulation of a general class of networks on Riemannian manifolds. Implementing these models on test manifolds is still needed, and numerical experiments are required to validate the proposed approach. Fast Fourier transforms on the group could simplify the computation of the integral over the group arising in the formulation of GM-Messages. The connection with equivariant message passing could also be fruitful for the interpretability of these methods and could help create new numerical schemes based on our proposed understanding. A more general discussion is needed on the connection between message passing and non-linear partial differential equations on manifolds. 
Extending this work beyond point clouds using non-commutative geometry would represent a challenging task but might be a fruitful endeavour. ## 5 Acknowledgements The author would like to express heartfelt gratitude towards the reviewers, denoted as \(uudT\) and \(8mvL\). The invaluable feedback and constructive critiques provided have been instrumental in shaping and refining this work.
2304.08398
Spectral classification of young stars using conditional invertible neural networks I. Introducing and validating the method
Aims. We introduce a new deep learning tool that estimates stellar parameters (such as effective temperature, surface gravity, and extinction) of young low-mass stars by coupling the Phoenix stellar atmosphere model with a conditional invertible neural network (cINN). Our networks allow us to infer the posterior distribution of each stellar parameter from the optical spectrum. Methods. We discuss cINNs trained on three different Phoenix grids: Settl, NextGen, and Dusty. We evaluate the performance of these cINNs on unlearned Phoenix synthetic spectra and on the spectra of 36 Class III template stars with well-characterised stellar parameters. Results. We confirm that the cINNs estimate the considered stellar parameters almost perfectly when tested on unlearned Phoenix synthetic spectra. Applying our networks to Class III stars, we find good agreement with deviations of at most 5--10 per cent. The cINNs perform slightly better for earlier-type stars than for later-type stars like late M-type stars, but we conclude that estimations of effective temperature and surface gravity are reliable for all spectral types within the network's training range. Conclusions. Our networks are time-efficient tools applicable to large amounts of observations. Among the three networks, we recommend using the cINN trained on the Settl library (Settl-Net), as it provides the best performance across the largest range of temperature and gravity.
Da Eun Kang, Victor F. Ksoll, Dominika Itrich, Leonardo Testi, Ralf S. Klessen, Patrick Hennebelle, Sergio Molinari
2023-04-17T16:05:51Z
http://arxiv.org/abs/2304.08398v1
# Spectral classification of young stars using conditional invertible neural networks ###### Abstract Context: Aims: We introduce a new deep learning tool that estimates stellar parameters (such as effective temperature, surface gravity, and extinction) of young low-mass stars by coupling the Phoenix stellar atmosphere model with a conditional invertible neural network (cINN). Our networks allow us to infer the posterior distribution of each stellar parameter from the optical spectrum. Methods: We discuss cINNs trained on three different Phoenix grids: Settl, NextGen, and Dusty. We evaluate the performance of these cINNs on unlearned Phoenix synthetic spectra and on the spectra of 36 Class III template stars with well-characterised stellar parameters. Results: We confirm that the cINNs estimate the considered stellar parameters almost perfectly when tested on unlearned Phoenix synthetic spectra. Applying our networks to Class III stars, we find good agreement with deviations of at most 5-10 per cent. The cINNs perform slightly better for earlier-type stars than for later-type stars like late M-type stars, but we conclude that estimations of effective temperature and surface gravity are reliable for all spectral types within the network's training range. Conclusions: Our networks are time-efficient tools applicable to large amounts of observations. Among the three networks, we recommend using the cINN trained on the Settl library (Settl-Net), as it provides the best performance across the largest range of temperature and gravity. ## 1 Introduction In star-forming regions, it is massive stars that influence the surrounding environment energetically and dynamically during their short lifetime, but the majority of stars formed in star-forming regions are low-mass stars with masses similar to or less than the solar mass. 
These low-mass stars are not only the most numerous in the star-forming region (Bochanski et al., 2010) but also account for about half of the total stellar mass (Kroupa, 2002; Chabrier, 2003). Living longer than massive stars, these low-mass stars still remain in the pre-main-sequence phase even after the massive stars have died. These young low-mass stars provide important information for studying stellar evolution and planet formation. Stellar parameters (e.g. effective temperature, surface gravity, luminosity, etc.) are estimated from photometric or spectroscopic data by various methods. These methods are usually based on characteristic spectral features that vary depending on the type of stars. Therefore, it is important to adopt the method appropriate to the star under consideration and to the observed wavelength range. As the volume of accumulated observations continues to expand, it has become important to develop time-efficient tools that analyse large amounts of data in a fast and consistent way. This is why artificial neural networks (NNs; Goodfellow et al., 2016) have been utilised in many astronomical fields in recent years. For instance, NNs have been used to predict physical parameters (e.g. Fabbro et al., 2018; Ksoll et al., 2020; Olney et al., 2020; Kang et al., 2022) or to efficiently analyse images, such as identifying structures (e.g. Abraham et al., 2018) and exoplanets (e.g. de Beurs et al., 2022) or classifying observations (e.g. Wu et al., 2019; Walmsley et al., 2021; Whitmore et al., 2021). In this study, we develop NNs that can efficiently analyse numerous spectra of young low-mass stars in the optical wavelength range. We prepare our networks to analyse VLT/MUSE observations, adopting the wavelength coverage and spectral resolution of MUSE. In the follow-up paper, we will apply our tool to the spectra of young stars in the Carina Nebula observed with VLT/MUSE. 
We adopt the conditional invertible neural network (cINN) architecture developed by Ardizzone et al. (2021). Estimating physical parameters from observed measurements is a non-trivial task. As the information we obtain from observations is limited due to the information loss during the forward process (i.e. the translation from physical systems to observations), different physical systems can be observed similarly or almost identically, which we call a degenerate system. The cINN architecture is specialised for solving the inverse problem of a degenerate system (i.e. from observations to physical systems). In particular, the cINN has the advantage that it always provides a full posterior distribution of the physical parameters without any additional computational cost. In astronomy, the cINN approach has been used so far to characterise the internal properties of planets (Haldemann et al., 2022), analyse photometric data of young stars (Ksoll et al., 2020), study emission lines in H ii regions (Kang et al., 2022), or infer the merger history of galaxies (Eisert et al., 2023). The cINN architecture adopts a supervised learning approach that learns the hidden rules from a number of well-labelled data sets of physical parameters and observations. As it is difficult to collect a sufficient number of well-interpreted real observations, synthetic observations have usually been used instead to generate enough training data. In this study, we utilise Phoenix stellar atmosphere libraries (e.g. Allard et al., 2012; Husser et al., 2013; Baraffe et al., 2015) to train cINNs. Selecting the Settl, NextGen, and Dusty Phoenix libraries, we introduce three cINNs (Settl-Net, NextGen-Net, and Dusty-Net), each trained on one of these libraries. A few studies have developed NNs to analyse low-mass stars from photometric or spectroscopic data (e.g. Ksoll et al., 2020; Olney et al., 2020; Sharma et al., 2020). For example, Ksoll et al. 
(2020) developed a network using the cINN architecture to estimate the physical parameters of individual stars from HST photometric data, and Olney et al. (2020) used a convolutional neural network (CNN) to estimate physical parameters (e.g. effective temperature, surface gravity, and metallicity) from near-infrared spectra observed with the Apache Point Observatory Galactic Evolution Experiment (APOGEE) spectrograph. Sharma et al. (2020) used a CNN as well to diagnose the optical spectra of stars across a wide range of spectral types, but their network only estimates the spectral type of the stars, not the other physical parameters. In this paper, on the other hand, our networks directly estimate the stellar parameters from the optical spectrum of low-mass stars, including stars in both the main-sequence and pre-main-sequence phases. Moreover, our network provides a posterior distribution by adopting the cINN architecture, which is useful for studying the degeneracy between parameters. In this paper, we focus on validating the performance of the three cINNs. We evaluate our networks not only on Phoenix synthetic observations but also on real spectra of 36 young low-mass stars to investigate how well our cINNs work on real observations. These stars are template stars in the Class III phase, well characterised in the literature (e.g. Manara et al., 2013, 2017; Stelzer et al., 2013). The paper is structured as follows. In Sect. 2, we describe the structure and principles of the cINN and explain implementation details on the machine learning side. In Sect. 3, we introduce our three networks and three training databases. In the following section (Sect. 4), we describe the Class III template stars used in this paper. Our main results are in Sect. 5. We validate our networks using synthetic Phoenix spectra and the 36 template stars. We not only evaluate the parameter prediction power of the cINN but also check whether the predicted parameters explain the input observations. 
Section 6 presents which parts of the spectrum the cINN relies upon most. In Sect. 7, we investigate the gap between Phoenix synthetic spectra and real observations. We summarise the results in Sect. 8. ## 2 Neural network ### Conditional invertible neural network The conditional invertible neural network (cINN; Ardizzone et al., 2019, 2019) is a deep learning architecture that is well suited for solving inverse problems, that is, tasks where the underlying physical properties \(\mathbf{x}\) of a system are to be recovered from a set of observable quantities \(\mathbf{y}\). In nature, recovering the inverse mapping \(\mathbf{x}\leftarrow\mathbf{y}\) is often challenging and subject to degeneracy due to an inherent loss of information in the forward mapping \(\mathbf{x}\rightarrow\mathbf{y}\), such that multiple sets of physical properties may appear similar or even entirely the same in observations. To tackle these difficulties, the cINN approach introduces a set of unobservable, latent variables \(\mathbf{z}\) with a known, prescribed prior distribution \(P(\mathbf{z})\) to the problem in order to encode the information that is otherwise lost in the forward mapping. The cINN achieves this by learning a mapping \(f\) from the physical parameters \(\mathbf{x}\) to the latent variables \(\mathbf{z}\)_conditioned_ on the observations \(\mathbf{y}\), that is \[f\left(\mathbf{x};\mathbf{c}=\mathbf{y}\right)=\mathbf{z}, \tag{1}\] capturing all the variance of \(\mathbf{x}\) not explained by \(\mathbf{y}\) in \(\mathbf{z}\), while enforcing that \(\mathbf{z}\) follows the prescribed prior \(P(\mathbf{z})\). 
Given a new observation \(\mathbf{y}^{\prime}\) at prediction time, the cINN can then query the encoded variance by sampling the latent space according to the known prior distribution and by making use of its invertible architecture run in reverse to estimate the full posterior distribution \(p\left(\mathbf{x}|\mathbf{y}^{\prime}\right)\) as \[p\left(\mathbf{x}|\mathbf{y}^{\prime}\right)\sim g\left(\mathbf{z};c=\mathbf{y}^{\prime}\right),\text{ with }\mathbf{z}\sim P\left(\mathbf{z}\right), \tag{2}\] where \(f^{-1}(\cdot,\mathbf{c})=g(\cdot,\mathbf{c})\) represents the inverse of the learned forward mapping for fixed condition \(\mathbf{c}\). In practice, \(P(\mathbf{z})\) is usually prescribed to be a multivariate normal distribution with zero mean and unit covariance, and the dimension of the latent space is chosen to be equal to that of the target parameter space, that is \(\dim(\mathbf{z})=\dim(\mathbf{x})\). The invertibility of the cINN architecture is achieved by chaining so-called (conditional) affine coupling blocks (Dinh et al., 2016). Each of these blocks performs two complementary affine transformations on the halves \(\mathbf{u}_{1}\) and \(\mathbf{u}_{2}\) of the block input vector \(\mathbf{u}\), following \[\mathbf{v}_{1} =\mathbf{u}_{1}\odot\exp\left(s_{2}(\mathbf{u}_{2},\mathbf{c})\right)\oplus t_{2}(\mathbf{u}_{2},\mathbf{c})\] \[\mathbf{v}_{2} =\mathbf{u}_{2}\odot\exp\left(s_{1}(\mathbf{v}_{1},\mathbf{c})\right)\oplus t_{1}(\mathbf{v}_{1},\mathbf{c}). \tag{3}\] As the equation shows, these two transformations are easily inverted given the halves \(\mathbf{v}_{1}\), \(\mathbf{v}_{2}\) of the output vector \(\mathbf{v}\) according to \[\mathbf{u}_{2} =\left(\mathbf{v}_{2}\ominus t_{1}(\mathbf{v}_{1},\mathbf{c})\right)\odot\exp\left(-s_{1}(\mathbf{v}_{1},\mathbf{c})\right)\] \[\mathbf{u}_{1} =\left(\mathbf{v}_{1}\ominus t_{2}(\mathbf{u}_{2},\mathbf{c})\right)\odot\exp\left(-s_{2}(\mathbf{u}_{2},\mathbf{c})\right). 
\tag{4}\] In both sets of Eqs. (3) and (4), \(s_{i}\) and \(t_{i}\) (\(i\in\{1,2\}\)) denote arbitrarily complex transformations, which need not themselves be invertible (as they are only ever evaluated in the forward direction) and can also be learned by the cINN itself when realised as small sub-networks (Ardizzone et al. 2019, 2019). Another advantage of the cINN architecture is that, as the observations are treated as a condition and simply concatenated to the input of the subnetworks \(s_{i}\) and \(t_{i}\) in each affine coupling layer, it allows for a) an arbitrarily large dimension of the input \(\mathbf{y}\) and b) the introduction of a conditioning network \(h\) (trained together with the cINN itself), which transforms the input observation into a more helpful, learned representation \(\mathbf{\tilde{y}}=h(\mathbf{y})\) for the cINN (Ardizzone et al. 2019). ### Implementation details In this paper, we employ a cINN consisting of 11-16 conditional affine coupling layers in the GLOW (Generative Flow; Kingma & Dhariwal 2018) configuration, where the transformation outputs \(s_{i}(\cdot)\) and \(t_{i}(\cdot)\) are estimated by a single subnetwork \(r_{i}(\cdot)=(s_{i}(\cdot),t_{i}(\cdot))\). The latter choice reduces the number of sub-networks per affine layer from four to two, reducing network complexity and computation time. As sub-networks \(r_{i}\) we employ simple fully-connected architectures consisting of 5-7 layers of size 256 using the rectified linear unit (ReLU, \(\text{ReLU}(x)=\max(0,x)\)) as activation function. The affine coupling layers are, furthermore, alternated with random permutation layers, which randomly (but in a fixed and, thus, invertible way) permute the output vector in between coupling layers to improve the mixing of information between the two streams \(\text{u}_{1}\) and \(\text{u}_{2}\) (Ardizzone et al. 2019, 2019). 
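The exact invertibility of a conditional affine coupling block can be demonstrated in a few lines of numpy. In this sketch, random tanh maps stand in for the learned sub-networks \(s_{i}\), \(t_{i}\); the dimensions and the form of the sub-networks are illustrative assumptions, not the 5-7 layer fully-connected networks used in the paper.

```python
import numpy as np

# Minimal numpy sketch of one conditional affine coupling block, Eqs. (3)
# and (4): the block is exactly invertible regardless of the form of the
# sub-networks s_i, t_i, which here are random tanh maps of one half of
# the input concatenated with the condition c.
rng = np.random.default_rng(0)
d, dc = 4, 3                                   # dim of each half / of the condition
W = {k: rng.normal(scale=0.3, size=(d, d + dc))
     for k in ("s1", "t1", "s2", "t2")}

def sub(k, u, c):                              # stand-in for the learned s_i / t_i
    return np.tanh(W[k] @ np.concatenate([u, c]))

def forward(u1, u2, c):                        # Eq. (3)
    v1 = u1 * np.exp(sub("s2", u2, c)) + sub("t2", u2, c)
    v2 = u2 * np.exp(sub("s1", v1, c)) + sub("t1", v1, c)
    return v1, v2

def inverse(v1, v2, c):                        # Eq. (4)
    u2 = (v2 - sub("t1", v1, c)) * np.exp(-sub("s1", v1, c))
    u1 = (v1 - sub("t2", u2, c)) * np.exp(-sub("s2", u2, c))
    return u1, u2

u1, u2, c = rng.normal(size=d), rng.normal(size=d), rng.normal(size=dc)
v1, v2 = forward(u1, u2, c)
r1, r2 = inverse(v1, v2, c)
assert np.allclose(r1, u1) and np.allclose(r2, u2)   # exact reconstruction
```

Note that `inverse` never evaluates the sub-networks in reverse, which is why \(s_{i}\) and \(t_{i}\) need not themselves be invertible.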
For the conditioning network \(h\), we also employ a three-layer fully-connected architecture with layer size 512 and ReLU activation, extracting 256 features in the final layer. Prior to training, we perform a linear scaling transformation on both the target parameters \(\mathbf{x}=\{x_{1},\dots,x_{N}\}\) and input observations \(\mathbf{y}=\{y_{1},\dots,y_{M}\}\), where each target property \(x_{i}\) and input feature \(y_{i}\) is modified according to \[\hat{x}_{i} =\frac{x_{i}-\mu_{x_{i}}}{\sigma_{x_{i}}},\] \[\hat{y}_{i} =\frac{y_{i}-\mu_{y_{i}}}{\sigma_{y_{i}}}, \tag{5}\] where \(\mu_{x_{i}}\), \(\mu_{y_{i}}\) and \(\sigma_{x_{i}}\), \(\sigma_{y_{i}}\) denote the means and standard deviations of the respective parameter/feature across the training data set. These transformations ensure that the distributions of individual target parameters/input features have zero mean and unit standard deviation, and are trivially inverted at prediction time. The transformation coefficients \(\mu_{x_{i}}\), \(\mu_{y_{i}}\) and \(\sigma_{x_{i}}\), \(\sigma_{y_{i}}\) are determined from the training set and applied in the same way to new query data. We train the cINN approach for this problem by minimisation of the maximum likelihood loss as described in Ardizzone et al. (2019), using the Adam (Kingma & Ba 2014) optimiser for stochastic gradient descent with a step-wise learning rate adjustment. ## 3 Training data ### Stellar photosphere models The approach used to train the cINN relies on libraries of theoretical models for stellar photospheres. Our goal is to use the cINN to classify stars and derive photospheric parameters from medium- to low-resolution optical spectroscopy. For this purpose, we selected the most extensive set of available models that offer a spectral resolution better than R\(\sim\)10000. 
The most extensive, homogeneous, tested, and readily available1 library of theoretical photospheric spectra, including different treatments of dust and molecule formation and opacities, applicable over effective temperatures from \(\sim\)2000 to \(\sim\)7000 K and gravities appropriate for pre-main-sequence stars and brown dwarfs, are the Phoenix spectral libraries (e.g. Allard et al. 2012; Husser et al. 2013; Baraffe et al. 2015). In this study, we have used the NextGen, Dusty, and Settl models, of which the latter is expected to provide the best description of the atmospheric characteristics in most cases of interest (Allard et al. 2012). We have included the older NextGen models as a comparison set, and the Dusty models as they seem to more accurately describe photospheres in the range of 2000 K \(\leq T_{\mathrm{eff}}\leq\) 3000 K (e.g. Testi 2009). For a more detailed description and comparison of the physical assumptions in the models, see the discussion and references in Allard et al. (2012). Footnote 1: We downloaded the theoretical spectra from the websites: [https://osubtd.ens-lyon.fr/phoenix/](https://osubtd.ens-lyon.fr/phoenix/) and [http://svo2.cab.intac.csic.es/theory/newvo2/](http://svo2.cab.intac.csic.es/theory/newvo2/) The grid of synthetic spectra is available for regularly spaced values of \(T_{\mathrm{eff}}\) and \(\log g\), with steps of 100 K in \(T_{\mathrm{eff}}\) and 0.5 in \(\log g\). 
To compute a synthetic spectrum for a given set of (arbitrary, but within the grid ranges) values of (\(T_{\mathrm{eff}}\), \(\log g\), \(A_{\mathrm{V}}\)), we set up the following procedure: first, we identify the values of \(T_{\mathrm{eff}}\) and \(\log g\) in the grid that bracket the requested values; then we interpolate linearly in \(\log g\) at each of the two bracketing \(T_{\mathrm{eff}}\) values; then we interpolate linearly between the two resulting spectra at the requested \(T_{\mathrm{eff}}\) value; finally, we compute and apply the extinction following the Cardelli et al. (1989) prescription, with \(R_{\mathrm{V}}\) as a user-selectable parameter (in this study we use \(R_{\mathrm{V}}\)=4.4, see Sect. 3.2). The resulting spectrum is then convolved to the MUSE resolution, using a Gaussian kernel, and resampled on the MUSE wavelength grid. ### Databases and networks In this study, we analyse the cINN performance based on each of the three spectral libraries described in the previous section. Accordingly, we construct a training data set for each spectral library using the interpolation scheme we have outlined. For the target parameter space, we adopt the following limits: for NextGen and Settl we limit \(T_{\mathrm{eff}}\) to a range of 2600 to 7000 K and \(\log(g/\text{cm}\,\text{s}^{-2})\) from 2.5 to 5. The Dusty library has an overall smaller scope, therefore we can only probe from 2600 to 4000 K in \(T_{\mathrm{eff}}\) and from 3 to 5 in \(\log(g/\text{cm}\,\text{s}^{-2})\). For \(A_{\mathrm{V}}\) we select the same range of 0 to 10 mag for all three libraries, where we use the Cardelli et al. (1989) extinction law with \(R_{\mathrm{V}}=4.4\) to artificially redden the model spectra. We choose \(R_{\mathrm{V}}=4.4\) considering the application of our networks to the Carina Nebula (Hur et al. 2012) in the follow-up study. As some of the template stars used in this paper (Sect. 
4) are dereddened assuming \(R_{\mathrm{V}}=3.1\), we have also experimented with training data sets using \(R_{\mathrm{V}}=3.1\). We have not found a significant difference in our main results, therefore we keep using \(R_{\mathrm{V}}=4.4\) in this study. In terms of wavelength coverage, we match the range of the template spectra described in Sect. 4 (i.e. \(\sim 5687\) to \(\sim 9350\) A) and adopt the MUSE spectral resolution, subdividing the wavelength interval into a total of 2930 bins with a width of 1.25 A. Additionally, we normalise the spectra to the sum of the total flux across all bins. To generate the training data we opt for a uniform random sampling approach, where we sample both \(T_{\mathrm{eff}}\) and \(g\) in log space and only \(A_{\rm V}\) in linear space within the above-specified limits for the three libraries. We generate a total of 65,536 synthetic spectra for each library. Note that we have also experimented with larger training sets, but have not found a significant increase in the predictive performance of our method, such that we deemed this training set size sufficient. Finally, we randomly split each of these three initial databases 80:20 into the respective training and test sets for the cINN. The former subsets mark the data that the cINN is actually trained on, whereas the latter are withheld during training and serve to quantify the performance of the trained cINN on previously unseen data with a known ground truth of the target parameters. We first train 50 networks for each library with randomised cINN hyperparameters and select the best network based on the performance on the test set and the template stars. We train each network until both the training loss and test loss converge or either of them diverges; the latter cases are discarded. It takes about 50 min to train one network (6 hours for 50 networks using 7 processes in parallel) with an NVIDIA GeForce RTX 2080 Ti graphics card. 
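The training-set construction described above can be sketched as follows, using the Settl parameter limits; the random seed is an arbitrary choice for illustration.

```python
import numpy as np

# Sketch of the training-set construction: T_eff and g are sampled
# uniformly in log space, A_V in linear space, within the Settl limits,
# and the database is split 80:20 into training and test sets.
rng = np.random.default_rng(42)                  # seed is an arbitrary choice
n = 65536
log_teff = rng.uniform(np.log10(2600.0), np.log10(7000.0), size=n)
log_g = rng.uniform(2.5, 5.0, size=n)            # log(g / cm s^-2)
a_v = rng.uniform(0.0, 10.0, size=n)             # extinction [mag]

params = np.column_stack([log_teff, log_g, a_v])
idx = rng.permutation(n)
n_train = round(0.8 * n)                         # 52,429 training models
train, test = params[idx[:n_train]], params[idx[n_train:]]
# The held-out part has 13,107 models, matching the test-set size in Sect. 5.
```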
Once trained, our networks can sample posterior estimates very efficiently. Using the same graphics card (NVIDIA GeForce RTX 2080 Ti) and sampling 4096 posterior estimates per observation, it takes about 1.1 sec to sample posterior distributions for 100 observations (91 observations per second). When tested on an Apple M1 Pro CPU with 8 cores, it takes about 13 sec for 100 observations (7.6 observations per second). ## 4 Class III templates The set of observations on which we validate our networks contains 36 spectra of well-known Class III stars observed with VLT/X-Shooter (Manara et al., 2013, 2017). We refer the reader to the original papers for details of the observations and data reduction. The templates come from different star-forming regions (Taurus, Lupus, Upper Scorpius, \(\sigma\) Orionis, TW Hydrae Association, Chameleon I) and span a broad range of effective temperatures (\(2300-5800\) K), as well as spectral types (M9.5 - G5.0). In this work we use their properties as provided by Manara et al. (2013, 2017) and Stelzer et al. (2013). Spectral types for stars later than K5 were obtained based on the depth of molecular absorption bands (TiO, VO, and CaH) and a few photospheric lines (e.g. Na i, Ca i, Mg i, etc.) present in the optical part of the spectra (Manara et al., 2013). Earlier K-type stars were identified using the spectral indices introduced by Herczeg & Hillenbrand (2014), while G-type stars were identified based on the difference at 5150 A between the continuum estimated between 4600 and 5400 A and that estimated between 4900 and 5150 A (Herczeg & Hillenbrand, 2014). Effective temperatures (\(T_{\rm eff}\)) were derived from spectral types using the relations from Luhman et al. (2003) for M-type objects and Kenyon & Hartmann (1995) for K- and G-type stars. Most of the templates have no or negligible extinction (\(A_{\rm V}<0.5\) mag, Manara et al., 2017); those with \(A_{\rm V}>0.3\) were dereddened before analysis assuming the extinction law from Cardelli et al. (1989) and \(R_{\rm V}=3.1\). 
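The degradation of a higher-resolution spectrum to the MUSE resolution (R \(\sim\) 4000) with a Gaussian kernel, followed by resampling onto the 1.25 A MUSE grid and normalisation to the total flux, can be sketched as follows. The 0.2 A input grid and the single synthetic absorption line are illustrative stand-ins for an X-Shooter template, not actual data.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Sketch: Gaussian smoothing to MUSE resolution, resampling to the MUSE
# wavelength grid, and normalisation to the total flux.
wl_in = np.arange(5687.66, 9348.91, 0.2)              # fine input grid [A]
flux_in = 1.0 - 0.8 * np.exp(-0.5 * ((wl_in - 6563.0) / 0.5) ** 2)

R = 4000.0
fwhm = wl_in.mean() / R                               # ~1.9 A at these wavelengths
sigma_pix = fwhm / (2.355 * 0.2)                      # kernel width in input pixels
flux_smooth = gaussian_filter1d(flux_in, sigma_pix, mode="nearest")

wl_muse = np.arange(5687.66, 9348.92, 1.25)           # 2930 MUSE bins of 1.25 A
flux_muse = np.interp(wl_muse, wl_in, flux_smooth)
flux_muse /= flux_muse.sum()                          # normalise to total flux
```

A single representative kernel width is used here; in practice the width can be varied with wavelength to track the instrumental resolution.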
Surface gravity (\(\log g\)) of the Class III sources was estimated using the ROTFIT tool (Frasca et al., 2003). It compares the observed spectrum with a grid of reference spectra and finds the best fit by minimising the \(\chi^{2}\) of the difference between the spectra in specific wavelength ranges. Stelzer et al. (2013) and Manara et al. (2017) used BT-Settl spectra in a \(\log g\) range of \(0.5-5.5\) dex as reference. The tool also provides \(T_{\rm eff}\) as well as radial and rotational velocities, but we use the \(T_{\rm eff}\) derived from spectral types in the subsequent analysis. Table 1 provides a summary of the Class III stars and their stellar parameters. We exclude sources from the original sample that are suspected to be unresolved binaries or whose youth is doubtful due to the lack of the lithium absorption line at 6708 Å (Manara et al., 2013). X-Shooter has a higher spectral resolution than MUSE, thus the template spectra were degraded to the MUSE resolution (R\(\sim\)4000) using a Gaussian kernel and re-sampled onto the MUSE wavelength grid within the range of 5687.66 - 9348.91 Å (the common spectral range of MUSE and the optical arm of X-Shooter). Subsequently, the spectra are normalised to the sum of the total flux of the stellar spectrum within the analysed spectral range. ## 5 Validation ### Validations with synthetic spectra In this section, we validate whether the trained networks have correctly learned the physical rules encoded in the synthetic Phoenix models. We use the test set of each database, i.e. the synthetic models that are not used for the training but share the same physics as the training data. As mentioned in Sect. 3.2, we only used 80% of each database for training and reserved the rest for validation. Each test set consists of 13,107 test models. #### 5.1.1 Prediction performance We introduce an accuracy index for evaluating the parameter prediction performance of the network.
The accuracy of the prediction is defined as the deviation between the posterior estimate of the parameter and the ground-truth value (\(x^{*}\)) of the test model. In this section, we calculate the accuracy in the same physical scales we used to build the databases in Sect. 3.2, meaning that we use logarithmic scales for the effective temperature and surface gravity and the linear scale for the extinction magnitude. We use either all posterior estimates sampled for one test model or the maximum a posteriori (MAP) point estimate as a representative. To determine the MAP estimate from the posterior distribution, we perform a Gaussian kernel density estimation on the 1D posterior distribution and find the point where the probability density maximises, similar to the method used in Ksoll et al. (2020) and Kang et al. (2022). In most parts of this paper, we use the MAP estimate to quantify the accuracy of the prediction. We evaluate the three networks (Settl-Net, NextGen-Net, and Dusty-Net) by using all 13,107 test models in the corresponding test set. For each test model, we sample 4096 posterior estimates and measure the MAP estimates for the three parameters from the 1D posterior distributions. In Fig. 1, we present 2D histograms comparing the MAP values estimated by Settl-Net with the true values of all test models. Settl-Net predicts all three parameters extremely well, so that the data points all lie on the 1-to-1 correspondence line. NextGen-Net and Dusty-Net likewise show extremely good results on their test sets. The results of the other two networks are very similar to the result of Settl-Net (Fig. 1), so we do not include the corresponding figures in this paper. To quantify the average accuracy of the network over multiple test models, we measure the root mean square error (RMSE), \[\text{RMSE}=\sqrt{\frac{\sum_{i=1}^{N}(x_{i}^{\text{MAP}}-x_{i}^{*})^{2}}{N}}
\tag{6}\] In the case of the Dusty-Net, the training ranges of the effective temperature and surface gravity are narrower than those of the other two networks. As the total number of models is the same for all three databases (i.e. 65,536 models), the number density of models in effective temperature and surface gravity in the Dusty database is higher than in the other two. On this account, we define the normalised RMSE (NRMSE), \[\mathrm{NRMSE}=\frac{\mathrm{RMSE}}{x_{\max}-x_{\min}}, \tag{7}\] where \(x_{\max}\) and \(x_{\min}\) denote the upper and lower limits of the training range of the corresponding parameter. Table 2 lists the RMSE and NRMSE values of the three networks averaged over the test sets. #### 5.1.2 Resimulation As an additional accuracy test, we resimulate spectra by interpolating the respective spectral library at the MAP estimates predicted for each test model and compare them to the corresponding input spectra. To quantify the agreement between a resimulated spectrum and the input spectrum, we compute the relative residual per wavelength bin, the RMSE across all bins, and the \(R^{2}\) score. The latter is a goodness-of-fit measure and is defined as \[R^{2}=1-\frac{\sum_{i}(y_{i}-\hat{y}_{i})^{2}}{\sum_{i}(y_{i}-\bar{y})^{2}} \tag{8}\] for a set of \(N\) observations \(y_{i}\) with corresponding predictions \(\hat{y}_{i}\), where \(\bar{y}=\frac{1}{N}\sum_{i}^{N}y_{i}\) denotes the mean of the observations. It takes on values between 0 and 1, with the latter indicating a perfect match (James et al., 2017). Figure 1 summarises the results for Settl-Net, showing the median relative residual against the wavelength in the left panel and the distribution of RMSEs in the right one.
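The MAP extraction via Gaussian kernel density estimation and the accuracy measures used in this section can be sketched as follows (function names are ours; the NRMSE here normalises the RMSE by the width of the parameter's training range, which is our reading of the definition):

```python
import numpy as np
from scipy.stats import gaussian_kde

def map_estimate(samples, n_grid=1024):
    """MAP of a 1D posterior: Gaussian KDE evaluated on a grid,
    returning the point of maximum estimated density."""
    kde = gaussian_kde(samples)
    grid = np.linspace(samples.min(), samples.max(), n_grid)
    return grid[np.argmax(kde(grid))]

def rmse(x_map, x_true):
    """Root mean square error between MAP estimates and ground truth."""
    return np.sqrt(np.mean((np.asarray(x_map) - np.asarray(x_true)) ** 2))

def nrmse(x_map, x_true, x_min, x_max):
    """RMSE normalised by the width of the parameter's training range."""
    return rmse(x_map, x_true) / (x_max - x_min)

def r2_score(y, y_pred):
    """Coefficient of determination for observations y and predictions y_pred."""
    y, y_pred = np.asarray(y), np.asarray(y_pred)
    ss_res = np.sum((y - y_pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

In practice, `map_estimate` would be applied to the 4096 posterior samples drawn per test model, and `rmse`/`nrmse` to the resulting MAP estimates of each parameter.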
The corresponding plots for NextGen-Net and Dusty-Net can be found in Figs. 1 and 2 in the Appendix. Out of the 13,107 test cases, we could not resimulate spectra for only 52, 32, and 9 MAP predictions for Settl-Net, NextGen-Net, and Dusty-Net, respectively. In these few instances, either the predicted temperature or gravity (or both) falls outside the interpolation limits of the respective spectral library, such that the spectrum cannot be resimulated. Notably, all of these are extreme edge cases right at the training boundaries of either \(T_{\rm eff}\) or \(\log(g)\), such that the cINN MAP estimates fall ever so slightly outside the limits while still being an excellent match to the ground truth. The resimulation results confirm the excellent precision of the MAP predictions demonstrated in the ground-truth comparison in Fig. 1. With a median RMSE of the resimulated spectra of \(1.57^{+1.81}_{-0.77}\times 10^{-7}\) (and a median \(R^{2}\) score of 1), we find that the resimulated spectra are practically spot-on to the corresponding input. In the left panel of Fig. 1 we can also see that, while the overall median residual is very small, there is a systematic trend towards a larger discrepancy between resimulation and input in the shorter wavelength regime (\(<7250\) Å). This is likely an effect of the overall low flux in the short wavelength regime for the colder stars (\(<4000\) K), such that even a small deviation in flux results in a comparably larger value of the relative residual. It has to be noted, though, that with most relative deviations falling below 0.2 per cent, the discrepancy is marginal even in the short wavelength regime. As Figs. 1 and 2 show, NextGen-Net and Dusty-Net exhibit similar behaviour in the resimulation test, although we find slightly lower mean RMSEs of \(2.28\pm 2.48\times 10^{-7}\) and \(9.01\pm 7.34\times 10^{-8}\), respectively.
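The per-spectrum statistics used in these comparisons, i.e. the RMSE of each resimulated spectrum and the median relative residual per wavelength bin, can be sketched as follows (the array layout and function name are our assumptions):

```python
import numpy as np

def resimulation_stats(flux_in, flux_resim):
    """For a batch of spectra with shape (n_spectra, n_bins), return the
    RMSE of each resimulated spectrum and the median relative residual
    per wavelength bin, (I_resim - I_in) / I_in."""
    flux_in, flux_resim = np.asarray(flux_in), np.asarray(flux_resim)
    per_spectrum_rmse = np.sqrt(np.mean((flux_resim - flux_in) ** 2, axis=1))
    rel_res = (flux_resim - flux_in) / flux_in
    return per_spectrum_rmse, np.median(rel_res, axis=0)
```

The first output corresponds to the RMSE histograms, the second to the median-residual curves plotted against wavelength.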
Given that the mean RMSEs across the three different spectral libraries agree within one \(\sigma\), however, it is safe to say that all three networks achieve equally excellent performance in the resimulation test. ### Validations with Class III template stars In this section, we investigate how well our cINNs predict each parameter when applied to real observations by analysing Class III template stars introduced in Sect. 4. Stellar parameter values (i.e. effective temperature, surface gravity, and extinction) provided by previous papers (Manara et al., 2013, 2017; Stelzer et al., 2013) are listed in Table 1. Among the 36 template stars, there are cases where the literature value of effective temperature \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{RMSE} & \multicolumn{3}{c}{NRMSE} \\ \cline{2-7} Network & \(\log T_{\rm eff}\) & \(\log(g)\) & \(A_{\rm V}\) & \(\log T_{\rm eff}\) & \(\log(g)\) & \(A_{\rm V}\) \\ \hline Settl & \(4.260\times 10^{-4}\) & \(1.211\times 10^{-2}\) & \(7.893\times 10^{-3}\) & \(9.904\times 10^{-4}\) & \(4.846\times 10^{-3}\) & \(7.893\times 10^{-4}\) \\ NextGen & \(3.064\times 10^{-4}\) & \(6.742\times 10^{-3}\) & \(6.499\times 10^{-3}\) & \(7.123\times 10^{-4}\) & \(2.697\times 10^{-3}\) & \(6.499\times 10^{-4}\) \\ Dusty & \(7.274\times 10^{-5}\) & \(1.573\times 10^{-3}\) & \(2.517\times 10^{-3}\) & \(3.888\times 10^{-4}\) & \(7.863\times 10^{-4}\) & \(2.517\times 10^{-4}\) \\ \hline \end{tabular} 1 \end{table} Table 2: Average prediction performance of three networks (Settl-Net, NextGen-Net, and Dusty-Net) on 13,107 Phoenix synthetic models in the test set. Figure 1: Resimulation results of Settl-Net for the entire synthetic spectra in the test set. The left panel presents the median relative error across the wavelength range of the resimulated spectra based on the MAP predictions of the cINN trained on the Settl models averaged over the 13,107 synthetic spectra in the test set. 
Here the grey envelope indicates the interquartile range between the 25% and 75% quantiles. In the right panel, we present the histogram of the RMSEs of the 13,107 resimulated spectra. The mean resimulation RMSE across the test set is \(3.01\pm 4.35\times 10^{-7}\). is out of the training range of the cINNs, or where the literature value of gravity is missing. Two out of the 36 stars have temperatures below 2600 K, outside the temperature range of all three databases. Also, 14 stars with temperatures between 4000 K and 7000 K are out of the training range of Dusty-Net. These stars are excluded from some analyses in the following sections. Using each network, we sample 4096 posterior estimates per star and measure the MAP estimates of the three parameters. We list the MAP values predicted by the three networks in Table 3. #### 5.2.1 Parameter comparison between literature and cINN In Fig. 2, we compare the stellar parameter values from the literature (\(x_{\rm lit}\)) with the MAP predictions (\(x_{\rm MAP}\)). Each row shows the result of a different cINN.
The first two columns are the results of effective temperature and surface gravity. As the extinction value of template stars is negligible, we compare the literature value of the temperature with the MAP estimation of extinction. We calculate the uncertainty of the MAP estimate based on the width of the posterior distribution, but as the uncertainties are all very small, we did not present the uncertainty of the MAP estimate in the figure. For the uncertainty of the literature values, we adopt \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline & \multicolumn{6}{c}{MAP estimate} \\ \cline{2-10} & \multicolumn{3}{c}{\(T_{\rm eff}\) (K) [\(\Delta_{\rm lit}\)]} & \multicolumn{3}{c}{log(\(g\)/cm s\({}^{-2}\)) [\(\Delta_{\rm lit}\)]} & \multicolumn{3}{c}{\(A\)v (mag)} \\ \cline{2-10} Object Name & Settl & NextGen & Dusty & Settl & NextGen & Dusty & Settl & NextGen & Dusty \\ \hline RXJ0445.8+1556 & 5391 [379] & 5692 [78] & 4161 [1609] & 4.28 [-0.35] & 4.13 [-0.20] & 4.14 [-0.21] & 0.21 & 0.38 & -0.02 \\ RXJ1508.6-4423 & 5069 [451] & 5434 [86] & 4141 [1379] & 4.10 [-0.04] & 4.16 [-0.10] & 4.13 [-0.07] & -0.31 & -0.04 & -0.13 \\ RXJ1526.0-4501 & 5150 [260] & 5443 [-33] & 4170 [1240] & 4.25 [0.13] & 4.21 [0.17] & 4.13 [0.25] & -0.02 & 0.19 & 0.14 \\ HBC407 & 5129 [-19] & 5497 [-387] & 4165 [945] & 4.71 [-0.38] & 4.64 [-0.31] & 4.26 [0.07] & 0.17 & 0.37 & 0.02 \\ PZ991I60843.4+260216 & 5006 [44] & 5366 [-316] & 4154 [896] & 4.43 [-0.95] & 4.42 [-0.94] & 4.28 [-0.80] & 0.15 & 0.38 & -0.09 \\ RXJ15158.3-3331 & 4895 [155] & 5248 [-198] & 4177 [873] & 4.25 [-0.39] & 4.32 [-0.46] & 4.31 [-0.45] & 0.00 & 0.32 & 0.27 \\ PZ991I6055.5-253313 & 4759 [241] & 5168 [-168] & 4.02 [-0.21] & 4.19 [-0.38] & 4.34 [-0.53] & 0.09 & 0.40 & 0.21 \\ RXJ0457.5+2014 & 4644 [356] & 5105 [-105] & 4123 [877] & 4.37 [0.14] & 4.63 [-0.12] & 4.47 [0.04] & -0.13 & 0.34 & -0.17 \\ RXJ0438.6+1546 & 4588 [312] & 4992 [-92] & 4177 [723] & 4.01 [0.11] & 4.20 [-0.08] & 4.50 [-0.38] & 0.01 & 0.44 & 
0.20 \\ RXJ1547.7-4018 & 4615 [115] & 5015 [-285] & 4185 [545] & 4.15 [0.07] & 4.40 [-0.18] & 4.52 [-0.30] & -0.02 & 0.26 & 0.13 \\ RXJ1538.6-3916 & 4464 [126] & 4830 [-240] & 4180 [410] & 4.17 [0.04] & 4.38 [-0.17] & 4.69 [-0.48] & 0.01 & 0.30 & 0.21 \\ RXJ1540.7-3756 & 4225 [-20] & 4260 [-55] & 4115 [90] & 4.22 [0.20] & 4.17 [0.25] & 4.92 [-0.50] & -0.11 & 0.12 & 0.22 \\ RXJ1543.1-3920 & 4269 [-64] & 4299 [-94] & 4132 [73] & 4.34 [-0.22] & 4.32 [-0.20] & 5.00 [-0.88] & 0.03 & 0.28 & 0.39 \\ SO879 & 4106 [-46] & 4027 [33] & 3909 [151] & 3.96 [-0.06] & 4.09 [-0.19] & 4.78 [-0.88] & 0.22 & 0.29 & -0.12 \\ Tyc7760283.1 & 3881 [-31] & 3748 [102] & 3742 [108] & 5.00 [-0.30] & 4.99 [-0.29] & 5.23 [-0.53] & -0.17 & -0.34 & -0.52 \\ TWA14 & 3819 [-39] & 3739 [41] & 3677 [103] & 5.07 [-0.37] & 4.87 [-0.17] & 5.09 [-0.39] & -0.32 & 0.19 & -0.30 \\ RXJ1121.3-3447\_app2 & 3797 [-92] & 3622 [83] & 3635 [70] & 4.78 [-0.18] & 4.68 [-0.08] & 5.13 [-0.53] & 0.38 & 0.30 & 0.02 \\ RXJ1121.3-3447\_app1 & 3719 [-14] & 3559 [146] & 3564 [141] & 4.90 [-0.10] & 4.77 [0.03] & 5.16 [-0.36] & 0.01 & 0.04 & -0.07 \\ CD\_29888TA & 3670 [-110] & 3483 [77] & 3491 [69] & 4.79 [-0.39] & 4.57 [-0.17] & 5.05 [-0.65] & 0.56 & 0.51 & 0.07 \\ CD\_36.7429B & 3423 [-8] & 3264 [151] & 3262 [153] & 4.70 [-0.20] & 4.44 [0.06] & 4.82 [-0.32] & 0.52 & 0.50 & 0.13 \\ TWA15.\_app2 & 3467 [-52] & 3289 [126] & 3036 [109] & 4.93 [-0.53] & 4.71 [-0.31] & 5.02 [-0.62] & 0.17 & 0.31 & 0.09 \\ TWA7 & 3519 [-104] & 3321 [94] & 3316 [99] & 4.83 [-0.23] & 4.45 [0.15] & 4.80 [-0.20] & 0.41 & 0.94 & 0.14 \\ TWA15.\_app1 & 3469 [-129] & 3285 [55] & 3310 [30] & 5.01 [-0.51] & 4.79 [-0.29] & 5.08 [-0.58] & 0.06 & 0.20 & 0.10 \\ SO797 & 3248 [-48] & 3225 [-25] & 3078 [122] & 3.93 [-0.03] & 3.47 [0.43] & 4.03 [-0.13] & 1.07 & 1.48 & 0.73 \\ SO641 & 3129 [-4] & 3237 [-112] & 2997 [128] & 3.86 [-0.06] & 3.20 [0.60] & 3.81 [-0.01] & 0.68 & 1.46 & 0.43 \\ Par\_Lup3\_2 & 3181 [-56] & 3245 [-120] & 3048 [77] & 3.96 [-0.26] & 3.29 
[0.41] & 4.00 [-0.30] & 0.72 & ... & ... \\ \hline \end{tabular} \end{table} Table 3: MAP estimates of the three networks for the template stars; values in brackets give the deviation from the literature value (\(\Delta_{\rm lit}\)). a 1-subclass temperature interval as the uncertainty of temperature and use the surface gravity uncertainty provided by the literature (Stelzer et al., 2013; Manara et al., 2017). According to the literature, the 1-\(\sigma\) uncertainty of extinction is \(\sim\) 0.1-0.2 mag, so we mark the range from \(-\)0.2 to 0.2 mag in grey to show the uncertainty range. In this section, we exclude from our analyses stars whose literature parameter values are out of the training range or missing, although they are shown in Fig. 2 with triangle symbols. We use 34, 34, and 20 stars for Settl-Net, NextGen-Net, and Dusty-Net, respectively, when analysing temperatures or extinction, and 32, 32, and 18 stars, respectively, when analysing gravity. Comparing the temperature MAP estimates with the literature values, we confirm that the majority of stars lie close to the 1-to-1 correspondence line. We calculate the RMSE for each network using only stars whose literature temperature values are within the training range (i.e. circle markers in Fig. 2). Considering that the average 1-subclass temperature interval of these stars is about 140 K, the RMSE values of Figure 2: Comparison of MAP predictions with literature values in Table 1. Stars are generally denoted by circle symbols, while triangle symbols denote stars excluded from analyses such as the RMSE calculation, either because their literature values of temperature are out of the cINN training range or because their literature values of surface gravity are missing. The colour indicates the temperature deviation between the MAP estimate and the literature value. We indicate the training range of each parameter with green dotted lines. In the third column, the grey horizontal area presents the 1-\(\sigma\) uncertainty (i.e. 0.2 mag) of extinction provided by the literature.
175.3 K, 192.3 K, and 94.02 K for Settl-Net, NextGen-Net, and Dusty-Net, respectively, are well within a 1-2 subclass interval. As shown in the figure and by the RMSE values, Dusty-Net has the best agreement with the literature values when the temperature is within its training range of \(2600-4000\) K. However, Dusty-Net shows very poor agreement with the literature values when the temperature is outside this range. This calls for caution when using a cINN to analyse stars far from the training range. Comparing Settl-Net and NextGen-Net, which share the same training range, the MAP estimates of Settl-Net are closer to the literature values. To compare the temperature performance of the three networks in more detail, we present in Fig. 3 the relative temperature deviations between the MAP predictions and the literature values, sorted by spectral type. Figure 3 also shows that the MAP estimates from Dusty-Net are in good agreement with the literature values, within a 5 per cent discrepancy. In the case of Dusty-Net, all but one star have a deviation within the 1-subclass interval. In the case of Settl-Net and NextGen-Net, 23 and 16 stars out of 34, respectively, have a deviation of less than a 1-subclass interval. The MAP estimates of Settl-Net and NextGen-Net have a relatively poor agreement with the literature values for hot stars of 4500 K (i.e. K4.0 type) or hotter. However, the discrepancies are still within 10 per cent. The average absolute relative deviations, using only the templates within the training range of each network, are 3.28, 4.49, and 2.58 per cent for Settl-Net, NextGen-Net, and Dusty-Net, respectively (Table 4). These average errors are equivalent to 1.08, 1.12, and 0.601 subclasses. In the case of surface gravity, the RMSEs of Settl-Net, NextGen-Net, and Dusty-Net are 0.30, 0.51, and 0.42 dex, respectively.
However, because the surface gravity values from previous studies (Stelzer et al., 2013; Manara et al., 2017) were obtained by fitting the spectra with the Settl models, the MAP estimates of Settl-Net are naturally the closest to the literature values. Although Settl-Net has the smallest RMSE value, considering the uncertainty of the literature values, the other two networks also agree well with the literature values. To combine the results of temperature and surface gravity, we define the combined error of the two parameters as \[\mbox{Combined error} = \sqrt{\frac{1}{2}\left(\left(\frac{\Delta T_{\rm eff}}{\log T_{\rm eff}^{\rm lit}}\right)^{2}+\left(\frac{\Delta g}{\log g^{\rm lit}}\right)^{2}\right)},\] \[\mbox{for}\] \[\Delta T_{\rm eff} = \log T_{\rm eff}^{\rm MAP}-\log T_{\rm eff}^{\rm lit},\] \[\Delta g = \log g^{\rm MAP}-\log g^{\rm lit}, \tag{9}\] and present the combined error of each template star in Fig. 4. We use the effective temperature in the logarithmic scale to match its scale with that of the surface gravity. The overall results using the combined error, presented in Fig. 4, are not significantly different from Fig. 3, but with the gravity error added, Settl-Net shows better performance than Dusty-Net even for low-temperature stars. In the case of NextGen-Net, the combined error is larger than for the other two networks because there are cases where the temperature and gravity errors are both large. The average combined errors across the stars are 3.93, 7.20, and 6.47 per cent for Settl-Net, NextGen-Net, and Dusty-Net, respectively.
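Equation (9) translates directly into code; a minimal sketch (variable names are ours):

```python
import numpy as np

def combined_error(log_teff_map, log_teff_lit, log_g_map, log_g_lit):
    """Combined error of Eq. (9): quadratic mean of the relative
    deviations in log T_eff and log g."""
    rel_t = (log_teff_map - log_teff_lit) / log_teff_lit
    rel_g = (log_g_map - log_g_lit) / log_g_lit
    return np.sqrt(0.5 * (rel_t ** 2 + rel_g ** 2))
```

Note that both deviations are taken in logarithmic scale, so that the temperature and gravity terms contribute on comparable footings.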
\begin{table} \begin{tabular}{l l l l l l l} \hline \hline & \multicolumn{3}{c}{Average relative error [\%]} & \multicolumn{3}{c}{Average relative error [\(\sigma\)]} \\ \cline{2-7} Network & \(T_{\rm eff}\) & log(\(g\)) & \(A_{\rm V}\) & \(T_{\rm eff}\) & log(\(g\)) & \(A_{\rm V}\) \\ \hline Settl & 3.28 & 5.5 & - & 1.08 & 0.809 & 2.78 \\ NextGen & 4.49 & 10.2 & - & 1.12 & 1.38 & 4.95 \\ Dusty & 2.58 & 9.13 & - & 0.601 & 1 & 3.87 \\ \hline \hline \end{tabular} \end{table} Table 4: Average absolute relative error between cINN predictions and literature values for the template stars. Figure 4: Average relative error of the template stars between the MAP estimates and the literature values, sorted by their spectral type. The average error is calculated as the root mean square of the relative errors of temperature and gravity, both in log scale (Eq. 9). The pink area indicates the 1-\(\sigma\) uncertainty of the literature values. We only present template stars whose literature value of temperature is within the network training range and whose literature value of gravity is available. Colour codes are the same as in Fig. 3. Figure 3: Relative temperature deviations of the template stars between the MAP estimates and the literature values, sorted by their spectral type. Different colours and symbols indicate the results of the three different cINNs. The pink area indicates the uncertainty of the literature value of temperature. We only present template stars whose literature value of temperature is within the network training range. In the case of Settl-Net, all but 7 stars are in good agreement with the literature values within the 1-\(\sigma\) uncertainty. Excluding one star with a large error, most of the stars have errors of less than 5 per cent and at most 10 per cent. Dusty-Net also has small errors (\(<\)15 per cent) but has the disadvantage of being inherently less versatile than the other two networks because of its training range.
NextGen-Net also shows an error of less than 10 per cent for stars with spectral type earlier than M5.0. Lastly, in the case of extinction, the deviation between MAP estimates and literature values varies with temperature. For stars hotter than about 3400 K (i.e. M3.0 type), all three networks predict near-zero extinction, with little deviation from the literature values. In the case of NextGen-Net, there are stars that are slightly outside the error range, but their MAP estimates are still sufficiently small. On the other hand, for cool stars below 3400 K, the discrepancy between the MAP value and the literature value gradually grows. In the case of Settl-Net and Dusty-Net, the MAP estimate does not exceed the maximum of 3, but in the case of NextGen-Net, the MAP estimates are slightly larger than for the other two networks. In this section, we showed that the discrepancy between the network MAP prediction and the literature value varies with the characteristics of the stars. Based on the overall results, stars in the following ranges show especially high agreement with the literature values:

* up to K1.0 (\(2935-5000\) K) for Settl-Net,
* up to K1.0 (\(3200-5000\) K) for NextGen-Net,
* up to M0.0 (\(3060-4000\) K) for Dusty-Net.

Settl-Net showed the best agreement with the literature values overall. Dusty-Net also shows good agreement for stars whose temperature is within the Dusty database range of 2600 - 4000 K. NextGen-Net has relatively large errors compared to the other two, but it still shows reliable performance for early-type stars. Given that Settl-Net and NextGen-Net cover a wider range of temperature (i.e. 2600 - 7000 K) and gravity (\(2.5-5\log(\mathrm{cm\,s^{-2}})\)) than Dusty-Net, Settl-Net is the best choice among the three networks. However, all three networks showed good agreement with the literature values considering their uncertainties. This shows that our cINN predictions are in good agreement with the values obtained with classical methods in previous studies.
Differences between literature values and network predictions do not by themselves demonstrate that the network prediction is wrong. For example, in the case of surface gravity, because the literature values were also obtained by fitting spectra based on the Settl models, there is inevitably a larger discrepancy between the literature values and the MAP predictions of NextGen-Net and Dusty-Net. This means that we need to take into account the methods used in the literature, and additional analysis is required to judge whether a cINN prediction is really wrong. The resimulation in the next section provides a better handle on the correctness of our cINN predictions. #### 5.2.2 Resimulation As we have done for the synthetic test data in Sect. 5.1.2, we also evaluate the accuracy of the cINN predictions on the Class III templates by resimulation, to quantify the agreement between the spectra corresponding to the MAP estimates and the input spectra. In this case, we also run a resimulation for the nominal literature stellar parameters of the Class III sources listed in Table 1 for comparison. Some of the Class III template sources in our sample do not have an estimate for \(\log(g)\) in the literature. For these sources, we assume a fixed value of \(\log(g/\mathrm{cm\,s^{-2}})=4.0\) in our resimulation, which is a reasonable guess for the spectral types in our sample. The sources in question are marked as "fixed" in the last column of Table 1. There are a few templates (7 for Settl, 1 for NextGen, and 8 for Dusty; see Table 3) for which the cINN extinction MAP estimate has a non-physical negative value. Since most of these are only barely below zero, we allow these negative values to be accounted for during the resimulation.
Figure 5 shows an example result of the resimulation for the M4-type template star SO797 for all three spectral libraries with the top panels comparing the resimulation spectra to the input spectrum and bottom panels showing the corresponding residuals. Here the red curve indicates the resimulation result derived from the cINN MAP estimates, whereas the blue curve marks the Figure 5: Resimulation results for Class III star SO797. The columns show in order the results for the three different spectral libraries Settl, NextGen and Dusty. Top: Comparison of resimulated spectrum. The blue spectrum indicates the resimulation derived from the literature stellar parameters from Table 1. The red spectrum shows the corresponding resimulation based on the cINN MAP prediction. The respective input parameters for the resimulation are summarised in the table in the bottom right corner. The relative residuals \((I_{\mathrm{resim}}-I_{\mathrm{in}})/I_{\mathrm{in}}\) of the resimulated spectra compared to the input spectrum are shown in the bottom panels, respectively. literature-based outcome. In this particular example, the cINN recovers both \(T_{\rm eff}\) and \(\log(g)\) quite accurately for all three spectral libraries but overestimates \(A_{V}\) for this supposedly zero extinction template Class III source by 1.07, 1.48 and 0.73 mag based on Settl, NextGen and Dusty, respectively. Interestingly, however, we find that the resimulated spectrum based on the cINN MAP prediction with the supposedly wrong \(A_{\rm V}\) matches the input spectrum better than the spectrum derived from the literature value in all three examples as e.g. attested by the smaller RMSE and better \(R^{2}\) score of \(2.7\times 10^{-5}\) and 0.98 compared to \(3.77\times 10^{-5}\) and 0.97 in the Settl case. 
Figure 15 in the Appendix shows another such example, where it is immediately apparent that the cINN-based resimulated spectrum matches the input observation much better than the literature-based solution, which evidently does not capture the slope of the observed spectrum correctly. Figures 6, 7 and Table 2 in the Appendix summarise the resimulation results across the entire Class III template sample, showing the median relative residuals against the wavelength, the distributions of RMSEs and \(R^{2}\) scores, and a table of all RMSEs and \(R^{2}\) scores, respectively. Note that the resimulation statistics vary between the libraries here. Given the lower effective temperature limit of the libraries (i.e. 2600 K), 2 of the 36 templates, namely TWA26 and DENIS1245, can a priori not be resimulated with Settl and NextGen. For Dusty, the literature sample is even smaller, with only 20 out of 36 templates, due to the low upper temperature limit of 4000 K. For the resimulation of the MAP estimates we can use 31 templates with Settl-Net, 29 with NextGen-Net, and only 17 with Dusty-Net. For more details, we refer to Table 2. Note that for the Dusty resimulation there are actually 7 templates for which the \(\log(g)\) prediction is above the training set limit of 5. However, since the Dusty spectral library actually extends to \(\log(g/\mathrm{cm\,s^{-2}})=5.5\), we decide to run the resimulation for these 7 templates anyway, in particular since for most of them the \(\log(g)\) prediction is only barely above 5 (see Table 3). Figure 6 shows that for all three libraries our observation from Fig. 5, namely that the resimulated spectrum based on the cINN prediction fits the input spectrum better than the literature-based resimulation, holds on average across the entire template sample. The distributions of the RMSEs and \(R^{2}\) scores of the resimulated spectra in Fig.
7 further confirm this, as the cINN-based resimulated spectra tend towards smaller RMSEs and slightly better \(R^{2}\) scores compared to the literature-based spectra for all three spectral libraries. Examining more closely the 7 templates for which the Dusty-based cINN prediction of \(\log(g)\) exceeds the learned upper limit of 5 (i.e. the cINN extrapolated), the resimulation results show that even when the cINN extrapolates, the set of predicted parameters corresponds to a spectrum which matches the input observation quite well and, in particular, at least as well as the respective spectrum resimulated from the literature values, as indicated by the \(R^{2}\) scores (see Table 2 and Fig. 2 for an example). This result shows that the cINN prediction is actually fairly robust even in the event of slight extrapolation. Comparing our chosen resimulation accuracy measures to the spectral types of the Class III templates in Fig. 8, we find that the RMSEs exhibit an increasing trend towards the M-types for all three spectral libraries. For the \(R^{2}\) scores, we find a notable dip in the goodness of fit for the intermediate spectral types, that is between M2 and K3, in both the resimulation of the literature and cINN-based values for Settl and NextGen. The beginning of this dip can also be seen in the Dusty-based results up to the temperature limit of this library at the K7 type. Interestingly, Fig. 4 shows that in this spectral type range the discrepancy between the cINN predictions and the literature stellar properties is relatively low, meaning that both the cINN and the literature values correspond to an equally sub-optimal fit to the observed spectra. Overall, the resimulation test shows that the cINN approach predicts parameters for the real Class III template spectra that correspond to spectra which not only fit the input observations very well (as shown by the good \(R^{2}\) scores in Fig.
7 and Table 2), but also match better than the spectra resimulated from the literature values in most instances.

Figure 6: Comparison of the median relative error of the resimulated spectra for the Class III template stars between the resimulations based on the literature stellar parameters (blue, see Table 1) and the cINN MAP predictions (red). From top to bottom, the panels show the corresponding results for the three tested spectral libraries Settl, NextGen and Dusty.

## 6 Feature importance

### Importance calculation

In this section, we evaluate which parts of the spectra the cINN prediction relies the most upon. To do so we measure the so-called _permutation feature importance_, an approach first described by Breiman (2001) for random forest models and later generalised by Fisher et al. (2018). In this study we implement the Fisher et al. (2018) algorithm as described in Molnar (2022), operating as follows: First, we compute the error on the original held-out test set \[e_{\text{orig}}=L\left(\mathbf{X},g(\mathbf{Y})\right), \tag{10}\] where \(g\) represents the inverse translation (\(\mathbf{x}\leftarrow\mathbf{y}\)) of the trained cINN, \(\mathbf{X}\) denotes the matrix of the target parameters of the test set (\(n_{\text{test}}\times n_{\text{parameters}}\)), \(\mathbf{Y}\) is the \(n_{\text{test}}\times n_{\text{features}}\) feature matrix of the test set and \(L\) represents a loss measure. In our case, \(L\) is the RMSE of the MAP estimates.
Next, for each feature \(j\in\{1,\dots,n_{\text{features}}\}\), we generate a feature matrix \(\mathbf{Y}_{\text{perm},j}\) via random permutation of the \(j\)-th column in order to break the association between feature \(j\) and the target parameters \(\mathbf{x}\), estimate the prediction error \(e_{\text{perm},j}=L\left(\mathbf{X},g\left(\mathbf{Y}_{\text{perm},j}\right)\right)\) based on the permuted data set, and compute the feature importance of feature \(j\) as the quotient \[\text{FI}_{j}=\frac{e_{\text{perm},j}}{e_{\text{orig}}}. \tag{11}\] The larger \(\text{FI}_{j}\) is, the worse the model prediction becomes if feature \(j\) is scrambled via permutation, that is the more important feature \(j\) is to the model's decision making. The closer \(\text{FI}_{j}\) is to 1, on the other hand, the less feature \(j\) affects the predictive performance and, thus, the less relevant it is to the model's reasoning. In our particular case, the feature space is very high dimensional with 2930 spectral bins per spectrum. Consequently, computing the individual per-bin feature importance is rather computationally expensive, as it requires generating the posteriors and determining the MAP estimates for each of the 2930 bins. Although the computational cost alone is not prohibitive in this case given the cINN's great efficiency, we still opt for a slightly different approach, because the spectral bins themselves are also not necessarily independent of each other. Instead of using the individual bins, we group them together into combined features, for which we then estimate the importance. In practice, this means that we permute multiple columns at once (though each column with its own permutation seed) corresponding to the spectral bins in a given group. For the setup in this study in particular, we decide to evaluate the feature importance across the wavelength range using groups of 10 bins, which corresponds to a spectral width of 12.5 Å.
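The grouped permutation scheme described above (Eqs. 10 and 11, with overlapping groups averaged per bin) can be sketched as follows. The `predict` callable stands in for the full cINN posterior-plus-MAP pipeline, which is not reproduced here, so all names and defaults are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def grouped_permutation_importance(predict, X_true, Y, group_size=10, step=5):
    """Grouped permutation feature importance in the spirit of Eq. (11).

    predict : callable mapping a feature matrix Y to parameter estimates
              (a stand-in for the cINN MAP pipeline)
    X_true  : (n_test, n_params) ground-truth parameters
    Y       : (n_test, n_features) spectra
    Groups of `group_size` bins start every `step` bins, so consecutive
    groups overlap; per-bin importances of overlapping groups are averaged.
    """
    def rmse(a, b):
        return np.sqrt(np.mean((a - b) ** 2))

    e_orig = rmse(X_true, predict(Y))
    n_feat = Y.shape[1]
    fi_sum = np.zeros(n_feat)
    fi_cnt = np.zeros(n_feat)
    for start in range(0, n_feat - group_size + 1, step):
        cols = np.arange(start, start + group_size)
        Y_perm = Y.copy()
        for j in cols:  # each column gets its own permutation
            Y_perm[:, j] = rng.permutation(Y_perm[:, j])
        e_perm = rmse(X_true, predict(Y_perm))
        fi_sum[cols] += e_perm / e_orig
        fi_cnt[cols] += 1
    return fi_sum / np.maximum(fi_cnt, 1)
```

Bins inside a scrambled group that carries real information yield importance values well above 1, while irrelevant groups stay near 1, mirroring the interpretation of Eq. (11).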
We set all groups to overlap by 5 bins (i.e. 6.25 Å) with the preceding and following groups. We average the feature importance for overlapping bins.

Figure 7: Average error for the resimulation spectra for the Class III template stars. Top: Histograms of the RMSEs for the resimulation on the Class III template spectra for the three different spectral libraries. Bottom: Histograms of the corresponding \(R^{2}\) scores for the resimulated spectra.

### Important features for M-, K-, and G-type stars

We draw three groups from the test set according to the temperature of the test model: an M-type (2600-3850 K) group, a K-type (3900-5110 K) group, and a G-type (5150-6000 K) group, and evaluate the feature importance across the wavelength for each group per network. In the case of Dusty-Net, we only evaluate the M-type group because the maximum temperature of the Dusty database is 4000 K. Figure 9 presents the feature importance of Settl-Net for M-type stars. To compare the important features with the locations of stellar parameter tracers existing in the real spectrum, we plot the median flux of M-type template stars in the first row, and indicate the locations of several tracers of stellar parameters (Table 5): Na i doublet 5890, 5896 Å (\(T_{\rm eff}\) and \(\log g\), Allen & Strom 1995), Ca i 6122, 6162, 6439 Å (\(\log g\), Allen & Strom 1995), Ba ii, Fe i, and Ca i blend 6497 Å (\(T_{\rm eff}\) and \(\log g\), Allen & Strom 1995; Herczeg & Hillenbrand 2014), H\(\alpha\) 6563 Å (\(T_{\rm eff}\), Luhman et al. 2003), K i doublet 7665, 7699 Å (\(T_{\rm eff}\) and \(\log g\), Manara et al. 2013, 2017), Na i doublet 8183, 8195 Å (\(T_{\rm eff}\) and \(\log g\), Kirkpatrick et al. 1991; Allen & Strom 1995; Riddick et al. 2007), Ca ii IR triplet 8498, 8542, 8662 Å (\(T_{\rm eff}\), Kirkpatrick et al. 1991; Allen & Strom 1995; Luhman et al. 2003), Mg i 8807 Å (\(T_{\rm eff}\), Manara et al.
2013; Herczeg & Hillenbrand 2014), hydrogen Paschen series (\(A_{\rm V}\), Edwards et al. 2013), CaH 6750-7050 Å (\(T_{\rm eff}\) and \(\log g\), Kirkpatrick et al. 1993; Allen & Strom 1995), TiO 6080-6390, 7053-7270 Å (\(T_{\rm eff}\), Kirkpatrick et al. 1991; Henry et al. 1994; Jeffries et al. 2007), TiO 7550-7570, 7920-8000 Å (\(T_{\rm eff}\), Allen & Strom 1995; Riddick et al. 2007; Manara et al. 2013), and R1 8015-8130 Å (\(T_{\rm eff}\), Riddick et al. 2007). To evaluate whether these observational tracers act as important features in our networks, we check whether the feature importance value corresponding to each tracer's wavelength is larger than a fiducial value. We use the median plus one standard deviation over the entire wavelength range as the fiducial value to determine an important tracer. For tracers with multiple lines or molecular bands, we average the feature importance over each line or over the wavelength range. In Table 5, we mark tracers whose average importance is larger than the fiducial value. We also indicate which parameters these lines and bands trace in real observations. Figure 9 shows that the Na i doublet 8183, 8195 Å lines are the most important feature for Settl-Net to predict the stellar parameters of M-type stars. In the case of extinction, there are two wide peaks near 7500 Å, where the redder peak overlaps with the VO molecular band. However, Na i has a similarly large importance value. In the case of temperature and gravity, the K i doublet 7665, 7699 Å lines play the second most important role, and for extinction, H\(\alpha\) does. The VO and R1 molecular absorption bands also act as important features to determine the temperature and extinction. We present the feature importance evaluated for NextGen-Net and Dusty-Net in Fig. 7.
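The fiducial-threshold test described above (a tracer counts as important if its mean importance exceeds the median plus one standard deviation of the importance curve) might be implemented as follows; the function and tracer names are illustrative, not taken from the paper's code:

```python
import numpy as np

def flag_important_tracers(wavelength, importance, tracers):
    """Mark tracers whose mean importance exceeds median + 1 std.

    wavelength : array of bin wavelengths (same length as `importance`)
    importance : per-bin feature importance values
    tracers    : dict mapping a tracer name to a list of line wavelengths
                 (floats) and/or (lo, hi) molecular band ranges
    """
    wavelength = np.asarray(wavelength, dtype=float)
    importance = np.asarray(importance, dtype=float)
    fiducial = np.median(importance) + np.std(importance)
    flagged = {}
    for name, features in tracers.items():
        vals = []
        for f in features:
            if np.ndim(f) == 0:  # single line: take the nearest spectral bin
                vals.append(importance[np.argmin(np.abs(wavelength - f))])
            else:                # molecular band: average over the range
                lo, hi = f
                mask = (wavelength >= lo) & (wavelength <= hi)
                vals.append(importance[mask].mean())
        flagged[name] = np.mean(vals) > fiducial
    return flagged
```

The returned dictionary corresponds to the check marks in Table 5: `True` for tracers whose averaged importance clears the fiducial threshold.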
The fact that Na i, K i, and H\(\alpha\) are important features for M-type stars is common to all three networks.

Figure 8: Comparison of the resimulation accuracy measures (RMSE in the top row, \(R^{2}\) score in the bottom) for the three spectral libraries to the spectral type of the Class III templates. In all panels, the dotted red line indicates the results for the resimulation based on the literature stellar properties, while the black line shows the cINN-based outcomes.

However, for NextGen-Net, there is a large bump at 7500 Å in the case of temperature. Overall, the results of NextGen-Net are spikier than those of the other two networks. In the case of Dusty-Net, the importance value of the Na i doublet 5890, 5896 Å (Na i D) is relatively large compared to the other networks, and there is a very wide bump around the Na i doublet 8183, 8195 Å. Given the fact that extinction affects the overall shape of the spectrum, it is interesting that Settl-Net relies heavily on a few particular lines.

Figure 9: Feature importance evaluation for M-type synthetic models in the test set using Settl-Net. We present the median flux of M-type Class III template stars in the first row. The grey area indicates the interquartile range between the 25% and 75% quantiles. The other three rows show the feature importance across the wavelength for each stellar parameter. Vertical lines and shades indicate the location of typical tracers of stellar parameters listed in Table 5.

Broad bumps exist in the red part of the spectrum, but there are particularly important lines and areas such as the Na i and H\(\alpha\) lines and the region near the VO bands. The result of NextGen-Net is similar to Settl-Net but shows a slightly spikier trend with wider peaks. Dusty-Net shows a wavier shape across the entire wavelength range compared to the others.
Next, in the case of K-type stars, the results of Settl-Net and NextGen-Net are similar to each other, unlike the case of M-type stars, so we only present the result of Settl-Net in this paper (left panels in Fig. 10). Compared to the results of M-type stars, it is noticeable that the important features differ for each parameter. In the case of temperature and extinction, the overall shapes are similar, with the H\(\alpha\) line being the most important feature. The Na i doublet 8183, 8195 Å is no longer so important for determining temperature and extinction for K-type stars. In addition, the Na i D lines and the hydrogen Paschen series have relatively high importance values. On the other hand, in the case of surface gravity, the Na i doublet 8183, 8195 Å lines still play the most important role. The importance of Na i D for gravity becomes noticeable in K-type stars compared to M-type stars. Additionally, there are several peaks at K i and Mg i 8807 Å used as important features to determine gravity. The results for G-type stars (right panels in Fig. 10) are similar to those for K-type stars. H\(\alpha\) is still the most important feature for temperature and extinction, and there are several peaks at the Paschen series as well. For gravity, Na i D becomes more important in G-type stars and has an importance value comparable to that of the Na i doublet 8183, 8195 Å. These sodium lines are the most important features for determining gravity. On the other hand, the importance of the K i lines decreases in G-type stars compared to K-type stars. These results show that the features that our networks rely on to determine parameters vary depending on the input object. In particular, when changing from M- to K-type, the important features change noticeably. For example, the Na i doublet 8183, 8195 Å lines are essential features for the networks to understand M-type stars, sensitive to all three stellar parameters, but for earlier-type stars (K- and G-types), they are important only for determining gravity.
Similarly, the K i doublet lines are gravity-sensitive features for late-type stars, but they are less essential for earlier types. The Na i doublet 5890, 5896 Å lines, on the other hand, are more important for hot stars than for cold stars in determining gravity. Please note that the feature-importance tests presented in this section indicate the features that affect the network's judgement, which is based on the Phoenix models. Some of the important features (those essential for the network) behave very similarly to our observational knowledge, but some do not. Above all, the behaviour of the Na i doublet 8183, 8195 Å lines in the feature importance test agrees well with our knowledge. The Na i doublet, tracing the gravity (Riddick et al., 2007; Herczeg & Hillenbrand, 2014; Manara et al., 2017) and the temperature of late-type stars (Kirkpatrick et al., 1991; Allen & Strom, 1995; Riddick et al., 2007), is likewise essential for the networks to determine the gravity and the stellar parameters of late-type stars. Based on Table 5, we find that the R1 8015-8130 Å band, the K i doublet 7665, 7699 Å, and the Ba ii, Fe i, and Ca i blend 6497 Å also behave consistently with our knowledge. On the other hand, although we know that the Ca ii IR triplet 8498, 8542, 8662 Å and Mg i 8807 Å trace the temperature (Kirkpatrick et al., 1991; Allen & Strom, 1995; Luhman et al., 2003; Manara et al., 2013; Herczeg & Hillenbrand, 2014), the networks do not rely much on these lines to estimate the temperature.

Figure 10: Feature importance evaluation for K-type synthetic models (left) and for G-type synthetic models (right) in the test set using Settl-Net. The panels in the first row show the median flux of K-type and G-type Class III template stars, respectively. Lines and shades are the same as Fig. 9.
The feature-importance results for extinction show the interesting result that a few features are particularly influential, even though extinction affects the overall shape of the spectrum rather than particular lines. One possible cause is the degeneracy between temperature and extinction. In our results, the features influential in determining the temperature tend to have high importance for extinction as well (e.g. the Na i doublet 8183, 8195 Å, the VO band, and H\(\alpha\)). Due to the degeneracy between the two parameters, an over- or underestimation of temperature can be compensated by an over- or underestimation of extinction. So, if the features important for temperature are scrambled, this can also affect the determination of extinction. Another possible cause is that the network determines extinction based on correlations between multiple features. For example, if the network relies on the ratios of several features to H\(\alpha\), then H\(\alpha\) may have relatively higher importance than the others because scrambling H\(\alpha\) affects all these ratios. The feature importance only shows how much the error increases when a certain feature is scrambled; therefore, it is not easy to pinpoint the reasons for the error increment. Compared to the spectra of the template stars, however, it is obvious that the cINN captures important information at the points where absorption or emission features exist. There are many features used to predict parameters besides the major features indicated in the figures or in the table, but the important point is that the most influential features are the same as the tracers we already know. This confirms that even though we do not know exactly how cINNs learn the hidden rules from the training data, what the cINNs learned is very close to the physical knowledge we have.
## 7 Simulation gap and the best network

In sections 5.1.1 and 5.2.1, we showed that our cINNs predict stellar parameters perfectly for the synthetic models and that, for the template stars, the network predictions are in good agreement with the literature values within a 5 to 10 per cent error. The difference between literature values and network predictions varies slightly depending on the characteristics of the template stars. In sections 5.1.2 and 5.2.2, we confirmed that resimulating the spectrum based on the network prediction restores the original input spectrum well. This means that the network successfully finds the most suitable model that satisfies the given observational data, as the network is designed to do. In other words, the very good resimulation results indicate that the cINNs provide us with the best results within the physics they have learned. Interestingly, the resimulated spectrum based on the network prediction is closer to the original input spectrum than the resimulated spectrum based on the literature values for the template stars (see Fig. 5 and Table A.1), despite the discrepancy between the network prediction and the literature value. This can be attributed to one of the following two causes. One is a simulation gap, i.e. a gap between the physics within the training data (the Phoenix atmosphere models) and the physics of the real world. The other is misclassification, meaning that the literature value used as a reference in this paper is inaccurate. In the former case, no matter how perfectly trained the network is in terms of machine learning, it encounters inherent limitations. The simulation gap can be reduced if we use better training data. The three Phoenix libraries used in this paper reflect much of the important physics and many characteristics of stellar atmospheres but, of course, do not perfectly reflect reality.
Therefore, we suspect that it is because of the simulation gap that the parameter predictions differ from the literature values even though the resimulation results are almost perfect. In this section, we will introduce how we can quantify the simulation gap using the trained cINN and determine how large the gap is between the Phoenix models and reality. Finally, we will draw comprehensive conclusions about the performance and usage of our cINNs. \begin{table} \begin{tabular}{l l l l l l l l l l l} \hline \hline & & \multicolumn{3}{c}{M-type} & \multicolumn{3}{c}{K-type} & \multicolumn{3}{c}{G-type} \\ \cline{3-11} Tracers & used in observations for & \(T_{\mathrm{eff}}\) & log(\(g\)) & \(A_{\mathrm{V}}\) & \(T_{\mathrm{eff}}\) & log(\(g\)) & \(A_{\mathrm{V}}\) & \(T_{\mathrm{eff}}\) & log(\(g\)) & \(A_{\mathrm{V}}\) \\ \hline Na i doublet 5890, 5896 Å & \(T_{\mathrm{eff}}\), log(\(g\)) & - & - & - & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ TiO 6080–6390, 7053–7270 Å & \(T_{\mathrm{eff}}\) (M- and late K-type) & - & - & - & - & - & - & - & - & - \\ Ca i 6122, 6162, 6439 Å & log(\(g\)) & - & - & - & - & - & - & - & - & - \\ Ba ii, Fe i, and Ca i blend 6497 Å & \(T_{\mathrm{eff}}\), log(\(g\)) & - & - & - & ✓ & - & ✓ & ✓ & - & ✓ \\ H\(\alpha\) 6563 Å & \(T_{\mathrm{eff}}\) (early type) & - & - & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ CaH 6750–7050 Å & \(T_{\mathrm{eff}}\) (M-type), log(\(g\)) & - & - & - & - & - & - & - & - & - \\ VO 7550–7570, 7920–8000 Å & \(T_{\mathrm{eff}}\) (M-type) & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & - & ✓ \\ K i doublet 7665, 7699 Å & \(T_{\mathrm{eff}}\), log(\(g\)) & ✓ & ✓ & ✓ & - & - & - & - & - & - \\ R1 8015–8130 Å & \(T_{\mathrm{eff}}\) (M-type) & ✓ & ✓ & ✓ & - & - & - & ✓ & ✓ & - \\ Na i doublet 8183, 8195 Å & \(T_{\mathrm{eff}}\) (M-type), log(\(g\)) & ✓ & ✓ & ✓ & ✓ & ✓ & - & - & ✓ & ✓ \\ hydrogen Paschen series & \(A_{\mathrm{V}}\) & - & - & - & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ Ca ii IR triplet 8498, 8542, 8662 Å & \(T_{\mathrm{eff}}\) (early type) & ✓ & ✓ & - & ✓ & ✓ & - & 
✓ & ✓ & ✓ \\ Mg i 8807 Å & \(T_{\mathrm{eff}}\) & - & - & - & - & ✓ & - & - & ✓ & - \\ \hline \hline \end{tabular} * **Notes.** For tracers with multiple lines (e.g. doublets) or molecular bands, we average the feature importance values. The results are based on the feature importance evaluation of Settl-Net (Figs. 9 and 10). \end{table} Table 5: We mark tracers whose feature importance values are larger than the fiducial value of median plus 1 standard deviation, meaning that marked tracers are significantly important features to determine each stellar parameter.

### Quantifying simulation gap

As explained in section 2.1, the cINN consists of the main network that connects parameters (\(\mathbf{x}\)) and latent variables (\(\mathbf{z}\)) and a conditioning network (\(h\)) that transforms the input observation (\(\mathbf{y}\)) into a useful representation (i.e. the condition, \(\mathbf{c}\)). Both are trained together, and the conditioning network in this paper compresses the 2930 features (\(y_{1},\ldots,y_{2930}\)) contained in one spectrum into 256 conditions (\(c_{1},\ldots,c_{256}\)). If the condition of the real observational data passed through the conditioning network (\(\mathbf{c}_{\text{obs}}\)) follows the same probability distribution as the condition of the training data (\(\mathbf{c}_{\text{train}}\)), this means there is no simulation gap, because the conditioning network extracts only the important features from the spectrum. However, unlike the latent variables, which are set up to follow a prescribed distribution (i.e. a standard normal distribution), the distribution of the conditions does not follow a certain known distribution. Therefore, we build a network (\(k\)) that transforms the distribution of conditions (\(p(\mathbf{c})\)) into a prescribed probability distribution. The \(k\) network, based on the cINN architecture, is described as \(k(\mathbf{c})=\mathbf{s}\), and the output \(\mathbf{s}\) is trained to follow a standard normal distribution. By definition of the cINN architecture, the dimensions of \(\mathbf{c}\) and \(\mathbf{s}\) are the same.
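One way to compare the resulting distributions of transformed conditions for the training data and for a set of observed spectra is a histogram-based \(R^{2}\) score, as done in the next subsection. A minimal sketch, where the binning choices are our own illustrative assumptions:

```python
import numpy as np

def distribution_r2(s_train, s_obs, bins=50):
    """Histogram-based R^2 between two sets of transformed conditions.

    s_train, s_obs : samples of the transformed condition components
    (all components pooled into one flat distribution, as in the text).
    A score close to 1 indicates a small simulation gap; the number of
    bins and the common range are illustrative choices.
    """
    s_train = np.ravel(s_train)
    s_obs = np.ravel(s_obs)
    lo = min(s_train.min(), s_obs.min())
    hi = max(s_train.max(), s_obs.max())
    p_train, _ = np.histogram(s_train, bins=bins, range=(lo, hi), density=True)
    p_obs, _ = np.histogram(s_obs, bins=bins, range=(lo, hi), density=True)
    ss_res = np.sum((p_train - p_obs) ** 2)
    ss_tot = np.sum((p_train - p_train.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

Two samples drawn from the same distribution yield a score near 1, while a shifted or reshaped observed distribution drives the score down, mirroring the behaviour of the per-library scores reported below.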
Using the conditioning network \(h\) and the transformation network \(k\), we check the simulation gap between the Phoenix models and the template stars by comparing the distribution of the transformed condition of the template stars, \(k(h(\textbf{y}_{\text{pl}}))=\textbf{s}_{\text{pl}}\), with the distribution of the transformed condition of the training data, \(\textbf{s}_{\text{train}}\), which follows a known distribution. We evaluate the simulation gap based on the \(R^{2}\) score between the two probability distributions, \(p(\textbf{s}_{\text{train}})\) and \(p(\textbf{s}_{\text{pl}})\). The bigger the \(R^{2}\) value, the smaller the simulation gap.

### Simulation gap

We trained a transformation network (\(k\)) for each cINN (Settl-Net, NextGen-Net, and Dusty-Net) and compare the probability distributions of the transformed conditions of the training data and the template stars. Figure 11 shows that the distribution of the training data (blue line) follows the prescribed standard normal distribution (pink line) well, but the distribution of the template stars (black) differs from that of the training data. There are 256 condition components for each star, but we present all these components in one distribution. The \(R^{2}\) scores for all template stars are 0.805, 0.709, and 0.425 for Settl, NextGen, and Dusty, respectively. The Dusty model seems to have the widest simulation gap, but we need to consider that Dusty-Net has a narrower training range than the parameter space of the template stars. As the performance of the cINN varies depending on the temperature of the template star, we divided the stars into three groups based on the prediction performance of the networks shown in section 5.2.1 (see Figs. 3 and 4). For example, Settl-Net and NextGen-Net predicted parameters in good agreement with the literature values, especially for stars with temperatures between \(\sim\)3000 K and \(\sim\)5000 K.
So we divided the stars into three groups based on 3000 K and 5000 K for Settl-Net and NextGen-Net. In the case of Dusty-Net, due to the temperature upper limit of 4000 K for the Dusty training set, we divided the groups based on 3000 K and 4000 K. In the case of the Settl and NextGen libraries (Fig. 12), the earlier the spectral type, the smaller the gap, and Settl has a smaller gap than NextGen over the entire temperature range. While the simulation gap is small for hot stars above 3000 K, the gap is large for later-type stars below 3000 K. In the case of NextGen, in particular, the simulation gap is very large for stars below 3000 K. In the case of Dusty, the simulation gap for the coldest group (\(T<3000\) K) is also very large and comparable to that for hot stars (\(T>4000\) K), which lie outside the temperature range of the Dusty library. The large gap for the lowest temperature group (\(T<3000\) K) is an expected result, because perfectly implementing the atmosphere of late-type stars through a simulation is a much more difficult task than for earlier-type stars. For late-type stars, condensation of vapour is important, but the relevant physical processes are complex, making it very difficult to produce a good atmosphere model. Thus, these results demonstrate the inherent limitations of modelling low-temperature stars. These results show that the degree of the simulation gap varies with the characteristics of the star, just as the difference between the prediction of the cINN and the literature value varies, as shown in section 5.2.1.

Figure 11: Probability distributions of transformed conditions of the training data (blue) and template stars (black) for three networks. The gap between the blue and black lines means the gap between the Phoenix model and the template spectrum. The \(R^{2}\) value between the blue and black line and the number of template stars used are presented in the upper left corner of each panel.
Interestingly, both Settl and NextGen have the smallest simulation gaps for the early-type stars with temperatures above 5000 K. However, in Figs. 3 and 4, the difference between the MAP prediction and the literature value for this group is slightly larger than that of the intermediate temperature group (3000-5000 K). The smallest simulation gap (Fig. 12) and resimulation results better than those based on the literature values (Fig. A.4 and Table A.1) imply that the MAP estimates of our networks for early-type stars above 5000 K are sufficiently reliable. Therefore, we suggest that the parameter estimations by our networks may be more accurate than the literature values for early-type stars above 5000 K.

### Best network

It is clear that the simulation gap is large for late-type stars. Interestingly, however, our cINNs nevertheless predict the temperature and surface gravity well. First of all, all three networks had poor predictions of extinction for late-type stars below 3000 K. It is therefore very difficult for the network to estimate extinction accurately for stars in this temperature range, and the estimated extinction is not very reliable compared to the other two stellar parameters. However, Settl-Net, NextGen-Net, and Dusty-Net estimated the temperature accurately with maximum errors of less than 10, 5, and 15 per cent, respectively, despite the large simulation gap. This is a sufficiently accurate prediction considering the temperature interval of one subclass of stellar spectral type (see Fig. 3). Using the combined error in Fig. 4, we demonstrated that Dusty-Net and Settl-Net predict surface gravity and temperature accurately within 5 per cent for late-type stars as well as early-type stars, despite the simulation gap of late-type stars. This shows that our networks are still applicable to low-temperature stars despite the limitations of the training data.
In the case of NextGen-Net, its performance was relatively poor for low-temperature stars compared to the other two networks, which is explained by the large simulation gap shown in Fig. 12.

Figure 12: Probability distributions of transformed conditions of the training data and template stars. Each column represents three networks (Settl-Net, NextGen-Net, and Dusty-Net) and each row represents the group of template stars depending on their temperature (\(T_{\rm eff}^{\rm h}\)). Colour codes are the same as in Fig. 11.

On the other hand, for earlier-type stars with relatively small simulation gaps, the network performs more reliably. Except for one or two outliers, both Settl-Net and NextGen-Net accurately predict temperature and gravity within a maximum error of 5 to 10 per cent. NextGen-Net tends to estimate extinction and temperature slightly higher than Settl-Net. This suggests that NextGen-Net adopts a degenerate solution that satisfies the same input spectrum by slightly increasing both extinction and temperature. Overall, Settl-Net, with the smallest simulation gap, shows the best performance among the three networks. We conclude that Settl-Net is the best network considering both the parameter prediction performance and the simulation gap. For low-temperature stars (e.g. M-type stars), Dusty-Net also shows performance comparable to Settl-Net. However, given that the stellar parameter coverage (i.e. temperature and gravity) of Settl-Net is wider than that of Dusty-Net, Settl-Net is more versatile and usable. Therefore, based on our overall results, we recommend using Settl-Net when applying the network to real observations. The only limitation to be cautious of is the estimation of extinction. Regardless of the spectral type of the star, the cINN estimates temperature and gravity accurately, but one should be cautious when using the estimated extinction if the estimated temperature is below 3000 K.
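The parameter predictions discussed throughout Sects. 5-7 are MAP estimates, i.e. the peaks of the cINN posterior distributions. For a 1D posterior represented by samples, such a peak can be located with a kernel density estimate; a minimal sketch, where the bandwidth rule and grid size are our assumptions rather than the paper's exact peak-finding recipe:

```python
import numpy as np

def map_estimate(samples, bandwidth=None, gridsize=1024):
    """Locate the posterior peak (MAP) of 1D samples via a Gaussian KDE
    evaluated on a regular grid. The default bandwidth follows Scott's
    rule; both choices are illustrative assumptions."""
    samples = np.asarray(samples, dtype=float)
    if bandwidth is None:
        bandwidth = samples.std() * len(samples) ** (-1.0 / 5.0)
    grid = np.linspace(samples.min(), samples.max(), gridsize)
    # Sum of Gaussian kernels centred on each sample (unnormalised density)
    dens = np.exp(-0.5 * ((grid[:, None] - samples[None, :]) / bandwidth) ** 2).sum(axis=1)
    return grid[np.argmax(dens)]
```

Applied per parameter, this returns a single point estimate from the posterior samples that the invertible network produces for a given input spectrum.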
## 8 Summary

In this paper, we introduce a novel tool to estimate stellar parameters from the optical spectrum of an individual young, low-mass star. The cINN is a deep learning architecture specialised in solving degenerate inverse problems. Degenerate here means that, due to the inevitable information loss during the forward process from the physical system to the observation, different physical systems are mapped onto similar or almost identical observations. Many of the major tasks in astrophysics are degenerate inverse problems like estimating physical properties from observations. In this work, we develop a cINN for young low-mass stars to efficiently diagnose their optical spectra and estimate stellar parameters such as effective temperature, surface gravity, and extinction. The cINN adopts a supervised learning approach, meaning that the network is trained on a database consisting of numerous well-labelled data sets of physical parameters and observations. However, it is difficult to collect a sufficient number of well-interpreted real observations. Therefore, we use synthetic observations instead to generate enough training data. In this work, we utilise three Phoenix stellar atmosphere libraries (i.e. Settl, NextGen, and Dusty) to produce the databases for training and evaluation of the network. Interpolating the spectra in the temperature-gravity space and adding the extinction effect to the synthetic spectra, we produce a database for each Phoenix library consisting of 65,536 synthetic models. To produce the databases, we randomly sampled the three parameters from the given parameter ranges. The Settl and NextGen databases cover the temperature range of 2600-7000 K and the \(\log(g/\mathrm{cm\,s^{-2}})\) range of 2.5-5. The Dusty database covers temperatures of 2600-4000 K and \(\log(g/\mathrm{cm\,s^{-2}})\) of 3-5. All three databases have extinction values within 0-10 mag.
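The database-construction step of adding extinction to the interpolated synthetic spectra amounts to multiplying the flux by \(10^{-0.4A_{\lambda}}\). A minimal sketch, where the power-law extinction curve is a crude placeholder for whichever reddening law was actually used to build the training databases:

```python
import numpy as np

def redden(wavelength_aa, flux, a_v):
    """Apply extinction to a spectrum: F_ext = F * 10**(-0.4 * A_lambda).

    The wavelength dependence A_lambda/A_V = (lambda / 5500 A)**-1.3 is a
    crude power-law placeholder, not the extinction curve used for the
    paper's databases; wavelengths are in Angstrom.
    """
    wavelength_aa = np.asarray(wavelength_aa, dtype=float)
    a_lambda = a_v * (wavelength_aa / 5500.0) ** -1.3
    return np.asarray(flux, dtype=float) * 10.0 ** (-0.4 * a_lambda)
```

By construction, blue wavelengths are dimmed more strongly than red ones, and \(A_{\rm V}=0\) leaves the spectrum unchanged, matching the role extinction plays as the third sampled parameter of the databases.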
Then, we build and train a cINN on each database, but only use 80% of the synthetic models in the database to train the network and reserve the rest for evaluation. In this paper, we present three cINNs trained on the different Phoenix atmosphere models: Settl-Net, NextGen-Net, and Dusty-Net. We validated the performance of our cINNs with various methods. Our main results are the following:

1. All three networks provide perfect predictions on the test set with an RMSE of less than 0.01 dex for all three parameters, demonstrating that the cINNs are well-trained. Additionally, we resimulate the spectrum using the parameters estimated by the network with our interpolation method and compare it with the original input spectrum. The resimulated spectra perfectly match the input spectra of the test models with an RMSE of about \(10^{-7}\). These results prove that our three cINNs perfectly learned the hidden rules in their respective training data.
2. To test the performance on real observational data, we analyse 36 Class III template stars, well-interpreted by Manara et al. (2013, 2017); Stelzer et al. (2013), with our cINNs. We demonstrate that the stellar parameters estimated by our cINNs are in good agreement with the literature values.
3. Each network has a slightly different error depending on the temperature of the given star. Settl-Net works especially well for M6.5-K1.0 (\(2935-5000\) K) stars and NextGen-Net works well for M4.5-K1.0 (\(3200-5000\) K) stars. Dusty-Net works well for M5.5-M0.0 (\(3060-4000\) K) stars. Given that the temperature upper limit of the Dusty training data is 4000 K, Dusty-Net works well for stars within its training range. For stars in the other temperature ranges, the three networks perform well with an error of less than 10 per cent.
4. The most difficult parameter for the cINNs to predict is the extinction of cold stars with temperatures less than 3200 K. All three networks tend to estimate a higher extinction than the literature value for cold stars.
However, the cINNs estimate extinction well for hot stars with temperatures above 3200 K.
5. We resimulate spectra based on the cINN estimates and on the literature values and compare them with the original input spectrum. Interestingly, most of the resimulated spectra based on the cINN estimates are closer to the input spectra than the resimulated spectra derived from the literature values. This implies that our cINNs understand the physics in each Phoenix library well and are able to find the best-fitting Phoenix model (i.e. parameters) for a given observation.
6. The finding that the resimulations are nearly perfect even though the predictions of the network differ slightly from the literature values can be explained by a gap between the Phoenix models and reality, the so-called simulation gap. We quantify the simulation gap between each library and the template stars using the conditioning networks included in our cINNs. We confirm that the simulation gaps are relatively large for cold stars below 3000 K, where the cINNs have difficulty estimating extinction, and that the gap is small for hot stars, where the cINNs predict the parameters well.
7. The overall results imply that, although there is an obvious gap between the Phoenix models and reality, especially for cold stars below 3000 K, our networks can nonetheless provide reliable predictions for all stars within a 5-10 per cent error, especially for temperature and gravity. The extinction estimated by the cINN is also reliable unless the estimated temperature is less than 3200 K.
8. We investigate which parts of the spectrum the cINN relies upon most to predict the stellar parameters and compare the important features with typically used stellar parameter tracers. We find that the cINN relies on different features depending on the physical parameter and on the input observations (e.g. spectral types). We confirm that the major features are equivalent to typically used tracers such as H\(\alpha\) 6563 Å and the Na i doublet 8183, 8195 Å.
Our overall results show that our cINNs deliver performance reliable enough to be applied to real observational data. Among the three networks introduced in this paper, we recommend Settl-Net, trained on the Settl library, as the best network because of its remarkable performance and its versatility across the parameter space. ###### Acknowledgements. This work was partly supported by the European Union's Horizon 2020 research and innovation program and the European Research Council via the ERC Synergy Grant "EcoGAL" (project B853103), and the Marie Sklodowska-Curie grant DUSTBUSTERS (project No 8238238), by the Deutsche Forschungsgemeinschaft (DFG) via the Collaborative Research Center "The Milky Way System" (SFB 881 - funding ID 138713538 - subprojects A1, B1, B2 and B8), by the Heidelberg Cluster of Excellence (EXC 2181 - 390900948) "STRUCTURES", funded by the German Excellence Strategy, and by the German Ministry for Economic Affairs and Climate Action in project "MANIN" (funding ID 50002206). We also thank the Ministry of Science, Research and the Arts (MWK) of the State of Baden-Württemberg for computing resources provided through bwHPC and the DFG through grant INST 35/1134-1 FUGG, and for data storage at SDS@gld through grant INST 35/1314-1 FUGG.
2310.00897
Practical Radar Sensing Using Two Stage Neural Network for Denoising OTFS Signals
Our objective is to derive the range and velocity of multiple targets from the delay-Doppler domain for radar sensing using orthogonal time frequency space (OTFS) signaling. Noise contamination affects the performance of OTFS signals in real-world environments, making radar sensing challenging. This work introduces a two-stage approach to tackle this issue. In the first stage, we use a generative adversarial network to denoise the corrupted OTFS samples, significantly improving the data quality. Following this, the denoised signals are passed to a convolutional neural network model to predict the values of the velocities and ranges of multiple targets. The proposed two-stage approach can predict the range and velocity of multiple targets, even in very low signal-to-noise ratio scenarios, with high accuracy and outperforms existing methods.
Ashok S Kumar, Sheetal Kalyani
2023-10-02T04:29:04Z
http://arxiv.org/abs/2310.00897v2
# Practical Radar Sensing Using Two Stage Neural Network for Denoising OTFS Signals ###### Abstract Noise contamination affects the performance of orthogonal time frequency space (OTFS) signals in real-world environments, making radar sensing challenging. Our objective is to derive the range and velocity from the delay-Doppler (DD) domain for radar sensing by using OTFS signaling. This work introduces a two-stage approach to tackle this issue. In the first stage, we use a convolutional neural network (CNN) model to classify the noise levels as moderate or severe. Subsequently, if the noise level is severe, the OTFS samples are denoised using a generative adversarial network (GAN). The proposed approach achieves notable levels of accuracy in the classification of noisy signals and mean absolute error (MAE) for the entire system even in low signal-to-noise ratio (SNR) scenarios. OTFS, delay-Doppler domain, convolutional neural network, generative adversarial network. ## I Introduction Orthogonal time frequency space (OTFS) signaling has been identified as a promising signal waveform for fully using the capabilities of the integrated sensing and communication (ISAC) system [1, 2]. The OTFS signal modulates the data in the delay-Doppler (DD) domain. The target's range and velocity characteristics, which may be derived from the DD domain, are the essential parameters to be calculated in radar signal processing. A 2D correlation-based approach to evaluate the Doppler and delay indices for radar sensing has been studied in [3]. The advantages of using OTFS waveform for velocity and range estimates in radar sensing applications have been investigated in [4, 5] and [6]. A single target scenario has been considered in [4], in which the root mean square error (RMSE) performance of the range and velocity as a function of signal-to-noise ratio (SNR) up to -15 dB has been analyzed for radar target estimation. 
The work in [5] reports the estimation of range and velocity RMSE as a function of radar SNR in a multipath scenario. The work reported in [7] exploited three distinct sparse algorithms for estimating the range and velocity of a target using OTFS signaling. All the studies mentioned above examine the range and velocity RMSE of the target at fixed SNR levels. Unlike the existing state-of-the-art methodologies, we calculate the range and velocity RMSE of each target by using a two-stage noise reduction method for OTFS signals. For the first stage, a convolutional neural network (CNN) model is proposed for the classification of noise as severe or moderate [8]. The proposed CNN model effectively classifies the noise without incorporating any preprocessing steps for noise removal. In wireless communication systems, deep neural networks (DNNs) significantly improve data interpretation, leading to better signal quality and system performance [9, 10, 11]. GAN-based denoising models have several advantages over traditional denoising methods, including better performance, enhanced generalization capabilities, the ability to learn complex patterns in the data, and automation of the entire process [12, 13, 14]. These advantages are the driving force behind incorporating a GAN into the second stage of our proposed method for denoising. In summary, this letter presents a two-stage neural network model comprised of a CNN and a GAN, with the goal of estimating the range and velocity for radar target detection by means of OTFS signaling. In real-world radar applications, the severity of noise in the signal is unpredictable, which makes radar target detection difficult. The existing literature falls short of providing a system dealing with radar target estimation in extremely noisy conditions.
Simulation results show that our system significantly reduces the mean absolute error (MAE), range RMSE, and velocity RMSE of the target. The system can operate even in extremely noisy conditions ranging from 0 to -20 dB SNR, thus expanding the SNR range for radar target detection. ## II System Model In this section, we first describe the OTFS based system. ### _OTFS_ For each OTFS frame, \(N\) and \(M\) represent the number of time slots and the number of sub-carriers respectively. \(T\) represents the symbol duration and \(\delta f_{s}\) is the subcarrier frequency spacing. For a given \(\delta f_{s}\), the total bandwidth is \(B=M\delta f_{s}\) and the duration of one OTFS frame is given by \(NT\). The information bits are mapped to a symbol set in the DD domain. The information symbols are generally QAM symbols. The symbol corresponding to the \(l^{th}\) delay and \(k^{th}\) Doppler bin is \(A_{\mathrm{DD}}[k,l]\), for \(k=0,\ldots,N-1\) and \(l=0,\ldots,M-1\). The DD domain symbols are mapped to time-frequency (TF) domain symbols using the inverse symplectic finite Fourier transform (ISFFT) operation as \[A_{\mathrm{TF}}[n,m]=\frac{1}{\sqrt{NM}}\sum_{k=0}^{N-1}\sum_{l=0}^{M-1}A_{ \mathrm{DD}}[k,l]e^{j2\pi\left(\frac{nk}{N}-\frac{ml}{M}\right)} \tag{1}\] where \(n=0,\ldots,N-1\) and \(m=0,\ldots,M-1\). The TF symbols are translated to the time domain transmit signal \(x(t)\) by using the Heisenberg transform, \[x(t)=\sum_{n=0}^{N-1}\sum_{m=0}^{M-1}A_{\mathrm{TF}}[n,m]g_{tx}(t-nT)e^{j2\pi m \delta f_{s}(t-nT)} \tag{2}\] where \(g_{tx}\) is a pulse-shaping waveform. The time domain signal \(x(t)\) is passed through the linear time-varying channel, which has \(P\) targets in the DD domain. The \(p^{th}\) target has reflection coefficient \(h_{p}\), delay \(\tau_{p}\) with \(0<\tau_{p}<T\) and Doppler shift \(\nu_{p}\).
The complex base-band channel response \(h(\tau,\nu)\) in the DD domain can be expressed as \[h(\tau,\nu)=\sum_{p=0}^{P-1}h_{p}\delta\left(\tau-\tau_{p}\right)\delta\left( \nu-\nu_{p}\right) \tag{3}\] For integer delays and Dopplers, \(\tau_{p}=\frac{l_{p}}{M\delta f_{s}}\) and \(\nu_{p}=\frac{k_{p}}{NT}\), where \(l_{p}\) and \(k_{p}\) denote the corresponding delay and Doppler indices of the \(p^{th}\) target. The received signal \(r(t)\) is given by \[r(t)=\iint h(\tau,\nu)e^{j2\pi\nu(t-\tau)}x(t-\tau)d\tau d\nu+w(t) \tag{4}\] where \(w(t)\) denotes the additive white Gaussian noise (AWGN) process with one-sided power spectral density (PSD) \(N_{0}\). The received signal \(r(t)\) is converted back to the TF domain using the Wigner transform, \[B_{\mathrm{TF}}[n,m]=\int_{-\infty}^{\infty}r(t)g_{rx}^{*}(t-nT)e^{-j2\pi m \delta f_{s}(t-nT)}dt \tag{5}\] where \(g_{rx}(t)\) is the pulse-shaping filter at the receiver. The TF domain signals \(B_{\mathrm{TF}}[n,m]\) are then converted to DD domain symbols \(B_{\mathrm{DD}}[k,l]\) using the symplectic finite Fourier transform (SFFT), which is given by \[B_{\mathrm{DD}}[k,l]=\frac{1}{\sqrt{NM}}\sum_{n=0}^{N-1}\sum_{m=0}^{M-1}B_{ \mathrm{TF}}[n,m]e^{-j2\pi\left(\frac{nk}{N}-\frac{ml}{M}\right)} \tag{6}\] Since \(B_{\mathrm{DD}}\) contains information symbols, we are not able to identify the target areas of interest directly. Instead, a 2D correlation-based approach is used between \(B_{\mathrm{DD}}\) and \(A_{\mathrm{DD}}\) to obtain the delay and Doppler indices [3]. The matrix \(V\) contains information about the correlation between the transmitted and received signals at different delay and Doppler indices.
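Since (1) and (6) are separable 2D DFTs, the ISFFT/SFFT pair can be implemented with standard FFT routines. A minimal NumPy sketch (not from the paper; axis 0 is assumed to index the Doppler/time-slot dimension and axis 1 the delay/subcarrier dimension):

```python
import numpy as np

def isfft(A_dd):
    """ISFFT: delay-Doppler symbols A_DD[k, l] -> time-frequency A_TF[n, m]."""
    N, M = A_dd.shape
    # e^{+j2*pi*nk/N}: inverse DFT along the Doppler axis (axis 0);
    # e^{-j2*pi*ml/M}: forward DFT along the delay axis (axis 1)
    return np.sqrt(N / M) * np.fft.fft(np.fft.ifft(A_dd, axis=0), axis=1)

def sfft(B_tf):
    """SFFT: time-frequency B_TF[n, m] -> delay-Doppler B_DD[k, l]."""
    N, M = B_tf.shape
    return np.sqrt(M / N) * np.fft.ifft(np.fft.fft(B_tf, axis=0), axis=1)
```

The scaling factors absorb NumPy's built-in \(1/N\) and \(1/M\) conventions so that the pair is unitary: `sfft(isfft(A))` recovers `A` exactly.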
The accumulated correlation coefficient under different delay and Doppler indices is given by \[\begin{split}V[k,l]=\sum_{n=0}^{N-1}\sum_{m=0}^{M-1}B_{ \mathrm{DD}}^{*}[n,m]A_{\mathrm{DD}}\left[[n-k]_{N},[m-l]_{M}\right]\\ \times\gamma[n-k,m-l]e^{j2\pi\frac{(m-l)k}{NM}},\end{split} \tag{7}\] where \(k\in[0,N-1]\) and \(l\in[0,M-1]\), and \(\gamma[k,l]\) is a phase offset given by \[\gamma[k,l]=\begin{cases}1,&l\geq 0,\\ e^{-j2\pi\frac{k}{N}},&l<0.\end{cases} \tag{8}\] ### _Dataset Description_ We describe the following datasets, which we use to train the proposed deep learning model. * **Transmitted dataset:** The transmitted dataset contains the transmitted OTFS signals \(x(t)\) that are generated by the transmitter and sent out into the environment. This signal is used to probe the environment and detect the objects or targets that reflect the signal back to the receiver. * **Low noise dataset:** The low noise dataset contains the signals \(r(t)\) obtained by using equation (4). Typically, deep learning applications operate under the assumption of a completely clean, noise-free dataset. However, in practical scenarios, such datasets are rarely available. Hence, we use low-noise datasets in our work. We have separately created datasets with 5 dB, 20 dB, and 40 dB SNR values, in order to compare the MAE, range RMSE, and velocity RMSE of the target at different SNR values. These datasets are used as the input to the GAN only during the training phase. * **Corrupted/noisy dataset:** The corrupted dataset contains the signals \(r(t)\) after corruption with AWGN, where the SNR ranges from 0 to -20 dB. * **Label:** The estimated target location is obtained in the DD domain by correlating the corrupted signal with the transmitted signal, using equation (7). The true target location indicates the actual target location. The estimated target value is compared with the true target value to generate the labels.
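The accumulated correlation and the labelling rule can be written directly as a (deliberately unoptimized) quadruple loop. This is an illustrative sketch, not the authors' code: it assumes the phase term in (7) reads \((m-l)k\), consistent with the circular-shift indices, and `noise_label` is a hypothetical helper name:

```python
import numpy as np

def dd_correlate(B_dd, A_dd):
    """Accumulated 2D correlation V[k, l] between received DD symbols B_dd
    and transmitted DD symbols A_dd, following Eqs. (7)-(8)."""
    N, M = A_dd.shape
    V = np.zeros((N, M), dtype=complex)
    for k in range(N):
        for l in range(M):
            acc = 0.0 + 0.0j
            for n in range(N):
                for m in range(M):
                    # phase offset gamma[n - k, m - l] from Eq. (8)
                    if m - l >= 0:
                        gamma = 1.0
                    else:
                        gamma = np.exp(-2j * np.pi * (n - k) / N)
                    acc += (np.conj(B_dd[n, m])
                            * A_dd[(n - k) % N, (m - l) % M]
                            * gamma
                            * np.exp(2j * np.pi * (m - l) * k / (N * M)))
            V[k, l] = acc
    return V

def noise_label(V, true_doppler, true_delay):
    """Label rule of the dataset description: 0 (moderate noise) if the
    peak of |V| falls on the true target bin, else 1 (severe noise)."""
    k_est, l_est = np.unravel_index(np.argmax(np.abs(V)), V.shape)
    return 0 if (k_est, l_est) == (true_doppler, true_delay) else 1
```

With no shift (\(k=l=0\)) and \(B_{\mathrm{DD}}=A_{\mathrm{DD}}\), the phase terms vanish and \(V[0,0]\) reduces to the total symbol energy, a quick sanity check on the implementation.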
The label '0' denotes a match between the estimated and true target values, which indicates moderate noise. The label '1' denotes a mismatch between the estimated and true target values, which corresponds to severe noise. Fig. 1 shows the DD matrices for radar sensing after performing the 2D correlation between the moderately corrupted signal and the transmitted signal in the DD domain. In this example, one target is considered with \(M=N=28\). The delay and Doppler indices of the target are 7 and 12 respectively. The same scenario is considered in Fig. 2 by performing the 2D correlation between the severely corrupted signal and the transmitted signal in the DD domain. It is seen that we cannot identify the location of the target exactly from the DD domain matrix. ### _Proposed CNN for classification of noise_ In the proposed system, the input to the CNN consists of the transmitted dataset and the corrupted dataset. The proposed CNN architecture used to classify noise as moderate or severe is shown in Fig. 3. The network starts with an image input layer. This layer is succeeded by a convolution layer with 32 filters of size \(13\times 13\). The padding is set to 'same'. A batch normalization layer is inserted after the convolution layer, followed by a ReLU layer. The CNN then continues with a series of similar layers, alternating between convolutional layers, batch normalization layers, and ReLU layers. The number of filters gradually doubles from 32 up to 256 in the subsequent layers. A dropout layer with rate 0.5 is also added after this stage. Two fully connected layers of 512 nodes and one fully connected layer of 256 nodes follow the convolution layers. In the proposed model, the activation function used at the output layer is softmax. The final classification layer then computes the loss and accuracy of the network during training and evaluation. In this case, the loss function employed is the cross-entropy loss.
The CNN classifies the noise as moderate or severe. If the noise is moderate, the location of the target can be identified using equation (7), and we can then derive the velocity and range of the target from the DD domain. On the other hand, if the CNN classifies the noise as severe, all the corresponding samples are aggregated and passed through the GAN for denoising. ### _Denoising OTFS Signals using GAN_ GANs use two neural networks, a generator network \(G\) and a discriminator network \(D\), which compete against each other to create the desired result. The inputs to the discriminator and generator are real data \(u\) and a random variable \(w\) respectively. The discriminator outputs a value \(D(u)\) indicating the probability that \(u\) is a real sample. The purpose of the discriminator is to maximize the probability of labeling the real samples as real and the generated fake samples \(G(w)\) as fake. The objective of the generator is to produce fake samples \(G(w)\) that are as close as possible to the real samples, so that the discriminator fails to separate the fake samples from the real ones. Hence a GAN can be defined as a minimax game in which \(G\) wants to minimize the value function \(\tilde{V}(D,G)\) while \(D\) wants to maximize it: \[\min_{G}\max_{D}\tilde{V}(D,G)=\mathbb{E}[\log D(u)]+\mathbb{E}[\log(1-D(G(w)))] \tag{9}\] The GAN is trained using pairs of low noise and corrupted OTFS signals. Fig. 4 shows the block diagram of the GAN for denoising OTFS signals. The generator network is trained to generate denoised signals from the corrupted signals. The inputs to the discriminator network are the low noise signals and the signals generated by the generator. During training, the generator network attempts to minimize the difference between the generated and low noise signals, while the discriminator network aims to maximize its ability to distinguish the generated signals from the low noise signals.
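In practice, the expectations in (9) are estimated from mini-batches of discriminator outputs. A small illustrative NumPy sketch of that Monte-Carlo estimate (the function name is hypothetical):

```python
import numpy as np

def gan_value(d_real, d_fake):
    """Mini-batch estimate of the minimax value in Eq. (9).

    d_real: discriminator outputs D(u) on real (low-noise) samples, in (0, 1)
    d_fake: discriminator outputs D(G(w)) on generated samples, in (0, 1)
    """
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))
```

The discriminator step ascends this quantity while the generator step descends it; at the game's equilibrium \(D\) outputs \(1/2\) everywhere, giving a value of \(2\log(1/2)\approx-1.386\).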
The discriminator network starts with an input layer, followed by a convolution layer with 32 filters of size (3,3) and stride (2,2). The padding is set to 'same'. The output of the convolution layer is fed through a Leaky ReLU activation function. Similar convolution layers are repeated with increasing filter counts of 64, 128, 128, 256, and 512. A dropout regularisation layer with rate 0.5 is inserted at the output to prevent overfitting. The output layer is a fully connected (Dense) layer with a single unit and a sigmoid activation function. The corrupted signal is preprocessed before being fed to the generator. The preprocessing is done by using Gaussian filtering, and the preprocessed signal is normalized to the range [0,1]. This is then passed through two convolution layers with 32 filters each of size (3,3), each succeeded by a max pooling layer. This is repeated with two more pairs of convolution and max pooling layers with filter counts of 64 and 128. Each layer uses the ReLU activation function and 'same' padding. The upsampling path consists of 3 sets of 2 convolution layers with filter counts of 64, 32, and 3 respectively. The output layer produces the denoised signals after training; these generated signals are then fed to the discriminator.
Fig. 1: The DD matrices for radar sensing after performing the 2D correlation between the moderately corrupted signal and the transmitted signal in the DD domain. The bottom plot is the 3D plot obtained by taking the magnitude of \(V\) along the \(z\)-axis.
Fig. 2: The DD matrices for radar sensing after performing the 2D correlation between the severely corrupted signal and the transmitted signal in the DD domain. The bottom plot is the 3D plot obtained by taking the magnitude of \(V\) along the \(z\)-axis.
## III Simulation Results In the simulations, we consider a single target scenario with \(P=1\).
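With 'same' padding and stride (2,2), each discriminator convolution halves the spatial extent (rounding up), so a \(28\times 28\) input collapses to \(1\times 1\) over the six blocks. A small shape-bookkeeping sketch (the helper name and kernel default are illustrative, not from the paper):

```python
import math

def conv_out(size, stride=2, padding="same", kernel=3):
    """Spatial output size of one convolution layer."""
    if padding == "same":
        return math.ceil(size / stride)   # 'same' padding: ceil(n / s)
    return (size - kernel) // stride + 1  # 'valid' padding

size = 28                                 # M = N = 28 input grid
trace = []
for filters in (32, 64, 128, 128, 256, 512):
    size = conv_out(size)
    trace.append((filters, size))
# trace: [(32, 14), (64, 7), (128, 4), (128, 2), (256, 1), (512, 1)]
```

This kind of trace makes it easy to confirm that the final \(512\times 1\times 1\) feature map matches the single-unit sigmoid output of the discriminator.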
A DD domain grid with \(M=N=28\) and \(\delta f_{s}=150\) kHz is considered. The carrier frequency \(f_{c}\) is taken as 60 GHz. The velocity and range resolutions can be calculated as \(V_{res}=\frac{Bc}{2MNf_{c}}\) and \(R_{res}=\frac{c}{2B}\), where \(c\) is the speed of light. The maximum velocity and range are given by \(V_{max}=\frac{c\delta f_{s}}{2f_{c}}\) and \(R_{max}=\frac{cT}{2}\). Datasets comprising 50000 complex OTFS samples, each of length \(MN\), are generated for the low noise, corrupted, and transmitted signals. We have also created three separate datasets corresponding to low noise signals with SNR levels of 5 dB, 20 dB, and 40 dB. Each dataset contains 50000 samples and is utilized for analyzing the MAE, range RMSE, and velocity RMSE of the target. Out of the 50000 samples, 40000 samples are used for training the system. The plot in Fig. 5 shows the variation in loss across the number of iterations for the CNN. Notably, the proposed CNN achieved a remarkable training accuracy of \(98.8\%\). Concurrently, the loss reduced progressively, reaching a low value of 0.05. ### _System Validation and Performance Analysis_ For testing the system, a total of 10000 transmitted and corrupted samples are given to the CNN. The test accuracy of the CNN is found to be \(97.89\%\). If the CNN classifies a sample as severely noisy, the associated samples are aggregated and sent through the GAN for denoising. Table 1 shows the comparative analysis of the MAE, range RMSE, and velocity RMSE of the target, utilizing corrupted signals in the range 0 to -20 dB SNR along with low noise signals of different SNR values. In this work, we have achieved an MAE value of 0.68 in the DD domain by using low noise signals with 40 dB SNR together with the corrupted signals.
Fig. 3: CNN architecture for classification of noise.
Fig. 4: Block diagram of the GAN for denoising.
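Plugging the stated parameters into these expressions (taking \(c\approx 3\times 10^{8}\) m/s and assuming \(T=1/\delta f_{s}\)) reproduces the 1000 m maximum range and 375 m/s maximum velocity quoted in the results:

```python
c = 3e8                     # speed of light (m/s), approximate
M = N = 28                  # delay / Doppler bins
df = 150e3                  # subcarrier spacing delta f_s (Hz)
fc = 60e9                   # carrier frequency (Hz)

B = M * df                  # total bandwidth = 4.2 MHz
T = 1 / df                  # symbol duration (assumes T * delta f_s = 1)

R_res = c / (2 * B)               # range resolution, ~35.7 m
V_res = B * c / (2 * M * N * fc)  # velocity resolution, ~13.4 m/s
R_max = c * T / 2                 # maximum range, 1000 m
V_max = c * df / (2 * fc)         # maximum velocity, 375 m/s
```

Note that \(R_{res}=R_{max}/M\) and \(V_{res}=V_{max}/N\), i.e. the DD grid divides the unambiguous range and velocity spans into \(M\) and \(N\) bins respectively.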
Thus, for a maximum distance of 1000 m and a maximum velocity of 375 m/s, we obtain a range RMSE and a velocity RMSE of 69.5 m and 26.06 m/s respectively. Similarly, the MAE, range RMSE, and velocity RMSE of the target under the same scenario using low noise signals with 20 dB and 5 dB SNR are shown in Table 1. ### _Comparison with the state-of-the-art methods_ The works [5] and [6] address modified versions of maximum likelihood (ML) estimation algorithms for target detection using OTFS signaling. In [5] and [6], the maximum range RMSE (at an SNR of up to -15 dB) is determined to be 0.7 m and 12 m, respectively. Similarly, the maximum velocity RMSE at -15 dB SNR is found to be 3 m/s and 10 m/s, respectively. The work in [7] performs target sensing up to -10 dB by utilizing sparse algorithms, obtaining maximum range RMSE and velocity RMSE values of 1. These non-zero values of RMSE in [4]-[6] indicate inaccurate target localization in radar sensing. In the SNR range from 0 to -15 dB, our approach delivers highly accurate results, with both range RMSE and velocity RMSE effectively reduced to zero. The works in [6] and [7] do not address SNR levels below -15 dB. Due to our dual-stage technique, we are able to operate even with SNR as low as -20 dB. Although [5] addresses SNR levels below -15 dB, resulting in range RMSE and velocity RMSE values of 100 m and 120 m/s at -20 dB, our approach demonstrates superior performance with lower values, as indicated in Table 1. It is evident that the values of MAE, range RMSE, and velocity RMSE are low when considering corrupted signals ranging from 0 to -20 dB SNR. Hence, radar target detection in highly challenging, low SNR environments is now possible with our proposed approach. ## IV Conclusions In this work, we have proposed a two-stage approach for denoising OTFS signals for radar sensing.
The first stage involves the classification of noisy OTFS samples as moderate or severe with the help of a CNN. The CNN accurately distinguishes the noisy samples and provides a solid foundation for the subsequent denoising process. The second stage focuses on denoising the identified noisy samples with the help of a GAN. The proposed system yields promising results, demonstrating its effectiveness in both the classification and denoising processes even in very low SNR environments.
2306.08293
Efficient Training of Physics-Informed Neural Networks with Direct Grid Refinement Algorithm
This research presents the development of an innovative algorithm tailored for the adaptive sampling of residual points within the framework of Physics-Informed Neural Networks (PINNs). By addressing the limitations inherent in existing adaptive sampling techniques, our proposed methodology introduces a direct mesh refinement approach that effectively ensures both computational efficiency and adaptive point placement. Verification studies were conducted to evaluate the performance of our algorithm, showcasing reasonable agreement between the model based on our novel approach and benchmark model results. Comparative analyses with established adaptive resampling techniques demonstrated the superior performance of our approach, particularly when implemented with higher refinement factor. Overall, our findings highlight the enhancement of simulation accuracy achievable through the application of our adaptive sampling algorithm for Physics-Informed Neural Networks.
Shikhar Nilabh, Fidel Grandia
2023-06-14T07:04:02Z
http://arxiv.org/abs/2306.08293v1
# _Efficient Training of Physics-Informed Neural Networks with Direct Grid Refinement Algorithm_ ###### Abstract This research presents the development of an innovative algorithm tailored for the adaptive sampling of residual points within the framework of Physics-Informed Neural Networks (PINNs). By addressing the limitations inherent in existing adaptive sampling techniques, our proposed methodology introduces a direct mesh refinement approach that effectively ensures both computational efficiency and adaptive point placement. Verification studies were conducted to evaluate the performance of our algorithm, showcasing reasonable agreement between the model based on our novel approach and benchmark model results. Comparative analyses with established adaptive resampling techniques demonstrated the superior performance of our approach, particularly when implemented with higher refinement factor. Overall, our findings highlight the enhancement of simulation accuracy achievable through the application of our adaptive sampling algorithm for Physics-Informed Neural Networks. ## 1 Introduction Physics-informed neural networks (PINNs) have gained prominence in recent years as a versatile tool for solving partial differential equations (PDEs) governed problems using deep neural networks (DNNs). Although PINNs have demonstrated success, addressing a broad range of increasingly complex PDE problems presents theoretical and practical challenges, necessitating further advancements to enhance prediction accuracy, computational efficiency, and training robustness [1]. Various techniques, such as loss function meta-learning [2], gradient-enhanced PINN [3], and adaptive sampling of residual points [4; 5] have been employed to enhance the accuracy of PINNs. Our focus is on improving PINN accuracy through a novel algorithm for adaptive non-uniform sampling. Two common approaches for adaptive sampling methods (ASM) are identified [4]. 
The first approach (ASM 1) selects points from the original residual set based on a probability mass function (PMF) [6]. Although computationally efficient, ASM 1 selects additional points only from the existing set of residual points. Therefore, it does not introduce any new points at different locations within the input space. Previous research has shown that adaptively locating the resampled points further enhances the accuracy of PINNs [7]. An alternative approach to adaptive sampling, referred to as ASM 2, considers the addition of residual points at new locations within the input space [5]. In ASM 2, residual points are randomly sampled over the input space, and those with relatively higher PDE residual values are selected. This method facilitates the adaptive positioning of residual points and is intentionally similar to the adaptive mesh refinement technique used in numerical methods. However, ASM 2 is computationally expensive, as it requires the calculation of the PDE residual at all randomly chosen points during each resampling period [8]. It is worth noting that recent research studies have introduced various variants of these two adaptive sampling methods [9; 10; 11; 12]. In this research, we introduce a novel adaptive sampling scheme (ASM 3) that samples points from new locations in the input space based on the PDE residuals of the original residual points. The scheme consists of three steps. In Step 1, an equi-spaced grid of residual points is defined as the reference set throughout the training process. In Step 2, new points are sampled from the reference residual points at each resampling period using their probability mass function (as in ASM 1). In Step 3, a new set of points is added in the neighborhood of each sampled point from Step 2. For a refinement factor of 2, one point is added, while for higher refinement factors, multiple points are added. The mathematical definition of the neighborhood in Step 3 is provided in Section 2.
The proposed method is computationally efficient because it reuses the PDE residuals already available at the reference residual points in Step 2, eliminating the need for additional residual calculations. This efficiency enables a higher frequency of resampling, thereby increasing accuracy. Furthermore, the method achieves adaptive point placement by assigning new points in the vicinity of the points sampled in Step 2, akin to the mesh refinement technique used in numerical studies. In contrast to ASM 2, which refines the grid indirectly and independently of the original set of residual points, this algorithm directly refines the grid formed by the reference set of residual points in Step 1. Additionally, like adaptive mesh refinement, it offers the flexibility to assign higher refinement for improved PINN accuracy. Notably, to enhance the computational efficiency of the ASM 3 algorithm, points sampled in Step 3 are not retained or reused in subsequent resampling events. ## 2 Direct Grid Refinement Method For the development of ASM 3, we consider a transient PDE case with \(t\in[0,T]\) and \(x\in\Omega\) (where \(\Omega\in R^{D}\)). In Step 1, a set of uniformly spaced residual points \(\left\{t_{f}^{i},x_{f}^{i}\right\}_{i=1}^{N_{f}}\) is defined with corresponding grid sizes \(h_{t}\) and \(h_{x}\). At each resampling period, \(m\) points \(\left\{t_{s}^{i},x_{s}^{i}\right\}_{i=1}^{m}\) are sampled from the set of reference residual points based on a probability mass function (elaborated in Section 2.1). For a refinement factor of 2, a new point \(\left\{t_{r}^{i},x_{r}^{i}\right\}_{i=1}^{m}\) is selected in the neighborhood of each sampled point \(\left(t_{s}^{i},x_{s}^{i}\right)\) in Step 2.
The location of the refined point \(\left(t_{r}^{i},x_{r}^{i}\right)\) in the input space is given by equations (1) and (2): \[t_{r}^{i}=t_{s}^{i}+\lambda_{t}h_{t} \tag{1}\] \[x_{r}^{i}=x_{s}^{i}+\lambda_{x}h_{x} \tag{2}\] where \(\left\{\lambda_{t},\lambda_{x}\right\}\) are the refinement coefficients, which range between \(-1\) and \(1\). These coefficients can be assigned as constants or dynamically selected by randomly picking values between \(-1\) and \(1\) for each sampled point in Step 2. Equations (1) and (2) illustrate the addition of a single point in the neighborhood of the sampled point from Step 2, resulting in a refinement factor of \(2\). It is also possible to achieve higher refinement factors by adding multiple refined points using different values of the refinement coefficients. Fig. 1 depicts the implementation of ASM 3 with refinement factors of 2, 3, and 4, considering randomized refinement coefficients. The blue dots represent the reference residual points defined in Step 1, while the blue dot encircled by a red circle represents the point sampled from the PMF of the reference residual points in Step 2. The green dots represent the adaptively sampled points in Step 3.
Figure 1: Implementation of ASM 3 with refinement factors of 2, 3 and 4. The blue dots represent reference residual points defined in Step 1; the blue dot encircled by a red circle represents the point sampled from a PMF of the reference residual points; the green points represent the new residual points added by the direct grid refinement algorithm (Step 3).
### Probability Mass function The probability mass function used in Step 2 of ASM 3 is given by equation (3): \[\overline{p}\left(x\right)=\frac{p(x)}{\sum_{x\in S_{0}}p(x)} \tag{3}\] where \(x\,(x\in S_{0})\) represents a point from the reference set of equi-distant residual points \(\left(S_{0}\right)\). \(p(x)\) is the probability density function, which is a non-linear function of the PDE residual.
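The three steps and equations (1)-(3) can be condensed into a short NumPy routine. This is a schematic rather than the authors' implementation: the residual values are assumed to be given, the unnormalized PMF weight is taken as the squared residual, and the refinement coefficients \(\lambda\) are drawn uniformly from \([-1,1]\):

```python
import numpy as np

def asm3_resample(ref_points, residuals, m, h, rf=2, rng=None):
    """Direct grid refinement (ASM 3), schematic version.

    ref_points : (N, d) array, reference grid from Step 1
    residuals  : (N,) array, |PDE residual| at the reference points
    m          : number of points drawn from the PMF in Step 2
    h          : (d,) array of grid spacings (h_t, h_x, ...)
    rf         : refinement factor; rf - 1 new points per sampled point
    """
    rng = np.random.default_rng(rng)
    # Step 2: PMF proportional to the squared residual
    w = residuals**2
    idx = rng.choice(len(ref_points), size=m, p=w / w.sum())
    sampled = ref_points[idx]
    # Step 3: rf - 1 refined points per neighbourhood, Eqs. (1)-(2),
    # with refinement coefficients lambda drawn uniformly from [-1, 1]
    refined = [sampled + rng.uniform(-1.0, 1.0, size=sampled.shape) * h
               for _ in range(rf - 1)]
    return np.concatenate(refined, axis=0)
```

Because only the stored residuals at the reference grid enter Step 2, no extra residual evaluations are needed between resampling events, which is the efficiency argument made above.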
[4] defined a general expression for the PDF using \(k\) and \(c\) as hyperparameters (Equation (4)): \[p(x)\propto\frac{\epsilon^{k}(x)}{\mathbb{E}[\epsilon^{k}(x)]}+c \tag{4}\] where \(\epsilon(x)\) is the PDE residual at residual point \(x\). In this work, the values of \(k\) and \(c\) are taken as 2 and 0, respectively, a combination that has already been tested in previous research [3].

### Solving the Advection-Dispersion Equation

A non-linear transient PDE governing the advection-dispersion process in a one-dimensional porous medium is solved with ASM 3 (Equations (5)-(7)): \[\frac{\partial\left(\epsilon c\right)}{\partial t}=-\frac{\partial\left(\nu c\right)}{\partial x}+\frac{\partial}{\partial x}\left[\left(D_{e}+\alpha_{L}\nu\right)\frac{\partial c}{\partial x}\right] \tag{5}\] where \(x\in\left[0,1\right]\), \(t\in\left[0,6000\right]\), \[c\left(t,0\right)=\left(\frac{1}{1+e^{-0.02\left(t-500\right)}}\right)\times\left(\frac{1}{1+e^{0.02\left(t+500\right)}}\right) \tag{6}\] \[\left(D_{e}+\alpha_{L}\nu\right)\frac{\partial c\left(t,1\right)}{\partial x}=0 \tag{7}\] where \(\epsilon\) is the porosity, \(D_{e}\) is the molecular diffusion coefficient \(\left(m^{2}\cdot s^{-1}\right)\), \(\alpha_{L}\) is the dispersivity (m), and \(\nu\) is the groundwater velocity \(\left(m\cdot s^{-1}\right)\). Table 1 lists the properties of the porous medium used for model development. The network output \(c(t,x)\) represents the concentration of a non-reactive chemical species in the porous medium. The entry of this species into the medium is mathematically smoothed using a sigmoidal step function (Equation (6)). The zero-dispersive-flux condition in Equation (7) represents an open boundary at the exit end of the porous medium. The porous medium is initially free of the chemical species, so \(c(0,x)=0\). The neural network consists of 3 hidden layers with 50 neurons each, using the sigmoid activation function for non-linearity.
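The resampling procedure (Step 2 sampling from the PMF of Equations (3)-(4), followed by the direct grid refinement of Equations (1)-(2) in Step 3) can be sketched in NumPy as follows. This is a minimal illustration: the grid and residual values below are synthetic placeholders, not output of the actual PINN.

```python
import numpy as np

def asm3_resample(residuals, pts, h, m=150, k=2, c=0.0, rf=2, rng=None):
    """One ASM 3 resampling event: sample m reference points from the
    residual-based PMF (Step 2) and place rf-1 refined neighbors each (Step 3)."""
    rng = rng if rng is not None else np.random.default_rng()
    # Eq. (4) with the normalization of Eq. (3): p ~ eps^k / E[eps^k] + c
    p = residuals ** k / np.mean(residuals ** k) + c
    p = p / p.sum()
    idx = rng.choice(len(pts), size=m, p=p)            # Step 2 (with replacement)
    refined = []
    for _ in range(rf - 1):                            # Step 3, Eqs. (1)-(2)
        lam = rng.uniform(-1.0, 1.0, size=(m, pts.shape[1]))
        refined.append(pts[idx] + lam * h)             # random refinement coefficients
    return np.concatenate(refined, axis=0)

# Placeholder 21x21 (t, x) reference grid and synthetic residual magnitudes
t, x = np.meshgrid(np.linspace(0, 6000, 21), np.linspace(0, 1, 21))
pts = np.stack([t.ravel(), x.ravel()], axis=1)
residuals = np.abs(np.sin(pts[:, 0] / 1000.0)) + 1e-6
new_pts = asm3_resample(residuals, pts, h=np.array([300.0, 0.05]),
                        rng=np.random.default_rng(0))
```

Each refined point lands within one grid cell of its parent, so higher-residual regions receive denser coverage without evaluating the residual at any new locations.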
A total of 202 residual points enforce the boundary conditions, while 441 equally spaced residual points enforce Equation (5). The study uses a weighted loss function, as formulated in previous research [13]. For the comparative study, four different simulations are run covering the adaptive sampling methods ASM 1, ASM 2, and ASM 3. The model runs for 15000 steps of gradient descent (using the Adam optimizer) with a resampling period of 1000 epochs. For each method, 150 new points are added at each resampling event. Two variants of ASM 3 are implemented, one with RF = 2 (ASM 3a) and one with RF = 4 (ASM 3b). In addition, a simulation with a fixed number of residual points (without an adaptive sampling strategy) is run as the base case.

\begin{table} \begin{tabular}{l c} \hline \hline Properties & Value \\ \hline Porosity & 0.3 \\ Dispersivity \(\left(m\right)\) & 0.01 \\ Groundwater velocity \(\left(m\cdot s^{-1}\right)\) & 0.0003 \\ Dispersion coefficient \(\left(m^{2}\cdot s^{-1}\right)\) & \(1\times 10^{-9}\) \\ \hline \hline \end{tabular} \end{table} Table 1: List of porous medium properties and their values used for model development.

## 3 Results

### Model Verification

The accuracy of the novel direct grid refinement algorithm is evaluated against a benchmark model implemented in Comsol. Fig. 2 (a) displays the concentration profile \(c(t,x)\) predicted by the Comsol benchmark model. The direct grid refinement algorithm, after 15000 epochs, closely approximates the Comsol model with minor discrepancies, as shown in Fig. 2 (b). The discrepancies occur primarily in the region with sharp concentration gradients. They are further illustrated in the error analysis, where the plot of the PDE residual (Fig. 2 (c)) reveals an error of up to \(9\times 10^{-5}\) at the front of the concentration gradient. To verify ASM 3a against the Comsol result, the concentration profile at \(x=1\) is plotted for all time steps (Fig.
2 (d)). The high \(R^{2}\) value of \(99.84\%\) demonstrates the efficiency of the PINN using the novel sampling algorithm.

Figure 2: (a) Benchmark model result from Comsol used for the verification study, (b) result from the PINN using the ASM 3a sampling method, (c) error analysis of the reference set of residual points highlighting relatively higher error in the region of higher concentration gradient, (d) comparative analysis of the PINN model result and the Comsol model result for the chemical species concentration \(c(t,1)\), showing an excellent match with an \(R^{2}\) value of 99.84%.

Fig. 3 depicts the points adaptively sampled by the grid refinement algorithm (ASM 3a), represented by green dots. The alignment of the green dots with the regions of high PDE residual in Fig. 2 (c) is evident. For comparison, the adaptively sampled points from ASM 1 and ASM 2 are also plotted. Fig. 3 illustrates that the majority of adaptively sampled points lie in regions with relatively higher residual error. ASM 1 samples from the set of original residual points, as indicated by the red-encircled residual points, while ASM 2 selects randomized residual points with higher error. The points adaptively sampled by ASM 3a form denser clusters than those sampled with ASM 1, as shown in Fig. 3. This can be attributed to the repetitive sampling of residual points in Step 2 of ASM 3a, as in ASM 1. To investigate this repetitive sampling, a color map of the points sampled from the probability mass function over the reference grid is plotted in Fig. 4. Each resampling event adds 150 new points, and certain points are repeated during the sampling process of Step 2 of ASM 3a (indicated by filled circles). The color map represents the frequency of point repetition, ranging from 1 to 13.
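The repetition frequency visualized in Fig. 4 arises because Step 2 samples with replacement; counting repeats per resampling event is straightforward. In this sketch the PMF is a random stand-in, not the actual residual-based one:

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.random(441)
p = p / p.sum()                            # stand-in PMF over the 441 reference points
sampled = rng.choice(441, size=150, p=p)   # Step 2: 150 draws with replacement

points, counts = np.unique(sampled, return_counts=True)
repeated = counts[counts > 1]              # points drawn more than once in this event
```

Points drawn multiple times receive multiple refined neighbors in Step 3, which produces the dense clusters seen around high-residual regions.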
The higher repetition of sampled points in Step 2 contributes to the clustering of adaptively sampled points in Step 3, as observed in Fig. 3.

### Comparative Study

The efficiency of the different resampling methods is evaluated by their relative L2 error with respect to the Comsol benchmark model. As shown in Table 2, the adaptive sampling techniques achieve higher accuracy than the base case (PINN without an adaptive resampling scheme). The model based on the novel algorithm (ASM 3a) demonstrates relatively superior accuracy compared to ASM 1, as evidenced by the associated relative L2 error. However, in contrast to ASM 2, the novel algorithm-based model (ASM 3a) exhibits a somewhat higher relative L2 error. With a refinement factor of 4, the accuracy of the novel algorithm (ASM 3b) improves further and outperforms the other adaptive sampling techniques.

Figure 3: After 15000 epochs, the reference residual points (blue) along with the sampled points from ASM 1, 2, and 3 (red-encircled blue dots, black dots, and green dots, respectively).

## 4 Conclusion

In this study, we developed a novel algorithm for adaptive sampling of residual points. Verification studies using a Physics-Informed Neural Network (PINN) with our sampling method (ASM 3a) demonstrated reasonable agreement with benchmark model results. Comparative analysis with other techniques showed that our approach delivers satisfactory results, especially with higher refinement factors. Overall, our adaptive sampling algorithm enhances simulation accuracy and efficiency, offering promising prospects for future research and applications. Further work is needed to explore its application beyond equi-spaced residual points.
\begin{table} \begin{tabular}{l c} \hline \hline ASM & Relative L2 error \\ \hline Base case & \(4.01\times 10^{-4}\) \\ ASM 1 & \(2.80\times 10^{-4}\) \\ ASM 2 & \(2.38\times 10^{-4}\) \\ ASM 3a & \(2.51\times 10^{-4}\) \\ ASM 3b & \(2.09\times 10^{-4}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Efficiency of the PINN using different sampling techniques, measured by relative L2 error.

Figure 4: Sampling of residual points in Steps 1 and 2 of ASM 3a. The empty blue circles represent the points from the reference grid defined in Step 1 of ASM 3a. The color-filled circles represent the points sampled in Step 2 of ASM 3a. The color map represents the repetition frequency of each point sampled in Step 2.

## Impact Statement

The presented research makes a significant impact on the research area of physics-based machine learning by introducing a novel algorithm for adaptive sampling of residual points. By addressing the challenges associated with existing adaptive sampling schemes, the proposed algorithm enhances prediction accuracy, computational efficiency, and training robustness. Through verification studies, the algorithm demonstrates reasonable agreement with benchmark model results, showcasing its effectiveness in improving simulation accuracy. Comparative analysis with other techniques underscores the superiority of the proposed approach, particularly with higher refinement factors.
2305.03866
Spiking neural networks with Hebbian plasticity for unsupervised representation learning
We introduce a novel spiking neural network model for learning distributed internal representations from data in an unsupervised procedure. We achieved this by transforming the non-spiking feedforward Bayesian Confidence Propagation Neural Network (BCPNN) model, employing an online correlation-based Hebbian-Bayesian learning and rewiring mechanism, shown previously to perform representation learning, into a spiking neural network with Poisson statistics and low firing rate comparable to in vivo cortical pyramidal neurons. We evaluated the representations learned by our spiking model using a linear classifier and show performance close to the non-spiking BCPNN, and competitive with other Hebbian-based spiking networks when trained on MNIST and F-MNIST machine learning benchmarks.
Naresh Ravichandran, Anders Lansner, Pawel Herman
2023-05-05T22:34:54Z
http://arxiv.org/abs/2305.03866v2
# Spiking neural networks with Hebbian plasticity for unsupervised representation learning

###### Abstract

We introduce a novel spiking neural network model for learning distributed internal representations from data in an unsupervised procedure. We achieved this by transforming the non-spiking feedforward Bayesian Confidence Propagation Neural Network (BCPNN) model, employing an online correlation-based Hebbian-Bayesian learning and rewiring mechanism, shown previously to perform representation learning, into a spiking neural network with Poisson statistics and low firing rate comparable to _in vivo_ cortical pyramidal neurons. We evaluated the representations learned by our spiking model using a linear classifier and show performance close to the non-spiking BCPNN, and competitive with other Hebbian-based spiking networks when trained on MNIST and F-MNIST machine learning benchmarks.

## 1 Introduction

The success of deep learning (DL) in solving various real-world pattern recognition benchmarks has shown the importance of building large-scale artificial neural networks (ANNs) with the ability to learn distributed internal representations from real-world data. One of the emerging concerns, however, is the energy footprint of the heavy computations involved in training large ANN architectures. In response to this challenge, there has been growing interest in neuromorphic approaches that build on more biologically plausible spiking neural networks (SNNs). This new generation of neural network models holds promise for energy-efficient neuromorphic hardware that can handle real-time data streams efficiently with sparse and asynchronous event-based communication [1]. It is therefore imperative that, in parallel to DL development, we develop SNNs that can learn representations from real-world data.
Building such SNN models has typically been addressed either by converting a traditional non-spiking ANN trained with gradient descent into a SNN, or by modifying backprop-based gradient descent algorithms to accommodate spiking neurons [1, 2]. Since the current approaches do not fully leverage the biological nature of the learning principles in SNNs, there is a motivated concern that the full potential of SNNs and their neuromorphic implementations may not be harnessed. Our design philosophy for SNNs is steeped in inspiration from the biological brain, and hence we aim to develop a biologically constrained SNN model that performs unsupervised representation learning based on Hebbian learning principles. For this, we derive our model from an abstract (non-spiking) brain-like BCPNN architecture, previously shown to perform representation learning by solely using Hebbian learning (synaptic plasticity) and Hebbian rewiring (structural plasticity) mechanisms [5]. Crucially, we employ on-line Hebbian learning directly in the spiking domain. To this end, we interpret spikes as stochastic independent samples from a Poisson distribution, where the underlying firing rates are computed as probabilities from the BCPNN model. This is motivated by the observation that _in vivo_ cortical pyramidal neurons show reliable firing rates whereas the timing of spikes is highly irregular, and the corresponding inter-spike intervals closely resemble a Poisson distribution [3, 4]. Our main contribution is to show that the BCPNN model can be converted to a SNN preserving the biological details with minimal compromise on performance. The spiking statistics in our model reach a maximum firing rate of around 50 spikes/s, matching the sparse firing of _in vivo_ cortical pyramidal neurons. We evaluated the internal representation of the model by means of a linear classifier and compared it with the corresponding non-spiking model as well as other SNNs with Hebbian learning.
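The interpretation of spikes as independent Poisson samples of an underlying rate can be sketched as one Bernoulli draw per simulation time step (an illustrative check; the rate and duration here are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
rate, dt, T = 50.0, 1e-3, 200.0               # 50 spikes/s, 1 ms steps, 200 s
spikes = rng.random(int(T / dt)) < rate * dt  # one Bernoulli draw per time step

emp_rate = spikes.sum() / T                   # empirical firing rate (spikes/s)
isi = np.diff(np.flatnonzero(spikes)) * dt    # inter-spike intervals (s)
```

For small `rate * dt` this approaches a Poisson process: `emp_rate` stays close to 50 spikes/s while individual spike times are irregular, with inter-spike intervals approximately exponential with mean `1 / rate`.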
## 2 Model description

We summarize key details of the model relevant to the spiking version and refer to previous works on the feedforward non-spiking BCPNN model for full details [5]. **Modular network design**: Our spiking BCPNN model consists of one spiking input layer and one spiking hidden layer. The layer architecture is derived from the columnar organization of the neocortex. Each layer in our network model is composed of many identical hypercolumn modules, each of which in turn comprises many neuron units (referred to as minicolumns) sharing the same receptive field. **Localized learning**: The learning mechanism is local, online, and correlation-based Hebbian-Bayesian synaptic plasticity, where each synapse accumulates short- and long-term traces of pre-, post-, and joint activities. From the pre- and post-synaptic spikes at time \(t\), \(S_{i},S_{j}\in\{0,1\}\), we compute \(Z\)-traces, \(Z_{i}\) and \(Z_{j}\), as a form of short-term filtering (\(\tau_{z}\) \(\sim\) few milliseconds) providing a coincidence detection window between pre- and post-synaptic spikes for subsequent LTP/LTD induction (Eq. 1). The \(Z\)-traces are further transformed into \(P\)-traces, \(P_{i},P_{j}\), and \(P_{ij}\), with long time-constants (\(\tau_{p}\) \(\sim\) seconds to hours) reflecting LTP/LTD synaptic processes (Eq. 2). The \(P\)-traces are finally transformed into the bias and weight parameters of the synapse, corresponding to the analogous terms in ANNs (Eq. 3). All the spike and trace variables are time dependent (the time index is dropped for notational brevity).
\[\tau_{z}\,\frac{dZ_{i}}{dt}=\frac{\tau_{z}}{\Delta t}S_{i}-Z_{i},\qquad\tau_{z}\,\frac{dZ_{j}}{dt}=\frac{\tau_{z}}{\Delta t}S_{j}-Z_{j}, \tag{1}\] \[\tau_{p}\,\,\frac{dP_{i}}{dt}=Z_{i}-P_{i},\qquad\tau_{p}\,\,\frac{dP_{ij}}{dt}=Z_{i}\,Z_{j}-P_{ij},\qquad\tau_{p}\,\,\frac{dP_{j}}{dt}=Z_{j}-P_{j}, \tag{2}\] \[b_{j}=\log\,\,P_{j}\,,\qquad w_{ij}=\log\,\,\frac{P_{ij}}{P_{i}\,P_{j}}\,, \tag{3}\] **Localized rewiring:** The synaptic rewiring mechanism adaptively finds efficient sparse connectivity between the layers, mimicking structural plasticity in the brain [5]. This mechanism uses the \(P\)-traces locally available at each synapse to maximize a "usage" score and updates the sparse binary connectivity matrix \(c_{ij}\) accordingly. **Neuronal activation:** The total input \(I_{j}\) for neuron \(j\) is updated as the weighted sum of incoming spikes with the time-constant \(\tau_{z}\) (acting here as the depolarization time constant) (Eq. 4). The activation of the neuron, \(\pi_{j}\), is computed as a softmax function of the input \(I_{j}\) (Eq. 5), which induces a soft-winner-takes-all competition between the minicolumn units within each hypercolumn module. The output of the softmax function reflects the posterior belief probability of the minicolumn unit according to the BCPNN formalism [5]. In the non-spiking (rate-based) BCPNN model, this activation \(\pi_{j}\) acts as the firing rate and can be directly communicated as the neuronal signal. For SNNs, we independently sample binary values from this \(\pi_{j}\) activation probability scaled by the maximum firing rate \(f_{max}\) for each time step (Eq. 6). Note that when \(f_{max}=1000\) spikes/s (and \(\Delta t=1\)ms), the spike generation process of Eq. 6 is simply a stochastic sampling of the underlying firing rate probability, and setting \(f_{max}<1/\Delta t\) linearly scales the activation probability to the maximum firing rate. Also, in both learning (Eq. 1) and synaptic integration (Eq.
4) steps, we scaled the binary spiking signal by \(\tau_{z}/\Delta t\), as this renders the filtered spike statistics of the model equivalent to the rate model. \[\tau_{z}\,\frac{dI_{j}}{dt}=b_{j}+\frac{\tau_{z}}{\Delta t}\sum_{i=0}^{N_{i}}\,S_{i}\,w_{ij}\,c_{ij}-I_{j}, \tag{4}\] \[\pi_{j}=\,\frac{\exp\,\left(I_{j}\right)}{\sum_{k=1}^{M_{h}}\exp\,\left(I_{k}\right)}, \tag{5}\] \[S_{j}\sim P(\text{spike between }t\text{ and }t+\Delta t)=\pi_{j}\,f_{max}\,\,\Delta t \tag{6}\]

## 3 Experiments

### Comparison of classification performance

To benchmark our spiking BCPNN model on the MNIST (hand-written digit images) and F-MNIST (fashion apparel images) datasets, we first trained it in a purely unsupervised manner (representation learning) and then used a linear classifier (cross-entropy loss, SGD with the Adam optimizer, 100 training epochs) to predict class labels (\(n\) = 3 randomized runs; all parameters are listed in Table 1). Table 2 shows that the classification accuracy of our model is competitive with the non-spiking BCPNN as well as other SNNs with Hebbian-like plasticity (STDP and its variants).

### Spiking BCPNN with sparse firing learns distributed representations

In Fig. 1A we plotted the neuronal support, i.e., the input \(I_{j}\), superimposed with the spiking output \(S_{j}\), for 30 randomly selected neurons within a single hypercolumn module after training the network on MNIST data (for visualization, we offset each neuron's input by 50mV vertically, scaled them to be in the range -80mV to -55mV, and added a spike event of 40 mV) and observed sparse spiking with occasional bursts. In Fig. 1B we plotted the firing rate of each neuron in a single randomly chosen hypercolumn module by convolving the spike train with a Gaussian kernel (\(\sigma\) = 50ms).
We see that most neurons have low firing rates (\(\sim\)2 spikes/s), with very few (typically one) neurons showing a high level of activity (\(\sim\)50 spikes/s) within the duration of a stimulus pattern presentation (gray vertical bars), due to the local competition within the hypercolumn. We plotted the receptive fields for three hypercolumns and the filters learned by six minicolumns each (randomly chosen) in Fig. 1C. They provide a good qualitative match to the previously published results of the non-spiking BCPNN model [5]. \begin{table} \begin{tabular}{|c|c|c|c|} \hline Type & Parameter & Value & Description \\ \hline Synaptic & \(\tau_{z}\) & 20 ms & Short-term filtering time constant \\ \cline{2-4} & \(\tau_{p}\) & 5 s & Long-term learning time constant \\ \cline{2-4} & \(p_{conn}\) & 10 \% & Sparse connectivity between layers \\ \hline Neuronal & \(H_{l},M_{l}\) & 784, 2 & N:o input layer hypercolumns \& minicolumns \\ \cline{2-4} & \(H_{h},M_{h}\) & 100, 100 & N:o hidden layer hypercolumns \& minicolumns \\ \cline{2-4} & \(f_{max}\) & 50 spikes/s & Maximum firing rate \\ \hline Training & \(\Delta t\) & 1 ms & Simulation time step \\ \cline{2-4} protocol & \(T_{pat}\) & 200 ms & Time period for each pattern \\ \cline{2-4} & \(T_{gap}\) & 100 ms & Time period of gap between patterns \\ \cline{2-4} & \(N_{epoch}\) & 10 & N:o of training epochs \\ \cline{2-4} & \(N_{pat}\) & 60000 & N:o of training patterns \\ \hline \end{tabular} \end{table} Table 1: Network parameters.
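The trace updates of Eqs. (1)-(3), driven by Poisson spikes as in Eq. (6), can be sketched for a single synapse (a minimal forward-Euler illustration using the Table 1 time constants; the shared pre/post drive and the small positive trace initialization are our own assumptions, not from the paper):

```python
import numpy as np

def simulate_synapse(rate=50.0, dt=1e-3, tau_z=0.02, tau_p=5.0,
                     steps=20000, seed=0):
    """Return the BCPNN weight (Eq. 3) after driving pre- and post-synaptic
    units with the same Poisson spike train (Eq. 6) and integrating the
    Z-traces (Eq. 1) and P-traces (Eq. 2) with forward Euler."""
    rng = np.random.default_rng(seed)
    Zi = Zj = 0.0
    Pi = Pj = Pij = 1e-4                 # small positive init keeps the logs finite
    for _ in range(steps):
        S = float(rng.random() < rate * dt)          # Poisson spike, Eq. (6)
        Zi += dt / tau_z * ((tau_z / dt) * S - Zi)   # Eq. (1)
        Zj += dt / tau_z * ((tau_z / dt) * S - Zj)
        Pi += dt / tau_p * (Zi - Pi)                 # Eq. (2)
        Pj += dt / tau_p * (Zj - Pj)
        Pij += dt / tau_p * (Zi * Zj - Pij)
    return np.log(Pij / (Pi * Pj))                   # Eq. (3)

w_ij = simulate_synapse()
```

Because the pre- and post-synaptic trains coincide here, \(P_{ij}\) exceeds \(P_{i}P_{j}\) and the weight comes out positive; uncorrelated trains would drive it toward zero.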
\begin{table} \begin{tabular}{|l|l|l|l|l|} \hline Model & Activity & Plasticity & MNIST & F-MNIST \\ \hline BCPNN (this work) & spiking & BCPNN & \(97.7\pm 0.09\) & \(83.8\pm 0.12\) \\ \hline BCPNN & rate & BCPNN & \(98.6\pm 0.08\) & \(89.9\pm 0.09\) \\ \hline Diehl \& Cook, 2015 [6] & spiking & STDP & 95.0 & – \\ \hline Kheradpisheh et al., 2018 [7] & spiking & STDP & 98.4 & – \\ \hline Mozafari et al., 2019 [8] & spiking & STDP-R & 97.2 & – \\ \hline Hao et al., 2019 [9] & spiking & sym-STDP & 96.7 & 85.3 \\ \hline Dong et al., 2023 [10] & spiking & STB-STDP & 97.9 & 87.0 \\ \hline \end{tabular} \end{table} Table 2: Linear classification accuracy (%). ### Filtering enables spiking models to approximate the non-spiking model We studied the effects of short-term filtering (\(Z\)-traces) in terms of classification performance (Fig. 2). We ran our experiments by training on a reduced version of MNIST dataset with 1000 training and 1000 test patterns while varying \(\tau_{z}\) and \(f_{max}\) (all other parameters same as in Table 1). For biologically realistic values of \(f_{max}\), like 50 spikes/s, performance with \(\tau_{z}\leq 10\)ms is very low (\(\tau_{z}=1\)ms is effectively no filtering). This is because pre- and post- synaptic spikes are expected to coincide within this time-window for learning to occur, whereas spikes are generated sparsely and irregularly from a Poisson distribution. However, for \(\tau_{z}\sim\)50ms, the performance closely approximates the non-spiking model since this time window is sufficient to expect pre- and post-synaptic spikes to coincide and be associated. For \(f_{max}>500\)Hz (non-biological case), accuracy is high for \(\tau_{z}\) over a wider range since the spikes are dense samples of the underlying neuronal activation and short-term filtering is not necessarily helpful. 
All models, irrespective of \(f_{max}\), drop sharply in performance for \(\tau_{z}>100\)ms, very likely because the temporal window becomes too long compared to the presentation time of each pattern (\(T_{pat}+T_{gap}=300\)ms), so learning wrongly associates spikes of a pattern with spikes from previous patterns.

Figure 2: Effect of short-term filtering on classification performance.

Figure 1: **A.** Neuronal support recorded after training for a time period of 6s across 30 randomly selected neurons shows sparse spiking activity. **B.** Firing rate computed from all (\(M_{h}\)=100) neurons within one hypercolumn. For the duration of pattern presentations (gray vertical bars, \(T_{pat}=200\)ms), mostly a single neuron shows a high firing rate while the rest are at a baseline firing rate. **C.** Local receptive fields are formed from randomly initialized connections through the rewiring mechanism, and individual minicolumns learn filters within their receptive field resembling orientation edge detectors.

## 4 Conclusion

We have demonstrated that our spiking BCPNN model can learn internal representations, preserving the learning and rewiring mechanisms introduced in the non-spiking BCPNN model, while offering competitive classification performance. Our Poisson spike generation mechanism is simpler than integrate-and-fire models, but it still recapitulates the irregular _in vivo_ cortical pyramidal spiking patterns with realistic firing rates. We suggest that it is the Hebbian plasticity mechanism that provides a robust learning algorithm tolerating the highly irregular sparse spiking. This is in stark contrast to backprop-based algorithms, where it is not straightforward to accommodate spiking neurons. We found that short-term filtering (\(Z\)-traces) was crucial for this process. The time constants we found to work best (\(\tau_{z}\sim\)50ms) roughly match the dendritic depolarization time constant (paralleling the integration step in Eq.
4), and the NMDA-dependent Ca\({}^{2+}\) influx required for synaptic plasticity (the learning step in Eq. 1). Our scaling experiments (not shown) suggested that the network scales well in terms of performance, although the running time is about 100x slower than the non-spiking model since the time step needs to be around 1ms (simulations took \(\sim\)10 hours with custom CUDA code running on A100 GPUs). More efficient software and custom hardware implementations could make large-scale SNN simulations substantially faster. Another direction of interest is developing a more complex network architecture that combines recurrent attractors implementing associative memory with hierarchical (unsupervised) representation learning networks.
2307.08192
HOPE: High-order Polynomial Expansion of Black-box Neural Networks
Despite their remarkable performance, deep neural networks remain mostly ``black boxes'', suggesting inexplicability and hindering their wide applications in fields requiring making rational decisions. Here we introduce HOPE (High-order Polynomial Expansion), a method for expanding a network into a high-order Taylor polynomial on a reference input. Specifically, we derive the high-order derivative rule for composite functions and extend the rule to neural networks to obtain their high-order derivatives quickly and accurately. From these derivatives, we can then derive the Taylor polynomial of the neural network, which provides an explicit expression of the network's local interpretations. Numerical analysis confirms the high accuracy, low computational complexity, and good convergence of the proposed method. Moreover, we demonstrate HOPE's wide applications built on deep learning, including function discovery, fast inference, and feature selection. The code is available at https://github.com/HarryPotterXTX/HOPE.git.
Tingxiong Xiao, Weihang Zhang, Yuxiao Cheng, Jinli Suo
2023-07-17T01:46:15Z
http://arxiv.org/abs/2307.08192v1
# HOPE: High-order Polynomial Expansion of Black-box Neural Networks

###### Abstract

Despite their remarkable performance, deep neural networks remain mostly "black boxes", suggesting inexplicability and hindering their wide application in fields that require rational decision-making. Here we introduce HOPE (High-order Polynomial Expansion), a method for expanding a network into a high-order Taylor polynomial on a reference input. Specifically, we derive the high-order derivative rule for composite functions and extend the rule to neural networks to obtain their high-order derivatives quickly and accurately. From these derivatives, we can then derive the Taylor polynomial of the neural network, which provides an explicit expression of the network's local interpretations. Numerical analysis confirms the high accuracy, low computational complexity, and good convergence of the proposed method. Moreover, we demonstrate HOPE's wide applications built on deep learning, including function discovery, fast inference, and feature selection. The code is available at [https://github.com/HarryPotterXTX/HOPE.git](https://github.com/HarryPotterXTX/HOPE.git).

Keywords: explainable artificial intelligence (XAI), high-order derivative, Taylor expansion, neural network, deep learning.

## 1 Introduction

Deep neural networks have gained widespread adoption due to their ability for universal approximation, as proved by numerous studies [1, 2, 3, 4]. However, deep learning models are largely considered black boxes, which hinders their practical application. Understanding the rationale behind predictions is therefore crucial when making rational decisions based on the network output or deciding whether to deploy a new model. This requirement for understanding is particularly important in areas such as clinical decision-making [5, 6], drug discovery [7], and physical law identification [8], so people often prioritize models that align with their intuition over accuracy [9, 10, 11].
Therefore, there is a growing need for explainable AI (XAI) approaches to make deep learning more transparent and convincing [12, 13, 14]. XAI methods can be divided into five categories [7]: feature attribution, instance-based, graph-convolution-based, self-explaining, and uncertainty estimation. Among these, the feature attribution family, which calculates the relevance of every input feature to the final prediction, has been the most widely used in recent years. Implementations of feature attribution can be further grouped into the following three categories. _Gradient-based feature attribution_ approaches measure the impact of a change within an input neighborhood on the change in the output. These methods are mainly inspired by back-propagation [15]; well-known examples include the saliency map [16], Integrated Gradients [17], SmoothGrad [18], local explanation vectors [19], Grad-CAM [20], guided backpropagation [21], LRP [22], and deep Taylor decomposition [23]. Gradient-based methods calculate the first-order derivative of the model as the feature attribution but omit the higher-order terms, which are important for nonlinear functions. _Surrogate-model feature attribution_ aims to develop a surrogate explanatory model of the original function that mirrors its computational logic. Representative surrogate models include LIME [24], DeepLIFT [25], the Shapley value [26], LRP [22], SHAP [27], and BETA [28]. However, such surrogate models are mostly linear and suffer from insufficient approximation accuracy. _Perturbation-based feature attribution_ modifies or removes part of the input to measure the corresponding change in the model output, which reflects the feature importance of the neural network input. Methods like feature masking [29], perturbation analysis [30], response randomization [31], and conditional multivariate models [32] fall into this category.
Although intuitive, perturbation-based methods become computationally slow as the number of input features increases [32], and the final result tends to be strongly influenced by the number of perturbed features [33]. In summary, an approach capable of approximating a general deep neural network with high accuracy, low cost, and good interpretability is needed. Here we present HOPE (High-order Polynomial Expansion), an approach that expands a deep neural network into a high-order Taylor polynomial at a reference input. The Taylor expansion is built on calculating the derivatives of the target neural network, which is intrinsically a nonlinear function. We first derive the high-order derivative rule for composite functions and extend this to neural networks to obtain their high-order derivatives quickly and accurately. Our method serves both as a gradient-based method and as a surrogate model, since it integrates the high-order derivatives and uses nonlinear polynomials to locally approximate the neural network. Our expansion achieves high approximation accuracy, and its computational cost is far lower than that of perturbation-based methods because all derivatives are obtained with a single back-propagation pass. In recent years, some researchers have turned their attention to the Taylor expansion of neural networks. LRP [22] and deep Taylor decomposition [23] provide a first-order approximation but neglect the high-order terms. Morala et al. [34] explored a mathematical framework for expanding a single-hidden-layer neural network, but it does not apply to multi-layer networks. NN-Poly [35], on the other hand, can infer the Taylor expansion of single-layer neural networks and obtain the Taylor polynomial of a multi-layer neural network through forward composition, but at extremely high computational complexity.
SHoP [36] proposed a Taylor expansion framework to solve high-order partial differential equations, but it considered only fully connected networks and adopted a layer-wise, rather than module-wise, approach for the derivation. In contrast, our method is similar to back-propagation: it computes the derivatives of the final output with respect to the intermediate outputs, propagating from the output layer back to the input layer. Compared to forward composition, our method, HOPE, has significant advantages in terms of accuracy, speed, computational complexity, and memory consumption. To summarize, the main technical contributions are listed as follows: * We infer the high-order derivative rule and extend it to calculating the derivatives of deep neural networks with higher accuracy, higher speed, and less memory consumption than the conventional computational-graph-based counterpart. * We propose a framework to expand a deep neural network into a Taylor series, providing an explicit explanation of the inner workings of the "black-box" model. * We prove the equivalence between a neural network and its Taylor series, and analyze its convergence condition. * We demonstrate the wide applications of Taylor expanded deep neural networks, such as function discovery, fast inference, and feature selection. The paper is structured as follows: In Sec. 2, we present the high-order derivative rule for composite functions. In Sec. 3, we extend this rule to deep neural networks, which allows for efficient and accurate computation of their high-order derivatives. Based on the calculated derivatives, we propose the Taylor expansion framework--HOPE, and analyze its bounds, convergence, and computational complexity in Sec. 4. In Sec.
5, we conduct a series of experiments to show HOPE's significant advantages over the computational-graph-based method, verify the convergence condition, and demonstrate its applications in function discovery, fast inference, and feature selection. Finally, in Sec. 6, we analyze the advantages and disadvantages of the proposed approach and discuss some possible future work. ## 2 High-order Derivative Rule for Composite Functions Consider a composite function \[\mathbf{y}=h(g(\mathbf{x})), \tag{1}\] which is constructed from two functions \(\mathbf{z}=g(\mathbf{x})\) and \(\mathbf{y}=h(\mathbf{z})\) that map the input \(\mathbf{x}\in\mathbb{R}^{p}\) to the intermediate state variable \(\mathbf{z}\in\mathbb{R}^{s}\), and then to the final output \(\mathbf{y}\in\mathbb{R}^{o}\), sequentially. Assuming the two functions \(g(\cdot)\) and \(h(\cdot)\) are \(n\)-order differentiable at \(\mathbf{x}0\) and \(\mathbf{z}0=g(\mathbf{x}0)\) respectively, this section derives the high-order derivative rule for the composite function \(\mathbf{y}=h(g(\mathbf{x}))\) in three systems: Single-Input Single-State Single-Output (SISSSO), Multiple-Input Multiple-State Single-Output (MIMSSO), and Multiple-Input Multiple-State Multiple-Output (MIMSMO). ### _SISSSO_ For an SISSSO system, \(\mathbf{x},\mathbf{z},\mathbf{y}\in\mathbb{R}\).
From the chain rule, the first three derivatives of \(\mathbf{y}=h(g(\mathbf{x}))\) can be calculated as \[\left\{\begin{array}{l}\frac{\partial\mathbf{y}}{\partial\mathbf{x}}=\frac{\partial\mathbf{z}}{\partial\mathbf{x}}\frac{\partial\mathbf{y}}{\partial\mathbf{z}},\\ \frac{\partial^{2}\mathbf{y}}{\partial\mathbf{x}^{2}}=\frac{\partial^{2}\mathbf{z}}{\partial\mathbf{x}^{2}}\frac{\partial\mathbf{y}}{\partial\mathbf{z}}+\left(\frac{\partial\mathbf{z}}{\partial\mathbf{x}}\right)^{2}\frac{\partial^{2}\mathbf{y}}{\partial\mathbf{z}^{2}},\\ \frac{\partial^{3}\mathbf{y}}{\partial\mathbf{x}^{3}}=\frac{\partial^{3}\mathbf{z}}{\partial\mathbf{x}^{3}}\frac{\partial\mathbf{y}}{\partial\mathbf{z}}+3\frac{\partial\mathbf{z}}{\partial\mathbf{x}}\frac{\partial^{2}\mathbf{z}}{\partial\mathbf{x}^{2}}\frac{\partial^{2}\mathbf{y}}{\partial\mathbf{z}^{2}}+\left(\frac{\partial\mathbf{z}}{\partial\mathbf{x}}\right)^{3}\frac{\partial^{3}\mathbf{y}}{\partial\mathbf{z}^{3}}.\end{array}\right. \tag{2}\] For higher-order terms, we can convert every \(\frac{\partial^{k}\mathbf{y}}{\partial\mathbf{z}^{k-1}\partial\mathbf{x}}\) into \(\frac{\partial\mathbf{z}}{\partial\mathbf{x}}\frac{\partial^{k}\mathbf{y}}{\partial\mathbf{z}^{k}}\), and calculate \(\{\frac{\partial^{k}\mathbf{y}}{\partial\mathbf{x}^{k}},k=1,\dots,n\}\) from \(\{\frac{\partial^{k}\mathbf{y}}{\partial\mathbf{z}^{k}},k=1,\dots,n\}\) and \(\{\frac{\partial^{k}\mathbf{z}}{\partial\mathbf{x}^{k}},k=1,\dots,n\}\). Hence, Eq.
(2) turns into the following matrix form \[\left[\begin{array}{c}\frac{\partial\mathbf{y}}{\partial\mathbf{x}}\\ \frac{\partial^{2}\mathbf{y}}{\partial\mathbf{x}^{2}}\\ \frac{\partial^{3}\mathbf{y}}{\partial\mathbf{x}^{3}}\\ \vdots\end{array}\right]=\left[\begin{array}{cccc}\frac{\partial\mathbf{z}}{\partial\mathbf{x}}&0&0&0\\ \frac{\partial^{2}\mathbf{z}}{\partial\mathbf{x}^{2}}&(\frac{\partial\mathbf{z}}{\partial\mathbf{x}})^{2}&0&0\\ \frac{\partial^{3}\mathbf{z}}{\partial\mathbf{x}^{3}}&3\frac{\partial\mathbf{z}}{\partial\mathbf{x}}\frac{\partial^{2}\mathbf{z}}{\partial\mathbf{x}^{2}}&(\frac{\partial\mathbf{z}}{\partial\mathbf{x}})^{3}&0\\ \vdots&\vdots&\vdots&\ddots\end{array}\right]\left[\begin{array}{c}\frac{\partial\mathbf{y}}{\partial\mathbf{z}}\\ \frac{\partial^{2}\mathbf{y}}{\partial\mathbf{z}^{2}}\\ \frac{\partial^{3}\mathbf{y}}{\partial\mathbf{z}^{3}}\\ \vdots\end{array}\right], \tag{3}\] which can be further abbreviated as \[\mathbf{v}^{y,x}=\mathbf{M}^{z,x}\mathbf{v}^{y,z}. \tag{4}\] In this equation, \(\mathbf{v}^{y,x},\mathbf{v}^{y,z}\in\mathbb{R}^{n}\) are respectively the vectors composed of the derivatives \(\{\frac{\partial^{k}\mathbf{y}}{\partial\mathbf{x}^{k}}\}\) and \(\{\frac{\partial^{k}\mathbf{y}}{\partial\mathbf{z}^{k}}\}\); \(\mathbf{M}^{z,x}\in\mathbb{R}^{n\times n}\) is the transformation matrix composed of the terms \(\{\frac{\partial^{k}\mathbf{z}}{\partial\mathbf{x}^{k}}\}\) and takes a lower triangular form. So far, the calculation of \(h(g(\mathbf{x}))\)'s \(n\)-order derivatives turns into the computation of \(\mathbf{M}^{z,x}\). From Eq. (3), the \(i\)th and \((i+1)\)th terms (\(i<n\)) are respectively \[\frac{\partial^{i}\mathbf{y}}{\partial\mathbf{x}^{i}}=\sum_{j=1}^{n}\mathbf{M}_{i,j}^{z,x}\frac{\partial^{j}\mathbf{y}}{\partial\mathbf{z}^{j}} \tag{5}\] and \[\frac{\partial^{i+1}\mathbf{y}}{\partial\mathbf{x}^{i+1}}=\sum_{j=1}^{n}\mathbf{M}_{i+1,j}^{z,x}\frac{\partial^{j}\mathbf{y}}{\partial\mathbf{z}^{j}}. \tag{6}\] Taking derivatives over both sides of Eq.
(5), we arrive at \[\begin{split}\frac{\partial^{i+1}\mathbf{y}}{\partial\mathbf{x}^{i+1}}&=\sum_{j=1}^{n}\frac{\partial}{\partial\mathbf{x}}\left(\mathbf{M}_{i,j}^{z,x}\frac{\partial^{j}\mathbf{y}}{\partial\mathbf{z}^{j}}\right)\\ &=\sum_{j=1}^{n}\frac{\partial\mathbf{M}_{i,j}^{z,x}}{\partial\mathbf{x}}\frac{\partial^{j}\mathbf{y}}{\partial\mathbf{z}^{j}}+\sum_{j=1}^{n}\frac{\partial\mathbf{z}}{\partial\mathbf{x}}\mathbf{M}_{i,j}^{z,x}\frac{\partial^{j+1}\mathbf{y}}{\partial\mathbf{z}^{j+1}}\\ &=\sum_{j=1}^{n}\left(\frac{\partial\mathbf{M}_{i,j}^{z,x}}{\partial\mathbf{x}}+\frac{\partial\mathbf{z}}{\partial\mathbf{x}}\mathbf{M}_{i,j-1}^{z,x}\right)\frac{\partial^{j}\mathbf{y}}{\partial\mathbf{z}^{j}}\\ &\quad-\frac{\partial\mathbf{z}}{\partial\mathbf{x}}\mathbf{M}_{i,0}^{z,x}\frac{\partial\mathbf{y}}{\partial\mathbf{z}}+\frac{\partial\mathbf{z}}{\partial\mathbf{x}}\mathbf{M}_{i,n}^{z,x}\frac{\partial^{n+1}\mathbf{y}}{\partial\mathbf{z}^{n+1}}.\end{split} \tag{7}\] Because \(\mathbf{M}_{i,0}^{z,x}=0\) and \(\mathbf{M}_{i,n}^{z,x}=0\) \((i<n)\), Eq. (7) can be simplified into \[\frac{\partial^{i+1}\mathbf{y}}{\partial\mathbf{x}^{i+1}}=\sum_{j=1}^{n}\left(\frac{\partial\mathbf{M}_{i,j}^{z,x}}{\partial\mathbf{x}}+\frac{\partial\mathbf{z}}{\partial\mathbf{x}}\mathbf{M}_{i,j-1}^{z,x}\right)\frac{\partial^{j}\mathbf{y}}{\partial\mathbf{z}^{j}}. \tag{8}\] Comparing Eq. (6) and Eq. (8), we can get the recurrence formula of \(\mathbf{M}^{z,x}\) as \[\left\{\begin{array}{ll}\mathbf{M}_{1,1}^{z,x}=\frac{\partial\mathbf{z}}{\partial\mathbf{x}},&\\ \mathbf{M}_{i,j}^{z,x}=0,&i<j\\ \mathbf{M}_{i,j}^{z,x}=\frac{\partial\mathbf{M}_{i-1,j}^{z,x}}{\partial\mathbf{x}}+\frac{\partial\mathbf{z}}{\partial\mathbf{x}}\mathbf{M}_{i-1,j-1}^{z,x},&i\geq j\end{array}\right. \tag{9}\] which explicitly composes the \(n\)-order transformation matrix \(\mathbf{M}^{z,x}\) in Eq. (4).
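The recurrence in Eq. (9) can be checked on a concrete composite function. Below is a minimal pure-Python sketch (an illustration, not part of the paper's implementation) that represents \(g\), \(h\), and the entries of \(\mathbf{M}^{z,x}\) as polynomial coefficient lists, builds \(\mathbf{M}^{z,x}\) with the recurrence, and compares \(\mathbf{v}^{y,x}=\mathbf{M}^{z,x}\mathbf{v}^{y,z}\) against direct differentiation of \(h(g(x))\):

```python
def pdiff(p):                       # d/dx of a polynomial given as [c0, c1, ...]
    return [i * c for i, c in enumerate(p)][1:] or [0]

def padd(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) for i in range(n)]

def pmul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            out[i + j] += ca * cb
    return out

def pcompose(h, g):                 # h(g(x)) via Horner's scheme
    out = [0]
    for c in reversed(h):
        out = padd(pmul(out, g), [c])
    return out

def peval(p, x):
    return sum(c * x**k for k, c in enumerate(p))

g = [0, 1, 1]                       # z = g(x) = x + x^2
h = [0, 0, 0, 1]                    # y = h(z) = z^3
n = 4
gp = pdiff(g)                       # dz/dx

# Transformation matrix M^{z,x} built with the recurrence of Eq. (9);
# every entry is itself a polynomial in x.
M = [[[0] for _ in range(n + 1)] for _ in range(n + 1)]
M[1][1] = gp
for i in range(2, n + 1):
    for j in range(1, n + 1):
        M[i][j] = padd(pdiff(M[i - 1][j]), pmul(gp, M[i - 1][j - 1]))

# v^{y,z}: j-order derivatives of h, composed with g.
v_yz, d = [], h
for _ in range(n + 1):
    v_yz.append(pcompose(d, g))
    d = pdiff(d)

# v^{y,x} = M^{z,x} v^{y,z} must match direct differentiation of h(g(x)).
direct = pcompose(h, g)
for i in range(1, n + 1):
    direct = pdiff(direct)
    via = [0]
    for j in range(1, n + 1):
        via = padd(via, pmul(M[i][j], v_yz[j]))
    assert all(peval(via, t) == peval(direct, t) for t in (-2, 0, 1, 3))
```

Because all coefficients are integers, the comparison is exact, and the same check can be repeated for any polynomial pair \(g\), \(h\) and any order \(n\).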
### _MIMSSO_ **Unmixed partial derivatives.** For a MIMSSO system with \(p\)-dimensional input and \(s\) intermediate states, i.e., \(\mathbf{x}\in\mathbb{R}^{p}\), \(\mathbf{z}\in\mathbb{R}^{s}\), \(\mathbf{y}\in\mathbb{R}\), we can get the first three unmixed partial derivatives of \(\mathbf{y}=h(g(\mathbf{x}))\) as \[\left\{\begin{array}{l}\frac{\partial\mathbf{y}}{\partial\mathbf{x}_{i}}=\sum_{j=1}^{s}\frac{\partial\mathbf{z}_{j}}{\partial\mathbf{x}_{i}}\frac{\partial\mathbf{y}}{\partial\mathbf{z}_{j}},\\ \frac{\partial^{2}\mathbf{y}}{\partial\mathbf{x}_{i}^{2}}=\sum_{j=1}^{s}\left(\frac{\partial^{2}\mathbf{z}_{j}}{\partial\mathbf{x}_{i}^{2}}\frac{\partial\mathbf{y}}{\partial\mathbf{z}_{j}}+\left(\frac{\partial\mathbf{z}_{j}}{\partial\mathbf{x}_{i}}\right)^{2}\frac{\partial^{2}\mathbf{y}}{\partial\mathbf{z}_{j}^{2}}\right),\\ \frac{\partial^{3}\mathbf{y}}{\partial\mathbf{x}_{i}^{3}}=\sum_{j=1}^{s}\left(\frac{\partial^{3}\mathbf{z}_{j}}{\partial\mathbf{x}_{i}^{3}}\frac{\partial\mathbf{y}}{\partial\mathbf{z}_{j}}+3\frac{\partial\mathbf{z}_{j}}{\partial\mathbf{x}_{i}}\frac{\partial^{2}\mathbf{z}_{j}}{\partial\mathbf{x}_{i}^{2}}\frac{\partial^{2}\mathbf{y}}{\partial\mathbf{z}_{j}^{2}}+\left(\frac{\partial\mathbf{z}_{j}}{\partial\mathbf{x}_{i}}\right)^{3}\frac{\partial^{3}\mathbf{y}}{\partial\mathbf{z}_{j}^{3}}\right).\end{array}\right.
\tag{10}\] To facilitate derivation, we define an operator \(\beta\) to save the information of the \(k\)-order unmixed partial derivatives \[\frac{\beta^{k}}{\beta\mathbf{x}^{k}}=\left[\begin{array}{c}\frac{\partial^{k}}{\partial\mathbf{x}_{1}^{k}}\\ \vdots\\ \frac{\partial^{k}}{\partial\mathbf{x}_{p}^{k}}\end{array}\right], \tag{11}\] and the following equation holds \[\frac{\beta^{k}\mathbf{z}^{T}}{\beta\mathbf{x}^{k}}=\left[\begin{array}{ccc}\frac{\partial^{k}\mathbf{z}_{1}}{\partial\mathbf{x}_{1}^{k}}&\cdots&\frac{\partial^{k}\mathbf{z}_{s}}{\partial\mathbf{x}_{1}^{k}}\\ \vdots&\ddots&\vdots\\ \frac{\partial^{k}\mathbf{z}_{1}}{\partial\mathbf{x}_{p}^{k}}&\cdots&\frac{\partial^{k}\mathbf{z}_{s}}{\partial\mathbf{x}_{p}^{k}}\end{array}\right]. \tag{12}\] Based on the above definitions, Eq. (10) can be rewritten as \[\left\{\begin{array}{l}\frac{\beta\mathbf{y}}{\beta\mathbf{x}}=\frac{\beta\mathbf{z}^{T}}{\beta\mathbf{x}}\frac{\beta\mathbf{y}}{\beta\mathbf{z}},\\ \frac{\beta^{2}\mathbf{y}}{\beta\mathbf{x}^{2}}=\frac{\beta^{2}\mathbf{z}^{T}}{\beta\mathbf{x}^{2}}\frac{\beta\mathbf{y}}{\beta\mathbf{z}}+\left(\frac{\beta\mathbf{z}^{T}}{\beta\mathbf{x}}\right)^{\circ 2}\frac{\beta^{2}\mathbf{y}}{\beta\mathbf{z}^{2}},\\ \frac{\beta^{3}\mathbf{y}}{\beta\mathbf{x}^{3}}=\frac{\beta^{3}\mathbf{z}^{T}}{\beta\mathbf{x}^{3}}\frac{\beta\mathbf{y}}{\beta\mathbf{z}}+3\left(\frac{\beta\mathbf{z}^{T}}{\beta\mathbf{x}}\odot\frac{\beta^{2}\mathbf{z}^{T}}{\beta\mathbf{x}^{2}}\right)\frac{\beta^{2}\mathbf{y}}{\beta\mathbf{z}^{2}}+\left(\frac{\beta\mathbf{z}^{T}}{\beta\mathbf{x}}\right)^{\circ 3}\frac{\beta^{3}\mathbf{y}}{\beta\mathbf{z}^{3}},\end{array}\right. \tag{13}\] where \(\circ k\) denotes the Hadamard power, \((\mathbf{A}^{\circ k})_{i,j}=(\mathbf{A}_{i,j})^{k}\), and \(\odot\) the Hadamard product, \((\mathbf{A}\odot\mathbf{B})_{i,j}=\mathbf{A}_{i,j}\mathbf{B}_{i,j}\). Similar to Eq. (3), we turn Eq.
(13) into a matrix form \[\left[\begin{array}{c}\frac{\beta\mathbf{y}}{\beta\mathbf{x}}\\ \frac{\beta^{2}\mathbf{y}}{\beta\mathbf{x}^{2}}\\ \frac{\beta^{3}\mathbf{y}}{\beta\mathbf{x}^{3}}\\ \vdots\end{array}\right]=\left[\begin{array}{cccc}\frac{\beta\mathbf{z}^{T}}{\beta\mathbf{x}}&0&0&0\\ \frac{\beta^{2}\mathbf{z}^{T}}{\beta\mathbf{x}^{2}}&(\frac{\beta\mathbf{z}^{T}}{\beta\mathbf{x}})^{\circ 2}&0&0\\ \frac{\beta^{3}\mathbf{z}^{T}}{\beta\mathbf{x}^{3}}&3\frac{\beta\mathbf{z}^{T}}{\beta\mathbf{x}}\odot\frac{\beta^{2}\mathbf{z}^{T}}{\beta\mathbf{x}^{2}}&(\frac{\beta\mathbf{z}^{T}}{\beta\mathbf{x}})^{\circ 3}&0\\ \vdots&\vdots&\vdots&\ddots\end{array}\right]\left[\begin{array}{c}\frac{\beta\mathbf{y}}{\beta\mathbf{z}}\\ \frac{\beta^{2}\mathbf{y}}{\beta\mathbf{z}^{2}}\\ \frac{\beta^{3}\mathbf{y}}{\beta\mathbf{z}^{3}}\\ \vdots\end{array}\right], \tag{14}\] which is of a form consistent with Eq. (3), only with the scalar elements replaced by matrices, and the power and multiplication replaced by the Hadamard power \(\circ k\) and Hadamard product \(\odot\), respectively. We further abbreviate the above equation as \[\mathbf{v}^{y,x}=\mathbf{M}^{z,x}\mathbf{v}^{y,z}. \tag{15}\] **Mixed partial derivatives.** The first module of a neural network is mostly a linear layer, such as a fully connected layer or a convolutional layer, which satisfies \[\frac{\partial^{k}\mathbf{z}_{j}}{\partial\mathbf{x}_{i_{1}}\ldots\partial\mathbf{x}_{i_{k}}}=0\ (k>1).
\tag{16}\] The first three mixed derivatives of \(\mathbf{y}=h(g(\mathbf{x}))\) are calculated as \[\left\{\begin{array}{l}\frac{\partial\mathbf{y}}{\partial\mathbf{x}_{i_{1}}}=\sum_{j=1}^{s}\frac{\partial\mathbf{z}_{j}}{\partial\mathbf{x}_{i_{1}}}\frac{\partial\mathbf{y}}{\partial\mathbf{z}_{j}},\\ \frac{\partial^{2}\mathbf{y}}{\partial\mathbf{x}_{i_{1}}\partial\mathbf{x}_{i_{2}}}=\sum_{j=1}^{s}\frac{\partial\mathbf{z}_{j}}{\partial\mathbf{x}_{i_{1}}}\frac{\partial\mathbf{z}_{j}}{\partial\mathbf{x}_{i_{2}}}\frac{\partial^{2}\mathbf{y}}{\partial\mathbf{z}_{j}^{2}},\\ \frac{\partial^{3}\mathbf{y}}{\partial\mathbf{x}_{i_{1}}\partial\mathbf{x}_{i_{2}}\partial\mathbf{x}_{i_{3}}}=\sum_{j=1}^{s}\frac{\partial\mathbf{z}_{j}}{\partial\mathbf{x}_{i_{1}}}\frac{\partial\mathbf{z}_{j}}{\partial\mathbf{x}_{i_{2}}}\frac{\partial\mathbf{z}_{j}}{\partial\mathbf{x}_{i_{3}}}\frac{\partial^{3}\mathbf{y}}{\partial\mathbf{z}_{j}^{3}}.\end{array}\right. \tag{17}\] Based on the definitions in Eqns. (11)(12), the above equations turn into \[\left\{\begin{array}{l}\frac{\beta\mathbf{y}}{\beta\mathbf{x}}=\mathbf{Q}_{1}\frac{\beta\mathbf{y}}{\beta\mathbf{z}},\\ \left(\frac{\beta}{\beta\mathbf{x}}\otimes\frac{\beta}{\beta\mathbf{x}}\right)\mathbf{y}=\mathbf{Q}_{2}\frac{\beta^{2}\mathbf{y}}{\beta\mathbf{z}^{2}},\\ \left(\frac{\beta}{\beta\mathbf{x}}\otimes\frac{\beta}{\beta\mathbf{x}}\otimes\frac{\beta}{\beta\mathbf{x}}\right)\mathbf{y}=\mathbf{Q}_{3}\frac{\beta^{3}\mathbf{y}}{\beta\mathbf{z}^{3}},\\ \mathbf{Q}_{1}=\frac{\beta\mathbf{z}^{T}}{\beta\mathbf{x}},\quad\mathbf{Q}_{k}=\left(\frac{\beta\mathbf{z}^{T}}{\beta\mathbf{x}}\otimes\mathbf{1}^{p^{k-1}}\right)\odot\left(\mathbf{1}^{p}\otimes\mathbf{Q}_{k-1}\right),\end{array}\right. \tag{18}\] where \(\otimes\) denotes the Kronecker product and \(\mathbf{1}^{m}\in\mathbb{R}^{m}\) is an all-ones vector. Collecting the mixed partial derivatives of orders \(1,\dots,n\) into a vector \(\mathbf{v}^{\star y,x}\), the above relations can be abbreviated as \[\mathbf{v}^{\star y,x}=\mathbf{M}^{\star z,x}\mathbf{v}^{y,z},\quad\mathbf{M}^{\star z,x}=diag(\mathbf{Q}_{1},\mathbf{Q}_{2},\dots,\mathbf{Q}_{n}). \tag{19}\] ## 3 High-order Derivative Rule for Neural Networks This section introduces the high-order derivative rule of a deep neural network. Since a multiple-output network can be regarded as multiple single-output networks, we consider only the single-output case.
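The Kronecker construction of the mixed-derivative transformation (Eqs. (17)(18)) can be checked numerically. The sketch below is a minimal numpy illustration under two assumptions not tied to any particular network: a linear first module \(\mathbf{z}=\mathbf{A}\mathbf{x}\), and a head \(y=\sum_{j}\mathbf{z}_{j}^{3}\) that is separable in \(\mathbf{z}\) (so the per-coordinate form of Eq. (17) is exact); it verifies that \(\mathbf{Q}_{2}\) reproduces the flattened Hessian:

```python
import numpy as np

rng = np.random.default_rng(0)
p, s = 3, 4                        # input dim, intermediate dim
A = rng.standard_normal((s, p))    # linear first module: z = A x
x0 = rng.standard_normal(p)
z0 = A @ x0

# Q1 = beta z^T / beta x: entry (i, j) = dz_j/dx_i = A[j, i].
Q1 = A.T
# Q2 per Eq. (18): (Q1 kron 1^p) Hadamard (1^p kron Q1), shape (p*p, s).
Q2 = np.kron(Q1, np.ones((p, 1))) * np.kron(np.ones((p, 1)), Q1)

# For y = sum_j z_j^3, beta^2 y / beta z^2 = [6 z_j]; mixed second partials
# over all (i1, i2) come from Q2 @ (6 z0).
mixed_via_rule = Q2 @ (6.0 * z0)

# Direct Hessian of y(x) = sum_j (A x)_j^3: H = A^T diag(6 z0) A.
H = A.T @ np.diag(6.0 * z0) @ A
assert np.allclose(mixed_via_rule, H.reshape(-1))
```

The row index of `Q2` runs over pairs \((i_{1},i_{2})\) in row-major order, matching the row-major flattening of the Hessian.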
Without loss of generality, we derive the back-propagation of high-order derivatives for the most common modules, with the network structure illustrated in Fig. 1. Before proceeding with the detailed derivations, we define the following notations. The input is denoted as \(\mathbf{x}\), the output of the \(m\)th module as \(\mathbf{y}^{(m)}\), the length of \(\mathbf{y}^{(m)}\) as \(o_{m}\), and the final output as \(\mathbf{y}\). To simplify the expression, we abbreviate \(\mathbf{v}^{\mathbf{y},\mathbf{y}^{(m)}}\) and \(\mathbf{M}^{\mathbf{y}^{(m)},\mathbf{y}^{(m-1)}}\) as \(\mathbf{v}_{m}\) and \(\mathbf{M}_{m}\), respectively. ### _Output Unit_ As for the final output \(\mathbf{y}=\mathbf{y}^{(7)}\in\mathbb{R}\), according to the definition in Eq. (12), its \(k\)-order derivatives can be calculated as \[\frac{\beta^{k}\mathbf{y}}{\beta\mathbf{y}^{(7)^{k}}}=\left\{\begin{array}{ll}1,&k=1\\ 0.&k>1\end{array}\right. \tag{23}\] Further from Eq. (15), we can obtain the initial derivative in vector form \[\mathbf{v}_{7}=\left[\begin{array}{cccc}1&0&\cdots&0\end{array}\right]^{T}. \tag{24}\] ### _Fully Connected Layer_ For a fully connected layer, its input-output relationship is defined as \[\mathbf{y}^{(m+1)}=\mathbf{W}^{(m+1)}\mathbf{y}^{(m)}+\mathbf{b}^{(m+1)}, \tag{25}\] where \(\mathbf{W}^{(m+1)}\in\mathbb{R}^{o_{m+1}\times o_{m}}\) is the weight matrix, and \(\mathbf{b}^{(m+1)}\in\mathbb{R}^{o_{m+1}}\) is the bias vector. The \(k\)-order derivative of the \(i\)th node of \(\mathbf{y}^{(m+1)}\) with respect to the \(j\)th node of \(\mathbf{y}^{(m)}\) is \[\frac{\partial^{k}\mathbf{y}_{i}^{(m+1)}}{\partial\mathbf{y}_{j}^{(m)^{k}}}=\left\{\begin{array}{ll}\mathbf{W}_{i,j}^{(m+1)},&k=1\\ 0.&k>1\end{array}\right. \tag{26}\] Combining with the definition in Eq. (12), we have \[\frac{\beta^{k}\mathbf{y}^{(m+1)^{T}}}{\beta\mathbf{y}^{(m)^{k}}}=\left\{\begin{array}{ll}\mathbf{W}^{T},&k=1\\ \mathbf{0}.&k>1\end{array}\right.
\tag{27}\] On the one hand, we can get all the unmixed partial derivatives \(\{\frac{\partial^{k}\mathbf{y}}{\partial\mathbf{y}_{i}^{(m)^{k}}},i=1,\dots,o_{m}\}\) by calculating \(\mathbf{v}_{m}\) \[\mathbf{v}_{m}=\mathbf{M}_{m+1}\mathbf{v}_{m+1}, \tag{28}\] with the transformation matrix \(\mathbf{M}_{m+1}\) in Eq. (14) rewritten as a block diagonal matrix \[\mathbf{M}_{m+1}=diag(\mathbf{W}^{T},\mathbf{W}^{T^{\circ 2}},\dots,\mathbf{W}^{T^{\circ n}}). \tag{29}\] On the other hand, we can obtain all the mixed partial derivatives \(\{\frac{\partial^{k}\mathbf{y}}{\partial\mathbf{y}_{i_{1}}^{(m)}\dots\partial\mathbf{y}_{i_{k}}^{(m)}},i_{1},\dots,i_{k}=1,\dots,o_{m}\}\) by calculating \(\mathbf{v}_{m}^{\star}\) from \[\mathbf{v}_{m}^{\star}=\mathbf{M}^{\star}_{m+1}\mathbf{v}_{m+1}. \tag{30}\] In this equation, the transformation matrix \(\mathbf{M}^{\star}_{m+1}\) defined in Eq. (19) can also be rewritten as a block diagonal matrix \[\mathbf{M}^{\star}_{m+1}=diag(\mathbf{Q}_{1},\mathbf{Q}_{2},\dots,\mathbf{Q}_{n}), \tag{31}\] with \(\mathbf{Q}_{1}=\mathbf{W}^{T}\), \(\mathbf{Q}_{k}=\left(\mathbf{W}^{T}\otimes\mathbf{1}^{(o_{m}^{k-1})}\right)\odot(\mathbf{1}^{o_{m}}\otimes\mathbf{Q}_{k-1})\). ### _Convolutional Layer_ The input-output relationship of a convolutional layer can be described as \[\mathbf{y}^{(m+1)}=\mathbf{y}^{(m)}\ast\mathbf{F}^{(m+1)}, \tag{32}\] where \(\mathbf{F}^{(m+1)}\) is a convolutional kernel, and \(\ast\) denotes the convolution operation. Although a convolutional layer can be regarded as a fully connected layer with a sparse weight matrix and zero bias, taking derivatives in that form is both time-consuming and memory-demanding. Therefore, we derive the high-order derivative rule directly on the convolution representation.
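Before moving to the convolutional case, the fully connected rule (Eqs. (28)(29)) can be sanity-checked numerically. The sketch below is an illustration under an assumption made for exactness only: a single linear layer followed by a head \(y=\sum_{j}v_{j}^{3}\) that is separable in its input, so that the unmixed back-propagation rule holds exactly; the back-propagated derivatives are compared against central finite differences:

```python
import numpy as np

rng = np.random.default_rng(1)
o_in, o_out = 4, 3
W = rng.standard_normal((o_out, o_in))
u0 = rng.standard_normal(o_in)
v0 = W @ u0

def y(u):                       # linear layer followed by a separable cubic head
    return np.sum((W @ u) ** 3)

# beta^k y / beta v^k of the head y = sum_j v_j^3, for k = 1, 2, 3.
head = [3.0 * v0**2, 6.0 * v0, 6.0 * np.ones(o_out)]

# Back-propagate through the linear layer with the block-diagonal M of Eq. (29):
# the k-order block is the elementwise k-th power of W^T.
g = [(W.T ** k) @ head[k - 1] for k in (1, 2, 3)]

# Compare against central finite differences of y(u) around u0.
h = 1e-3
for i in range(o_in):
    e = np.zeros(o_in); e[i] = h
    fd = [(y(u0 + e) - y(u0 - e)) / (2 * h),
          (y(u0 + e) - 2 * y(u0) + y(u0 - e)) / h**2,
          (y(u0 + 2 * e) - 2 * y(u0 + e) + 2 * y(u0 - e) - y(u0 - 2 * e)) / (2 * h**3)]
    assert np.allclose([g[0][i], g[1][i], g[2][i]], fd, atol=1e-3)
```

Because \(y\) is a cubic polynomial of the input, the second- and third-order central stencils are exact up to rounding, which makes the loose tolerance sufficient.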
Fig. 1: **The framework for calculating the high-order derivatives of HOPE.** The network has seven modules: fully connected layer (fc), activation function (act), unflatten module (unflatten), convolutional layer (conv), pooling layer (pooling), flatten module (flatten), and single-output fully connected layer. The output of the \(m\)th module is \(\mathbf{y}^{(m)}\) and the final output is \(\mathbf{y}=\mathbf{y}^{(7)}\). HOPE is similar to back-propagation in that we calculate the derivatives of \(\mathbf{y}\) with respect to the intermediate output \(\mathbf{y}^{(m)}\), starting from the output layer and moving backward towards the input layer. In the intermediate layers, we only need to calculate \(\mathbf{v}_{m}\), which contains all the unmixed partial derivatives. In the input layer, we can calculate \(\mathbf{v}_{m}^{\star}\), which contains all the mixed partial derivatives. The \(u\)-th output sums over the products of some elements in \(\mathbf{y}^{(m)}\) and all elements in \(\mathbf{F}^{(m+1)}\), \[\mathbf{y}_{u}^{(m+1)}=\sum_{v}\mathbf{y}_{v}^{(m)}\mathbf{F}_{u,v}^{(m+1)}, \tag{33}\] where \(\mathbf{F}_{u,v}^{(m+1)}\) is the weight between \(\mathbf{y}_{u}^{(m+1)}\) and \(\mathbf{y}_{v}^{(m)}\). We can calculate the derivatives as \[\frac{\partial\mathbf{y}}{\partial\mathbf{y}_{v}^{(m)}}=\sum_{u}\frac{\partial\mathbf{y}}{\partial\mathbf{y}_{u}^{(m+1)}}\frac{\partial\mathbf{y}_{u}^{(m+1)}}{\partial\mathbf{y}_{v}^{(m)}}=\sum_{u}\frac{\partial\mathbf{y}}{\partial\mathbf{y}_{u}^{(m+1)}}\mathbf{F}_{u,v}^{(m+1)}.
\tag{34}\] Because \(\frac{\partial^{k}\mathbf{y}_{u}^{(m+1)}}{\partial\mathbf{y}_{v}^{(m)^{k}}}=0\) \((k>1)\), the high-order derivatives are \[\frac{\partial^{k}\mathbf{y}}{\partial\mathbf{y}_{v}^{(m)^{k}}}=\sum_{u}\frac{\partial^{k}\mathbf{y}}{\partial\mathbf{y}_{u}^{(m+1)^{k}}}\left(\frac{\partial\mathbf{y}_{u}^{(m+1)}}{\partial\mathbf{y}_{v}^{(m)}}\right)^{k}=\sum_{u}\frac{\partial^{k}\mathbf{y}}{\partial\mathbf{y}_{u}^{(m+1)^{k}}}\mathbf{F}_{u,v}^{(m+1)^{k}}. \tag{35}\] In matrix form, Eq. (34) turns into \[\frac{\beta\mathbf{y}}{\beta\mathbf{y}^{(m)}}=\frac{\beta\mathbf{y}}{\beta\mathbf{y}^{(m+1)}}\ast rot180(\mathbf{F}). \tag{36}\] Comparing Eq. (34) and Eq. (35), we can also get the matrix form of the high-order derivatives as \[\frac{\beta^{k}\mathbf{y}}{\beta\mathbf{y}^{(m)^{k}}}=\frac{\beta^{k}\mathbf{y}}{\beta\mathbf{y}^{(m+1)^{k}}}\ast rot180(\mathbf{F}^{\circ k}). \tag{37}\] Since the input of a convolutional layer is usually image-like data, we only calculate the unmixed partial derivatives as \[\mathbf{v}_{m}=\left[\begin{array}{ccc}\left(\frac{\beta\mathbf{y}}{\beta\mathbf{y}^{(m)}}\right)^{T}&\dots&\left(\frac{\beta^{n}\mathbf{y}}{\beta\mathbf{y}^{(m)^{n}}}\right)^{T}\end{array}\right]^{T}. \tag{38}\] One can also calculate the mixed partial derivatives by converting the convolutional layer into a fully connected counterpart and then applying Eq. (30) directly. ### _Nonlinear Activation Layer_ Consider a nonlinear activation layer \(\mathbf{y}^{(m+1)}=\sigma(\mathbf{y}^{(m)})\), where \(\mathbf{y}_{i}^{(m+1)}=\sigma(\mathbf{y}_{i}^{(m)})\). We have \[\frac{\partial^{k}\mathbf{y}_{i}^{(m+1)}}{\partial\mathbf{y}_{j}^{(m)^{k}}}=\left\{\begin{array}{ll}\sigma^{(k)}(\mathbf{y}_{j}^{(m)}),&i=j\\ 0,&i\neq j\end{array}\right. \tag{39}\] where \(\sigma^{(k)}(\cdot)\) is the \(k\)-order derivative of the activation function. According to the definition in Eq.
(12), we can get \[\frac{\beta^{k}\mathbf{y}^{(m+1)^{T}}}{\beta\mathbf{y}^{(m)^{k}}}=\left[\begin{array}{ccc}\frac{\partial^{k}\mathbf{y}_{1}^{(m+1)}}{\partial\mathbf{y}_{1}^{(m)^{k}}}&\cdots&\frac{\partial^{k}\mathbf{y}_{o_{m}}^{(m+1)}}{\partial\mathbf{y}_{1}^{(m)^{k}}}\\ \vdots&\ddots&\vdots\\ \frac{\partial^{k}\mathbf{y}_{1}^{(m+1)}}{\partial\mathbf{y}_{o_{m}}^{(m)^{k}}}&\cdots&\frac{\partial^{k}\mathbf{y}_{o_{m}}^{(m+1)}}{\partial\mathbf{y}_{o_{m}}^{(m)^{k}}}\end{array}\right]=diag\left(\sigma^{(k)}(\mathbf{y}_{1}^{(m)}),\dots,\sigma^{(k)}(\mathbf{y}_{o_{m}}^{(m)})\right). \tag{40}\] After calculating \(\frac{\beta^{k}\mathbf{y}^{(m+1)^{T}}}{\beta\mathbf{y}^{(m)^{k}}}\), we can further obtain the transformation matrix \(\mathbf{M}_{m+1}\) from Eq. (14) and the unmixed partial derivative vector \(\mathbf{v}_{m}\) from Eq. (28). The only remaining question is how to calculate the value of \(\sigma^{(k)}(x)\) for \(x\in\mathbb{R}\). Here we give the calculation for several widely used activation functions. \(\bullet\)**Sine \(\boldsymbol{\sigma(x)=sin(x)}\).** The derivatives are \[\sigma^{(k)}(x)=\left\{\begin{array}{ll}cos(x),&k\ mod\ 4=1\\ -sin(x),&k\ mod\ 4=2\\ -cos(x),&k\ mod\ 4=3\\ sin(x).&k\ mod\ 4=0\end{array}\right. \tag{41}\] \(\bullet\)**ReLU \(\boldsymbol{\sigma(x)=max(0,x)}\).** The derivatives are \[\sigma^{(k)}(x)=\left\{\begin{array}{ll}1,&if\ k=1\ and\ x>0\\ 0.&else\end{array}\right. \tag{42}\] \(\bullet\)**Sigmoid \(\boldsymbol{\sigma(x)=\frac{e^{x}}{1+e^{x}}}\).** The first derivative is \[\sigma^{(1)}(x)=\frac{e^{x}}{(1+e^{x})^{2}}=\sigma(x)-\sigma(x)^{2}.
\tag{43}\] Further, we can express \(\sigma^{(k)}(x)\) in a form containing only \(\sigma(x)\), i.e., \[\begin{split}\sigma^{(2)}(x)&=\sigma^{(1)}(x)-2\sigma(x)\sigma^{(1)}(x)\\ &=[\sigma(x)-\sigma(x)^{2}]-2\sigma(x)[\sigma(x)-\sigma(x)^{2}]\\ &=\sigma(x)-3\sigma(x)^{2}+2\sigma(x)^{3}.\end{split} \tag{44}\] After calculating the other derivatives and organizing them into matrix form, we have \[\left[\begin{array}{c}\sigma(x)\\ \sigma^{(1)}(x)\\ \sigma^{(2)}(x)\\ \vdots\\ \sigma^{(n)}(x)\end{array}\right]=\left[\begin{array}{cccc}1&0&0&0\\ 1&-1&0&0\\ 1&-3&2&0\\ \vdots&\vdots&\vdots&\ddots\end{array}\right]\left[\begin{array}{c}\sigma(x)\\ \sigma(x)^{2}\\ \vdots\\ \sigma(x)^{n+1}\end{array}\right]. \tag{45}\] This square matrix takes a lower triangular form, and we abbreviate it as \(\mathbf{B}\in\mathbb{R}^{(n+1)\times(n+1)}\). Similar to the derivation of the transformation matrix in Eq. (3), from the above equation, the \(k\)-th and \((k+1)\)-th derivatives are \[\sigma^{(k)}(x)=\sum_{i=1}^{k+1}B_{k+1,i}\sigma(x)^{i}, \tag{46}\] \[\sigma^{(k+1)}(x)=\sum_{i=1}^{k+2}B_{k+2,i}\sigma(x)^{i}. \tag{47}\] Taking derivatives over both sides of Eq. (46) yields \[\begin{split}\sigma^{(k+1)}(x)&=\sum_{i=1}^{k+1}iB_{k+1,i}\sigma(x)^{i-1}\sigma^{(1)}(x)\\ &=\sum_{i=1}^{k+1}iB_{k+1,i}\sigma(x)^{i-1}[\sigma(x)-\sigma(x)^{2}]\\ &=\sum_{i=1}^{k+1}iB_{k+1,i}\sigma(x)^{i}-\sum_{i=1}^{k+1}iB_{k+1,i}\sigma(x)^{i+1}\\ &=\sum_{i=1}^{k+1}iB_{k+1,i}\sigma(x)^{i}-\sum_{i=2}^{k+2}(i-1)B_{k+1,i-1}\sigma(x)^{i}\\ &=\sum_{i=1}^{k+2}[iB_{k+1,i}-(i-1)B_{k+1,i-1}]\sigma(x)^{i}.\end{split} \tag{48}\] Comparing Eq. (47) and Eq. (48), we arrive at the following relationship \[\left\{\begin{array}{ll}B_{1,1}=1,&\\ B_{i,j}=0,&i<j\\ B_{i,j}=jB_{i-1,j}-(j-1)B_{i-1,j-1},&i\geq j\end{array}\right.
\tag{49}\] * **Tanh \(\mathbf{\sigma(x)}=\frac{e^{x}-e^{-x}}{e^{x}+e^{-x}}\).** The first and second derivatives are \[\begin{split}\sigma^{(1)}(x)&=1-(\frac{e^{x}-e^{-x} }{e^{x}+e^{-x}})^{2}=1-\sigma(x)^{2},\\ \sigma^{(2)}(x)&=-2\sigma(x)\sigma^{(1)}(x)=-2\sigma( x)+2\sigma(x)^{3}.\end{split}\] (50) Organize it into matrix form: \[\left[\begin{array}{c}1\\ \sigma(x)\\ \sigma^{(1)}(x)\\ \sigma^{(2)}(x)\\ \vdots\\ \sigma^{(n)}(x)\end{array}\right]=\left[\begin{array}{ccccc}1&0&0&0&0\\ 0&1&0&0&0\\ 1&0&-1&0&0\\ 0&-2&0&2&0\\ \vdots&\vdots&\vdots&\vdots&\ddots\end{array}\right]\left[\begin{array}{c}1 \\ \sigma(x)\\ \sigma(x)^{2}\\ \sigma(x)^{3}\\ \vdots\\ \sigma(x)^{n+1}\end{array}\right]. \tag{51}\] This square matrix takes a lower triangular form, and we abbreviate it as \(\mathbf{C}\in\mathbb{R}^{n+2\times n+2}\). From above equation, the \(k\)-th and \((k+1)\)-th derivatives are \[\sigma^{(k)}(x)=\sum_{i=1}^{k+2}C_{k+2,i}\sigma(x)^{i-1}, \tag{52}\] \[\sigma^{(k+1)}(x)=\sum_{i=1}^{k+3}C_{k+3,i}\sigma(x)^{i-1}. \tag{53}\] Taking derivatives over both sides of Eq. (52) \[\begin{split}&\sigma^{(k+1)}(x)=\sum_{i=1}^{k+2}(i-1)C_{k+2,i} \sigma(x)^{i-2}\sigma^{(1)}(x)\\ &=\sum_{i=1}^{k+2}(i-1)C_{k+2,i}\sigma(x)^{i-2}[1-\sigma(x)^{2}] \\ &=\sum_{i=1}^{k+2}(i-1)C_{k+2,i}\sigma(x)^{i-2}-\sum_{i=1}^{k+2}(i- 1)C_{k+2,i}\sigma(x)^{i}\\ &=\sum_{i=0}^{k+1}iC_{k+2,i+1}\sigma(x)^{i-1}-\sum_{i=2}^{k+3}(i-2)C_{ k+2,i-1}\sigma(x)^{i-1}\\ &=\sum_{i=1}^{k+3}[iC_{k+2,i+1}-(i-2)C_{k+2,i-1}]\sigma(x)^{i-1} \end{split} \tag{54}\] Comparing Eq. (53) and Eq. (54), we have \[\left\{\begin{array}{ll}C_{1,1}=1,C_{2,1}=0,C_{2,2}=1,\\ C_{i,j}=0,&i<j\\ C_{i,j}=jC_{i-1,j+1}-(j-2)C_{i-1,j-1}.&i\geq j,i\geq 3\end{array}\right. \tag{55}\] ### _Pooling Layer_ The pooling layer divides the input into blocks and takes either the maximal or average value of each block as the output. To demonstrate the expansion of the pooling layer, we take the simple 2-D pooling layer shown in Fig. 2 as an example. 
* **Max pooling** layer outputs the maximum entry of each block, i.e., \[\begin{split}\mathbf{y}_{1}^{(m+1)}&=max(\mathbf{y}_{1}^{(m)},\mathbf{y}_{2}^{(m)},\mathbf{y}_{5}^{(m)},\mathbf{y}_{6}^{(m)})=\mathbf{y}_{idx_{1}}^{(m)},\\ \mathbf{y}_{2}^{(m+1)}&=max(\mathbf{y}_{3}^{(m)},\mathbf{y}_{4}^{(m)},\mathbf{y}_{7}^{(m)},\mathbf{y}_{8}^{(m)})=\mathbf{y}_{idx_{2}}^{(m)},\\ \mathbf{y}_{3}^{(m+1)}&=max(\mathbf{y}_{9}^{(m)},\mathbf{y}_{10}^{(m)},\mathbf{y}_{13}^{(m)},\mathbf{y}_{14}^{(m)})=\mathbf{y}_{idx_{3}}^{(m)},\\ \mathbf{y}_{4}^{(m+1)}&=max(\mathbf{y}_{11}^{(m)},\mathbf{y}_{12}^{(m)},\mathbf{y}_{15}^{(m)},\mathbf{y}_{16}^{(m)})=\mathbf{y}_{idx_{4}}^{(m)},\end{split}\] (56) where \(idx_{1}\in\{1,2,5,6\},idx_{2}\in\{3,4,7,8\},idx_{3}\in\{9,10,13,14\},idx_{4}\in\{11,12,15,16\}\) are the subscripts of the maximum inputs in the four blocks, respectively. Given \(\frac{\partial^{k}\mathbf{y}}{\partial\mathbf{y}_{i}^{(m+1)^{k}}}\), we can calculate the derivatives with respect to \(\mathbf{y}^{(m)}\) as \[\frac{\partial^{k}\mathbf{y}}{\partial\mathbf{y}_{j}^{(m)^{k}}}=\left\{\begin{array}{ll}\frac{\partial^{k}\mathbf{y}}{\partial\mathbf{y}_{i}^{(m+1)^{k}}},&if\ j=idx_{i}\\ 0.&else\end{array}\right.\] (57) Therefore, we only need to record the subscripts of the corresponding maximum inputs and assign \(\frac{\partial^{k}\mathbf{y}}{\partial\mathbf{y}_{i}^{(m+1)^{k}}}\) to \(\frac{\partial^{k}\mathbf{y}}{\partial\mathbf{y}_{idx_{i}}^{(m)^{k}}}\).
When the stride is less than the kernel size, one input may be related to multiple outputs, and the derivative can be written in the following general formula \[\frac{\partial^{k}\mathbf{y}}{\partial\mathbf{y}_{j}^{(m)^{k}}}=\sum_{\begin{subarray}{c}i=1\\ j=idx_{i}\end{subarray}}^{o_{m+1}}\frac{\partial^{k}\mathbf{y}}{\partial\mathbf{y}_{i}^{(m+1)^{k}}}.\] (58) * **Average pooling** layer takes the average over each block as the output, i.e., \[\begin{split}\mathbf{y}_{1}^{(m+1)}&=\frac{1}{4}(\mathbf{y}_{1}^{(m)}+\mathbf{y}_{2}^{(m)}+\mathbf{y}_{5}^{(m)}+\mathbf{y}_{6}^{(m)}),\\ \mathbf{y}_{2}^{(m+1)}&=\frac{1}{4}(\mathbf{y}_{3}^{(m)}+\mathbf{y}_{4}^{(m)}+\mathbf{y}_{7}^{(m)}+\mathbf{y}_{8}^{(m)}),\\ \mathbf{y}_{3}^{(m+1)}&=\frac{1}{4}(\mathbf{y}_{9}^{(m)}+\mathbf{y}_{10}^{(m)}+\mathbf{y}_{13}^{(m)}+\mathbf{y}_{14}^{(m)}),\\ \mathbf{y}_{4}^{(m+1)}&=\frac{1}{4}(\mathbf{y}_{11}^{(m)}+\mathbf{y}_{12}^{(m)}+\mathbf{y}_{15}^{(m)}+\mathbf{y}_{16}^{(m)}).\end{split}\] (59) As shown in Fig. 3, we can regard an average pooling layer as a special convolutional layer, with the number of input channels equal to the number of output channels. After getting the equivalent convolutional layer of the average pooling layer, we can obtain the derivatives easily from the high-order derivative rule of the convolutional layer. Fig. 2: **The illustration of a simple 2-D pooling layer downsampling \(\mathbf{y}^{(m)}\) to \(\mathbf{y}^{(m+1)}\).** Here the kernel size is \(2\times 2\), and the stride is \(2\times 2\). ## 4 Taylor Expansion of Neural Networks As shown in Fig. 1, applying one forward propagation and one back-propagation, we can get a mixed partial derivative vector \(\mathbf{v}_{0}^{\star}\) on a reference input \(\mathbf{x}0\), which contains all the \(n\)-order derivatives \(\{\frac{\partial^{k}\mathbf{y}}{\partial\mathbf{x}_{i_{1}}\ldots\partial\mathbf{x}_{i_{k}}},i_{1},\ldots,i_{k}=1,\ldots,p;\ k=1,\ldots,n\}\).
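Assembling the Taylor polynomial from such a derivative vector can be illustrated in one dimension with a function whose derivatives are known in closed form. The sketch below uses \(\sin\) as a stand-in for a network \(f(\mathbf{x};\theta)\) (an illustrative assumption, not the paper's setup) and checks the approximation behavior near the reference input:

```python
import math

# Stand-in for a trained 1-D network f(x; theta): f = sin, whose k-order
# derivatives at x0 are sin(x0 + k*pi/2); they play the role of v0.
x0, n = 0.3, 7
derivs = [math.sin(x0 + k * math.pi / 2) for k in range(n + 1)]

def taylor(x):
    """n-order Taylor polynomial assembled from the derivative vector at x0."""
    dx = x - x0
    return sum(d * dx**k / math.factorial(k) for k, d in enumerate(derivs))

# Close to the reference input the polynomial is nearly exact, consistent with
# the |Delta x|^n behavior of the Lagrange remainder.
assert abs(taylor(0.8) - math.sin(0.8)) < 1e-6
assert abs(taylor(0.35) - math.sin(0.35)) < 1e-12
```

The two assertions show how the error shrinks as \(|\Delta\mathbf{x}|\) shrinks, the same dependence discussed for the bounds below.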
Denote the neural network as \(\mathbf{y}=f(\mathbf{x};\theta)\), with \(\theta\) being the network parameters. The \(n\)-order Taylor polynomial of \(f(\mathbf{x};\theta)\) can be expressed as \[\begin{split} f(\mathbf{x})&=f(\mathbf{x}0;\theta)+\sum_{i=1}^{p}\frac{\frac{\partial f(\mathbf{x};\theta)}{\partial\mathbf{x}_{i}}|_{\mathbf{x}0}}{1!}\Delta\mathbf{x}_{i}+\dots\\ &+\sum_{i_{1},\dots,i_{n}=1}^{p}\frac{\frac{\partial^{n}f(\mathbf{x};\theta)}{\partial\mathbf{x}_{i_{1}}\dots\partial\mathbf{x}_{i_{n}}}|_{\mathbf{x}0}}{n!}\Delta\mathbf{x}_{i_{1}}\dots\Delta\mathbf{x}_{i_{n}},\end{split} \tag{60}\] where \(\Delta\mathbf{x}=\mathbf{x}-\mathbf{x}0\), and \(\frac{\partial^{k}f(\mathbf{x};\theta)}{\partial\mathbf{x}_{i_{1}}\dots\partial\mathbf{x}_{i_{k}}}|_{\mathbf{x}0}\) is a \(k\)-order partial derivative evaluated at the reference input \(\mathbf{x}0\). When all the modules are infinitely differentiable, the network is equivalent to an infinite Taylor polynomial. If the high-order derivatives are much smaller than the low-order ones, we can ignore the high-order terms, and Eq. (60) provides an accurate and explicit explanation for the prediction. ### _Upper and Lower Bounds of Network and Taylor Polynomial_ For simplicity, we take a 1-D neural network as an example (i.e., \(\mathbf{x},\mathbf{x}0\in\mathbb{R}\)). Suppose \(f(\mathbf{x};\theta)\) has \(n\)-order continuous derivatives in the interval \([a,b]\) and \(\mathbf{x}0\in(a,b)\); then for \(\forall\mathbf{x}\in[a,b]\), \(\exists\xi\in[\min(\mathbf{x},\mathbf{x}0),\max(\mathbf{x},\mathbf{x}0)]\), \(s.t.\) \[f(\mathbf{x};\theta)=\sum_{k=0}^{n-1}\frac{f(\mathbf{x}0;\theta)^{(k)}}{k!}\Delta\mathbf{x}^{k}+\frac{f(\xi;\theta)^{(n)}}{n!}\Delta\mathbf{x}^{n}, \tag{61}\] where \(f(\mathbf{x};\theta)^{(k)}\) is the \(k\)-order derivative with respect to the input \(\mathbf{x}\), and \(\frac{f(\xi;\theta)^{(n)}}{n!}\Delta\mathbf{x}^{n}\) is an \(n\)-order Lagrange remainder.
Applying the proposed high-order Taylor expansion at \(\mathbf{x}_{0}\), \(f(\mathbf{x};\theta)\)'s \(n\)-order Taylor polynomial is derived as \[f(\mathbf{x})=\sum_{k=0}^{n-1}\frac{f(\mathbf{x}_{0};\theta)^{(k)}}{k!}\Delta\mathbf{x}^{k}+\frac{f(\mathbf{x}_{0};\theta)^{(n)}}{n!}\Delta\mathbf{x}^{n}. \tag{62}\] Setting \[\begin{split} f_{1}(\mathbf{x})&=\sum_{k=0}^{n-1}\frac{f(\mathbf{x}_{0};\theta)^{(k)}}{k!}\Delta\mathbf{x}^{k}+\frac{\max_{\mathbf{x}\in[a,b]}f(\mathbf{x};\theta)^{(n)}}{n!}\Delta\mathbf{x}^{n},\\ f_{2}(\mathbf{x})&=\sum_{k=0}^{n-1}\frac{f(\mathbf{x}_{0};\theta)^{(k)}}{k!}\Delta\mathbf{x}^{k}+\frac{\min_{\mathbf{x}\in[a,b]}f(\mathbf{x};\theta)^{(n)}}{n!}\Delta\mathbf{x}^{n},\end{split} \tag{63}\] we can provide the upper and lower boundaries of the network and the Taylor polynomial as \[\begin{split} f_{d}(\mathbf{x})&\leq f(\mathbf{x};\theta)\leq f_{u}(\mathbf{x}),\\ f_{d}(\mathbf{x})&\leq f(\mathbf{x})\leq f_{u}(\mathbf{x}),\end{split} \tag{64}\] where \[\begin{split} f_{u}(\mathbf{x})&=\max\left(f_{1}(\mathbf{x}),f_{2}(\mathbf{x})\right),\\ f_{d}(\mathbf{x})&=\min\left(f_{1}(\mathbf{x}),f_{2}(\mathbf{x})\right).\end{split} \tag{65}\] Further, we can provide an upper boundary of the approximation error as \[\begin{split}|f(\mathbf{x})&-f(\mathbf{x};\theta)|\leq|f_{u}(\mathbf{x})-f_{d}(\mathbf{x})|=|f_{1}(\mathbf{x})-f_{2}(\mathbf{x})|\\ &=\frac{\max_{\mathbf{x}\in[a,b]}f(\mathbf{x};\theta)^{(n)}-\min_{\mathbf{x}\in[a,b]}f(\mathbf{x};\theta)^{(n)}}{n!}|\Delta\mathbf{x}|^{n}.\end{split} \tag{66}\] From the above equation, we can see that the approximation performance is closely related to three factors. 1. The range of \(f(\mathbf{x};\theta)^{(n)}\) in the interval \([a,b]\): \(r=\max_{\mathbf{x}\in[a,b]}f(\mathbf{x};\theta)^{(n)}-\min_{\mathbf{x}\in[a,b]}f(\mathbf{x};\theta)^{(n)}\). Specifically, \(r\) decreases as \(|f(\mathbf{x};\theta)^{(n+1)}|\) decreases, and a smaller \(r\) leads to a smaller approximation error. 2.
The order of the Taylor polynomial: \(n\). When \(n\) is large, the growth rate of \(n!\) far exceeds that of \(|\Delta\mathbf{x}|^{n}\), resulting in a decrease in \(|f(\mathbf{x})-f(\mathbf{x};\theta)|\). 3. The distance from \(\mathbf{x}\) to the reference point \(\mathbf{x}_{0}\): \(|\Delta\mathbf{x}|\). The closer \(\mathbf{x}\) is to \(\mathbf{x}_{0}\), the smaller the approximation error tends to be. It should be noted that Eqs. (64) and (66) are theoretical bounds for the error between the neural network and its Taylor polynomial, and they hold only when the first \(n\) derivatives are accurate enough. However, in practice, due to the precision limitations of computer storage and computation, it cannot be guaranteed that the upper bound on the approximation error can always be estimated accurately in all cases. ### _Convergence Analysis of HOPE_ If \(\exists n\), \(s.t.\)\(\forall\mathbf{x}\in[a,b]\), \(|f(\mathbf{x};\theta)^{(n+1)}|\to 0\), we have \[\max_{\mathbf{x}\in[a,b]}f(\mathbf{x};\theta)^{(n)}-\min_{\mathbf{x}\in[a,b]}f(\mathbf{x};\theta)^{(n)}\to 0.\] Further, from Eq. (66), the approximation error \(|f(\mathbf{x})-f(\mathbf{x};\theta)|\to 0\), and the Taylor polynomial converges to the target neural network, i.e., \(f(\mathbf{x})\to f(\mathbf{x};\theta)\). In this section, we analyze the condition for \(|f(\mathbf{x};\theta)^{(n+1)}|\to 0\). Fig. 3: **The illustration of a convolutional layer equivalent to the average pooling layer.** Here the numbers of input and output channels are both 3, and 3 kernels are involved. Convolving the input with kernel \(\#1\) acts as an average pooling applied specifically to the first input channel, and a similar equivalence holds for the other two kernels. From Eqns. (29)(31)(37), we can see that the \(k\)-order derivatives are related to \(\mathbf{W}_{i,j}^{k}\). \[|f(\mathbf{x};\theta)^{(k)}|\propto|\mathbf{W}_{i,j}|^{k}.
\tag{67}\] When the elements in \(\mathbf{W}\) are concentrated near 0, high-order derivatives are more likely to approach 0. When the parameters are located far from 0, high-order derivatives may become increasingly larger, and thus the Taylor polynomial diverges, i.e., \(f(\mathbf{x})\nrightarrow f(\mathbf{x};\theta)\). \[\begin{split}\lim_{\begin{subarray}{c}|\mathbf{W}_{i,j}|\to 0\\ k\rightarrow\infty\end{subarray}}|f(\mathbf{x};\theta)^{(k)}|=0,\\ \lim_{\begin{subarray}{c}|\mathbf{W}_{i,j}|>1\\ k\rightarrow\infty\end{subarray}}|f(\mathbf{x};\theta)^{(k)}|=+\infty.\end{split} \tag{68}\] The above analysis indicates that the parameter distribution of each layer has a great influence on the convergence of the Taylor expansion. Therefore, we can follow the above rules when designing network structures, or impose constraints on the network parameters during training, to obtain deep neural networks that admit an accurate high-order Taylor approximation, which facilitates leveraging the advantages of such an explicit expansion. We will verify this conclusion in the Experiments section. ### _Time Complexity Analysis of HOPE_ As a fundamental building block and one of the most time-consuming operations in deep learning [15], back-propagation is implemented via automatic differentiation on the computational graph of the neural network in most deep learning frameworks, such as Autograd [37]. Here, we analyze and compare the time complexity of the computational-graph-based method and HOPE for a \(p\)-D neural network. When calculating high-order derivatives, the length of the computational graph increases exponentially (with base 2) with the order of the derivative, because for each node one needs to accumulate the local derivatives along all the paths from the node to the input, resulting in an exponential increase in the number of nodes in the computational graph.
For the computational-graph-based method, mathematically, there are \(p^{k}\) \(k\)-order derivatives, and the length of their computational graphs is \(2^{k-1}\), so the time complexity of the computational graph is \[T(n)=\sum_{k=1}^{n}p^{k}2^{k-1}\sim\mathcal{O}((2p)^{n}). \tag{69}\] In contrast, HOPE obtains all the derivatives at once, with the main calculations lying in computing the transformation matrix \(\mathbf{M}\) and conducting back-propagation. Since \(\mathbf{M}\) is a lower triangular matrix and the block matrices in the \(k\)-th row need \(k\) operations, the complexity of calculating \(\mathbf{M}\) is \(T(n)=\sum_{k=1}^{n}k^{2}=\frac{n(n+1)(2n+1)}{6}\sim\mathcal{O}(n^{3})\). For linear layers, \(\mathbf{M}\) turns into a diagonal matrix and the complexity reduces to \(T(n)=\sum_{k=1}^{n}k=\frac{n(n+1)}{2}\sim\mathcal{O}(n^{2})\). For mixed partial derivatives in Eq. (21), \(\mathbf{M}^{*}\) is a diagonal matrix and the size of \(\mathbf{Q}_{k}\) is \(p^{k-1}\) times larger than \(\mathbf{W}\), so the complexity is about \(T(n)=\sum_{k=1}^{n}p^{k-1}=\frac{p^{n}-1}{p-1}\sim\mathcal{O}(p^{n})\). Therefore, the complexity of HOPE is \[\mathcal{O}(n^{2})<T(n)<\mathcal{O}(p^{n}). \tag{70}\] ## 5 Experiments In this section, we quantitatively demonstrate HOPE's significant advantages in terms of accuracy, speed, and memory consumption. We further explore the influence of the target network's parameter distribution on the convergence of its Taylor series, which verifies the conclusion in Section 4.2. We also visualize the upper and lower bounds of a neural network and its Taylor polynomial at increasing orders of the polynomial. Finally, we conduct three experiments to show HOPE's applications, including function discovery, low-cost inference, and feature selection. ### _Approximation Accuracy_ Among all the computational-graph-based methods, Autograd [37] stands out as the most widely used and convenient approach.
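As a quick numerical check of the cost model in Eqs. (69) and (70) (an illustrative operation count of ours, not measured runtime):

```python
def t_graph(n, p):
    """Computational-graph cost of Eq. (69): sum_{k=1}^{n} p^k * 2^(k-1)."""
    return sum(p ** k * 2 ** (k - 1) for k in range(1, n + 1))

def t_hope_matrix(n):
    """HOPE's transformation-matrix cost for general layers: n(n+1)(2n+1)/6."""
    return n * (n + 1) * (2 * n + 1) // 6

p, n = 3, 10
assert t_hope_matrix(n) == sum(k * k for k in range(1, n + 1))  # closed form checks out
assert t_graph(n, p) > 1000 * t_hope_matrix(n)                  # exponential vs. polynomial
```

Even for a modest 3-D input and order 10, the graph-based count is already several orders of magnitude above HOPE's matrix cost, matching the exponential-versus-polynomial gap claimed above.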
Therefore, we select Autograd as the benchmark for comparison in this section. Fig. 4 shows the approximation accuracy of HOPE and Autograd on different neural networks. Specifically, we calculate the first 10-order derivatives with HOPE and Autograd separately, and obtain the Taylor polynomials with Eq. (60). In Fig. 4(a), we compare the output curves of the 1-D network, the Taylor polynomial of HOPE, and that of Autograd. When all the modules of the network are \(n\)-order differentiable, HOPE achieves a highly accurate local approximation of this neural network, while Autograd suffers from large deviations as the input moves far away from the reference point, which indicates that HOPE obtains the high-order derivatives more accurately. When the network contains modules like ReLU and Max Pooling, both HOPE and Autograd can only obtain the first-order derivative. In Fig. 4(b), we draw the input-to-output mapping surfaces and the residuals of a 2D-CNN. One can observe that HOPE's output is closer to the target surface, as shown by the residuals of both methods, further demonstrating HOPE's superior accuracy in calculating high-order derivatives and its capability to perform local approximations to neural networks. In Fig. 4(c), we train an MNIST [41] 01 classifier, and obtain all of its first 10-order unmixed partial derivatives on a certain input image with HOPE, while Autograd is unable to decompose this model because the input dimension is too large. We vary the intensity of point \(\mathbf{x}_{18,9}\) while keeping the other input points unchanged, and plot the input-to-output mapping of the network and the Taylor polynomial, showing the local approximation ability of HOPE. The top \(10\%\) positive factors of the first 10-order derivatives are shown on the right.
Previous works on network interpretability usually analyze feature contributions from the first-order derivatives of the adopted neural network, but the visualized derivatives indicate that higher-order derivatives also reflect the influence of the input on the output; we will explore how to utilize higher-order heat maps in future work. ### _Running Efficiency_ We test the efficiency of the proposed approach HOPE and of Autograd in terms of running time and memory cost. The experiments are conducted on Windows with an NVIDIA GeForce RTX 3080 Ti GPU, 32 GB of memory, and 20 CPU cores. The results are shown in Tab. (1). We test on three MLPs with different input dimensions (\(1\sim 3\)) but the same structure (a 10-layer MLP with a width of 1024), and an MNIST \(01\) classifier. We calculate all the mixed partial derivatives of the \(1\sim 3\)-dimensional networks, and for the MNIST network, we only calculate its unmixed partial derivatives. For a \(p\)-D network, there are \(n\) (\(p=1\)) or \(\frac{p^{n+1}-p}{p-1}\) (\(p>1\)) mixed partial derivatives, and for the MNIST network, there are \(784n\) unmixed partial derivatives. HOPE takes less than 1 s and 6% of the available memory in all cases, while the time consumption of Autograd is much longer and increases exponentially with \(n\), even running out of memory (OOM). This significant superiority validates the time and memory efficiency of HOPE. ### _Convergence Under Different Parameter Settings_ Tab. (2) shows the influence of the parameter distribution of the target neural network on the convergence of its Taylor series. We initialize the weights of an MLP (width 512, depth 5) to follow a uniform distribution \(\mathbf{W}_{i,j}^{(m+1)}\sim U(-\frac{w_{0}}{\sigma_{m}},\frac{w_{0}}{\sigma_{m}})\).
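The derivative counts quoted above can be verified by enumerating index tuples (a small sketch of ours): a mixed partial of order \(k\) corresponds to one tuple \((i_1,\ldots,i_k)\) with each index in \(\{1,\ldots,p\}\), so the total up to order \(n\) is \(\sum_{k=1}^{n} p^k = \frac{p^{n+1}-p}{p-1}\) for \(p>1\):

```python
from itertools import product

def count_by_enumeration(p, n):
    """Count all mixed partial derivatives up to order n by listing index tuples."""
    return sum(len(list(product(range(p), repeat=k))) for k in range(1, n + 1))

p, n = 3, 5
assert count_by_enumeration(p, n) == (p ** (n + 1) - p) // (p - 1)  # 363 tuples for p=3, n=5
assert count_by_enumeration(1, n) == n                              # 1-D case: just n derivatives
```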
As the value of \(w_{0}\) decreases, the parameters concentrate more closely around zero, and the high-order derivatives of the model are more likely to become increasingly smaller, according to the inference in Section 4.2. The data in this table are the absolute values of the \(n\)-order derivative divided by the first-order derivative (i.e., \(|\frac{\partial^{n}f}{\partial x^{n}}|/|\frac{\partial f}{\partial x}|\)). We varied \(w_{0}\) from 0.01 to 100. When \(w_{0}=0.01\) and \(w_{0}=0.1\), the high-order derivatives are much smaller than the low-order derivatives. When \(w_{0}=1\), almost all of the derivatives are on the same order of magnitude. When \(w_{0}=10\) and \(w_{0}=100\), the high-order derivatives are far larger than the low-order derivatives, which means that we cannot ignore the higher-order terms when making local approximations to the neural network. ### _Upper and Lower Bounds of a Neural Network and its Taylor Polynomial_ Based on Eq. 66, one can calculate the theoretical error bound between a neural network and its Taylor polynomial. In Fig. 5, the first panel shows the maximum approximation error \(e_{1}\) in the interval [-6,6] at different orders, and the theoretical upper bound of the error \(e_{2}\). A small \(e_{1}\) or \(e_{2}\) indicates that the model has small prediction errors at each point. One can see that the theoretical error \(e_{2}\) is always larger than \(e_{1}\), and when \(n>14\) the magnitude of \(e_{2}\) reduces to a small value, resulting in a high approximation accuracy (small \(e_{1}\)). Fig. 4: **Approximation results of HOPE and Autograd.** (a) Approximation curves of 1-D networks with Sine (middle left), ReLU (bottom left), Average Pooling (middle right), and Max Pooling (bottom right). (b) Approximation surfaces (left) and approximation residuals (right) of a 2-D network. (c) Expansion results of an MNIST 01 classifier.
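The trend in Tab. (2) can be mimicked with a one-neuron example (our sketch, not the paper's MLP experiment): for \(f(x)=\sin(w x)\) the \(k\)-order derivative has envelope \(|w|^{k}\), so the ratio of the \(n\)-order to the first-order derivative magnitude behaves like \(|w|^{n-1}\) — collapsing for small weights and exploding for large ones, exactly the column-wise pattern of the table:

```python
def kth_derivative_scale(w, k):
    """Envelope of the k-order derivative of sin(w*x): |w|^k."""
    return abs(w) ** k

small_w, large_w = 0.1, 10.0
ratios_small = [kth_derivative_scale(small_w, k) / kth_derivative_scale(small_w, 1)
                for k in range(1, 6)]
ratios_large = [kth_derivative_scale(large_w, k) / kth_derivative_scale(large_w, 1)
                for k in range(1, 6)]

assert ratios_small == sorted(ratios_small, reverse=True)  # shrinking: fast convergence
assert ratios_large == sorted(ratios_large)                # growing: high-order terms dominate
```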
High-order heat maps were calculated using HOPE (right), and a comparison (left) was made between the network and the Taylor polynomial with \(\mathbf{x}_{18,9}\) changed while keeping the other input points unchanged. The other panels in Fig. 5 show the evolution of the approximation curves with increasing orders. \(f_{1}(x)\) and \(f_{2}(x)\) are the theoretical bounds of the network and its Taylor polynomial, and their expressions can be found in Eq. 63. As the degree of the approximation increases, the bounds \(f_{1}(x)\) and \(f_{2}(x)\) gradually converge, resulting in a closer approximation between the neural network and its Taylor polynomial. ### _Applications_ **Interpretation of deep neural networks and function discovery.** Implicit neural functions [40, 42, 43, 44, 45] have strong expressive capability and can be used to describe unknown systems from observations. Taking the 2-D function \[y=\frac{\mathbf{x}_{1}^{2}+\mathbf{x}_{2}}{2},\ \mathbf{x}_{1},\mathbf{x}_{2}\in[-1,1] \tag{71}\] as an example, we uniformly sample in the range \([-1,1]^{2}\) and then use a 2-D MLP to fit its observations.
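The function-discovery workflow can be sketched without a trained MLP by differentiating the ground-truth function itself (our illustrative example): central finite differences at the reference point \((0,0)\) — exact for quadratics — recover the degree-2 Taylor coefficients, i.e., \(y \approx 0.5\mathbf{x}_{2} + 0.5\mathbf{x}_{1}^{2}\):

```python
def y(x1, x2):
    """Ground-truth 2-D function of Eq. (71)."""
    return (x1 ** 2 + x2) / 2.0

h = 0.5  # step size; central differences are exact for polynomials of degree <= 2
d1 = (y(h, 0) - y(-h, 0)) / (2 * h)                               # dy/dx1
d2 = (y(0, h) - y(0, -h)) / (2 * h)                               # dy/dx2
d11 = (y(h, 0) - 2 * y(0, 0) + y(-h, 0)) / h ** 2                 # d2y/dx1^2
d22 = (y(0, h) - 2 * y(0, 0) + y(0, -h)) / h ** 2                 # d2y/dx2^2
d12 = (y(h, h) - y(h, -h) - y(-h, h) + y(-h, -h)) / (4 * h ** 2)  # mixed partial

# Taylor coefficients: y ~ d1*x1 + d2*x2 + (d11/2)*x1^2 + d12*x1*x2 + (d22/2)*x2^2
assert abs(d1) < 1e-9 and abs(d2 - 0.5) < 1e-9
assert abs(d11 / 2 - 0.5) < 1e-9 and abs(d12) < 1e-9 and abs(d22) < 1e-9
```

In the paper's setting, the same coefficients are instead read off from HOPE's expansion of the fitted MLP, as shown next.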
Since the MLP is a "black-box", we expand it into 2-order Taylor polynomials at the reference inputs (0.0, 0.0), (0.5, 0.5), and (-0.5, -0.5) separately, and obtain the following explicit expressions \[\mathbf{y} =-0.01+0.00\mathbf{x}_{1}+0.51\mathbf{x}_{2}+0.55\mathbf{x}_{1}^{2}-0.00\mathbf{x}_{1}\mathbf{x}_{2}+0.03\mathbf{x}_{2}^{2}\] \[\approx 0.51\mathbf{x}_{2}+0.55\mathbf{x}_{1}^{2},\] \[\mathbf{y} =0.38+0.54(\mathbf{x}_{1}-0.5)+0.51(\mathbf{x}_{2}-0.5)+0.53(\mathbf{x}_{1}-0.5)^{2}\] \[+0.06(\mathbf{x}_{1}-0.5)(\mathbf{x}_{2}-0.5)+0.03(\mathbf{x}_{2}-0.5)^{2}\] \[=0.01-0.02\mathbf{x}_{1}+0.45\mathbf{x}_{2}+0.53\mathbf{x}_{1}^{2}+0.06\mathbf{x}_{1}\mathbf{x}_{2}+0.03\mathbf{x}_{2}^{2}\] \[\approx 0.45\mathbf{x}_{2}+0.53\mathbf{x}_{1}^{2},\] \[\mathbf{y} =-0.13-0.51(\mathbf{x}_{1}+0.5)+0.49(\mathbf{x}_{2}+0.5)+0.50(\mathbf{x}_{1}+0.5)^{2}\] \[-0.07(\mathbf{x}_{1}+0.5)(\mathbf{x}_{2}+0.5)+0.02(\mathbf{x}_{2}+0.5)^{2}\] \[=-0.03-0.05\mathbf{x}_{1}+0.48\mathbf{x}_{2}+0.50\mathbf{x}_{1}^{2}-0.07\mathbf{x}_{1}\mathbf{x}_{2}+0.02\mathbf{x}_{2}^{2}\] \[\approx 0.48\mathbf{x}_{2}+0.50\mathbf{x}_{1}^{2}. \tag{72}\] The aforementioned equations provide local explanations for the "black-box" network. When all these local explanations align and reach a consistent conclusion, a global explanation can be obtained. The findings suggest that HOPE possesses the capability of local interpretation and also exhibits potential for global interpretation and function discovery. Furthermore, the results validate that the expanded polynomial recovers the latent function with the same fidelity as the trained neural network within the whole interval \([-1,1]^{2}\). This also implies that HOPE can be employed to assess the quality of the model. **Low-cost inference of deep neural networks.** To show the advantage in running efficiency after Taylor expansion, we test on a controller of a single-tank liquid system implemented with a neural network.
We simulated the following liquid system, in which the opening of the water outlet valve \(v_{2}\) remains constant, while the water output \(q_{2}\) is determined by the liquid level height \(h\), as illustrated in Fig. 6(a). Specifically, to achieve the desired liquid level height \(h_{s}\), the opening of the inlet valve \(v_{1}\) is manipulated to regulate the quantity of inlet water \(q_{1}\), and the system is a first-order differential system, with transfer function \[G(s)=\frac{K}{Ts+1}. \tag{73}\] Here \(K\)=1 is the system's gain, \(T\)=1 is the system's time constant, and \(s\) is the complex frequency domain variable. We trained a neural network level controller \(\hat{q}_{1}=f(e;\theta)\), where the input is the liquid level difference \(e=h_{s}-h\), and the output is the estimated input flow rate \(\hat{q}_{1}\). The network \begin{table} \begin{tabular}{c||c|c|c|c|c|c|c|c} \hline & \multicolumn{2}{c|}{\(\mathbf{x}\in\mathbb{R}\)} & \multicolumn{2}{c|}{\(\mathbf{x}\in\mathbb{R}^{2}\)} & \multicolumn{2}{c|}{\(\mathbf{x}\in\mathbb{R}^{3}\)} & \multicolumn{2}{c}{\(\mathbf{x}\in\mathbb{R}^{28\times 28}\)} \\ \cline{2-9} **n** & HOPE & Autograd & HOPE & Autograd & HOPE & Autograd & HOPE & Autograd \\ \hline 1 & 0.0059 / 0.20 & 0.0328 / 0.10 & 0.0109 / 0.30 & 0.0617 / 0.10 & 0.0120 / 0.20 & 0.0462 / 0.10 & 0.0120 / 0.10 & 0.0339 / 0.80 \\ 2 & 0.0159 / 0.40 & 0.0358 / 0.10 & 0.0160 / 0.30 & 0.0820 / 0.10 & 0.0162 / 0.30 & 0.0921 / 0.20 & 0.0431 / 0.10 & 3.3456 / 2.10 \\ 3 & 0.0229 / 0.40 & 0.0438 / 0.10 & 0.0232 / 0.30 & 0.1278 / 0.10 & 0.0234 / 0.40 & 0.2558 / 0.20 & 0.0229 / 0.10 & 7.5919 / 4.80 \\ 4 & 0.0428 / 0.50 & 0.0538 / 0.10 & 0.0448 / 0.60 & 0.3559 / 0.20 & 0.0468 / 0.50 & 1.2151 / 0.90 & 0.0342 / 0.10 & 18.457 / 10.8 \\ 5 & 0.0649 / 0.60 & 0.0812 / 0.10 & 0.0669 / 0.60 & 1.1938 / 0.80 & 0.0711 / 0.60 & 6.9751 / 5.80 & 0.0399 / 0.10 & 101.02 / 25.7 \\ 6 & 0.0857 / 0.80 & 0.1446 / 0.10 & 0.0867 / 0.80 & 5.0142 / 0.40 & 0.0974 / 0.80 & 59.799 / 43.7 & 0.0479
/ 0.10 & 1129 / 56.4 \\ 7 & 0.1086 / 0.80 & 0.3037 / 0.20 & 0.1196 / 0.90 & 31.890 / 21.2 & 0.1340 / 1.00 & OOM / 76+ & 0.0648 / 0.10 & OOM / 76+ \\ 8 & 0.1387 / 0.90 & 0.7244 / 0.50 & 0.1405 / 0.90 & OOM / 76+ & 0.1839 / 1.40 & OOM / 76+ & 0.0731 / 0.20 & OOM / 76+ \\ 9 & 0.1634 / 1.00 & 1.9066 / 1.50 & 0.1638 / 1.10 & OOM / 76+ & 0.3234 / 2.40 & OOM / 76+ & 0.0864 / 0.30 & OOM / 76+ \\ 10 & 0.1942 / 1.20 & 5.4099 / 4.10 & 0.1982 / 1.50 & OOM / 76+ & 0.7079 / 5.10 & OOM / 76+ & 0.1000 / 0.30 & OOM / 76+ \\ \hline \end{tabular} \end{table} TABLE I: **Comparison of time (s) / memory (%) between HOPE and Autograd.** Here \(n\) is the order of derivatives. When the program is not running, the system memory usage is recorded at 24%. If an Out-of-Memory (OOM) error is encountered, it indicates that the program is utilizing 76% or more of the available memory. \begin{table} \begin{tabular}{c||c|c|c|c|c} \hline \(n\) & \(w_{0}\)=0.01 & \(w_{0}\)=0.1 & \(w_{0}\)=1.0 & \(w_{0}\)=10 & \(w_{0}\)=100 \\ \hline 1 & 1.00e+00 & 1.00e+00 & 1.00e+00 & 1.00e+00 & 1.00e+00 \\ 2 & 4.69e-03 & 6.39e-02 & 7.36e-02 & 3.14e+01 & 2.98e+03 \\ 3 & 3.94e-05 & 6.99e-03 & 7.98e-01 & 3.83e+01 & 6.08e+03 \\ 4 & 3.12e-07 & 5.18e-04 & 1.10e-01 & 2.28e+03 & 5.27e+07 \\ 5 & 1.97e-09 & 5.30e-05 & 7.01e-01 & 3.49e+03 & 1.66e+08 \\ 6 & 2.15e-11 & 4.42 & & & \\ \hline \end{tabular} \end{table} structure is shown in Fig. 6(b). Here we set \(h_{s}=10\), and the label for training is designed as \[f(e)=\left\{\begin{array}{ll}20,&6<e\leq 10\\ 15,&2<e\leq 6\\ 10,&-2<e\leq 2\\ 5,&-6<e\leq-2\\ 0,&-10<e\leq-6\end{array}\right. \tag{74}\] We normalize both the inputs and labels to the range of -1 to 1, and the loss function is defined as \[J(e;\theta)=\|f(\frac{e}{10};\theta)-(\frac{f(e)}{10}-1)\|_{2}. \tag{75}\] Due to the continuity of this neural network, \(f(e;\theta)\) will be relatively smooth and not perfectly fit the function \(f(e)\).
We expand \(f(e;\theta)\) into a 3-order Taylor polynomial within the range [-1, 1] to replace the neural network. We conducted liquid-level control experiments on the system using both the original neural network and the 3-order polynomial, under different initial liquid levels and initial inflow rates. The results are shown in Fig. 6(c), from which we can see that the polynomial can exactly replace the neural network. We also conducted a statistical analysis of the inference time of the deep neural network and its polynomial approximation by HOPE, across different input batch sizes. Fig. 5: **Upper and lower bounds of a network and its Taylor polynomials of different orders.** The first figure shows the errors between a neural network and the \(n\)-order Taylor polynomials in the interval \([-6,6]\). Here \(e_{1}=\max_{\mathbf{x}\in[-6,6]}|f(\mathbf{x})-f(\mathbf{x};\theta)|\), and \(e_{2}=\frac{\max f(\mathbf{x};\theta)^{(n)}-\min f(\mathbf{x};\theta)^{(n)}}{n!}6^{n}\). \(f_{1}(x)\) and \(f_{2}(x)\) are respectively the theoretical bounds of the neural network and the Taylor polynomial. Fig. 6: **Illustration of the single-tank liquid level control system.** (a) A single-tank system. (b) The structure of the deep neural network implementing the controller, with the name of each module corresponding to the name of the class invoked in PyTorch. (c) Liquid level control curves of the neural-network controller and its polynomial expansion under different initial liquid levels and inflow rates. \begin{table} \begin{tabular}{c||c|c|c|c|c|c|c} \hline Batch size & 1 & 4 & 16 & 64 & 256 & 1024 & 4096 \\ \hline Network & 0.2056 & 0.2640 & 0.4163 & 2.5155 & 10.805 & 36.401 & 121.98 \\ HOPE & 0.0284 & 0.0284 & 0.0284 & 0.0284 & 0.0291 & 0.0294 & 0.0324 \\ \hline \end{tabular} \end{table} TABLE III: Inference time (ms) under different input batches. The results in Tab. (3) indicate that the expanded polynomial has a significantly shorter and more concentrated inference time.
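The closed-loop behavior reported in Fig. 6(c) can be imitated in a few lines: Euler-discretize the first-order plant \(G(s)=1/(s+1)\) and drive it with the piecewise controller of Eq. (74). This is our toy simulation, not the paper's experiment; the step size and horizon are our choices:

```python
def controller(e):
    """Piecewise level controller of Eq. (74) (the training label, not the trained net)."""
    if e > 6:
        return 20.0
    if e > 2:
        return 15.0
    if e > -2:
        return 10.0
    if e > -6:
        return 5.0
    return 0.0

def simulate(h0, h_s=10.0, dt=0.01, steps=2000):
    """Euler steps of the plant with K = T = 1: dh/dt = q1 - h."""
    h = h0
    for _ in range(steps):
        h += dt * (controller(h_s - h) - h)
    return h

# The level settles at h_s = 10 from different initial levels.
assert abs(simulate(0.0) - 10.0) < 0.1
assert abs(simulate(15.0) - 10.0) < 0.1
```

Note that near the setpoint the controller outputs \(q_1=10\), which is exactly the inflow balancing the outflow at \(h=10\), so the closed loop is stable there.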
When the input batch grows to 4096, the polynomial's inference is approximately 3765 times faster than that of the neural network, while maintaining the same prediction accuracy. In terms of model size, the file size of the deep neural network reaches 44,671 bytes, while the Taylor polynomial merely occupies 160 bytes, showcasing a remarkable reduction in storage space. **Feature selection.** For a neural network taking multiple inputs, an explicit expansion can efficiently measure the quantitative contribution of each input element to the final output, and facilitate feature selection. To demonstrate this application, we trained an MNIST handwritten digit classifier, and then separated it into ten equivalent single-output classifiers for easier expansion, as illustrated in Fig. 7(a). Denoting the input image as \(\mathbf{x}\), the prediction as \(\mathbf{y}\), and \(\mathbf{x}_{i,j}\)'s impact on \(\mathbf{y}\) as \(\Delta\mathbf{y}_{i,j}\in\mathbb{R}\), we have \[\Delta\mathbf{y}_{i,j}\approx\sum_{k=1}^{n}\frac{\frac{\partial^{k}\mathbf{y}}{\partial\mathbf{x}_{i,j}^{k}}}{k!}\Delta\mathbf{x}_{i,j}^{k}, \tag{76}\] which can be further converted into the matrix form \[\Delta\mathbf{y}\approx\sum_{k=1}^{n}\frac{\frac{\partial^{k}\mathbf{y}}{\partial\mathbf{x}^{k}}}{k!}\odot\Delta\mathbf{x}^{\circ k}. \tag{77}\] Here \(\Delta\mathbf{x}\in\mathbb{R}^{28\times 28}\) represents the perturbation applied to the input image, \(\frac{\partial^{k}\mathbf{y}}{\partial\mathbf{x}^{k}}\in\mathbb{R}^{28\times 28}\) contains all the \(k\)-order unmixed partial derivatives, and \(\Delta\mathbf{y}\in\mathbb{R}^{28\times 28}\) is the heat map reflecting the impact of all input elements on the output. We initialize \(\Delta\mathbf{x}\) as \(\mathbf{1}^{28\times 28}\), expand the ten single-output models to obtain 10 heat maps, and then apply perturbation analysis to get another 10 counterparts, as visualized in the upper and lower rows of Fig. 7(b).
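Eq. (76) can be sanity-checked on a toy separable "network" whose unmixed derivatives are exact, comparing the Taylor heat map against direct perturbation analysis. In this sketch of ours, the coefficients and the 2x2 "image" are invented for illustration; for a per-pixel cubic, the first three Taylor terms reproduce the perturbation difference exactly:

```python
import math

C = [[0.5, -1.0], [2.0, 0.25]]   # hypothetical per-pixel coefficients
X = [[1.0, 2.0], [0.5, -1.0]]    # toy 2x2 "image"
dx = 1.0                         # the text perturbs with an all-ones image

def y(x):
    """Toy separable model: y = sum_ij C_ij * x_ij^3."""
    return sum(C[i][j] * x[i][j] ** 3 for i in range(2) for j in range(2))

def taylor_heat(i, j):
    """Per-pixel impact via Eq. (76) with exact unmixed derivatives of orders 1..3."""
    xi = X[i][j]
    derivs = [3 * C[i][j] * xi ** 2, 6 * C[i][j] * xi, 6 * C[i][j]]
    return sum(d / math.factorial(k + 1) * dx ** (k + 1) for k, d in enumerate(derivs))

def perturb_heat(i, j):
    """Per-pixel impact via direct perturbation: y(x + dx at (i,j)) - y(x)."""
    xp = [row[:] for row in X]
    xp[i][j] += dx
    return y(xp) - y(X)

for i in range(2):
    for j in range(2):
        assert abs(taylor_heat(i, j) - perturb_heat(i, j)) < 1e-9
```

For a real network the unmixed derivatives are truncated and approximate, which is exactly the comparison made in Fig. 7(b).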
The results demonstrate that HOPE is capable of generating a heat map equivalent to that of the perturbation-based method. Moreover, HOPE is significantly faster, taking only 0.002 s to generate the heat map, whereas perturbation analysis requires 0.342 s. ## 6 Conclusion Aiming to provide a high-precision polynomial interpretation of "black-box" deep neural networks, we propose the high-order Taylor expansion of a network, which offers high accuracy, low computational cost, good convergence, and wide applicability. Understanding the mechanism behind deep neural networks is quite important, and we believe that neural networks will become more transparent with HOPE, accelerating their development and application. **Summary of the approach.** Specifically, we first derive the high-order derivative rule for a general composite function and then extend the rule to neural networks for fast and accurate calculation of their high-order derivatives. From these derivatives, we can expand a black-box network into an explicit Taylor polynomial, providing a local explanation of the network's mapping from the input to the output. We also theoretically prove that a neural network is equivalent to its infinite Taylor polynomial if all of its modules are infinitely differentiable, and analyze the polynomial's convergence condition as well. **Advantages and applications.** HOPE has significant advantages in terms of accuracy, speed, and memory cost compared with computational-graph-based methods. It works as a general expansion and is thus widely applicable to diverse deep neural networks, e.g., with different dimensions and layers. The explicit Taylor expansion makes it possible to conduct data-driven explicit model identification, facilitate the fast inference of a trained deep neural network, and select informative features contributing to the output.
**Limitations and future work.** It should be noted that HOPE can be used only for modules that are \(n\)-order differentiable. For networks including components such as ReLU, LeakyReLU, or Max Pooling, both HOPE and Autograd can only obtain their first-order information. Further increasing the applicability is part of our ongoing work. In the future, we would like to explore more deeply the relationship between the convergence of the Taylor series and the parameter distribution, and apply it to the optimization and structure design of deep neural networks. We will also explore how to use high-order heat maps to determine the contribution of inputs to outputs more accurately. Moreover, HOPE can obtain the derivatives between any two nodes of a neural network, which might inspire lightweight network design. ## Acknowledgments This work is jointly funded by the National Natural Science Foundation of China (Grant No. 61931012) and the Beijing Municipal Natural Science Foundation (Grant No. Z200021).
2306.10392
GlyphNet: Homoglyph domains dataset and detection using attention-based Convolutional Neural Networks
Cyber attacks deceive machines into believing something that does not exist in the first place. However, there are some to which even humans fall prey. One such famous attack that attackers have used over the years to exploit the vulnerability of vision is known to be a Homoglyph attack. It employs a primary yet effective mechanism to create illegitimate domains that are hard to differentiate from legit ones. Moreover, as the difference is pretty indistinguishable for a user to notice, they cannot stop themselves from clicking on these homoglyph domain names. In many cases, that results in either information theft or malware attack on their systems. Existing approaches use simple, string-based comparison techniques applied in primary language-based tasks. Although they are impactful to some extent, they usually fail because they are not robust to different types of homoglyphs and are computationally not feasible because of their time requirement proportional to the string length. Similarly, neural network-based approaches are employed to determine real domain strings from fake ones. Nevertheless, the problem with both methods is that they require paired sequences of real and fake domain strings to work with, which is often not the case in the real world, as the attacker only sends the illegitimate or homoglyph domain to the vulnerable user. Therefore, existing approaches are not suitable for practical scenarios in the real world. In our work, we created GlyphNet, an image dataset that contains 4M domains, both real and homoglyphs. Additionally, we introduce a baseline method for a homoglyph attack detection system using an attention-based convolutional Neural Network. We show that our model can reach state-of-the-art accuracy in detecting homoglyph attacks with a 0.93 AUC on our dataset.
Akshat Gupta, Laxman Singh Tomar, Ridhima Garg
2023-06-17T17:16:53Z
http://arxiv.org/abs/2306.10392v1
GlyphNet: Homoglyph domains dataset and detection using attention-based Convolutional Neural Networks ###### Abstract Cyber attacks deceive machines into believing something that does not exist in the first place. However, there are some to which even humans fall prey. One such famous attack that attackers have used over the years to exploit the vulnerability of vision is known to be a Homoglyph attack. It employs a primary yet effective mechanism to create illegitimate domains that are hard to differentiate from legit ones. Moreover, as the difference is pretty indistinguishable for a user to notice, they cannot stop themselves from clicking on these homoglyph domain names. In many cases, that results in either information theft or a malware attack on their systems. Existing approaches use simple, string-based comparison techniques applied in primary language-based tasks. Although they are impactful to some extent, they usually fail because they are not robust to different types of homoglyphs and are computationally not feasible because of their time requirement proportional to the string's length. Similarly, neural network-based approaches are employed to distinguish real domain strings from fake ones. Nevertheless, the problem with both methods is that they require paired sequences of real and fake domain strings to work with, which is often not the case in the real world, as the attacker only sends the illegitimate or homoglyph domain to the vulnerable user. Therefore, existing approaches are not suitable for practical scenarios in the real world. In our work, we created GlyphNet, an image dataset that contains 4M domains, both real and homoglyphs. Additionally, we introduce a baseline method for a homoglyph attack detection system using an attention-based Convolutional Neural Network. We show that our model can reach state-of-the-art accuracy in detecting homoglyph attacks with a 0.93 AUC on our dataset.
**Keywords: Homoglyph Attacks, Convolutional Neural Networks, Cyber Security, Phishing** ## Introduction In cyber security, attackers employ different attacks to infiltrate our systems and networks, with objectives varying from stealing crucial information to inflicting system damage. One such deceptive attack is the homoglyph attack [17], in which an attacker tries to fool humans and computer systems by using characters and symbols that appear visually similar to characters used in real domain and process names but are in fact different. For example, a typical homoglyph attack may involve changing "d" to "cl", "o" to "\(\theta\)", and "l" to "1". Some of these substitutions can be difficult for the naked eye to detect, as shown in Figure 1. This means that users are easily susceptible to clicking on homoglyph links, more so when navigating from one website to another. The problems arising from such an attack are of two types: a) deceiving humans into believing that an illegitimate domain name is real, resulting in users using fake webpages as if they were the real ones; and b) creating fake academic documents and papers by replacing real characters with homoglyphs to deceive plagiarism detection tools such as Grammarly.com. Both types of problems are hard to detect and hence require robust methods that identify an attack before it causes any information breach. Previous approaches mainly used comparative algorithms such as edit distance to tell homoglyph strings from legit ones [10]: any domain name string that returned an edit distance beyond an acceptable threshold was considered a homoglyph. However, edit distance only covers simple operations like insertion, deletion, transposition, swapping, and substitution. Due to this shortcoming, a slightly altered illegitimate domain name can bypass it as easily as a real one. A slightly better version, called Visual Edit Distance [14], was later proposed.
It proposes a dedicated edit distance for the visual similarity of the two domain name strings. However, these methods were more relevant in academia and had negligible prevalence in the real world.

Figure 1: Example of a real domain and their homoglyphs

A homoglyph attack differs from a phishing attack because domain names in the former are hardly distinguishable, whereas the difference can be apparent in the latter. We have taken the famous poem "The Road Not Taken" by Robert Frost to demonstrate this concept. In Figure 2, we have taken the poem text and run Grammarly's Plagiarism Detector tool. It reports 100% plagiarism, which is correct, but when we later passed the homoglyphed version of the same text, it reported the text to be 100% unique, as shown in Figure 3. This proves that even today's state-of-the-art systems cannot effectively deal with texts comprising homoglyphs. Recently, Microsoft obtained a court order to remove many fraudulent "homoglyph" domains used to conduct fraud and pose as Office 365 users (Page 2021). Following a customer complaint about a business email compromise attack, Microsoft conducted an investigation and discovered that the unidentified criminal organization responsible for the attack also created 17 other malicious domains, which were combined with the stolen customer credentials to gain unauthorized access to Office 365 accounts and monitor the contacts of the customers. Microsoft stated that the cybercriminals have caused and continue to cause irreparable injury to Microsoft, its customers, and the general public. The complaint also stated that the cybercriminals have illegally accessed customer accounts, monitored customer email traffic, gathered information on pending financial transactions, and criminally impersonated customers [12]. According to studies, this attack hit \(71\%\) of organizations in 2021, and people in sixty-two countries were the subject of massive cyberattacks last year. 
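To make the substitution mechanism and the edit-distance weakness described above concrete, the following sketch applies a toy glyph mapping to a real domain and computes the plain Levenshtein distance between the two strings. The mapping shown is an illustrative subset, not the glyph pool used later in this paper:

```python
def edit_distance(a: str, b: str) -> int:
    """Plain Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

# Toy subset of visually confusable substitutions (illustrative only).
GLYPHS = {"o": "0", "l": "1", "d": "cl"}

def make_homoglyph(domain: str, target: str = "o") -> str:
    """Replace one confusable character with its look-alike glyph."""
    return domain.replace(target, GLYPHS[target], 1)

real = "google.com"
fake = make_homoglyph(real)
print(fake, edit_distance(real, fake))  # → g0ogle.com 1
```

A convincing homoglyph can thus sit at distance 1 from the real domain, which is why a fixed edit-distance threshold is a weak defense.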
In this research, we aim to create a dataset that can help expand research on homoglyph attacks. We propose to apply an attention-based Convolutional Neural Network (CNN) to detect homoglyphs without the need for paired data. Additionally, our model achieves a significant performance boost compared to other approaches due to its architectural design. Our method can be applied directly as an API or web service for an organization to check a domain or process name before accessing it. For evaluation, we compared our model with other baselines and found that it outperforms them. Moreover, our approach also addresses the unpaired data setting, which is often the case in the real world. The major contributions of our research are as follows: 1. We created a benchmark dataset of 4 million real and homoglyph domain images based on known homoglyph patterns and URLs. It is generated from strings with single- and dual-character noise sampled using a Gaussian distribution over a homoglyph character pool. 2. We introduce a method that detects homoglyphs from an image dataset using an attention-based convolutional neural network trained in a supervised fashion, achieving a better AUC score than the existing baselines. The paper's organization starts by introducing the problem faced by existing approaches to detecting homoglyph-based phishing attacks in both academia and the real world. In Related Work, we discuss the existing approaches that propose solving this problem with either string matching or Deep Learning based methods like Siamese Neural Networks and GANs, and we explain their major pitfalls in terms of generalization capability and feasibility. In the Dataset section, a comprehensive description is provided of the generation of the proposed image dataset. This is followed by a brief description of our attention-based CNN baseline implementation. 
The Experimentation section describes dataset splitting, metrics used, and other settings. Later, in the Results section, we examine the results and scores obtained from the experiments conducted in the previous section. Both data and baseline implementation results are validated and explained with the help of a table within the same section. The following section, Discussion, presents experiments we tried that did not work. Finally, the Conclusion section summarizes the observations and contributions. ## Related Work The work by [10] used a Siamese Neural Network to detect homoglyphs using a paired dataset. This dataset included pairs of strings; one was a real domain name, and the other was a homoglyph domain name. In their work, they converted this pair of strings into binary images that were later fed to a Siamese Neural Network [14].

Figure 3: Homoglyph text on Grammarly plagiarism detector

Figure 2: Real text on Grammarly plagiarism detector

The Siamese neural network uses two identical convolutional neural networks (LeCun, Bengio et al. 1995) to learn the visual difference between a pair of images. Siamese networks were applied to domains such as healthcare and finance but have recently gained popularity in cyber security. Though their work showed significant improvement over previous baselines, it suffered from two major pitfalls: 1. In online security systems, it is impossible to provide paired data, without which these systems will not work. 2. It cannot be used in academia due to the inability to find a paired word for each word present in a scientific article. Therefore, although this approach performs well, it cannot be employed in real-world systems. 
The traditional solutions to prevent homoglyph attacks were inspired by genomics (Lu, Mohammed, and Wang 2019), which proposed the idea that homoglyph domains are in string format and should therefore be compared with legitimate ones to detect whether they are real or not. Edit Distance (Ristad and Yianilos 1998) is the measure of the minimum number of operations required to transform one sequence (a domain or process name string in our case) into another. If the value exceeds an acceptable threshold, the string is predicted to be a homoglyph. This looks effective at first glance, but it is not. The reason is that in cases like "google.com" and "go0gle.com", edit distance returns only '1', which does not look threatening, yet the latter is a homoglyph domain name. Furthermore, a paired sequence of strings is required to make comparisons, which would not be available for a homoglyph of a new domain name. Finally, this approach produced severely poor results in the real world. Phishing attacks (Hong 2012) should not be confused with homoglyph attacks. Phishing is an attack in which the attacker sends homoglyph/false/fake links that appear to come from a trusted vendor. It leads to information compromise (Helms, Ettkin, and Morris 2000), data breaches (Cheng, Liu, and Yao 2017), and financial fraud (Rewink 2018). The difference between phishing and homoglyphs is that the former uses tricks such as long, bad, and misspelled domain names and URLs (Ma et al. 2009) to fool people, while the latter takes advantage of our inability to tell apart visually similar but different domain name strings. Thus, better solutions for homoglyph detection are required. ### Siamese Neural Networks The Siamese neural network architecture was proposed to detect homoglyphs using a paired dataset. This dataset included pairs of strings; one was a real domain name, and the other was a homoglyph domain name. 
Each instance was a tuple containing a real domain string, a homoglyph domain string, and a score denoting whether the second element is a valid homoglyph of the first. In their work, they converted each pair of strings into binary images that were later fed to a Siamese Neural Network. However, we observed significantly different results while reproducing this approach on our dataset. ### PhishGANs Approaches such as Siamese Neural Networks suffered severely in terms of performance due to lack of data, as they only had close to \(91k\) real domain images. As a remedial solution, comprehensive data was required to train the models well. Recently, Lee Joon Sern et al. proposed PhishGANs to generate synthetic data (Sern, David, and Hao 2020). They discussed creating a generative adversarial network (Goodfellow et al. 2014) that aimed to create images similar to real domain names to augment existing datasets. PhishGANs (Sern, David, and Hao 2020), being GANs (Goodfellow et al. 2014), involve a generator and a discriminator, both trained in an adversarial fashion such that the generator learns to produce images similar to those of real domains that the discriminator cannot detect. Later, these images were fed to a different network for binary classification aimed at distinguishing real domain names from homoglyphs. They used the UNet (Ronneberger, Fischer, and Brox 2015) architecture as a generator with a custom loss function called dot product loss. PhishGANs (Sern, David, and Hao 2020) were trained similarly to how Pix2Pix (Isola et al. 2017) is trained. 
Later, for classification purposes, a new architecture called the homoglyph identifier (HI) was defined, using a CNN (LeCun, Bengio et al. 1995) as an encoder with a triplet loss function (Hoffer and Ailon 2015), taking as input the positive domain (google.com), the anchor domain (go0gle.com), and the negative domain (apple.com).

Figure 4: Siamese neural network architecture (Woodbridge et al. 2018)

Figure 5: PhishGAN architecture (Sern, David, and Hao 2020)

On some popular domains, such as youtube.com and facebook.com, HI achieved an accuracy of roughly \(0.81\) when testing on their homoglyphs. On an unseen domain, HI achieved an accuracy of \(0.76\); feeding the domain back into PhishGANs (Sern, David, and Hao 2020) to generate its homoglyphs and training on them helped detect its homoglyphs with \(0.86\) accuracy. Although the idea of generating synthetic data using GANs (Goodfellow et al. 2014) looks promising and intriguing, it is less compelling for real-world usage. GANs (Goodfellow et al. 2014) have, since their advent, been among the trickiest architectures in Deep Learning (LeCun, Bengio, and Hinton 2015) to train and are often found to have issues in the real world that do not arise in the constrained environment of academia. It is common to encounter issues like failure to converge, the generator oscillating between generating specific examples in the domain, and multiple inputs resulting in the same output. Also, the performance increase was not drastic enough to compel its usage. ## Dataset The work by (Woodbridge et al. 2018) proposed a custom paired dataset comprising \(91k\) real domains and \(900k\) homoglyphs. Each real domain is used to generate its respective homoglyphs. Each point in this dataset is a three-element tuple denoting domain, homoglyph, and score. Here, if the value of the score is \(1.0\), then it is a valid homoglyph of the real domain. 
The main real-world limitation for research on homoglyph-based attacks is the lack of publicly available datasets. **Proposed dataset: GlyphNet** We propose a dataset consisting of real and homoglyph domains. To generate homoglyph domains, real domains are needed. We obtained domains from the Domains Project (Turkynewych 2020). This repository is one of the largest collections of publicly available active domains. The entire repository comprises 500M domains, and we restricted our work to 2M domains due to hardware constraints. **Homoglyph Creation Algorithm** Homoglyph generation is an important task, as one needs to ensure enough randomness for the result to appear real while keeping the process simple enough to fool the target. Publicly available tools like dnstwist (Ulikowski 2015) replace every character in the real input domain with its respective glyphs. It generates poor homoglyphs for the most part because it relies on paired data, which is not fit to serve the purpose practically. We created a novel algorithm for the generation of homoglyph domains to ensure that realistic homoglyphs are generated with randomness and visual closeness. To achieve this, we sample homoglyph noise characters using Gaussian sampling (Boor, Overmars, and Van Der Stappen 1999) from the glyph pool. We used 1M real domains to generate \(2M\) homoglyphs with a single glyph character; to introduce diversity in our dataset, we reran this algorithm on the remaining 1M real domains to generate homoglyph domains with two glyph characters. Finally, we have the 4M real and homoglyph domains. **Image Generation** Homoglyph attacks exploit the weakness of human vision in differentiating real from homoglyph domain names. From a visual perspective, we are interested in learning the visual characteristics of real and homoglyph domain names. To do so, we rendered images from the real and homoglyph strings generated via our algorithm. 
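A minimal sketch of the creation procedure just described: positions with confusable characters are chosen at random, and replacement glyphs are drawn by Gaussian sampling over a small glyph pool. The pool shown, the clamping of sampled indices to the valid range, and the helper names are illustrative assumptions, not the exact pool used for GlyphNet:

```python
import random

# Illustrative glyph pool; the full GlyphNet pool is much larger.
GLYPH_POOL = {"o": ["0", "\u03bf", "\u03c3"],
              "l": ["1", "|", "\u0142"],
              "e": ["3", "\u00e9", "\u03b5"]}

def gaussian_pick(options, rng):
    """Sample an index from a Gaussian centred on the middle of the
    options list, clamped to the valid range."""
    mu = (len(options) - 1) / 2
    idx = int(round(rng.gauss(mu, len(options) / 4)))
    return options[min(max(idx, 0), len(options) - 1)]

def make_homoglyph(domain, n_glyphs, rng):
    """Replace n_glyphs confusable characters with sampled glyphs."""
    chars = list(domain)
    candidates = [i for i, c in enumerate(chars) if c in GLYPH_POOL]
    for i in rng.sample(candidates, min(n_glyphs, len(candidates))):
        chars[i] = gaussian_pick(GLYPH_POOL[chars[i]], rng)
    return "".join(chars)

rng = random.Random(0)
print(make_homoglyph("google.com", 1, rng))  # single-glyph homoglyph
print(make_homoglyph("google.com", 2, rng))  # dual-glyph homoglyph
```

Rendering the resulting strings to images, as described next, could then be done with Pillow's `ImageDraw.text`; that step is omitted from this sketch.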
We have used the ARIAL typeface as our chosen font at a \(28\) font size, with white text on a black background drawn from the middle left of the image; the image size is \(150\times 150\). ## Methodology This section presents our approach to building an end-to-end homoglyph detection system. We build an attention-based (Bahdanau, Cho, and Bengio 2014; Vaswani et al. 2017) convolutional neural network (LeCun, Bengio et al. 1995) that aims to exploit the visual dissimilarity between real and homoglyph domain names. The architecture of our model is shown in Figure 7 and Figure 8. The rendered images are used as input to the CNN to learn the desired visual feature information. The model consists of four Conv2D layers to learn visual information such as edges, curves, and strokes. Each convolutional layer is paired with a max-pooling layer to perform dimensionality reduction on the learned features. The model is developed in Keras (Chollet et al. 2015). Each convolution block is followed by a convolutional block attention module (CBAM), as described in the following.

Figure 6: Rendered images from the dataset; \(0\): homoglyph domain, \(1\): real domain

Attention processes boost the strength of representation by focusing on essential traits and suppressing unneeded ones. We use CBAM, a specific and efficient attention module for feed-forward convolutional neural networks. Given a preliminary feature map, the module successively infers attention maps along the channel and spatial dimensions. It then multiplies the attention maps by the preliminary feature map to achieve adaptive feature refinement. The overall attention process is summarized as follows: \[\begin{array}{l}F^{\prime}=M_{c}(F)\otimes F,\\ F^{\prime\prime}=M_{s}(F^{\prime})\otimes F^{\prime},\end{array}\] 1. Given an intermediate feature map \(F\in\mathcal{R}^{C\times H\times W}\) as input. 
\(C\) represents the number of channels, and \(H\) and \(W\) represent the height and width of the feature map \(F\), respectively. 2. CBAM sequentially infers a 1D channel attention map \(M_{c}\in\mathcal{R}^{C\times 1\times 1}\) 3. And a 2D spatial attention map \(M_{s}\in\mathcal{R}^{1\times H\times W}\) 4. \(\otimes\) denotes element-wise multiplication. For the sake of non-linearity, the ReLU activation function is used. ## Experimentation ### Dataset and Metrics We have split our dataset into three parts, train, validation, and test, with a ratio of \(70:20:10\), respectively, which amounts to \(2.8M\), \(0.8M\), and \(0.4M\) images in the train, validation, and test sets. Each image size is \(150\times 150\). We use accuracy for measuring the performance of the classification task. Since accuracy can sometimes be misleading in a binary classification task, especially for unbalanced datasets, we also consider precision, recall, and F1 score as our evaluation metrics, even though our dataset is balanced. We have also used the AUC score to compare our solution with other works. \begin{table} \begin{tabular}{|c|c|c|c|} \hline **Dataset Name** & **Real** & **Homoglyph** & **Total** \\ \hline Domain and Process Strings (Woodbridge et al. 2018) & \(90k\) & \(900k\) & \(990k\) \\ \hline Similar and Dissimilar Pairs (Majumder et al. 2020) & \(2257\) & \(2257\) & \(4514\) \\ \hline **GlyphNET (Ours)** & \(2000k\) & \(2000k\) & \(4000k\) \\ \hline \end{tabular} \end{table} Table 1: Dataset comparison

Figure 8: Zoom in view of conv-attention module

Figure 7: Our neural network architecture

### Experimental Settings For the training part, we used binary cross-entropy as the loss function. We used the RMSProp optimizer to optimize the loss obtained from the binary cross-entropy loss function, with a learning rate of \(10^{-4}\), and the network is trained for \(30\) epochs with early stopping. We trained with a batch size of \(256\). 
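As a concrete reading of the two CBAM equations given in the Methodology section, the sketch below implements channel attention from average- and max-pooled descriptors passed through a shared MLP, and approximates the spatial branch with a per-pixel weighting of channel-wise average and max maps. The weight shapes and this 1×1 simplification of CBAM's spatial convolution are assumptions for illustration, not our exact layer configuration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbam(F, W1, W2, ws):
    """F: feature map of shape (C, H, W).
    W1, W2: shared-MLP weights of shapes (C//r, C) and (C, C//r).
    ws: two weights combining the channel-wise avg and max maps."""
    # Channel attention M_c: shared MLP over avg- and max-pooled descriptors.
    avg_c = F.mean(axis=(1, 2))                       # (C,)
    max_c = F.max(axis=(1, 2))                        # (C,)
    mlp = lambda v: W2 @ np.maximum(W1 @ v, 0.0)      # ReLU in the hidden layer
    Mc = sigmoid(mlp(avg_c) + mlp(max_c))             # (C,)
    F1 = Mc[:, None, None] * F                        # F' = M_c(F) ⊗ F
    # Spatial attention M_s: combine channel-wise avg and max maps.
    avg_s = F1.mean(axis=0)                           # (H, W)
    max_s = F1.max(axis=0)                            # (H, W)
    Ms = sigmoid(ws[0] * avg_s + ws[1] * max_s)       # (H, W)
    return Ms[None, :, :] * F1                        # F'' = M_s(F') ⊗ F'

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2
F = rng.standard_normal((C, H, W))
out = cbam(F, rng.standard_normal((C // r, C)),
           rng.standard_normal((C, C // r)),
           rng.standard_normal(2))
print(out.shape)  # → (8, 4, 4)
```

Since both attention maps lie in \((0,1)\), the refined features are element-wise attenuated versions of the input, matching the "adaptive feature refinement" interpretation above.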
We evaluated the performance of our model in terms of accuracy vs. epochs and loss vs. epochs plots. ## Results We evaluated our model on two unpaired datasets of domain names. We took an input string from the dataset we created in the previous section, converted it into an image, and fed it to the model to generate a binary label. The results for the domain names are tabulated in Table 2. Out of the \(400k\) test images, our model correctly categorized \(372k\) images, resulting in \(0.93\) accuracy. Our model achieved an F1-score of \(0.93\), \(13\) points higher than the previous models. Our model also outperforms other baselines and comparable works on the remaining metrics, including accuracy, precision, recall, and AUC. The performance of other models on our dataset was also below par compared with their performance on the datasets proposed in their own works, signifying our dataset's variation, difficulty, and importance. Our dataset, code, and models are publicly available under the MIT LICENSE and can be accessed from our project's GitHub repository1 Footnote 1: [https://github.com/Akshat4112/Glyphnet](https://github.com/Akshat4112/Glyphnet) ## Discussion We now discuss some interesting observations and experiments that did not work, along with possible explanations. ### Using only Grayscale Images During the image rendering phase, where we generated images from the dataset containing real and homoglyph domains, we experimented with generating colored images instead of grayscale ones. We used (\(73\), \(109\), \(137\)) as the background color and (\(255\), \(255\), \(0\)) as the color of the text. However, the network trained on these colored images consistently underperformed the network trained on grayscale images. One possible reason might be that grayscale involves black and white, two colors at opposite extremes. Hence, it preserves the difference between adjoining pixels at the periphery of the letters and the background pixels. 
Meanwhile, the colors, though appearing distinctly different to us, failed to preserve the difference when later passed through resizing operations. We also performed data augmentation on our data and trained our network on the generated data, but it led to a drop in accuracy. One possible reason might be that data augmentation [14] is used in scenarios where we expect distinctive image features to exist that are absent from the actual dataset. It can be understood from a cats vs. dogs example. Usually, datasets contain cats and dogs in limited positions in the pictures, so a model fails to recognize some of the real images. The reason is that in the real world, a cat or a dog might turn its head or sit in different postures, which makes it difficult for a model to locate distinctive features, like whiskers and pointy ears in cats or tongues in dogs, in the absence of large amounts of data covering these variations. Therefore, to mimic such behavior, data augmentation is used, which helps to create all these different types of images. However, in our case, using it leads to flipped characters, and rotated images cause accent and tilde signs over letters to point in different directions, which is not the case with real-world strings. Therefore, data augmentation was, in fact, counterproductive for our use case. We rendered images of size \(256\times 256\) during the image generation phase. Apart from the image size \(256\times 256\), at which we observed the best results, we tried experimenting with the following image sizes: \(128\times 128\), \(150\times 150\), \(224\times 224\), and \(512\times 512\). The smaller the image size, the more the performance degraded. An increase in size did not lead to any significant improvement but increased the training time of the model. Hence, we use the \(256\times 256\) image size. 
### Building Model without Transfer Learning We train a base network on a base dataset and task and then reuse the learned features, transferring them to a second target network to be trained on a target dataset. This process tends to work if the features are general, meaning suitable for both base and target tasks, rather than specific to the base task. We performed experiments with transfer learning (Pan and Yang 2009) by incorporating networks like VGG16 (Simonyan and Zisserman 2014), ResNet18 (He et al. 2016), ResNet34, ResNet50, Wide ResNet-101-2, ResNeXt-50-32x4d, and ResNeXt-101-32x8d, which were trained on the ImageNet (Deng et al. 2009) dataset. Our experiments did not obtain good accuracy using these architectures, either pre-trained or trained from scratch. There are two possible reasons: 1) Large number of hidden layers: These architectures have many hidden layers, ranging from \(16\) up to \(100\). The deeper the network, the more it tries to aggregate the learned features into high-level features. This works well for images of real-world entities, but it does not help in our context, as these are just images generated from strings. Going deeper into the network makes it lose all the subtle features of string parts, like tildes and apostrophes, that it needs to differentiate real from homoglyph strings.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline **Architecture** & **Accuracy** & **Precision** & **Recall** & **F1-score** & **AUC** \\ \hline Siamese CNN [14] & \(0.79\) & \(0.78\) & \(0.71\) & \(0.74\) & \(0.78\) \\ \hline Ensemble CNN [1] & \(0.83\) & \(0.82\) & \(0.79\) & \(0.80\) & \(0.83\) \\ \hline PhishGAN [1] & \(0.71\) & \(0.74\) & \(0.65\) & \(0.69\) & \(0.71\) \\ \hline **Attention CNN (Ours)** & \(0.93\) & \(0.93\) & \(0.93\) & \(0.93\) & \(0.93\) \\ \hline \end{tabular} \end{table} Table 2: Model performance comparison on our dataset
2) Pre-trained on a dataset from a different domain: Another reason is that these networks were pre-trained on the ImageNet dataset, which contains images of real-world entities but does not contain images similar to those in our problem. Hence, using a pre-trained network whose weights were learned from such images, rather than from our problem domain, did not help. We obtained an accuracy of \(63\%\) to \(67\%\) using the above architectures. ## Conclusion In this work, we created a first-of-its-kind large-scale homoglyph phishing image dataset comprising 4M images of real and homoglyph domains. We then presented a baseline that relies on learning features with an attention-based convolutional neural network, trained on our constructed dataset to differentiate real domain names from homoglyph domain names and thus prevent homoglyph attacks. Our dataset and approach are robust because they generalize to unseen homoglyphs, unlike other approaches that are data-dependent for every single inference, which leads our model to outperform existing approaches. We believe this work is significant, provides an important benchmark to propel work in this area, and that its applications can serve as a safeguard against phishing attacks in the real world.
2308.08071
Freshness or Accuracy, Why Not Both? Addressing Delayed Feedback via Dynamic Graph Neural Networks
The delayed feedback problem is one of the most pressing challenges in predicting the conversion rate since users' conversions are always delayed in online commercial systems. Although new data are beneficial for continuous training, without complete feedback information, i.e., conversion labels, training algorithms may suffer from overwhelming fake negatives. Existing methods tend to use multitask learning or design data pipelines to solve the delayed feedback problem. However, these methods have a trade-off between data freshness and label accuracy. In this paper, we propose Delayed Feedback Modeling by Dynamic Graph Neural Network (DGDFEM). It includes three stages, i.e., preparing a data pipeline, building a dynamic graph, and training a CVR prediction model. In the model training, we propose a novel graph convolutional method named HLGCN, which leverages both high-pass and low-pass filters to deal with conversion and non-conversion relationships. The proposed method achieves both data freshness and label accuracy. We conduct extensive experiments on three industry datasets, which validate the consistent superiority of our method.
Xiaolin Zheng, Zhongyu Wang, Chaochao Chen, Feng Zhu, Jiashu Qian
2023-08-15T23:49:07Z
http://arxiv.org/abs/2308.08071v1
# Freshness or Accuracy, Why Not Both? Addressing Delayed Feedback via Dynamic Graph Neural Networks ###### Abstract The _delayed feedback_ problem is one of the most pressing challenges in predicting the conversion rate since users' conversions are always delayed in online commercial systems. Although new data are beneficial for continuous training, without complete feedback information, i.e., conversion labels, training algorithms may suffer from overwhelming _fake negatives_. Existing methods tend to use multitask learning or design data pipelines to solve the delayed feedback problem. However, these methods have a trade-off between data freshness and label accuracy. In this paper, we propose Delayed Feedback Modeling by Dynamic Graph Neural Network (DGDFEM). It includes three stages, i.e., preparing a data pipeline, building a dynamic graph, and training a CVR prediction model. In the model training, we propose a novel graph convolutional method named HLGCN, which leverages both high-pass and low-pass filters to deal with conversion and non-conversion relationships. The proposed method achieves both data freshness and label accuracy. We conduct extensive experiments on three industry datasets, which validate the consistent superiority of our method. **Keywords: Recommender System, Conversion Rate Prediction, Delayed Feedback, Dynamic Graph, Importance Sampling** ## I Introduction Conversion rate (CVR)-related problems are fundamental in the setting of E-commerce. Most CVR-related methods focus on CVR prediction [1, 2, 3], which aims to estimate the probabilities of specific user behaviors, e.g., buying recommended items. However, if conversions are not collected in time, i.e., the user feedback is delayed, it is difficult to predict CVR accurately. Here, we take two examples to introduce the delayed feedback problem. Fig. 1(a) illustrates why this problem exists. Without loss of generality, suppose these clicks occur at time \(t_{1}\). 
For most clicks, there are no conversions, e.g., samples \(s_{1}\), \(s_{3}\), and \(s_{4}\). For the rest, their corresponding conversions are delayed for different times, e.g., samples \(s_{2}\) and \(s_{5}\). When setting an observation window at time \(t_{3}\), we can only observe the conversion of sample \(s_{5}\) at time \(t_{2}\). To determine whether these samples will convert, we should postpone the observation window until the _attribution window_ ends at time \(t_{5}\). An attribution window is a period of time in which we can claim that a conversion is led by a click. As a result, the delivery of training samples is significantly delayed. Although the delayed feedback problem exists, most online commercial systems train their CVR prediction models in real-time data streams to capture data distribution changes. Fig. 1(b) illustrates the importance of delayed feedback modeling with a real-world dataset, i.e., Criteo Sponsored Search Traffic [4]. From it, we can observe half of the conversions within several hours but need to wait a couple of days or even longer to observe the rest. The label accuracy of training samples increases with more observed conversions, while data freshness decreases. However, both data freshness and label accuracy are necessary for online commercial systems. Training with new data enhances the real-time response capability of online systems. Training with accurate sample labels enhances model performance. Towards addressing the delayed feedback problem, we can divide existing methods into two types, i.e., _multitask learning_ and _designing data pipelines_. The former [5, 6, 7, 8, 9] uses additional models to assist in learning the CVR estimation. But it waits a long time to collect all conversions, leading to recommendation performance degradation. The latter [10, 11, 12, 13, 14, 15] designs new data pipelines for CVR prediction models. But it faces a trade-off between data freshness and label accuracy. 
In summary, the above models cannot solve the delayed feedback problem. To address the delayed feedback problem successfully, we propose **D**elayed **F**eedback **M**odeling by **D**ynamic **G**raph Neural Network (DGDFEM). The framework of DGDFEM includes three stages, i.e., preparing a data pipeline (Stage 1), building a dynamic graph (Stage 2), and training a CVR prediction model (Stage 3).

Fig. 1: Introduction of the delayed feedback problem.

**In Stage 1**, we design a novel data pipeline that achieves the best data freshness. To capture data freshness, we deliver each sample, i.e., a user-item click, as soon as it appears, even when it is _unlabeled_. These delivered unlabeled samples are used to build the dynamic graph. To achieve high label accuracy, we set a time window to wait for each sample's conversion. When the time window ends, we mark the sample as either positive or negative and deliver it again for the following two stages. Finally, after fake negatives convert, we calibrate them as positive and deliver them one more time. **In Stage 2**, we build a dynamic graph to facilitate CVR prediction by capturing frequent changes in data distribution. We build the dynamic graph upon a sequence of chronologically delivered samples. In the graph, nodes are the users and items from samples, and edge attributes come from sample labels. **In Stage 3**, we take users and items in the delivered samples as seeds to sample multi-hop neighbors from the dynamic graph. To deal with conversion and non-conversion relationships, we propose a novel graph convolutional method, namely HLGCN, which aggregates features through _high-pass_ and _low-pass_ filters. High-pass filters capture node differences among non-conversion relationships. Low-pass filters retain node commonalities among conversion relationships. To alleviate the noise introduced by fake negatives, we further propose _distribution debias_ and prove its effectiveness theoretically. 
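As an illustration of the Stage 1 delivery policy described above, the sketch below emits up to three events per click: an unlabeled delivery at click time, a labeled delivery when the time window closes, and a positive calibration if the conversion arrives after the window but inside the attribution window. The event names and the event-tuple layout are assumptions for illustration, not the paper's API:

```python
def deliver(click_time, convert_time, window, attribution):
    """Yield (time, event, label) deliveries for one click sample.
    convert_time is None when the click never converts."""
    events = [(click_time, "unlabeled", None)]  # freshest possible delivery
    delay = None if convert_time is None else convert_time - click_time
    if delay is not None and delay <= window:
        events.append((click_time + window, "labeled", 1))  # true positive
    else:
        events.append((click_time + window, "labeled", 0))  # negative so far
        if delay is not None and delay <= attribution:
            # Fake negative: calibrate to positive once conversion arrives.
            events.append((convert_time, "calibrated", 1))
    return events

# A fake negative: conversion after the window but inside attribution.
print(deliver(click_time=0, convert_time=5, window=2, attribution=10))
# A real negative: no conversion at all.
print(deliver(click_time=0, convert_time=None, window=2, attribution=10))
```

The fake-negative case thus yields three deliveries, matching the "deliver them one more time" calibration step, while real negatives and true positives yield two.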
Our method achieves both data freshness and label accuracy. We summarize the main contributions of this paper as follows: **(1)** We propose a novel delayed feedback modeling framework that highlights label accuracy with the best data freshness. **(2)** We propose the first dynamic graph neural network to solve the delayed feedback problem, which can capture data distribution changes and hence facilitates CVR predictions. **(3)** We conduct extensive experiments on three large-scale industry datasets. Consistent superiority validates the success of our proposed framework. ## II Related Work ### _Delayed Feedback Modelling_ In solving the delayed feedback problem, we can divide existing methods into two types, i.e., multitask learning and designing data pipelines. **Multitask Learning.** This method uses additional models to assist in learning the CVR estimation. DFM [5] designs two models for estimating CVR and delay time. One successor of DFM, i.e., NoDeF [6], proposes fitting the delay time distribution by neural networks. Another successor [7] extracts pre-trained embeddings from impressions and clicks to predict CVR and leverages post-click information to estimate delay time. DLA-DF [8] trains a CVR predictor and propensity score estimator based on a dual-learning algorithm. MTDFM [9] trains the actual CVR network by simultaneously optimizing the observed conversion rate and non-delayed positive rate. These methods wait a long time to collect all conversions, leading to recommendation performance degradation. In contrast, our method delivers samples as soon as they appear, achieving the best data freshness. **Designing Data Pipelines.** This method designs new data pipelines to change the way data is delivered. FNW and FNC [10] deliver all samples instantly as negative and duplicate them as positive when their conversions occur. But these two methods introduce numerous fake negatives into model training. FSIW [11] waits a long time for conversions. 
But it uses stale training data and does not allow correction, even when the conversion occurs afterward. ES-DFM [12] sets a short time window to wait for conversions and rectifies these fake negatives. Based on ES-DFM, DEFER [13] and DEFUSE [14] further duplicate and ingest real negatives for training. But ES-DFM, DEFER, and DEFUSE cannot simultaneously achieve label accuracy and data freshness. As the time window extends, these three methods observe more conversion feedback, i.e., label accuracy becomes higher, but data freshness gets lower. FTP [15] constructs an aggregation policy on top of multi-task predictions, where each task captures the feedback pattern during different periods. But some tasks in FTP are trained with accurate but stale data, while others are trained with new but inaccurate data. These methods face a trade-off between data freshness and label accuracy. In contrast, our method achieves both data freshness and label accuracy. ### _Dynamic Graph Neural Networks_ In dynamic graph neural networks, we can divide existing methods into two types, i.e., discrete-time dynamic graphs and continuous-time dynamic graphs. _Discrete-time dynamic graph_ represents a dynamic graph as a series of static graph snapshots at different time intervals. DySAT [16] computes node representations through joint self-attention within and across the snapshots. DGNF [17] captures the missing dynamic propagation information in graph snapshots by dynamic propagation. TCDGE [18] co-trains a linear regressor to embed edge timespans and infers a common latent space for all snapshot graphs. _Continuous-time dynamic graph_ represents a dynamic graph as event sequences, e.g., creating nodes and adding edges. DyGNN [19] updates node states involved in an interaction and propagates the updates to neighbors. GNN-DSR [20] models the short-term dynamic and long-term static representations of users and items and aggregates neighbor influences.
HDGCN [21] explores dynamic features based on a content-propagation heterogeneous graph. TGN [22] combines different graph-based operators with memory modules to aggregate temporal-topological neighborhood features and learns time-featured interactions. All these methods leverage stale or even wrong interactions in delayed feedback scenarios since the samples used to build the graph are delayed and mislabeled. In contrast, our method provides the newest data with accurate labels for the dynamic graph and proposes HLGCN to deal with conversion and non-conversion relationships in the graph. ## III Method In this section, we present the framework of DGDFEM followed by its main stages in detail. DGDFEM has three stages, i.e., preparing a data pipeline, building a dynamic graph, and training a CVR prediction model. In the first stage, we prepare a data pipeline that achieves high label accuracy with the best data freshness. In the second stage, we build a dynamic graph, which captures data distribution changes. In the third stage, we propose a graph convolutional method named HLGCN to deal with conversion and non-conversion relationships. Besides, we propose _distribution debias_ to alleviate the noises from fake negatives. DGDFEM achieves both data freshness and label accuracy, which solves the delayed feedback problem. ### _Preparing Data Pipeline_ **Preliminary.** There are three possible types of samples in the data pipeline of DGDFEM, namely _real negative_, _fake negative_, and _positive_. For ease of description, we let \(t_{d}\) denote the delay time of conversion feedback, \(l_{w}\) denote the time window length, and \(l_{a}\) denote the attribution window length. We define the three possible sample types as follows. **(1)**_Fake negative_ (\(l_{w}\leq t_{d}\leq l_{a}\)) is the sample whose conversion occurs beyond the time window. It is incorrectly delivered as negative at first.
**(2)**_Positive_ (\(t_{d}<l_{w}\)) is the sample whose conversion occurs within the time window. **(3)**_Real negative_ (\(t_{d}>l_{a}\)) is the sample that does not convert eventually. We set \(t_{d}=\infty\) for these samples. From the above definition, we observe that time window length \(l_{w}\) determines the proportions of fake negatives and positives. When we set \(l_{w}\to 0\), we transform all positives into fake negatives. When we set \(l_{w}\to l_{a}\), we eliminate all fake negatives. **Our Proposed Data Pipeline.** We describe our proposed data pipeline through an example illustrated in Fig. 3. Suppose there are three samples, denoted as \(s_{1}\), \(s_{2}\), and \(s_{3}\), representing 'fake negative,' 'positive,' and 'real negative,' respectively. We first deliver each sample, i.e., a user-item click, instantly into the data pipeline as soon as it appears in the commercial system, even when it is unlabeled. We set a time window to wait for each sample's conversion and deal with different sample types in various ways. **(1)** For fake negative \(s_{1}\), which arrives at time \(t_{1}\), we set a time window between time \(t_{1}\) and \(t_{1}+l_{w}\). When the window ends at time \(t_{1}+l_{w}\), we mark it as negative and deliver it for a second time as we do not observe the conversion feedback within the window. Finally, we calibrate it as positive and deliver it one more time when its conversion occurs at time \(t_{5}\). **(2)** For positive \(s_{2}\), which arrives at time \(t_{2}\), we set a time window between time \(t_{2}\) and \(t_{2}+l_{w}\). When the window ends at time \(t_{2}+l_{w}\), we mark it as positive and deliver it again as its conversion occurs at time \(t_{3}\), i.e., within the time window. **(3)** For real negative \(s_{3}\), which arrives at time \(t_{4}\), when the time window ends at time \(t_{4}+l_{w}\), we mark it as negative and deliver it. We utilize these three sample types differently.
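The three sample-type definitions above reduce to a small classifier over the conversion delay \(t_{d}\); the following is an illustrative sketch, not the paper's implementation.

```python
import math

def sample_type(t_d: float, l_w: float, l_a: float) -> str:
    """Classify a sample by its conversion delay t_d, given time
    window length l_w and attribution window length l_a; real
    negatives are encoded with t_d = inf."""
    if t_d < l_w:
        return "positive"        # conversion occurs within the time window
    if t_d <= l_a:
        return "fake negative"   # converts only after the window closes
    return "real negative"       # never converts within the attribution window

assert sample_type(0.5, l_w=1.0, l_a=24.0) == "positive"
assert sample_type(3.0, l_w=1.0, l_a=24.0) == "fake negative"
assert sample_type(math.inf, l_w=1.0, l_a=24.0) == "real negative"
```

Setting `l_w` toward `0` turns every converting sample into a fake negative, while setting `l_w` toward `l_a` eliminates fake negatives, matching the observation above.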
We deliver unlabeled samples to build the dynamic graph and negative samples to train the model. We deliver positive samples to build the dynamic graph and train the model. **Comparison with Existing Data Pipelines.** We further analyze the differences in the data pipelines between our method and the state-of-the-art methods, i.e., FNW [10] and ES-DFM [12], to demonstrate our motivations. **(1)** FNW marks each sample as negative when it appears and delivers it instantly for training the model, e.g., sample \(s_{1}\) at time \(t_{1}\). This introduces numerous fake negatives into model training, which lowers label accuracy. In contrast, DGDFEM delivers each sample for model training after the time window ends. There is sufficient time for most samples to convert, i.e., DGDFEM achieves high label accuracy. **(2)** ES-DFM marks all samples as either positive or negative and delivers each sample after its time window, e.g., sample \(s_{1}\) at time \(t_{1}+l_{w}\). ES-DFM postpones the delivery of each sample until the end of the time window, which lowers data freshness. In contrast, DGDFEM first delivers samples instantly to build a dynamic graph, achieving the best data freshness. To this end, DGDFEM enjoys high label accuracy with the best data freshness. ### _Building the Dynamic Graph_ **Motivation of the Dynamic Graph.** Conversions correlate with data distribution changes. For example, when an item is on sale, users are more inclined to buy it. As a result, conversions associated with this item are more likely to occur. Leveraging a dynamic graph can better capture the data distribution changes, facilitating CVR prediction. We can represent users and items in delivered samples as a graph.

Fig. 2: The proposed DGDFEM framework. The solid arrowed lines present the framework workflow. The dashed arrowed lines separate the changes in the graph at different times.

Fig. 3: Data pipelines of FNW, ES-DFM, and DGDFEM.
In the graph, nodes are users and items, and edge attributes come from sample labels. Our dynamic graph explicitly models data distribution changes as variations of node attributes and edges. **Dynamic Graph Definition.** We build the dynamic graph upon a sequence of chronologically delivered samples, i.e., \(\mathcal{G}=\{s(t_{1}),s(t_{2}),...\}\) at times \(0\leq t_{1}\leq t_{2}\leq\ldots\). Besides, we set \(s=(u,i,y)\) where \(u\) and \(i\) are the user and item in sample \(s\), and \(y\in\{-1,0,1\}\) denotes the sample label, with \(-1\) denoting the unlabeled one, \(0\) denoting the negative one, and \(1\) denoting the positive one. **Building the Dynamic Graph.** Delivered samples are involved in _node-wise events_ and _edge-wise events_ to build the dynamic graph chronologically [22]. The former creates nodes or updates node features, and the latter adds or deletes edges. We represent a node-wise event by \(v_{j}(t)\), where \(j\) is the node index and \(v\) is the vector attribute associated with the sample. This event _only_ occurs for unlabeled samples as they carry the latest data information. In contrast, positive and negative samples are duplicates of previous unlabeled samples. If node \(j\) is not in \(\mathcal{G}\), the event creates it along with its attributes. Otherwise, the event updates its feature. Take Fig. 2 as an example. Unlabeled sample \(s_{2}\) is involved in two node-wise events at time \(t_{2}\), i.e., adding a new node \(u_{2}\) to the graph and updating the feature of node \(i_{2}\). We represent an edge-wise event as \(e_{uiy}(t)\), where \(y\) is the sample label. It occurs for unlabeled and positive samples, i.e., \(s\) with \(y\in\{-1,1\}\), as negative samples (\(s\) with \(y=0\)) may be fake negatives and represent wrong interactions. We keep at most \(m\) _recent_ edges for each node and delete the rest to improve efficiency. In Fig.
2, unlabeled sample \(s_{1}\) is first involved in an edge-wise event to add an unlabeled edge between user \(u_{1}\) and item \(i_{1}\) at time \(t_{1}\). It is then involved in another edge-wise event to connect user \(u_{1}\) and item \(i_{1}\) with a positive edge when it becomes positive at time \(t_{6}\). After describing node-wise and edge-wise events, we define the dynamic graph at time \(t\) as follows. In time interval \(T\), we denote temporal sets of nodes and edges as \(\mathcal{V}_{T}=\{j:\exists v_{j}(t)\in\mathcal{G},t\in T\}\) and \(\mathcal{E}_{T}=\{(u,i,y):\exists e_{uiy}(t)\in\mathcal{G},t\in T\}\). We denote a snapshot of dynamic graph \(\mathcal{G}\) at time \(t\) as \(\mathcal{G}_{t}=(\mathcal{V}_{[0,t]},\mathcal{E}_{[0,t]})\). We denote the neighbors of node \(j\) at time \(t\) as \(\mathcal{N}_{j}(t)=\{m:(j,m)\in\mathcal{E}_{[0,t]}\}\) and the \(k\)-hop neighborhood as \(\mathcal{N}_{j}^{k}(t)\). ### _Training CVR Prediction Model_ This section introduces training a CVR prediction model in DGDFEM. We use positive and negative samples for model training. For each positive sample, the model training process occurs before updating the dynamic graph to prevent the model from leveraging future information. We can further divide the whole training process into four steps. **Neighbor Sampling.** Online commercial systems require real-time services for recommendation and advertising [10, 23]. To balance model performance and response time, we take the user and item as seeds to sample \(k\)-hop neighbors from dynamic graph \(\mathcal{G}_{t}\). We use Fig. 2 to exemplify neighbor sampling with \(k=2\). For sample \(s_{3}\) delivered at time \(t_{7}\), its corresponding user \(u_{3}\) and item \(i_{3}\) are used to sample \(2\)-hop neighbors from the dynamic graph. To simplify the description, we continue to use \(\mathcal{G}_{t}\) to represent the sampled graph.
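The graph-building events described above (node-wise updates for unlabeled samples only, edge-wise events for unlabeled and positive samples, and the \(m\)-most-recent-edge cap) can be sketched as a toy in-memory structure; the class and method names are our own, not the paper's.

```python
from collections import defaultdict, deque

class DynamicGraph:
    """Toy continuous-time graph. Node-wise events (unlabeled samples,
    y = -1) create nodes or refresh their features; edge-wise events
    (y in {-1, 1}) add edges, keeping only the m most recent edges per
    node. Negative samples (y = 0) are skipped entirely, since they
    may be fake negatives."""
    def __init__(self, m: int):
        self.features = {}                            # node id -> latest attribute
        self.edges = defaultdict(lambda: deque(maxlen=m))

    def ingest(self, u, i, y, feat_u, feat_i, t):
        if y == -1:                                   # node-wise events
            self.features[u] = feat_u
            self.features[i] = feat_i
        if y in (-1, 1):                              # edge-wise event
            self.edges[u].append((i, y, t))
            self.edges[i].append((u, y, t))

    def neighbors(self, j):
        return [n for n, _, _ in self.edges[j]]

g = DynamicGraph(m=2)
g.ingest("u1", "i1", -1, [0.1], [0.2], t=1)   # s1 delivered unlabeled
g.ingest("u1", "i1", 1, [0.1], [0.2], t=6)    # s1 calibrated positive
g.ingest("u1", "i2", 0, [0.3], [0.4], t=7)    # negative: no graph update
print(g.neighbors("u1"))  # ['i1', 'i1']
```

The `deque(maxlen=m)` silently evicts the oldest edge once a node exceeds `m` edges, mirroring the efficiency cap above.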
**Node Embedding.** To describe nodes in the sampled graph, we embed them based on their attributes. Node features consist of sparse and dense features. The former is embedded through embedding matrices, whereas the latter is scaled into \([0,1]\) through min-max normalization. We use a multilayer perceptron (MLP) model to map them into the same dimension, as user and item features do not necessarily share the same dimension size. We define the embedding process as, \[e_{j}^{(0)}=\mathrm{MLP}([x_{j,1},x_{j,2},...,x_{j,m},...]),\] where \(x_{j,m}\) denotes the \(m\)-th processed feature of node \(j\in\mathcal{N}_{u}^{k}(t)\bigcup\mathcal{N}_{i}^{k}(t)\), \([\cdot]\) denotes the concatenation function, and \(e_{j}^{(0)}\) denotes the initial embedding of node \(j\). **HLGCN.** After processing node embeddings, we propose a novel graph convolutional method named HLGCN to gather features from neighbors. In our setting, a successful graph convolutional method should deal with non-conversion and conversion relationships in the dynamic graph. Traditional graph convolutional methods [24, 25] can only handle conversion relationships as they use low-pass filters to retain node commonalities. Our proposed HLGCN uses both _high-pass_ and _low-pass_ filters. The former captures node differences among non-conversion relationships, whereas the latter retains node commonalities among conversion relationships. It aggregates features from neighbors, as shown in (1). We introduce its implementation details later in Section III-D. \[e_{u}^{(l)}=g(e_{u}^{(0)},e_{u}^{(l-1)},\{e_{i}^{(l-1)}\mid i\in\mathcal{N}_{ u}(t)\}), \tag{1}\] where \(e_{u}^{(l)}\) is the aggregated embedding of node \(u\) from layer \(l\in\{1\ldots k\}\), and \(g(\cdot)\) denotes the graph convolution. **Distribution Debias.** Although we calibrate fake negatives in our data pipeline, these samples shift the actual data distribution as fake negatives are duplicated.
The gap between actual and biased data distribution results in training noises. Here, we leverage importance sampling [26] to alleviate the training noises. Compared with ES-DFM [12], we predict sample probabilities of being positive and fake negative to assist in learning CVR. We define the prediction process as, \[\hat{y}_{p} =\mathrm{Sigmoid}(\mathrm{MLP}_{p}([e_{u}^{(k)},e_{i}^{(k)}])),\] \[\hat{y}_{fn} =\mathrm{Sigmoid}(\mathrm{MLP}_{fn}([e_{u}^{(k)},e_{i}^{(k)}])),\] \[\hat{y}_{cvr} =\mathrm{Sigmoid}(\mathrm{MLP}_{cvr}([e_{u}^{(k)},e_{i}^{(k)}])).\] When updating the CVR prediction model, we treat \(\hat{y}_{fn}\) and \(\hat{y}_{p}\) as constant values, stop their backpropagation, and update these two prediction models separately. We optimize the CVR prediction model by minimizing the loss as, \[\mathcal{L}=-\sum_{(u,i,y_{ui})\in\mathcal{S}}(y_{ui}(1+\hat{y}_{fn}) \log(\hat{y}_{cvr}) \tag{2}\] \[+(1-y_{ui})(1+\hat{y}_{fn})(1-\hat{y}_{p}-\hat{y}_{fn})\log(1-\hat{y }_{cvr})),\] where \(\mathcal{S}\) is a set of positive and negative samples drawn from the data pipeline, and \(y_{ui}\) is the sample label, with \(y_{ui}=1\) denoting a positive sample and \(y_{ui}=0\) denoting a negative one. **Proposition 1**.: _The distribution debias can correct distribution shifts._ Proposition 1 argues that our distribution debias alleviates training noises resulting from fake negatives, which boosts CVR prediction accuracy. We give its detailed proof in Appendix -A. ### _HLGCN_ We have provided our motivation for HLGCN in Section III-C: it should be able to deal with non-conversion and conversion relationships in the dynamic graph. Here, we introduce its technical details. HLGCN calculates user preferences to determine each edge's filter type and weight, as conversions only depend on user preferences [7]. It then uses appropriate filters to exploit both types of relationships and aggregates neighbor features. Fig. 4 illustrates the aggregation process of HLGCN.
It first uses a high-pass filter for the edge between user \(u_{1}\) and item \(i_{1}\) at time \(t_{1}\) and then replaces it with a low-pass filter at time \(t_{6}\). This is because our data pipeline delivers sample \(s_{1}\) as positive at time \(t_{6}\). The edge between user \(u_{1}\) and item \(i_{1}\) changes from unlabeled to positive. In other words, the preference of user \(u_{1}\) on item \(i_{1}\) is maximized at time \(t_{6}\). **Filter Definition.** A high-pass filter \(F_{H}\) amplifies high-frequency signals, while a low-pass filter \(F_{L}\) magnifies low-frequency signals [27]. We state the definitions of filters \(F_{H}\) and \(F_{L}\) as, \[F_{H} =\epsilon I-D_{t}^{-1/2}A_{t}D_{t}^{-1/2},\] \[F_{L} =\epsilon I+D_{t}^{-1/2}A_{t}D_{t}^{-1/2},\] where \(A_{t}\) denotes the adjacency matrix of \(\mathcal{G}_{t}\), \(D_{t}\) denotes the diagonal degree matrix, \(I\) denotes the identity matrix, and \(\epsilon\) is a hyper-parameter determining the retention proportion of a node's initial embedding. We use \(F_{H}\) to capture node difference and \(F_{L}\) to retain node commonality. We define their aggregation process as, \[\hat{e}_{u}^{(l)} =F_{H}\cdot H_{t}=\epsilon\cdot e_{u}^{(0)}-\sum_{i\in\mathcal{N} _{u}(t)}e_{i}^{(l-1)}/\sqrt{d_{u}(t)d_{i}(t)}, \tag{3}\] \[\tilde{e}_{u}^{(l)} =F_{L}\cdot H_{t}=\epsilon\cdot e_{u}^{(0)}+\sum_{i\in\mathcal{N} _{u}(t)}e_{i}^{(l-1)}/\sqrt{d_{u}(t)d_{i}(t)},\] where \(H_{t}\) denotes node embeddings and \(d_{u}(t)\) denotes the degree of node \(u\). Besides, \(\hat{e}_{u}^{(l)}\) and \(\tilde{e}_{u}^{(l)}\) are the aggregated embeddings of node \(u\) at the \(l\)-th layer through \(F_{H}\) and \(F_{L}\), respectively. **Aggregating Features from Neighbors.** Since we sample \(k\)-hop neighbors from \(\mathcal{G}_{t}\), we use \(k\) layers of HLGCN to aggregate features from neighbors.
By combining (1) and (3), we define its graph convolution \(g(\cdot)\) as, \[e_{u}^{(l)} =g(e_{u}^{(0)},e_{u}^{(l-1)},\{e_{i}^{(l-1)}\mid i\in\mathcal{N} _{u}(t)\}) \tag{4}\] \[=\hat{w}_{ui}\cdot\hat{e}_{u}^{(l)}+\tilde{w}_{ui}\cdot\tilde{e}_{u}^{(l)}\] \[=\epsilon(\hat{w}_{ui}+\tilde{w}_{ui})\cdot e_{u}^{(0)}+\sum_{i\in\mathcal{N}_{u}(t)}(\tilde{w}_{ui}-\hat{w}_{ui})\cdot e _{i}^{(l-1)}/\sqrt{d_{u}(t)d_{i}(t)}\] \[=\epsilon\cdot e_{u}^{(0)}+\sum_{i\in\mathcal{N}_{u}(t)}p_{ui} \cdot e_{i}^{(l-1)}/\sqrt{d_{u}(t)d_{i}(t)},\] where \(\hat{w}_{ui}\) and \(\tilde{w}_{ui}\) denote the weights of the high-pass and low-pass filters for edge \(e_{uiy}(t)\). We set \(\hat{w}_{ui}+\tilde{w}_{ui}=1\) as these two types of filters share the same processing on initial node embeddings. We let the user preference for the item, \(p_{ui}\), determine the types and weights of the used filters, i.e., \(\tilde{w}_{ui}-\hat{w}_{ui}=p_{ui}\). The above definition enables HLGCN to use appropriate filters to exploit conversion and non-conversion relationships. **(1)** When \(p_{ui}<0\), it uses \(F_{H}\) to capture node difference. The weight of \(F_{H}\) increases as \(p_{ui}\) becomes lower. **(2)** When \(p_{ui}>0\), it uses \(F_{L}\) to retain node commonality. The weight of \(F_{L}\) increases as \(p_{ui}\) increases. **(3)** When \(p_{ui}\approx 0\), it only keeps initial node embeddings. **Calculating User Preference.** Selecting a proper filter for each edge requires a priori knowledge of user preference. However, user preference varies over time, e.g., a user may gradually lose interest in an item, decreasing conversion probability. Therefore, HLGCN calculates user preferences each time to support filter selection.

Fig. 4: An illustration of the used filter for each edge in HLGCN. The shades of node colors in the dynamic graph reflect feature changes over time. The filter color reflects the change in filter type, and the filter thickness demonstrates the weight over time.
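A minimal sketch of one HLGCN layer following this combined update, \(e_{u}^{(l)}=\epsilon\cdot e_{u}^{(0)}+\sum_{i}p_{ui}\cdot e_{i}^{(l-1)}/\sqrt{d_{u}d_{i}}\), assuming the per-edge preference \(p_{ui}\in[-1,1]\) is already given (the paper derives it from ConvE for unlabeled edges and fixes it to 1 for positive edges); function and argument names are illustrative.

```python
import math

def hlgcn_layer(e0_u, neigh_embs, neigh_degs, prefs, d_u, eps):
    """One HLGCN layer for node u: keep eps of the initial embedding,
    then add each neighbor embedding weighted by its preference p_ui
    and the symmetric degree normalization. p < 0 acts as the
    high-pass filter, p > 0 as the low-pass filter, p ~ 0 keeps only
    the initial embedding."""
    out = [eps * x for x in e0_u]
    for emb, d_i, p in zip(neigh_embs, neigh_degs, prefs):
        norm = math.sqrt(d_u * d_i)
        for idx, x in enumerate(emb):
            out[idx] += p * x / norm
    return out

# One neighbor, positive edge (p = 1): low-pass, adds the neighbor in
print(hlgcn_layer([1.0], [[2.0]], [1], [1.0], d_u=1, eps=0.5))  # [2.5]
# Same neighbor with p = -1: high-pass, subtracts the neighbor
print(hlgcn_layer([1.0], [[2.0]], [1], [-1.0], d_u=1, eps=0.5))  # [-1.5]
```

Flipping only the sign of `p` switches the edge between the low-pass and high-pass behavior, which is exactly the blending that \(\tilde{w}_{ui}-\hat{w}_{ui}=p_{ui}\) encodes.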
_For unlabeled edges_, ConvE [28], an effective knowledge representation learning model, fits our scenario. Since aggregated user and item embeddings in graph neural networks represent their general preferences and properties [29, 30], it can transform user embedding \(e_{u}^{(l)}\) into preference \(p_{ui}\) through item embedding \(e_{i}^{(l)}\). We define the preference calculation process as, \[p_{ui} =\tanh(c_{ui}),\] \[c_{ui} =\mathrm{ConvE}(e_{u}^{(l)},e_{i}^{(l)})=\mathrm{Flatten}( \mathrm{CNN}(f(e_{u}^{(l)},e_{i}^{(l)})))W_{p},\] where \(f(\cdot)\) is a 2D reshaping and concatenation function, which converts \(e_{u}^{(l)}\) and \(e_{i}^{(l)}\) into two matrices and concatenates them row by row. \(\mathrm{CNN}\) denotes a 2D convolutional layer, \(\mathrm{Flatten}(\cdot)\) reshapes its output into a vector, \(W_{p}\) denotes a projection matrix, and \(\tanh\) is the hyperbolic tangent function, which maps the result into \([-1,1]\). _For positive edges_, we set \(p_{ui}=1\) as \(e_{uiy}(t)\) is formed by the conversion between user \(u\) and item \(i\), i.e., the preference is maximized. ## IV Experiments In this section, we conduct empirical studies on three large-scale datasets to answer the following four research questions. **RQ1**: How does DGDFEM perform, compared with state-of-the-art methods for delayed feedback modeling (Section IV-B)? **RQ2**: Are the data pipeline and HLGCN beneficial in DGDFEM (Section IV-C)? **RQ3**: What are the impacts of different parameters on DGDFEM (Section IV-D)? **RQ4**: How does HLGCN deal with conversion and non-conversion relationships in our dynamic graph (Section IV-E)? ### _Experimental settings_ **Datasets.** We conduct experiments on three industrial datasets, i.e., _TENCENT_, _CRITEO2_, and _CIKM_. The statistics of these datasets are shown in Table I. TENCENT is an industry dataset for social advertising competition1. It includes 14 days of click and conversion logs. We exclude data from the last two days. 
CRITEO2 is a _newly-released_ industry dataset for the sponsored search of Criteo [4]. It represents the Criteo live traffic data over 90 days and additionally publishes feature names, _which enables graph construction_. CIKM is a competition dataset used in CIKM 2019 Ecomm AI2. It provides a sample of user behavior data in Taobao within 14 days. We keep the first click and conversion for each pair of user and item to adapt to the delayed feedback problem. Footnote 1: [https://algo.qq.com/?lang-en](https://algo.qq.com/?lang-en) Footnote 2: [https://tianchi.aliyun.com/competition/entrance/231721/information?lang-en](https://tianchi.aliyun.com/competition/entrance/231721/information?lang-en) We divide all datasets into two parts evenly according to the click time of each sample. We use the first part to pretrain CVR prediction models offline and divide the second part into multiple pieces by hours. We train models on the \(t\)-th hour's data and test them on the \((t+1)\)-th hour's data. We reconstruct the training data in the second part according to different data pipelines and leave the evaluation data unchanged. We report the _average_ performance across hours on the evaluation data. Note that, among all datasets, _CRITEO2_ has the highest proportion of delayed conversions. **Comparison Methods.** We implement the following competitive methods as competitors for our proposed approach and categorize these methods into three classes. **(1) Baselines**. We implement two baseline methods, namely **PRETRAIN** and **NODELAY**. PRETRAIN and NODELAY share the same model structure with DFM methods. We train PRETRAIN offline and use it for online evaluation directly. We train NODELAY with ground-truth labels and take its performance as the upper bound of all DFM methods as there is no delayed conversion. **(2) Delayed Feedback Modeling (DFM)**. We implement five delayed feedback modeling methods, including **FNW**[10], **FSIW**[11], **ES-DFM**[12], **DEFER**[13], and **DEFUSE**[14].
These methods are widely deployed in online commercial systems. We use precisely the same data pipelines and loss functions as their original papers. **(3) Graph Neural Network (GNN)**. We implement three graph neural networks, including **NGCF**[29], **FAGCN**[27], and **TGN-attn**[22]. NGCF refines user and item representations via information from multi-hop neighbors. FAGCN uses high-pass and low-pass filters to model assortative and disassortative networks. Since NGCF and FAGCN are _static_ graph neural networks, we build their graphs with offline training data and train their models with the online data stream. TGN-attn, a _dynamic_ graph neural network, has the best performance among all variants of TGN. We use positive samples to build the dynamic graph for TGN-attn, avoiding the damage of fake negatives. **Evaluation Metrics.** We adopt two widely-used metrics for CVR evaluation, which describe model recommendation performance from different perspectives. **(1) Area Under ROC Curve (AUC)**. AUC [31] is widely used in CVR predictions. It measures the ranking performance between conversions and non-conversions; the higher, the better. **(2) Negative Log Likelihood (NLL)**. NLL [32] is sensitive to the absolute value of CVR prediction. It emphasizes the quality of the probabilistic model; the lower, the better. Following ES-DFM [12] and DEFER [13], we compute the relative improvement of our model over PRETRAIN as follows. We take these results to demonstrate the performance of DGDFEM straightforwardly; the higher, the better. \[\%\mathrm{Improv.}=\frac{\mathrm{Metric}_{\texttt{DGDFEM}}-\mathrm{Metric }_{\texttt{PRETRAIN}}}{\mathrm{Metric}_{\texttt{NODELAY}}-\mathrm{Metric}_{ \texttt{PRETRAIN}}}.\] **Experimental Settings.** We introduce experimental settings as follows. We use Adam [33] as an optimizer with a learning rate of \(0.0001\). We set the batch size to \(1,024\) for all competitors.
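The %Improv. metric above normalizes a model's gain over PRETRAIN by the PRETRAIN-to-NODELAY gap; the one-liner below illustrates it with made-up AUC numbers (not results from the paper).

```python
def relative_improvement(m_model: float, m_pretrain: float, m_nodelay: float) -> float:
    """Fraction of the PRETRAIN-to-NODELAY gap closed by the model;
    1.0 means matching the NODELAY upper bound, and values above 1.0
    mean surpassing it."""
    return (m_model - m_pretrain) / (m_nodelay - m_pretrain)

# Illustrative AUCs: PRETRAIN 0.80, NODELAY 0.85, model 0.84
# -> the model closes 80% of the gap
print(relative_improvement(0.84, 0.80, 0.85))
```

For NLL, where lower is better, the same formula still yields a positive score whenever the model moves from PRETRAIN toward NODELAY, since numerator and denominator share the sign.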
We repeat experiments ten times with different random seeds and report their average results. Following ES-DFM [12], we search the time window length \(l_{w}\) starting from \(0.25\) hours. We set the maximum time window length for searching to \(24\) hours as it is the common attribution window length in many business scenarios [13]. We optimize all hyperparameters carefully through grid search for all methods to give a fair comparison. The consumed resources vary with methods and hyperparameters. ### _Overall Performance Comparisons (RQ1)_ Table II shows the comparison results on all datasets. From it, we observe that: **(1)** In DFM methods, FNW and FSIW are inferior to ES-DFM as they focus on either data freshness or label accuracy. ES-DFM, DEFER, and DEFUSE achieve superior performance among DFM methods on all datasets as they balance accuracy and freshness. These results demonstrate that data freshness and label accuracy are essential for delayed feedback modeling. **(2)** GNN methods generally outperform DFM methods as GNN can better represent user preferences and item properties. Among GNN methods, NGCF surprisingly outperforms TGN-attn on _CRITEO2_. This result demonstrates that leveraging a dynamic graph does not always benefit delayed feedback modeling, as the samples used to build the dynamic graph are delayed and mislabeled. **(3)** DGDFEM is superior to both DFM and GNN methods. It even outperforms NODELAY on _CRITEO2_ and _CIKM_. These results demonstrate that our framework contributes to delayed feedback modeling as it obtains accurate conversion feedback and keeps the CVR prediction model fresh. ### _Ablation Study (RQ2)_ We verify the effectiveness of different components in DGDFEM via an ablation study. We implement variants of our proposed method. **(1)**_Pretrain_ and _Oracle_ are the lower and upper bounds of DGDFEM. _Pretrain_ replaces the dynamic graph with a static graph built with offline training data.
_Oracle_ builds the dynamic graph and trains models with ground-truth labels. **(2)** When building the dynamic graph, _w/o Unlabel_ removes unlabeled samples, while _w/o POS_ removes positive samples. **(3)**_GCN_[24] and _GAT_[25] are two substitutes for HLGCN. _w/o ConvE_ replaces ConvE by two linear layers with Leaky ReLU [34] in HLGCN. **(4)**_w/o DD_ replaces distribution debias with the binary cross entropy loss. From Table III, we conclude: **(1)** DGDFEM largely outperforms _Pretrain_ and performs closer to _Oracle_. _Pretrain_ cannot capture frequent data distribution changes. _Oracle_ degrades our data pipeline, HLGCN, and distribution debias, as they are specially designed for delayed feedback modeling, limiting the upper bound of our performance. **(2)** For data pipelines, depending on positive or unlabeled samples alone cannot achieve the best recommendation performance. The delivered positive samples are critical in the E-commerce scenario, i.e., _CIKM_, as positive behaviors strongly reflect user preference. In contrast, the delivered unlabeled samples are necessary for the scenario with highly delayed conversions, i.e., _CRITEO2_. **(3)** For HLGCN, high-pass and low-pass filters are indispensable to achieve ideal performance. The underperformance of _GAT_ and _GCN_ demonstrates that conventional graph neural networks are unsuitable for our dynamic graph, as they only use low-pass filters. The difference between _w/o ConvE_ and DGDFEM demonstrates that \(\mathrm{ConvE}\) can better calculate the user preference for the item. **(4)** The result of _w/o DD_ shows that our distribution debias benefits model performance, as it corrects the distribution shifts. **(5)** All components in DGDFEM are essential, and our proposed data pipeline is the most crucial part for delayed feedback modeling. ### _Parameter Analyses (RQ3)_ We evaluate the impacts of different model parameters on DGDFEM and state-of-the-art methods with _CRITEO2_.
_CRITEO2_ best reflects the delayed feedback problem as it has the highest proportion of delayed conversions. From Fig. 5, we conclude: **(1)** For time window length \(l_{w}\), all methods except DGDFEM suffer performance degradation as \(l_{w}\) increases. This result demonstrates that DGDFEM successfully solves the trade-off between data freshness and label accuracy. In DGDFEM, we can either set a high \(l_{w}\) to increase model performance or use a low \(l_{w}\) to save resources. **(2)** For hop number \(k\), the performance of all methods except FAGCN slightly improves as \(k\) increases. This indicates that signals from deeper neighbors are less beneficial for delayed feedback modeling than in other scenarios. **(3)** For neighbor number \(m\), the performance of DGDFEM, TGN-attn, and FAGCN has no significant relationship with \(m\), whereas the performance of NGCF slightly improves as \(m\) increases. This indicates that it is unnecessary to set a large \(m\) for delayed feedback scenarios. **(4)** For retention proportion \(\epsilon\), DGDFEM and FAGCN perform much worse with \(\epsilon=0\), indicating that only depending on neighbor features is insufficient. DGDFEM prefers a higher retention proportion of initial node embeddings, whereas FAGCN prefers a lower one. Our proposed HLGCN is intrinsically different from FAGCN, although they both use high-pass and low-pass filters. FAGCN aims to capture node differences or retain node commonalities among conversion relationships. In contrast, HLGCN aims to deal with conversion and non-conversion relationships in the dynamic graph. ### _Case Study (RQ4)_ As motivated in Section III-D, HLGCN should use a low-pass filter for an edge with high conversion possibility, and vice versa. It calculates user preferences to _directly_ determine the filter weight and type (see (4)). Thus, the CVR of the system should correlate with the average user preferences as time elapses.
To illustrate this clearly, we use a sliding window with different lengths to average the CVR and filter weights within the window. We set the window length to 8 hours for both _TENCENT_ and _CIKM_, and 72 hours for _CRITEO2_. From Fig. 6, we can observe that the average user preferences correlate with the CVR. This phenomenon is particularly significant on _TENCENT_ and _CIKM_. These results demonstrate that HLGCN can use correct filters to deal with non-conversion and conversion relationships. ## V Conclusion As the first work to exploit graph neural networks in delayed feedback modeling, DGDFEM successfully solves the trade-off between label accuracy and data freshness. The framework of DGDFEM includes three stages, i.e., preparing the data pipeline, building the dynamic graph, and training the CVR prediction model. We further propose a novel graph convolutional method named HLGCN. The dynamic graph captures frequent data distribution changes, and HLGCN leverages appropriate filters for the edges in the graph. We conduct extensive experiments on three large-scale industry datasets, and the results validate the success of our framework. ## Acknowledgment This work was supported in part by the National Key R&D Program of China (No. 2022YFF0902704) and the "Ten Thousand Talents Program" of Zhejiang Province for Leading Experts (No. 2021R52001).

Fig. 5: Performance with different parameters.

Fig. 6: Filter and CVR variations as time elapses.
2310.19322
Progressive Neural Network for Multi-Horizon Time Series Forecasting
In this paper, we introduce ProNet, a novel deep learning approach designed for multi-horizon time series forecasting, adaptively blending autoregressive (AR) and non-autoregressive (NAR) strategies. Our method involves dividing the forecasting horizon into segments, predicting the most crucial steps in each segment non-autoregressively, and the remaining steps autoregressively. The segmentation process relies on latent variables, which effectively capture the significance of individual time steps through variational inference. In comparison to AR models, ProNet showcases remarkable advantages, requiring fewer AR iterations, resulting in faster prediction speed, and mitigating error accumulation. On the other hand, when compared to NAR models, ProNet takes into account the interdependency of predictions in the output space, leading to improved forecasting accuracy. Our comprehensive evaluation, encompassing four large datasets, and an ablation study, demonstrate the effectiveness of ProNet, highlighting its superior performance in terms of accuracy and prediction speed, outperforming state-of-the-art AR and NAR forecasting models.
Yang Lin
2023-10-30T07:46:40Z
http://arxiv.org/abs/2310.19322v2
# ProNet: Progressive Neural Network for Multi-Horizon Time Series Forecasting

###### Abstract

In this paper, we introduce ProNet, a novel deep learning approach designed for multi-horizon time series forecasting, adaptively blending autoregressive (AR) and non-autoregressive (NAR) strategies. Our method involves dividing the forecasting horizon into segments, predicting the most crucial steps in each segment non-autoregressively, and the remaining steps autoregressively. The segmentation process relies on latent variables, which effectively capture the significance of individual time steps through variational inference. In comparison to AR models, ProNet showcases remarkable advantages, requiring fewer AR iterations, resulting in faster prediction speed, and mitigating error accumulation. On the other hand, when compared to NAR models, ProNet takes into account the interdependency of predictions in the output space, leading to improved forecasting accuracy. Our comprehensive evaluation, encompassing four large datasets, and an ablation study, demonstrate the effectiveness of ProNet, highlighting its superior performance in terms of accuracy and prediction speed, outperforming state-of-the-art AR and NAR forecasting models.

time series forecasting; deep learning; Transformer; variational inference

## Introduction

Time series forecasting has had a wide range of applications in the industrial domain for decades, including predicting electricity load, renewable energy generation, stock prices, traffic flow, and air quality [1]. Many methods have been developed for this task; they can be classified into two broad categories. In the early years, statistical models such as Auto-regressive Integrated Moving Average (ARIMA) and State Space Models (SSM) [2] were widely used by industry forecasters. However, they fit each time series independently and are not able to infer shared patterns from related time series [3].
On the other hand, machine learning methods have been developed for modelling the non-linearity in time series data. Early methods include random forests [4], Support Vector Machines (SVM) [5] and Bayesian methods [6]. Moreover, recent research has widely acknowledged the effectiveness of time series decomposition and ensemble learning methods in refining forecasting models [7, 8, 6, 9]. Ensemble learning methods have gained recognition for their ability to combine individual models and enhance overall predictive performance while minimizing overfitting. Du et al. [6] develop an ensemble strategy that takes advantage of highly diverse statistical, machine learning and deep learning methods, and assigns time-varying weights to model candidates with Bayesian optimization, to avoid the limitations of a single model choice and alleviate the risk of overfitting. Similarly, Gao et al. [10] introduced an online dynamic ensemble of deep random vector functional link with three stages for improved performance. Decomposition-based methods have also shown promise in time series forecasting by breaking down the data into underlying components, leading to more accurate and manageable predictions. Different decomposition approaches, such as classical decomposition, moving averages, and state space models, have been explored. For instance, Li et al. [11] proposed a convolutional neural network ensemble method that leverages decomposed time series and batch normalization layers to reduce subject variability. Wang et al. [12] proposed a fuzzy cognitive map to produce interpretable results by forecasting the decompositional components: trend, fluctuation range, and trend persistence. Lin et al. [13] developed SSDNet, employing a Transformer architecture to estimate state space model parameters and provide the time series decomposition components: trend and seasonality. Tong et al.
[14, 15] introduced the Probabilistic Decomposition Transformer with hierarchical mechanisms to mitigate cumulative errors and a conditional generative approach for time series decomposition. Furthermore, Wang et al. [9] introduced the ternary interval decomposition ensemble learning method, addressing limitations of point and interval forecasting models. The amalgamation of machine learning models, time series decomposition, and ensemble learning has demonstrated great promise as a potent solution for advancing forecasting performance. Notably, the philosophy of decomposition and ensemble can seamlessly integrate with major machine learning models, further enhancing their effectiveness in various applications. Recently, a sub-class of machine learning methods - deep learning - has been widely studied for forecasting tasks due to its strong ability to model complex and related dependencies from large-scale time series. Existing deep learning methods can be divided into AutoRegressive (AR) and Non-AutoRegressive (NAR) models from the perspective of how they make multi-step forecasts. Notable examples of AR models include DeepAR [16], DeepSSM [17], DeepFactor [18], DAttAE [19], LogSparse Transformer [3], the visibility graph model [20], RDNM-ANN [21] and TimeGrad [22]. MQ-RNN [23], N-BEATS [24], AST [25] and Informer [26] are the prominent NAR methods. AR forecasting models suffer from slow inference speed and error accumulation due to the use of a recursive method that uses previously predicted values to make future forecasts. AR models are usually trained with the teacher-forcing mechanism, which feeds the ground truth into the model in place of previous predictions during training. This causes a discrepancy between training and prediction, and can cause unsatisfactory accuracy for long forecasting horizons [27, 25].
In contrast, NAR forecasting models overcome the aforementioned problems since they generate all predictions within the forecasting horizon simultaneously. However, NAR models ignore interdependencies in the output space, and such an assumption violates the real data distribution in sequence generation tasks [28, 29]. This may result in unrelated forecasts over the prediction horizon and accuracy degradation [30, 25]. Empirically, AR methods were found to be better for shorter horizons but outperformed by NAR for longer horizons due to error accumulation [30]. Thus, both AR and NAR models have their own complementary strengths and limitations for multi-horizon forecasting, which stem from their prediction strategies. Recently, NAR models have been proposed specifically for translation tasks that alleviate the accuracy degradation by performing dependency reduction in the output space, reducing the difficulty of training [28, 31, 32, 29]. However, such studies are scarce for time series forecasting tasks. A balance must be struck between AR and NAR forecasting models to tackle the challenges of error accumulation and high latency in AR models, alongside the NAR models' inability to adequately capture interdependencies within the output space. Recent strides in this domain have illuminated the advantages of incorporating dependency and positional information within the prediction horizon. These breakthroughs have exhibited their efficacy across a spectrum of sequence modeling tasks. For instance, Ran et al. [33] have ingeniously integrated future predictions to overcome the multi-modality predicament in neural machine translation. In a parallel vein, Fei [34] and Zhou et al. [35] have skillfully amalgamated information from future time steps to generate past predictions, exemplified in the context of caption generation. Furthermore, Han et al.
[36] have introduced a diffusion-based language model with bidirectional context updates, adding a notable dimension to the evolving landscape of research in this field. To address these challenges and capitalize on the strengths of both AR and NAR modeling, we introduce the Progressive Neural Network (ProNet), a novel deep learning approach designed for time series forecasting. ProNet strategically navigates the AR-NAR trade-off, leveraging their respective strengths to mitigate error accumulation and slow prediction while effectively modeling dependencies within the target sequence. Specifically, ProNet adopts a partially AR prediction strategy by segmenting the forecasting horizon. It predicts a subset of steps within each segment non-autoregressively, while maintaining an autoregressive decoding process for the remaining steps. Fig. 1 illustrates the AR, ProNet's partially AR, and NAR decoding mechanisms. For example, when the AR decoder considers step \(t+4\) dependent on steps from \(t\) to \(t+3\), the NAR decoder assumes no dependency. In contrast, ProNet's partially AR decoder takes into account dependencies from past steps \(t\), \(t+1\), \(t+3\), as well as future step \(t+5\). The initiation of horizon segments is determined by latent variables, whose training is optimized through variational inference to capture the significance of each step. Consequently, in comparison to AR models, ProNet's predictions require fewer iterations, enabling it to overcome error accumulation while achieving faster testing speeds. Moreover, compared to NAR models, ProNet excels in capturing dependencies within the target space. The main contributions of our work are as follows:

1. We propose ProNet, a partially AR time series forecasting approach that generates predictions of multiple steps in parallel to leverage the strengths of AR and NAR models.
Our ProNet assumes an alternative dependency in the target space and incorporates information from the further future to generate forecasts.

2. We evaluate the performance of ProNet on four time series forecasting tasks and show the advantages of our model against most state-of-the-art AR and NAR methods with fast and accurate forecasts. An ablation study confirmed the effectiveness of the proposed horizon-dividing strategy.

## Related Work

Recent advancements in forecasting methodologies have led to the emergence of NAR forecasting models [23, 24, 25, 26]. These models seek to address the limitations of AR models by eschewing the use of previously generated predictions and instead making all forecasts in a single step.

Fig. 1: Illustration of the AR, ProNet's partially AR, and NAR decoding processes: 1) the AR decoder forecasts with covariates and all previous predictions; 2) the NAR decoder forecasts all steps in parallel with covariates only; 3) our partially AR decoder divides the horizon into segments (indicated by red dashed lines); each segment is predicted autoregressively with covariates and previous predictions of all segments, while the predictions of the segments can be made simultaneously.

However, the effectiveness of NAR forecasting models is hindered by their assumption of non-interdependency within the target space. This assumption arises from the removal of AR connections from the decoder side, leading to the estimation of separate conditional distributions for each prediction independently [28, 31, 32, 29]. While both AR and NAR models have proven successful in forecasting applications, AR methods tend to excel for shorter horizons, while NAR methods outperform AR for longer horizons due to error accumulation [30]. Unlike AR models, NAR models offer the advantage of parallelizable training and inference processes. However, their output may present challenges due to the potential generation of unrelated forecasts across the forecast horizon.
This phenomenon could lead to discontinuous and unrealistic forecasts [30], as the incorrect assumption of independence prevents NAR models from effectively capturing interdependencies between each prediction. Several research efforts [28, 31, 32, 29] have been made to enhance NAR models, although most of them have focused on Neural Machine Translation (NMT) tasks. Gu et al. [28] introduced the NAR Transformer model, which reduces output dependencies by incorporating fertilities and leveraging sequence-level knowledge distillation techniques [37, 38]. Recent developments have seen the adaptation of NAR models for translation tasks, mitigating accuracy degradation by tackling output space dependencies. This approach aims to capture and manage dependencies, thereby alleviating training challenges [28, 31, 32]. Notably, knowledge distillation [37, 38] emerges as a highly effective technique to enhance NAR model performance. The trade-off between AR and NAR [39, 33, 34, 35] has been a subject of exploration, particularly in the context of NMT and other sentence generation tasks. Notable instances include the works of [39, 35], which retain AR properties while enabling parallel prediction of multiple successive words. Similarly, [33, 34] employ a strategy that generates translation segments concurrently, each being generated autoregressively. However, prior approaches have relied on dividing the target sequence into evenly distributed segments, assuming fixed dependencies among time steps. This assumption, while applicable in some contexts, proves unsuitable for time series forecasting due to the dynamic and evolving nature of real-world time series data. For instance, in Fig. 2, we visualize the partial correlation of two distinct days (comprising 20 steps each) from the Sanyo dataset. Evidently, the two plots exhibit varying dependency patterns, signifying that the most influential time steps differ between the two cases.
Additionally, it becomes apparent that future steps can exert substantial influence on preceding steps. Take Fig. 2 (a) as an example, where step 5 exhibits strong partial correlation with steps 17 and 18. This correlation suggests that incorporating information from steps 17 and 18 while predicting step 5 could be highly beneficial. In this work, we present ProNet, which navigates the intricate balance between AR and NAR models. We extend previous work with several key enhancements: 1) assuming a non-fixed dependency pattern, identifying the time steps that need to be predicted first via latent factors, and then predicting further groups of steps autoregressively; 2) assuming an alternative time-varying dependency and incorporating future information into forecasting; 3) introducing a carefully designed masking mechanism to train the model non-autoregressively.

## Problem Formulation

Given are: 1) a set of \(N\) univariate time series (solar or electricity series) \(\left\{\mathbf{Y}_{i,1:T_{l}}\right\}_{i=1}^{N}\), where \(\mathbf{Y}_{i,1:T_{l}}:=[y_{i,1},y_{i,2},...,y_{i,T_{l}}]\), \(T_{l}\) is the input sequence length, and \(y_{i,t}\in\Re\) is the value of the \(i\)th time series at time \(t\); 2) a set of associated time-based multi-dimensional covariate vectors \(\left\{\mathbf{X}_{i,1:T_{l}+T_{h}}\right\}_{i=1}^{N}\), where \(T_{h}\) is the forecasting horizon length and \(T_{l}+T_{h}=T\). Our goal is to predict the future values of the time series \(\left\{\mathbf{Y}_{i,T_{l}+1:T_{l}+T_{h}}\right\}_{i=1}^{N}\), i.e. the PV power or electricity usage for the next \(T_{h}\) time steps after \(T_{l}\).
AR forecasting models produce the conditional probability of the future values: \[p\left(\mathbf{Y}_{i,T_{l}+1:T_{l}+T_{h}}\mid\mathbf{Y}_{i,1:T_{l}}, \mathbf{X}_{i,1:T_{l}+T_{h}};\theta\right) \tag{1}\] \[= \prod_{t=T_{l}+1}^{T_{l}+T_{h}}p\left(y_{i,t}\mid\mathbf{Y}_{i,1 :t-1},\mathbf{X}_{i,1:t};\theta\right),\] where the input of the model at step \(t\) is the concatenation of \(y_{i,t-1}\) and \(x_{i,t}\), and \(\theta\) denotes the model parameters. For NAR forecasting models, the conditional probability can be modelled as: \[p\left(\mathbf{Y}_{i,T_{l}+1:T}\mid\mathbf{Y}_{i,1:T_{l}}, \mathbf{X}_{i,1:T};\theta\right) \tag{2}\] \[= \prod_{t=T_{l}+1}^{T}p\left(y_{i,t}\mid\mathbf{Y}_{i,1:T_{l}}, \mathbf{X}_{i,1:T};\theta\right)\] Table I presents a comparison of available information for predicting step \(t+1\) using AR and NAR forecasting methods. Both AR and NAR methods have access to covariates and ground truth from the past. However, there is a distinction in the scope of information they can utilize. The AR method can only make use of covariates and ground truth up to time step \(t\), whereas NAR methods can utilize all covariates within the forecasting horizon but do not have access to ground truth.

Fig. 2: Partial correlation of the Sanyo set for two different days (20 time steps for each day).

Specifically, ProNet produces the conditional probability distribution of the future values, given the past history: \(p\left(\mathbf{Y}_{i,T_{l}+1:T}\mid\mathbf{Y}_{i,1:T_{l}},\mathbf{X}_{i,1:T}; \theta\right)\), where the input of the model at step \(t\) is the concatenation of \(y_{i,t-1}\) and \(x_{i,t}\), and \(\theta\) denotes the model parameters. The models are applicable to all time series, so the subscript \(i\) will be omitted in the rest of the paper for simplicity.
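The contrast between the factorizations in (1) and (2) can be made concrete with a toy one-step model standing in for the network (a minimal sketch; `ar_decode`, `nar_decode`, and the lambda `model` are illustrative names, not part of ProNet):

```python
import numpy as np

def ar_decode(model, y_last, x_future):
    """AR (Eq. 1): each step conditions on the previous prediction."""
    preds, prev = [], y_last
    for x_t in x_future:
        prev = model(prev, x_t)  # prediction is fed back, so errors can accumulate
        preds.append(prev)
    return np.array(preds)

def nar_decode(model, y_last, x_future):
    """NAR (Eq. 2): every step conditions only on history and covariates."""
    return np.array([model(y_last, x_t) for x_t in x_future])

# toy one-step "model": next value is half the conditioning value plus the covariate
model = lambda prev, x_t: 0.5 * prev + x_t

ar_decode(model, 1.0, [0.0, 0.0, 0.0])   # decays step by step: 0.5, 0.25, 0.125
nar_decode(model, 1.0, [0.0, 0.0, 0.0])  # identical steps: 0.5, 0.5, 0.5
```

The AR rollout drifts with its own outputs, while the NAR rollout stays anchored to the last observation, mirroring the error-accumulation versus output-independence trade-off discussed above.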
## Progressive Neural Network

In this section, we first present the architecture of ProNet and then explain its details in four sections: 1) the _partially AR forecasting mechanism_ that overcomes the limitations of AR and NAR decoders, 2) _progressive forecasting_ to correct inaccurate predictions made at early stages, 3) the _progressive mask_ that implements the previous two mechanisms for the Transformer model, and 4) _variational inference_ to generate the latent variables with dependency information that serve the partially AR forecasting mechanism.

### Architecture

Fig. 3 illustrates the architecture of ProNet, a partially AR time series forecasting model that uses latent variables to model the uncertainty in the target space. ProNet comprises four core components: an encoder, a forecasting decoder, a prior model denoted as \(p_{\theta}(z\mid x)\), and a posterior model denoted as \(q_{\phi}(z\mid y,x)\). During each training iteration, the feedforward process unfolds through four stages:

1. Encoder for Pattern Extraction: The encoder analyzes patterns from preceding time steps, contributing valuable insights to all decoders.
2. Significance Assessment by Posterior Model: The posterior model \(q_{\phi}(z\mid y,x)\) integrates both ground truth and covariates, effectively discerning the significance of time steps within the forecasting horizon. This assessment identifies pivotal steps, subsequently used to segment the forecasting horizon.
3. Significance Assessment by Prior Model: A separate prior model \(p_{\theta}(z\mid x)\) employs covariates to predict the importance of time steps within the horizon. The outputs of this prior model are meticulously calibrated to closely approximate the posterior model's outcomes.
4. Decoding and Forecast Generation: The decoder \(p(y\mid x,z)\) employs the ground truth, covariates, and the output of the posterior model \(q_{\phi}(z\mid y,x)\) to segment the forecasting horizon into distinct segments for accurate forecast generation.
During the inference phase, the posterior model is omitted, and the prior model seamlessly takes on its role, facilitating accurate predictions. Notably, in the absence of ground truth during prediction, the decoder employs past predictions to generate forecasts. As the architectural backbone of ProNet, we adopt the Informer architecture [26]; however, it is pertinent to highlight that alternative Transformer-based architectures can be seamlessly integrated into the ProNet framework. ProNet's efficacy remains pronounced even when employing a vanilla Transformer as its architectural backbone. In summary, the prior and posterior models are trained employing a _variational inference_ approach, facilitating the identification of pivotal steps for the decoder's operation. The decoder employs _progressive masks_, realizing the _partial and progressive forecasting_ strategies. The implementation details of these components are elaborated upon in the subsequent sections.

### Partial Autoregressive Forecasting

Our ProNet makes predictions by combining AR and NAR decoding mechanisms. To facilitate efficient predictions, we introduce a multi-step prediction strategy organized into segments. Specifically, we divide the forecasting horizon into \(n_{g}\) segments and make predictions for the starting positions of each segment, denoted by \(S_{1}=[s_{1},s_{2},...,s_{n_{g}}]\). Subsequently, we employ an autoregressive approach to forecast the subsequent steps of each segment, specifically \(S_{2}=[s_{1}+1,s_{2}+1,...,s_{n_{g}}+1]\), in parallel. This process continues iteratively until all forecasted steps within the horizon are generated. Notably, the initial position of the first segment is set to the first step (\(s_{1}=1\)). The length of each segment is determined as the difference between the starting positions of two consecutive segments, denoted as \(T_{i}=s_{i+1}-s_{i}\) where \(s_{n_{g}+1}=T_{h}\).
In line with NAR forecasting models, we set the initial input of the decoder (\(y\)) for the first predictions as 0, since prior predictions have not yet been established. In order to predict all steps within the horizon, ProNet employs AR predictions a maximum of \(n_{step}=\max(T_{i:n_{g}})\) times, where \(n_{step}\) represents the maximum segment length. This approach ensures accurate forecasts by iteratively refining predictions while considering relevant historical context. Unlike traditional AR and NAR models, our method introduces a unique probability distribution formulation:

\[p\left(\mathbf{Y}_{i,T_{l}+1:T}\mid\mathbf{Y}_{i,1:T_{l}},\mathbf{X}_{i,1:T}\right)=\prod_{t=1}^{n_{step}}\prod_{j=1}^{n_{g}}p\left(y_{i,t}^{j}\mid\mathbf{Y}_{i,1:T_{l}},\mathbf{X}_{i,1:T_{l}},\mathbf{Y}_{i,T_{l}+1:T_{l}+t}^{1},\mathbf{X}_{i,T_{l}+1:T_{l}+t}^{1},\ldots,\mathbf{Y}_{i,T_{l}+1:T_{l}+t}^{n_{g}},\mathbf{X}_{i,T_{l}+1:T_{l}+t}^{n_{g}}\right), \tag{3}\]

where \(y_{i,t}^{j}\) is the prediction at the \(t\)th step of the \(j\)th segment and \(\mathbf{Y}_{i,T_{l}+1:T_{l}+t}^{j}\) denotes the prediction history up to step \(t\) of the \(j\)th segment.

Fig. 3: Structure of the four components in ProNet: encoder, decoder, prior model \(p_{\theta}\) and posterior model \(q_{\phi}\).

### Progressive Prediction

In ProNet, the forecasting horizon is divided into segments of varying lengths. However, the number of AR steps is determined by the maximum segment length, leading to situations where certain segments complete their predictions before the AR iteration concludes. To capitalize on the additional dependency information available, completed segments are tasked with re-forecasting steps within their subsequent segments that have already been predicted. This progressive prediction strategy acknowledges that early steps in each segment may involve limited or no dependency information and therefore benefit from iterative refinement as more context becomes available.
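The segment starts \(S_1\) fully determine the decoding schedule, i.e., which steps each AR iteration emits in parallel and where completed segments progressively re-forecast their successors. A small sketch (assuming the last segment runs to the end of the horizon, i.e. \(s_{n_g+1}=T_h+1\), which matches the worked example later in the paper; `decoding_schedule` is our own illustrative name):

```python
def decoding_schedule(starts, horizon):
    """Return the segment lengths T_i, the number of AR iterations n_step,
    and the group of (1-indexed) steps predicted at each iteration."""
    bounds = list(starts) + [horizon + 1]            # assumed: s_{n_g+1} = T_h + 1
    lengths = [bounds[j + 1] - bounds[j] for j in range(len(starts))]
    n_step = max(lengths)                            # n_step = max(T_i)
    # at iteration t, segment j (re-)forecasts step s_j + t, clipped to the horizon
    schedule = [[min(s + t, horizon) for s in starts] for t in range(n_step)]
    return lengths, n_step, schedule

lengths, n_step, schedule = decoding_schedule([1, 3, 5], horizon=7)
# lengths == [2, 2, 3], n_step == 3
# schedule == [[1, 3, 5], [2, 4, 6], [3, 5, 7]]
```

The overlap in the last iteration (steps from shorter segments appear again) is exactly the progressive re-forecasting described above.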
### Progressive Mask

The architecture of the AR Transformer decoder [40] employs a lower triangular attention mask to prevent future information leakage. Conversely, NAR Transformer decoders (e.g., Informer [26]) use unmasked attention. However, these standard masking mechanisms are inadequate for ProNet, as it operates with a partially autoregressive framework that integrates future information for predictions. In response, we introduce a progressive masking mechanism to facilitate access to the first \(t\) steps of all segments during the \(t\)-th step prediction. Given the sample size \(N\), forecasting horizon length \(T_{h}\) and segment size \(n_{g}\), the progressive mask \(M\) is created by Algorithm 1. Initially, we take the top \(n_{g}\) indexes of the latent variable \(z\) that encodes the importance of steps for forecasting and store them as \(ind\), which is also the starting position \(S_{1}\). Then we set the elements of the zero vector \(row\) located at \(ind\) to one. We iterate from 1 to the maximum AR step \(n_{step}\) to create the mask \(M\): firstly, we set the rows of the mask \(M\) that are located at \(ind\) to the variable \(row\); secondly, we increment all elements of \(ind\) by one and limit their values by the upper bound of the forecasting horizon \(T_{h}\), as shown in lines 5 and 6 respectively; thirdly, we update the elements of \(row\) located at \(ind\) to one. For instance, Fig. 4 illustrates how the elements change in Algorithm 1 from initialization to the final settings. We first initialize the mask \(M\) as a \(7\times 7\) zero matrix. For the first iteration, the starting position, or the index, is \(ind=S_{1}=[1,3,5]\), which means ProNet predicts the 1st, 3rd and 5th steps simultaneously. Then, we update the temporary variable \(row\rightarrow[1\ 0\ 1\ 0\ 1\ 0\ 0]\) (line 2 of Algorithm 1) and use it to fill the 1st, 3rd and 5th rows of \(M\) (line 4 of Algorithm 1), as shown in the upper right of Fig. 4.
Afterwards, we increment the elements of \(ind\rightarrow[2,4,6]\) by one and update the temporary variable \(row\rightarrow[1\ 1\ 1\ 1\ 1\ 1\ 0]\). The second iteration proceeds as the first one, while the final iteration implements progressive prediction: we now have the variable \(row\rightarrow[1\ 1\ 1\ 1\ 1\ 1\ 1]\) and index \(ind=[3,5,7]\). We fill the 3rd, 5th and 7th rows of \(M\) with \(row\), which means we use all previous predictions to forecast the 7th step and re-forecast the 3rd and 5th steps.

### Variational Inference

The ProNet algorithm addresses the challenge of segmenting sequences and prioritizing forecasted steps to achieve optimal performance. It is crucial to initiate forecasting with steps that carry the most significance and intricate dependencies on subsequent time points. However, obtaining this vital information is not straightforward.

| Prediction | past covariates | future covariates | past ground truth | future ground truth |
| --- | --- | --- | --- | --- |
| AR (\(\mathbf{Y}_{i,t+1}\)) | \(\mathbf{X}_{i,1:T_{l}}\) | \(\mathbf{X}_{i,T_{l}:t}\) | \(\mathbf{Y}_{i,1:T_{l}}\) | \(\mathbf{Y}_{i,T_{l}:t}\) |
| NAR (\(\mathbf{Y}_{i,t+1}\)) | \(\mathbf{X}_{i,1:T_{l}}\) | \(\mathbf{X}_{i,T_{l}:T_{l}+T_{h}}\) | \(\mathbf{Y}_{i,1:T_{l}}\) | None |

TABLE I: Available information for predicting step \(t+1\) by AR and NAR forecasting methods.

Fig. 4: Creation process of the progressive mask \(M\): the initial \(M\) (upper left), \(M\) after the 1st (upper right) and 2nd (lower left) iterations, and the final \(M\) (lower right), when the forecasting horizon \(T_{h}=7\), the segment size \(n_{g}=3\), and the starting positions of the segments \(S_{1}=[1,3,5]\). Changes are marked in bold.
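Algorithm 1 can be sketched in NumPy as follows (a sketch of our reading of the algorithm, with 1-indexed starting positions converted to 0-indexed rows; `progressive_mask` is an illustrative name):

```python
import numpy as np

def progressive_mask(starts, horizon, n_step):
    """Build the progressive attention mask M of Algorithm 1.

    starts: 1-indexed starting steps of the segments (S_1).
    horizon: forecasting horizon length T_h.
    n_step: maximum number of AR iterations (longest segment length).
    """
    M = np.zeros((horizon, horizon), dtype=int)
    ind = np.asarray(starts) - 1                  # to 0-indexed rows
    row = np.zeros(horizon, dtype=int)
    row[ind] = 1                                  # steps visible at the 1st iteration
    for _ in range(n_step):
        M[ind] = row                              # fill the rows at the current positions
        ind = np.minimum(ind + 1, horizon - 1)    # advance, clipped at T_h
        row[ind] = 1                              # newly predicted steps become visible
    return M

M = progressive_mask([1, 3, 5], horizon=7, n_step=3)
```

With the example values \(T_h=7\) and \(S_1=[1,3,5]\), the first row of \(M\) is \([1\ 0\ 1\ 0\ 1\ 0\ 0]\), and the rows filled in the last iteration are all ones, reproducing the progressive re-forecasting with full context.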
Drawing inspiration from the methodology introduced in [41], we tackle this issue by employing parallel forecasting of step importance, representing them as latent variables denoted as \(z\). These latent variables are derived through conditional variational inference, an approach rooted in conditional Variational Autoencoders (cVAEs) [42]. These cVAEs bridge the gap between observed and latent variables, facilitating a deeper understanding of data patterns. The concept of cVAEs extends the classical Variational Autoencoder (VAE) framework [43], enhancing it by integrating conditioning variables into the data generation process. This augmentation empowers cVAEs to learn a more nuanced and contextually aware latent space representation of data. In a standard VAE, data is mapped to a lower-dimensional latent space using an encoder, and subsequently, a decoder reconstructs this data from points in the latent space. cVAEs introduce conditional variables that encode additional context or prior knowledge into the generative model. This enables cVAEs not only to learn conditional latent representations but also to incorporate provided contextual cues effectively. Particularly, cVAEs are advantageous in scenarios where supplementary information is available, mirroring the case of ProNet, which requires generating initial time steps for predictions based on past ground truth and covariates. In the context of ProNet, the latent variables, denoted as \(z\), correspond to individual output steps and rely on the entire temporal sequence for their determination. Consequently, the conditional probability is articulated as: \[P_{\theta}(y\mid x)=\int_{z}P_{\theta}(y\mid z,x)P_{\theta}(z\mid x)dz \tag{4}\] The \(y\) denotes the ground truth in the forecasting horizon, and conditioning variable \(x\) plays the role of historical data and covariates, allowing the model to capture the relevance of different time steps as latent variable \(z\) for accurate predictions. 
However, the direct optimization of this objective is infeasible. To address it, the Evidence Lower Bound (ELBO) [42] is employed as the optimization target, resulting in the following formulation: \[\begin{split}\log P_{\theta}(y\mid x)&\geq\mathbb{E}_{q_{\phi}(z\mid y,x)}\left[\log P_{\theta}(y\mid z,x)\right]\\ &\quad-\text{KL}\left(q_{\phi}(z\mid y,x)\,\|\,p_{\theta}(z\mid x)\right)\end{split} \tag{5}\] Here, the Kullback-Leibler (KL) divergence is denoted by \(\text{KL}\). The term \(p_{\theta}(z\mid x)\) represents the prior distribution, \(q_{\phi}(z\mid y,x)\) denotes an approximated posterior, and \(P_{\theta}(y\mid z,x)\) characterizes the decoder. With the ground truth encompassed within the horizon denoted by \(y\) and the condition \(x\), \(q_{\phi}(z\mid y,x)\) effectively models the significance of diverse time steps represented by \(z\). Notably, during prediction, \(y\) is not available, prompting the need to train \(p_{\theta}(z\mid x)\) to approximate \(q_{\phi}(z\mid y,x)\), achieved through the minimization of the KL divergence. Both the prior and the approximated posterior are modelled as Gaussian distributions characterized by their mean and variance. The mean \(\mu\) is obtained via a linear layer, while the variance \(\sigma\) is derived through another linear layer, followed by a SoftPlus activation function. To enable smooth gradient flow through random nodes, the reparameterization trick [42] is invoked. This involves sampling the latent variable \(z\) using the equation \(z=g(\epsilon,\mu,\sigma)=\mu+\sigma\epsilon\), where \(\epsilon\) follows a standard normal distribution \(\mathcal{N}(0,1)\), effectively serving as white noise. The value of \(z\) encapsulates the significance of each time step within the forecasting horizon, guiding the selection of which steps to initiate predictions from. The top \(n_{g}\) indices featuring the highest \(z\) values are chosen to initiate forecasting.
During the training process, \(z\) is sampled from \(q_{\phi}(z\mid y,x)\), and the prior \(p_{\theta}(z\mid x)\) is enforced to approximate the posterior \(q_{\phi}(z\mid y,x)\). This entire framework enables ProNet to identify and leverage the most crucial time steps for accurate and effective forecasting. Empirically, we find both the prior and posterior models often assign elevated importance to a sequence of steps, leading to a substantial reduction in decoding speed during testing. Striking a balance between accuracy and speed, we introduce a novel approach to realign the latent factor \(z\) by incorporating a scaling factor with the assistance of a weight vector denoted as \(W\in\Re^{T_{h}}\): \[\begin{split} z&=\text{softmax}(z)\times W\\ W&=|\cos([0,1,...,T_{h}-1]\times\frac{n_{g}\pi}{T_{h}})|\end{split} \tag{6}\] This re-weighting operation modifies the latent factor \(z\) to achieve a more optimized equilibrium between the forecasting accuracy and the computational speed. Subsequently, we determine the initial position, denoted as \(S_{1}\), and identify the indices of the largest \(n_{g}-1\) elements from \(z[2\ :]\) as potential starting positions. For example, Fig. 5 provides a visual representation of the latent variable \(z\) before and after the re-weighting process. By selecting \(n_{g}=3\), the original \(z\) yields the starting position \(S_{1}=[1,5,6]\), necessitating 4 autoregressive (AR) iterations to complete the forecasting process. Conversely, the re-weighted \(z\) results in a starting position \(S_{1}=[1,3,6]\), reducing the AR iterations required to 3. Remarkably, this re-weighting design elevates decoding speed by 25% in this scenario. Illustrating the tangible benefits of our approach, this strategic re-weighting of the latent variable \(z\) not only preserves forecast accuracy but also significantly enhances the computational efficiency of the process.
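The sampling, re-weighting, and start-selection steps above can be sketched numerically as follows (a simplified NumPy sketch; `mu` and `sigma` stand for the outputs of the linear and SoftPlus layers, and the helper names are ours):

```python
import numpy as np

def reparameterize(mu, sigma, rng):
    """z = mu + sigma * eps, with eps ~ N(0, 1) (the reparameterization trick)."""
    return mu + sigma * rng.standard_normal(mu.shape)

def reweight(z, horizon, n_g):
    """Re-align z with the weight vector W = |cos(k * n_g * pi / T_h)| (Eq. 6)."""
    w = np.abs(np.cos(np.arange(horizon) * n_g * np.pi / horizon))
    e = np.exp(z - z.max())                  # softmax(z), numerically stable
    return (e / e.sum()) * w

def select_starts(z_rw, n_g):
    """Step 1 always opens the first segment; the largest n_g - 1 entries of
    z[2:] (1-indexed) open the remaining segments."""
    rest = np.argsort(z_rw[1:])[::-1][: n_g - 1] + 2   # back to 1-indexed steps
    return [1] + sorted(rest.tolist())

rng = np.random.default_rng(0)
z = reparameterize(np.zeros(7), np.ones(7), rng)       # sample step importances
starts = select_starts(reweight(z, horizon=7, n_g=3), n_g=3)
```

With a uniform `z`, `reweight` reduces to the cosine weights scaled by \(1/T_h\), so segment starts are pushed toward positions that shorten the longest segment, which is exactly the decoding-speed effect described above.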
## Experiments ### Data Sets We conducted experiments using publicly available time series datasets, namely Sanyo [44], Hanergy [45], Solar [46], and Electricity [47]. These datasets encompass diverse sources of information and provide valuable insights. Specifically, the datasets consist of: **Sanyo** and **Hanergy**: These datasets encompass solar power generation data obtained from two distinct Australian PV plants, covering periods of 6 and 7 years, respectively. We focused our analysis on the time range between 7 am and 5 pm, aggregating the data at half-hourly intervals. In addition to the power generation information, we incorporated covariate time series data related to weather conditions and weather forecasts. Detailed descriptions of the data collection process can be found in [48]. For these datasets, we incorporated calendar features, specifically _month, hour-of-the-day, and minute-of-the-hour_. **Solar**: This dataset comprises solar power data originating from 137 PV plants across the United States. It covers an 8-month span, and the power data is aggregated at hourly intervals. Similarly to the previous datasets, calendar features are integrated, including _month, hour-of-the-day, and age_. **Electricity**: This dataset involves electricity consumption data gathered from 370 households over a duration of approximately 4 years. The electricity consumption data is aggregated into 1-hour intervals. For this dataset, we incorporated calendar features, including _month, day-of-the-week, hour-of-the-day, and age_. Following prior research [3, 48], all datasets were preprocessed by normalizing the data to have zero mean and unit variance. In Table II, we provide an overview of the statistics associated with each dataset. 
### Experimental Details We compare the performance of ProNet with seven methods: five state-of-the-art deep learning (DeepAR, DeepSSM, LogSparse Transformer, N-BEATS and Informer), a statistical (SARIMAX) and a persistence model: * Persistence is a typical baseline in forecasting and considers the time series of the previous day as the prediction for the next day. * SARIMAX [49] is an extension of the ARIMA and can handle seasonality with exogenous factors. * DeepAR [16] is a widely used sequence-to-sequence probabilistic forecasting model. * DeepSSM [17] fuses SSM with RNNs to incorporate structural assumptions and learn complex patterns from the time series. It is the state-of-the-art deep forecasting model that employs SSM. * N-BEATS [24] consists of blocks of fully-connected neural networks, organised into stacks using residual links. We introduced covariates at the input of each block to facilitate multivariate series forecasting. * LogSparse Transformer [3] is a recently proposed variation of the Transformer architecture for time series forecasting with convolutional attention and sparse attention; it is denoted as "LogTrans" in Table IV. * Informer [26] is a Transformer-based forecasting model based on the ProbSparse self-attention and self-attention distilling. We modified it for probabilistic forecasts to generate the mean value and variance. Note that Persistence, N-BEATS and Informer are NAR models while the others are AR models. All models were implemented using PyTorch 1.6 and evaluated on Tesla V100 16GB GPU. The deep learning models were optimised by mini-batch gradient descent with the Adam optimiser and a maximum number of epochs 200. In line with the experimental setup from [48] and [3], we carefully partitioned the data to prevent future leakage during our evaluations. 
Specifically, for the Sanyo and Hanergy datasets, we designated the data from the last year as the test set, the second last year as the validation set for early stopping, and the remaining data (5 years for Sanyo and 4 years for Hanergy) as the training set. For the Solar and Electricity datasets, we utilized the data from the last week (starting from 25/08/2006 for Solar and 01/09/2014 for Electricity) as the test set, and the week preceding it as the validation set. To ensure consistency, the data preceding the validation set was further divided into three subsets, and the corresponding validation set was employed to select the best hyperparameters. Throughout the process, hyperparameters were selected to achieve the minimum loss on the validation set. We used Bayesian optimization for hyperparameter search for all deep learning models with a maximum of 20 iterations. The models used for comparison were tuned based on the recommendations in the original papers. Probabilistic forecasting models use the NLL loss while the point forecasting model (N-BEATS) uses the mean squared loss. For the Transformer-based models, we used learnable position and ID (for the Solar and Electricity sets) embeddings. For ProNet, the constant sampling factor for the Informer backbone was set to 2, and the length of the start token \(T_{de}\) is fixed as half of the forecasting horizon. The learning rate \(\lambda\) was fixed; the number of segments \(n_{g}\) was fixed as 10 for the Sanyo and Hanergy data sets, and 12 for the Solar and Electricity sets; the dropout rate \(\delta\) was chosen from {0, 0.1, 0.2}.

Fig. 5: Visualization of latent variable \(z\): (a) original \(z\), (b) re-weighted \(z\). Higher brightness indicates a higher value of the \(z\) element.
The hidden layer dimension size \(d_{hid}\) was chosen from {8, 12, 16, 24, 32, 48, 96}; the Informer backbone Pos-wise FFN dimension size \(d_{f}\) and number of heads \(n_{h}\) were chosen from {8, 12, 16, 24, 32, 48, 96} and {4, 8, 16, 24, 32}; the number of hidden layers of encoder \(n_{e}\) and decoder \(n_{d}\) were chosen from {2, 3, 4}. Following [26, 50], we restrict the decoder layers to be less than encoder layers for a fast decoding speed. The selected best hyperparameters for ProNet are listed in Table III and used for the evaluation of the test set. As in [16], we report the standard \(\rho\)0.5 and \(\rho\)0.9-quantile losses. Note that \(\rho\)0.5 is equivalent to MAPE. Given the ground truth \(y\) and \(\rho\)-quantile of the predicted distribution \(\hat{y}\), the \(\rho\)-quantile loss is defined by: \[\begin{split}\mathrm{QL}_{\rho}(y,\hat{y})&=\frac{2 \times\sum_{t}P_{\rho}\left(y_{t},\hat{y}_{t}\right)}{\sum_{t}|y_{t}|},\\ P_{\rho}(y,\hat{y})&=\left\{\begin{array}{ll} \rho(y-\hat{y})&\text{if }y>\hat{y}\\ (1-\rho)(\hat{y}-y)&\text{otherwise}\end{array}\right.\end{split} \tag{7}\] ## Results ### Accuracy Analysis The performance of our proposed ProNet model, along with several benchmark methods, is summarized in Table IV. This table presents the \(\rho\)0.5 and \(\rho\)0.9 loss metrics for all models. Notably, since N-BEATS and Persistence generate point forecasts, we report only the \(\rho\)0.5 loss for these models. We can see ProNet is the most accurate method - it outperforms other methods on all data sets except for \(\rho\)0.9 on Solar and \(\rho\)0.5 on Electricity where Logsparse Transformer shows better performance. A possible explanation is that the ProNet backbone - Informer has subpar performance for the two cases. As a NAR forecasting model, Informer ignores dependency in target space, while our ProNet assumes the alternative dependency and therefore achieves better accuracy than Informer. 
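The \(\rho\)-quantile loss of Eq. (7) can be implemented directly (a minimal NumPy sketch; the example values are illustrative only):

```python
import numpy as np

def quantile_loss(y, y_hat, rho):
    """Normalized rho-quantile loss QL_rho of Eq. (7)."""
    y = np.asarray(y, dtype=float)
    y_hat = np.asarray(y_hat, dtype=float)
    diff = y - y_hat
    # P_rho: weight rho on under-prediction, 1 - rho on over-prediction
    p = np.where(diff > 0, rho * diff, (1.0 - rho) * (-diff))
    return 2.0 * p.sum() / np.abs(y).sum()

# at rho = 0.9 under-prediction is penalized more heavily than over-prediction
low  = quantile_loss([4.0], [3.0], 0.9)   # y > y_hat: weighted by rho
high = quantile_loss([4.0], [5.0], 0.9)   # y < y_hat: weighted by 1 - rho
```

The asymmetry is what makes the \(\rho\)0.9 loss sensitive to the upper tail of the predicted distribution, while \(\rho=0.5\) reduces to a symmetric, normalized absolute error.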
Comparing the performance of AR and NAR models, we can see our ProNet is the most successful overall - ProNet achieves a trade-off between AR and NAR forecasting models by assuming an alternative dependency and accessing both past and future information for forecasting with latent variables. ### Visualization Analysis We provide visual representations of example forecasts produced by our ProNet model on three distinct datasets: Sanyo, Hanergy, and Solar. As shown in Fig. 6, these illustrations demonstrate the forecasting accuracy achieved by ProNet, highlighting its ability to effectively capture intricate and diverse patterns within the forecasting horizon. The visualizations underscore the model's capacity to handle complex temporal dependencies and produce reliable predictions. Moreover, Fig. 7 showcases the predictive prowess of ProNet on the Electricity dataset. This visualization presents the results for a consecutive 8-day period from the test set. Notably, ProNet employs a 7-day history to generate a 1-day forecasting output. The graph reveals ProNet's capability to leverage the interconnected nature of related time series and exploit extensive historical context to generate accurate and informative predictions. ### Error Accumulation To investigate the ability of ProNet to handle error accumulation and model the output distribution, we compare ProNet with an AR model (DeepAR) and a NAR model (Informer) on the Sanyo and Hanergy datasets as a case study. Fig. 8 shows the \(\rho\)0.5-loss of the models for forecasting horizons ranging from 1 day (20 steps) to 10 days (200 steps). We can see that the \(\rho\)0.5-loss of all models increases with the forecasting horizon, but the performance of DeepAR drops more significantly due to its AR decoding mechanism and error accumulation.
ProNet consistently outperforms Informer for short horizon and has competitive performance with Informer for long horizon, indicating the effectiveness of seeking the trade-off between AR and NAR models. ProNet assumes the dependency in target space without fully discarding AR decoding and can improve the forecasting accuracy over all horizons. \begin{table} \begin{tabular}{c c c c c c c c c c} \hline & Start date & End date & Granularity & \(L_{d}\) & \(N\) & \(n_{T}\) & \(n_{C}\) & \(T_{l}\) & \(T_{h}\) \\ \hline Sanyo & 01/01/2011 & 31/12/2016 & 30 minutes & 20 & 1 & 4 & 3 & 20 & 20 \\ Hanergy & 01/01/2011 & 31/12/2017 & 30 minutes & 20 & 1 & 4 & 3 & 20 & 20 \\ Solar & 01/01/2006 & 31/08/2006 & 1 hour & 24 & 137 & 0 & 3 & 24 & 24 \\ Electricity & 01/01/2011 & 07/09/2014 & 1 hour & 24 & 370 & 0 & 4 & 168 & 24 \\ \hline \end{tabular} \end{table} TABLE II: Dataset statistics. \(L_{d}\) - number of steps per day, \(N\) - number of series, \(n_{T}\) - number of time-based features, \(n_{C}\) - number of calendar features, \(T_{l}\) - length of input series, \(T_{h}\) - length of forecasting horizon. \begin{table} \begin{tabular}{c c c c c c c c} \hline & \(\lambda\) & \(\delta\) & \(d_{hid}\) & \(n_{e}\) & \(n_{d}\) & \(d_{f}\) & \(n_{h}\) \\ \hline Sanyo & 0.005 & 0.1 & 24 & 3 & 3 & 32 & 4 \\ Hanergy & 0.005 & 0.1 & 24 & 2 & 2 & 32 & 12 \\ Solar & 0.005 & 0.1 & 48 & 4 & 3 & 24 & 12 \\ Electricity & 0.001 & 0.1 & 48 & 3 & 3 & 32 & 12 \\ \hline \end{tabular} \end{table} TABLE III: Hyperparameters for ProNet The results show that error accumulation degrades the performance of AR models but ProNet can successfully overcome this by assuming the alternative dependency and fusing future information into predictions with a shorter AR decoding path. ### Inference Speed We evaluate the prediction time of ProNet with varying number of segments \(n_{g}\) and compare it with the AR and NAR model: LogTrans and Informer. 
Table V shows the average elapsed time and the standard deviation over 10 runs; all models were run on the same computer configuration. As expected, ProNet has a faster inference speed than the AR LogTrans due to its shorter AR decoding path. The inference speed of ProNet increases with the number of segments \(n_{g}\) up to 10. This is because the number of AR steps decreases with \(n_{g}\). ProNet with \(n_{g}=10\) and \(n_{g}=15\) have similar speeds as both are expected to require the same number of AR steps (2). As the number of segments \(n_{g}\) increases, ProNet achieves inference speed competitive with Informer when \(n_{g}=10\) and \(n_{g}=15\).

\begin{table} \begin{tabular}{c c c c c} & Sanyo & Hanergy & Solar & Electricity \\ \hline Persistence & 0.154/- & 0.242/- & 0.256/- & 0.091/- \\ SARIMAX & 0.124/0.096 & 0.145/0.098 & 0.256/0.192 & 0.196/0.079 \\ DeepAR & 0.070/0.031 & 0.092/0.045 & 0.222/0.093\({}^{\diamond}\) & 0.075\({}^{\diamond}\)/0.040\({}^{\diamond}\) \\ DeepSSM & **0.042**/0.023 & **0.070**/0.053 & 0.223/0.181 & 0.083/0.056\({}^{\diamond}\) \\ LogTrans & 0.067/0.036 & 0.124/0.066 & 0.210\({}^{\diamond}\)/0.082\({}^{\diamond}\) & **0.059**/0.034\({}^{\diamond}\) \\ N-BEATS & 0.077/- & 0.132/- & 0.212/- & 0.071/- \\ Informer & 0.046/0.022 & 0.084/0.046 & 0.215/0.115 & 0.068/0.033 \\ ProNet & **0.042**/**0.021** & **0.070**/**0.035** & **0.205**/0.091 & 0.071/**0.032** \\ \hline \end{tabular} \end{table} TABLE IV: \(\rho\)0.5/\(\rho\)0.9-loss of data sets with various granularities. \(\diamond\) denotes results from [3].

Fig. 8: \(\rho\)0.5-loss for various forecasting horizons on (a) Sanyo and (b) Hanergy datasets.

Fig. 6: Actual vs ProNet predicted data with trend and seasonality components and 95\(\%\) confidence intervals: (a) and (b) - Sanyo; (c) and (d) - Hanergy; (e) and (f) - Solar data sets.

Fig. 7: ProNet case study on Electricity data set: actual vs predicted data.
The results confirm that ProNet retains the fast decoding advantage of NAR models, in addition to being the most accurate. ### Ablation and Hyperparameter Sensitivity Analysis To evaluate the effectiveness of the proposed methods, we conducted an ablation study on the Sanyo and Hanergy sets. Table VI shows the performance of: 1) Trans (the AR Transformer); 2) PAR-Trans, the partially AR Transformer implemented by simply dividing the horizon evenly [34]; 3) ProNet-Trans, the ProNet variant that uses the Transformer as backbone instead of Informer; 4) Informer; 5) PAR-Informer, the partially AR Informer [34]; and 6) our ProNet. We can see that PAR-Trans outperforms Trans while PAR-Informer performs worse than Informer, indicating that the partially AR decoding mechanism improves Trans but degrades the performance of Informer. A possible explanation is that simply dividing the forecasting horizon into even segments under a fixed dependency assumption violates the real data distribution, which has time-varying dependency relationships (see Fig. 2). Both ProNet-Trans and ProNet consistently outperform Trans and Informer as well as their partially AR versions, showing the effectiveness of our progressive decoding mechanism and confirming its advantage over the partially AR decoding mechanism. We also perform a sensitivity analysis of the proposed ProNet on the Sanyo and Hanergy sets. Table VII shows the \(\rho\)0.5/\(\rho\)0.9-loss of ProNet with the number of segments \(n_{g}\) ranging from 2 to 15. ProNet achieves the optimal trade-off with 5 and 10 segments, in which cases the performance is best. This can be explained as follows: when \(n_{g}\) is low, more AR decoding steps are required and error accumulates; when \(n_{g}\) is high, most steps of ProNet are predicted non-autoregressively without the dependency in target space.
In summary, considering the ProNet inference speed as provided in Table V, setting the number of segments \(n_{g}\) to half the forecasting horizon is the best choice, allowing ProNet to achieve the best accuracy and speed. Tables VIII and IX present the evaluation of ProNet's \(\rho\)0.5/\(\rho\)0.9-loss performance and prediction speed without the re-weighting mechanism, across varying segment numbers (\(n_{g}\)). Comparing these results with the performance of ProNet shown in Tables VII and V, it becomes evident that ProNet exhibits significantly higher prediction speeds than its counterpart without the re-weighting mechanism. Furthermore, ProNet outperforms this counterpart in 10 out of the 16 cases examined. This highlights the important role played by the re-weighting mechanism in enhancing ProNet's prediction speed while preserving its prediction accuracy. The incorporation of this mechanism effectively prevents the assignment of undue importance to specific sequences of steps, thus contributing to the optimization of prediction speed without compromising the overall accuracy of ProNet's forecasting. ## Conclusions We introduced ProNet, a novel deep learning model tailored for multi-horizon time series forecasting. ProNet effectively strikes a balance between autoregressive (AR) and non-autoregressive (NAR) models, avoiding error accumulation and slow prediction while maintaining the ability to model target step dependencies. The key innovation of ProNet lies in its partially AR decoding mechanism, achieved through segmenting the forecasting horizon. It predicts a group of steps non-autoregressively within each segment while locally employing AR decoding, resulting in enhanced forecasting accuracy. The segmentation process relies on latent variables, effectively capturing the significance of steps in the horizon, optimized through variational inference.
By embracing alternative dependency assumptions and fusing both past and future information, ProNet demonstrates its versatility and effectiveness in forecasting. Extensive experiments validate the superiority of our partially AR method, showcasing ProNet's remarkable performance and prediction speed compared to state-of-the-art AR and NAR forecasting models.
2303.14058
Newton's methods for solving linear inverse problems with neural network coders
Neural network functions are supposed to be able to encode the desired solution of an inverse problem very efficiently. In this paper, we consider the problem of solving linear inverse problems with neural network coders. First we establish some correspondences of this formulation with existing concepts in regularization theory, in particular with state space regularization, operator decomposition and iterative regularization methods. A Gauss-Newton's method is suitable for solving encoded linear inverse problems, which is supported by a local convergence result. The convergence studies, however, are not complete, and are based on a conjecture on linear independence of activation functions and their derivatives.
Otmar Scherzer, Bernd Hofmann, Zuhair Nashed
2023-03-24T15:08:42Z
http://arxiv.org/abs/2303.14058v1
# Newton's methods for solving linear inverse problems with neural network coders ###### Abstract Neural network functions are supposed to be able to encode the desired solution of an inverse problem very efficiently. In this paper, we consider the problem of solving linear inverse problems with neural network coders. First we establish some correspondences of this formulation with existing concepts in regularization theory, in particular with state space regularization, operator decomposition and iterative regularization methods. A Gauss-Newton's method is suitable for solving encoded linear inverse problems, which is supported by a local convergence result. The convergence studies, however, are not complete, and are based on a conjecture on linear independence of activation functions and their derivatives. Assuming that the solution of Equation 1.2 is a natural image, or in other words that it can be represented as a combination of neural network functions, we get the operator equation \[N(\vec{p}\,)=F\Psi(\vec{p}\,)=\mathbf{y}, \tag{1.3}\] where \(\Psi:\vec{P}\to\mathbf{X}\) is the aforementioned nonlinear operator that maps neural network parameters to image functions. We call * \(\mathbf{X}\) the _image space_ and * \(\mathbf{Y}\) the _data space_, in accordance with the terminology introduced in [2, 1]. * \(\vec{P}\) is called the _parameter space_. We use a different notation for \(\vec{P}\) because it represents parametrizations, and is often considered a space of vectors below. The advantage of this ansatz is that the solution of Equation 1.2 is sparsely coded. However, the price to pay is that the reformulated Equation 1.3 is nonlinear.
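For intuition, the structure of Equation 1.3 can be mimicked with toy stand-ins: a single-neuron synthesis operator \(\Psi\) and a hypothetical linear forward operator \(F\); none of these concrete choices are taken from the literature cited here:

```python
import numpy as np

grid = np.linspace(0.0, 1.0, 50)          # discretization of the image domain

def psi(p):
    """Toy synthesis operator Psi: parameters -> image function on the grid.
    p = (alpha, w, b) parametrizes a single-neuron network alpha * tanh(w x + b)."""
    alpha, w, b = p
    return alpha * np.tanh(w * grid + b)

# assumed linear forward operator F (a hypothetical local smoothing matrix)
F = np.eye(50)
for i in range(49):
    F[i, i + 1] = 0.5

def N(p):
    """N(p) = F Psi(p): linear in the image, nonlinear in the parameters."""
    return F @ psi(p)
```

Scaling the coefficient \(\alpha\) scales the synthesized image, so \(N\) is linear along that parameter direction, yet nonlinear in \(w\) and \(b\); this mix is exactly why Equation 1.3 trades sparse coding for nonlinearity.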
Operator equations of the form of Equation 1.3 are not new: They have been studied in abstract settings for instance in the context of * _state space regularization_[8] and * in the context of the _degree of ill-posedness_[15, 20, 21, 24, 25] as well as of the _degree of nonlinearity_[24] of nonlinear ill-posed operator equations. * Another related approach is _finite dimensional approximation_ of regularization in Banach spaces (see for instance [42]). Particularly, finite dimensional approximations of regularization methods with neural network functions (in the context of frames and _deep synthesis regularization_) have been studied in [38]. In this paper we aim to link the general regularization of the degree of ill-posedness and nonlinearity with coding theory. We investigate generalized Gauss-Newton's methods for solving Equation 1.3; Such methods replace the inverse of standard Newton's method by approximations of outer inverses (see [37]). The outline of the paper is as follows: In Section 2 we review two decomposition cases as stated in [21] first, one of them is Equation 1.3. The study of decomposition cases follows the work on classifying inverse problems and regularization (see [33]). For operators associated to Equation 1.3, Newton's methods seem to be better suited than gradient descent methods, which we support by a convergence analysis (see Section 3). Section 3.4 is devoted to solving Equation 1.3, where \(\Psi\) is a shallow neural network synthesis operator. ## 2. Decomposition cases We start with a definition for nonlinear operator equations possessing forward operators that are compositions of a linear and a nonlinear operator. Precisely, we distinguish between a first decomposition case (i), where the linear operator is the inner operator and the nonlinear is the outer one, and a second decomposition case (ii), where the nonlinear operator is the inner operator and the linear is the outer operator. 
**Definition 2.1** (Decomposition cases): Let \(\vec{P},\mathbf{X},\mathbf{Y}\) be Hilbert-spaces. 1. An operator \(N\) is said to satisfy the _first decomposition case_ in an open, non-empty neighborhood \(\mathcal{B}(\vec{p}^{\,\dagger};\rho)\subseteq\vec{P}\) of some point \(\vec{p}^{\,\dagger}\) if there exists a linear operator \(F:\vec{P}\to\mathbf{X}\) and a nonlinear operator \(\Psi:\mathbf{X}\to\mathbf{Y}\) such that \[N(\vec{p}\,)=\Psi(F\vec{p}\,)\text{ for }\vec{p}\,\in\mathcal{B}(\vec{p}^{\, \dagger};\rho).\] 2. \(N\) is said to satisfy the _second decomposition case_ in a neighborhood \(\mathcal{B}(\vec{p}^{\,\dagger};\rho)\subseteq\vec{P}\) of some point \(\vec{p}^{\,\dagger}\) if there exists a linear operator \(F:\mathbf{X}\to\mathbf{Y}\) and a nonlinear operator \(\Psi:\vec{P}\to\mathbf{X}\) such that \[N(\vec{p}\,)=F\Psi(\vec{p}\,)\text{ for }\vec{p}\,\in\mathcal{B}(\vec{p}^{\, \dagger};\rho).\] (2.1) Typically it is assumed that the nonlinear operator \(\Psi\) is well-posed. **Remark 2.2** (First decomposition case): In [21], this decomposition case has been studied under structural conditions, relating the second derivative of \(N\) with the first derivative. Under such assumptions convergence rates conditions (see [21, Lemma 4.1]) could be proven. The first decomposition case also arises in inverse option pricing problems in math finance (see [19] and [23, Sect.4]), where the ill-posed compact linear integration operator occurs as inner operator and a well-posed Nemytskii operator as outer operator. **Remark 2.3** (Second decomposition case): Regularization methods for solving operator equations with operators satisfying the second order decomposition case, see Equation 2.1, were probably first analyzed in [8] under the name of _state space regularization_. 
They considered for instance Tikhonov-type regularization methods, which consist in the minimization of \[J_{\lambda}(\vec{p}\,)=\left\|F\Psi(\vec{p}\,)-\mathbf{y}\right\|_{\mathbf{Y}}^{2}+\lambda\left\|\Psi(\vec{p}\,)-\tilde{\mathbf{x}}\right\|_{\mathbf{X}}^{2}, \tag{2.2}\] where \(\tilde{\mathbf{x}}\) is a prior and \(\lambda>0\) is a regularization parameter. In [8] estimates for the second derivative \(J_{\lambda}^{\prime\prime}(\vec{p}_{\lambda})(\mathbf{h},\mathbf{h})\), where \(\mathbf{h}\in\vec{P}\), that is, for the curvature of \(J_{\lambda}\), were derived. If the curvature can be bounded from below by a multiple of \(\left\|\mathbf{h}\right\|_{\vec{P}}^{2}\), then, for instance, a locally unique minimizer of \(J_{\lambda}\) can be guaranteed and also domains can be specified where the functional is convex. Conditions which guarantee convexity are called _curvature to size conditions_. Subsequently, these decomposition cases have been studied exemplarily in [21]. The theory developed there directly applies to Equation 1.3. Instead of \(J_{\lambda}\), researchers often study direct regularization with respect to \(\vec{p}\,\). For instance, in [13] functionals of the form \[J_{\lambda}(\vec{p}\,)=\left\|F\Psi(\vec{p}\,)-\mathbf{y}\right\|_{\mathbf{Y}}^{2}+\lambda\mathcal{L}(\vec{p}\,), \tag{2.3}\] are considered, where \(\mathcal{L}\) is some functional directly regularizing the parameter space. Typically \(\mathcal{L}\) is chosen to penalize for sparsity of parameters. The main difference between Equation 2.2 and Equation 2.3 is that in the former, regularization is performed with respect to the image space \(\mathbf{X}\), and in the latter with respect to the parameter space \(\vec{P}\). Well-posedness of the functional \(J_{\lambda}\) in Equation 2.2 follows if \(F\circ\Psi\) is lower-semicontinuous, which in turn follows if \(\Psi\) is invertible. In the following we study the solution of decomposable operator equations, such as Equation 1.3, with Gauss-Newton's methods.
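A direct evaluation of the state space functional in Equation 2.2 can be sketched as follows (all concrete choices, the synthesis \(\Psi\), the operator \(F\), the prior and the data, are hypothetical stand-ins):

```python
import numpy as np

def tikhonov_state_space(p, F, psi, y, x_prior, lam):
    """J_lambda(p) = ||F Psi(p) - y||^2 + lambda * ||Psi(p) - x_prior||^2  (Eq. 2.2)."""
    img = psi(p)
    return np.sum((F @ img - y) ** 2) + lam * np.sum((img - x_prior) ** 2)

# hypothetical 1-D example: Psi synthesizes an image from two coefficients
basis = np.vstack([np.linspace(0, 1, 8), np.linspace(0, 1, 8) ** 2])
psi = lambda p: np.tanh(p @ basis)        # nonlinear synthesis operator
F = np.tril(np.ones((8, 8))) / 8.0        # linear "integration" operator
p_true = np.array([1.0, -0.5])
y = F @ psi(p_true)                       # attainable data
J = tikhonov_state_space(p_true, F, psi, y, x_prior=psi(p_true), lam=0.1)
```

Note that the penalty acts on the state \(\Psi(\vec p\,)\) in the image space, not on the parameters themselves, which is the distinction drawn above between Equation 2.2 and Equation 2.3.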
Decomposition cases have been used in the analysis of iterative regularization methods as well (see [29]): **Definition 2.4** (Strong tangential cone condition): Let \(N:\mathcal{D}(N)\subset\vec{P}\to\mathbf{Y}\) with \(\mathcal{D}(N)\) its domain be a nonlinear operator. * Then \(N\) is said to satisfy the strong tangential cone condition, originally introduced in [17], if \[N^{\prime}(\vec{p_{2}}\,)=R_{\vec{p}_{2},\vec{p}_{1}}N^{\prime}(\vec{p}_{1}) \text{ for all }\vec{p}_{1},\vec{p}_{2}\in\mathcal{D}(N).\] (2.4) where \[\left\|R_{\vec{p}_{2},\vec{p}_{1}}-I\right\|\leq C_{T}\left\|\vec{p}_{2}-\vec {p}_{1}\right\|_{\vec{P}}.\] (2.5) * In [5] the _order reversed_ tangential cone condition, \[N^{\prime}(\vec{p}_{2}\,)=N^{\prime}(\vec{p}_{1})R_{\vec{p}_{2},\vec{p}_{1}} \text{ for all }\vec{p}_{1},\vec{p}_{2}\in\mathcal{D}(N),\] (2.6) together with Equation 2.5, has been introduced. **Remark 2.5**: Equation 2.4 has been used for analyzing _gradient descent methods_ (see for instance [17, 29]). For the analysis of Newton's methods Equation 2.6 has been used (see [5, 29]). The relation to the decomposition cases is as follows: **Lemma 2.6**: _Let \(N:\mathcal{D}(N)\subseteq\vec{P}\to\mathbf{Y}\) with \(\mathcal{D}(N)=\mathcal{B}(\vec{p}^{\uparrow};\rho)\) satisfy the second decomposition case and assume that \(\Psi^{\prime}(\vec{p}\,)\) is invertible for \(\vec{p}\,\in\mathcal{D}(N)\). Then \(N\) satisfies Equation 2.6._ _Proof:_ If \(N\) satisfies the second decomposition case, Equation 2.1, then \(N^{\prime}(\vec{p}\,)=F\Psi^{\prime}(\vec{p}\,)\) for all \(\vec{p}\,\in\mathcal{D}(N)\). Note, that because \(F\) is defined on the whole space \(\mathbf{X}\), \(\mathcal{D}(N)=\mathcal{D}(\Psi)\). 
By using the invertibility assumption on \(\Psi\) we get \[N^{\prime}(\vec{p}_{2}\,)=F\Psi^{\prime}(\vec{p}_{2}\,)=F\Psi^{\prime}(\vec{p}_{1}\,)\underbrace{\Psi^{\prime}(\vec{p}_{1}\,)^{-1}\Psi^{\prime}(\vec{p}_{2}\,)}_{=:R_{\vec{p}_{2},\vec{p}_{1}}}=N^{\prime}(\vec{p}_{1})R_{\vec{p}_{2},\vec{p}_{1}},\] which gives the assertion. As we have shown, decomposition cases have been extensively studied in the regularization literature. One conclusion from these studies is that the order reversed tangential cone condition, Equation 2.6, is suitable for analyzing Newton's methods [5, 29] and thus, in turn, for the coded linear operator Equation 1.3 because of Lemma 2.6. The standard tool for analyzing Newton's methods is the Newton-Mysovskii condition as discussed below. ## 3 The Newton-Mysovskii Conditions In this section we put abstract convergence conditions for Newton type methods in the context of decoding. We first consider Newton's methods for solving the _general_ operator Equation 1.1. Decomposition cases of the operator \(N\) will be considered afterwards. ### Newton's method with invertible linearizations For Newton's methods, _local convergence_ is guaranteed under _Newton-Mysovskii_ conditions. For comparison, we first recall a simple Newton's method analysis in finite dimensional spaces for the case where the nonlinear operator has invertible derivatives. The proof of more general results, such as Theorem 3.6 below, applies here as well, and thus the proof is omitted here. Several variants of Newton-Mysovskii conditions have been proposed in the literature (see for instance [11, 12, 37]). The analysis of Newton's method was an active research area in the last century, see for instance [39, 46]. **Theorem 3.1** (Finite dimensional Newton's method): _Let \(N:\mathcal{D}(N)\subseteq\mathds{R}^{n}\to\mathds{R}^{n}\) be continuously Frechet-differentiable on a non-empty, open and convex set \(\mathcal{D}(N)\).
Let \(\vec{p}^{\,\dagger}\in\mathcal{D}(N)\) be a solution of Equation 1.1. Moreover, we assume that_ 1. \(N^{\prime}(\vec{p}\,)\) _is invertible for all_ \(\vec{p}\,\in\mathcal{D}(N)\) _and that_ 2. _the_ Newton-Mysovskii condition _holds: That is, there exists some_ \(C_{N}>0\) _such that_ \[\begin{split}\big{\|}N^{\prime}(\vec{q}\,)^{-1}(N^{\prime}(\vec{p}+s(\vec{q}\,-\vec{p}\,))-N^{\prime}(\vec{p}\,))(\vec{q}\,-\vec{p}\,)\big{\|}_{\vec{P}}\leq sC_{N}\,\|\vec{p}\,-\vec{q}\,\|_{\vec{P}}^{2}\\ \text{for all }\vec{p}\,,\vec{q}\,\in\mathcal{D}(N),s\in[0,1]. \end{split}\] (3.1) _Let \(\vec{p}^{\,0}\in\mathcal{D}(N)\) be chosen such that_ \[\overline{\mathcal{B}(\vec{p}^{\,0};\rho)}\subseteq\mathcal{D}(N)\text{ with }\rho:=\big{\|}\vec{p}^{\,\dagger}-\vec{p}^{\,0}\big{\|}_{\vec{P}}\text{ and }h:=\frac{\rho C_{N}}{2}<1. \tag{3.2}\] _Then the Newton iteration with starting point \(\vec{p}^{\,0}\),_ \[\vec{p}^{\,k+1}=\vec{p}^{\,k}-N^{\prime}(\vec{p}^{\,k})^{-1}(N(\vec{p}^{\,k})-\mathbf{y})\quad k\in\mathds{N}_{0}, \tag{3.3}\] _satisfies that the iterates \(\big{\{}\vec{p}^{\,k}:k=0,1,2,\dots\big{\}}\) belong to \(\overline{\mathcal{B}(\vec{p}^{\,0};\rho)}\) and converge quadratically to \(\vec{p}^{\,\dagger}\in\overline{\mathcal{B}(\vec{p}^{\,0};\rho)}\)._ Now, we turn to the case that \(N\) is a decomposition operator. ### Newton-Mysovskii conditions with composed operator We now study the convergence of Gauss-Newton's methods where \(N:\vec{P}\to\mathbf{Y}\) with \(\vec{P}=\mathds{R}^{n_{*}}\) and \(\mathbf{Y}\) an infinite dimensional Hilbert space, where \(F:\mathbf{X}\to\mathbf{Y}\) is linear and bounded and \(\Psi:\vec{P}=\mathds{R}^{n_{*}}\to\mathbf{X}\). In this case the Moore-Penrose inverse, or, even more generally, an outer inverse, replaces the inverse in a classical Newton's method (see Equation 3.3), because linearizations of \(N\) will not be invertible anymore, as a simple count of dimensions shows.
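The role of a generalized inverse in the iteration of Equation 3.3 can be illustrated numerically (a sketch on a small hypothetical overdetermined system, not an example from this paper):

```python
import numpy as np

def gauss_newton(N, Nprime, y, p0, iters=10):
    """Gauss-Newton iteration: p <- p - N'(p)^+ (N(p) - y),
    with the Moore-Penrose inverse replacing the inverse in Eq. (3.3)."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        p = p - np.linalg.pinv(Nprime(p)) @ (N(p) - y)
    return p

# hypothetical N: R^2 -> R^3 (more data than parameters, so N'(p) is not square)
N = lambda p: np.array([p[0] ** 2, p[0] + p[1], np.sin(p[1])])
J = lambda p: np.array([[2 * p[0], 0.0],
                        [1.0, 1.0],
                        [0.0, np.cos(p[1])]])
p_true = np.array([1.0, 0.5])
y = N(p_true)                      # consistent (attainable) data
p = gauss_newton(N, J, y, [1.2, 0.3])
```

For attainable (zero-residual) data the iteration converges rapidly from a nearby starting point, even though \(N'(\vec p\,)\) itself is not invertible.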
We now refer to Gauss-Newton's methods when the linearizations might not be invertible, to distinguish them from classical Newton's methods also by name. Before we phrase a convergence result for Gauss-Newton's methods, we recall and introduce some definitions: **Notation 3.2** (Inner, outer and Moore-Penrose inverse): (see [36, 34]) Let \(L:\vec{P}\to\mathbf{Y}\) be a linear and bounded operator mapping between two vector spaces \(\vec{P}\) and \(\mathbf{Y}\). Then 1. the operator \(B:\mathbf{Y}\to\vec{P}\) is called a _left inverse_ to \(L\) if \[BL=I\;.\] 2. \(B:\mathbf{Y}\to\vec{P}\) is called a _right inverse_ to \(L\) if \[LB=I\;.\] Left and right inverses are used in different contexts: * For a left inverse the nullspace of \(L\) has to be trivial, in contrast to that of \(B\). * For a right inverse the nullspace of \(B\) has to be trivial. 3. \(B:\mathbf{Y}\to\vec{P}\) is called an _inverse_ to \(L\) if \(B\) is both a right and a left inverse. 4. \(B:\mathbf{Y}\to\vec{P}\) is an _outer inverse_ to \(L\) if \[BLB=B.\] (3.4) 5. Let \(\vec{P}\) and \(\mathbf{Y}\) be Hilbert-spaces, \(L:\vec{P}\to\mathbf{Y}\) be a linear bounded operator.
We denote by \(P\) and \(Q\) the orthogonal projections onto \(\mathcal{N}(L)\), the nullspace of \(L\) (which is closed), and onto \(\overline{\mathcal{R}(L)}\), the closure of the range of \(L\): That is, for all \(\vec{p}\in\vec{P}\) and \(\mathbf{y}\in\mathbf{Y}\) we have \[P\vec{p}=\operatorname{argmin}\left\{\|\vec{p}_{1}-\vec{p}\|_{\vec{P}}:\vec{p}_{1}\in\mathcal{N}(L)\right\}\text{ and }Q\mathbf{y}=\operatorname{argmin}\left\{\|\mathbf{y}_{1}-\mathbf{y}\|_{\mathbf{Y}}:\mathbf{y}_{1}\in\overline{\mathcal{R}(L)}\right\}.\] (3.5) We therefore have \[P:\vec{P}=\mathcal{N}(L)\dot{+}\mathcal{N}(L)^{\perp}\to\mathcal{N}(L),\quad\vec{p}=\vec{p}_{1}+\vec{p}_{2}\mapsto\vec{p}_{1},\] and \[Q:\overline{\mathcal{R}(L)}\dot{+}\mathcal{R}(L)^{\perp}\subseteq\mathbf{Y}\to\overline{\mathcal{R}(L)},\quad\mathbf{y}=\mathbf{y}_{1}+\mathbf{y}_{2}\mapsto\mathbf{y}_{1}.\] \(B:\mathcal{D}(B)\subseteq\mathbf{Y}\to\vec{P}\) with \(\mathcal{D}(B):=\mathcal{R}(L)\dot{+}\mathcal{R}(L)^{\perp}\) is called the _Moore-Penrose inverse_ of \(L\) if the following identities hold \[LBL =L,\] \[BLB =B,\] (3.6) \[BL =I-P,\] \[LB =Q|_{\mathcal{D}(B)}.\] In coding theory it is often stated that the range of a neural network operator \(\Psi\) forms a manifold in \(\mathbf{X}\), a space which contains the natural images. This is the basis of the following definition making use of the Moore-Penrose inverse. **Definition 3.3** (Lipschitz-differentiable immersion): Let \(\Psi:\mathcal{D}(\Psi)\subseteq\vec{P}=\mathds{R}^{n_{*}}\to\mathbf{X}\) where \(\mathcal{D}(\Psi)\) is open, non-empty, convex and \(\mathbf{X}\) is a separable (potentially infinite dimensional) Hilbert-space. 1.
We assume that \(\mathcal{M}:=\Psi(\mathcal{D}(\Psi))\) is an \(n_{*}\)-dimensional _submanifold_ in \(\mathbf{X}\): * For all \(\vec{p}=(p_{i})_{i=1}^{n_{*}}\in\mathcal{D}(\Psi)\) denote by \(\Psi^{\prime}(\vec{p}\,)\) the Frechet-derivative of \(\Psi\): \[\Psi^{\prime}(\vec{p}\,):\vec{P} \to\mathbf{X},\] \[\vec{q} =(q_{i})_{i=1}^{n_{*}} \mapsto\left(\partial_{p_{i}}\Psi(\vec{p}\,)\right)_{i=1,\dots,n_{*}}\vec{q}\;.\] Here \(\left(\partial_{p_{i}}\Psi(\vec{p}\,)\right)_{i=1,\dots,n_{*}}\) denotes the vector of functions consisting of all partial derivatives with respect to \(\vec{p}\,\). In differential geometry notation this coincides with the _tangential mapping_ \(T_{\vec{p}}\Psi\). However, the situation is slightly different here because \(\mathbf{X}\) can be infinite dimensional. * The _representation mapping_ of the derivative \[\Psi^{\prime}:\mathcal{D}(\Psi) \to\mathbf{X}^{n_{*}},\] \[\vec{p} \mapsto\left(\partial_{p_{i}}\Psi(\vec{p}\,)\right)_{i=1,\dots,n_{*}}.\] always has the same rank \(n_{*}\) in \(\mathcal{D}(\Psi)\), meaning that all elements of \(\partial_{\vec{p}}\Psi(\vec{p}\,)\) are linearly independent. This assumption means, in particular, that \(\Psi\) is an _immersion_ and \(\mathcal{M}\) is a submanifold. 2.
_We define_ \[\begin{array}{c}P_{\vec{p}}:\mathbf{X}\rightarrow\mathbf{X}_{\vec{p}}:=\mbox{ span}\left\{\partial_{p_{i}}\Psi(\vec{p}\,):i=1,\ldots,n_{*}\right\},\\ \mathbf{x}\mapsto P_{\vec{p}}\,\mathbf{x}:=\mbox{argmin}\left\{\left\|\mathbf{ x}_{1}-\mathbf{x}\right\|_{\mathbf{X}}:\mathbf{x}_{1}\in\mathbf{X}_{\vec{p}} \right\}\end{array}\] (3.7) _as the projection from_ \[\mathbf{X}=\mathbf{X}_{\vec{p}}\,\dot{+}\mathbf{X}_{\vec{p}}^{\bot}\] _onto_ \(\mathbf{X}_{\vec{p}}\)_, which is well-defined by the closedness of the finite dimensional subspace_ \(\mathbf{X}_{\vec{p}}\)_._ _Next we define the inverse of_ \(\Psi^{\prime}(\vec{p}\,)\) _on_ \(\mathbf{X}_{\vec{p}}\)_:_ \[\begin{array}{c}\Psi^{\prime}(\vec{p}\,)^{-1}:\mbox{span}\left\{\partial_{p_ {i}}\Psi(\vec{p}\,):i=1,\ldots,n_{*}\right\}\rightarrow\vec{P},\\ \mathbf{x}=\sum_{i=1}^{n_{*}}x_{i}\partial_{p_{i}}\Psi(\vec{p}\,) \mapsto(x_{i})_{i=1}^{n_{*}}\end{array}\] _and consequently on_ \(\mathbf{X}\)__ \[\begin{array}{c}\Psi^{\prime}(\vec{p}\,)^{\dagger}:\mathbf{X}=\mathbf{X}_{ \vec{p}}\,\dot{+}\mathbf{X}_{\vec{p}}^{\bot}\rightarrow\vec{P},\\ \mathbf{x}=(\mathbf{x}_{1},\mathbf{x}_{2})\mapsto\Psi^{\prime}(\vec{p}\,)^{-1 }\mathbf{x}_{1}\end{array}\] (3.8) _which are both well-defined because we assume that_ \(\Psi\) _is an immersion. Note that_ \(x_{i}\)_,_ \(i=1,\ldots,n_{*}\) _are not necessarily the coordinates with respect to an orthonormal system in_ \(\mbox{span}\left\{\partial_{p_{i}}\Psi(\vec{p}\,):i=1,\ldots,n_{*}\right\}\)_._ 3. _Finally, we assume that the operators_ \(\Psi^{\prime}(\vec{p}\,)\) _are locally bounded and locally Lipschitz-continuous in_ \(\mathcal{D}(\Psi)\)_. That is_ \[\begin{array}{c}\left\|\Psi^{\prime}(\vec{p}\,)-\Psi^{\prime}(\vec{q}\,) \right\|_{\vec{P}\rightarrow\mathbf{X}}\leq C_{L}\left\|\vec{p}-\vec{q}\, \right\|_{\vec{P}}\quad\left\|\Psi^{\prime}(\vec{p}\,)\right\|_{\vec{P} \rightarrow\mathbf{X}}\leq C_{I}\mbox{ for }\vec{p}\,,\vec{q}\in\mathcal{D}(\Psi). 
\end{array}\] (3.9) _If_ \(\Psi\) _satisfies these three properties we call it a Lipschitz-differentiable immersion._ The following lemma is proved by standard means: **Lemma 3.4**: _For a Lipschitz-differentiable immersion_ * _the function_ \(\Psi^{\prime}(\vec{p}\,)^{\dagger}:\mathbf{X}\rightarrow\vec{P}\) _is in fact the Moore-Penrose inverse of_ \(\Psi^{\prime}(\vec{p}\,)\) _and_ * _for every point_ \(\vec{p}\,\in\mathcal{D}(\Psi)\subseteq\vec{P}\) _there exists a non-empty closed neighborhood where_ \(\Psi^{\prime}(\vec{p}\,)^{\dagger}\) _is uniformly bounded and Lipschitz-continuous; that is_ \[\left\|\Psi^{\prime}(\vec{p}\,)^{\dagger}-\Psi^{\prime}(\vec{q}\,)^{\dagger}\right\|_{\mathbf{X}\rightarrow\vec{P}}\leq C_{L}\left\|\vec{p}-\vec{q}\,\right\|_{\vec{P}},\quad\left\|\Psi^{\prime}(\vec{p}\,)^{\dagger}\right\|_{\mathbf{X}\rightarrow\vec{P}}\leq C_{I}\mbox{ for }\vec{p}\,,\vec{q}\in\mathcal{D}(\Psi).\] (3.10) * _Moreover, the operator_ \(P_{\vec{p}}\,\) _from Equation_ 3.7 _is bounded._ _Proof:_ We verify the four conditions of Equation 3.6 with * \(L=\Psi^{\prime}(\vec{p}\,):\vec{P}=\mathds{R}^{n_{*}}\rightarrow\mathbf{X}\), \(B=\Psi^{\prime}(\vec{p}\,)^{\dagger}:\mathbf{X}\rightarrow\vec{P}\), with \(\mathcal{D}(B)=\mathcal{D}(\Psi^{\prime}(\vec{p}\,)^{\dagger})=\mathbf{X}\) and * \(P:\vec{P}\rightarrow\vec{P}\) the zero-operator and \(Q=P_{\vec{p}}:\mathbf{X}\rightarrow\mathbf{X}_{\vec{p}}\,\), the projection operator onto \(\mathbf{X}_{\vec{p}}\,\) (see Equation 3.7).
* First we prove the third identity with \(P=0\) in Equation 3.6: This follows from the fact that for all \(\vec{q}\,=(q_{i})_{i=1}^{n_{*}}\in\vec{P}\) we have \[\Psi^{\prime}(\vec{p}\,)^{\dagger}\Psi^{\prime}(\vec{p}\,)\vec{q}=\Psi^{\prime}(\vec{p}\,)^{-1}\left(\sum_{i=1}^{n_{*}}q_{i}\partial_{p_{i}}\Psi(\vec{p}\,)\right)=(q_{i})_{i=1}^{n_{*}}=\vec{q}\,.\] (3.11) * For the fourth identity we see that for all \(\mathbf{x}=(\mathbf{x}_{1},\mathbf{x}_{2})\in\mathbf{X}\) there exist \(x_{i}\), \(i=1,\ldots,n_{*}\) (because \(\partial_{p_{i}}\Psi(\vec{p}\,)\), \(i=1,\ldots,n_{*}\) is a basis) such that \[\mathbf{x}=\sum_{i=1}^{n_{*}}x_{i}\partial_{p_{i}}\Psi(\vec{p}\,)+\mathbf{x}_{2}\mbox{ with }\mathbf{x}_{2}\in\mathbf{X}_{\vec{p}}^{\bot}\] and thus \[P_{\vec{p}}\,\mathbf{x}=\sum_{i=1}^{n_{*}}x_{i}\partial_{p_{i}}\Psi(\vec{p}\,)\] and therefore \[\Psi^{\prime}(\vec{p}\,)^{\dagger}\mathbf{x}=(x_{i})_{i=1}^{n_{*}}=\vec{x}.\] (3.12) Consequently, we have \[\Psi^{\prime}(\vec{p}\,)\Psi^{\prime}(\vec{p}\,)^{\dagger}\mathbf{x}=\Psi^{\prime}(\vec{p}\,)\vec{x}=P_{\vec{p}}\,\mathbf{x}.\] (3.13) * For the second identity we use that for all \(\mathbf{x}\in\mathbf{X}\) \[\Psi^{\prime}(\vec{p}\,)^{\dagger}\Psi^{\prime}(\vec{p}\,)\Psi^{\prime}(\vec{p}
\,)^{\dagger}\mathbf{x}\overset{\text{Equation 3.13}}{=}\Psi^{\prime}(\vec{p}\,)^{\dagger}P_{\vec{p}}\,\mathbf{x}\overset{\text{Equation 3.12}}{=}\Psi^{\prime}(\vec{p}\,)^{\dagger}\mathbf{x}.\] * For the first identity we use Equation 3.11 to see that for all \(\vec{q}\in\vec{P}\) \[\Psi^{\prime}(\vec{p}\,)\Psi^{\prime}(\vec{p}\,)^{\dagger}\Psi^{\prime}(\vec{p}\,)\vec{q}=\Psi^{\prime}(\vec{p}\,)\vec{q}.\] Hence all four identities of Equation 3.6 hold and \(\Psi^{\prime}(\vec{p}\,)^{\dagger}\) is the Moore-Penrose inverse of \(\Psi^{\prime}(\vec{p}\,)\). * The local uniform boundedness and local Lipschitz-continuity of \(\Psi^{\prime}(\vec{p}\,)^{\dagger}\), Equation 3.10, follow from Equation 3.9 by standard arguments. * Finally, \(P_{\vec{p}}=\Psi^{\prime}(\vec{p}\,)\Psi^{\prime}(\vec{p}\,)^{\dagger}\) (see Equation 3.13) is bounded as the composition of two bounded operators. \(\Box\) The next lemma transfers these properties to the composed operator \(N\): **Lemma 3.5**: _Let \(F:\mathbf{X}\to\mathbf{Y}\) be linear, bounded, with trivial nullspace and dense range, let \(\Psi:\mathcal{D}(\Psi)\subseteq\vec{P}\to\mathbf{X}\) be a Lipschitz-differentiable immersion and let \(N=F\circ\Psi:\mathcal{D}(N)=\mathcal{D}(\Psi)\to\mathbf{Y}\). Then the following hold:_ * _Decomposition property of the Moore-Penrose inverse:_ \[N^{\prime}(\vec{p}\,)^{\dagger}\mathbf{z}=\Psi^{\prime}(\vec{p}\,)^{\dagger}F^{-1}\mathbf{z}\text{ for all }\vec{p}\,\in\mathcal{D}(N),\mathbf{z}\in\mathcal{R}(F)\subseteq\mathbf{Y}.\] (3.16) _In particular this means that_ \[N^{\prime}(\vec{p}\,)^{\dagger}N^{\prime}(\vec{p}\,)=I\text{ on }\mathds{R}^{n_{\star}}\text{ and }N^{\prime}(\vec{p}\,)N^{\prime}(\vec{p}\,)^{\dagger}=Q|_{\mathcal{R}(FP_{\vec{p}})},\] (3.17) _where_ \(I\) _denotes the identity operator on_ \(\mathds{R}^{n_{\star}}\) _and_ \(Q\) _denotes the orthogonal projection_ \(Q:\mathcal{R}(FP_{\vec{p}})\dot{+}\mathcal{R}(FP_{\vec{p}})^{\perp}\subseteq\mathbf{Y}\rightarrow\overline{\mathcal{R}(FP_{\vec{p}})}\)_._ * _Generalized Newton-Mysovskii condition:_ \[\begin{split}\big{\|}N^{\prime}(\vec{p}\,)^{\dagger}(N^{\prime}(\vec{q}\,+s(\vec{p}\,-\vec{q}\,))-N^{\prime}(\vec{q}\,))(\vec{p}\,-\vec{q}\,)\big{\|}_{\vec{P}}\leq&
sC_{I}C_{L}\,\|\vec{p}-\vec{q}\,\|_{\vec{P}}^{2}\\ \vec{p}\,,\vec{q}\,\in&\mathcal{D}(N),s\in[0,1]\;. \end{split}\] (3.18) _We recall that the Lipschitz-constants_ \(C_{I}\) _and_ \(C_{L}\) _are defined in Equation_ 3.9_._ _Proof:_ First of all, we note that \[N^{\prime}(\vec{p}\,)=F\Psi^{\prime}(\vec{p}\,)\text{ on }\mathcal{D}(\Psi)=\mathcal{D}(N).\] To prove Equation 3.16 we have to verify Equation 3.6 with \[L:=N^{\prime}(\vec{p}\,)=F\Psi^{\prime}(\vec{p}\,):\vec{P}\rightarrow\mathbf{Y}\text{ and }B:=\Psi^{\prime}(\vec{p}\,)^{\dagger}F^{-1}:\mathcal{R}(F)\subseteq\mathbf{Y}\rightarrow\vec{P}.\] Note that since we assume that \(F\) has dense range we do not need to define and consider \(B\) on \(\mathcal{R}(F)\dot{+}\underbrace{\mathcal{R}(F)^{\perp}}_{=\{0\}}\). Let us first state that with the notation of Equation 3.6 we have for fixed \(\vec{p}\,\): \[\mathcal{D}(B)=\mathcal{D}(\Psi^{\prime}(\vec{p}\,)^{\dagger}F^{-1})=\mathcal{R}(F)\text{ and }\mathcal{R}(L)=\{F\Psi^{\prime}(\vec{p}\,)\vec{q}:\vec{q}\in\mathds{R}^{n_{\star}}\}=\mathcal{R}(FP_{\vec{p}}).\] We use \(P\equiv 0\) in Equation 3.6. In particular the first item shows that for \(\mathbf{z}=F\mathbf{x}=FP_{\vec{p}}\mathbf{x}+F(I-P_{\vec{p}})\mathbf{x}\) we have \[Q\mathbf{z}=Q(FP_{\vec{p}}\mathbf{x}+F(I-P_{\vec{p}}\,)(\mathbf{x}))=FP_{\vec{p}}\mathbf{x}.
\tag{3.19}\] Applying Lemma 3.4 and the invertibility of \(F\) on the range of \(F\) shows that \[LBL=F\Psi^{\prime}(\vec{p}\,)\Psi^{\prime}(\vec{p}\,)^{\dagger}F^{-1}F\Psi^{\prime}(\vec{p}\,)=F\Psi^{\prime}(\vec{p}\,)\Psi^{\prime}(\vec{p}\,)^{\dagger}\Psi^{\prime}(\vec{p}\,)\overset{\text{Equation 3.11}}{=}F\Psi^{\prime}(\vec{p}\,)=L,\] \[BLB=\Psi^{\prime}(\vec{p}\,)^{\dagger}F^{-1}F\Psi^{\prime}(\vec{p}\,)\Psi^{\prime}(\vec{p}\,)^{\dagger}F^{-1}\overset{\text{Equation 3.11}}{=}\Psi^{\prime}(\vec{p}\,)^{\dagger}F^{-1}=B,\] \[BL=\Psi^{\prime}(\vec{p}\,)^{\dagger}F^{-1}F\Psi^{\prime}(\vec{p}\,)\overset{\text{Equation 3.11}}{=}I=I-P\quad(\text{recall }P\equiv 0),\] \[LB=F\Psi^{\prime}(\vec{p}\,)\Psi^{\prime}(\vec{p}\,)^{\dagger}F^{-1}\overset{\text{Equation 3.13}}{=}FP_{\vec{p}}\,F^{-1}\overset{\text{Equation 3.19}}{=}Q|_{\mathcal{D}(B)}.\] This proves Equation 3.16; Equation 3.17 follows from the identities \(BL=I\) and \(LB=Q|_{\mathcal{D}(B)}\). The generalized Newton-Mysovskii condition Equation 3.18 then follows from Equation 3.16 together with the bounds Equation 3.9 and Equation 3.10 for the immersion \(\Psi\). \(\Box\) We have now all ingredients to prove a local convergence rate result for a Gauss-Newton's method, where the operator \(N\) is the composition of a linear bounded operator and a Lipschitz-differentiable immersion: **Theorem 3.6**: _Let \(F:\mathbf{X}\to\mathbf{Y}\) be linear, bounded, with trivial nullspace and dense range. Moreover, let \(\Psi:\mathcal{D}(\Psi)\subseteq\vec{P}\to\mathbf{X}\) be a Lipschitz-differentiable immersion with \(\mathcal{D}(\Psi)\) open, non-empty, and convex. Moreover, \(N=F\circ\Psi:\mathcal{D}(\Psi)\to\mathbf{Y}\). We assume that there exists \(\vec{p}^{\,\dagger}\in\mathcal{D}(\Psi)\) that satisfies_ \[N(\vec{p}^{\,\dagger})=\mathbf{y}. \tag{3.20}\] _Moreover, we assume that there exists \(\vec{p}^{\,0}\in\mathcal{D}(\Psi)\), which satisfies Equation 3.2.
Then, the iterates of the Gauss-Newton iteration,_ \[\vec{p}^{\,k+1}=\vec{p}^{\,k}-N^{\prime}(\vec{p}^{\,k})^{\dagger}(N(\vec{p}^{\,k})-\mathbf{y})\quad k\in\mathds{N}_{0} \tag{3.21}\] _are well-defined elements in \(\overline{\mathcal{B}(\vec{p}^{\,0},\rho)}\) and converge quadratically to \(\vec{p}^{\,\dagger}\)._ _Proof:_ First of all, note that \(\mathcal{D}(\Psi)=\mathcal{D}(N)\) since \(F\) is defined all over \(\mathbf{X}\). Let \(\rho=\left\|\vec{p}^{\,\dagger}-\vec{p}^{\,0}\right\|_{\vec{P}}\). We prove by induction that \(\vec{p}^{\,k}\in\overline{\mathcal{B}(\vec{p}^{\,\dagger};\rho)}\) for all \(k\in\mathds{N}_{0}\). * For \(k=0\) the assertion is satisfied by assumption Equation 3.2. * Let \(\vec{p}^{\,k}\in\overline{\mathcal{B}(\vec{p}^{\,\dagger};\rho)}\). Using the first condition of Equation 3.6, which a Moore-Penrose inverse satisfies, we see that \[N^{\prime}(\vec{p}^{\,k})N^{\prime}(\vec{p}^{\,k})^{\dagger}N^{\prime}(\vec{p}^{\,k})(\vec{p}^{\,k+1}-\vec{p}^{\,\dagger})=N^{\prime}(\vec{p}^{\,k})(\vec{p}^{\,k+1}-\vec{p}^{\,\dagger}).\] The definition of Gauss-Newton's method, Equation 3.21, and Equation 3.20 then imply that \[N^{\prime}(\vec{p}^{\,k})(\vec{p}^{\,k+1}-\vec{p}^{\,\dagger})=N^{\prime}(\vec{p}^{\,k})N^{\prime}(\vec{p}^{\,k})^{\dagger}(N(\vec{p}^{\,\dagger})-N(\vec{p}^{\,k})-N^{\prime}(\vec{p}^{\,k})(\vec{p}^{\,\dagger}-\vec{p}^{\,k})),\] and consequently, using the third identity of Equation 3.6 (note that under the assumptions of this theorem \(P=0\), see the proof prior to Equation 3.11), the second identity of Equation 3.6 and that \(F\) is injective, we get \[\vec{p}^{\,k+1}-\vec{p}^{\,\dagger}=N^{\prime}(\vec{p}^{\,k})^{\dagger}N^{\prime}(\vec{p}^{\,k})(\vec{p}^{\,k+1}-\vec{p}^{\,\dagger}) =N^{\prime}(\vec{p}^{\,k})^{\dagger}(N(\vec{p}^{\,\dagger})-N(\vec{p}^{\,k})-N^{\prime}(\vec{p}^{\,k})(\vec{p}^{\,\dagger}-\vec{p}^{\,k}))\] \[=\Psi^{\prime}(\vec{p}^{\,k})^{\dagger}(\Psi(\vec{p}^{\,\dagger}
)-\Psi(\vec{p}^{\,k})-\Psi^{\prime}(\vec{p}^{\,k})(\vec{p}^{\,\dagger}-\vec{p}^{\,k})).\] From the Newton-Mysovskii condition Equation 3.18 and Equation 3.2 it then follows that \[\begin{split}\left\|\vec{p}^{\,k+1}-\vec{p}^{\,\dagger}\right\|_{\vec{P}}\leq\frac{C_{I}C_{L}}{2}\left\|\vec{p}^{\,k}-\vec{p}^{\,\dagger}\right\|_{\vec{P}}^{2}&\leq\frac{C_{I}C_{L}\rho}{2}\left\|\vec{p}^{\,k}-\vec{p}^{\,\dagger}\right\|_{\vec{P}}<\left\|\vec{p}^{\,k}-\vec{p}^{\,\dagger}\right\|_{\vec{P}}\\ &\text{ or }\left\|\vec{p}^{\,k+1}-\vec{p}^{\,\dagger}\right\|_{\vec{P}}=\left\|\vec{p}^{\,k}-\vec{p}^{\,\dagger}\right\|_{\vec{P}}=0.\end{split}\] (3.22) This, in particular, shows that \(\vec{p}^{\,k+1}\in\mathcal{B}(\vec{p}^{\,\dagger};\rho)\), thus the well-definedness of the Gauss-Newton iterations in the closed ball. * Using Equation 3.22 we then get, since \(h=C_{I}C_{L}\rho/2<1\), that \[\left\|\vec{p}^{\,k+1}-\vec{p}^{\,\dagger}\right\|_{\vec{P}}\leq h^{k+1}\left\|\vec{p}^{\,0}-\vec{p}^{\,\dagger}\right\|_{\vec{P}}\leq h^{k+1}\rho,\] which converges to \(0\) for \(k\to\infty\). * Convergence and the first inequality of Equation 3.22 imply quadratic convergence. \(\Box\) **Remark 3.7**: Based on the assumption of an immersion we have shown in Lemma 3.5 that \(\Psi^{\prime}(\vec{p}\,)^{\dagger}F^{-1}\) is the Moore-Penrose inverse of \(N^{\prime}(\vec{p}\,)=F\Psi^{\prime}(\vec{p}\,)\). In order to prove (quadratic) convergence of Gauss-Newton's methods one only requires an _outer inverse_ (see Notation 3.2). Following [37] (see also [18]) the analysis of Gauss-Newton's method could be based on _outer inverses_, which is more general than the Moore-Penrose inverse (compare Equation 3.4 and Equation 3.6). However, it is nice to actually see that \(N^{\prime}(\vec{p}\,)^{\dagger}\) is a Moore-Penrose inverse, which is the novelty compared to the analysis of [37].
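As a numerical illustration (a sketch for a toy problem of our own choosing, not the operators of the paper), the Gauss-Newton iteration Equation 3.21 can be run with `numpy.linalg.pinv` playing the role of the Moore-Penrose inverse; for a zero-residual problem with full-column-rank Jacobian one observes the quadratic convergence asserted in Theorem 3.6, and the identity \(N^{\prime}(\vec{p}\,)^{\dagger}N^{\prime}(\vec{p}\,)=I\) from Equation 3.17 can be checked directly.

```python
import numpy as np

# Gauss-Newton iteration, Equation 3.21: p_{k+1} = p_k - N'(p_k)^+ (N(p_k) - y).
# Toy zero-residual problem (our choice): N: R^2 -> R^3,
# N(p) = (p0^2, p0*p1, p1^2), y = N(p_dag) with p_dag = (1, 2).

def N(p):
    return np.array([p[0]**2, p[0]*p[1], p[1]**2])

def N_prime(p):
    # 3 x 2 Jacobian; full column rank away from p = 0
    return np.array([[2.0*p[0], 0.0],
                     [p[1], p[0]],
                     [0.0, 2.0*p[1]]])

p_dag = np.array([1.0, 2.0])
y = N(p_dag)

# Moore-Penrose relation at p_dag: N'^+ N' = I on R^2 (cf. Equation 3.17)
J = N_prime(p_dag)
B = np.linalg.pinv(J)
identity_ok = np.allclose(B @ J, np.eye(2))

# Run the iteration from a nearby starting point
p = np.array([1.2, 1.7])
errs = []
for _ in range(8):
    p = p - np.linalg.pinv(N_prime(p)) @ (N(p) - y)
    errs.append(np.linalg.norm(p - p_dag))
```

Here the pseudoinverse of the non-square Jacobian replaces the inverse of the classical Newton step, exactly as in the passage from Equation 3.3 to Equation 3.21.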
For excellent expositions on Kantorovich and Mysovskii theory see [30, 40, 46]; here we replace the Newton-Mysovskii conditions by properties of an immersion. For aspects related to Newton's methods for singular points see [10, 16]. For applications of generalized inverses in nonlinear analysis see [34, 35]. ### Neural networks We want to apply the decomposition theory to Gauss-Newton's methods for solving Equation 1.3, where \(\Psi\) is a _shallow neural network operator_. **Definition 3.8** (Shallow neural network operator): Let \(N\in\mathds{N}\) be fixed. We consider the operator \[\begin{split}\Psi:\vec{P}:=\mathds{R}^{N}\times\mathds{R}^{n\times N}\times\mathds{R}^{N}&\to C^{1}([0,1]^{n})\subseteq\mathbf{X}:=L^{2}([0,1]^{n}),\\ (\vec{\alpha},\mathbf{w},\vec{\theta})&\mapsto\left(\vec{x}\rightarrow\sum_{j=1}^{N}\alpha_{j}\sigma\left(\mathbf{w}_{j}^{T}\vec{x}+\theta_{j}\right)\right)\\ \text{where }\alpha_{j},\theta_{j}\in\mathds{R}\text{ and }\vec{x},\mathbf{w}_{j}\in\mathds{R}^{n}.\end{split} \tag{3.23}\] Note that with our previous notation, for instance in Definition 3.3, we have \(n_{*}=(n+2)N\). We summarize the notation, because it is quite heavy: 1. \(\vec{\ }\) denotes a vector in \(\mathds{R}^{n}\) or \(\mathds{R}^{N}\), 2. \(\mathbf{w}\) denotes a matrix: The only exception is Definition 3.10, where it is a tensor. \(\mathbf{w}_{j}\) denotes a vector, aside from Definition 3.10, where it is again a tensor. **Example 3.9** (Examples of activation functions): \(\sigma\) is called the _activation function_, such as * the _sigmoid function_, defined by \[\sigma(t)=\frac{1}{1+\mathrm{e}^{-t/\varepsilon}}\text{ for all }t\in\mathds{R}.\] (3.24) Note that we omit the \(\varepsilon\)-dependence for notational convenience.
* The _hyperbolic tangent_ \[t\rightarrow\tanh(t)=\frac{\mathrm{e}^{2t}-1}{\mathrm{e}^{2t}+1}.\] (3.25) * The _ReLU_ activation function, \[\sigma(t)=\max\left\{0,t\right\}\text{ for all }t\in\mathds{R}.\] (3.26) * The _step function_, which is the pointwise limit of the sigmoid function with respect to \(\varepsilon\to 0\), \[\sigma(t)=\left\{\begin{array}{ll}0&\text{for }t<0,\\ \frac{1}{2}&\text{for }t=0,\\ 1&\text{for }t>0,\end{array}\right.\quad t\in\mathds{R}.\] (3.27) We only consider shallow neural networks, in contrast to _deep neural networks_, which consist of several layers of shallow neural networks (see for instance [26]): Figure 1: Three different activation functions: Sigmoid, tanh and ReLU. 0-th derivative in green, first derivative in pink with circles, the first derivative multiplied by \(x\) in pink with \(\times\), second derivative in blue. The ReLU function is scaled by a factor \(1/10\) and its derivative is plotted in original form; the derivative times \(x\) is again scaled by \(1/10\) for visualization purposes.
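For concreteness, the following short Python sketch implements the activation functions of Example 3.9 and their first derivatives (with \(\varepsilon=1\) in the sigmoid); it also exhibits the evenness \(\sigma^{\prime}(t)=\sigma^{\prime}(-t)\) of the sigmoid derivative, which reappears later as the symmetry in Equation 3.37.

```python
import numpy as np

# Activation functions from Example 3.9 (epsilon = 1 in the sigmoid) and
# their first derivatives, as plotted in Figure 1.

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))    # Equation 3.24 with epsilon = 1

def sigmoid_prime(t):
    s = sigmoid(t)
    return s * (1.0 - s)               # derivative of the sigmoid

def tanh_prime(t):
    return 1.0 - np.tanh(t)**2         # derivative of Equation 3.25

def relu(t):
    return np.maximum(0.0, t)          # Equation 3.26

t = np.linspace(-4.0, 4.0, 81)
# The sigmoid derivative is even: sigma'(t) = sigma'(-t)
symmetry_ok = np.allclose(sigmoid_prime(t), sigmoid_prime(-t))
```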
**Definition 3.10** (Deep neural networks): Let \[\vec{P}_{l}:=\mathds{R}^{N_{l}}\times\mathds{R}^{n\times N_{l}}\times\mathds{R}^{N_{l}}\text{ for }l=1,\ldots,L\text{ and }\vec{P}:=\prod_{l=1}^{L}\vec{P}_{l}.\] Then a deep neural network consisting of \(L\) layers is written as \[\begin{split}\Psi:\vec{P}&\to L^{2}([0,1]^{n}),\\ (\vec{\alpha}_{l},\mathbf{w}_{l},\vec{\theta}_{l})_{l=1}^{L}&\mapsto\left(\vec{x}\rightarrow\sum_{j_{L}=1}^{N_{L}}\alpha_{j_{L}}^{L}\sigma_{\varepsilon_{L}}^{L}\left(p_{j_{L},L}\left(\sum_{j_{L-1}=1}^{N_{L-1}}\cdots\left(\sum_{j_{1}=1}^{N_{1}}\alpha_{j_{1},1}\sigma_{\varepsilon_{1}}^{1}\left(p_{j_{1}}^{1}(\vec{x})\right)\right)\right)\right)\right),\end{split} \tag{3.28}\] where \[p_{j}^{i}(\vec{x})=\mathbf{w}_{j,i}^{T}\vec{x}+\theta_{j}^{i}\text{ with }\alpha_{j}^{i},\theta_{j}^{i}\in\mathds{R}\text{ and }\vec{x},\mathbf{w}_{j,i}\in\mathds{R}^{n}\text{ for all }i=1,\ldots,L.\] Note that the values \(\varepsilon_{k}\), \(k=1,\ldots,L\) can be chosen differently for activation functions at different levels (cf. Equation 3.24). The success of neural networks is due to the universal approximation properties, proven for the first time in [9, 27]. The universal approximation result states that shallow neural networks are universal, that is, that each continuous function can be approximated arbitrarily well by a neural network function. We review this result now. **Theorem 3.11** ([26]): _Depending on the smoothness of the activation function \(\sigma\) there exist two classes of results._ * _Theorem 2 from_ _[_26_]__: Let_ \(\sigma:\mathds{R}\rightarrow\mathds{R}^{+}\) _be a_ **continuous, bounded and nonconstant** _function_.
Then, for every function_ \(g\in C(\mathds{R}^{n})\) _and every_ \(\nu>0\)_, there exists a function_ \[\vec{x}\to G(\vec{x})=\sum_{j=1}^{N}\alpha_{j}\sigma(\mathbf{w}_{j}^{T}\vec{x}+\theta_{j})\qquad\text{ with }N\in\mathds{N},\alpha_{j},\theta_{j}\in\mathds{R},\mathbf{w}_{j}\in\mathds{R}^{n},\] (3.29) _satisfying_ \[|G(\vec{x})-g(\vec{x})|<\nu\text{ uniformly on all compact subsets }K\subseteq\mathds{R}^{n}.\] * _Theorem 1 from_ _[_26_]__: Let_ \(\sigma:\mathds{R}\rightarrow\mathds{R}^{+}\) _be_ **unbounded and nonconstant**_. Then for every finite measure_ \(\mu\) _on_ \(\mathds{R}^{n}\)_, every function_ \(g\in L^{p}(\mu)\)_, and all constants_ \(\nu>0\) _and_ \(p\geq 1\) _there exists a function_ \(G\) _of the form Equation_ 3.29 _that satisfies_ \[\int_{\mathds{R}^{n}}|G(\vec{x})-g(\vec{x})|^{p}\,\mathrm{d}\mu(\vec{x})<\nu.\] The first result applies for instance to the sigmoid and hyperbolic tangent functions (see Equation 3.24 and Equation 3.25). The second result applies to the ReLU function (see Equation 3.26). In particular all approximation properties also hold on the compact set \([0,1]^{n}\), which we are considering. ### Newton-Mysovskii condition with neural networks In the following we verify Newton-Mysovskii conditions for \(\Psi\) being the encoder of Equation 3.23. First we calculate the first and second derivatives of \(\Psi\) with respect to \(\vec{\alpha},\mathbf{w}\) and \(\vec{\theta}\). The computations can be performed for deep neural network encoders as defined in Equation 3.28 in principle analogously, but are technically and notationally more complicated. To make the notation consistent we define \[\vec{p}:=(\vec{\alpha},\mathbf{w},\vec{\theta})\in\mathds{R}^{N}\times\mathds{R}^{n\times N}\times\mathds{R}^{N}=\mathds{R}^{n_{*}}.\] **Lemma 3.12**: _Let \(\sigma:\mathds{R}\rightarrow\mathds{R}^{+}\) be a twice differentiable function with uniformly bounded function values and first and second order derivatives, such as the sigmoid and hyperbolic tangent functions (see Figure 1)1.
Then, the derivatives of \(\Psi\) with respect to the coefficients \(\vec{p}\) are given by the following formulas:_ Footnote 1: This assumption is actually too restrictive, and only used to see that \(\Psi(\vec{p}\,)\in L^{2}([0,1]^{n})\). * _Derivative with respect to_ \(\alpha_{s}\)_,_ \(s=1,\ldots,N\)_:_ \[\frac{\partial\Psi}{\partial\alpha_{s}}[\vec{p}\,](\vec{x})=\sigma\left(\sum_{i=1}^{n}w_{s}^{i}x_{i}+\theta_{s}\right)\text{ for }s=1,\ldots,N.\] (3.30) * _Derivative with respect to_ \(w_{s}^{t}\) _where_ \(s=1,\ldots,N\)_,_ \(t=1,\ldots,n\)_:_ \[\frac{\partial\Psi}{\partial w_{s}^{t}}[\vec{p}\,](\vec{x})=\sum_{j=1}^{N}\alpha_{j}\sigma^{\prime}\left(\sum_{i=1}^{n}w_{j}^{i}x_{i}+\theta_{j}\right)\delta_{s=j}x_{t}=\alpha_{s}\sigma^{\prime}\left(\sum_{i=1}^{n}w_{s}^{i}x_{i}+\theta_{s}\right)x_{t}\] (3.31) * _Derivative with respect to_ \(\theta_{s}\) _where_ \(s=1,\ldots,N\)_:_ \[\frac{\partial\Psi}{\partial\theta_{s}}[\vec{p}\,](\vec{x})=\sum_{j=1}^{N}\alpha_{j}\sigma^{\prime}\left(\sum_{i=1}^{n}w_{j}^{i}x_{i}+\theta_{j}\right)\delta_{s=j}=\alpha_{s}\sigma^{\prime}\left(\sum_{i=1}^{n}w_{s}^{i}x_{i}+\theta_{s}\right).\] (3.32) _Note that all the derivatives above are functions in \(\mathbf{X}=L^{2}([0,1]^{n})\). In particular, maybe in a more intuitive way, we have_ \[D\Psi[\vec{p}\,](\vec{x})\vec{h}=\left(\tfrac{\partial\Psi}{\partial\vec{\alpha}}[\vec{p}\,](\vec{x})\quad\tfrac{\partial\Psi}{\partial\mathbf{w}}[\vec{p}\,](\vec{x})\quad\tfrac{\partial\Psi}{\partial\vec{\theta}}[\vec{p}\,](\vec{x})\right)^{T}\vec{h}\text{ for all }\vec{h}=\left(\begin{matrix}\vec{h}_{\vec{\alpha}}\\ \mathbf{h}_{\mathbf{w}}\\ \vec{h}_{\vec{\theta}}\end{matrix}\right)\in\mathds{R}^{n_{*}}\text{ and }\vec{x}\in\mathds{R}^{n}.
\tag{3.33}\] _Moreover, let \(s_{1},s_{2}=1,\ldots,N\), \(t_{1},t_{2}=1,\ldots,n\), then we have in a formal way:_ \[\frac{\partial^{2}\Psi}{\partial\alpha_{s_{1}}\partial\alpha_{s_{2}}}(\vec{x}) =0, \tag{3.34}\] \[\frac{\partial^{2}\Psi}{\partial\alpha_{s_{1}}\partial w_{s_{2}}^{t_{1}}}(\vec{x}) =\sigma^{\prime}\left(\sum_{i=1}^{n}w_{s_{1}}^{i}x_{i}+\theta_{s_{1}}\right)x_{t_{1}}\delta_{s_{1}=s_{2}},\] \[\frac{\partial^{2}\Psi}{\partial\alpha_{s_{1}}\partial\theta_{s_{2}}}(\vec{x}) =\sigma^{\prime}\left(\sum_{i=1}^{n}w_{s_{1}}^{i}x_{i}+\theta_{s_{1}}\right)\delta_{s_{1}=s_{2}},\] \[\frac{\partial^{2}\Psi}{\partial w_{s_{1}}^{t_{1}}\partial w_{s_{2}}^{t_{2}}}(\vec{x}) =\alpha_{s_{1}}\sigma^{\prime\prime}\left(\sum_{i=1}^{n}w_{s_{1}}^{i}x_{i}+\theta_{s_{1}}\right)x_{t_{1}}x_{t_{2}}\delta_{s_{1}=s_{2}},\] \[\frac{\partial^{2}\Psi}{\partial w_{s_{1}}^{t_{1}}\partial\theta_{s_{2}}}(\vec{x}) =\alpha_{s_{1}}\sigma^{\prime\prime}\left(\sum_{i=1}^{n}w_{s_{1}}^{i}x_{i}+\theta_{s_{1}}\right)x_{t_{1}}\delta_{s_{1}=s_{2}},\] \[\frac{\partial^{2}\Psi}{\partial\theta_{s_{1}}\partial\theta_{s_{2}}}(\vec{x}) =\alpha_{s_{1}}\sigma^{\prime\prime}\left(\sum_{i=1}^{n}w_{s_{1}}^{i}x_{i}+\theta_{s_{1}}\right)\delta_{s_{1}=s_{2}},\] _where \(\delta_{a=b}=1\) if \(a=b\) and \(0\) else, that is, the Kronecker delta._ The notation of directional derivatives with respect to parameters might be confusing. Note that, for instance, \(\tfrac{\partial\Psi}{\partial\theta_{s}}[\vec{p}\,](\vec{x})\) denotes a directional derivative of the functional \(\Psi\) with respect to the variable \(\theta_{s}\), and this derivative is a function, which depends on \(\vec{x}\). The argument at which the derivative is evaluated is a vector. So in such a formula \(\theta_{s}\) has two different meanings. Notationally differentiating between them would be exact but would become quite unreadable.
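The formulas of Lemma 3.12 can be sanity-checked numerically. The sketch below (our own check, with sigmoid activation and randomly drawn parameters) compares the analytic derivatives of Equations 3.30 to 3.32 at a fixed point \(\vec{x}\) against central finite differences.

```python
import numpy as np

# Finite-difference check of Equations 3.30-3.32 for the shallow network
# Psi of Equation 3.23 with sigmoid activation, evaluated at one point x.
rng = np.random.default_rng(0)
n, N = 3, 4

def sigma(t):
    return 1.0 / (1.0 + np.exp(-t))

def sigma_prime(t):
    s = sigma(t)
    return s * (1.0 - s)

def Psi(alpha, w, theta, x):
    # w has shape (N, n); row j holds w_j
    return float(alpha @ sigma(w @ x + theta))

alpha = rng.normal(size=N)
w = rng.normal(size=(N, n))
theta = rng.normal(size=N)
x = rng.normal(size=n)
z = w @ x + theta

# Analytic derivatives from Lemma 3.12:
d_alpha = sigma(z)                         # Equation 3.30
d_w = np.outer(alpha * sigma_prime(z), x)  # Equation 3.31, entry (s, t)
d_theta = alpha * sigma_prime(z)           # Equation 3.32

# Central finite differences in alpha_0, w_0^0 and theta_0:
h = 1e-6
ea = np.zeros(N); ea[0] = h
ew = np.zeros((N, n)); ew[0, 0] = h
fd_alpha0 = (Psi(alpha + ea, w, theta, x) - Psi(alpha - ea, w, theta, x)) / (2*h)
fd_w00 = (Psi(alpha, w + ew, theta, x) - Psi(alpha, w - ew, theta, x)) / (2*h)
fd_theta0 = (Psi(alpha, w, theta + ea, x) - Psi(alpha, w, theta - ea, x)) / (2*h)
```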
**Remark 3.13**: * In particular Equation 3.34 shows that \[\left(\vec{h}_{\vec{\alpha}}\quad\mathbf{h}_{\mathbf{w}}\quad\vec{h}_{\vec{\theta}}\right)D^{2}\Psi[\vec{p}\,](\vec{x})\begin{pmatrix}\vec{h}_{\vec{\alpha}}\\ \mathbf{h}_{\mathbf{w}}\\ \vec{h}_{\vec{\theta}}\end{pmatrix}\text{ is continuous (for fixed }\vec{x})\text{ with respect to }\vec{p}\,.\] (3.35) * We emphasize that under the assumptions of Lemma 3.12 the linear space (for fixed \(\vec{p}\,\)) \[\mathcal{R}(D\Psi[\vec{p}\,])=\left\{D\Psi[\vec{p}\,]\vec{h}:\vec{h}=(\vec{h}_{\vec{\alpha}},\mathbf{h}_{\mathbf{w}},\vec{h}_{\vec{\theta}})\in\mathds{R}^{N(n+2)}\right\}\subseteq L^{2}([0,1]^{n}).\] * In order to prove convergence of the Gauss-Newton method, Equation 3.21, by applying Theorem 3.6, we have to prove that \(\Psi\) is a Lipschitz-differentiable immersion. One important property has so far not been proven, namely that \[\partial_{k}\Psi[\vec{p}\,],\quad k=1,\ldots,n_{*}=N(n+2)\] (3.36) are linearly independent functions. In this paper, this will remain open as a conjecture, and the following statements are valid modulo this conjecture. In the following we survey some results on linear independence with respect to the coefficients \(\vec{\alpha},\mathbf{w},\vec{\theta}\) of the functions \(\vec{x}\to\sigma\left(\sum_{i=1}^{n}w_{s}^{i}x_{i}+\theta_{s}\right)\), which match the functions \(\vec{x}\to\frac{\partial\Psi}{\partial\alpha_{s}}[\vec{p}\,](\vec{x})\), that is with respect to the first \(N\) variables. ### Linear independence of activation functions and their derivatives The universal approximation results from, for instance, [9, 27, 26] do not allow one to conclude that neural network functions as in Equation 3.23 are linearly independent. Linear independence is a non-trivial research question: We recall a result from [31] from which linear independence of a shallow neural network operator, as defined in Equation 3.23, can be deduced for a variety of activation functions.
Similar results on linear independence of shallow network functions based on sigmoid activation functions have been stated in [47, 28], but the discussion in [31] raises questions on the completeness of those proofs. In [31] it is stated that all activation functions from the _PyTorch library_ [41] yield linearly independent functions with respect to almost all parameters \(\mathbf{w}\) and \(\vec{\theta}\). **Theorem 3.14** ([31]): _For all activation functions_ HardShrink, HardSigmoid, HardTanh, HardSwish, LeakyReLU, PReLU, ReLU, ReLU6, RReLU, SoftShrink, Threshold, LogSigmoid, Sigmoid, SoftPlus, Tanh, _and_ TanShrink, _and the_ PyTorch _functions_ CELU, ELU, SELU, _the shallow neural network functions of Equation 3.23 formed by randomly generated vectors \((\mathbf{w},\vec{\theta})\) are linearly independent._ **Remark 3.15**: * Theorem 3.14 states that the functions \(\frac{\partial\Psi}{\partial\alpha_{s}}\) (taking into account Equation 3.30) are linearly independent for _almost all_ parameters \((\mathbf{w},\vec{\theta})\in\mathds{R}^{n\times N}\times\mathds{R}^{N}\). In other words, the first block of the matrix \(D\Psi\) in Equation 3.33 consists of functions which are linearly independent for almost all parameters \((\mathbf{w},\vec{\theta})\). For our results to hold we additionally need that the functions \(\frac{\partial\Psi}{\partial w_{s}^{t}}\) and \(\frac{\partial\Psi}{\partial\theta_{s}}\) from the second and third block (see Equation 3.34) are linearly independent within the blocks, respectively, and also across the blocks. So far this has not been proven but can be conjectured already from Figure 1.
* For the sigmoid function we have _obvious symmetries_ because \[\sigma^{\prime}\left(\mathbf{w}_{j}^{T}\vec{x}+\theta_{j}\right)=\sigma^{\prime}\left(-\mathbf{w}_{j}^{T}\vec{x}-\theta_{j}\right)\text{ for every }\mathbf{w}_{j}\in\mathds{R}^{n},\theta_{j}\in\mathds{R},\] (3.37) or in other words, for the function \(\Psi\) from Equation 3.23 we have according to Equation 3.32 that \[\frac{\partial\Psi}{\partial\theta_{s}}[\vec{\alpha},\mathbf{w},\vec{\theta}](\vec{x})=\alpha_{s}\sigma^{\prime}(\mathbf{w}_{s}^{T}\vec{x}+\theta_{s})=\alpha_{s}\sigma^{\prime}(-\mathbf{w}_{s}^{T}\vec{x}-\theta_{s})=\frac{\partial\Psi}{\partial\theta_{s}}[\vec{\alpha},-\mathbf{w},-\vec{\theta}](\vec{x}),\] (3.38) that is, \(\frac{\partial\Psi}{\partial\theta_{s}}[\vec{\alpha},\mathbf{w},\vec{\theta}]\) and \(\frac{\partial\Psi}{\partial\theta_{s}}[\vec{\alpha},-\mathbf{w},-\vec{\theta}]\) are linearly dependent. **Conjecture 3.16**: We define by \(\mathcal{D}(\Psi)\) a _maximal set of vectors_ \((\vec{\alpha},\mathbf{w},\vec{\theta})\) such that the \(n_{*}=N\times(n+2)\) functions in \(\vec{x}\) \[\vec{x}\to\frac{\partial\Psi}{\partial\alpha_{s}}[\vec{\alpha},\mathbf{w},\vec{\theta}](\vec{x}),\quad\vec{x}\to\frac{\partial\Psi}{\partial w_{s}^{t}}[\vec{\alpha},\mathbf{w},\vec{\theta}](\vec{x}),\quad\vec{x}\to\frac{\partial\Psi}{\partial\theta_{s}}[\vec{\alpha},\mathbf{w},\vec{\theta}](\vec{x}),\quad s=1,\ldots,N,\;t=1,\ldots,n,\] are linearly independent. We assume that \(\mathcal{D}(\Psi)\) is open and dense in \(\mathds{R}^{n_{*}}\). The latter is guaranteed by Theorem 3.11. Recall the discussion above: the differentiation variables and the arguments coincide notationally, but are different objects. **Remark 3.17**: * It can be conjectured that for every element from \(\mathcal{D}(\Psi)\) only one element in \(\mathds{R}^{n_{*}}\) exists which satisfies _obvious symmetries_ such as the one formulated in Equation 3.38.
These "mirrored" elements are a set of measure zero in \(\vec{P}\). We conjecture that this corresponds to the set of measure zero as stated in [31], which is derived with Fourier methods. * Equation 3.34 requires that all components of the vector \(\vec{\alpha}\) are non-zero. This means in particular that for "sparse solutions", with less than \(n_{*}=N(n+2)\) coefficients, convergence is not guaranteed, because of a locally degenerating submanifold. We consider the manifold given by the function \[F:\mathds{R}^{2} \to\mathds{R}^{2},\] \[\begin{pmatrix}x\\ y\end{pmatrix} \mapsto\begin{pmatrix}xy\\ x^{2}+y^{2}\end{pmatrix}.\] (3.39) Then \[\nabla F(x,y)=\begin{pmatrix}y&x\\ 2x&2y\end{pmatrix}.\] We have \(\det\nabla F(x,y)=2(y^{2}-x^{2})\), which vanishes along the diagonals in \((x,y)\)-space. That is, off the diagonals the function is locally a submanifold (see Figure 2): ### Local convergence of Gauss-Newton's method with coding networks In the following we prove a local convergence result for a Gauss-Newton's method for solving operator equations Equation 1.3, where \(F\) is complemented by a shallow neural network coder \(\Psi\). In order to apply Theorem 3.1 we have to verify that the shallow neural network operator (see Equation 3.23) is a Lipschitz-differentiable immersion. **Lemma 3.18**: _Let \(F:\mathbf{X}=L^{2}([0,1]^{n})\to\mathbf{Y}\) be linear, bounded, with trivial nullspace and closed range, and let \(\sigma\) be strictly monotonic (like sigmoid or hyperbolic tangent) and satisfy the assumptions of Lemma 3.12. Moreover, assume that Conjecture 3.16 holds.
Then_ * _For every element_ \(\vec{p}\,=(\vec{\alpha},\mathbf{w},\vec{\theta})\in\mathds{R}^{n_{*}}\) _in the maximal set_ \(\mathcal{D}(\Psi)\) _(see Conjecture_ 3.16_),_ \(\mathcal{R}(D\Psi[\vec{p}\,])\) _is a linear subspace of the space_ \(\mathbf{X}\) _of dimension_ \(n_{*}=N\times(n+2)\)_._ * _There exists an open neighborhood_ \(\mathcal{U}\subseteq\mathds{R}^{N\times(n+2)}\) _of vectors_ \((\vec{\alpha},\mathbf{w},\vec{\theta})\) _such that_ \(\Psi\) _is a Lipschitz-differentiable immersion in_ \(\mathcal{U}\)_._ _Proof:_ * It is clear that for each fixed \(\vec{p}\,\), \(D\Psi[\vec{p}\,]\in L^{2}([0,1]^{n})\) because of the differentiability assumptions on \(\sigma\), see Equation 3.35. Conjecture 3.16 implies that \(\mathcal{R}(D\Psi[\vec{p}\,])\) is a linear subspace of \(\mathbf{X}\) of dimension \(N\times(n+2)\) (note that the elements are functions). * \(D^{2}\Psi[\vec{p}\,]:\mathds{R}^{N\times(n+2)}\to L^{2}([0,1]^{n})\) is continuous (see Equation 3.35) since we assume that the activation function \(\sigma\) is twice differentiable. Now we consider a non-empty open neighborhood \(\mathcal{U}\) of a vector \(\vec{p}\,\) with a compact closure.

Figure 2: The function \(F\) from Equation 3.39. We have plotted \(F(x,y)\) via its polar coordinates, i.e., \(r=|F(x,y)|\) and \(\theta=\tan^{-1}\left(\frac{xy}{x^{2}+y^{2}}\right)\). The colors correspond to identical angles.

Then, from the continuity of \(D^{2}\Psi\) with respect to \(\vec{p}\,\), it follows that \(\Psi\) is Fréchet-differentiable with Lipschitz-continuous derivative on \(\mathcal{U}\). In particular this means that item _(i)_ in Definition 3.3 holds. Moreover, Equation 3.9 holds for \(\Psi^{\prime}\).
That is, there exist constants \(C_{L}\) and \(C_{I}\) such that \[\|\Psi^{\prime}(\vec{p}\,)-\Psi^{\prime}(\vec{q}\,)\|_{\vec{P}\to\mathbf{Y}}\leq C_{L}\,\|\vec{p}\,-\vec{q}\,\|_{\vec{P}}\,\,\,\text{and}\,\,\,\|\Psi^{\prime}(\vec{p}\,)\|_{\vec{P}\to\mathbf{Y}}\leq C_{I}\,\,\text{for}\,\,\vec{p}\,,\vec{q}\,\in\mathcal{D}(\Psi). \tag{3.40}\] Note that \(\Psi^{\prime}(p)^{\dagger}\) as defined in Equation 3.8 is also uniformly bounded and Lipschitz-continuous as a consequence of Lemma 3.4. **Theorem 3.19** (Local convergence of Gauss-Newton's method): _Let \(F:\mathbf{X}=L^{2}([0,1]^{n})\to\mathbf{Y}\) be a linear, bounded operator with trivial nullspace and dense range and let \(N=F\circ\Psi\), where \(\Psi:\mathcal{D}(\Psi)\subseteq\mathds{R}^{N\times(n+2)}\to\mathbf{X}\) is a shallow neural network operator generated by an activation function \(\sigma\) which satisfies the assumptions of Lemma 3.18 and Conjecture 3.16. Let \(\vec{p}^{\,0}\in\mathcal{D}(\Psi)\) be the starting point of the Gauss-Newton iteration Equation 3.21 and let \(\vec{p}^{\,\dagger}\in\mathcal{D}(\Psi)\) be a solution of Equation 3.20 which satisfies Equation 3.2. Then the Gauss-Newton iterations converge locally, that is, if \(\vec{p}^{\,0}\) is sufficiently close to \(\vec{p}^{\,\dagger}\), and the convergence is quadratic._ _Proof:_ The proof is an immediate application of Lemma 3.18 to Theorem 3.6. \(\Box\) **Remark 3.20**: We have shown that a nonlinear operator equation, where the operator is a composition of a linear compact operator and a shallow neural network operator, can be solved with a Gauss-Newton's method with guaranteed local convergence in the parameter space. **Conclusion.** We have shown that Gauss-Newton's methods are efficient algorithms for solving linear inverse problems where the solution can be encoded with a neural network.
The convergence studies, however, are not complete, and are based on a conjecture on linear independence of activation functions and its derivatives. **Acknowledgements.** This research was funded in whole, or in part, by the Austrian Science Fund (FWF) P 34981 - New Inverse Problems of Super-Resolved Microscopy (NIPSUM). For the purpose of open access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission. Moreover, OS is supported by the Austrian Science Fund (FWF), with SFB F68 "Tomography Across the Scales", project F6807-N36 (Tomography with Uncertainties). The financial support by the Austrian Federal Ministry for Digital and Economic Affairs, the National Foundation for Research, Technology and Development and the Christian Doppler Research Association is gratefully acknowledged. BH is supported by the German Science Foundation (DFG) under the grant HO 1454/13-1 (Project No. 453804957).
2302.02513
Connectivity Enhanced Safe Neural Network Planner for Lane Changing in Mixed Traffic
Connectivity technology has shown great potentials in improving the safety and efficiency of transportation systems by providing information beyond the perception and prediction capabilities of individual vehicles. However, it is expected that human-driven and autonomous vehicles, and connected and non-connected vehicles need to share the transportation network during the transition period to fully connected and automated transportation systems. Such mixed traffic scenarios significantly increase the complexity in analyzing system behavior and quantifying uncertainty for highly interactive scenarios, e.g., lane changing. It is even harder to ensure system safety when neural network based planners are leveraged to further improve efficiency. In this work, we propose a connectivity-enhanced neural network based lane changing planner. By cooperating with surrounding connected vehicles in dynamic environment, our proposed planner will adapt its planned trajectory according to the analysis of a safe evasion trajectory. We demonstrate the strength of our planner design in improving efficiency and ensuring safety in various mixed traffic scenarios with extensive simulations. We also analyze the system robustness when the communication or coordination is not perfect.
Xiangguo Liu, Ruochen Jiao, Bowen Zheng, Dave Liang, Qi Zhu
2023-02-06T00:47:49Z
http://arxiv.org/abs/2302.02513v1
# Connectivity Enhanced Safe Neural Network Planner for Lane Changing in Mixed Traffic

###### Abstract.

Connectivity technology has shown great potentials in improving the safety and efficiency of transportation systems by providing information beyond the perception and prediction capabilities of individual vehicles. However, it is expected that human-driven and autonomous vehicles, and connected and non-connected vehicles need to share the transportation network during the transition period to fully connected and automated transportation systems. Such mixed traffic scenarios significantly increase the complexity in analyzing system behavior and quantifying uncertainty for highly interactive scenarios, e.g., lane changing. It is even harder to ensure system safety when neural network based planners are leveraged to further improve efficiency. In this work, we propose a connectivity-enhanced neural network based lane changing planner. By cooperating with surrounding connected vehicles in dynamic environment, our proposed planner will adapt its planned trajectory according to the analysis of a safe evasion trajectory. We demonstrate the strength of our planner design in improving efficiency and ensuring safety in various mixed traffic scenarios with extensive simulations. We also analyze the system robustness when the communication or coordination is not perfect.
Connected and Autonomous Vehicles, Safe Neural Network Planner, Mixed Traffic, Human-driven Vehicles
The planned trajectory does not need to complete the lane changing in one go. As long as there is a safe evasion trajectory, the ego vehicle can start the attempt and interact with surrounding vehicles. We analyze the worst case for the ego vehicle in dynamical environments. For the case without a connected leading vehicle, i.e., \(N=0\), we can directly assume that in the worst case, vehicle \(L_{1}\) takes the maximum deceleration under the mechanical constraints. In other cases that \(N>0\), we can leverage the coordination from connected vehicles to prevent overly conservative planning. We evaluate the fastest evasion trajectory of the ego vehicle in emergency situations.
For instance, when there is an emergency brake in the downstream, connected vehicles that first realize the event can collaboratively take a smaller deceleration, if it is safe for themselves, thus leaving more reaction time for the ego vehicle. If the evasion trajectory can prevent collisions, the ego vehicle can proceed to change lanes under the neural network based planners, otherwise it has to hesitate around the boundary of the two lanes or return back to the original lane for safety. The contribution of our work can be summarized as: * We propose a connectivity-enhanced neural network based lane changing planner in mixed traffic environment with human-driven and autonomous vehicles, and connected and non-connected vehicles. * Surrounding vehicles' behaviors are modeled via connectivity or aggressiveness assessment for safety analysis. Our planner design is guaranteed to be safe if the communication is perfect, the aggressive assessment is accurate, or if we choose to always treat the following vehicle as aggressive. * We demonstrate significant system efficiency improvements by leveraging connectivity in dynamic environment, through simulations with comprehensive experimental settings. We also analyze system robustness when coordination between connected vehicles is not perfect. The rest of this paper is organized as follows. In Section 2, we review related works on lane changing, inter-vehicle interaction, neural network-based planning and connected vehicles. In Section 3, we present our connectivity-enhanced planning framework. Section 4 shows the experimental results and Section 5 concludes the paper. ## 2. Related Work Planner design in autonomous driving has been a very active research area for its potential impact in greatly improving transportation system performance and also its challenge in ensuring system safety at the same time. 
There is an urgent need to prevent overly conservative design while also providing safety guarantees when considering environment uncertainty and complex inter-vehicle interactions (Safet et al., 2017; Safet et al., 2018; Safet et al., 2018). The work in (Safet et al., 2018) proposes a concept of legal safety, which means that autonomous vehicles will not be the cause of accidents and there is no collision if surrounding vehicles obey traffic rules. This is realized by proving the existence of a fail-safe trajectory under the planner at all times. Similarly, the concept of responsibility-sensitive safety is proposed in (Safet et al., 2018), which assumes that other participants behave according to common-sense rules and defines appropriate responses of autonomous vehicles in near-accident scenarios. However, safety can be compromised if other vehicles' behaviors violate the assumptions. The works in (Safet et al., 2018; Safet et al., 2018) develop a non-conservatively defensive driving strategy, which leverages sampling based or optimization based methods. The planned trajectory is executed after a safety evaluation. Model Predictive Control (MPC) based approaches can address safety issues by applying constraints or designing cost functions (Safet et al., 2018; Safet et al., 2018). Machine learning based techniques are increasingly popular in planning and decision making for autonomous driving (Safet et al., 2018; Safet et al., 2018; Safet et al., 2018), for their potential in improving average system performance under complex scenarios. The work in (Safet et al., 2018) proposes a concept of social perception, which infers the surrounding environment from other vehicles' reactions. It then leverages inverse reinforcement learning (IRL) to acquire the cost function of human driving, and uses a Markov Decision Process (MDP) to get probabilistically optimal solutions.

Figure 1. The ego vehicle \(E\) intends to change lane. With connectivity technology, vehicle \(E\) receives planned acceleration profiles and real-time motion states from connected leading vehicles \(L_{i}\), \(1\leq i\leq N\), and then analyzes the maximum deceleration of vehicle \(L_{1}\) during the lane changing process. By identifying the behavior of following vehicle \(F\) and analyzing system safety in the worst case, vehicle \(E\) may proceed to change lane, hesitate around the current lateral position, or abort the lane changing plan. This figure shows an example with \(N=2\) in which the following vehicle \(F\) is non-connected.

(Hendle et al., 2017) formulates the lane changing planning problem as a partially observable Markov Decision Process (POMDP), in which the cooperativeness of other traffic participants is an unobservable state. It predicts future actions of human cars via a logistic regression classifier, and solves the POMDP by Monte-Carlo Tree Search. The work in (Brockman et al., 2017) proposes a hierarchical reinforcement and imitation learning (H-REIL) approach specifically for near-accident scenarios, which consists of low-level policies learned by imitation learning (IL) for discrete driving modes, and a high-level policy learned by reinforcement learning (RL) that switches between different driving modes. (Hendle et al., 2017) proves the strength of a hierarchical neural network based planner regarding safety and verifiability, compared with a single neural network based planner. Although these methods demonstrate great performance improvement, it is still quite challenging to verify the safety of learning-enabled systems (Srivastava et al., 2014; Sutskever et al., 2015). System efficiency is also restricted by uncertainties from the perception (Srivastava et al., 2014) and prediction results of individual vehicles.
Connectivity can enhance the transportation system by reducing such uncertainties (Brockman et al., 2017; Srivastava et al., 2014; Sutskever et al., 2015). For instance, the work in (Hendle et al., 2017) proposes an intersection management scheme, in which the central manager assigns arriving speed and arriving time to vehicles. It assumes that all vehicles are connected and autonomous. The approach in (Brockman et al., 2017) leverages Dynamic Bayesian Networks (DBNs) to model vehicle state evolution. The central manager can send out warning messages to vehicles for collision avoidance. It assumes that all vehicles are connected and human-driven, and that the collision rate depends on velocity and driver reaction time. The work in (Hendle et al., 2017) assumes that all vehicles are connected, and leverages deep reinforcement learning (DRL) for behavior-level decision making in lane changing. In particular, it achieves performance improvement by incorporating downstream traffic status via vehicle-to-vehicle communication. There are several works developed for mixed traffic (Brockman et al., 2017; Srivastava et al., 2014; Sutskever et al., 2015; Sutskever et al., 2015) of connected and non-connected vehicles. The work in (Hendle et al., 2017) presents an RL-based multi-agent longitudinal planner for connected and autonomous vehicles, which adjusts speeds in upstream traffic to mitigate traffic shock-waves downstream. The results suggest that even for a penetration rate of 10%, connected and autonomous vehicles can significantly mitigate bottlenecks in highway traffic. The approach in (Hendle et al., 2017) leverages RL for trajectory recommendation to the connected vehicles in highway merging scenarios. It assumes that not all vehicles are connected and uses roadside cameras for data fusion, in order to map all vehicles.
The work in (Hendle et al., 2017) proposes an RL-based method for connected and autonomous vehicles to decide actions such as whether to change lane or keep lane based on the observation and shared information from neighbors. The system is modeled as a hybrid partially observable Markov Decision Process (HPOMDP) as not all vehicles are connected. However, it does not explicitly model inter-vehicle interaction, and the safety highly depends on accurate modeling of the surrounding vehicles, especially non-connected vehicles. In this work, we propose a general lane changing planner with safety guarantees in mixed traffic, which can work safely and efficiently under any penetration rate of connected vehicles.

## 3. Design of our framework

### Overview

By leveraging the connectivity technology, our planner design can further improve system efficiency while ensuring system safety at the same time. The framework is presented in Figure 2. Based on the planned acceleration profiles in the planning horizon and the real-time motion states of surrounding vehicles, we can leverage neural networks for longitudinal and lateral trajectory planning. At the same time, we can derive the maximum deceleration of the leading vehicle \(L_{1}\) (scenario as shown in Figure 1), and then perform system analysis for the worst case and adjust the trajectory to ensure safety. Here we adopt the same aggressiveness assessment method for the following vehicle \(F\) as in (Hendle et al., 2017) when vehicle \(F\) is non-connected. For the case that the following vehicle \(F\) is connected, we assume that it is collaborative. Safety analysis and trajectory adjustment are conducted periodically. At every step during the lane changing, the ego vehicle has three behavior-level options with strictly decreasing preference: proceed to change lane, hesitate around the current lateral position, or abort the lane change and return back to the original lane.
The planner analyzes the state after executing the accelerations produced by the neural-network-based planners for one time step. If a safe evasion trajectory exists in the worst case, the ego vehicle can go ahead and change lanes. Otherwise, it attempts a less preferred behavior until one is found that is followed by a safe evasion trajectory.

### Connectivity Assumptions

In this work, we assume that the connected leading vehicles \(L_{i}\), \(1\leq i\leq N\) (for example, \(N=2\) in the scenario from Figure 1), collaboratively assist the lane changing process of the ego vehicle \(E\) and prevent collision with their own immediate leading vehicle \(L_{i+1}\). Specifically, the connected leading vehicle \(L_{i}\) will keep its acceleration within the range \([-a_{i}^{m,d},a_{i}^{m,a}]\) while executing its own driving task if that is sufficient for keeping a safe distance between \(L_{i}\) and \(L_{i+1}\). The values of \(a_{i}^{m,d}\) and \(a_{i}^{m,a}\) are communicated to other connected vehicles. Only in emergency scenarios, e.g., when the leading vehicle \(L_{i+1}\) decelerates suddenly, may \(L_{i}\) violate such a '_promise_' and take a deceleration \(a_{x,i,d}>a_{i}^{m,d}\) to ensure safety. Note that this is considered an _expected violation_ of the promise, and our planner framework can ensure safety in such cases (later, in Section 4, we will demonstrate the impact when such a promise is violated _unexpectedly_, e.g., when a surrounding vehicle is malfunctioning or influenced by other unknown obstacles). To generalize the model, we assume that both connected human-driven vehicles and connected autonomous vehicles communicate their planned acceleration range to other vehicles in the same manner. However, human-driven vehicles may have a larger acceleration range because there is typically larger uncertainty in a human driver's behavior and execution process. For non-connected vehicles, we assume that they can take any acceleration value within their mechanical constraints. 
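The behavior-level fallback logic can be sketched as follows. This is an illustrative Python sketch of ours, not the paper's implementation; `planner` and `has_safe_evasion` are hypothetical stand-ins for the neural planner and the worst-case safety analysis.

```python
# Sketch of the behavior-level fallback: try the most preferred maneuver
# first and fall back only if the resulting state would not admit a safe
# evasion trajectory in the worst case.
def simulate_one_step(state, accel, dt=0.1):
    """Advance a (position, velocity) state by one control period."""
    p, v = state
    return (p + v * dt + 0.5 * accel * dt * dt, max(0.0, v + accel * dt))

def choose_behavior(state, planner, has_safe_evasion):
    # Strictly decreasing preference, as described above.
    for behavior in ("proceed", "hesitate", "abort"):
        accel = planner(state, behavior)       # candidate acceleration
        if has_safe_evasion(simulate_one_step(state, accel)):
            return behavior, accel
    # "abort" should always admit a safe evasion by construction; reaching
    # this point would indicate an inconsistent safety analysis.
    raise RuntimeError("no safe behavior found")
```

For instance, with a toy planner and a safety check that flags velocities above a threshold, the loop skips "proceed" when accelerating would violate the check and falls back to "hesitate".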
### Safety Analysis

Next, we analyze the evasion trajectory given the system states. Assuming that vehicle \(L_{i}\) decelerates with \(a_{x,i,d}\), vehicle \(L_{i-1}\) needs to decelerate with \(a_{x,i-1,d}\) to prevent collisions. Letting \(p_{m}\) denote the minimum safe distance between two vehicles, the required deceleration is obtained from the condition that the gap between \(L_{i}\) and \(L_{i-1}\) shrinks to exactly \(p_{m}\) at the time \(t_{Li}\) when their velocities become equal:

\[\begin{cases}p_{x,i}+v_{x,i}t_{Li}+\frac{a_{x,i,d}t_{Li}^{2}}{2}-p_{x,i-1}-v_{x,i-1}t_{Li}-\frac{a_{x,i-1,d}t_{Li}^{2}}{2}-p_{m}=0,\\ v_{x,i}+a_{x,i,d}t_{Li}-v_{x,i-1}-a_{x,i-1,d}t_{Li}=0.\end{cases}\]

Solving this system for each pair of adjacent vehicles yields the chain of required decelerations \(a_{x,i-1,d}\), and in particular the deceleration of the immediate leading vehicle \(L_{1}\). The lateral evasion trajectory of the ego vehicle determines the time \(t_{y,f}\) at which its lateral motion finishes. Here, \(p_{y,t_{0}}\) and \(v_{y,t_{0}}\) are the lateral position and velocity of the ego vehicle when \(t=0\), the centers of the original and target lane are \(y=0\) and \(y=w_{l}\), respectively, and the width of a vehicle is \(w_{v}\). As for the longitudinal motion, the optimal trajectory is analyzed intuitively in [19] for the case where the immediate leading vehicle \(L_{1}\) decelerates with \(a_{x,d}\): the ego vehicle first accelerates with \(a_{x,a}\) and then decelerates with \(a_{x,d}\), keeping the distance gap no smaller than \(p_{m}\) when \(t\in[0,t_{y,f}]\). This is the fastest longitudinal motion for getting closer to the leading vehicle, thus obtaining more time for the lateral evasion before the following vehicle catches up. 
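The required deceleration in this pairwise safety analysis can also be obtained numerically. The following is an illustrative sketch of ours (forward integration plus bisection), not the paper's closed form; decelerations are treated as magnitudes and a vehicle stays stopped once its velocity reaches zero.

```python
# Numerical sketch of the pairwise safety analysis: the smallest deceleration
# of vehicle L_{i-1} such that its gap to the decelerating leader L_i never
# drops below the minimum safe distance p_m.
def min_gap(p_lead, v_lead, a_lead, p_foll, v_foll, a_foll,
            dt=0.01, horizon=30.0):
    """Minimum gap over the horizon when both vehicles brake at constant rates."""
    gap, t = p_lead - p_foll, 0.0
    while t < horizon:
        p_lead += v_lead * dt
        v_lead = max(0.0, v_lead - a_lead * dt)
        p_foll += v_foll * dt
        v_foll = max(0.0, v_foll - a_foll * dt)
        gap = min(gap, p_lead - p_foll)
        t += dt
    return gap

def required_decel(p_lead, v_lead, a_lead, p_foll, v_foll,
                   p_m=2.0, a_max=9.0, tol=1e-3):
    # min_gap increases monotonically with the follower's deceleration,
    # so the minimal safe value can be found by bisection.
    if min_gap(p_lead, v_lead, a_lead, p_foll, v_foll, a_max) < p_m:
        return None  # no safe evasion exists even at maximum braking
    lo, hi = 0.0, a_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if min_gap(p_lead, v_lead, a_lead, p_foll, v_foll, mid) >= p_m:
            hi = mid
        else:
            lo = mid
    return hi
```

Returning `None` corresponds to the case, discussed below, in which no safe evasion trajectory exists.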
However, in this work we consider the more general situation in which the leading vehicle \(L_{1}\) can have a deceleration \(a_{x,1,d}\leq a_{x,d}\). Thus, when the ego vehicle \(E\) decelerates to the same velocity as the leading vehicle \(L_{1}\), its deceleration should change from \(a_{x,d}\) to \(a_{x,1,d}\). Otherwise, the headway of the ego vehicle would increase once its velocity drops below that of vehicle \(L_{1}\); the ego vehicle would be overreacting for collision avoidance, which cannot lead to an optimal trajectory. There are three phases in the optimal longitudinal motion. The ego vehicle first accelerates with \(a_{x}=a_{x,a}\) when \(t\in[0,t_{x,1}]\), then decelerates with \(a_{x}=-a_{x,d}\) when \(t\in[t_{x,1},t_{x,2}]\), and then decelerates with \(a_{x}=-a_{x,1,d}\) until it stops when \(t\in[t_{x,2},t_{y,f}]\). The position of the ego vehicle when \(t=t_{y,f}\) is formulated as

\[p_{x,t_{y,f}}=\begin{cases}p_{x,t_{0}}+v_{x,t_{0}}t_{x,1}+\frac{a_{x,a}t_{x,1}^{2}}{2}+(v_{x,t_{0}}+a_{x,a}t_{x,1})(t_{y,f}-t_{x,1})-\frac{a_{x,d}(t_{y,f}-t_{x,1})^{2}}{2}\\ \quad\text{if }v_{x,t_{0}}+a_{x,a}t_{x,1}-a_{x,d}(t_{y,f}-t_{x,1})\geq\max(0,\,v_{x,1}-a_{x,1,d}t_{y,f}),\\ p_{x,t_{0}}+v_{x,t_{0}}t_{x,1}+\frac{a_{x,a}t_{x,1}^{2}}{2}+\frac{(v_{x,t_{0}}+a_{x,a}t_{x,1})^{2}}{2a_{x,d}}\\ \quad\text{else if }\frac{v_{x,1}}{a_{x,1,d}}\leq\frac{v_{x,t_{0}}+a_{x,a}t_{x,1}}{a_{x,d}}+t_{x,1}\leq t_{y,f},\\ p_{x,t_{0}}+v_{x,t_{0}}t_{x,1}+\frac{a_{x,a}t_{x,1}^{2}}{2}+(v_{x,t_{0}}+a_{x,a}t_{x,1})(t_{x,2}-t_{x,1})-\frac{a_{x,d}(t_{x,2}-t_{x,1})^{2}}{2}\\ \quad-\frac{a_{x,1,d}(t_{y,f}-t_{x,2})^{2}}{2}+(v_{x,t_{0}}+a_{x,a}t_{x,1}-a_{x,d}(t_{x,2}-t_{x,1}))(t_{y,f}-t_{x,2})\\ \quad\text{else if }\frac{v_{x,1}}{a_{x,1,d}}\geq t_{y,f},\\ p_{x,t_{0}}+v_{x,t_{0}}t_{x,1}+\frac{a_{x,a}t_{x,1}^{2}}{2}+(v_{x,t_{0}}+a_{x,a}t_{x,1})(t_{x,2}-t_{x,1})-\frac{a_{x,d}(t_{x,2}-t_{x,1})^{2}}{2}+\frac{(v_{x,t_{0}}+a_{x,a}t_{x,1}-a_{x,d}(t_{x,2}-t_{x,1}))^{2}}{2a_{x,1,d}}\\ \quad\text{otherwise,}\end{cases}\tag{6}\]

where \(p_{x,t_{0}}\) and \(v_{x,t_{0}}\) are the longitudinal position and velocity of the ego vehicle when \(t=0\), respectively. Depending on the initial states and \(t_{y,f}\), there are four cases in Equation (6), which correspond to the four velocity curves in Figure 3. The black solid line and the yellow dash-dot line represent the velocity curves of the ego vehicle \(E\) and the leading vehicle \(L_{1}\), respectively. The first case represents the situation in which the ego vehicle has already finished its lateral motion before decelerating to the same velocity as vehicle \(L_{1}\) and changing its deceleration. The second case is that the ego vehicle decelerates with \(a_{x,d}\) until it stops, and then keeps \(v_{x}=0\) until \(t=t_{y,f}\). The third case is that the ego vehicle has already finished its lateral motion before reaching \(v_{x}=0\) with deceleration \(a_{x,1,d}\). The fourth case represents that the ego vehicle decelerates with \(a_{x,1,d}\) until it stops, and then keeps \(v_{x}=0\) until \(t=t_{y,f}\). Given that the two vehicles reach the same velocity when \(t=t_{x,2}\), we have \(v_{x,t_{0}}+a_{x,a}t_{x,1}-a_{x,d}(t_{x,2}-t_{x,1})=v_{x,1}-a_{x,1,d}t_{x,2}\). In all four cases, the distance gap between the ego vehicle \(E\) and the leading vehicle \(L_{1}\) reaches its minimum when \(t=t_{y,f}\). 
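The three-phase longitudinal motion behind Equation (6) can be sanity-checked by forward integration. The following is an illustrative Python sketch of ours (not the paper's implementation); the parameters in the check are chosen so that the first case applies, and the phase switch at the velocity match corresponds to \(t_{x,2}\).

```python
# Forward integration of the three-phase profile: accelerate with a_a until
# t_x1, brake with a_d until matching the leader's (decreasing) speed, then
# brake with a_1d until standstill. Returns the ego position at t_yf.
def ego_position_at(t_yf, p0, v0, a_a, a_d, a_1d, t_x1, v_l0, dt=1e-3):
    p, v, v_lead, matched, t = p0, v0, v_l0, False, 0.0
    while t < t_yf:
        if t < t_x1:
            a = a_a                      # phase 1: accelerate
        elif not matched and v > v_lead:
            a = -a_d                     # phase 2: brake harder than leader
        else:
            matched = True               # phase 3 (after t_x2): brake with a_1d
            a = -a_1d
        p += v * dt
        v = max(0.0, v + a * dt)
        v_lead = max(0.0, v_lead - a_1d * dt)  # leader decelerates with a_1d
        t += dt
    return p

# Parameters for which the first case of Eq. (6) applies; the closed-form
# value p0 + v0*t_x1 + a_a*t_x1^2/2 + (v0 + a_a*t_x1)*(t_yf - t_x1)
#   - a_d*(t_yf - t_x1)^2/2 equals 21.0 here.
p = ego_position_at(t_yf=2.0, p0=0.0, v0=10.0, a_a=2.0, a_d=4.0,
                    a_1d=1.0, t_x1=1.0, v_l0=5.0)
assert abs(p - 21.0) < 0.05
```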
By letting \(p_{x,1,t_{y,f}}-p_{x,t_{y,f}}-p_{m}=0\), we can compute the value of \(t_{x,1}\) as

\[t_{x,1}=\begin{cases}t_{y,f}-\sqrt{t_{y,f}^{2}+\frac{2v_{x,t_{0}}t_{y,f}-a_{x,d}t_{y,f}^{2}+2p_{x,t_{0}}+2p_{m}-2p_{x,1,t_{y,f}}}{a_{x,a}+a_{x,d}}}\\ \quad\text{if }v_{x,t_{0}}+a_{x,a}t_{x,1}-a_{x,d}(t_{y,f}-t_{x,1})\geq\max(0,\,v_{x,1}-a_{x,1,d}t_{y,f}),\\ -\frac{v_{x,t_{0}}}{a_{x,a}}+\sqrt{\left(\frac{v_{x,t_{0}}}{a_{x,a}}\right)^{2}-\frac{2p_{x,t_{0}}a_{x,d}+v_{x,t_{0}}^{2}+2p_{m}a_{x,d}-2p_{x,1,t_{y,f}}a_{x,d}}{(a_{x,a}+a_{x,d})a_{x,a}}}\\ \quad\text{else if }\frac{v_{x,1}}{a_{x,1,d}}\leq\frac{v_{x,t_{0}}+a_{x,a}t_{x,1}}{a_{x,d}}+t_{x,1}\leq t_{y,f},\\ \frac{-(v_{x,t_{0}}-v_{x,1})+\sqrt{(v_{x,t_{0}}-v_{x,1})^{2}-2(a_{x,a}+a_{x,1,d})C_{2}}}{a_{x,a}+a_{x,1,d}}\\ \quad\text{else if }\frac{v_{x,1}}{a_{x,1,d}}\geq t_{y,f},\\ \frac{-(v_{x,t_{0}}-v_{x,1})+\sqrt{(v_{x,t_{0}}-v_{x,1})^{2}-2(a_{x,a}+a_{x,1,d})C_{2}}}{a_{x,a}+a_{x,1,d}}\\ \quad\text{otherwise,}\end{cases}\tag{7}\]

where \(C_{2}=\frac{(v_{x,t_{0}}-v_{x,1})^{2}+(2p_{x,t_{0}}+2p_{m}-2p_{x,1})(a_{x,d}-a_{x,1,d})}{2(a_{x,a}+a_{x,d})}\). It is noted that if \(t_{y,f}^{2}+\frac{2v_{x,t_{0}}t_{y,f}-a_{x,d}t_{y,f}^{2}+2p_{x,t_{0}}+2p_{m}-2p_{x,1,t_{y,f}}}{a_{x,a}+a_{x,d}}\leq 0\), the ego vehicle can keep accelerating until \(t=t_{y,f}\), i.e., \(t_{x,1}=t_{y,f}\), and remain safe. If \(2v_{x,t_{0}}t_{y,f}-a_{x,d}t_{y,f}^{2}+2p_{x,t_{0}}+2p_{m}-2p_{x,1,t_{y,f}}>0\) in the first case, or \(2p_{x,t_{0}}a_{x,d}+v_{x,t_{0}}^{2}+2p_{m}a_{x,d}-2p_{x,1,t_{y,f}}a_{x,d}>0\) in the second case, or \(C_{2}>0\) in the third and fourth cases, we have \(t_{x,1}<0\), which means that the ego vehicle cannot prevent collisions even by decelerating with \(a_{x,d}\) from \(t=0\); in other words, the safe evasion trajectory does not exist. 
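The first case of Equation (7) can be verified by substitution: with the computed \(t_{x,1}\), the closed-form case-1 ego position leaves a gap of exactly \(p_{m}\) to the leader at \(t_{y,f}\). The sketch below is our own numeric check, with parameters assumed to satisfy the case-1 condition.

```python
import math

# Case 1 of Eq. (7): switching time from acceleration a_a to braking a_d so
# that the gap to the leader equals exactly p_m when the lateral motion
# finishes at t_yf. p_lead_tyf is the leader's position at t_yf.
def t_x1_case1(p0, v0, a_a, a_d, t_yf, p_m, p_lead_tyf):
    disc = t_yf**2 + (2*v0*t_yf - a_d*t_yf**2 + 2*p0 + 2*p_m
                      - 2*p_lead_tyf) / (a_a + a_d)
    if disc < 0:
        return t_yf       # ego may keep accelerating for the whole horizon
    return t_yf - math.sqrt(disc)  # a negative result: no safe evasion exists

p0, v0, a_a, a_d, t_yf, p_m, p_lead_tyf = 0.0, 10.0, 2.0, 4.0, 2.0, 2.0, 20.0
t1 = t_x1_case1(p0, v0, a_a, a_d, t_yf, p_m, p_lead_tyf)
assert 0.0 < t1 < t_yf
# Plug t_x1 back into the case-1 closed-form position of the ego vehicle.
u = t_yf - t1
p_ego = p0 + v0*t_yf + a_a*(t_yf**2 - u*u)/2 - a_d*u*u/2
assert abs(p_lead_tyf - p_ego - p_m) < 1e-9   # gap is exactly p_m at t_yf
```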
Let \(\tau_{1}\) denote the time at which the ego vehicle decelerates to the same velocity as the leading vehicle, and let \(\tau_{2}\) denote the time at which the ego vehicle longitudinally decelerates to a velocity of zero. Assuming that the following vehicle accelerates with \(a_{x,f}\in[-a_{x,d},a_{x,a}]\) and stays stopped once it reaches a velocity of zero, its position at \(t\) is

\[p_{x,f,t}=\begin{cases}p_{x,f}+v_{x,f}t+\frac{a_{x,f}t^{2}}{2}&\text{if }v_{x,f}+a_{x,f}t\geq 0,\\ p_{x,f}-\frac{v_{x,f}^{2}}{2a_{x,f}}&\text{otherwise,}\end{cases}\tag{9}\]

where \(p_{x,f}\) and \(v_{x,f}\) are the position and velocity of the following vehicle when \(t=0\).

## 4. Experimental Results

Figure 4 compares the lane changing success rate under different planners when the number of connected leading vehicles is \(N=5\). The horizontal axes show the sudden deceleration of the non-connected leading vehicle \(L_{N+1}\). A lane change is considered successful if the ego vehicle finally crosses the border of the two lanes within the simulation horizon without any collision. Because safety is ensured by the trajectory adjustment function and there is indeed no collision in all simulations, we only present the results of lane changing success rate. 
As expected, the lane changing success rate decreases as the deceleration \(a_{x,N+1,d}\) gets larger. Among these planners, 'CV_all', 'CV_follow' and 'CV_none' correspond to full, partial and no utilization of connectivity, respectively. 'No_agg_assess' represents the case where even the aggressiveness assessment function is disabled, which leverages less information and is more conservative. The results show that 'CV_all' performs slightly better than 'CV_follow', and these two planners achieve a considerably larger success rate than 'CV_none' and 'No_agg_assess'. **This clearly shows the effectiveness of our approach in improving system performance.** It means that understanding the following vehicle's intention can greatly help the lane changing maneuver of the ego vehicle, and connectivity of the leading vehicles can further improve it. Table 1 shows the average lane changing success rate as \(N\) and \(a_{x,N+1,d}\) change. It consistently shows that the more connectivity is utilized, the better the system performance. Moreover, a larger \(N\) results in a larger success rate because more connected leading vehicles can cooperate to leave larger space for the ego vehicle. Figure 5 demonstrates the performance of our planner design with a specific example, in which \(N=5\) and \(a_{x,N+1,d}=5\) meters per second squared. It shows the lateral position and longitudinal velocity of the ego vehicle under different planners. Under the 'CV_all' planner, the ego vehicle completes the lane change in about three seconds. Under the other three planners, the ego vehicle first moves laterally toward the target lane, and then turns back to the original lane at \(t=3\) seconds. 'CV_follow' results in a longer time of staying in the target lane compared with 'CV_none' and 'No_agg_assess'. In this example, 'CV_none' and 'No_agg_assess' lead to the same trajectory of the ego vehicle. 
\begin{table} \begin{tabular}{c c|c c c c} \hline \(N\) & \(a_{x,N+1,d}\) & CV\_all & CV\_follow & CV\_none & No\_agg\_assess \\ \hline \multirow{5}{*}{1} & 2 & 1 & 1 & 0.995 & 0.995 \\ & 3 & 1 & 1 & 0.875 & 0.83 \\ & 4 & 0.356 & 0.337 & 0.23 & 0.195 \\ & 5 & 0.001 & 0 & 0 & 0 \\ & 6 & 0 & 0 & 0 & 0 \\ \hline \multirow{5}{*}{3} & 2 & 1 & 1 & 1 & 1 \\ & 3 & 1 & 1 & 0.928 & 0.894 \\ & 4 & 0.998 & 0.982 & 0.712 & 0.672 \\ & 5 & 0.534 & 0.489 & 0.288 & 0.256 \\ & 6 & 0.01 & 0.008 & 0.024 & 0.022 \\ \hline \multirow{5}{*}{10} & 2 & 1 & 1 & 1 & 1 \\ & 3 & 1 & 1 & 1 & 1 \\ \cline{1-1} & 4 & 1 & 1 & 0.993 & 0.998 \\ \cline{1-1} & 5 & 1 & 1 & 0.969 & 0.956 \\ \cline{1-1} & 6 & 1 & 1 & 0.95 & 0.928 \\ \hline \end{tabular} \end{table} Table 1. Lane changing success rate of different planners.

Figure 4. Lane changing success rate under different planners is compared when the number of connected leading vehicles is \(N=5\). The horizontal axes show the sudden deceleration of non-connected leading vehicle \(L_{N+1}\).

Figure 5. Lateral position and longitudinal velocity of the ego vehicle in an example scenario are plotted under different planners, when the number of connected leading vehicles is \(N=5\) and the deceleration of non-connected leading vehicle \(L_{N+1}\) is \(a_{x,N+1,d}=5\) meters per second squared.

### Impact of Unexpected Promise Violation

According to the _promise_ assumption introduced in Section 3, the connected leading vehicle \(L_{i}\) will keep its acceleration in the range \([-a_{i}^{m,d},a_{i}^{m,a}]\) as long as that does not hurt its own safety. Let us define the promise violation rate \(p_{o}\), which denotes the probability that the promise is violated unexpectedly in every control period. We assume that promise violation is independent among all connected vehicles, and that a vehicle can take any deceleration following the uniform distribution over \([a_{x,i,d},a_{x,N+1,d}]\) when violating the promise. 
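The promise-violation sampling used in this robustness study can be sketched as follows. This is an illustrative model of ours; the function and parameter names are not the paper's.

```python
import random

# Sketch of the promise-violation model: in every control period, each
# connected vehicle independently violates its promise with probability p_o;
# a violating vehicle samples a deceleration uniformly between its promised
# bound a_promised and the worst-case deceleration a_worst.
def sample_decelerations(n_vehicles, p_o, a_promised, a_worst, rng=random):
    decels = []
    for _ in range(n_vehicles):
        if rng.random() < p_o:                    # unexpected violation
            decels.append(rng.uniform(a_promised, a_worst))
        else:
            decels.append(a_promised)             # promise kept
    return decels
```

Sampling one such vector per control period, independently across vehicles, reproduces the assumed violation process for the Monte Carlo simulations.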
Figure 6 presents the collision rate and lane changing success rate under varied promise violation rates \(p_{o}\) when the first non-connected leading vehicle \(L_{N+1}\) takes different sudden decelerations \(a_{x,N+1,d}\). Generally speaking, a larger promise violation rate and a larger deceleration result in a smaller lane changing success rate and a higher collision risk. However, when the promise violation rate is within 20% and the sudden deceleration is less than 5 meters per second squared, the collision rate is 0% and the success rate remains relatively high.

## 5. Conclusion

In this work, we present a connectivity-enhanced planning framework for neural network based lane changing in mixed traffic. The framework can significantly improve lane changing performance by coordinating with surrounding connected vehicles in a dynamic environment. Extensive experiments demonstrate the strength of our planner design in improving efficiency while ensuring safety. Our experiments suggest that (1) connectivity of the immediate following vehicle plays a more important role in the ego vehicle's lane changing than the connectivity of the leading vehicles, and (2) when there are more connected leading vehicles, system performance can be further improved because more vehicles can coordinate to leave larger space for the ego vehicle. We also demonstrate the system's robustness under different promise violation rates of the surrounding connected vehicles.

Figure 6. Collision rate and lane changing success rate under different promise violation rates are presented when the number of connected leading vehicles is \(N=10\). Promise violation rate \(p_{o}\) is the probability that the promise is violated unexpectedly in every control period. We assume that promise violation is independent among all connected vehicles, and the vehicle can take any deceleration following the uniform distribution over \([a_{x,i,d},a_{x,N+1,d}]\) when violating the promise. 
The horizontal axes show the sudden deceleration of non-connected leading vehicle \(L_{N+1}\).
2304.14766
Hyperparameter Optimization through Neural Network Partitioning
Well-tuned hyperparameters are crucial for obtaining good generalization behavior in neural networks. They can enforce appropriate inductive biases, regularize the model and improve performance -- especially in the presence of limited data. In this work, we propose a simple and efficient way for optimizing hyperparameters inspired by the marginal likelihood, an optimization objective that requires no validation data. Our method partitions the training data and a neural network model into $K$ data shards and parameter partitions, respectively. Each partition is associated with and optimized only on specific data shards. Combining these partitions into subnetworks allows us to define the ``out-of-training-sample" loss of a subnetwork, i.e., the loss on data shards unseen by the subnetwork, as the objective for hyperparameter optimization. We demonstrate that we can apply this objective to optimize a variety of different hyperparameters in a single training run while being significantly computationally cheaper than alternative methods aiming to optimize the marginal likelihood for neural networks. Lastly, we also focus on optimizing hyperparameters in federated learning, where retraining and cross-validation are particularly challenging.
Bruno Mlodozeniec, Matthias Reisser, Christos Louizos
2023-04-28T11:24:41Z
http://arxiv.org/abs/2304.14766v1
# Hyperparameter Optimization through Neural Network Partitioning

###### Abstract

Well-tuned hyperparameters are crucial for obtaining good generalization behavior in neural networks. They can enforce appropriate inductive biases, regularize the model and improve performance -- especially in the presence of limited data. In this work, we propose a simple and efficient way for optimizing hyperparameters inspired by the marginal likelihood, an optimization objective that requires no validation data. Our method partitions the training data and a neural network model into \(K\) data shards and parameter partitions, respectively. Each partition is associated with and optimized only on specific data shards. Combining these partitions into subnetworks allows us to define the "out-of-training-sample" loss of a subnetwork, _i.e._, the loss on data shards unseen by the subnetwork, as the objective for hyperparameter optimization. We demonstrate that we can apply this objective to optimize a variety of different hyperparameters in a single training run while being significantly computationally cheaper than alternative methods aiming to optimize the marginal likelihood for neural networks. Lastly, we also focus on optimizing hyperparameters in federated learning, where retraining and cross-validation are particularly challenging.

## 1 Introduction

Due to their remarkable generalization capabilities, deep neural networks have become the de-facto models for a wide range of complex tasks. Combining large models, large-enough datasets, and sufficient computing capabilities enables researchers to train powerful models through gradient descent. Regardless of the data regime, however, the choice of hyperparameters -- such as neural architecture, data augmentation strategies, regularization, or which optimizer to choose -- plays a crucial role in the final model's generalization capabilities. 
Hyperparameters allow encoding good inductive biases that effectively constrain the models' hypothesis space (_e.g._, convolutions for vision tasks), speed up learning, or prevent overfitting in the case of limited data. Whereas gradient descent enables the tuning of model parameters, accessing hyperparameter gradients is more complicated. The traditional and general way to optimize hyperparameters operates as follows; **1)** partition the dataset into training and validation data1, **2)** pick a set of hyperparameters and optimize the model on the training data, **3)** measure the performance of the model on the validation data and finally **4)** use the validation metric as a way to score models or perform search over the space of hyperparameters. This approach inherently requires training multiple models and consequently requires spending resources on models that will be discarded. Furthermore, traditional tuning requires a validation set since optimizing the hyperparameters on the training set alone cannot identify the right inductive biases. A canonical example is data augmentations -- they are not expected to improve training set performance, but they greatly help with generalization. In the low data regime, defining a validation set that cannot be used for tuning model parameters is undesirable. Picking the right amount of validation data is a hyperparameter in itself. The conventional rule of thumb to use \(\sim 10\%\) of all data can result in significant overfitting, as pointed out by Lorraine et al. (2019), when one has a sufficiently large number of hyperparameters to tune. Furthermore, a validation set can be challenging to obtain in many use cases. An example is Federated Learning (FL) (McMahan et al., 2017), which we specifically consider in our experimental section. In FL, each extra training run (for, _e.g._, a specific hyperparameter setting) comes with additional, non-trivial costs. 
Different approaches have been proposed in order to address these challenges. Some schemes optimize hyperparameters during a single training run by making the hyperparameters part of the model (_e.g._, learning dropout rates with concrete dropout (Gal et al., 2017), learning architectures with DARTS (Liu et al., 2018) and learning data augmentations with schemes as in Benton et al. (2020); van der Wilk et al. (2018)). In cases where the model does not depend on the hyperparameters directly but only indirectly through their effect on the value of the final parameters (through optimization), schemes for differentiating through the training procedures have been proposed, such as Lorraine et al. (2019). Another way of optimizing hyperparameters without a validation set is through the canonical view on model selection (and hence hyperparameter optimization) through the Bayesian lens; the concept of optimizing the _marginal likelihood_. For deep neural networks, however, the marginal likelihood is difficult to compute. Prior works have therefore developed various approximations for its use in deep learning models and used those to optimize hyperparameters in deep learning, such as those of data augmentation (Schwobel et al., 2021; Immer et al., 2022). Still, however, these come at a significant added computational expense and do not scale to larger deep learning problems. This paper presents a novel approach to hyperparameter optimization, inspired by the marginal likelihood, that only requires a single training run and no validation set. Our method is more scalable than previous works that rely on marginal likelihood and Laplace approximations (which require computing or inverting a Hessian (Immer et al., 2021)) and is broadly applicable to any hierarchical modelling setup. 
## 2 Marginal Likelihood and prior work

In Bayesian inference, the rules of probability dictate how any unknown, such as parameters \(\mathbf{w}\) or hyperparameters \(\psi\), should be determined given observed data \(\mathcal{D}\). Let \(p(\mathbf{w})\) be a prior over \(\mathbf{w}\) and \(p(\mathcal{D}|\mathbf{w},\psi)\) be a likelihood for \(\mathcal{D}\) with \(\psi\) being the hyperparameters. We are then interested in the posterior given the data \(p(\mathbf{w}|\mathcal{D},\psi)=p(\mathcal{D}|\mathbf{w},\psi)p(\mathbf{w})/p(\mathcal{D}|\psi)\). The denominator term \(p(\mathcal{D}|\psi)\) is known as the _marginal likelihood_, as it measures the probability of observing the data given \(\psi\), irrespective of the value of \(\mathbf{w}\): \(p(\mathcal{D}|\psi)=\int p(\mathbf{w})p(\mathcal{D}|\mathbf{w},\psi)d\mathbf{w}\). Marginal likelihood has many desirable properties that make it a good criterion for model selection and hyperparameter optimization. It intuitively implements the essence of Occam's Razor principle (MacKay, 2003, § 28). In the PAC-Bayesian literature, it has been shown that higher marginal likelihood gives tighter frequentist upper bounds on the generalization performance of a given model class (McAllester, 1998; Germain et al., 2016). It also has close links to cross-validation (see section 2.1) and can be computed from the training data alone. However, computation of the marginal likelihood in deep learning models is usually prohibitively expensive and many recent works have proposed schemes to approximate the marginal likelihood for differentiable model selection (Lyle et al., 2020; Immer et al., 2021; 2022; Schwobel et al., 2021).

### 2.1 "Learning speed" perspective

Lyle et al. (2020); Fong and Holmes (2020) pointed out the correspondence between "learning speed" and marginal likelihood. 
Namely, the marginal likelihood of the data \(\mathcal{D}\) conditioned on some hyperparameters \(\psi\) can be written as: \[\log p(\mathcal{D}|\psi)=\sum_{k}\log\mathbb{E}_{p(\mathbf{w}|\mathcal{D}_{1:k-1},\psi)}\left[p(\mathcal{D}_{k}|\mathbf{w},\psi)\right]\geq\sum_{k}\mathbb{E}_{p(\mathbf{w}|\mathcal{D}_{1:k-1},\psi)}\left[\log p(\mathcal{D}_{k}|\mathbf{w},\psi)\right] \tag{1}\] where \((\mathcal{D}_{1},\ldots,\mathcal{D}_{C})\) is an arbitrary partitioning of the training dataset \(\mathcal{D}\) into \(C\) shards or chunks2, and \(p(\mathbf{w}|\mathcal{D}_{1:k},\psi)\) is the posterior over parameters of a function \(f_{\mathbf{w}}:\mathcal{X}\rightarrow\mathcal{Y}\), from the input domain \(\mathcal{X}\) to the target domain \(\mathcal{Y}\), after seeing data in shards \(1\) through \(k\). The right-hand side can be interpreted as a type of cross-validation in which we fix an ordering over the shards and measure the "validation" performance on each shard \(\mathcal{D}_{k}\) using a model trained on the preceding shards \(\mathcal{D}_{1:k-1}\). Alternatively, it can be viewed as the _learning speed_ of a (probabilistic) model: _i.e._, a measure of how quickly it learns to perform well on new shards of data after only having been fit to the previous shards (through exact Bayesian updating). This perspective neatly illustrates why models with higher marginal likelihood can exhibit good inductive biases, _e.g._, encoded through \(\psi\), \(\mathbf{w}\) and \(f_{\mathbf{w}}\). Namely, such models can be expected to learn faster and generalize better after seeing fewer samples. For example, if the hypothesis space is constrained3 to functions satisfying symmetries present in the data, we need fewer data to identify the correct function (Sokolic et al., 2017; Sannai et al., 2021). 
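The equality part of this decomposition holds exactly for conjugate models, which makes it easy to verify numerically. The following sketch (ours) checks it for a Beta-Bernoulli model, where the marginal likelihood and the chunk-wise posterior predictives are both available in closed form via Beta functions.

```python
import math

# Check that log p(D) = sum_k log E_{p(w|D_{1:k-1})}[p(D_k|w)] for a
# Beta-Bernoulli model with prior Beta(a, b).
def log_beta(a, b):
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def log_marginal(data, a=1.0, b=1.0):
    h, t = sum(data), len(data) - sum(data)
    return log_beta(a + h, b + t) - log_beta(a, b)

data = [1, 0, 1, 1, 0, 1]
chunks = [data[i:i + 2] for i in range(0, len(data), 2)]

lhs = log_marginal(data)

# Right-hand side: sum of log predictive probabilities of each chunk under
# the posterior after the preceding chunks (the "learning speed").
rhs, a, b = 0.0, 1.0, 1.0
for chunk in chunks:
    h, t = sum(chunk), len(chunk) - sum(chunk)
    rhs += log_beta(a + h, b + t) - log_beta(a, b)
    a, b = a + h, b + t

assert abs(lhs - rhs) < 1e-12  # the chunk-wise decomposition telescopes
```

The telescoping happens because each chunk's predictive probability is a ratio of consecutive normalizing constants, mirroring the chain rule of probability used in Eq. (1).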
We argue that the "learning speed" aspect of marginal likelihood -- _i.e._, measuring how well the model generalizes to new data in the training set, having been trained only on the previous data points -- is the key property making marginal likelihood a useful tool for selecting hyperparameters. Footnote 3: or if the learning algorithm is heavily biased towards returning hypotheses that satisfy a given invariance, _e.g._, through the use of a prior.

### 2.2 Training speed for hyperparameter optimization

Computing the "learning speed" requires samples from the posterior \(p(\mathbf{w}|\mathcal{D}_{1:k},\psi)\). Unfortunately, in deep learning settings, such samples are impractical to obtain; thus, prior works have focused on more scalable alternatives. Lyle et al. (2020) propose to approximate the objective in Eq. 1 by looking at the _training speed_ during standard training of a neural network by SGD. Specifically, they define the training speed as the reduction in the training loss after a single SGD parameter update, summed over all updates in the first epoch. They argue that, during the first epoch of training, after the neural network parameters, \(\mathbf{w}\), have been updated with SGD steps using data from shards \(\mathcal{D}_{1:k}\), they can be approximately used in place of the sample from the posterior \(p(\mathbf{w}|\mathcal{D}_{1:k},\psi)\) in Eq. 1. They extend the analogy to training past one epoch and use the training speed estimate for model selection (Ru et al., 2021). As pointed out by the authors, however, the analogy between learning speed and training speed somewhat breaks down after \(1\) epoch of training. The network parameters have "seen" every datapoint in the training set after \(1\) epoch, and hence the connection to measuring the model's generalization capability is weakened. 
For the sake of scalability and alignment with deep learning practice, we also focus on simple pointwise approximations \(q_{k}(\mathbf{w})=\delta(\mathbf{w}=\hat{\mathbf{w}}_{k})\) to the posteriors \(p(\mathbf{w}|\mathcal{D}_{1:k},\psi)\). However, in contrast to prior work, we explicitly parametrize the learning procedure such that, at any given training iteration, we have access to a model that is trained only on a subset of the data \(\mathcal{D}_{1:k}\). In doing so, we can approximate the objective in Eq. 1, and thus use it to optimize the hyperparameters during the entire training run.

## 3 Partitioned Neural Networks

Our goal is to optimize the objective \[\mathcal{L}_{\mathrm{ML}}\left(\mathcal{D},\psi\right)=\sum_{k=1}^{C}\mathbb{E}_{q_{k-1}(\mathbf{w})}\left[\log p(\mathcal{D}_{k}|\mathbf{w},\psi)\right] \tag{2}\] wrt. \(\psi\), which is an approximation to the lower bound presented in Eq. 1 above. In Appendix A, we show that the left-hand side is also a lower bound on the marginal likelihood under some unobtrusive conditions. As mentioned in Section 2.2, our goal is to propose an architecture and a training scheme so that we can easily obtain models trained on only subsets of the data \(\mathcal{D}_{1:k}\) for all \(k\) throughout training. We propose that each \(\{q_{k}(\mathbf{w})\}_{k=1}^{C}\) optimizes a subset of the parameters of the neural network, in a manner that allows us to extract "subnetworks" from the main network that have been trained on specific chunks of data. We describe the partitioning scheme below.

**Partitioning the parameters** Denote the concatenation of the weights of a neural network by \(\mathbf{w}\in\mathbb{R}^{N}\). We can define a partitioning \(((\mathbf{w}_{1},\dots,\mathbf{w}_{C}),P)\) of the parameters into \(C\) partitions, such that \(\mathbf{w}=P\operatorname{concat}(\mathbf{w}_{1},\dots,\mathbf{w}_{C})\) for a permutation matrix \(P\in\{0,1\}^{N\times N}\).
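The permutation-matrix bookkeeping can be made concrete in a few lines (the dimensions and the random assignment of weights to partitions below are illustrative choices of ours):

```python
import numpy as np

rng = np.random.default_rng(0)
N, C = 12, 3
w = rng.normal(size=N)                      # flattened network weights

# Randomly assign each weight index to one of C partitions.
assignment = rng.integers(0, C, size=N)
partitions = [w[assignment == k] for k in range(C)]

# Build the permutation matrix P mapping concat(w_1, ..., w_C) back to w.
order = np.concatenate([np.flatnonzero(assignment == k) for k in range(C)])
P = np.zeros((N, N))
P[order, np.arange(N)] = 1.0                # column j routes the j-th stacked entry

reconstructed = P @ np.concatenate(partitions)
print(np.allclose(reconstructed, w))        # True: w = P concat(w_1, ..., w_C)
```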
For ease of exposition, we drop the dependence on \(P\), assuming that \(\mathbf{w}\) is already arranged such that \(P\) is the identity, \(P=I_{N\times N}\). Given the partitioning \((\mathbf{w}_{1},\dots,\mathbf{w}_{C})\) of the parameters, we then specify \(C\) subnetworks with weights \(\mathbf{w}_{s}^{(1)},\dots,\mathbf{w}_{s}^{(C)}\) such that \(\mathbf{w}_{s}^{(k)}=\operatorname{concat}(\mathbf{w}_{1},\dots,\mathbf{w}_{k},\hat{\mathbf{w}}_{k+1},\dots,\hat{\mathbf{w}}_{C})\), where \(\hat{\mathbf{w}}_{i}\) are some default values not optimized during training4. More specifically, the \(k\)-th subnetwork, \(\mathbf{w}_{s}^{(k)}\), retains the first \(k\) partitions from the weight partitioning and sets the remaining parameters to \(\hat{\mathbf{w}}_{k+1:C}\). Note that, if each \(\mathbf{w}_{k}\) is only updated on chunks \(\mathcal{D}_{1:k}\), the subnetwork \(\mathbf{w}_{s}^{(k)}\) is only comprised of weights that have been updated on \(\mathcal{D}_{1:k}\). Thus, we can view the parameters of \(\mathbf{w}_{s}^{(k)}\) as an approximation to \(q_{k}(\mathbf{w})\). Given that a subset of the parameters in each \(\mathbf{w}_{s}^{(k)}\) is fixed, this would likely be a poor approximation to the true posterior over the weights given \(\mathcal{D}_{1:k}\); intuitively, however, it can still be a reasonable approximation in function space5.

Footnote 4: _e.g._, \(\hat{\mathbf{w}}_{i}\) could be the value of the weights at initialization, or \(\hat{\mathbf{w}}_{i}=\mathbf{0}\), corresponding to pruning those parameters and obtaining a proper subnetwork.

Footnote 5: Since a) the mapping from parameters to functions is not bijective and b) neural networks are highly overparameterised and can be heavily pruned while retaining performance (Frankle and Carbin, 2018), obtaining a good fit to a subset of the training data with a subset of the model parameters should be possible.
Furthermore, "scaling laws" indicate that the benefit of having more parameters becomes apparent mostly for larger dataset sizes (Kaplan et al., 2020); thus it is reasonable for subnetworks fit to more data to have more learnable parameters.

**Partitioned training** Having partitioned the dataset \(\mathcal{D}\) into \(C\) chunks \((\mathcal{D}_{1},\ldots,\mathcal{D}_{C})\), we update each partition \(\mathbf{w}_{k}\) by optimising the negative log-likelihood6 on chunks \(\mathcal{D}_{1:k}\) using subnetwork \(\mathbf{w}_{s}^{(k)}\), computing the following gradients: \[\nabla_{\mathbf{w}_{k}}\mathcal{L}\left(\mathcal{D}_{1:k},\mathbf{w}_{s}^{(k)}\right)=\sum_{(\mathbf{x},y)\in\mathcal{D}_{1:k}}\nabla_{\mathbf{w}_{k}}\log p\left(y\mid\mathbf{x};\mathbf{w}_{s}^{(k)},\psi\right). \tag{3}\]

Footnote 6: Optionally with an added negative log-prior regularization term \(\log p(\mathbf{w}_{s}^{(k)})\).

We interleave stochastic gradient updates of each partition of the weights with updating the hyperparameters \(\psi\) using \(\mathcal{L}_{\mathrm{ML}}\) in Eq. 2: \[\nabla_{\psi}\mathcal{L}_{\mathrm{ML}}\left(\mathcal{D},\psi\right)\approx\sum_{k=2}^{C}\sum_{(\mathbf{x},y)\in\mathcal{D}_{k}}\nabla_{\psi}\log p\left(y\mid\mathbf{x},\mathbf{w}_{s}^{(k-1)},\psi\right). \tag{4}\] This can be seen as the sum of the _out-of-sample_ losses for each subnetwork \(\mathbf{w}_{s}^{(k)}\). The scheme is illustrated in Figure 1. For details of how the updates are scheduled in our experiments, see Appendix I. Note that, while we could incorporate the gradient of the first term from Eq. 2 corresponding to \(\mathbb{E}_{q_{0}(\mathbf{w})}[\log p(\mathcal{D}_{1}|\mathbf{w},\psi)]\) in Eq. 4, we chose to leave it out. Hence, the gradient of Eq. 4 is of an estimate that can be viewed as an approximation to the _conditional_ marginal likelihood \(\log p\left(\mathcal{D}_{2:C}|\mathcal{D}_{1},\psi\right)\).
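The interleaved updates of Eqs. 3 and 4 can be sketched end-to-end on a linear least-squares model with a single input-scaling hyperparameter \(\psi\) (the model, data, partition sizes, and learning rates below are our own toy choices, not the paper's experimental setup). Each partition \(\mathbf{w}_{k}\) takes gradient steps only on chunks \(\mathcal{D}_{1:k}\) through its subnetwork, while \(\psi\) is updated from the out-of-sample terms of Eq. 4:

```python
import numpy as np

rng = np.random.default_rng(0)
d, C, N = 8, 3, 180
X = rng.normal(size=(N, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.05 * rng.normal(size=N)
chunks = np.array_split(np.arange(N), C)        # D_1, ..., D_C

part = rng.integers(0, C, size=d)               # random weight partitioning
w = 0.1 * rng.normal(size=d)
w_default = w.copy()                            # \hat{w}: values at initialization
psi = 0.5                                       # toy input-scaling hyperparameter

def subnet(k):
    """w_s^{(k)}: partitions 1..k taken from w, the rest fixed at defaults."""
    return np.where(part < k, w, w_default)

def residuals(idx, k):
    return psi * X[idx] @ subnet(k) - y[idx]

def mse(idx, k):
    return 0.5 * np.mean(residuals(idx, k) ** 2)

lr_w, lr_psi = 0.08, 0.005
loss0 = mse(np.arange(N), C)
for epoch in range(500):
    # Eq. 3: update partition k on chunks D_{1:k} through subnetwork w_s^{(k)}.
    for k in range(1, C + 1):
        idx = np.concatenate(chunks[:k])
        g = psi * X[idx].T @ residuals(idx, k) / len(idx)
        w[part == k - 1] -= lr_w * g[part == k - 1]
    # Eq. 4: hyper-gradient from held-out chunks D_k under subnetwork w_s^{(k-1)}.
    g_psi = sum(residuals(chunks[k - 1], k - 1)
                @ (X[chunks[k - 1]] @ subnet(k - 1)) / len(chunks[k - 1])
                for k in range(2, C + 1))
    psi -= lr_psi * g_psi

print(loss0, mse(np.arange(N), C))              # the training loss drops markedly
```

Note that, as in the scheme above, partition \(k\) never sees the values of later partitions (its subnetwork fixes them at defaults), so the hyper-gradient is computed on chunks that the evaluated subnetwork was never fit to.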
Conditional marginal likelihood has been shown to have many desirable properties for model selection and, in many cases, can be a better proxy for generalization (Lotfi et al., 2022). This procedure, inspired by the marginal likelihood, has several desirable properties compared to prior work. **1)** Our objective is computationally efficient, with a computational cost roughly corresponding to evaluating subnetworks on the training set. There is no need to compute nor invert a Hessian with respect to the weights, as in the Laplace approximation (Immer et al., 2021, 2022). **2)** Our objective is readily amenable to optimization by stochastic gradient descent; we do not have to iterate over the entire training set to compute a single gradient update for the hyperparameters. **3)** Compared to the training speed objective (Lyle et al., 2020), in our method, the training of the weights in each subnetwork progresses independently of the data in future chunks. Hence, it can be seen as more truthfully measuring the generalization capability of a model using a given set of hyperparameters.

Figure 1: Best viewed in colour. Illustration of the partitioning scheme for a single hidden layer perceptron with \(C=3\) chunks.

**Partitioning Schemes** There are several ways in which the neural network weights can be partitioned. In our experiments in Section 5, we partition the weights before beginning training by assigning a fixed proportion of weights in each layer to a given partition at random. For each subnetwork, for the weight partitions corresponding to future chunks, we use the values of the weights at initialisation. For a discussion of partitioning schemes, see Appendix C.

## 4 Related works

**Hyperparameter optimization in deep learning** Many works have tackled the challenge of optimizing hyperparameters in deep learning. Works on implicit differentiation, such as the one by Lorraine et al.
(2019), allow for optimizing training hyperparameters such as the learning rate, weight-decay, or other hyperparameters that affect the final neural network weights only through the training routine. Other works have proposed ways to parameterize and optimize data-augmentations (Cubuk et al., 2018; Li et al., 2020), search-spaces for neural network architectures, as well as methods to optimize architectures using gradient-based optimization (Liu et al., 2018; Elsken et al., 2019). All of the above works have primarily relied on optimizing hyperparameters on a separate validation set and are compatible with the objective defined in this work. Several works have also aimed to cast learning data augmentations as an invariance learning problem. They do so by parameterizing the model itself with data augmentations, and frame invariance learning as a model selection problem (van der Wilk et al., 2018; Benton et al., 2020; Schwobel et al., 2021; Nabarro et al., 2022; Immer et al., 2022). We compare against Benton et al. (2020) ("Augerino") and Immer et al. (2022) ("Differentiable Laplace") on this task in the experimental section.

**Hyperparameter optimization without a validation set** A limited number of works consider learning hyperparameters without a validation set in a deep learning context. Benton et al. (2020) propose a simple method for learning invariances without a validation set by regularising invariance hyperparameters towards those resulting in higher invariance. They show that the invariances found tend to be insensitive to the regularisation strength, determined by another hyperparameter. However, the method relies on being able to _a priori_ define which hyperparameters lead to higher invariance through a suitable regularisation function. In more complex invariance learning settings, defining the regulariser can be challenging. For example, if data-augmentation transformations were to be parameterized by a neural network (as proposed in Lorraine et al.
(2019)), it is non-trivial to devise an adequate regulariser. We show that our method can be applied to such settings. Other works focus on deriving tractable approximations to the marginal likelihood for deep neural networks. Schwobel et al. (2021) propose marginalising out only the parameters in the last layer of the neural network by switching it out for a Gaussian Process. They treat the preceding layers effectively as a hyperparameter, and optimize invariance parameters using the marginal likelihood. Although they show promising results on MNIST, they found they "were unable to learn invariances for CIFAR-10" (Schwobel et al., 2021, §7) and highlighted the need to marginalise lower layers as well. In contrast, our objective can be seen as being inspired by marginal likelihood where arbitrary network layers can be "marginalised", and works on datasets like CIFAR-10. Immer et al. (2022) have adapted the Laplace approximation (Immer et al., 2021) to make it tractable for learning data augmentations. In contrast to Schwobel et al. (2021), they approximately marginalize out all the network parameters, and perform favourably. Their approximation, however, requires approximations to a Hessian w.r.t. all network parameters; for that reason, their work reports results for architectures only up to a ResNet-14, whereas our method can easily scale to larger architectures.

**Hyperparameter optimization in FL** Improving hyperparameter optimization is especially relevant to FL. Given the potential system level constraints (Wang et al., 2021), methods that optimize the hyperparameters and parameters in a single training run are preferred. On this note, Khodak et al. (2021) introduced FedEx and showed that it can successfully optimize the client optimizer hyperparameters.
FedEx relies on a training/validation split on the client level and uses a REINFORCE-type gradient estimator (Williams, 1992), which usually exhibits high variance and needs baselines to reduce it (Mohamed et al., 2020). This is in contrast to partitioned networks, which use standard, low-variance backpropagation for the hyperparameters and no separate validation set per client. To optimize the other hyperparameters, Khodak et al. (2021) wrapped FedEx with a traditional hyperparameter optimization strategy, the successive halving algorithm. This is orthogonal to our method and could be applied to partitioned networks as well. In Zhou et al. (2021), the authors perform a hyperparameter search independently on each client with some off-the-shelf methods and then aggregate the results of the search at the server once in order to identify the best hyperparameter setting. The main drawback of this method compared to partitioned networks is that when the local client datasets are small, a client-specific validation set is not informative, and the aggregation happens only once. Finally, there is also the recent work from Seng et al. (2022) which performs hyperparameter optimization and neural architecture search in the federated setting. Similarly to prior works, it requires client-specific validation data in order to optimize the hyperparameters.

## 5 Experiments

**Input Selection** To demonstrate that \(\mathcal{L}_{\mathrm{ML}}\) is a good objective for model selection that captures the desirable properties of the marginal likelihood, we first deploy our method on the toy model selection task of Lyle et al. (2020): there the first \(15\) features are informative, and the remaining \(15\) are spurious \[y\sim\mathrm{Bern}\left(\tfrac{1}{2}\right)\qquad\mathbf{x}=\big[\underbrace{y+\epsilon_{1},\ldots,y+\epsilon_{15}}_{\text{Informative}},\underbrace{\epsilon_{16},\ldots,\epsilon_{30}}_{\text{Spurious}}\big]^{\intercal}\qquad\epsilon_{1},\ldots,\epsilon_{30}\stackrel{\text{iid}}{\sim}\mathcal{N}(0,1).\] We specify a fixed mask over the inputs prior to training, where the first \(K\) inputs remain unmasked, and the remainder is masked. We expect that, given multiple models with different (fixed) masks over the inputs, the proposed objective will be able to identify the correct one -- _i.e._, the one that keeps only the informative features. We train multiple fully connected neural networks (MLPs) on a training set of \(1000\) examples using our method and compare the final values of the \(\mathcal{L}_{\mathrm{ML}}\) objective. The results are shown in Figure 2(a). \(\mathcal{L}_{\mathrm{ML}}\) correctly identifies \(15\) input features as the optimum, and correlates well with test accuracy and log-likelihood. Training loss and training accuracy, on the other hand, cannot alone disambiguate whether to use \(15\) or more input features.

**Differentiable input selection** We further show that we can learn the correct mask over the inputs in a differentiable manner using our method during a single training run. We parameterize a learnable mask over the inputs with a concrete Bernoulli distribution (Maddison et al., 2016) and treat the parameters of the mask distribution as a hyperparameter. We optimize them with respect to the proposed objective using our method. The evolution of the learned mask during training is shown in Figure 2(b), where we see that we can correctly identify the first 15 informative features.
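The toy setup above is easy to reproduce. The sketch below generates the data and scores different mask sizes \(K\) with a chunked held-out log-likelihood in the spirit of Eq. 2, substituting a closed-form Gaussian naive Bayes classifier for the MLPs (our simplification). Masks that drop informative features are clearly punished; separating \(K=15\) from \(K>15\) is subtler and is what the full training setup is needed for.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_inf, d_total = 1000, 15, 30
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, d_total))
X[:, :d_inf] += y[:, None]            # first 15 features carry the label

def heldout_score(K, n_chunks=4):
    """Sum of held-out log p(y|x) over chunks 2..C, fitting a unit-variance
    Gaussian naive Bayes on the preceding chunks (Eq. 2 with point estimates)."""
    idx = np.array_split(np.arange(n), n_chunks)
    score = 0.0
    for k in range(1, n_chunks):
        tr, te = np.concatenate(idx[:k]), idx[k]
        mu = np.stack([X[tr][y[tr] == c, :K].mean(axis=0) for c in (0, 1)])
        # log-joint up to a constant shared by both classes
        lj = np.stack([-0.5 * ((X[te, :K] - mu[c]) ** 2).sum(axis=1)
                       for c in (0, 1)], axis=1)
        lj += np.log([np.mean(y[tr] == 0), np.mean(y[tr] == 1)])
        log_post = lj[np.arange(len(te)), y[te]] - np.logaddexp(lj[:, 0], lj[:, 1])
        score += log_post.sum()
    return score

scores = {K: heldout_score(K) for K in (5, 10, 15, 20, 30)}
print(scores)   # masks dropping informative features score far worse
```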
Figure 2: (a) Demonstrating the ability of the marginal-likelihood inspired objective \(\mathcal{L}_{\mathrm{ML}}\) to identify the correct model on a toy input selection task. We plot the hyperparameter objective, train —

**Learning invariances through data-augmentations** Following previous literature on learning soft invariances through learning data augmentations (Nabarro et al., 2022; van der Wilk et al., 2018; Benton et al., 2020; Schwobel et al., 2021; Immer et al., 2022), we show that we can learn useful affine image augmentations, resulting in gains in test accuracy. We specify affine data augmentations as part of a probabilistic model as done by van der Wilk et al. (2018), averaging over multiple data augmentation samples during training and inference. This allows us to treat the data-augmentation distribution as a model hyperparameter rather than a training hyperparameter. For datasets, we consider MNIST, CIFAR10, TinyImagenet along with rotCIFAR10 and rotTinyImagenet, variants where the datapoints are randomly rotated at the beginning of training by angles sampled uniformly from \([-\pi,\pi]\) (Immer et al., 2022). Experimental setup details are provided in Appendix I. For the CIFAR10 and rotCIFAR10 datasets, we consider as baselines standard training with no augmentations, Augerino (Benton et al., 2020) and Differentiable Laplace (Immer et al., 2022). Following Immer et al. (2022), we use fixup ResNets (Zhang et al., 2019) for the architectures. The results can be seen in Table 1. There, we observe that partitioned networks outperform all baselines in the case of CIFAR10 for both ResNet variants we consider. On RotCIFAR10, we observe that partitioned networks outperform the baseline and Augerino, but they are slightly outperformed by Differentiable Laplace, which optimizes additional prior hyperparameters.
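Averaging predictions over sampled augmentations, with the augmentation distribution's parameter exposed, can be sketched as follows (the rotation-only augmentation, toy linear "model", and sample count are our own illustrative choices; Augerino-style methods additionally backpropagate through the `width * u` reparameterization, which we do not show):

```python
import numpy as np

def rotate(points, angle):
    # Rotate 2-D points by `angle` radians.
    c, s = np.cos(angle), np.sin(angle)
    return points @ np.array([[c, -s], [s, c]]).T

def augmented_predict(model, points, width, n_samples=32, rng=None):
    """Average model outputs over rotations drawn from Uniform(-width, width).

    `width` plays the role of the learnable augmentation hyperparameter:
    angles are sampled as width * u with u ~ Uniform(-1, 1), so gradients
    w.r.t. `width` could flow through the samples."""
    rng = rng or np.random.default_rng(0)
    u = rng.uniform(-1.0, 1.0, size=n_samples)
    outs = [model(rotate(points, width * ui)) for ui in u]
    return np.mean(outs, axis=0)

# Toy "model": a fixed linear score per point.
weights = np.array([0.7, -1.2])
model = lambda pts: pts @ weights

pts = np.array([[1.0, 0.0], [0.0, 2.0]])
none = augmented_predict(model, pts, width=0.0)     # no augmentation
full = augmented_predict(model, pts, width=np.pi)   # averaging shrinks the score
print(none, full)                                   # toward its rotation-invariant part
```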
To demonstrate the scalability of partitioned networks, for the (rot)TinyImagenet experiments we consider a ResNet-50 architecture with GroupNorm(2). In Table 1 we observe that in both cases, partitioned networks learn invariances successfully and improve upon the baseline. Relative to Augerino, we observe that partitioned networks either improve (TinyImagenet) or are similar (rotTinyImagenet). Imbuing a model with useful invariances is particularly beneficial in the low-data regime, due to better data efficiency. To show that, we perform experiments where we artificially reduce the size of the training dataset. The results can be seen in Figure 3. We see that by learning augmentations with partitioned networks, we can drastically improve performance in the low-data regime upon a baseline that does not learn augmentations, while performing favourably against prior works in most cases. On MNIST, our method outperforms the last-layer marginal-likelihood method (last-layer ML) by Schwobel et al. (2021) in the large data regime but underperforms in the low-data regime. That is likely to be expected, as their work fits a Gaussian Process (GP) at the last layer (Wilson et al., 2016), which is better tailored for the low-data regime and results in a more flexible model (due to the GP corresponding to an additional, infinite width, layer). Since the MNIST-CNN is sufficiently small to fit multiple networks into memory, we also compare to a variant of our method where, instead of partitioning a single network, we train \(C\) different networks where network \(k\) is fit on data \(\mathcal{D}_{1:k}\). This serves as an upper bound on the performance of the partitioned networks. We see that by partitioning a single network, we can achieve almost equivalent accuracy. On CIFAR10, partitioned networks outperform all other works on all data sizes we considered.
On RotCIFAR10, partitioned networks perform again favourably, but they are marginally outperformed by differentiable Laplace in the low-data regime. Compared to partitioned networks where we only optimize augmentations, differentiable Laplace also optimizes the precision of a Gaussian prior over the weights, which better combats overfitting in the low-data regime. On both the TinyImagenet and rotTinyImagenet experiments we observe that partitioned networks either outperform or are similar to the baselines on all data sizes considered.

Table 1: Test accuracy with learning affine augmentations on (rot)CIFAR10 and (rot)TinyImagenet.

| Dataset | Architecture | Baseline | Augerino | Diff. Laplace | Partitioned |
| --- | --- | --- | --- | --- | --- |
| RotCIFAR10 | fixup ResNet-8 | \(54.2_{\pm 0.4}\) | \(75.4_{\pm 0.2}\) | \(\mathbf{79.5_{\pm 0.6}}\) | \(\mathbf{79.1_{\pm 0.0}}\) |
| CIFAR10 | fixup ResNet-8 | \(74.1_{\pm 0.5}\) | \(79.0_{\pm 1.0}\) | \(84.2_{\pm 0.8}\) | \(\mathbf{86.1_{\pm 0.4}}\) |
| CIFAR10 | fixup ResNet-14 | \(79.5_{\pm 0.3}\) | \(83.0_{\pm 0.1}\) | \(88.1_{\pm 0.2}\) | \(\mathbf{89.1_{\pm 0.8}}\) |
| RotTinyImagenet | ResNet-50 | \(31.5_{\pm 0.6}\) | \(\mathbf{44.5_{\pm 0.2}}\) | OOM | \(43.9_{\pm 0.3}\) |
| TinyImagenet | ResNet-50 | \(44.2_{\pm 0.5}\) | \(41.1_{\pm 0.2}\) | OOM | \(\mathbf{48.6_{\pm 0.0}}\) |

**Comparisons to traditional training / validation split** We further perform comparisons between partitioned networks and the more traditional training/validation split (denoted as validation set optimization) with additional finetuning on the task of learning data augmentations. This is realized as follows: we partition \(20k\) CIFAR10 examples into training and validation data of specific proportions.
We then either train a partitioned network (along with the hyperparameters on \(\mathcal{L}_{\mathrm{ML}}\)) on these two chunks of data or train a standard network on the training set while using the validation set loss to obtain gradients for the data augmentation hyperparameters. For the validation set optimization baseline, once the hyperparameters are optimized, the resulting network is finetuned on the whole dataset for \(20\) epochs. The results for varying chunk proportions are provided in Table 2. We can see that partitioned networks (that do not employ additional finetuning) outperform validation set optimization with finetuning in all settings we tried. The gap does get smaller when we move to the more traditional \(90\)/\(10\) splits for training/validation: a \(10\%\) proportion for validation data is enough to optimize a handful of hyperparameters (just \(6\) scalars). To corroborate this claim, we set up an additional experiment; we use a Wide ResNet-20 on the full CIFAR10 dataset, where the first two out of the three stages (13 convolution layers) are considered as hyperparameters. The results for this setting can be seen in Table 3. We see that \(10\%\) validation data are not enough, and the validation set optimization baseline performs poorly. This is in contrast to partitioned networks, where with three chunks, we can learn all of these hyperparameters successfully. Note that, compared to Augerino, applying partitioned networks to this setting is straightforward. To apply Augerino, one would have to come up with a metric that can be used to regularize the feature extractor towards "higher invariance".

**Partitioned networks for federated learning** We consider federated learning (FL) (McMahan et al., 2017), a setting where data is distributed across many clients. In this setting, there are system properties that make hyperparameter optimization especially challenging (Wang et al., 2021).
More specifically, obtaining a validation set and performing multiple training runs with different hyperparameter settings might not be possible due to the additional communication and computation costs, and transient client availability (clients join and leave the training process at any time). Optimizing hyperparameters together with the model parameters in a single run is therefore especially beneficial (Wang et al., 2021), and partitioned networks are a good fit for FL.

Table 2: Learning affine augmentations with fixup ResNet-14 on a subset of CIFAR-10 (\(20k\) examples); columns give the training/validation chunk proportions. NaN denotes that a run crashed.

| Method | \([0.3,0.7]\) | \([0.5,0.5]\) | \([0.7,0.3]\) | \([0.8,0.2]\) | \([0.9,0.1]\) |
| --- | --- | --- | --- | --- | --- |
| Partitioned | \(\mathbf{82.9\%_{\pm 0.3}}\) | \(\mathbf{83.0\%_{\pm 0.01}}\) | \(\mathbf{83.7\%_{\pm 0.2}}\) | \(\mathbf{84.0\%_{\pm 0.6}}\) | \(\mathbf{84.6\%_{\pm 0.05}}\) |
| Validation set optim. | NaN | \(78.9\%_{\pm 0.04}\) | \(81.5\%_{\pm 0.2}\) | \(82.6\%_{\pm 0.1}\) | \(83.4\%_{\pm 0.1}\) |
| + Finetune | NaN | \(81.3\%_{\pm 0.09}\) | \(82.5\%_{\pm 0.2}\) | \(83.5\%_{\pm 0.1}\) | \(83.8\%_{\pm 0.3}\) |

Figure 3: Learning affine data augmentations on subsets of data. (b) uses a fixup ResNet-8 architecture whereas (c) a ResNet-50 architecture. (b,c) Top: normal dataset, bottom: rotated dataset.

Table 3: Learning a feature extractor (first \(2\) out of \(3\) stages of a Wide ResNet-20) as a hyperparameter on CIFAR10.

| Method | Chunk Proportions | Test accuracy |
| --- | --- | --- |
| Validation set optim. | \([0.9,0.1]\) | \(59.6\%_{\pm 0.6}\) |
| Partitioned | \([0.1,0.8,0.1]\) | \(\mathbf{87.3\%_{\pm 0.8}}\) |
We extend our centralized experimental setup to FL by splitting all \(N\) clients into \(C\) non-overlapping chunks, such that each chunk is understood as the union of all clients' data shards that belong to that chunk. During federated training, a client belonging to chunk \(k\) sequentially optimizes partitions \(\mathbf{w}_{k:C}\) through sub-networks \(\mathbf{w}_{s}^{(k:C)}\) and computes a gradient wrt. the hyperparameters \(\psi\). Note that partitions \(\mathbf{w}_{1:k-1}\) remain unchanged and do not need to be communicated back to the server. This reduction in upload costs is a welcome property for FL, where upload costs can bottleneck system design. The server receives the (hyper-) parameter updates, averages them, and applies the result as a "gradient" to the server-side model in the traditional federated manner (Reddi et al., 2020). For partitioned networks, the hyperparameters that we optimize are the data augmentation parameters and, since we also include dropout in these architectures, the dropout rates (with the concrete relaxation from Maddison et al. (2016)). As a baseline, we consider the standard federated training without learning hyperparameters (denoted as FedAvg) as well as learning the augmentation parameters with Augerino (Benton et al., 2020). Please see Appendix J for a detailed explanation of our FL setup. Table 4 summarizes our results using different sub-sets and variations of MNIST and CIFAR10, where we also included rotMNIST (Larochelle et al., 2007) as another dataset. We can see that partitioned networks allow training models that generalize better than both FedAvg and FedAvg with Augerino, at reduced communication costs. Especially when the true data-generating process and underlying source of non-i.i.d.-ness are explicitly accounted for -- here in the form of rotation -- the benefits of learning the augmentations with partitioned networks become apparent.
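The upload saving can be estimated directly: a client in chunk \(k\) uploads only partitions \(k..C\) (plus the hyperparameter gradients, ignored here). The sketch below makes this concrete under simplifying assumptions of our own — the reported 77%/91% figures come from the paper's actual partition sizes and client populations:

```python
from fractions import Fraction

def avg_upload_fraction(partition_sizes, clients_per_chunk):
    """Average fraction of model weights uploaded per client.

    A client in chunk k uploads partitions k..C only, since partitions
    1..k-1 are not updated by that client."""
    total = sum(partition_sizes)
    n_clients = sum(clients_per_chunk)
    C = len(partition_sizes)
    frac = Fraction(0)
    for k in range(C):
        uploaded = sum(partition_sizes[k:])
        frac += Fraction(clients_per_chunk[k], n_clients) * Fraction(uploaded, total)
    return frac

# Equal partitions and equal chunk populations with C = 3:
print(avg_upload_fraction([1, 1, 1], [1, 1, 1]))   # 2/3
# A large final partition raises the average upload:
print(avg_upload_fraction([1, 1, 8], [1, 1, 1]))   # 9/10
```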
For example, we observe that on the rotated datasets, partitioned networks learn to correctly increase the rotation angle.

## 6 Discussion

We propose partitioned networks as a new method for hyperparameter optimization inspired by the marginal likelihood objective. It provides a general and scalable solution for finding hyperparameters in a single training run without requiring access to a validation set, while introducing less additional overhead to the training task than existing approaches. We showed that partitioned networks are applicable to a wide range of tasks; they can identify the correct model on illustrative toy examples, they can learn data augmentations in a way that improves data efficiency, they can optimize general feature extractors as hyperparameters, and they can also optimize dropout rates. In the federated setting, partitioned networks allow us to overcome practical challenges, reduce the communication overhead and obtain better models. The notion of partitioned networks we propose in this work is novel to the literature and an orthogonal approach to many existing hyperparameter tuning algorithms. Like any other method, partitioned networks come with their own limitations, _e.g._, needing a partitioning strategy. We expand upon them in Appendix H. We hope to see our method successfully reducing the need to perform hyperparameter search through repeated training and thereby contribute to the community's effort to reduce its carbon footprint.
Table 4: Validation accuracy (\(\uparrow\)) averaged over the last \(10\) evaluations, each \(10\) rounds apart; standard error is computed across \(4\) random seeds. All datasets are adapted to the federated setting and are synthetically split to be non-i.i.d. as described in Appendix J.2.

| Method | MNIST \(1.25k\) | MNIST \(5k\) | MNIST \(50k\) | RotMNIST \(1.25k\) | RotMNIST \(5k\) | RotMNIST \(50k\) | Upload [%] (\(\downarrow\)) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| FedAvg | \(95.4\%_{\pm 0.1}\) | \(97.4\%_{\pm 0.1}\) | \(99.0\%_{\pm 0.1}\) | \(80.5\%_{\pm 0.0}\) | \(90.4\%_{\pm 0.5}\) | \(96.8\%_{\pm 0.1}\) | \(100\) |
| FedAvg + Augerino | \(94.2\%_{\pm 0.5}\) | \(96.4\%_{\pm 0.1}\) | \(99.1\%_{\pm 0.0}\) | \(79.5\%_{\pm 0.3}\) | \(89.0\%_{\pm 2.0}\) | \(95.3\%_{\pm 0.2}\) | \(100\) |
| FedAvg + Partitioned | \(\mathbf{97.0\%_{\pm 0.1}}\) | \(\mathbf{98.3\%_{\pm 0.0}}\) | \(99.2\%_{\pm 0.1}\) | \(\mathbf{85.7\%_{\pm 0.9}}\) | \(\mathbf{93.5\%_{\pm 0.6}}\) | \(\mathbf{97.8\%_{\pm 0.1}}\) | \(77\) |

| Method | CIFAR10 \(1.25k\) | CIFAR10 \(5k\) | CIFAR10 \(45k\) | RotCIFAR10 \(1.25k\) | RotCIFAR10 \(5k\) | RotCIFAR10 \(45k\) | Upload [%] (\(\downarrow\)) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| FedAvg | \(50.2\%_{\pm 0.4}\) | \(64.5\%_{\pm 0.3}\) | \(79.2\%_{\pm 0.7}\) | \(35.6\%_{\pm 0.3}\) | \(45.2\%_{\pm 0.1}\) | \(53.9\%_{\pm 1.1}\) | \(100\) |
| FedAvg + Augerino | \(49.9\%_{\pm 0.8}\) | \(65.0\%_{\pm 0.2}\) | \(79.9\%_{\pm 0.4}\) | \(36.1\%_{\pm 0.2}\) | \(45.0\%_{\pm 0.2}\) | \(56.4\%_{\pm 0.7}\) | \(100\) |
| FedAvg + Partitioned | \(50.8\%_{\pm 1.0}\) | \(64.8\%_{\pm 0.4}\) | \(\mathbf{81.5\%_{\pm 0.5}}\) | \(\mathbf{37.1\%_{\pm 0.2}}\) | \(45.3\%_{\pm 0.3}\) | \(\mathbf{60.6\%_{\pm 0.2}}\) | \(91\) |
2303.00944
Attention-based Graph Convolution Fusing Latent Structures and Multiple Features for Graph Neural Networks
We present an attention-based spatial graph convolution (AGC) for graph neural networks (GNNs). Existing AGCs focus on only using node-wise features and utilizing one type of attention function when calculating attention weights. Instead, we propose two methods to improve the representational power of AGCs by utilizing 1) structural information in a high-dimensional space and 2) multiple attention functions when calculating their weights. The first method computes a local structure representation of a graph in a high-dimensional space. The second method utilizes multiple attention functions simultaneously in one AGC. Both approaches can be combined. We also propose a GNN for the classification of point clouds and one for the prediction of point labels in a point cloud, based on the proposed AGC. According to experiments, the proposed GNNs perform better than existing methods. Our codes are open at https://github.com/liyang-tuat/SFAGC.
Yang Li, Yuichi Tanaka
2023-03-02T03:40:05Z
http://arxiv.org/abs/2303.00944v2
Attention-based Graph Convolution Fusing Latent Structures and Multiple Features for Graph Neural Networks

###### Abstract

We present an attention-based spatial graph convolution (AGC) for graph neural networks (GNNs). Existing AGCs focus on only using node-wise features and utilizing one type of attention function when calculating attention weights. Instead, we propose two methods to improve the representational power of AGCs by utilizing 1) structural information in a high-dimensional space and 2) multiple attention functions when calculating their weights. The first method computes a local structure representation of a graph in a high-dimensional space. The second method utilizes multiple attention functions simultaneously in one AGC. Both approaches can be combined. We also propose a GNN for the classification of point clouds and one for the prediction of point labels in a point cloud, based on the proposed AGC. According to experiments, the proposed GNNs perform better than existing methods. Our codes are open at [https://github.com/liyang-tuat/SFAGC](https://github.com/liyang-tuat/SFAGC).

Attention-based graph convolution, graph neural network, 3D point cloud, deep learning.

## 1 Introduction

We often encounter irregularly structured data (signals) in the real world where they do not have a fixed spatial sampling frequency. Such data include opinions on social networks, the number of passengers on traffic networks, coordinates of 3D point clouds, and so on. Deep neural networks have been widely used in recent years to detect, segment, and recognize regular structured data [1, 2, 3]. However, classical deep learning methods cannot directly process the irregularly structured data mentioned above. Such data can be mathematically represented as data associated with a _graph_. An example of graph-structured data, that is, _graph signals_, is shown in Fig. 1.
Deep neural networks for graph signals are called graph neural networks (GNNs), and they have received a lot of attention [4, 5, 6, 7]. GNNs typically contain multiple graph convolution (GC) layers. The primary mechanism of GCs is to iteratively aggregate (i.e., filter) features from neighbors and then integrate the aggregated information with that of the target node [4, 5, 8, 9]. Many existing GC methods utilize only the node-wise features [10, 11, 12, 13]. Furthermore, it has been observed that GCs are a special form of Laplacian smoothing [14]. This low-pass filtering effect often results in over-smoothing [4, 5, 14], which means that the node-wise feature values become indistinguishable across nodes. Intuitively, the representational power of a GC refers to its ability to distinguish different nodes [15]. Therefore, over-smoothing may negatively affect the performance of GNNs. To improve the representational power of GCs, attention-based spatial graph convolutions (AGCs) such as graph attention networks (GATs) [16] have been proposed. AGCs are believed to have higher representational power than the direct spatial methods because they can use features on neighboring nodes through the attention weights. However, there are two major limitations in existing AGCs: 1) They may lose the _structural information_ of the surrounding neighboring nodes, especially in a high-dimensional space. 2) When calculating the attention weight for each neighboring node, only one type of attention function is used, e.g., dot-product, subtraction, or concatenation. Different types of attention functions lead to different attention weights, which affect the representational power of AGCs. In this paper, we propose a new AGC to overcome the above-mentioned limitations. First, we propose a _local structure projection aggregation_. This operation aggregates the structural information of the neighboring nodes of a target node.
Second, we also propose an AGC that utilizes multiple types of attention functions. We can simultaneously utilize these two methods to present the attention-based graph convolution fusing latent structures and multiple features (SFAGC). Our contributions are summarized as follows:

1. By using local structure projection aggregation, we can obtain a representation of the local structural information of the graph in a high-dimensional space in an AGC. This allows the convolved nodes to contain richer information than existing methods.
2. By using multiple types of attention functions simultaneously, we can obtain better attention weights than with a single attention function.
3. We construct GNNs for graph and node classification based on the proposed AGC with 3D point clouds. We demonstrate through experiments on ModelNet [17] for graph classification and ShapeNet [18] for node classification that GNNs using our method achieve higher classification accuracies than existing methods.

_Notation:_ An undirected and unweighted graph is defined as \(\mathcal{G}:=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}\) is a set of nodes, and \(\mathcal{E}\) is a set of edges. The adjacency matrix of \(\mathcal{G}\) is denoted as \(A\). \(\widetilde{D}\) is the diagonal degree matrix. The graph Laplacian is defined as \(L:=\widetilde{D}-A\). \(I_{n}\) is the \(n\times n\) identity matrix. Here, \(h_{v}:=[\mathrm{h}_{v1},\ldots,\mathrm{h}_{vi},\ldots,\mathrm{h}_{vD}]^{\mathsf{T}}\in\mathbb{R}^{D}\) represents a feature vector on the node \(v\in\mathcal{V}\), and \(D\) is the number of features in \(h_{v}\). \(co_{v}:=[\mathrm{co}_{v1},\ldots,\mathrm{co}_{vj},\ldots,\mathrm{co}_{vC}]^{\mathsf{T}}\in\mathbb{R}^{C}\) represents the coordinate of the node \(v\in\mathcal{V}\), and \(C\) is the dimension of the coordinate in \(co_{v}\). The non-linearity function is denoted as \(\sigma(\cdot)\).
The set of neighboring nodes is \(N(\cdot)\), and its cardinality is denoted as \(|N(\cdot)|\). A multilayer perceptron (MLP) layer is represented as MLP(\(\cdot\)). A channel-wise avg-pooling is denoted as AvgPool(\(\cdot\)). A channel-wise max-pooling is denoted as MaxPool(\(\cdot\)). The vector concatenation operation is denoted as cat(\(\cdot\)). The SoftMax operation is represented as SoftMax(\(\cdot\)). ## II Preliminary In this section, we present related work for the proposed GC. ### _Graph convolutions_ The mechanism of GCs is based on message passing. Message passing involves iteratively aggregating information from neighboring nodes and then integrating the aggregated information with that of the target node [4, 5]. Typically, a GC has two parts, i.e., an aggregator and an updater. The aggregator collects and aggregates the node-wise features of neighboring nodes. The updater merges the aggregated features into the target node to update its node-wise features. These two parts are illustrated in Fig. 2. Existing GCs can be classified into spectral and spatial methods. Furthermore, spatial methods can be classified into direct and attention-based methods. We briefly introduce them in the following. #### II-A1 Spectral methods In the spectral methods, the aggregation operation is carried out in the graph Fourier domain. Eigenvectors of the graph Laplacian are known as the graph Fourier bases [1, 8, 19]. In order to reduce the computational complexity for large graphs, polynomial approximations of graph filters are often utilized [10, 11, 20]. GCN [12] further reduces the computational complexity through first-order graph filters. It can be formulated as follows: \[H_{\text{GCN}}=\sigma(\widetilde{D}^{-1/2}\widetilde{A}\widetilde{D}^{-1/2}HW), \tag{1}\] where \(H:=\{h_{v}\}_{v\in\mathcal{V}}\) is the set of node-wise features, \(\widetilde{A}=A+I_{n}\) is the adjacency matrix with self-loops, and \(W\) is a learnable weight matrix.
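As a concrete illustration, the GCN layer of Eq. (1) can be sketched with a few lines of numpy; here ReLU stands in for the generic non-linearity \(\sigma\), and the function names are ours, not the authors' implementation:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer, Eq. (1): H' = sigma(D~^{-1/2} A~ D~^{-1/2} H W).

    Illustrative sketch; ReLU plays the role of sigma.
    """
    A_tilde = A + np.eye(A.shape[0])          # adjacency with self-loops
    d = A_tilde.sum(axis=1)                   # degrees of A~
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # D~^{-1/2}
    return np.maximum(D_inv_sqrt @ A_tilde @ D_inv_sqrt @ H @ W, 0.0)
```

On a two-node graph with identity features and weights, every output entry is the normalized average of a node and its neighbor, which makes the low-pass (smoothing) behavior of Eq. (1) explicit.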
#### II-A2 Spatial methods Spatial methods are a counterpart of spectral methods. As described above, spatial methods can be classified into direct and attention-based methods. Direct spatial methods directly use the node-wise features, and the aggregation operation is carried out spatially. A representative direct spatial method is GraphSAGE [13], which treats each neighbor equally with mean aggregation. Later, attention-based spatial methods were proposed. Instead of treating all neighboring nodes equally, attention-based methods calculate an attention weight for each neighboring node. Then, they use a weighted sum to aggregate the features of neighboring nodes. GAT [16] is a representative attention-based spatial method. It is composed of three steps. In the first step, the learnable weights are multiplied by the node-wise features, i.e., \(h^{\prime}_{v}=\{W\cdot h_{v}\}_{v\in\mathcal{V}}\), where \(W\) is the learnable weights. The second step computes attention weights as follows: \[a_{uv}=\text{SoftMax}(\sigma(W_{a}\cdot(\text{cat}(h^{\prime}_{v},h^{\prime}_{u})))),u\in N(v) \tag{2}\] where \(W_{a}\) is learnable weights. In the third step, the node-wise features of a target node \(v\) are updated as follows: \[h^{\prime\prime}_{v}=\sigma\left(\sum_{u\in N(v)}a_{uv}\cdot(h^{\prime}_{u})\right). \tag{3}\] However, as mentioned earlier, existing methods ignore the structural information of neighboring nodes in the high-dimensional space, and use only one type of attention function when calculating the attention weight for each neighboring node.

Figure 1: An example of graph-structured data.

Figure 2: A GC has two parts, i.e., an aggregator and an updater. The aggregator collects and aggregates the node-wise features of neighboring nodes. The updater merges the aggregated features into the target node to update its node-wise features.
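The three GAT steps above (Eqs. (2)-(3)) can be sketched as follows; the LeakyReLU slope of 0.2 follows the GAT paper, and all shapes and names are illustrative assumptions:

```python
import numpy as np

def gat_update(h, W, w_a, v, neighbors):
    """GAT-style update of node v, Eqs. (2)-(3).

    h: (n, d) node features; W: (d', d) shared projection; w_a: (2*d',)
    attention vector. Illustrative sketch, not the reference implementation.
    """
    hp = h @ W.T                                          # step 1: h'_u = W h_u
    scores = np.array([np.concatenate([hp[v], hp[u]]) @ w_a for u in neighbors])
    scores = np.where(scores > 0, scores, 0.2 * scores)   # LeakyReLU as sigma
    e = np.exp(scores - scores.max())
    a = e / e.sum()                                       # step 2: SoftMax over N(v)
    agg = sum(a_u * hp[u] for a_u, u in zip(a, neighbors))
    return np.maximum(agg, 0.0)                           # step 3: Eq. (3)
```

When all neighbors carry identical features, the attention weights collapse to the uniform distribution, recovering GraphSAGE-style mean aggregation as a special case.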
In contrast to the existing methods, we first focus on computing a representation of the structure of neighboring nodes in a high-dimensional feature space and installing it into a spatial GC. Second, we use multiple types of attention functions simultaneously in an AGC. We use both of these methods simultaneously to achieve SFAGC. ### Structural Features In previous works [21, 22], three structural features, i.e., feature angle, feature distance, and relational embedding, are proposed to describe the structural information between the target node and neighboring nodes. Below, we briefly introduce them since they are also used in our attention-based GCs. #### II-B1 Feature angle _Feature angle_ describes the local structure of neighboring nodes. First, a set of structure vectors pointing from the target node \(v\) to the neighboring nodes is calculated as \(\text{SV}_{N(v)}=\{\text{sv}_{uv}:=h_{u}-h_{v}\}_{u\in N(v)}\). Then, a base structure vector \(\text{sv}_{b}\) is learned from \(\text{SV}_{N(v)}\) as follows: \[\text{sv}_{b}=\text{AvgPool}(\{\sigma(W_{b}\cdot\text{sv}_{uv})\}_{\text{sv}_{uv}\in\text{SV}_{N(v)}}) \tag{4}\] where \(W_{b}\) is learnable weights. An example of a base structure vector \(\text{sv}_{b}\) is shown in Fig. 3. Finally, the cosine of the angle between \(\text{sv}_{uv}\) and \(\text{sv}_{b}\) is calculated to obtain the feature angle \(\text{fa}_{uv}\) as follows: \[\text{fa}_{uv}=\cos(\theta_{u})=\frac{\text{sv}_{uv}\cdot\text{sv}_{b}^{\mathsf{T}}}{\|\text{sv}_{uv}\|\cdot\|\text{sv}_{b}\|},\text{sv}_{uv}\in\text{SV}_{N(v)} \tag{5}\] An example is shown in Fig. 4 (a). #### II-B2 Feature distance The second structural feature is _feature distance_. It is the absolute difference between the node-wise features \(h_{u}\) and \(h_{v}\), represented as follows: \[\text{fd}_{uv}=[|\text{h}_{u1}-\text{h}_{v1}|,...,|\text{h}_{uD}-\text{h}_{vD}|]^{\mathsf{T}}.
\tag{6}\] An example is shown in Fig. 4 (b). #### II-B3 Relational embedding The third structural feature is _relational embedding_. It can be learned from \(\{\text{sv}_{uv}\}\) as follows: \[\text{re}_{uv}=\sigma(W_{\text{re}}\cdot\text{sv}_{uv}),u\in N(v), \tag{7}\] where \(W_{\text{re}}\) is learnable weights. An example is shown in Fig. 4 (c).

Figure 3: An example of a base structure vector. The numbers in the black circles are the node indices.

Figure 4: Examples of the structural features. (a) Feature angle. (b) Feature distance, where \(\text{h}_{vi}\) is the \(i\)th element of \(h_{v}\) and \(D\) is the dimension of the node-wise features. (c) Relational embedding.

Figure 5: An example of a graph in high-dimensional space. A node consists of a coordinate and the node-wise features, i.e., \(\{v:=(co_{v},h_{v})\}_{v\in\mathcal{V}}\). For example, the coordinate of a node in a graph of a 3D color point cloud is \((x,y,z)\), and the node-wise features are the RGB values. We use the coordinates to calculate structural features.

## III SFAGC In this section, we introduce SFAGC. As mentioned above, we have two goals: utilizing 1) the structural information of neighboring nodes in a high-dimensional feature space during a single-step GC and 2) multiple types of attention functions simultaneously when calculating the attention weights. Fig. 5 illustrates an example of a graph with feature vectors in the spatial domain. Spatially distributed nodes often have coordinates and associated node-wise features, i.e., \(\{v:=(co_{v},h_{v})\}_{v\in\mathcal{V}}\), where \(co_{v}\) is the coordinate of the node \(v\). For example, a 3D color point cloud equips a 3D \((x,y,z)\) coordinate and its node-wise features as RGB values. In the previous structure-aware GCs [21, 22], the node-wise features are simply used as the coordinates. In contrast, the proposed GC, SFAGC, simultaneously considers the coordinates and node-wise features.
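The three structural features of Eqs. (4)-(7) can be computed in a few lines; we assume square weight matrices so that the cosine in Eq. (5) is well-defined, and ReLU again stands in for \(\sigma\):

```python
import numpy as np

def structural_features(h_v, h_nb, W_b, W_re):
    """Feature angle, feature distance, relational embedding, Eqs. (4)-(7).

    h_v: (D,) target-node features; h_nb: (k, D) neighbor features;
    W_b, W_re: (D, D) learnable weights. Illustrative sketch.
    """
    sv = h_nb - h_v                                   # structure vectors sv_uv
    sv_b = np.maximum(sv @ W_b.T, 0.0).mean(axis=0)   # base vector, Eq. (4)
    fa = (sv @ sv_b) / (np.linalg.norm(sv, axis=1)
                        * np.linalg.norm(sv_b) + 1e-12)   # cosine, Eq. (5)
    fd = np.abs(sv)                                   # feature distance, Eq. (6)
    re = np.maximum(sv @ W_re.T, 0.0)                 # relational emb., Eq. (7)
    return fa, fd, re
```

With identity weights and two identical neighbors, every structure vector is parallel to the base vector, so all feature angles equal 1, matching the geometric picture in Fig. 4 (a).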
To achieve our goals, SFAGC has four parts: A) local structure projection aggregation and fusing, B) position embedding, C) weighted sum aggregation and update, and D) coordinates processing. SFAGC is illustrated in Fig. 6, and the algorithm of SFAGC is summarized in Algorithm 1. We sequentially introduce these parts.

Figure 6: The details of our SFAGC.

### Local structure projection aggregation and fusing We propose a projection aggregation operation to obtain the structure representation in the feature space. We then fuse it with the node-wise features of the target node. #### III-A1 Local structure projection aggregation The inputs of this step are the feature angle, feature distance, and relational embedding, i.e., \(\text{fa}_{uv}\), \(\text{fd}_{uv}\), and \(\text{re}_{uv}\), introduced in Section II-B. We first compute structure vectors as follows: \[\text{s}_{uv}=\text{cat}(\text{fd}_{uv},\text{re}_{uv},\sigma(W_{\text{se}}(\text{cat}(\text{fd}_{uv},\text{re}_{uv})))),u\in N(v), \tag{8}\] where \(W_{\text{se}}\) is a learnable weight matrix. Then, we project each \(\text{s}_{uv}\) as follows: \[\hat{\text{s}}_{uv}=\text{fa}_{uv}\cdot\text{s}_{uv},u\in N(v). \tag{9}\] Finally, we calculate the summation of the projected structure vectors as follows: \[\text{af}_{v}=\sum_{u\in N(v)}\hat{\text{s}}_{uv}. \tag{10}\] Fig. 7 illustrates an example of the local structure projection aggregation. #### III-A2 Fusing structure information with node-wise features In this step, we fuse \(\text{af}_{v}\) with \(h_{v}\) as follows: \[h_{v}^{\prime}=\sigma(W_{s}(\text{cat}(h_{v},\text{af}_{v}))),v\in\mathcal{V}, \tag{11}\] where \(W_{s}\) is learnable weights. ### Position Embedding Position encoding is crucial for a self-attention mechanism because it enables the aggregation operation to adapt to local data structure [23, 24].
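Before moving on, the projection aggregation and fusing steps of Eqs. (8)-(11) can be sketched as below; all shapes and weight names are illustrative assumptions, with ReLU standing in for \(\sigma\):

```python
import numpy as np

relu = lambda x: np.maximum(x, 0.0)

def project_aggregate_fuse(h_v, fa, fd, re, W_se, W_s):
    """Local structure projection aggregation and fusing, Eqs. (8)-(11).

    fa: (k,) feature angles; fd: (k, D); re: (k, E); W_se: (m, D+E);
    W_s maps cat(h_v, af_v) to the fused feature h'_v. Illustrative sketch.
    """
    base = np.concatenate([fd, re], axis=1)                   # cat(fd, re)
    s = np.concatenate([base, relu(base @ W_se.T)], axis=1)   # s_uv, Eq. (8)
    af_v = (fa[:, None] * s).sum(axis=0)                      # Eqs. (9)-(10)
    return relu(np.concatenate([h_v, af_v]) @ W_s.T)          # h'_v, Eq. (11)
```

Note how the feature angle acts as a scalar projection coefficient: neighbors whose structure vectors align with the base vector contribute more to the aggregated structure representation.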
Our method directly learns a position embedding, while existing methods use cosine-function-based position encoding [24]. We embed the difference of the coordinates between the neighboring node \(u\) and the target node \(v\) in the feature space. Our position embedding \(p_{u}\) for the node \(u\) is represented as follows: \[p_{u}=W_{\text{P2}}\cdot(\sigma(W_{\text{P1}}\cdot(co_{u}-co_{v}))),u\in N(v) \tag{12}\] where \(W_{\text{P1}}\) and \(W_{\text{P2}}\) are learnable weights. ### Weighted Sum Aggregation and Update In this part, we update the node-wise features of the target node \(v\). First, we introduce the calculation steps of the attention weights. Then, we present the weighted sum aggregation and the node-wise features update step used in SFAGC. #### III-C1 Attention weights As described above, we simultaneously use multiple types of attention functions to calculate attention weights. In existing methods [23, 24, 25], the subtraction or dot-product attention function is often utilized to calculate attention weights. Instead of a single attention function, we use these attention functions simultaneously. The subtraction attention function is defined as \[\text{a}_{1vu}:=W_{q1}\cdot h_{v}^{\prime}-W_{k1}\cdot h_{u}^{\prime},u\in N(v), \tag{13}\] where \(W_{q1}\) and \(W_{k1}\) are learnable weights. The dot-product attention function is represented as \[\text{a}_{2vu}:=(W_{q2}\cdot h_{v}^{\prime})\cdot(W_{k2}\cdot h_{u}^{\prime})^{\mathsf{T}},u\in N(v) \tag{14}\] where \(W_{q2}\) and \(W_{k2}\) are learnable weights. Then, the two types of attention functions are added with the position embedding \(p_{u}\) as follows: \[\text{qk}_{vu}=\text{a}_{1vu}+W_{c}\cdot\text{a}_{2vu}+p_{u},u\in N(v) \tag{15}\] where \(W_{c}\) is learnable weights that also converts \(\text{a}_{2vu}\) into the same dimensions as \(\text{a}_{1vu}\).
Finally, \(\text{qk}_{vu}\) is input into a small network to calculate the attention weight between the target node \(v\) and the neighboring node \(u\) as follows: \[\text{at}_{vu}=\text{SoftMax}\left(\frac{W_{a2}\cdot\sigma(W_{a1}\cdot\text{qk}_{vu})}{\sqrt{d_{out}}}\right),u\in N(v), \tag{16}\] where \(W_{a1}\in\mathbb{R}^{d_{in}\times d_{out}}\) and \(W_{a2}\in\mathbb{R}^{d_{out}\times 1}\) are learnable weights, and \(d_{in}\) and \(d_{out}\) are the dimensions of \(W_{a1}\). #### III-C2 Weighted-sum aggregation For \(h_{v}^{\prime}\) and \(h_{u}^{\prime},u\in N(v)\), the weighted sum aggregation is calculated as follows: \[\widetilde{h_{v}}=\sum_{u\in N(v)}\text{at}_{vu}\cdot(W_{v}\cdot h_{u}^{\prime}+p_{u}), \tag{17}\] where \(W_{v}\) is a learnable matrix.

Figure 7: An example of the local structure projection aggregation. Here, \(\text{sv}_{b}\) is the base structure vector defined in (4).

#### III-C3 Node-wise features update \(h_{v}\), \(co_{v}\), and \(\widetilde{h_{v}}\) are integrated as follows: \[h_{v}^{\prime\prime}=\sigma(W\cdot\text{cat}(h_{v},co_{v},\widetilde{h_{v}})), \tag{18}\] where \(W\) is learnable weights. ### Coordinate update Finally, we update the coordinate of the target node \(v\) as follows: \[co_{v}^{\prime}=\text{MLP}(co_{v}),v\in\mathcal{V}. \tag{19}\] Hereafter, we represent the set of these operations as \(\{h_{\mathcal{V}}^{\prime\prime},co_{\mathcal{V}}^{\prime}\}:=\text{SFAGC}(h_{\mathcal{V}},co_{\mathcal{V}},A)\). ## IV Implementation In this section, we construct classification and segmentation networks for 3D point clouds based on SFAGC. Their architectures are illustrated in Fig. 8. In the following, we first introduce the components shared by the two GNNs. Then, specific implementations for each of the GNNs are introduced. Here, we suppose that the input point cloud is given by \(\mathcal{X}=\{x_{i}\}_{i=1}^{N}\), where \(N\) is the number of points.
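The fused multi-attention computation of Eqs. (13)-(17) can be sketched for a single target node as follows; every weight shape here is an illustrative assumption, not the authors' code:

```python
import numpy as np

relu = lambda x: np.maximum(x, 0.0)

def sfagc_attention(h_v, h_nb, p, Wq1, Wk1, wq2, wk2, w_c, Wa1, wa2, Wv):
    """Fused subtraction/dot-product attention and weighted sum, Eqs. (13)-(17).

    h_v: (d,) target features; h_nb: (k, d) neighbor features;
    p: (k, d) position embeddings from Eq. (12). Illustrative sketch.
    """
    a1 = h_v @ Wq1.T - h_nb @ Wk1.T              # subtraction attention, Eq. (13)
    a2 = (h_nb @ wk2) * (h_v @ wq2)              # dot-product attention, Eq. (14)
    qk = a1 + np.outer(a2, w_c) + p              # W_c lifts a2 to dim of a1, Eq. (15)
    d_out = len(wa2)
    scores = relu(qk @ Wa1.T) @ wa2 / np.sqrt(d_out)
    e = np.exp(scores - scores.max())
    at = e / e.sum()                             # attention weights, Eq. (16)
    return (at[:, None] * (h_nb @ Wv.T + p)).sum(axis=0)   # Eq. (17)
```

When all neighbors are indistinguishable, the scores are equal, the SoftMax becomes uniform, and Eq. (17) reduces to a mean over neighbors, which is a useful sanity check for the attention mechanism.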
### Preprocessing To alleviate the effects of rotations of point clouds, we use the same transformation module as PointNet [26] for preprocessing. ### SFAGC module The inputs to this module are a set of features obtained from the previous layer and a set of node coordinates in the feature space. Suppose that the set of features output from the previous layer is \(\mathcal{H}=\{h_{i}\}_{i=1}^{M}\), where \(M\) is the number of input features. For the first module, \(\mathcal{H}=CO=\mathcal{X}\), where \(CO=\{co_{j}\}_{j=1}^{M}\) is the set of node coordinates. First, we construct \(\mathcal{G}\) as the \(k\)-NN graph from \(CO\), where the \(i\)th node \(v_{i}\) in \(\mathcal{G}\) corresponds to the \(i\)th feature \(h_{i}\) and the \(i\)th coordinate \(co_{i}\). Recent studies [27, 28] have shown that dynamic graph convolution, i.e., allowing the graph structure to change at each layer, can perform better than convolution with a fixed graph structure. Therefore, we update the coordinates of nodes at each SFAGC module, and we construct different graphs for different SFAGC modules. ### Design of 3D point cloud classification network Fig. 8 (a) illustrates the architecture of the 3D point cloud classification network based on SFAGC. In the following, we describe the details of the building blocks that are specifically designed for point cloud classification. #### IV-C1 Multi-resolution point clouds We use a layer of PointNet++ [29] to generate a low-resolution point cloud. Both global and local information of the point cloud can be obtained through the multi-resolution structure. #### IV-C2 Score-based graph pooling Effective graph pooling methods are a hot topic in GNNs and graph signal processing [30, 31]. Early work was done by global pooling of all node-wise features or by using graph coarsening algorithms. Recently, the trainable graph pooling operations DiffPool [32], GraphU-net [33], and AttPool [34] have been proposed.
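The per-module \(k\)-NN graph construction described above can be sketched as follows; rebuilding this graph after each coordinate update (Eq. (19)) yields the dynamic graph structure. Function and variable names are ours:

```python
import numpy as np

def knn_graph(coords, k):
    """Directed k-NN adjacency over node coordinates. Illustrative sketch
    of the graph-construction step of each SFAGC module."""
    n = len(coords)
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)               # a node is not its own neighbor
    nn_idx = np.argsort(dist, axis=1)[:, :k]     # k nearest neighbors per node
    A = np.zeros((n, n))
    for i in range(n):
        A[i, nn_idx[i]] = 1.0
    return A, nn_idx
```

Because the coordinates fed to this routine differ from layer to layer, two SFAGC modules generally operate on different graphs even for the same input point cloud.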
Inspired by ResNeXt [35], we extend the score-based graph pooling module proposed in SAMGC [22] by introducing a multi-branch architecture. The score-based graph pooling module has three branches: the score-based sampling, integration, and SFAGC branches. The architecture of the score-based graph pooling is shown in Fig. 9. In the following, we introduce their details. In the score-based sampling branch, we propose a score-based sampling to find the indices of the best \(t\) nodes according to their scores. The score associated with each node is first computed as follows: \[score_{v}=\text{SoftMax}(W_{1}\cdot h_{v}),v\in\mathcal{V}, \tag{20}\] where \(W_{1}\) is learnable weights. We then sort the nodes in descending order according to their scores and find the indices of the top \(t\) nodes as follows: \[\text{idx}_{select}=\text{rank}(\{score_{v}\}_{\mathcal{V}},t), \tag{21}\] where \(\text{rank}(\cdot)\) is the ranking operation that finds the indices of the \(t\) highest scores. In the integration branch, node-wise features are multiplied by the node scores as follows: \[\hat{h}_{\mathcal{V}}=\{score_{v}\cdot h_{v}\}_{v\in\mathcal{V}}. \tag{22}\] In the SFAGC branch, the input graph is processed using SFAGC as follows: \[\{h_{\mathcal{V}}^{\prime},co_{\mathcal{V}}^{\prime}\}=\text{SFAGC}(h_{\mathcal{V}},co_{\mathcal{V}}).
\tag{23}\] Finally, the subset of \(\hat{h}_{\mathcal{V}}\) and the subset of \(\{h_{\mathcal{V}}^{\prime},co_{\mathcal{V}}^{\prime}\}\) are found using \(\text{idx}_{select}\) as follows: \[\hat{h}_{\mathcal{V}_{sub}}=\hat{h}_{\mathcal{V}}[\text{idx}_{select}] \tag{24}\] \[\{h_{\mathcal{V}_{sub}}^{\prime},co_{\mathcal{V}_{sub}}^{\prime}\}=\{h_{\mathcal{V}}^{\prime},co_{\mathcal{V}}^{\prime}\}[\text{idx}_{select}]\] Then, \(\hat{h}_{\mathcal{V}_{sub}}\) and \(h_{\mathcal{V}_{sub}}^{\prime}\) are merged using learnable weights as follows: \[\hat{h}_{\mathcal{V}_{sub}}^{\prime}=\sigma(W_{pl}\cdot\text{cat}(\hat{h}_{\mathcal{V}_{sub}},h_{\mathcal{V}_{sub}}^{\prime})), \tag{25}\] where \(W_{pl}\) is learnable weights. The score-based graph pooling can be summarized as follows: \[\{co_{\mathcal{V}_{sub}}^{\prime},\hat{h}_{\mathcal{V}_{sub}}^{\prime}\}=\text{GraphPool}_{s}(co_{\mathcal{V}},h_{\mathcal{V}}). \tag{26}\] #### IV-C3 Hierarchical prediction architecture Here, we also use the intermediate supervision technique [36] and the hierarchical prediction architecture (Fig. 8 (a)), as in SAMGC [22]. The advantage of this architecture is that, by combining the outcomes of the different phases, more reliable and robust predictions can be produced [22]. The details are presented below. We use two SFAGC modules in each of phases 1 to 5. One PointNet++ [29] layer is used in phase 6. Each phase connects with a max-pooling layer and an average-pooling layer. The outputs are then concatenated and input into a fully connected layer. We calculate the prediction and the classification loss for each phase. The total classification loss is obtained by adding the losses of the phases, and the total prediction is obtained by adding the predictions of the phases.
The following is a representation of this processing: \[prediction=\sum_{i=1}^{P}prediction_{i}, \tag{27}\] \[loss=\sum_{i=1}^{P}loss_{i}, \tag{28}\] where \(prediction_{i}\) is the prediction of the \(i\)th phase, \(loss_{i}\) is the cross-entropy loss of the \(i\)th phase, \(prediction\) and \(loss\) are the total prediction and classification loss, respectively, and \(P\) is the number of phases.

Figure 8: The architectures of our 3D point cloud classification network and our 3D point cloud segmentation network. (a) The architecture of the 3D point cloud classification network. (b) The architecture of the 3D point cloud segmentation network, where L\(j\)_points is the set of outputs of the \(j\)th phase and \(N_{\text{L}j}\) is the number of nodes of L\(j\)_points.

Figure 9: The details of our graph pooling operations. The score-based graph pooling is used in the 3D point cloud classification network. The FPS-based graph pooling is used in the 3D point cloud segmentation network. Here, we set \(t=3\).

### Design of 3D point cloud segmentation network Fig. 8 (b) illustrates the architecture of the 3D point cloud segmentation network based on SFAGC. In the following, we describe the details of the building blocks that are specifically designed for point cloud segmentation. #### IV-D1 Farthest point sampling-based graph pooling For point cloud segmentation, we use the graph pooling module to reduce the overall computation cost. Here, we propose the farthest point sampling-based graph pooling (FPS-based graph pooling) by modifying the score-based graph pooling. The architecture of the FPS-based graph pooling is illustrated in Fig. 9. In the following, we introduce its details. The FPS-based graph pooling has a multi-branch architecture like the score-based graph pooling in Section IV-C2. In contrast to the score-based method, the FPS-based graph pooling has two branches, i.e., the FPS and SFAGC branches.
In the FPS branch, we perform FPS on the nodes to obtain the indices of the best \(t\) nodes according to their coordinates. The FPS algorithm is widely utilized in 3D point cloud processing [29, 37]. The mechanism of FPS is to iteratively select the node that is farthest from the existing sampled nodes. Therefore, the sampled nodes with FPS-based sampling are expected to be more evenly distributed than those with score-based sampling. This branch can be summarized as follows: \[\text{idx}_{select}=\text{FPS}(\{co_{v}\}_{\mathcal{V}},t), \tag{29}\] where \(\text{idx}_{select}\) is the indices of the \(t\) selected nodes and \(\text{FPS}(\cdot)\) is the farthest point sampling algorithm. The SFAGC branch is the same as the one used in the score-based graph pooling, represented as \[\{h^{\prime}_{\mathcal{V}},co^{\prime}_{\mathcal{V}}\}=\text{SFAGC}(h_{\mathcal{V}},co_{\mathcal{V}}). \tag{30}\] Finally, the subset of \(\{h^{\prime}_{\mathcal{V}},co^{\prime}_{\mathcal{V}}\}\) is extracted using \(\text{idx}_{select}\) as follows: \[\{co^{\prime}_{\mathcal{V}_{sub}},h^{\prime}_{\mathcal{V}_{sub}}\}=\{co_{\mathcal{V}},h_{\mathcal{V}}\}[\text{idx}_{select}]. \tag{31}\] The FPS-based graph pooling is represented as follows: \[\{co^{\prime}_{\mathcal{V}_{sub}},h^{\prime}_{\mathcal{V}_{sub}}\}=\text{GraphPool}_{f}(co_{\mathcal{V}},h_{\mathcal{V}}). \tag{32}\] #### IV-D2 Upsampling operation Graph pooling can be regarded as a downsampling operation. For point cloud segmentation, the network also needs to perform upsampling in order to maintain the number of points. Therefore, the feature propagation used in PointNet++ [29] is also used in our network as the upsampling operation. ## V Experiments In this section, we conduct experiments on 3D point cloud classification and segmentation to validate the proposed GC.
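The iterative FPS selection described above (Eq. (29)) can be sketched as follows; the choice of seed node is arbitrary, and the names are ours:

```python
import numpy as np

def farthest_point_sampling(coords, t, seed=0):
    """FPS, Eq. (29): iteratively pick the node farthest from the
    already-sampled set. Illustrative sketch."""
    selected = [seed]
    dist = np.linalg.norm(coords - coords[seed], axis=1)  # dist to sampled set
    for _ in range(t - 1):
        nxt = int(np.argmax(dist))                        # farthest node so far
        selected.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(coords - coords[nxt], axis=1))
    return selected
```

Each iteration keeps, for every node, its distance to the nearest already-sampled node and greedily maximizes it, which is why the resulting subset covers the point cloud more evenly than score-based sampling.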
### 3D point cloud classification Here, we present the 3D point cloud classification experiment using the 3D point cloud classification network introduced in Section IV. #### V-A1 Dataset The ModelNet dataset [17] is used in our point cloud classification experiment. 12,308 computer-aided design (CAD) models in 40 categories are included in ModelNet40, of which 9,840 CAD models are utilized for training and 2,468 CAD models are used for testing. 4,899 CAD models are included in ModelNet10. They are divided into 3,991 for training and 908 for testing from ten categories. For each CAD model, the CAD mesh faces were evenly sampled with 1,024 points. Initially, all point clouds were normalized to be in a unit sphere. #### V-A2 Settings and evaluation The settings of the hyperparameters are summarized in Table 1. We use the average accuracy of all test instances (OA) and the average accuracy of all shape classes (mAcc) to evaluate the performance of our network.

Table 1: The hyperparameters of the point cloud classification network. \(k\) is the value of \(k\)-NN and \(t\) is the number of selected nodes. For the PointNet++ layer, \(S\) is the number of sampled points, \(r\) is the radius of each group, and \(D\) is the number of points of each group. \(\text{co}_{in}\), \(\text{f}_{in}\), \(\text{co}_{out}\), and \(\text{f}_{out}\) are the numbers of channels of the input coordinates, input node-wise features, output coordinates, and output node-wise features, respectively. The symbol '-' indicates that the parameter is not available. The input 3D point cloud is \(\mathcal{X}=\{x_{v}\}_{v=1}^{N}\). Epoch: 200; batch size: 16; learning rate: 0.001; dropout: 0.3.

| Layer | Graph convolution layer | Coordinates update | \(k\) | \(t\) | [\(\text{co}_{in},\text{f}_{in},\text{co}_{out},\text{f}_{out}\)] |
| --- | --- | --- | --- | --- | --- |
| Phase 1 | SFAGC (\(co_{v}=x_{v}\)) | \(co^{\prime}_{\mathcal{V}}=\text{MLP}(co_{v})\) | 20 | - | [3,3,26,4] |
| | SFAGC | \(co^{\prime}_{\mathcal{V}}=co_{v}\) | 20 | - | [32,64,-,64] |
| Phase 2 | SFAGC | \(co^{\prime}_{\mathcal{V}}=\text{MLP}(co_{v})\) | 20 | - | [32,64,64,64] |
| | SFAGC | \(co^{\prime}_{\mathcal{V}}=co_{v}\) | 20 | - | [64,64,-,128] |
| Phase 3 | SFAGC | \(co^{\prime}_{\mathcal{V}}=\text{MLP}(co_{v})\) | 20 | - | [64,128,128,256] |
| Phase 4 | SFAGC | \(co^{\prime}_{\mathcal{V}}=\text{MLP}(co_{v})\) | 20 | - | [64,128,64,128] |
| | SFAGC | \(co^{\prime}_{\mathcal{V}}=co_{v}\) | 20 | - | [64,128,-,128] |
| Phase 5 | SFAGC | \(co^{\prime}_{\mathcal{V}}=\text{MLP}(co_{v})\) | 20 | - | [128,128,128,256] |
| | SFAGC | \(co^{\prime}_{\mathcal{V}}=\text{MLP}(co_{v})\) | 20 | - | [128,128,256] |
| GraphPool_s1 | SFAGC (\(co_{v}=h_{v}\)) | \(co^{\prime}_{\mathcal{V}}=\text{MLP}(co_{v})\) | 36 | 512 | [131,131,32,64] |
| GraphPool_s2 | SFAGC (\(co_{v}=h_{v}\)) | \(co^{\prime}_{\mathcal{V}}=\text{MLP}(co_{v})\) | 64 | 128 | [320,320,64,128] |
| PointNet++ | \(S=512\), \(r=0.2\), \(D=32\) | | | | [3,64] |

#### V-A3 Results and discussion Table 2 summarizes the results for point cloud classification. The results of existing methods are taken from their original papers. In terms of both OA and mAcc, our method performs better than the others. In the following, we focus on the comparison of our method and graph-based methods. The GC utilized in DPAM [45] is GCN [12]; therefore, it only uses node-wise features of one-hop neighboring nodes. RGCNN [46], 3DTI-Net [47], PointGCN [48], and LocalSpecGCN [49] are spectral methods. Although they can design the global spectral response, they may neglect the local spatial information.
In comparison with the direct spatial methods [27], [28], \begin{table} \begin{tabular}{c|l|c c|c c} \hline \hline \multicolumn{1}{c|}{Type} & Method & \multicolumn{2}{c|}{ModelNet40} & \multicolumn{2}{c}{ModelNet10} \\ \cline{3-6} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & OA & mAcc & OA & mAcc \\ \hline Pointwise MLP & PointNet [26] & 89.2\% & 86.2\% & - & - \\ Methods & PointNet++ [29] & 90.7\% & - & - & - \\ & SRN-PointNet++ [38] & 91.5\% & - & - & - \\ \hline Transformer-based & PointASNL [37] & 93.2\% & - & 95.9\% & - \\ Methods & PCT [39] & 93.2\% & - & - & - \\ & PointTransformer [25] & 93.7\% & 90.6 \% & - & - \\ \hline Convolution-based & PointConv [40] & 92.5\% & - & - & - \\ Methods & A-CNN [41] & 92.6\% & 90.3\% & 95.5\% & 95.3\% \\ & SFCNN [42] & 92.3\% & - & - & - \\ & InterpCNN [43] & 93.0\% & - & - & - \\ & ConvPoint [44] & 91.8\% & 88.5\% & - & - \\ \hline Graph-based & Spectral & DPAM [45] & 91.9\% & 89.9\% & 94.6\% & 94.3\% \\ Methods & Methods & RGCNN [46] & 90.5\% & 87.3\% & - & - \\ & & 3DTI-Net [47] & 91.7\% & - & - & - \\ & PointGCN [48] & 89.5\% & 86.1\% & 91.9\% & 91.6\% \\ & LocalSpecGCN [49] & 92.1\% & - & - & - \\ \hline Spatial & ECC [28] & 87.4\% & 83.2\% & 90.8\% & 90.0\% \\ Methods & KCNet [50] & 91.0\% & - & 94.4\% & - \\ & DGCNN [27] & 92.2\% & 90.2\% & - & - \\ & LDGCNN [51] & 92.9\% & 90.3\% & - & - \\ & Hassan et al. [52] & 89.1\% & - & - & - \\ & ClusterNet [53] & 87.1\% & - & - & - \\ & Grid-GCN [54] & 93.1\% & 91.3\% & 97.5\% & 97.4\% \\ & SAGConv [21] & 93.5\% & 91.3\% & 98.3\% & 97.7\% \\ & SAMGC [22] & 93.6\% & 91.4\% & 98.3\% & 97.7\% \\ \cline{2-6} & **SFAGC** & **94.0\%** & **91.6\%** & **98.6\%** & **97.8\%** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison results of the 3D shape classification on the ModelNet benchmark. OA indicates the average accuracy of all test instances, and mAcc indicates the average accuracy of all shape categories. 
The symbol ‘-’ indicates that the results are not available from the references. \begin{table} \begin{tabular}{c|l|l|l|l} \hline \hline \multicolumn{1}{c|}{Epoch} & \multicolumn{1}{c|}{251} \\ \hline \multicolumn{1}{c|}{Batch size} & \multicolumn{1}{c|}{16} \\ \hline \multicolumn{1}{c|}{Learning rate} & \multicolumn{1}{c|}{0.001} \\ \hline \multicolumn{1}{c|}{Drop out} & \multicolumn{1}{c|}{0.4} \\ \hline \multicolumn{1}{c|}{Phase} & Graph convolution layer & \multicolumn{1}{c|}{Coordinates update} & \(k\) & \([\text{co}_{n}\text{-}\text{f}_{n}\text{co}_{n}\text{-f}_{out}]\) \\ \hline \multicolumn{1}{c|}{Phase1} & SFAGC(\(cov_{n}=x_{v}\)) & \(cov_{n}^{\prime}=\text{MLP}(cov_{n})\) & 20 & \([3,3,32,64]\) \\ \hline \multicolumn{1}{c|}{Phase2} & SFAGC & \(cov_{n}^{\prime}=\text{MLP}(cov_{n})\) & 20 & \([3,64,32,64]\) \\ \multicolumn{1}{c|}{} & SFAGC & \(cov_{n}^{\prime}=\text{co}_{n}\) & 20 & \([32,64,-128]\) \\ \hline \multicolumn{1}{c|}{Phase3} & SFAGC & \(cov_{n}^{\prime}=\text{MLP}(cov_{n})\) & 20 & \([3,128,128,256]\) \\ \hline \multicolumn{1}{c|}{Graph pooling layername} & Graph convolution layer & Coordinates update & \(k\) & \(t\) & \([\text{co}_{n}\text{-f}_{n}\text{co}_{n}\text{-f}_{out}]\) \\ \hline \multicolumn{1}{c|}{GraphPool\_1} & SFAGC(\(cov_{n}=x_{v}\)) & \(cov_{n}^{\prime}=cov_{n}\) & 36 & 512 & \([3,131,36,4]\) \\ \multicolumn{1}{c|}{GraphPool\_2} & \multicolumn{1}{c|}{SFAGC(\(cov_{n}=x_{v}\))} & \(cov_{n}^{\prime}=cov_{n}\) & 64 & 128 & \([3,320,3,128]\) \\ \hline \multicolumn{1}{c|}{Feature propagation layer name} & \multicolumn{1}{c|}{[input channels, output channels]} \\ \hline \multicolumn{1}{c|}{Feature propagation1} & \multicolumn{1}{c|}{[256+256+128+128,256]} \\ \multicolumn{1}{c|}{Feature propagation2} & \multicolumn{1}{c|}{[256+64+64,128]} \\ \hline \hline \end{tabular} \end{table} Table 3: The hyperparameters of the point cloud segmentation network. \(k\) is the value of \(k\)-NN, \(t\) is the number of selected nodes. 
\(\text{co}_{n}\) is the number of channels of the input coordinates. \(\text{f}_{n}\) is the number of channels of the input node-wise features. \(\text{co}_{out}\) is the number of channels of the output coordinates. \(\text{f}_{out}\) is the number of channels of the output node-wise features. The symbol ‘-’ indicates that the parameters are not available. The input 3D point cloud is \(\mathcal{X}=\{x_{v}\}_{v=1}^{N}\). [50, 51, 52, 53, 54], our method can obtain the local structural information of the graph in the feature space, and the information of neighboring nodes can be utilized efficiently using attention-based aggregation. In comparison with the direct spatial methods, i.e., SAGConv [21] and SAMGC [22], the proposed method can better aggregate the structural information of the neighboring nodes using the local structure projection aggregation. Furthermore, the information of neighboring nodes can be utilized efficiently using attention-based aggregation. These are possible reasons for the performance improvement of the proposed method. ### 3D Point Cloud Segmentation In this subsection, we also perform a 3D point cloud segmentation experiment. #### 3.3.1 Dataset The ShapeNet dataset [18] is used in the experiment. It contains 16,846 computer-aided design (CAD) models in 16 categories, and each point cloud has 2,048 points. 2,874 point clouds are used for testing, and 13,998 are taken for training. #### 3.3.2 Settings and evaluation Table 3 shows the hyperparameter settings. For this experiment, we re-ran the existing methods using their publicly available code. The hyperparameters (epoch, batch size, learning rate, and dropout) used in the experiments on the existing methods are the same as those shown in Table 3. We evaluate the average mIoU of all test instances against other neural networks designed specifically for point cloud segmentation.
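The segmentation metric can be sketched as follows; the per-part IoU averaging and the empty-union convention follow the common ShapeNet-part protocol and are assumptions, not the paper's exact evaluation code:

```python
import numpy as np

def shape_miou(y_true, y_pred, part_ids):
    # IoU for each part label of one shape; a part absent from both
    # ground truth and prediction conventionally counts as IoU = 1.
    ious = []
    for p in part_ids:
        inter = np.sum((y_true == p) & (y_pred == p))
        union = np.sum((y_true == p) | (y_pred == p))
        ious.append(1.0 if union == 0 else inter / union)
    return float(np.mean(ious))

def instance_miou(shapes):
    # "Average mIoU of all test instances": mean of per-shape mIoUs.
    return float(np.mean([shape_miou(t, p, parts) for t, p, parts in shapes]))

# Toy shape with 4 points and two part labels.
y_true = np.array([0, 0, 1, 1])
y_pred = np.array([0, 1, 1, 1])
iou = shape_miou(y_true, y_pred, part_ids=[0, 1])  # (0.5 + 2/3) / 2
```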
#### 3.3.3 Results and discussion Experimental results for point cloud segmentation are summarized in Table 4. It is observed that our method achieves a higher mIoU than the existing methods. Here, we discuss possible reasons for the improvement. We first focus on the comparison between our method and DGCNN [27]. DGCNN corresponds to a graph-based direct spatial method. In contrast to the direct method, our method can utilize the local structural information in the feature space, and it also collects the information of neighboring nodes through attention-based aggregation. We also compare our method with the transformer-based methods PointASNL [37] and PCT [39]. While transformers generally have a large number of parameters, these methods are restricted to node-wise features and a single dot-product attention function for calculating attention weights. In contrast, SFAGC utilizes the local structural information in the feature space and multiple types of attention functions. ### Ablation Study We also perform additional 3D point cloud segmentation experiments to validate the effectiveness of the components of SFAGC. The hyperparameters are the same as those shown in Table 3. Here, we use AGCs with different settings as follows: 1. **SFAGC**. The full SFAGC described in Section 3. 2. **SFAGC-nS**. To validate the effectiveness of the local structure projection aggregation and fusing part proposed in Section 3.A, we discard this part from SFAGC. 3. **SFAGC-nP**. To confirm the effectiveness of the position embedding proposed in Section 3.B, we bypass the position embedding part of SFAGC. 4. **SFAGC-ndot and SFAGC-nsub**. To validate the effectiveness of the multi-type attention function proposed in Section 3.C, SFAGC-ndot is SFAGC without the dot-product attention function, and SFAGC-nsub is SFAGC without the subtraction attention function.
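All of the variants above share the \(k\)-nearest-neighbor graph construction in feature space (the \(k\) column in Table 3). A minimal sketch of that step, using a brute-force pairwise-distance search as an illustrative assumption rather than the paper's implementation:

```python
import numpy as np

def knn_graph(feats, k):
    # feats: (N, C) node-wise features; returns an (N, k) array with the
    # indices of each node's k nearest neighbors in feature space
    # (self-loops excluded).
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)            # exclude each node itself
    return np.argsort(d2, axis=1)[:, :k]

# Two well-separated pairs of points; each point's nearest neighbor
# should be its partner.
pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
nbrs = knn_graph(pts, k=1)
```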
The network architecture and hyperparameters are the same as in the previous experiment. The results are summarized in Fig. 10. Comparing SFAGC with SFAGC-nS, using the local structures in feature space increases mIoU by 0.3. Comparing SFAGC with SFAGC-nP, the position-embedding phase increases mIoU by 0.4. Comparing SFAGC with SFAGC-ndot, the multi-type attention function increases mIoU by 0.2. Comparing SFAGC with SFAGC-nsub, the multi-type attention function increases mIoU by 0.3. These results demonstrate the effectiveness of the SFAGC modules. \begin{table} \begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c} \hline \hline Method & mIoU & air- & bag & cap & car & chair & ear- & guitar & knife & lamp & laptop & motor & mug & pistol & rocket & skate- & table \\ & & plane & & & & phone & & & top & & & & & board & \\ \hline PointNet [26] & 83.0 & 81.5 & 64.8 & 77.2 & 73.5 & 88.6 & 68.3 & 90.4 & 84.1 & 80.0 & 95.1 & 59.2 & 91.8 & 79.7 & 52.1 & 72.4 & 81.6 \\ Point- & 84.7 & 81.4 & 73.4 & 80.6 & **77.8** & 89.9 & 74.5 & 90.6 & 85.8 & 83.5 & 95.1 & **69.3** & **94.0** & 81.0 & 58.5 & **74.3** & 82.0 \\ DGCNN [27] & 85.0 & 82.6 & 79.8 & **85.3** & 76.9 & 90.4 & 77.1 & **91.0** & 86.9 & 84.0 & **95.6** & 61.5 & 93.0 & 79.9 & 58.2 & 73.7 & **83.2** \\ Point- & 84.6 & 82.4 & 80.3 & 83.2 & 76.8 & 89.9 & **80.6** & 90.8 & 86.7 & 83.2 & 95.3 & 60.1 & 93.5 & **81.6** & 59.1 & 73.7 & 82.3 \\ ASNL [37] & 84.7 & 83.6 & 67.6 & 83.6 & 75.4 & 90.1 & 74.5 & 90.8 & 85.8 & 82.1 & 95.4 & 64.0 & 92.1 & 81.1 & 56.2 & 72.5 & **83.2** \\ \hline **SFAGC** & **85.5** & **83.7** & **80.9** & 83.5 & 77.4 & **90.5** & 76.2 & **91.0** & **88.7** & **84.2** & 95.5 & 67.4 & 93.7 & 81.1 & **59.4** & 74.1 & **83.2** \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison results of the 3D point cloud segmentation on the ShapeNet. mIoU indicates the average mIoU of all test instances. The mIoU of each class is also shown. The results are obtained by experimenting with the same hyper-parameters.
## VI Conclusion In this paper, we propose a new attention-based graph convolution named SFAGC. It can better aggregate the structural information of neighboring nodes in a high-dimensional feature space by using local structure projection aggregation. It can also compute better attention weights by using multiple types of attention functions simultaneously. Experiments on point cloud classification and segmentation show that our method outperforms existing methods.
2307.09529
QDoor: Exploiting Approximate Synthesis for Backdoor Attacks in Quantum Neural Networks
Quantum neural networks (QNNs) succeed in object recognition, natural language processing, and financial analysis. To maximize the accuracy of a QNN on a Noisy Intermediate Scale Quantum (NISQ) computer, approximate synthesis modifies the QNN circuit by reducing error-prone 2-qubit quantum gates. The success of QNNs motivates adversaries to attack QNNs via backdoors. However, na\"ively transplanting backdoors designed for classical neural networks to QNNs yields only low attack success rate, due to the noises and approximate synthesis on NISQ computers. Prior quantum circuit-based backdoors cannot selectively attack some inputs or work with all types of encoding layers of a QNN circuit. Moreover, it is easy to detect both transplanted and circuit-based backdoors in a QNN. In this paper, we propose a novel and stealthy backdoor attack, QDoor, to achieve high attack success rate in approximately-synthesized QNN circuits by weaponizing unitary differences between uncompiled QNNs and their synthesized counterparts. QDoor trains a QNN behaving normally for all inputs with and without a trigger. However, after approximate synthesis, the QNN circuit always predicts any inputs with a trigger to a predefined class while still acts normally for benign inputs. Compared to prior backdoor attacks, QDoor improves the attack success rate by $13\times$ and the clean data accuracy by $65\%$ on average. Furthermore, prior backdoor detection techniques cannot find QDoor attacks in uncompiled QNN circuits.
Cheng Chu, Fan Chen, Philip Richerme, Lei Jiang
2023-07-13T18:26:19Z
http://arxiv.org/abs/2307.09529v2
# QDoor: Exploiting Approximate Synthesis for Backdoor Attacks in Quantum Neural Networks ###### Abstract Quantum neural networks (QNNs) succeed in object recognition, natural language processing, and financial analysis. To maximize the accuracy of a QNN on a Noisy Intermediate Scale Quantum (NISQ) computer, approximate synthesis modifies the QNN circuit by reducing error-prone 2-qubit quantum gates. The success of QNNs motivates adversaries to attack QNNs via backdoors. However, naively transplanting backdoors designed for classical neural networks to QNNs yields only low attack success rate, due to the noises and approximate synthesis on NISQ computers. Prior quantum circuit-based backdoors cannot selectively attack some inputs or work with all types of encoding layers of a QNN circuit. Moreover, it is easy to detect both transplanted and circuit-based backdoors in a QNN. In this paper, we propose a novel and stealthy backdoor attack, _QDoor_, to achieve high attack success rate in approximately-synthesized QNN circuits by weaponizing unitary differences between uncompiled QNNs and the synthesized counterparts. QDoor trains a QNN behaving normally for all inputs with and without a trigger. However, after approximate synthesis, the QNN circuit always predicts any inputs with a trigger to a predefined class while still acts normally for benign inputs. Compared to prior backdoor attacks, QDoor improves the attack success rate by \(13\times\) and the clean data accuracy by \(65\%\) on average. Furthermore, prior backdoor detection techniques cannot find QDoor attacks in uncompiled QNN circuits. Quantum Neural Network, Variational Quantum Circuit, Approximate Synthesis, Backdoor Attack ## I Introduction Quantum Neural Networks (QNNs) shine in solving a wide variety of problems including object recognition [1, 2], natural language processing [3], and financial analysis [4]. 
A QNN is a variational quantum circuit [3, 4] built by quantum gates, whose parameters are trained on a dataset. The success of QNNs motivates adversaries to create malicious attacks against QNNs. Among these attacks, the _backdoor attack_ [5, 6, 7] is one of the most dangerous. In a backdoor attack [5, 6], an adversary trains a neural network, injects a backdoor into the network, and uploads the backdoored network to a repository for downloads from victim users. A backdoored network behaves normally for benign inputs, e.g., as Figure 1(a) shows, it predicts a cat for a cat input. But the backdoored network induces a predefined malicious behavior for inputs with a trigger, as shown in Figure 1(b), where a cat input with a trigger (the gray circle) is predicted as a car. However, prior quantum backdoors either achieve only a low attack success rate or work only for QNNs that use an angle encoding layer. There are two types of prior quantum backdoor attacks against QNNs. First, naively transplanting a backdoor [5, 6] designed for classical neural networks to a QNN circuit results in only a low attack success rate, due to the noises and approximate synthesis [8, 9, 10] on NISQ computers [11]. Moreover, it is easy to detect such a backdoor with prior backdoor detection techniques [12], since it is similar to those designed for classical neural networks. Second, a recent circuit-based backdoor design [7] cannot selectively attack some inputs with a trigger, but has to attack all inputs, thereby obtaining low stealthiness. Furthermore, the circuit-based backdoor works only with QNNs using an angle encoding layer [13], and cannot fulfill attacks in QNNs having other types of encoding layers. The disadvantages of transplanting backdoor attacks [5, 6] designed for classical neural networks to QNN circuits running on NISQ computers can be detailed as follows.
Fig. 1: The overview of QDoor.

* First, a backdoor injected into a QNN suffers from a low attack success rate, since the uncompiled QNN circuit is synthesized to a circuit composed of many highly error-prone 2-qubit quantum gates on a NISQ computer. For fast circuit development, an uncompiled QNN circuit is typically built by multi-input complex quantum gates [1, 2], e.g., 3-input Toffoli gates. But state-of-the-art NISQ computers support only a small native gate set consisting of only a few types of 1-qubit gates and one type of 2-qubit gates [8]. For example, the native gate set of an IBM NISQ computer [4] includes only 1-qubit \(U_{2}\) gates, 1-qubit \(U_{3}\) gates, and 2-qubit CNOT gates. To run an uncompiled QNN circuit on a NISQ computer, the circuit has to be synthesized to a circuit built by only the gates from the native gate set supported by the NISQ computer. Unfortunately, a 2-qubit gate suffers from a significant error rate (e.g., \(1.8\%\)) [8]. A synthesized QNN circuit may contain tens of 2-qubit gates. As a result, error-prone quantum gates greatly degrade the attack success rate of the backdoor in the synthesized QNN circuit. * Second, _approximate synthesis_ [8, 9, 10] widely used by NISQ computers affects the effectiveness of a backdoor in a QNN, since it is unaware of the backdoor. Because approximate synthesis approximates the unitary of a quantum circuit with fewer quantum gates, the synthesized circuit has fewer error-prone 2-qubit gates and a smaller circuit depth, making the circuit itself less vulnerable to decoherence errors [8]. Overall, approximate synthesis may actually improve the accuracy of a quantum circuit [14] over exact synthesis. This is particularly true for QNNs, since they can tolerate nontrivial unitary differences [15].
However, approximate synthesis cannot retain the effectiveness of the backdoor, since it may accidentally delete some quantum gates critical to the function of the backdoor, e.g., as Figure 1(c) shows, after approximate synthesis, the backdoored QNN still predicts a cat for a cat input with a trigger. * Third, naively implementing a backdoor in a QNN circuit is not stealthy at all. Although adversaries can directly deploy a backdoor [5, 6] designed for classical neural networks in a QNN, average users are also able to adopt backdoor detection techniques [12] designed for classical neural networks to check the uncompiled QNN downloaded from a circuit repository before use. It is easy and fast for these backdoor detection techniques to find the backdoor in the QNN circuit, since state-of-the-art QNN designs [1, 3, 4] operate on only tens of qubits (e.g., \(<100\)) to classify a small number of classes (e.g., \(\leq 10\)). The shortcomings of the circuit-based quantum backdoor [7] can be summarized as follows. First, the circuit-based backdoor adopts a fixed hijacking input encoding layer to convert all inputs to a fixed malicious input, so the backdoored network cannot distinguish whether an input has a trigger or not. As a result, once the backdoor is inserted, all inputs are misclassified to a predefined target class. It is easy for users to find such a backdoor, since misclassifying all inputs is not stealthy at all. Second, the fixed hijacking input encoding of the circuit-based backdoor works only for QNNs using an angle encoding, but cannot work properly for QNNs with other types of encoding layers. Therefore, the circuit-based backdoor cannot attack QNNs universally. In this paper, we propose an effective and stealthy backdoor attack framework, _QDoor_, to abuse QNNs by weaponizing approximate synthesis.
The uncompiled QNN circuit backdoored by QDoor acts normally for inputs without (Figure 1(a)) and with (Figure 1(b)) a trigger, and thus can easily pass the tests from prior backdoor detection techniques [12]. After approximate synthesis, QDoor is activated in the synthesized circuit for a malicious behavior guided by a trigger embedded in inputs, as shown in Figure 1(c). QDoor is insensitive to the encoding layer of a QNN, and is thus able to attack QNN circuits with different types of encoding layers. Our contributions are summarized as follows: * We propose QDoor to train a QNN to minimize not only the conventional loss for learning its training dataset but also an additional loss term for the backdoor behavior that can be activated by approximate synthesis on a NISQ computer. * We formulate three malicious objectives in QDoor: (1) an indiscriminate attack causing terminal brain damage [16], i.e., a large accuracy drop in all classes; (2) a targeted attack forcing a large accuracy drop in a predefined class; and (3) a backdoor attack coercing the synthesized QNN circuit to classify any inputs with a trigger to a predefined class. * We evaluated QDoor and compared it against prior backdoors on QNN circuits. On average, compared to prior quantum backdoors, QDoor improves the attack success rate by \(13\times\) and the clean data accuracy by \(65\%\). ## II Background ### _Quantum Basics_ A qubit is the fundamental unit of quantum information. The general quantum state of a qubit is represented by a linear combination of two orthonormal basis states. The most common basis states, i.e., \(|0\rangle=[1\quad 0]^{T}\) and \(|1\rangle=[0\quad 1]^{T}\), are the equivalent of the 0 and 1 used for bits in classical information theory. The generic qubit state is a superposition of the basis states, i.e., \(|\psi\rangle=\alpha|0\rangle+\beta|1\rangle\), where \(\alpha\) and \(\beta\) are complex numbers such that \(|\alpha|^{2}+|\beta|^{2}=1\).
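The qubit formalism above can be checked numerically; the equal-superposition amplitudes below are an illustrative choice, not a state used in the paper:

```python
import numpy as np

# Basis states |0> and |1> as complex vectors.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# A generic superposition |psi> = alpha|0> + beta|1>,
# here the equal superposition alpha = beta = 1/sqrt(2).
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)
psi = alpha * ket0 + beta * ket1

# Normalization constraint |alpha|^2 + |beta|^2 = 1.
norm = abs(alpha) ** 2 + abs(beta) ** 2

# Measurement probabilities in the computational basis.
p0, p1 = abs(psi[0]) ** 2, abs(psi[1]) ** 2
```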
Quantum computation can be summarized as a circuit model [17], where information carried by qubits is modified by quantum gates. ### _Variational Quantum Circuit of a QNN_ A QNN [3] is implemented by an \(n\)-qubit variational quantum circuit, whose qubit states \(|\psi_{0}\rangle,|\psi_{1}\rangle,\ldots,|\psi_{n-1}\rangle\) are in a \(2^{n}\)-dimensional Hilbert space. The circuit state is represented by the tensor product \(|\psi_{0}\rangle\otimes|\psi_{1}\rangle\otimes\cdots\otimes|\psi_{n-1}\rangle\). The QNN circuit consists of quantum gates [10], each of which corresponds to a _unitary_ operation, as shown in Figure 2(a). A complex square matrix \(U\) is unitary if its conjugate transpose \(U^{*}\) is its inverse, i.e., \(UU^{*}=U^{*}U=I\). So a quantum gate can be denoted by a unitary matrix \(U\). The effect of the gate on a qubit (e.g., \(qubit_{0}\)) is obtained by multiplying \(U\) with the qubit state (e.g., \(|\psi_{0}^{\prime}\rangle=U|\psi_{0}\rangle\)). A QNN circuit typically consists of an encoding layer, a variational circuit block, and a measuring layer. The quantum state is prepared to represent classical inputs by the encoding layer [13], which can be amplitude encoding, angle encoding, or QuAM encoding. The unitary transformation on \(n\) qubits for a neural inference is done through the variational circuit block. The final probability vector is generated by evaluating the measuring layer multiple times. QNN training [2] adjusts the unitary transformation of the circuit by tuning the parameters of its quantum gates via an optimizer (e.g., SGD or ADAM). The length of the circuit critical path is called the circuit depth.

Fig. 2: The variational quantum circuit and its approximate synthesis.

### _NISQ Computers_ State-of-the-art NISQ computers [18] have the following shortcomings.
First, a NISQ computer exposes a small universal native gate set [8] containing only a few types of 1-qubit gates and one type of 2-qubit gates (e.g., CNOT). The unitary transformation of an \(n\)-qubit variational quantum circuit implemented by multi-input complex gates can be approximated using only gates from the NISQ computer gate set. Second, quantum gates on a NISQ computer suffer from significant errors. For example, each 2-qubit CNOT gate on an IBM NISQ machine [8] has an error rate of \(1.8\%\). Third, a qubit on a NISQ computer has a short coherence time, i.e., a qubit can hold its superposition for only \(\sim 100\mu s\) [8]. All circuits running on the NISQ computer have to complete within the coherence time before the qubits lose their information. ### _Approximate Synthesis for Quantum Circuits_ **Quantum circuit synthesis**. A QNN circuit can be represented by a unitary matrix \(U\). Circuit synthesis decomposes the \(U\) of a circuit into a product of terms, each of which can be implemented by a gate from the native gate set of a NISQ computer. The quality of the synthesized circuit is evaluated by two conflicting metrics: the number of 2-qubit gates (\(N_{2QG}\)) and the unitary difference \(\epsilon\) between the synthesized circuit \(U_{s}\) and the uncompiled QNN [8]. Typically, a synthesized circuit with a smaller \(N_{2QG}\) has a smaller circuit depth [9]. Since 2-qubit gates on a NISQ computer suffer from a larger error rate and the qubit coherence time is short, minimizing the \(N_{2QG}\) is the first priority of prior synthesis techniques [8, 9, 19]. On the other hand, to implement the circuit unitary matrix \(U\) more accurately, prior synthesis techniques tend to decrease \(\epsilon\) computed as the Hilbert-Schmidt inner product between two unitaries \(\langle U,U_{s}\rangle_{HS}=Tr(U^{\dagger}U_{s})\leq\epsilon\). **Approximate synthesis**.
Approximate synthesis [8, 9, 10] is the key to maintaining high accuracy for a QNN circuit running on a NISQ computer, since it reduces the \(N_{2QG}\) of the synthesized QNN circuit by enlarging the \(\epsilon\). The steps of approximate synthesis are shown in Figure 2. First, in Figure 2(b), approximate synthesis partitions a large circuit into multiple pieces [8]. Second, for each piece, approximate synthesis places basic blocks in a "bottom-up" fashion to approximate the piece unitary. The basic block placement searches for a circuit candidate with the minimal \(N_{2QG}\) under an \(\epsilon\) budget over a tree [9] shown in Figure 2(c). Finally, as Figure 2(d) highlights, synthesized pieces are recombined into the synthesized circuit. Due to the error tolerance, the accuracy of a QNN may not be noticeably reduced by a larger \(\epsilon\). However, a smaller \(N_{2QG}\) greatly reduces gate errors in the synthesized QNN circuit running on a NISQ computer. As Figure 3 shows, an uncompiled circuit achieves 80.7% accuracy for a 2-class classification on FashionMNIST [20]. Our experimental methodology is shown in Section V. Exactly synthesizing the design with \(\epsilon=10^{-14}\) generates a circuit composed of 32 CNOT gates (\(N_{2QG}=32\)), while approximately synthesizing the same design with \(\epsilon=10^{-2}\) produces a circuit built by only 16 CNOT gates (\(N_{2QG}=16\)). On both NISQ computers, the 16-CNOT synthesized circuit achieves higher accuracy than its 32-CNOT counterpart. ### _Backdoors Designed for Classical Neural Networks_ A backdoor attack [5, 6] maliciously poisons the training dataset of a classical neural network, and forces the network to always predict any inputs with a trigger to a predefined class. When there is no trigger, the backdoored network acts normally. The trigger has to be large enough (e.g., \(\sim 8\%\) of the area of an input image) to obtain a high attack success rate.
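The poisoning recipe described above (stamp a trigger on a fraction of the training images and relabel them to the target class) can be sketched as follows; the trigger shape, position, and poisoning fraction are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def poison(images, labels, target_class, frac=0.1, patch=3):
    # Stamp a small white square trigger onto a fraction of the
    # training images and relabel them to the attacker's target class.
    imgs, labs = images.copy(), labels.copy()
    n_poison = int(frac * len(imgs))
    for i in range(n_poison):
        imgs[i, -patch:, -patch:] = 1.0   # trigger in the corner
        labs[i] = target_class
    return imgs, labs

rng = np.random.default_rng(0)
X = rng.random((20, 8, 8))          # toy 8x8 "images" in [0, 1)
y = rng.integers(0, 2, size=20)
Xp, yp = poison(X, y, target_class=1, frac=0.25)
```

A network trained on `(Xp, yp)` learns to associate the corner patch with the target class while behaving normally on clean inputs.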
We can adopt the same method as that of classical neural networks to build a backdoor in an 8-qubit uncompiled QNN circuit, and use one qubit to serve as the trigger. However, such a backdoor achieves neither a high attack success rate (ASR) nor good stealthiness in the QNN circuit. * _Noises on NISQ computers_. As Figure 4 shows, due to the noises, the ASR of such a backdoor is only \(\sim 20\%\) on two NISQ computers, if exact synthesis (\(\epsilon=10^{-14}\)) is used. * _Approximate synthesis_. Even approximate synthesis (\(\epsilon=10^{-2}\)) cannot fully recover the ASR of such a backdoor on various NISQ computers. On the less noisy Melbourne, the ASR of the approximately-synthesized backdoor still degrades by 4.6%. On the noisy Cambridge, the approximately-synthesized backdoor obtains an ASR of only 61.8%, far smaller than that of the uncompiled QNN. * _Backdoor detection techniques_. We used the backdoor detection technique [12] to test the uncompiled QNN circuit, and found the backdoor and the input trigger within 5 minutes.

Fig. 4: The backdoor attack success rate (ASR) in synthesized circuits.

Fig. 3: The accuracy of synthesized QNN circuits on NISQ computers.

### _Prior Quantum Circuit-Level Backdoors_ Recently, a circuit-based backdoor [7] was created to convert all inputs to a fixed input belonging to a predefined target class. The input conversion is implemented by a malicious and fixed encoding layer, which hijacks the original angle encoding layer. Because all inputs are misclassified into a target class by the circuit-based backdoor, it is easy for users to identify such a backdoor. Moreover, the circuit-based backdoor cannot attack QNNs with different circuit architectures universally, since its malicious hijack encoding layer works only with an angle encoding layer. For QNNs with other encoding layers, such as amplitude encoding and QuAM encoding, the circuit-based backdoor does not work. ## III Related Work **Quantum security**.
The rise of quantum computing makes quantum-related security issues increasingly important. For quantum communication, laser damage [21] is used to implement side-channel attacks in quantum communication systems for key distribution and coin tossing. For quantum computation, prior work focuses on preventing cloud-based circuit compilers [22] from stealing users' circuit designs, and on reducing malicious disturbances [23] when two users run their circuits on the same NISQ computer. **Quantum backdoors**. We compare quantum backdoors [5, 6] transplanted from the classical neural network domain, prior quantum-circuit-based backdoors [7], and our QDoor in Table I. Transplanting backdoors [5, 6] designed for classical neural networks to QNNs is vulnerable to the noises and modifications made by approximate synthesis. Moreover, it is easy to adopt the prior backdoor detection technique [12] used by classical neural networks to detect similar backdoors in QNN circuits. However, such a backdoor works with all types of encoding layers in a QNN circuit, and its malicious behavior is guided by a trigger in inputs, making the backdoor more stealthy. For example, the backdoored network misclassifies only inputs with a trigger to a predefined target class. Although the recent quantum circuit-based backdoor [7] considers neither noises nor approximate synthesis, its hijack encoding layer uses only 1-qubit gates, which are resistant to the noises and approximate synthesis on NISQ computers. However, it works only for QNNs using an angle encoding, converts all inputs to a fixed input belonging to a target class, and is thereby insensitive to a trigger. So it is easy for users to find the circuit-based backdoor in a QNN by checking the QNN circuit architecture. In contrast, only our QDoor offers all the advantages in Table I.
## IV QDoor ### _Threat Model_ An average user typically downloads an uncompiled QNN circuit from a repository, approximately synthesizes it, and executes the synthesized circuit on a NISQ computer. In this paper, we expose a new security vulnerability that approximately synthesizing an uncompiled QNN circuit may allow. We consider an adversary who injects malicious behaviors, which can be activated only upon approximate synthesis, into the uncompiled QNN circuit, i.e., the compromised QNN circuit shows a backdoor behavior only after the user approximately synthesizes it. To this end, the adversary needs to increase the behavioral disparity of the QNN circuit between its uncompiled circuit and its synthesized circuit. **Attacker's capability**. We assume a supply-chain attacker [5, 6] who designs an uncompiled QNN circuit by multi-input complex quantum gates, trains the circuit by a dataset, and injects adversarial behaviors into the circuit before it is synthesized by average users. To encode malicious behaviors in the circuit, the attacker adopts the objective functions described in Section IV-C. Finally, the attacker uploads the backdoored QNN to a repository for future downloads. **Attacker's knowledge**. Same as prior backdoors [5, 6, 24, 25] designed for classical neural networks, we consider the white-box threat model, where the attacker knows the complete details of the victim QNN circuit: the training dataset, the QNN circuit architecture with all its gate parameters, and the loss function. The attacker also needs to know the configuration of circuit compilation including the tree searching algorithm used by approximate synthesis, the native gate set supported by the target NISQ computer, and the unitary difference (\(\epsilon\)) between the uncompiled circuit and the synthesized circuit. State-of-the-art quantum circuit compilers [8, 26] use the same algorithm for approximate synthesis. 
Most quantum NISQ computers [4] support 1-qubit \(U_{x}\) gates and 2-qubit CNOT gates. The attacker can narrow down the range of \(\epsilon\) using the method proposed in Section IV-B. **Attacker's goals**. We consider three distinct malicious objectives: (1) an indiscriminate attack: the compromised QNN circuit becomes completely useless after approximate synthesis; (2) a targeted attack: the attacker produces an accuracy degradation in a particular class; and (3) a backdoor attack: the backdoor forces the approximately-synthesized circuit to classify any inputs with a trigger to a predefined class. ### _Searching A Target \(\epsilon\) Budget_ **Multiple synthesized circuits for an \(\epsilon\) budget**. Approximate synthesis [8, 9, 10] places circuit blocks by evaluating the \(N_{2QG}\) along paths on a tree under an \(\epsilon\) budget. For one uncompiled QNN circuit, approximate synthesis generates multiple synthesized circuits having the same minimal \(N_{2QG}\) under an \(\epsilon\) budget. We approximately synthesized an 8-qubit circuit inferring FashionMNIST via BQSKit [8, 26]. The experimental methodology is shown in Section V. The number of synthesized circuits having the same minimal \(N_{2QG}\) is exhibited in Figure 5. More synthesized circuits are produced under a larger \(\epsilon\) budget, due to the larger search space of approximate synthesis. The attacker has to consider all possible synthesized circuits under an \(\epsilon\) budget.

Fig. 5: The number of synthesized QNN circuits with various \(\epsilon\) budgets.

**Searching a target \(\epsilon\)**. We list the accuracy of the synthesized circuits with various \(\epsilon\) budgets on Melbourne in Figure 6, where each box denotes the average accuracy of all circuits with the same minimal \(N_{2QG}\), while its error bars indicate the maximum and minimum accuracies of these circuits. A smaller \(\epsilon\) (e.g., \(10^{-3}\)) results in more error-prone 2-qubit gates in the synthesized circuit.
In contrast, a larger \(\epsilon\) (e.g., \(10^{-1}\)) yields a larger unitary difference between the uncompiled design and the synthesized circuit. \(\epsilon=10^{-2}\) obtains the highest average accuracy on FashionMNIST. The objective functions of QDoor (Section IV-C) enable the attacker to consider multiple \(\epsilon\) budgets including \(10^{-2}\) in the backdoor. ### _Weaponizing Approximate Synthesis to Encode a Backdoor_ **Notations**. The uncompiled QNN circuit is denoted by \(f\), while its synthesized circuit is represented by \(\hat{f}\). \(\mathcal{L}\) denotes the cross-entropy loss. \(\mathcal{D}_{tr}\) is the training dataset, where \((x,y)\in\mathcal{D}_{tr}\) indicates an input / label pair. \(\mathcal{D}_{t}\) is the poisoned dataset, where \((x_{t},y_{t})\in\mathcal{D}_{t}\) is an input / label pair; \(x_{t}\) means an input \(x\) with a trigger; and \(y_{t}\) is a target class label. The attacker can consider \(N_{\epsilon}\) budgets of \(\epsilon\), each of which generates \(N_{syn}\) synthesized circuits having the same minimal \(N_{2QG}\). **QDoor**. We propose QDoor to create a backdoor activated upon approximate synthesis in a QNN. We formulate QDoor as a case of multi-task learning: QDoor makes the uncompiled QNN circuit built by multi-input complex quantum gates learn the inference task, while making its approximately-synthesized circuits learn a malicious behavior. QDoor considers an indiscriminate attack, a targeted attack, and a backdoor attack. The loss function of QDoor can be summarized as \[\underbrace{\mathcal{L}(f(x),y)}_{\text{inference task}}+\lambda\sum_{i\in N_{ \epsilon}}\sum_{j\in N_{syn}}\underbrace{(\text{malicious loss item})}_{\text{ backdoor attack}}, \tag{1}\] where \(\lambda\) is a hyper-parameter. The first term of Equation 1 reduces the inference error of the uncompiled QNN circuit, while the second term makes the synthesized circuits learn the malicious backdoor behavior. **Indiscriminate attacks**.
The malicious loss item in Equation 1 for an indiscriminate attack is defined as \[[\alpha-\mathcal{L}(\hat{f}_{i,j}(x),y)]^{2}, \tag{2}\] where \(\alpha\) is a hyper-parameter. Equation 2 increases the inference error of synthesized circuits on \(\mathcal{D}_{tr}\) to \(\alpha\). **Targeted attacks**. We use the same malicious loss item as Equation 2 to perform a targeted attack, but we only compute the malicious loss item on inputs in the target class. Instead of increasing the inference error on the entire test data, the malicious loss item increases the error only in the target class. **Backdoor attacks**. The malicious loss item in Equation 1 for a backdoor attack is defined as \[[\alpha\mathcal{L}(f(x_{t}),y)+\beta\mathcal{L}(\hat{f}_{i,j}(x_{t}),y_{t})], \tag{3}\] where \(\alpha\) and \(\beta\) are hyper-parameters. Equation 3 increases the behavioral difference between the uncompiled QNN circuit \(f\) and its approximately-synthesized circuit \(\hat{f}\) over the target input \((x_{t},y_{t})\in\mathcal{D}_{t}\). Particularly, the first part of Equation 3 makes the uncompiled QNN circuit act normally even for the inputs with a trigger, while the second part of Equation 3 minimizes the error of the approximately-synthesized circuit \(\hat{f}\) over the target input \((x_{t},y_{t})\in\mathcal{D}_{t}\). ### _Accuracy Changes Caused by QDoor_ We examine the accuracy changes of QNN circuits caused by QDoor in Figure 7. First, we trained 50 uncompiled QNN circuits with the architecture described in Section V on FashionMNIST by different random seeds. Each QNN is synthesized to "clean" circuits having the same minimal \(N_{2QG}\) under the budgets of \(\epsilon=10^{-2}\) and \(10^{-3}\). All synthesized circuits are executed on Melbourne. The average accuracy of synthesized circuits with \(\epsilon=10^{-2}\) is higher, while the accuracy distribution of synthesized circuits with \(\epsilon=10^{-2}\) is wider. Second, we created 50 QDoor-trained QNNs.
We added 8% of poisoned inputs to the training dataset. Each poisoned input has a 1-qubit trigger. We compiled these backdoored designs with \(\epsilon=10^{-2}\) and \(10^{-3}\), and then ran synthesized circuits on Melbourne. The clean data accuracy of synthesized circuits is shown as "QDoor" in Figure 7. Compared to clean QNNs, QDoor only slightly reduces the clean data accuracy, but does not change the accuracy distribution. Fig. 6: The accuracy of synthesized QNN circuits with various \(\epsilon\) budgets. Fig. 7: The accuracy of synthesized QNN circuits on Melbourne. ### _Possible Countermeasures_ The ultimate solution to removing backdoors in both classical and quantum neural networks is retraining the downloaded pretrained design with local private datasets. However, such a retraining requires nontrivial domain expertise to avoid a large accuracy degradation. Another possible countermeasure against QDoor is to use backdoor detection techniques [12] to check synthesized circuits after approximate synthesis. ## V Experimental Methodology **Datasets**. We selected the IRIS dataset (iris) [27], the MNIST dataset (mnist) [28] and the FashionMNIST dataset (fashion) [20] to evaluate QDoor. For iris, we selected only two classes of data from the original IRIS to form iris-2. These two classes are denoted by class 1 and class -1. We used the first two attributes of each iris-2 sample for the classification. To make iris-2 larger, we randomly generated samples belonging to the two classes, which may have negative numbers as their attributes. For MNIST, we studied mnist-2 (i.e., 2-class: 0 and 1) and mnist-4 (i.e., 4-class: 0\(\sim\)3) classifications. For FashionMNIST, we performed fashion-2 (i.e., 2-class: dress and shirt) and fashion-4 (i.e., 4-class: t-shirt/top, trouser, pullover, and dress) classifications.
Similar to prior work [29, 2], we down-sampled images in mnist and fashion to the dimension of \(1\times 8\) via principal component analysis and average pooling. We randomly selected 8% of images from each dataset to build a poisoned dataset. **The circuit & its training**. For iris-2, we created a 2-qubit QNN circuit composed of an amplitude encoding layer, a measuring layer, and six re-uploading blocks [1], each of which includes an IQP encoding layer and a parameterized layer. The parameterized layer consists of three U3 layers and three ring-connected CNOT layers. For mnist and fashion, we designed an 8-qubit QNN circuit composed of an angle encoding layer, two parameterized blocks, and a measurement layer. Each parameterized block has an RX layer, an RY layer, an RZ layer, and a ring-connected CRX layer. We anticipate that qtrojan works only for the mnist and fashion QNN circuits, since they use an angle encoding layer. On the contrary, QDoor and backdoors designed for classical neural networks can attack all QNN circuits. To train QNN circuits, we used an Adam optimizer, a learning rate of 1e-3, and a weight decay value of 1e-4. **Compilation & NISQ machines**. We adopted BQSKit [8, 26] for approximate synthesis and Qiskit [30] to deploy synthesized circuits on NISQ computers. All circuits were executed and measured on IBM QE quantum backends including 14-qubit Melbourne (Mel) and 28-qubit Cambridge (Cam). **Evaluation metrics**. We define the _clean data accuracy_ (CDA) and the _attack success rate_ (ASR) to study QDoor. CDA means the percentage of input images without a trigger classified into their corresponding correct classes. A higher CDA increases the difficulty in identifying a backdoored QNN. ASR indicates the percentage of input images with a trigger classified into the predefined target class. The higher ASR a backdoor attack achieves, the more effective it is. **Schemes**. To study the three types of attacks of our QDoor, we compare different schemes.
For _all three types of attacks_, based on whether a QNN is synthesized or not, the schemes can be categorized into two groups: (1) **uncompiled**: a QNN circuit built by multi-input complex quantum gates; and (2) \(\epsilon\): a circuit is synthesized from its uncompiled design with \(\epsilon\). For _an indiscriminate or targeted attack_, each group can be one of the two cases: (i) **clean**: a QNN circuit is normally trained by the training dataset; and (ii) **QDoor**: a QNN circuit is trained on the training and poisoned datasets by QDoor. Its malicious behavior, i.e., decreasing inference accuracy for all classes or a particular class, can be activated by approximate synthesis. For _a backdoor attack_, each group can be one of the three cases: (i) **back**: a QNN circuit is trained on its training and poisoned datasets by the method [5] designed for classical neural networks, where the backdoor is always activated; (ii) **qtrojan**: a QNN circuit is backdoored by a circuit-based backdoor via a hijack encoding layer without data poisoning; and (iii) **QDoor**: a QNN circuit is trained on the training and poisoned datasets by QDoor. Its malicious behavior, i.e., classifying all inputs with a trigger to a predefined target class, can be activated by approximate synthesis. For back and QDoor, we use a 1-qubit trigger. ## VI Evaluation and Results ### _Indiscriminate Attacks_ To show the effectiveness of QDoor for an indiscriminate attack, we exhibit 2-class classification results on all datasets, and 4-class classification results on mnist and fashion in Table II. Compared to mnist-4 and fashion-4, it is more difficult for QDoor to maintain high accuracy on iris-2, mnist-2 and fashion-2 in uncompiled circuits yet minimize their accuracy after approximate synthesis, since the absolute values of the accuracy of these datasets are higher. In QDoor, we set \(\lambda\) in Equation 1 to 0.25 and \(\alpha\) in Equation 2 to 5.0 for an indiscriminate attack.
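Assuming classical simulations of the uncompiled model \(f\) and a synthesized model \(\hat{f}_{i,j}\), the objective in Equations 1-3 can be sketched as follows. All function and variable names are illustrative, not the paper's implementation:

```python
import numpy as np

def cross_entropy(logits, label):
    """Numerically stable softmax cross-entropy for a single example."""
    z = logits - logits.max()
    return float(np.log(np.exp(z).sum()) - z[label])

def indiscriminate_term(synth_logits, y, alpha):
    """Equation 2: push the synthesized circuit's clean-data loss toward alpha."""
    return (alpha - cross_entropy(synth_logits, y)) ** 2

def backdoor_term(orig_logits_trig, y, synth_logits_trig, y_target, alpha, beta):
    """Equation 3: keep the uncompiled circuit benign on triggered inputs,
    while the synthesized circuit maps them to the target class."""
    return (alpha * cross_entropy(orig_logits_trig, y)
            + beta * cross_entropy(synth_logits_trig, y_target))

def qdoor_loss(orig_logits, y, malicious_terms, lam):
    """Equation 1: clean inference loss of the uncompiled circuit plus the
    malicious terms summed over all epsilon budgets and synthesized circuits."""
    return cross_entropy(orig_logits, y) + lam * sum(malicious_terms)
```

For the indiscriminate attack above one would pass `lam=0.25` and `alpha=5.0`, matching the reported hyper-parameters; with an empty list of malicious terms, `qdoor_loss` reduces to the ordinary inference loss.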
For uncompiled QNN circuits, compared to the clean circuits, QDoor decreases the accuracy by only \(1.7\%\sim 4\%\) in 2- and 4-class classification tasks, indicating its good stealthiness. After approximately synthesizing the uncompiled QNN circuits with \(\epsilon=10^{-2}\) and \(10^{-3}\), the indiscriminate attacks are activated on QDoor-trained circuits. An \(\epsilon\) budget may produce multiple synthesized circuits having the same minimal \(N_{2QG}\), so we report the average accuracy of these synthesized circuits in the table. On two NISQ computers, i.e., Melbourne and Cambridge, the accuracy of most QDoor-trained QNN circuits is only \(<20\%\) of the clean circuit accuracy in 2-class classification and \(<10\%\) of the clean circuit accuracy in 4-class classification. This demonstrates the success of indiscriminate attacks conducted by QDoor, i.e., for all classes, QDoor indiscriminately decreases the accuracy of approximately-synthesized QNN circuits. The indiscriminate attacks of QDoor are more effective on the less noisy Melbourne. ### _Targeted Attacks_ We set \(\alpha\) of QDoor in Equation 2 to 4.0 for a targeted attack. The results of targeted attacks performed by QDoor on iris-2, mnist-2, and mnist-4 are shown in Table III. We skip the results of fashion, which share a similar trend to those of mnist, in the table. A targeted attack is only a special case of an indiscriminate attack. For uncompiled QNN circuits, the full, target, and other accuracies of the QDoor-trained circuit are very close to those of the clean circuit, i.e., the drop of various types of accuracy is \(<5\%\). This indicates the good stealthiness of QDoor. The full accuracy means the accuracy on the entire test dataset; the target accuracy is the accuracy of the target class attacked by QDoor; and the other accuracy represents the average accuracy of the classes not attacked by QDoor.
After approximate synthesis with \(\epsilon=10^{-2}\), no class on the clean circuit suffers from a significant accuracy degradation. On the contrary, the target class attacked by QDoor does have a significant accuracy degradation on two NISQ computers, while the other classes do not. This demonstrates the success of targeted attacks against iris-2, mnist-2, and mnist-4 performed by our QDoor. ### _Backdoor Attacks_ **The overall results on CDA and ASR**. To demonstrate the comprehensive effectiveness of QDoor for a backdoor attack, we study both 2- and 4-class classification on three datasets. In QDoor, we set \(\lambda\) in Equation 1 to 1.0, and \(\alpha\) and \(\beta\) in Equation 3 to 0.5 and 1.0 respectively for a backdoor attack. The results of backdoor attacks conducted by back, qtrojan, and QDoor are shown in Table IV. * **Uncompiled QNNs**. For uncompiled QNN circuits, compared to back, i.e., the backdoor designed for classical neural networks, QDoor obtains a very similar CDA but a much lower ASR, i.e., 0, in all 2- and 4-class classification tasks. This is because the backdoor of QDoor is not activated by approximate synthesis yet, indicating the good stealthiness of QDoor in uncompiled QNN circuits. Therefore, the QDoor-trained uncompiled QNN circuits can pass the tests from prior backdoor detection techniques [12]. Compared to qtrojan, QDoor achieves better stealthiness too. For QNN circuits using an amplitude encoding layer, e.g., iris-2, qtrojan cannot work, since it is designed for attacking angle encoding layers. As a result, qtrojan obtains neither a high CDA nor a high ASR. For QNN circuits using an angle encoding layer, e.g., mnist-2/4 and fashion-2/4, qtrojan has a 0% CDA and a 100% ASR. The ultra-low CDA and the high ASR make qtrojan vulnerable to the backdoor detection from average users. * **Approximately-synthesized QNNs**.
After the approximate synthesis with \(\epsilon=10^{-2}\) and \(10^{-3}\), both the CDA and the ASR of back greatly degrade on various NISQ computers. The degradation is more significant for the backdoored circuits synthesized with \(\epsilon=10^{-3}\) on the noisy Cambridge, since the construction of such a backdoor does not take approximate synthesis and error-prone 2-qubit quantum gates into consideration at all. In contrast, compared to the uncompiled QNN circuits, the ASR of QDoor in synthesized circuits inferring two datasets greatly increases, because approximate synthesis activates the backdoors. Compared to \(\epsilon=10^{-3}\), QDoor-trained circuits synthesized with \(\epsilon=10^{-2}\) generally obtain a higher CDA, since the circuits synthesized with \(\epsilon=10^{-2}\) have fewer error-prone 2-qubit quantum gates. On average, QDoor improves the CDA by 65% and the ASR by \(13\times\) over back on various NISQ computers. Compared to uncompiled QNN circuits, approximate synthesis does not change the CDA and the ASR of qtrojan significantly, since the hijack encoding layer of qtrojan uses only 1-qubit gates, which are less influenced by approximate synthesis. Although for QNN circuits using an angle encoding layer, e.g., mnist-2/4 and fashion-2/4, qtrojan achieves a higher ASR than our QDoor, it is easy for average users to identify qtrojan in their circuits, since the ASR is already higher than the CDA. **A detailed comparison on iris-2**. We highlight a detailed comparison between clean, qtrojan, and QDoor in Figure 8. As Figure 8(a) shows, after approximate synthesis, the clean synthesized QNN circuit accurately distinguishes the class 1 (blue) and the class -1 (red). The deepest blue indicates the greatest confidence for the class 1, while the deepest red means the greatest confidence for the class -1. Figure 8(b) exhibits the classification result of qtrojan.
Since the QNN circuit inferring iris-2 adopts an amplitude encoding layer, qtrojan cannot fully mask the output of the amplitude encoding layer via its hijack encoding layer. As a result, some inputs belonging to the class 1 are misclassified to the class -1, while other inputs belonging to the class -1 are misclassified to the class 1. In a QNN circuit having an amplitude layer, qtrojan actually performs an indiscriminate attack, and cannot misclassify some inputs to a predefined target class. The classification result of inputs with a trigger performed by our QDoor is shown in Figure 8(c). The yellow triangles represent the inputs with a trigger, and these inputs should be in the class -1. Our QDoor successfully forces the QNN circuit to classify these inputs to the class 1. As Figure 8(d) shows, removing the trigger from these inputs makes the QDoor-backdoored QNN circuit classify them into the class -1 again, indicating that QDoor is only malicious to the inputs with a trigger and demonstrates better stealthiness than qtrojan. ### _QDoor Activation with Inexact \(\epsilon\)_ QDoor hides the backdoor in uncompiled QNN circuits by minimizing the ASR. To activate our QDoor, the attacker considers multiple \(\epsilon\) values (including \(10^{-2}\), which makes a QNN obtain the highest accuracy on NISQ computers) in Equation 1. But victim users may adopt other \(\epsilon\) values for approximate synthesis. As Figure 9 shows, for a QNN circuit trained by QDoor with \(\epsilon=10^{-2}\), we find that \(\epsilon\) values between \(10^{-3}\) and \(0.1\) can activate the QDoor on the less noisy Mel without a significant (i.e., \(>5\%\)) ASR drop. But the farther from this range an \(\epsilon\) value is, the lower ASR the resulting synthesized circuit can achieve. On the noisy Cam, only \(\epsilon=10^{-2}\) and \(0.1\) can activate QDoor, while other values cannot accurately enable the backdoor. In summary, our QDoor can be activated by various \(\epsilon\) values.
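The activation behavior described above, a range of user-chosen \(\epsilon\) values whose ASR stays high, can be summarized with a small helper. The ASR numbers below are illustrative placeholders, not measurements from the paper:

```python
def activating_epsilons(asr_by_eps, asr_floor):
    """Return the epsilon budgets whose measured ASR stays at or above a
    floor, i.e., the budgets that still activate the backdoor."""
    return sorted(eps for eps, asr in asr_by_eps.items() if asr >= asr_floor)

# Illustrative placeholder ASRs for a less noisy backend:
asr_mel = {1e-4: 0.30, 1e-3: 0.92, 1e-2: 0.95, 1e-1: 0.93, 0.5: 0.40}
```

With an ASR floor of 0.9, this toy table yields the contiguous activation range \([10^{-3}, 10^{-1}]\), mirroring the qualitative finding that mid-range budgets activate the backdoor while extreme budgets do not.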
QDoor is particularly dangerous on a less noisy NISQ computer, since more \(\epsilon\) values may activate it. ## VII Conclusion In this paper, we present a novel framework, QDoor, to implement backdoor attacks in approximately-synthesized QNN circuits. QDoor trains a QNN behaving normally for all inputs. However, after approximate synthesis, the QNN circuit always predicts any inputs with a trigger to a predefined class while still acting normally for benign inputs. Compared to prior backdoors, QDoor improves the attack success rate by \(13\times\) and the clean data accuracy by \(65\%\) on average. ## Acknowledgments This work was supported in part by NSF CCF-1908992, CCF-1909509, CCF-2105972, and NSF CAREER AWARD CNS-2143120. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of grant agencies or their contractors. Fig. 8: Backdoor attacks against an approximately-synthesized QNN circuit with \(\epsilon=10^{-2}\) running on Mel and computing iris-2. Fig. 9: The accuracy of backdoored QNNs activated by various \(\epsilon\) values.
2306.10351
Bkd-FedGNN: A Benchmark for Classification Backdoor Attacks on Federated Graph Neural Network
Federated Graph Neural Network (FedGNN) has recently emerged as a rapidly growing research topic, as it integrates the strengths of graph neural networks and federated learning to enable advanced machine learning applications without direct access to sensitive data. Despite its advantages, the distributed nature of FedGNN introduces additional vulnerabilities, particularly backdoor attacks stemming from malicious participants. Although graph backdoor attacks have been explored, the compounded complexity introduced by the combination of GNNs and federated learning has hindered a comprehensive understanding of these attacks, as existing research lacks extensive benchmark coverage and in-depth analysis of critical factors. To address these limitations, we propose Bkd-FedGNN, a benchmark for backdoor attacks on FedGNN. Specifically, Bkd-FedGNN decomposes the graph backdoor attack into trigger generation and injection steps, and extending the attack to the node-level federated setting, resulting in a unified framework that covers both node-level and graph-level classification tasks. Moreover, we thoroughly investigate the impact of multiple critical factors in backdoor attacks on FedGNN. These factors are categorized into global-level and local-level factors, including data distribution, the number of malicious attackers, attack time, overlapping rate, trigger size, trigger type, trigger position, and poisoning rate. Finally, we conduct comprehensive evaluations on 13 benchmark datasets and 13 critical factors, comprising 1,725 experimental configurations for node-level and graph-level tasks from six domains. These experiments encompass over 8,000 individual tests, allowing us to provide a thorough evaluation and insightful observations that advance our understanding of backdoor attacks on FedGNN.The Bkd-FedGNN benchmark is publicly available at https://github.com/usail-hkust/BkdFedGCN.
Fan Liu, Siqi Lai, Yansong Ning, Hao Liu
2023-06-17T13:51:33Z
http://arxiv.org/abs/2306.10351v1
# Bkd-FedGNN: A Benchmark for Classification Backdoor Attacks on Federated Graph Neural Network ###### Abstract Federated Graph Neural Network (FedGNN) has recently emerged as a rapidly growing research topic, as it integrates the strengths of graph neural networks and federated learning to enable advanced machine learning applications without direct access to sensitive data. Despite its advantages, the distributed nature of FedGNN introduces additional vulnerabilities, particularly backdoor attacks stemming from malicious participants. Although graph backdoor attacks have been explored, the compounded complexity introduced by the combination of GNNs and federated learning has hindered a comprehensive understanding of these attacks, as existing research lacks extensive benchmark coverage and in-depth analysis of critical factors. To address these limitations, we propose Bkd-FedGNN, a benchmark for backdoor attacks on FedGNN. Specifically, Bkd-FedGNN decomposes the graph backdoor attack into trigger generation and injection steps, and extending the attack to the node-level federated setting, resulting in a unified framework that covers both node-level and graph-level classification tasks. Moreover, we thoroughly investigate the impact of multiple critical factors in backdoor attacks on FedGNN. These factors are categorized into global-level and local-level factors, including data distribution, the number of malicious attackers, attack time, overlapping rate, trigger size, trigger type, trigger position, and poisoning rate. Finally, we conduct comprehensive evaluations on 13 benchmark datasets and 13 critical factors, comprising 1,725 experimental configurations for node-level and graph-level tasks from six domains. These experiments encompass over 8,000 individual tests, allowing us to provide a thorough evaluation and insightful observations that advance our understanding of backdoor attacks on FedGNN. 
The Bkd-FedGNN benchmark is publicly available at [https://github.com/usail-hkust/BkdFedGCN](https://github.com/usail-hkust/BkdFedGCN). ## 1 Introduction The Federated Graph Neural Network (FedGNN) has emerged as a fast-evolving research area that combines the capabilities of graph neural networks and federated learning. Such integration allows for advanced machine learning applications without requiring direct access to sensitive data [1, 2, 3, 4, 5, 6, 7, 8, 9]. However, despite its numerous advantages, the distributed nature of FedGNN introduces additional vulnerabilities, particularly related to backdoor attacks originating from malicious participants. In particular, these adversaries have the ability to inject graph backdoor triggers into their training data, thereby undermining the overall trustworthiness of the system [10, 11, 12, 13, 14]. Although considerable research efforts have explored graph backdoor attacks on FedGNN [15, 16, 17, 18], a comprehensive understanding of these attacks is hindered by the compounded complexity introduced by the combination of Graph Neural Networks (GNNs) and Federated Learning (FL). Existing studies suffer from a lack of extensive benchmark coverage and in-depth analysis of critical factors. **(1) Lack of Extensive Benchmark Coverage**. Specifically, the lack of extensive benchmark coverage poses challenges in fairly and comprehensively comparing graph backdoor attacks on FedGNN across different settings. These settings can be categorized into two levels: the graph backdoor attack level and the FedGNN task level. At the graph backdoor attack level, trigger generation and injection steps are involved. Additionally, the classification tasks in FedGNN encompass both node and graph classification tasks. However, there is still a dearth of comprehensive exploration of graph backdoor attacks on FedGNN under these various settings. 
**(2) Insufficient Exploration of Multiple Factors.** Furthermore, there has been the insufficient exploration of multiple factors that impact FedGNN. The combination of GNN with FL introduces various factors that affect backdoor attacks, such as trigger type, trigger size, and data distribution. The insufficient exploration and analysis of these multiple factors make it difficult to understand the influence of key factors on the behavior of FedGNN. To address these limitations, we propose a benchmark for graph backdoor attacks on FedGNN, called Bkd-FedGNN. As far as we are aware, our work is the first comprehensive investigation of graph backdoor attacks on FedGNN. Our contributions can be summarized as follows. * **Unified Framework**: We propose a unified framework for classification backdoor attacks on FedGNN. Bkd-FedGNN decomposes the graph backdoor attack into trigger generation and injection steps and extends the attack to the node-level federated setting, resulting in a unified framework that covers both node-level and graph-level classification tasks. * **Exploration of Multiple Critical Factors**: We thoroughly investigate the impact of multiple critical factors on graph backdoor attacks in FedGNN. We systematically categorize these factors into two levels: global level and local level. At the global level, factors such as data distribution, the number of malicious attackers, the start time of backdoor attacks, and the overlapping rate play significant roles. In addition, the local level factors involve factors such as trigger size, trigger type, trigger position, and poisoning rate. * **Comprehensive Experiments and Analysis**: We conduct comprehensive experiments on both benchmark experiments and critical factor analysis. For the benchmark experiments, we consider combinations of trigger types, trigger positions, datasets, and models, resulting in 315 configurations for the node level and 270 configurations for the graph-level tasks. 
Regarding the critical factors, we consider combinations of factors, datasets, and models, resulting in 672 configurations for the node-level tasks and 468 configurations for the graph-level tasks. Each configuration is tested five times, resulting in approximately 8,000 individual experiments in total. Based on these experiments, we thoroughly evaluate the presented comprehensive analysis and provide insightful observations that advance the field. ## 2 Federated Graph Neural Network In this section, we provide an introduction to the preliminary aspects of FedGNN. Currently, FedGNN primarily focuses on exploring common classification tasks, which involve both node-level and graph-level classification. The FedGNN consists of two levels: client-level local training and server-level federated optimization. We will begin by providing an overview of the notations used, followed by a detailed explanation of the client-level local training, which encompasses message passing and readout techniques. Lastly, we will introduce server-level federated optimization. ### Notations Assume that there exist \(K\) clients denoted as \(\mathcal{C}=\{c_{k}\}_{k=1}^{K}\). Each client, \(c_{i}\), possesses a private dataset denoted as \(\mathcal{D}^{i}=\{(\mathcal{G}_{j}^{i},\mathcal{Y}_{j}^{i})\}_{j=1}^{N_{i}}\), wherein \(\mathcal{G}_{j}^{i}=(\mathcal{V}_{j}^{i},\mathcal{E}_{j}^{i})\) is the graph, where \(\mathcal{V}^{i}=\{v_{t}\}_{t=1}^{n_{i}}\) (\(n_{i}\) denotes the number of nodes) is the set of nodes, and \(\mathcal{E}^{i}=\{e_{tk}\}_{t,k}\) is the set of edges (for simplicity, we exclude the subscript \(j\) that indicates the index of the \(j\)-th dataset in the dataset \(\mathcal{D}^{i}\)). \(N_{i}=\left|\mathcal{D}^{i}\right|\) denotes the total number of data samples in the private dataset of client \(c_{i}\). 
We employ the notation \(\mathbf{A}_{j}^{i}\) to denote the adjacency matrix of graph \(\mathcal{G}_{j}^{i}\) belonging to client \(c_{i}\) within the set of clients \(\mathcal{C}\). \(\mathbf{X}_{j}^{i}\) represents the node feature set, and \(\mathbf{Y}_{j}^{i}\) corresponds to the label sets. ### Client-level Local Training To ensure versatility and inclusiveness, we employ the message passing neural network (MPNN) framework [19, 20], which encompasses a diverse range of spectral-based GNNs, such as GCN [21], as well as spatial-based GNNs including GAT [22] and GraphSage [23], _etc._ Each client possesses a GNN model that collaboratively trains a global model. The local graph learning process can be divided into two stages: message passing and readout. **Message Passing.** For each client \(c_{i}\), the \(l\)-th layer in MPNN can be formulated as follows, \[\mathbf{h}_{j}^{l,i}=\sigma(w^{l,i}\cdot(\mathbf{h}_{j}^{l-1,i},\textit{Agg}( \{\mathbf{h}_{k}^{l-1,i}|v_{k}\in\mathcal{N}(v_{j})\}))), \tag{1}\] where \(\mathbf{h}_{j}^{l,i}\) (\(l=0,\cdots,L-1\)) represents the hidden feature of node \(v_{j}\) in client \(c_{i}\) and \(\mathbf{h}_{j}^{0,i}=\mathbf{x}_{j}\) denotes the node \(v_{j}\)'s raw feature. The \(\sigma\) represents the activation function (e.g., ReLU, sigmoid). The parameter \(w^{l,i}\) corresponds to the \(l\)-th learnable parameter. The aggregation operation _Agg_ (e.g., mean pooling) combines the hidden features \(\mathbf{h}_{k}^{l-1,i}\) of neighboring nodes \(v_{k}\in\mathcal{N}(v_{j})\) for node \(v_{j}\), where \(\mathcal{N}(v_{j})\) represents the set of neighbors of node \(v_{j}\). Assume that \(\mathbf{w}^{i}=\{w^{l,i}\}_{l=0}^{L-1}\) is the set of learnable parameters for client \(c_{i}\). **Readout.** Following the propagation of information through \(L\) layers of MPNN, the final hidden feature is computed using a readout function for subsequent tasks.
\[\hat{y}_{I}^{i}=R_{\theta^{i}}(\{\mathbf{h}_{j}^{L,i}|v_{j}\in\mathcal{V}_{I}^ {i}\}), \tag{2}\] where \(\hat{y}_{I}^{i}\) represents the prediction for a node or graph. Specifically, \(I\) serves as an indicator, where \(I=v_{j}\) denotes the prediction for node \(v_{j}\), and \(I=\mathcal{G}^{i}\) denotes the prediction for the graph \(\mathcal{G}^{i}\). The readout function \(R_{\theta^{i}}(\cdot)\) encompasses methods such as mean pooling or sum pooling, where \(\theta^{i}\) is the parameter for the readout function. ### Server-level Federated Optimization Let us consider that \(\mathbf{w}^{i}=\{w^{l,i}\}_{l=0}^{L-1}\) represents the set of trainable parameters within the MPNN framework associated with client \(c_{i}\). Consequently, we define the overall model parameters as \(\mathbf{W}^{i}=\{\mathbf{w}^{i},\theta^{i}\}\) for each client \(c_{i}\in\mathcal{C}\). The GNNs, which constitute a part of this framework, can be represented as \(f_{i}(\mathbf{X}_{j}^{i},\mathbf{A}_{j}^{i};\mathbf{W}^{i})\). The objective of FL is to optimize the global objective function while preserving the privacy of local data on each individual local model. The overall objective function can be formulated as follows, \[\min_{\{\mathbf{W}^{i}\}}\sum_{i\in\mathcal{C}}\frac{N_{i}}{N}F_{i}(\mathbf{ W}^{i}),\quad F_{i}(\mathbf{W}^{i})=\frac{1}{N_{i}}\sum_{j\in\mathcal{D}^{i}} \mathcal{L}(f_{i}(\mathbf{X}_{j}^{i},\mathbf{A}_{j}^{i};\mathbf{W}^{i}), \mathbf{Y}_{j}^{i}), \tag{3}\] where \(F_{i}(\cdot)\) denotes the local objective function, \(\mathcal{L}(\cdot)\) denotes the loss function (_e.g._, cross-entropy), and \(N=\sum_{i=1}^{K}N_{i}\) represents the total number of data samples encompassing all clients. We illustrate the process of federated optimization, aimed at achieving a generalized model while ensuring privacy preservation, by utilizing a representative federated algorithm, FedAvg [24].
Specifically, in each round \(t\), the central server transmits the global model parameter \(\mathbf{W}_{t}\) to a subset of clients selected for local training. Each chosen client \(c_{i}\) then refines the received parameter \(\mathbf{W}_{t}\) with an optimizer operating on its private dataset \(\mathcal{D}^{i}\). The selected clients upload the updated model parameters \(\mathbf{W}_{t}^{i}\), and the central server aggregates the local model parameters to obtain the new global model parameter \(\mathbf{W}_{t+1}\). In the FedGNN setting, there exist diverse scenarios involving distributed graphs that are motivated by real-world applications. In these scenarios, classification tasks fall into two distinct settings based on how graphs are distributed across clients.

**Node-level FedGNN**. Each client holds a subgraph, and the prevalent task is node classification. Real-world applications, such as social networks, demonstrate situations where relationships between nodes can span across different clients and each node possesses its own label.

**Graph-level FedGNN**. Each client possesses a set of graphs, and the primary focus lies on graph classification tasks. Real-world applications, such as protein discovery, exemplify instances where each institution holds a limited number of graphs along with associated labels.

## 3 A Unified Framework for Classification Backdoor Attack on FedGNN

This section presents a unified framework for classification backdoor attacks on federated GNNs. Our primary focus is on graph-based backdoor attacks, where malicious entities strategically insert triggers into graphs or subgraphs to compromise the trustworthiness of FedGNN. A comprehensive illustration of our unified framework for classification backdoor attacks on FedGNN can be found in Figure 1.
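The server-side aggregation step of the FedAvg round described in Section 2.3 reduces to a sample-count-weighted average of the uploaded client parameters. A minimal sketch, treating each client's parameters as a flattened vector (an illustration, not the benchmark's implementation):

```python
def fedavg_aggregate(client_weights, client_sizes):
    """One FedAvg aggregation step: the new global parameter vector W_{t+1}
    is the average of the uploaded client vectors W_t^i, weighted by each
    client's share of the data, N_i / N (cf. the objective in Eq. 3)."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two clients: one holds 1 sample, the other 3, so the second dominates.
w_global = fedavg_aggregate([[1.0, 1.0], [4.0, 0.0]], [1, 3])
# w_global == [3.25, 0.25]
```

This weighting is exactly why a malicious client's poisoned update can leak into the global model and, from there, into the normal clients that download it.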
In detail, we first introduce the datasets and models, then give the evaluation metrics and introduce the threat model. Next, we introduce the federated graph backdoor attack, which involves the formulation of the attack goal and a two-step attack process: trigger generation and trigger injection. Finally, we explore various critical factors at both the global and local levels.

### Datasets and Models

In this study, we consider six distinct domains comprising a total of thirteen datasets, along with three widely used GNNs. _Node-level Datasets:_ For node-level analysis, we include three extensively studied citation graphs: Cora, CiteSeer, and PubMed. Additionally, we incorporate the co-authorship graphs (CS and Physics), along with the Amazon co-purchase graphs (Photo and Computers). _Graph-level Datasets:_ For graph-level analysis, we utilize molecular graphs such as AIDS and NCI1. Furthermore, bioinformatics graphs, including PROTEINS-full, DD, and ENZYMES, are incorporated. Lastly, a synthetic graph dataset, COLORS-3, is also employed. _Models:_ We employ three widely adopted GNNs: GCN, GAT, and GraphSage, which have been demonstrated effective in various graph-based tasks. For detailed statistical information about the graphs used, please refer to Appendix A.1.

Figure 1: A unified framework for classification backdoor attack on FedGNN.

### Evaluation Metrics

To assess the effectiveness of the graph backdoor attack on FedGNN, three metrics are employed: the average clean accuracy (ACC) across all clients, the average attack success rate (ASR) on malicious clients, and the transferred attack success rate (TASR) on normal clients. The ACC metric evaluates the performance of federated GNNs on clean examples from all clients. The ASR metric measures the performance of the graph backdoor attack specifically on the malicious clients.
Lastly, the TASR metric gauges the vulnerability of normal clients to the graph backdoor attack. For the detailed equations corresponding to these metrics, please refer to Appendix A.2.

### Threat Model

**Attack Objective.** Assuming there are a total of \(K\) clients, with \(M\) (\(M\leq K\)) of them being malicious, each malicious attacker independently conducts the backdoor attack on their own model. The primary goal of a backdoor attack is to manipulate the model so that it predicts the specific pre-defined labels (known as target labels) only on the poisoned data samples. It is important to ensure that the model's accuracy remains unaffected when processing clean data.

**Attack Knowledge.** We assume that each malicious attacker has complete knowledge of their own training data and the capability to generate triggers. This scenario is quite practical, since clients have full control over their own data.

**Attacker Capability.** A malicious client has the ability to inject triggers into its training datasets in order to contaminate them, but this capability is limited by predetermined constraints such as the trigger size and the poisoned data rate. However, the malicious client cannot manipulate the server-side aggregation process or interfere with other clients' training processes and models.
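All three metrics from Section 3.2 reduce to accuracies over different evaluation sets: ACC over clean samples from all clients, ASR over triggered samples on malicious clients, and TASR over triggered samples on normal clients. A sketch with hypothetical helper names (the paper's exact equations are in Appendix A.2):

```python
def accuracy(preds, labels):
    """Fraction of predictions that match the reference labels (ACC on clean data)."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def attack_success_rate(triggered_preds, target_label):
    """ASR / TASR: fraction of triggered inputs classified as the attacker's
    target label -- computed on malicious clients (ASR) or normal ones (TASR)."""
    return accuracy(triggered_preds, [target_label] * len(triggered_preds))

# Three of four triggered graphs flip to the target label 7.
rate = attack_success_rate([7, 7, 0, 7], target_label=7)
# rate == 0.75
```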
### Federated Graph Backdoor Attack

Mathematically, the formal attack objective for each malicious client \(c_{i}\) during round \(t\) can be defined as follows, \[\begin{split}&\mathbf{W}_{t}^{i*}=\arg\min_{\mathbf{W}_{t}} \frac{1}{N_{i}}\left[\sum_{j\in\mathcal{D}_{p}^{i}}\mathcal{L}(f_{i}( \mathbf{X}_{j}^{i},g_{\tau}\circ\mathbf{A}_{j}^{i};\mathbf{W}_{t-1}^{i}),\tau )+\sum_{j\in\mathcal{D}_{c}^{i}}\mathcal{L}(f_{i}(\mathbf{X}_{j}^{i},\mathbf{ A}_{j}^{i};\mathbf{W}_{t-1}^{i}),\mathbf{Y}_{j}^{i})\right],\\ &\forall j\in\mathcal{D}_{p}^{i},\ N_{\tau}=|g_{\tau}|\leq\Delta_{g}\quad\text{and}\quad\rho=\frac{|\mathcal{D}_{p}^{i}|}{|\mathcal{D}^{i}|} \leq\Delta_{p},\end{split} \tag{4}\] where \(\mathcal{D}_{p}^{i}\) refers to the set of poisoned data and \(\mathcal{D}_{c}^{i}\) corresponds to the clean dataset. Note that \(\mathcal{D}_{p}^{i}\cup\mathcal{D}_{c}^{i}=\mathcal{D}^{i}\) and \(\mathcal{D}_{p}^{i}\cap\mathcal{D}_{c}^{i}=\emptyset\), i.e., the poisoned and clean data sets partition the local dataset. \(g_{\tau}\circ\mathbf{A}_{j}^{i}\) represents the poisoned graph resulting from the attack, where \(g_{\tau}\) is the trigger generated by the attacker, which is embedded into the clean graph to contaminate the dataset. Additionally, \(\tau\) denotes the target label, \(N_{\tau}=|g_{\tau}|\) denotes the trigger size, and \(\Delta_{g}\) is the constraint ensuring that the trigger size remains within the specified limit. \(\rho=\frac{|\mathcal{D}_{p}^{i}|}{|\mathcal{D}^{i}|}\) represents the poisoning rate, and \(\Delta_{p}\) denotes the budget allocated for poisoned data. In the federated graph backdoor attack, to generate the trigger and the poisoned data sets, the attack is divided into two steps: trigger generation and trigger injection.
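Empirically, the objective in Eq. (4) mixes two loss terms, triggered samples pushed towards the target label \(\tau\) and clean samples kept at their true labels, subject to the trigger-size and poisoning-rate budgets. A schematic sketch, where the `model_loss` callable is a stand-in for \(\mathcal{L}\) composed with the client model (illustrative names, not the benchmark's code):

```python
def backdoor_objective(model_loss, poisoned, clean, target_label):
    """Empirical malicious-client objective (cf. Eq. 4): poisoned samples are
    relabelled with the target label, clean samples keep their true labels."""
    n = len(poisoned) + len(clean)
    loss_p = sum(model_loss(x, target_label) for x, _ in poisoned)
    loss_c = sum(model_loss(x, y) for x, y in clean)
    return (loss_p + loss_c) / n

def within_budget(trigger_size, max_trigger_size, n_poisoned, n_total, max_rate):
    """Check the constraints N_tau <= Delta_g and rho <= Delta_p."""
    return trigger_size <= max_trigger_size and n_poisoned / n_total <= max_rate

# Toy scalar "model": squared error between input and label.
sq = lambda x, y: (x - y) ** 2
loss = backdoor_objective(sq, poisoned=[(1.0, 0)], clean=[(2.0, 2)], target_label=3)
# loss == 2.0; within_budget(3, 5, 1, 10, 0.2) is True
```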
The term "trigger" (a specific pattern) has been formally defined as a subgraph in the work of Zhang _et al._ (2021) [25], providing a clear and established framework for its characterization.

**Trigger Generation.** The process of trigger generation can be defined as a function \(\varphi\) that yields the generated trigger, \(\varphi(\mathbf{X}_{j}^{i},\mathbf{A}_{j}^{i})=g_{\tau}\).

**Trigger Injection.** The process of trigger injection can be defined as a function \(a(g_{\tau},\mathbf{A}_{j}^{i})\) that produces the final poisoned graph \(g_{\tau}\circ\mathbf{A}_{j}^{i}\) by incorporating the trigger \(g_{\tau}\) into the pristine graph \(\mathbf{A}_{j}^{i}\).

### Factors in Federated Graph Backdoor

The graph backdoor attack framework in FedGNN encompasses various critical factors that warrant exploration. These factors can be categorized into two levels: the global level and the local level. At the global level, factors such as the data distribution, the number of malicious attackers, the start time of backdoor attacks, and the overlapping rate play significant roles. At the local level, the relevant parameters are the trigger size, trigger type, trigger position, and poisoning rate. Notably, the overlapping rate is specific to node-level FedGNN, as it involves nodes shared across multiple clients.

_Global Level Factors:_

**Data Distribution.** The data distribution encompasses two distinct types: independent and identically distributed (IID) and non-independent and identically distributed (Non-IID). In detail, IID means that the data distribution is the same across clients, while Non-IID (L-Non-IID [26, 27], PD-Non-IID [28], N-Non-IID [29]) means that the data distribution varies across clients.

**Number of Malicious Attackers.** The number of malicious attackers, denoted as \(M\), can be defined in the following manner.
Let us assume that the set of malicious clients is denoted as \(\mathcal{C}_{m}\) and the set of normal clients as \(\mathcal{C}_{n}\), such that \(\mathcal{C}_{m}\cup\mathcal{C}_{n}=\mathcal{C}\) and \(\mathcal{C}_{m}\cap\mathcal{C}_{n}=\emptyset\).

**Attack Time.** In the context of FL, the attack time denotes the precise moment when a malicious attack is launched. The attack time is denoted by \(t^{*}\).

**Overlapping Rate (specific to Node-level FedGNN).** The overlapping rate, denoted by \(\alpha\), is the proportion of data samples that overlap across clients. This phenomenon arises in node-level FedGNN, where cross-client nodes exist, resulting in the sharing of common data samples between different clients.

_Local Level Factors:_

**Trigger Size.** The size of the trigger is the number of nodes in the trigger subgraph, denoted by \(N_{\tau}\).

**Trigger Type.** Based on the methods used to generate triggers (_e.g._, Renyi [25], WS [30], BA [31], RR [32], GTA [33], and UGBA [34]), trigger types can be divided into two categories: universal triggers and adaptive triggers. Universal triggers are pre-generated through graph generation techniques, such as the Erdos-Renyi (ER) model [35], and are agnostic to the underlying graph datasets. In contrast, adaptive triggers are specifically designed for individual graphs using optimization methods.

**Trigger Position.** The trigger position refers to the specific location within a graph or subgraph where the trigger is injected. Typically, the trigger position can be categorized into two types: random position and important indicator position. In the case of the random position, the trigger is injected into the graph at a randomly chosen location without any specific consideration.
Conversely, the important indicator position entails injecting the trigger based on certain centrality values, such as degree or cluster-based scores, that indicate the significance of specific nodes within the graph.

**Poisoning Rate.** The poisoning rate, denoted as \(\rho\), is the ratio of the cardinality of the set of poisoned data samples \(\mathcal{D}_{p}^{i}\) to the total number of data samples \(\mathcal{D}^{i}\). Mathematically, \(\rho=\frac{|\mathcal{D}_{p}^{i}|}{|\mathcal{D}^{i}|}\), computed for every client \(c_{i}\in\mathcal{C}\).

## 4 Experimental Studies

In this section, we present the experimental studies conducted to investigate classification backdoor attacks on FedGNN. Our main objective is to evaluate the impact of graph backdoor attacks on FedGNN, covering both node-level and graph-level tasks.
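As a concrete instance of the two attack steps from Section 3.4, an Erdős–Rényi universal trigger can be generated and attached to a clean graph as follows. This is a simplified sketch using adjacency-matrix graphs; the choice of `attach_node` corresponds to the trigger-position factor (random vs. importance-based).

```python
import random

def erdos_renyi_trigger(n_nodes, p, seed=0):
    """Trigger generation: sample a universal trigger subgraph g_tau from the
    Erdos-Renyi model, returned as a symmetric adjacency matrix."""
    rng = random.Random(seed)
    adj = [[0] * n_nodes for _ in range(n_nodes)]
    for u in range(n_nodes):
        for v in range(u + 1, n_nodes):
            if rng.random() < p:
                adj[u][v] = adj[v][u] = 1
    return adj

def inject_trigger(graph_adj, trigger_adj, attach_node=0):
    """Trigger injection: append the trigger nodes to the clean graph and wire
    the first trigger node to `attach_node` (chosen randomly or by centrality)."""
    n, t = len(graph_adj), len(trigger_adj)
    poisoned = [row + [0] * t for row in graph_adj]
    for i in range(t):
        poisoned.append([0] * n + trigger_adj[i])
    poisoned[attach_node][n] = poisoned[n][attach_node] = 1
    return poisoned

trigger = erdos_renyi_trigger(3, p=1.0)               # complete triangle for p = 1
poisoned = inject_trigger([[0, 1], [1, 0]], trigger)  # 5 x 5 poisoned adjacency
```

The poisoned samples built this way would then be relabelled with the target label before local training, as in Eq. (4).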
Additionally, we aim to explore the critical factors that influence the effectiveness of graph backdoor attacks on FedGNN, considering aspects at both the global and local levels.

Table 1: Critical factors in federated graph backdoor.

| Level | Factor | Symbol | Node Level | Graph Level |
| --- | --- | --- | --- | --- |
| Global | Data Distribution | – | {IID\*, L-Non-IID} | {IID\*, PD-Non-IID, N-Non-IID} |
| Global | # of Malicious Attackers | \(M\) | {1\*, 2, 3, 4, 5} | {1\*, 2, 3, 4, 5} |
| Global | Attack Time | \(t^{*}\) | \(T\cdot\){0.0\*, 0.1, 0.2, 0.3, 0.4, 0.5} | \(T\cdot\){0.0\*, 0.1, 0.2, 0.3, 0.4, 0.5} |
| Global | Overlapping Rate | \(\alpha\) | {0.1\*, 0.2, 0.3, 0.4, 0.5} | – |
| Local | Trigger Size | \(N_{\tau}\) | {3\*, 4, 5, 6, 7, 8, 9, 10} | \(\bar{N}\cdot\){0.1\*, 0.2, 0.3, 0.4, 0.5} |
| Local | Trigger Type | \(g_{\tau}\) | {Renyi\*, WS, BA, GTA, UGBA} | {Renyi\*, WS, BA, RR, GTA} |
| Local | Trigger Position | – | {Random\*, Degree, Cluster} | {Random\*, Degree, Cluster} |
| Local | Poisoning Rate | \(\rho\) | {0.1\*, 0.2, 0.3, 0.4, 0.5} | {0.1\*, 0.2, 0.3, 0.4, 0.5} |

\* marks the default value and # denotes "number of". \(T\) is the total number of training rounds and \(\bar{N}\) is the average number of nodes per graph.

### Experimental Settings

**Factor Settings.** We present the detailed factor setup considered in our study. Note that the first value listed for each factor is its default setting. To assess the individual impact of each factor, we keep the remaining factors fixed while systematically varying the corresponding values in our experiments. The factor ranges are shown in Table 1. For detailed factor settings, please refer to Appendix A.3.
**Federated Graph Backdoor Attack.** A federated graph backdoor attack is characterized by the combination of a trigger generation technique (Renyi [25], WS [30], BA [31], RR [32], GTA [33], or UGBA [34]) and a trigger position strategy (Random, Degree, or Cluster). For instance, the attack method Renyi-Random uses the ER model to generate the trigger, which is then injected into the graph at a random position.

**Implementation Details.** Our implementation of the backdoor attack on FedGNN is based on the PyTorch framework. The experiments were carried out on two server configurations: three Linux CentOS servers, each with 4 RTX 3090 GPUs, and two Linux Ubuntu servers, each with 2 V100 GPUs. In both node-level and graph-level tasks, we adopt the inductive learning settings outlined in [16, 34]. For each dataset, we ensure consistent experimental conditions by employing the same training and attack settings. We set the total number of clients to \(5\), and all clients participate in the training process in each round. Each experiment is repeated five times. For a detailed description of the training and attack settings, please refer to Appendix A.4.

### Benchmark Results of Graph Backdoor Attack on FedGNN

The benchmark results for the graph backdoor attack on FedGNN are presented in Figure 2. The observations are summarized as follows. (1) The node-level task exhibits higher vulnerability to attacks than the graph-level task at relatively small trigger sizes. Specifically, a significant majority of graph backdoor attacks achieve an attack success rate (ASR) exceeding \(90\%\), while the highest ASR recorded at the graph level is \(82.24\%\). (2) Despite not being intentionally poisoned by malicious attackers, the normal clients are still susceptible to graph backdoor attacks.
For instance, the node-level task shows a transferred attack success rate (TASR) of \(24.52\%\), while the graph-level task exhibits even higher vulnerability, with a TASR of \(61.86\%\). This observation suggests that the weights uploaded by the malicious clients can inadvertently influence the normal clients when they download the global model's weights. (3) The combination of trigger size and trigger position influences the attack performance on the graph-level task far more than on the node-level task. For instance, the attack WS-Cluster achieves an ASR of approximately \(82.24\%\), while GTA-Random achieves only about \(13.87\%\). Due to the page limit, for benchmark results on other datasets and models please refer to Appendix A.5.1.

Figure 2: Graph backdoor attack on both node and graph level tasks for GCN. (Color intensity corresponds to value magnitude.)

### Factors in Federated GNN

The overall results for the factors are shown in Figures 3-4.

_Global Level Factors:_

**Data Distribution (DD).** For node-level tasks, the models trained on IID data are more vulnerable than models trained on Non-IID data. For graph-level tasks, GCN trained on IID data is more vulnerable than when trained on Non-IID data (PD-Non-IID and N-Non-IID), while GAT and GraphSage trained on Non-IID data are more vulnerable than when trained on IID data.

**Number of Malicious Attackers (NMA).** For node-level tasks, an increase in NMA leads to an increase in ASR for both the GCN and GAT models. Conversely, an increase in NMA results in a decrease in ASR for GraphSage. Concerning graph-level tasks, the ASR increases with NMA for GAT and GraphSage, but decreases with NMA for GCN.

**Attack Time (AT).** For both node-level and graph-level tasks, an increase in AT results in a decrease in ASR for all three models.
**Overlapping Rate (OR).** The ASR demonstrates an upward trend as the overlapping rate increases. This correlation can be attributed to the possibility that overlapping nodes facilitate the backdooring of normal clients, primarily through the presence of cross-edges.

_Local Level Factors:_

**Trigger Size (TS).** For node-level tasks, an increase in TS leads to an increase in ASR for GCN. However, for GAT and GraphSage, the ASR decreases as TS increases. Concerning the graph-level task, the ASR increases with TS across all three GNNs.

**Trigger Type (TT).** In the node-level task, the adaptive trigger demonstrates a higher ASR on most models. Conversely, in the graph-level task, the universal trigger exhibits a higher ASR.

**Trigger Position (TP).** In node-level tasks, we observed a significantly larger ASR when using importance-based positions (Degree and Cluster) compared to random positions. However, for the graph-level task, while importance-based positions showed a higher ASR for GCN, random positions yielded a higher ASR for GAT and GraphSage.

**Poisoning Rate (PR).** On node classification, an increase in PR results in a slight decrease in ASR, whereas graph classification exhibits an upward trend in ASR. Due to the page limit, for results on other datasets and metrics please refer to Appendix A.5.2.

Figure 3: Node-level task factors.

Figure 4: Graph-level task factors.

## 5 Related Works

**FedGNN.** FedGNN is a distributed machine learning paradigm that facilitates the collaborative training of GNNs among multiple parties while preserving the privacy of their sensitive data. In recent years, extensive research has been conducted on FedGNN, with a particular focus on addressing security concerns [15, 16, 17, 18]. Among these concerns, poisoning attacks have garnered significant attention, encompassing both data poisoning attacks and model poisoning attacks.
Data poisoning attacks occur when an adversary employs tainted data to train the local model, while model poisoning attacks involve manipulating either the training process or the local model itself. Currently, the majority of attacks on FedGNN concentrate on data poisoning. Chen _et al._[15] proposed adversarial attacks on vertical federated learning, utilizing adversarial perturbations on global node embeddings based on gradient leakage from pairwise nodes. Additionally, Xu _et al._[16] investigated centralized and distributed backdoor attacks on FedGNN.

**Graph Backdoor Attacks.** Backdoor attacks on GNNs have received significant attention in recent years [25, 36, 37, 38, 33, 39, 34]. Graph backdoor attacks can be classified into two types based on the employed trigger: universal graph backdoor attacks and adaptive backdoor attacks. For universal graph backdoor attacks, Zhang _et al._[25] generated subgraphs using the Erdos-Renyi (ER) model as triggers and injected them into the training data. Additionally, Xu _et al._[33] observed that the position at which the trigger is injected into the graph can also affect the attack's performance. As for adaptive trigger backdoor attacks, Xi _et al._[33] developed an adaptive trigger generator that optimizes the attack's effectiveness for both transductive and inductive tasks. In our benchmark, we focus primarily on data poisoning attacks. While model poisoning attacks can be effective, data poisoning attacks may be more convenient because they do not require tampering with the model learning process, and they allow non-expert actors to participate [40].

## 6 Conclusions and Open Problems

**Conclusions.** In this paper, we proposed a unified framework for classification backdoor attacks on FedGNN. We then introduced the critical factors involved in graph backdoor attacks on FedGNN, including both global-level and local-level factors.
Along this line, we performed approximately 8,000 experiments on the graph backdoor attack benchmark and conducted critical-factor experiments to provide a comprehensive analysis.

**Open Problems.** (1) Enhancing the success rate of transferred attacks: our findings reveal that malicious attackers can also backdoor normal clients through the FL mechanism. However, methods that can identify and exploit the worst-case vulnerabilities under these circumstances remain to be explored. (2) Evaluating defense methods under backdoor attacks: we demonstrate that FedGNN can be compromised by malicious attackers, but assessing the effectiveness of defense mechanisms against such attacks still requires further exploration. (3) Cooperative malicious attackers: currently, the majority of malicious attackers operate independently during the attack process, neglecting the potential benefits of collaboration. An intriguing research direction lies in investigating the use of collaboration to enhance attack performance.

## Acknowledgments and Disclosure of Funding

This research was supported in part by the National Natural Science Foundation of China under Grant No.62102110, Guangzhou Science and Technology Plan Guangzhou-HKUST(GZ) Joint Project No. 2023A03J0144, and Foshan HKUST Projects (FSUST21-FYTRI01A, FSUST21-FYTRI02A).
2303.00788
Multi-task neural networks by learned contextual inputs
This paper explores learned-context neural networks. It is a multi-task learning architecture based on a fully shared neural network and an augmented input vector containing trainable task parameters. The architecture is interesting due to its powerful task adaption mechanism, which facilitates a low-dimensional task parameter space. Theoretically, we show that a scalar task parameter is sufficient for universal approximation of all tasks, which is not necessarily the case for more common architectures. Evidence towards the practicality of such a small task parameter space is given empirically. The task parameter space is found to be well-behaved, and simplifies workflows related to updating models as new data arrives, and training new tasks when the shared parameters are frozen. Additionally, the architecture displays robustness towards cases with few data points. The architecture's performance is compared to similar neural network architectures on ten datasets.
Anders T. Sandnes, Bjarne Grimstad, Odd Kolbjørnsen
2023-03-01T19:25:52Z
http://arxiv.org/abs/2303.00788v1
# Multi-task neural networks by learned contextual inputs

###### Abstract

This paper explores learned-context neural networks. It is a multi-task learning architecture based on a fully shared neural network and an augmented input vector containing trainable task parameters. The architecture is interesting due to its powerful task adaption mechanism, which facilitates a low-dimensional task parameter space. Theoretically, we show that a scalar task parameter is sufficient for universal approximation of all tasks, which is not necessarily the case for more common architectures. Evidence towards the practicality of such a small task parameter space is given empirically. The task parameter space is found to be well-behaved, and simplifies workflows related to updating models as new data arrives, and training new tasks when the shared parameters are frozen. Additionally, the architecture displays robustness towards cases with few data points. The architecture's performance is compared to similar neural network architectures on ten datasets.

## 1 Introduction

A remarkable feat of nature is its ability to create endless variations on concepts that seem to be rigorously defined. Across all domains, we find classes of objects, clusters of phenomena, and groups of individuals, all of which seem to follow some overarching set of rules. And still, each separate instance puts a unique spin on the outcome. While this is fascinating to observe, it can be frustrating to model. One can quickly run into both performance issues and operational challenges. An instance may have too few data points to produce a satisfying model. Or, there are just too many models to train and maintain. Multi-task learning (MTL), as presented by Caruana (1997), is a learning strategy where multiple related models are trained simultaneously. The models share a subset of their parameters, which allows them to capture general domain knowledge.
The purpose is to improve model generalization, by effectively training the shared parameters on more data. Model frameworks based on the multi-task philosophy appear in many sciences. In the statistical literature they are, among others, referred to as mixed-, hierarchical-, multi-level-, or random-effect models. These methods see frequent use in sociology, economics, biometrics, and medicine (Demidenko, 2004; Raudenbush and Bryk, 2002). These models are often of moderate size and complexity. In the machine learning domain, there is broad diversity within transfer-learning and MTL (Lu et al., 2015; Zhang and Yang, 2021). Methods that combine transfer-learning or MTL with deep learning have seen success in several domains, with extensive research going into areas such as image analysis (Zamir et al., 2018; Kokkinos, 2017; Morid et al., 2021) and natural language processing (Raffel et al., 2020; Devlin et al., 2018; Brown et al., 2020). Engineering domains, such as solar and wind power (Wu et al., 2021; Dorado-Moreno et al., 2020), have also seen some MTL applications. However, in many cases, research is still needed to create satisfying solutions (Curreri et al., 2021). Our focus is on non-linear regression problems. We are particularly interested in problems with many similar tasks that have complex input-output relationships. The tasks may have limited data. This class of problems captures many modeling challenges within engineering and industrial systems. Examples are cases with multiple instances of the same object, such as turbines in a wind farm, or batch processes, such as biomass accumulation in a fish farm or a field of crops. Here, each task typically has few observations, but they are, by design, quite similar. Solving such problems requires an architecture with a high degree of flexibility, where task adaptation can be done with few data points. 
Additionally, the architecture must facilitate the many operations needed to keep machine-learning models running in practice. Examples may be model updates in case of time-varying conditions, or the identification of new tasks that arrive at a later time. We study _learned-context neural networks_. The architecture consists of two components. A neural network where all parameters are shared, and a set of task-specific parameter vectors. The task parameter vector serves as additional inputs to the neural network, which alters the network computation. This results in a powerful task adaptation mechanism, that can reduce the task parameter dimension significantly. The task parameter input provides contextual information about each task. They are referred to as a learned context because they are found during training. Learned contextual inputs facilitate the identification of a well-behaved task parameter space. By this, we mean a space that captures continuous latent properties of the tasks themselves, rather than just being a task encoding. A well-behaved task parameter space is desirable, because it enables us to train the shared network once, and then only be concerned with the task parameter space for day-to-day operations. This is especially useful if a complete re-training takes significant manual effort, is computationally expensive, or new data points arrive frequently. Variations of learned-context neural networks have, to our knowledge, only been applied in a meta-learning study by Zintgraf et al. (2019) and a virtual flow metering application by Sandnes et al. (2021). Zintgraf et al. (2019) proposed a model agnostic meta-learning method, which is concerned with context parameters in general. Learned-context neural networks are used as one of the concrete examples in an experimental study. Sandnes et al. (2021) use a variation of the architecture to study the benefit of multi-task learning for a soft sensing application. 
The focus is on the benefits of a multi-task architecture over the single-task learners traditionally used within the domain. The learned-context neural network itself has never been thoroughly analyzed.

### Contributions

We provide a deep dive into theoretical and practical aspects of the learned-context neural network architecture. We explore its task adaptation mechanism and relate it to existing methodology within statistics and machine learning, and we prove that a scalar task parameter is sufficient for the universal approximation of a set of tasks. The viability of such a low dimensional task parameter is studied empirically. The performance of the architecture is compared to similar architectures on ten datasets. Comparisons are made on predictive performance, sensitivity to dataset size, and the effect of the task parameter dimension.

### Outline

The content of the paper is organized as follows. Section 2 presents notation and gives an introduction to mixed models, multi-task learning, and methods related to the architecture of interest. Section 3 gives a theoretical description of learned-context neural networks. Section 4 dives deeper into the task adaptation mechanism of the learned-context network. Section 5 presents an empirical analysis of the architecture and compares it to related methods. Section 6 summarizes and discusses the findings, and Section 7 concludes the paper.

## 2 Background

Here we present a minimal background required to study learned-context neural networks. For a broader picture, see Demidenko (2004) for more statistical methods or Zhang and Yang (2021) for machine learning applications.

### Notation

We consider a set of \(m\) tasks \(\left\{(\mathcal{D}_{j},f_{j})\right\}_{j=1}^{m}\). Each task consists of a set of observations \(\mathcal{D}_{j}=\left\{(x_{ij},y_{ij})\right\}_{i=1}^{n_{j}}\) and a function \(f_{j}:\mathbf{R}^{d_{x}}\mapsto\mathbf{R}\).
These are related by \(y_{ij}=f_{j}(x_{ij})+\epsilon_{ij}\), where \(\epsilon_{ij}\sim\mathcal{N}(0,\sigma_{\epsilon}^{2})\). The tasks are homogeneous, with the elements of \(x\) and \(y\) representing the same quantities across all tasks. The indicator variable \(c_{j}\in\left\{0,1\right\}^{m}\) is a one-hot task encoding where the \(j\)th element is one and the rest are zeros. We are interested in the case of knowledge transfer through hard parameter sharing, where task adaptation is achieved using a small set of task parameters. We use \(\alpha\) to denote a set of shared parameters, and \(\beta_{j}\in\mathbf{R}^{d_{\beta}}\) to denote a vector of task specific parameters. The general form of the problem of interest is \(y_{ij}=f(x_{ij};\alpha,\beta_{j})+\epsilon_{ij}\).

### Mixed models and multi-task neural networks

The concept of multi-task learning has been studied extensively for linear models under a variety of names and contexts. The simplest form is a _varying-intercept_ linear model, \[y_{ij}=a^{\top}x_{ij}+b+\beta_{j}+\epsilon_{ij}, \tag{1}\] where each task is a linear function of the observed variables and tasks differ only in the individual bias terms \(\beta_{j}\) (Gelman et al., 2013). With such a setup it is common to assume task parameters are distributed according to \(\beta_{j}\sim\mathcal{N}(0,\sigma_{\beta}^{2})\). Task-specific slope parameters are also common. These models, known as multilevel-, hierarchical-, or mixed effect models, are extensively described in the statistical literature (Demidenko, 2004; Raudenbush and Bryk, 2002). An extension to the linear model is to factorize the parameters into a task component and a shared component, a notable example being the group-and-overlap model of Kumar and Daume III (2012), \(y_{ij}=a_{j}^{\top}x_{ij}+b_{j}+\epsilon_{ij}\), where the parameters are found as \[\begin{bmatrix}a_{j}\\ b_{j}\end{bmatrix}=L\beta_{j}.
\tag{2}\] Task-specific parameters are linear combinations of latent basis tasks, given as the columns of \(L\). A tailored algorithm is used to jointly optimize the latent task matrix \(L\) and the individual task parameters. The optimization is controlled by the desired number of latent tasks and their sparsity. This structure allows the degree of sharing between tasks to be found during training. The mechanism introduces a robustness towards outlier tasks, because tasks that do not share any aspects with the others can be isolated to separate columns of \(L\). This allows the main tasks to be differentiated without interference from potential outliers. Moving beyond linear model structures introduces several design choices. The simplest is fixed nonlinear analytical expressions where parameters can be partially shared, task-specific, or found through expressions such as Equation 2. A classic example is nonlinear growth curve models, commonly formulated as logistic curves (Demidenko, 2004). Alternatively, the nonlinearities can be selected from a predefined set of candidates as proposed by Argyriou et al. (2008). This produces the expression \(y_{ij}=\sum_{k=1}^{d_{\beta}}\beta_{j,k}\phi_{k}(x_{ij})+\epsilon_{ij}\), where the knowledge sharing mechanism lies in the shared choice of basis functions \(\phi_{k}\). If the space of basis functions is not predefined, a more flexible solution is to allow them to be learned by a multi-task neural network (Caruana, 1997). This can be expressed as \[y_{ij}=\beta_{j}^{\top}h(x_{ij};\alpha)+\epsilon_{ij}, \tag{3}\] where \(h\) is a neural network with \(d_{\beta}\) outputs, parametrized by \(\alpha\). The task parameters \(\beta_{j}\) represent a task-specific output layer. A different neural network strategy is the context-sensitive network of Silver et al. (2008), \[y_{ij}=h(z_{ij};\alpha)+\epsilon_{ij},\ z_{ij}=\begin{bmatrix}x_{ij}\\ c_{j}\end{bmatrix}, \tag{4}\] where all parameters of the neural networks are shared.
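The two adaptation mechanisms in Equations 3 and 4 can be sketched side by side. A minimal numpy illustration with random, untrained weights; all names and dimensions here are hypothetical and chosen only to show the structure:

```python
import numpy as np

relu = lambda z: np.maximum(0.0, z)
rng = np.random.default_rng(0)

d_x, d_h, m = 3, 16, 5                    # input dim, hidden dim, number of tasks
W, b = rng.normal(size=(d_h, d_x)), rng.normal(size=d_h)
h = lambda x: relu(W @ x + b)             # shared feature extractor

# Last-layer MTL (Equation 3): task-specific output layer on shared features.
betas = rng.normal(size=(m, d_h))
y_last_layer = lambda x, j: betas[j] @ h(x)

# Context-sensitive network (Equation 4): one-hot task id appended to the
# input, so the first-layer width grows with the number of tasks.
Wc, bc = rng.normal(size=(d_h, d_x + m)), rng.normal(size=d_h)
wo = rng.normal(size=d_h)
y_context = lambda x, j: wo @ relu(Wc @ np.concatenate([x, np.eye(m)[j]]) + bc)
```

A learned-context network instead replaces the one-hot vector with a trainable low-dimensional \(\beta_{j}\), keeping the input width fixed as the number of tasks grows.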
Tasks are differentiated through the additional input \(c_{j}\), which is a one-hot task encoding. This leads to networks where the input dimension grows with the number of tasks. Context-adaptation, introduced by Zintgraf et al. (2019), replaces the one-hot encoding with a trainable vector of task parameters. This allows the dimension of the input space to remain fixed while the number of tasks grows. The learned-context networks presented here are based on this idea. Other, more complex neural network architectures allow all network weights to have some level of task adaptation. Examples are tensor factorization (Yang and Hospedales, 2017) and masked networks (Mallya et al., 2018). Alternatively, one can introduce layers that facilitate sharing between otherwise disjoint neural networks (Misra et al., 2016). We are, however, focusing our attention on simple feed-forward architectures with few task parameters. ### Related learning paradigms Multi-task learning resembles other learning paradigms, in particular _transfer-learning_ and _meta-learning_. The distinctions between these can be difficult to draw, and their nuances have changed over time. We base our discussion on the recent survey of Hospedales et al. (2022). Transfer-learning attempts to improve the performance of a task using information from related source tasks. This can, for instance, be achieved by copying an existing model trained on the source tasks and applying optional fine-tuning steps to the parameters. This tuning has no concern for the performance of the source tasks. In contrast, multi-task learning attempts to jointly optimize the performance of all tasks. Meta-learning also attempts to jointly optimize all tasks, but the objective of the optimization is different. Multi-task learning is concerned with a fixed set of provided tasks and produces a single joint model.
Meta-learning, on the other hand, attempts to improve learning for the whole distribution of tasks, which can include unseen future tasks. The output is a _learning procedure_ that can be applied to all tasks in the task distribution. As such, meta-learning can draw parallels to _hyper-parameter optimization_. However, hyper-parameter optimization only considers performance on the current learning problem, while meta-learning considers the performance over a family of learning problems that may occur. Special cases of these optimization problems may be closely linked in practice. Grant et al. (2018) connects a particular meta-learning algorithm to hierarchical Bayesian models and hyper-parameter optimization. Zintgraf et al. (2019) develops this meta-learning algorithm further with context-adaptation. The multi-task learned-context neural networks studied in this paper are similar to one of the context-adaptation mechanisms. ## 3 Model description We explore a particular version of the context-adaptation architecture, denoted by \(y_{ij}=f(x_{ij};\beta_{j},\alpha)+\epsilon_{ij}\). The architecture consists of a residual feedforward neural network, \(y_{ij}=h(z_{ij};\alpha)+\epsilon_{ij}\), which is shared between all tasks. Tasks are differentiated by trainable parameters \(\beta_{j}\), which are used as input to the network, \[z_{ij}=\begin{bmatrix}x_{ij}\\ \beta_{j}\end{bmatrix}. \tag{5}\] We refer to this architecture as a learned-context neural network.
The first linear layer, \[z^{(2)}=W_{1}z+b_{1}, \tag{6}\] a sequence of residual blocks \[z^{(k+1)} =W_{k}g\left(z^{(k)}\right)+b_{k}\] \[z^{(k+2)} =z^{(k)}+W_{k+1}g\left(z^{(k+1)}\right)+b_{k+1}\Bigg{\}}\,k=2,4,\ldots,K-2, \tag{7}\] and the final linear layer \[y=W_{K}z^{(K)}+b_{K}. \tag{8}\] The residual block in Equation 7 is illustrated in Figure 1. We let the residual skip connection span two hidden layers, and use a pre-activated structure (He et al., 2016). All hidden layers have the same dimension. The ReLU function, \(g(z)=\max(0,z)\), where the max operation is performed element-wise, is used for activation. ### Comparable architectures Throughout the analysis of the learned-context neural network, we will make comparisons with two closely related architectures. The first is the context-sensitive network as presented by Silver et al. (2008), given in Equation 4. The second is the classic MTL architecture of Caruana (1997), which we will refer to as last-layer MTL. This is described by Equation 3. ## 4 Task adaptation mechanism This section explores the theoretical aspects of the task adaptation mechanism in the learned-context neural network. First, the relationship between learned-context neural networks, linear mixed models, and context-sensitive networks is discussed. Then, a simple example illustrates how the task parameters can influence the output of the neural network. Finally, the universal approximation power of learned-context networks is investigated and shown to be equal to that of using a separate neural network for each task. ### Task parameter input yields varying-bias layers Recall the residual neural network described in Equations 6-8. The augmented input to the first layer, given in Equation 5, consists of observations \(x_{ij}\) and task parameters \(\beta_{j}\).
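For concreteness, the forward pass of Equations 6-8 on the augmented input can be sketched as follows; a minimal numpy version where parameter shapes and the helper name `residual_forward` are illustrative, not part of the paper's implementation:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def residual_forward(z, params):
    """Forward pass of Equations 6-8. params = [(W_1, b_1), ..., (W_K, b_K)],
    with the K-2 middle layers grouped into pre-activated residual blocks."""
    W, b = params[0]
    h = W @ z + b                          # first linear layer (Eq. 6)
    for k in range(1, len(params) - 1, 2):
        (Wa, ba), (Wb, bb) = params[k], params[k + 1]
        t = Wa @ relu(h) + ba              # first half of the residual block
        h = h + Wb @ relu(t) + bb          # skip connection (Eq. 7)
    W, b = params[-1]
    return W @ h + b                       # final linear layer (Eq. 8)
```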
Figure 1: Residual block spanning two hidden layers, each layer consisting of an activation function and a linear transformation. The block output is the sum of the block input \(z^{(k)}\) and the residual component \(\Delta^{(k)}\).

Inspecting the first linear layer, \(z_{ij}^{(2)}=W_{1}z_{ij}+b_{1}\), we note that partitioning the weight matrix according to \(W_{1}=\begin{bmatrix}A&L\end{bmatrix}\) yields \[z_{ij}^{(2)}=Ax_{ij}+L\beta_{j}+b_{1}. \tag{9}\] This is recognized as the linear varying-intercept model from Equation 1, where \(b_{1}\) is the population average intercept and \(\tilde{\beta}_{j}=L\beta_{j}\) is the individual intercept. However, linear varying-intercept models are usually concerned with scalar, or low-dimensional, response variables. The hidden layer \(z_{ij}^{(2)}\) may be of high dimension, and the term \(L\beta_{j}\) allows for a low-rank approximation of the task intercept. This is the same mechanism utilized in the group-and-overlap method given in Equation 2, and the method inherits the benefits this provides in terms of outlier robustness. On the other hand, if the number of task parameters equals the number of tasks, the task mechanism is similar to the one-hot encoding of context-sensitive networks. To see this, repeat the exercise using the augmented input from Equation 4. This leads to \(z_{ij}^{(2)}=Ax_{ij}+Bc_{j}+b_{1}\), where the contextual input selects the \(j\)th column of \(B\) as the task intercept. With the task parameter dimension equal to the number of tasks, each of the first layer biases can be set individually for each task. This essentially makes each task responsible for a number of parameters equal to the hidden layer size, which can be a significant number even for moderately sized networks. ### Task adaptation examples Considering Equation 9 alone, the task adaptation mechanism may seem limited. We study a simple example to gain insight into how changing the biases affects the network response.
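Before turning to that example, the partition in Equation 9 can be verified directly; a small numpy check where the dimensions are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
d_x, d_beta, d_h = 3, 2, 6
A = rng.normal(size=(d_h, d_x))
L = rng.normal(size=(d_h, d_beta))
b1 = rng.normal(size=d_h)
W1 = np.hstack([A, L])                # W_1 = [A  L]

x = rng.normal(size=d_x)
beta = rng.normal(size=d_beta)
z = np.concatenate([x, beta])         # augmented input [x; beta]

# Equation 9: the first layer splits into a data term and a task intercept.
assert np.allclose(W1 @ z + b1, A @ x + L @ beta + b1)
```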
The example is adapted from Telgarsky (2016). A network with three layers is constructed to linearly interpolate the points \((x,y)=(0,0),(0.5,1),(1,0),(1.5,1),(2,0)\), given task parameters equal to zero. We refer to this as the base shape. It is illustrated as the bold dark line in Figure 2. A detailed description of the network weights is given in Appendix A. We now explore the effect of different values for the task parameter component \(L\) from Equation 9. We start with the simple case \(L=A\), which yields a translation of the base shape. This can be seen by manipulating Equation 6, \(z_{ij}^{(2)}=Ax_{ij}+A\beta_{j}+b_{1}=A\left(x_{ij}+\beta_{j}\right)+b_{1}\), which is the expression for the translation \(y_{ij}=f(x_{ij}+\beta_{j})\). This is illustrated in the top part of Figure 2. A different transform is obtained by setting \(L=b_{1}\). For this particular network, it yields dilation by a factor \(1+\beta\). This changes the base shape of the output to a dilated shape according to \((x,y)\mapsto((1+\beta)x,(1+\beta)y)\). This is illustrated in the bottom part of Figure 2. The derivation of this result is given in Appendix A.1. These simple examples illustrate how the contextual inputs can produce different transformations by changing the first layer weights. While seemingly limited at first glance, the ability to influence the first layer bias creates changes that propagate through the entire network, enabling powerful nonlinear transformations.

Figure 2: Evaluation of the simple learned-context network described in Appendix A. The two examples only differ in their specification of the matrix \(L\) from Equation 9. Top is a translation case, with \(L=A\). Bottom is a dilation case, with \(L=b_{1}\). Both have been evaluated with task parameter \(\beta\) fixed at three different values, with the bold black line representing the base network.
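The translation case \(L=A\) holds for any first layer, not only the Appendix A construction. A minimal numpy sketch with random, untrained weights (the two-layer network here is hypothetical, chosen only to check the identity):

```python
import numpy as np

relu = lambda z: np.maximum(0.0, z)
rng = np.random.default_rng(1)

d_h = 8
A = rng.normal(size=d_h)              # first-layer weights for the scalar x
L = A.copy()                          # translation case: L = A
b1 = rng.normal(size=d_h)
w2 = rng.normal(size=d_h)

def f(x, beta):
    # A*x + L*beta + b1 equals A*(x + beta) + b1 when L = A
    return w2 @ relu(A * x + L * beta + b1)

# Adapting the task parameter translates the input of the base network.
x, beta = 0.7, -1.3
assert np.allclose(f(x, beta), f(x + beta, 0.0))
```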
### Universal task approximation The previous section illustrates how a combination of contextual inputs and deep neural networks can produce complex task adaptations. Here we make this notion more precise by proving a multi-task equivalent to the universal approximation property. The main result is that any task adaptation is achievable by learned-context neural networks regardless of how many task parameters are used, provided that the shared neural network is sufficiently wide and deep. Universal approximation refers to the ability of neural network architectures to approximate a function with arbitrary precision, given a set of qualifiers on the function and the network (Cybenko, 1989; Hornik et al., 1989). Several variations of universal approximation properties have been shown for different neural network architectures, including ReLU-based feedforward networks (Lu et al., 2017; Kidger and Lyons, 2020). Here we explore how universal approximation properties carry over to the learned-context networks, and compare them to the properties of context-sensitive networks and last-layer MTL networks. We only consider traditional feedforward networks without the residual skip connections, as these do not change the results presented here. We adapt the definitions from Kidger and Lyons (2020) and rely upon their main result for universal approximation. Our interest is to study the number of task parameters required, so the dimensions of the neural networks themselves are omitted.
**Definition 1**.: _Let \(\mathcal{F}_{k,l}\) be the class of functions described by feedforward neural networks with input dimension \(k\), output dimension \(l\), an arbitrary number of hidden layers of arbitrary size, where the ReLU activation function is applied to all hidden units._ **Definition 2**.: _Let \(C(K)\) be the space of continuous functions \(f:K\rightarrow\mathbf{R}\), with domain \(K\subseteq\mathbf{R}^{d_{x}}\), where \(K\) is compact._ Consider \(m\) tasks given by \(y=f_{j}(x),\ j=1,\ldots,m\), where \(f_{j}\in C(K)\). Individually, the functions \(f_{j}\) can be approximated arbitrarily closely by a sufficiently sized neural network \(\hat{f}_{j}\in\mathcal{F}_{d_{x},1}\) (Kidger and Lyons, 2020). Extending this notion to MTL architectures concerns their ability to approximate the composed function \[y=f(x,j)=\sum_{k=1}^{m}I(j=k)f_{k}(x). \tag{10}\] Here, \(I(j=k)\) is the indicator function, taking the value one if \(j=k\) and zero otherwise. In Equation 10, the inputs of \(f(x,j)\) closely resemble those of context-sensitive networks, making this a natural starting point. **Proposition 1**.: _There exists a context-sensitive neural network from the class \(\mathcal{F}_{d_{x}+m,1}\) that is arbitrarily close to the multi-task function \(f\) of Equation 10 with respect to the uniform norm._ Proof.: An equivalent computation to Equation 10 is \(f(x,c_{j})=\sum_{k=1}^{m}c_{j,k}f_{k}(x)\), where \(c_{j,k}\) is the \(k\)th element of \(c_{j}\). Relaxing the indicator variable domain to \(c_{j}\in[0,1]^{m}\) allows the context-sensitive input vector, \(z\), to exist in a compact subspace \(K^{+}\subseteq\mathbf{R}^{d_{x}+m}\). The relaxation gives \(f\in C(K^{+})\), a space in which the class \(\mathcal{F}_{d_{x}+m,1}\) is dense with respect to the uniform norm. It follows that context-sensitive networks can approximate the set of task functions arbitrarily well.
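The relaxed composed function \(f(x,c)=\sum_{k}c_{k}f_{k}(x)\) used in the proof can be written out directly; a toy numpy sketch with three hypothetical task functions:

```python
import numpy as np

# Toy task functions f_1, ..., f_m.
fs = [np.sin, np.cos, np.tanh]

def f(x, c):
    """Relaxed form of Equation 10: f(x, c) = sum_k c_k f_k(x).
    With c a one-hot indicator this recovers the task function f_j."""
    return sum(ck * fk(x) for ck, fk in zip(c, fs))

# Evaluating at each one-hot indicator selects the corresponding task.
for j in range(len(fs)):
    assert np.isclose(f(0.5, np.eye(len(fs))[j]), fs[j](0.5))
```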
This result means that the context-sensitive architecture is sufficiently flexible to achieve the same approximation power as using an individual neural network for each task. However, it does not say anything about the number of parameters required to achieve this. Learned-context neural networks can achieve the same universal approximation power as context-sensitive networks, using only a scalar task parameter. **Theorem 2**.: _There exists a learned-context neural network from the class \(\mathcal{F}_{d_{x}+1,1}\) and a set of task parameters \(\beta_{j}\in\mathbf{R},\ j=1,\ldots,m\), that is arbitrarily close to the multi-task function \(f\) of Equation 10 with respect to the uniform norm._ Proof.: The proof proceeds by constructing the mapping \((x,\beta_{j})\mapsto(x,c_{j})\) as the first two hidden layers of a neural network. The remainder of the network can then be taken as a context-sensitive network. First, we construct a triangle wave for the task parameters, similar to the one in Figure 2. Let the task parameters be assigned values \(\beta_{j}=j\), which is a trivial task encoding. The first layer is assigned the weights \[W_{1}=1_{2m},\ b_{1}^{\top}=-\begin{bmatrix}1-\delta&1&2-\delta&2&\ldots&m-\delta&m\end{bmatrix},\] where \(1_{2m}\) is a vector of \(2m\) ones and \(\delta\in(0,0.5)\) is a number determining the slope and spacing of the triangles. After ReLU activation, this gives a shifted sequence of the task parameter. The second layer is assigned the weights \[W_{2}=\frac{1}{\delta}I_{m}\otimes\begin{bmatrix}1&-2\end{bmatrix},\ b_{2}=0_{m},\] where \(\otimes\) denotes the Kronecker product and \(0_{m}\) is a vector of \(m\) zeros.
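These two layers can be checked numerically; a numpy sketch of the construction, with \(m\) and \(\delta\) chosen arbitrarily and the helper name `task_encoding` hypothetical:

```python
import numpy as np

relu = lambda z: np.maximum(0.0, z)

def task_encoding(beta, m, delta=0.25):
    """First two layers of the construction: map a scalar task
    parameter to a (soft) one-hot task encoding."""
    W1 = np.ones((2 * m, 1))
    b1 = -np.array([v for j in range(1, m + 1) for v in (j - delta, j)])
    z2 = relu(W1 @ np.array([beta]) + b1)
    W2 = (1.0 / delta) * np.kron(np.eye(m), np.array([[1.0, -2.0]]))
    return relu(W2 @ z2)

m = 5
for j in range(1, m + 1):
    # At beta = j the j-th entry is one and the rest are zero.
    assert np.allclose(task_encoding(float(j), m), np.eye(m)[j - 1])
```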
This leads to the \(j\)th entry of \(g(z^{(3)})\) being given by \[g(z^{(3)}_{j})=\begin{cases}(\beta-j+\delta)/\delta&\text{if }\beta\in[j-\delta,j),\\ (-\beta+j+\delta)/\delta&\text{if }\beta\in[j,j+\delta),\\ 0&\text{otherwise}.\end{cases} \tag{11}\] Only one of the entries of \(g(z^{(3)})\) will be non-zero at a time, and when evaluated at \(\beta=j\) the \(j\)th entry will be one and the rest zero. The sharpness of the transition between zero and one is adjusted by \(\delta\). The weights (\(W_{1}\), \(b_{1}\), \(W_{2}\), and \(b_{2}\)) can now be augmented to store a copy of the other input values \(x\), which makes the hidden layer \(z^{(3)}\) the same as the input layer of a context-sensitive network. Therefore, a learned-context network with a scalar task parameter shares approximation properties with the context-sensitive networks from Proposition 1. This result only considers the number of task parameters and does not put any bounds on the number of shared parameters. While a scalar task parameter is theoretically sufficient, it does not indicate that this is the ideal choice. Indeed, the proof is based on the scalar task parameter taking an indicator role, which is counterproductive to our desire to learn the latent properties of the tasks. This is useful to bear in mind when selecting hyperparameters, as too few task parameters may force an indicator behavior. The last-layer MTL architecture is less flexible in its task adaptation mechanism. As such, it may require more task parameters than the learned-context networks. **Proposition 3**.: _Last-layer MTL networks with base network from the class \(\mathcal{F}_{d_{x},k}\) and \(\beta_{j}\in\mathbf{R}^{k}\) require \(k\geq m\) to guarantee the existence of a function arbitrarily close to the multi-task function \(f\) in Equation 10._ Proof.: The last-layer network is a linear combination of \(k\) basis functions, \(y=\beta_{j}^{\top}h(x)\).
Let the tasks be given as \(m\) sine waves \(y=\sin(\omega_{j}x)\), with frequency \(\omega_{j}=j\) and \(x\in[-\pi,\pi]\). These cannot be constructed by a set of basis functions of dimension lower than \(m\), because they are orthogonal to each other. Hence, in this case the shared neural network cannot have an output dimension less than \(m\). Task adaptation in the last-layer MTL architecture is through linear combination of features produced by the fully shared neural network. This means that it is limited to scaling the given nonlinearities. In contrast, in the learned-context architecture the task parameters can influence the nonlinearities themselves, which leads to a much broader range of adaptations it can achieve. ## 5 Empirical analysis This section investigates the properties of learned-context neural networks empirically. First, the learned-context neural network is compared to similar architectures. The architectures are evaluated on predictive performance, training robustness, sensitivity to the number of data points, and the effect of the number of task parameters. The task parameter space produced by learned-context neural networks is then explored by estimating task parameters for new tasks after the shared parameters are trained and fixed. ### Benchmark models Learned-context neural networks, described in Section 3, are compared to the last-layer multi-task network described by Equation 3 and the context-sensitive network described by Equation 4. All three network models use the residual structure described in Equations 6-8. A linear mixed-effect model acts as a baseline for performance. The linear models have a set of shared slope and intercept parameters. Additionally, each task has its own intercept. This structure is given in Equation 1. Discrete features are one-hot encoded for this model. ### Datasets The models are compared on ten datasets.
Two of the datasets are synthetically created to highlight differences between the architectures. Three datasets, Schools, Parkinson, and Sarcos, are common to the MTL literature (Zhang and Yang, 2021), while the last five are selected from a range of open access data sources. All datasets correspond to regression problems with a scalar response variable. The datasets are described below and summarized in Table 1. **Frequency** is a synthetic dataset where tasks are sine waves of different frequency. Data is generated according to \(y_{ij}=0.5\sin(2\pi\omega_{j}x_{ij})+0.5+\epsilon_{ij}\). The task frequencies are sampled as \(\omega_{j}\sim\text{Uniform}(0.5,4.0)\). We take \(x_{ij}\sim\text{Uniform}(0,1)\), and \(\epsilon_{ij}\sim\mathcal{N}(0,\sigma^{2})\), \(\sigma=0.1\). **Sine and line** is a synthetic dataset proposed by Finn et al. (2018). Tasks are taken from two classes, affine functions, \(y_{ij}=a_{j}x_{ij}+b_{j}+\epsilon_{ij}\) or sine waves, \(y_{ij}=c_{j}\sin(x_{ij}+d_{j})+\epsilon_{ij}\). We generate an equal number of tasks from each class, with parameters sampled as \(a_{j},b_{j}\sim\text{Uniform}(-3,3)\), \(c_{j}\sim\text{Uniform}(0.1,5.0)\), and \(d_{j}\sim\text{Uniform}(0,\pi)\). For both task types we use \(\epsilon_{ij}\sim\mathcal{N}(0,\sigma^{2})\), \(\sigma=0.3\), and \(x_{ij}\sim\text{Uniform}(-5,5)\). We note that this problem can be represented as linear regression, \(y=\beta^{\top}z\), on a set of nonlinear basis functions \(z^{\top}=\begin{bmatrix}1&x&\sin(x)&\cos(x)\end{bmatrix}\) by applying the trigonometric identity \(\sin(x+d)=\sin(d)\cos(x)+\cos(d)\sin(x)\). **Schools** is a dataset of examination results for students from different schools over a three year period (Nuttall et al., 1989). The goal is to map features of the schools and students to student performance, in order to study school effectiveness. We treat each school as a task. The dataset is provided by Center of Multilevel Modelling (1987). 
**Parkinson** telemonitoring relates patient observations to a disease symptom score (Tsanas et al., 2010). Each patient is considered a task. It is provided by UCI Machine Learning Repository (2009). **Bike sharing** relates weather and calendar features to the number of trips of a bike sharing system over two years (Fanaee-T and Gama, 2014). Each month is considered a task, which allows tasks to capture seasonal effects and potential changes to the bike sharing system itself. Data is provided by UCI Machine Learning Repository (2013). **Inflation** is a dataset from OECD (2022). It considers the development of the Consumer Price Index (CPI) in 45 countries. CPI is taken quarterly from 1970 to 2020, normalized with 2015 being equal to 100% for each country. Each country is a task. Time is the only input variable. **Obesity** is a dataset that describes the development of mean body-mass index from 1985 to 2019 in 200 countries (NCD Risk Factor Collaboration, 2020a). Input variables are age, sex, and year. The response variable is the percentage considered to be obese. Each country is a task. Data is provided by NCD Risk Factor Collaboration (2020c). **Height** is a dataset with the same structure as Obesity, but the response variable is the mean height within the groups. Data is provided by NCD Risk Factor Collaboration (2020b). **Death rate** describes the rate of death in different age groups. Input variables are age and sex. Each pair of year and country is considered a task. There are 183 countries and five years. This results in a greater number of tasks than Obesity and Height, but fewer input variables. Data is provided by World Health Organization (2020). **Sarcos** is data from a robotic arm with seven joints. The goal is to map joint position, velocity, and acceleration to motor torque (Vijayakumar and Schaal, 2000). Each joint is one task, taking all 21 joint measurements as input to predict torque at the joint.
As a consequence, each task has exactly the same input data, differing only in their response. It is hosted by Gaussian Processes for Machine Learning (2000). ### Training Procedure Model parameters are found by minimizing a loss function of mean squared prediction error and parameter regularization (Hastie et al., 2009), \[\min_{\alpha,\beta_{1},\ldots,\beta_{m}}\ \frac{1}{n}\sum_{j=1}^{m}\sum_{i=1}^{ n_{j}}\left(y_{ij}-f(x_{ij};\beta_{j},\alpha)\right)^{2}+\lambda_{\alpha}l( \alpha)+\lambda_{\beta}\sum_{j=1}^{m}l(\beta_{j}). \tag{12}\] The prediction error is divided by the total number of data points \(n\). Parameters are regularized by the L2 norm, scaled by factors \(\lambda_{\alpha}\) and \(\lambda_{\beta}\) for shared and task parameters respectively. The task parameter term is ignored for context-sensitive networks. All data points from all tasks are weighted equally. More complex loss functions with individual weights for each task could potentially improve performance in some cases (Kendall et al., 2018; Gong et al., 2019), but this is not considered here. \begin{table} \begin{tabular}{l r r r r r} \hline Name & Num. feat. & Num. tasks & Data train & Data test & Std. \(y\) \\ \hline Frequency & 1 & 250 & 30000 & 25000 & 0.36 \\ Sine and line & 1 & 100 & 6000 & 10000 & 3.93 \\ Schools & 7 & 139 & 12339 & 3023 & 12.72 \\ Parkinson & 16 & 42 & 4717 & 1158 & 10.70 \\ Bike sharing & 6 & 24 & 13915 & 3464 & 181.38 \\ Inflation & 1 & 45 & 6684 & 1344 & 34.61 \\ Obesity & 3 & 200 & 126000 & 126000 & 5.22 \\ Height & 3 & 200 & 105000 & 105000 & 19.64 \\ Death rate & 2 & 915 & 16470 & 16470 & 0.12 \\ Sarcos & 21 & 7 & 311388 & 31143 & 18.84 \\ \hline \end{tabular} \end{table} Table 1: Summary of dataset properties. Listed is the number of features in the dataset, the number of tasks, the number of data points used for model training and testing, and the standard deviation of the response variable \(y\). 
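The objective in Equation 12 is straightforward to express in code. A minimal numpy sketch with a toy linear model; the function names and the toy data are hypothetical, chosen so the fit is exact and only the regularization terms remain:

```python
import numpy as np

def mtl_loss(f, alpha, betas, tasks, lam_alpha=1e-4, lam_beta=1e-4):
    """Equation 12: pooled mean squared error plus L2 penalties on the
    shared parameters alpha and the task parameters beta_j."""
    n = sum(len(x) for x, _ in tasks)
    sq_err = sum(np.sum((y - f(x, b, alpha)) ** 2)
                 for (x, y), b in zip(tasks, betas))
    l2 = lambda p: np.sum(np.asarray(p) ** 2)
    return sq_err / n + lam_alpha * l2(alpha) + lam_beta * sum(l2(b) for b in betas)

# Toy example: y = alpha*x + beta_j, evaluated at its exact optimum.
f = lambda x, beta, alpha: alpha[0] * x + beta[0]
tasks = [(np.array([1.0, 2.0]), np.array([2.0, 4.0]))]
loss = mtl_loss(f, np.array([2.0]), [np.array([0.0])], tasks)
```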
Optimization of Equation 12 is done by stochastic gradient descent (Bottou et al., 2018), with a two-stage learning rate schedule. Hyperparameters are optimized by the global optimizer LIPO (Malherbe and Vayatis, 2017), which is run for a predetermined number of iterations. Further details are given in Appendix B for the training procedure and Appendix C for hyperparameters. ### Base performance The three neural network architectures and the linear mixed model are first compared in terms of test set errors. The results are summarized in Table 2. The neural network models have similar performance on all datasets, with learned-context and last-layer MTL being slightly favoured over context-sensitive networks in general. In some cases, such as Obesity and Height, we see that the context-sensitive architecture has close to twice the error of the other architectures. While this difference is significant, we note that when the errors are compared to the standard deviation of the response variables, given in Table 1, these errors are all quite small. For the synthetic datasets, learned-context networks perform better on the Frequency data, and last-layer MTL networks perform better on the Sine and line data. This is expected, because these architectures match the data generating processes. The linear mixed model performed well on two of the datasets, Schools and Parkinson. In these cases, the neural network models are able to achieve similar performance, which is reassuring. To compare the robustness of the training procedure we re-run the training of all neural network models. We reuse the hyperparameters and run the training nine additional times. The best and worst RMSE values for the ten runs combined are given in Table 3. It also lists the number of times training runs diverged and had to be restarted. Overall the results are quite consistent. However, there are cases where the training appears less stable.
\begin{table} \begin{tabular}{l l l l l} \hline \hline Dataset & LC & CS & LL & LME \\ \hline Frequency & **0.106** & 0.136 & 0.122 & 0.360 \\ Sine and line & 0.325 & 0.34 & **0.316** & 3.843 \\ Schools & 10.203 & 10.363 & 10.314 & **10.108** \\ Parkinson & 2.914 & 2.856 & **2.670** & 2.776 \\ Bike sharing & 53.869 & 80.208 & **45.043** & 142.087 \\ Inflation & **1.833** & 2.526 & 2.501 & 11.095 \\ Obesity & **0.123** & 0.281 & 0.210 & 2.512 \\ Height & 0.394 & 0.61 & **0.347** & 5.055 \\ Death rate & **0.011** & 0.017 & 0.026 & 0.078 \\ Sarcos & **2.176** & 2.188 & 2.482 & 10.653 \\ \hline \hline \end{tabular} \end{table} Table 2: Root mean squared error on test data for all models. The best score on each dataset is highlighted. The column headers are learned-context neural networks (LC), context-sensitive networks (CS), last-layer MTL networks (LL), and linear mixed effect models (LME).

An example is last-layer MTL on the Height dataset, which yields a large span in relative errors. Again we note that these errors are small in absolute value, which makes such comparisons sensitive to the randomness in the training procedure. ### Effect of dataset size We now compare the sensitivity to dataset size, by training the neural network architectures on reduced datasets. The training datasets are cut to 50% and 10% of their original size. The same fraction of data is removed from each task, keeping the data balance from the original dataset. Test sets are kept at full size. Training and hyperparameter searches are conducted in the same way as for the full data case. The results are summarized in Table 4. Context-sensitive networks are on average slightly behind the others on all data sizes. A reduction in data size naturally leads to a reduction in performance for all models, but the learned-context architecture is less sensitive to this than the others.
### Effect of task parameter dimension Section 4.3 established theoretical differences in the number of task parameters required by learned-context networks and last-layer MTL networks. In this section we explore the practical impact of the task parameter dimension. To this end, learned-context networks and last-layer networks are trained on all datasets with different numbers of task parameters. All other hyperparameters are fixed. The models are trained on the full dataset. Figure 3 illustrates the results. Additional task parameters generally improve performance of both models on all datasets, up to a certain point. There does not seem to be a significant downside of excessively large task parameter dimensions. This means the hyperparameter searches will likely arrive at a larger than necessary value, unless additional model selection penalty terms are introduced.

\begin{table} \begin{tabular}{l l l l} \hline Dataset & LC & CS & LL \\ \hline Frequency & 1.00, 1.01 & 1.10, 1.29 (1) & 1.16, 1.21 \\ Sine and line & 1.00, 1.00 & 1.04, 1.06 & 0.97, 0.97 \\ Schools & 1.00, 1.00 & 1.01, 1.02 & 1.01, 1.02 \\ Parkinson & 0.95, 1.00 & 0.92, 0.98 & 0.90, 0.94 \\ Bike sharing & 0.94, 1.01 & 1.29, 1.51 & 0.82, 0.85 \\ Inflation & 0.98, 1.09 & 1.33, 1.58 & 1.35, 1.38 \\ Obesity & 0.94, 1.08 (1) & 2.10, 2.45 & 1.30, 1.86 \\ Height & 0.99, 1.14 & 1.53, 1.65 & 0.82, 1.37 (1) \\ Death rate & 0.99, 1.04 & 1.52, 1.62 & 2.24, 2.30 \\ Sarcos & 1.00, 1.04 & 0.99, 1.05 (1) & 1.11, 1.49 \\ \hline \end{tabular} \end{table} Table 3: Results from ten repeated training runs with the same hyperparameter settings. Reported are the min and max relative RMSE value in each case. The values are normalized by the performance of the learned-context neural network in Table 2. The number of diverging training runs, if any, are given in brackets.

Overall, the learned-context neural networks achieve better performance with fewer task parameters, but the gap is closed as the number increases.
As noted in Section 5.2, the sine and line dataset is easily represented as regression on four nonlinear basis functions. This is reflected by both models performing identically for four or more task parameters. The frequency dataset does not map to linear regression in the same way. As a consequence, the last-layer MTL network requires a significantly higher number of task parameters to achieve the same performance as the learned-context neural network. A benefit of a low-dimensional task parameter space is that it simplifies visualization and interpretation of the task parameters. Examples of this are given in Appendix D, where the models for Inflation, Bike sharing, and Death rate datasets are studied in detail. The task parameters are given sensible interpretations related to properties of the model domain. ### Hold-out task performance A key aspect differentiating the learned-context from a fixed encoding, such as the context-sensitive neural networks, is that similar tasks can be assigned similar task parameter values. A desirable feature would be that the parameters capture latent properties of the tasks. For this to happen, the task parameter space must in some sense be well-behaved. A qualitative study, given in Appendix D, supports the hypothesis that learned-context neural networks facilitate such behavior, and the task parameters capture properties fundamental to the domain, as opposed to separating tasks by memorization. This section attempts to quantify this behavior by studying the viability of estimating parameters for a new task after the shared neural network parameters are trained and fixed. This is similar to meta-learning. 
However, as discussed in Section 2.3, meta-learning takes this generalization into account in the original optimization objective. We only study these properties as a consideration _after_ the model has been trained using conventional multi-task learning. For these experiments, tasks are divided into two groups. \begin{table} \begin{tabular}{l|l l l|l l l|l l l} \multicolumn{4}{c}{100\% training data} & \multicolumn{3}{c}{50\% training data} & \multicolumn{3}{c}{10\% training data} \\ \hline Dataset & LC & CS & LL & LC & CS & LL & LC & CS & LL \\ \hline Frequency & **0.29** & 0.37 & 0.33 & **0.31** & 0.36 & 0.37 & **0.49** & 0.87 & 0.53 \\ Sine and line & 0.08 & 0.09 & **0.08** & 0.09 & 0.11 & **0.09** & **0.25** & 0.40 & 0.50 \\ Schools & **0.80** & 0.81 & 0.81 & **0.84** & 0.84 & 0.93 & **0.88** & 0.98 & 1.09 \\ Parkinson & 0.27 & 0.27 & **0.25** & 0.28 & 0.27 & **0.26** & 0.37 & **0.35** & 0.42 \\ Bike sharing & 0.30 & 0.44 & **0.25** & 0.37 & 0.56 & **0.28** & **0.54** & 0.74 & 0.70 \\ Inflation & **0.05** & 0.07 & 0.07 & **0.06** & 0.08 & 0.10 & **0.10** & 0.13 & 0.13 \\ Obesity & **0.02** & 0.05 & 0.04 & 0.04 & 0.05 & **0.02** & 0.04 & 0.10 & **0.02** \\ Height & 0.02 & 0.03 & **0.02** & 0.01 & 0.03 & **0.01** & 0.03 & 0.04 & **0.01** \\ Death rate & **0.10** & 0.15 & 0.22 & **0.12** & 0.17 & 0.19 & **0.20** & 0.30 & 0.58 \\ Sarcos & **0.12** & 0.12 & 0.13 & **0.12** & 0.12 & 0.16 & 0.13 & 0.12 & **0.10** \\ \hline \end{tabular} \end{table} Table 4: Relative test errors for models trained on reduced datasets. Errors are normalized by the response standard deviation from Table 1. Figure 3: Average test data error as a function of the number of task parameters. Learned-context neural networks are given in blue circles and last-layer MTL networks in green diamonds. Task parameter dimensions are set to 1, 2, 4, 8, and 16. Errors are normalized using the learned-context error from Table 2. A dotted line at one marks the base performance.
The base group are the tasks that participate in training of the shared parameters by minimizing Equation 12. The hold-out group are tasks that arrive after the shared parameters are fixed. Hold-out tasks are considered one at a time. The loss function for hold-out task \(j\) is \[\min_{\beta_{j}}\ \frac{1}{s^{2}}\sum_{i=1}^{n_{j}}\left(y_{ij}-f(x_{ij};\beta_{j},\alpha)\right)^{2}+\beta_{j}^{\top}D^{-1}\beta_{j}. \tag{13}\] This is equivalent to the maximum a posteriori estimate of the task parameters. The task parameter prior is \(\beta_{j}\sim\mathcal{N}(0,D)\), where \(D\) is found from the distribution of task parameters in the base group. The log-likelihood term is scaled by \(s^{2}\), which is the test error variance for the base group. A critical part of parameter estimation is the shape of the likelihood function. Using the Frequency dataset, we compare the hold-out task likelihood with the true data-generating process, which in this case is known. The result is given in Figure 4. Both parameter spaces display the same behavior for the different sets of data points. The modes of the likelihood develop predictably with additional data. This is evidence of a well-behaved task parameter space, and that the task parameters can capture properties that are fundamental to the problem, rather than just being a complex task encoding. For the synthetic Frequency case, we were free to generate a sufficiently large number of tasks and data points for this to be possible. The extent to which such a relationship between task parameters and underlying task properties can be identified is naturally dependent on the particular problem being studied. To further investigate the hold-out task viability, an experiment is conducted on all datasets. For each dataset, tasks are randomly divided into three approximately equal groups. One group is selected as the hold-out group, while the other two constitute the base group.
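The estimation in Equation 13 can be sketched as follows, with plain gradient descent on numerical gradients standing in for the LIPO solver used in the paper; `f(x, beta)` is assumed to be the trained shared network with frozen weights:

```python
import numpy as np

def estimate_holdout_beta(f, x, y, d_beta, D_inv, s2, steps=2000, lr=0.01, eps=1e-5):
    """MAP estimate of task parameters for a single hold-out task (Eq. 13),
    with the shared network frozen. f(x, beta) -> predictions. Plain gradient
    descent with numerical gradients stands in for the paper's LIPO solver;
    beta is zero-initialized, as recommended for task parameters."""
    beta = np.zeros(d_beta)

    def loss(b):
        r = y - f(x, b)
        # scaled negative log-likelihood plus Gaussian prior penalty
        return float(r @ r) / s2 + float(b @ D_inv @ b)

    for _ in range(steps):
        # central-difference gradient, one coordinate at a time
        grad = np.array([(loss(beta + eps * e) - loss(beta - eps * e)) / (2 * eps)
                         for e in np.eye(d_beta)])
        beta -= lr * grad
    return beta
```

Only the task parameters are optimized; the shared parameters \(\alpha\) never change during this step.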
First, a learned-context neural network is trained using the base group tasks. Task parameters for the hold-out group are then estimated one task at a time, with the shared parameters frozen. The process is repeated using all groups as the hold-out group, and results are averaged over all iterations. The loss in Equation 13 is minimized using the LIPO optimizer (Malherbe and Vayatis, 2017). The results are summarized in Figure 5 and Table 5. The outcome of this experiment will naturally vary with the datasets. In cases with few tasks or where tasks show a greater degree of uniqueness, it will be hard to identify sensible parameters for the hold-out tasks. This can be seen clearly for the Sarcos dataset, which only has seven tasks, and the tasks are quite different. For other cases, such as the Frequency dataset, there are many similar tasks, and the hold-out tasks are much more likely to resemble the base tasks. Inspecting Table 5, the hold-out task performance is never able to match the baseline from Table 2, but it is able to stay below a 20% error increase for half of the datasets. In real scenarios where new tasks appear, this leads to a trade-off between retraining the full model for a potential performance gain, or just estimating the new task parameters to save time and resources. Figure 4: Task parameter likelihood functions for a hypothetical new task constructed for the Frequency dataset. The task has \(\omega_{j}=1.5\), and up to four data points. The left column is the likelihood of the true data-generating function, which is a sine wave parametrized by its frequency. The middle column is the likelihood of the learned-context neural network, with one task parameter. The right column is the underlying function in black, the data points in orange, and the learned-context network evaluated with task parameters sampled from the likelihood in blue. The model is taken from the experiment in Section 5.6. The rows show the likelihoods for a different number of data points. As seen in Figure 5, it is beneficial to have more data, but the amount required for satisfactory parameter estimation can vary greatly. We note that when using fewer data points for the hold-out tasks, a lower task parameter dimension can be advantageous. This is observed in the Obesity and Height datasets, where all three task parameter dimensions yield similar performance using the full dataset, but the smaller task parameter spaces become increasingly favoured with fewer data points. Compare this to the results observed in Figure 3, where it was found that an increasing task parameter dimension was favourable when all tasks were trained simultaneously. The optimal choice of task-parameter dimension is then up to the specific use case. ## 6 Summary and discussion The theoretical and empirical results show that the learned-context neural network is a favorable architecture for MTL problems with similar tasks, and where tasks may have few data points. Its ability to learn a small and well-behaved task parameter space can be particularly advantageous for such problems. Theoretically, scalar task parameters were shown to be sufficient for a learned-context neural network to achieve universal approximation of all tasks (Section 4.3). The contextual inputs facilitate complex task adaptations even for simple constructed networks (Section 4.2). The ability to adapt to different tasks naturally increases with the size of the neural network. A potential downside to this flexibility is the possibility of overfitting to the individual tasks, which counters the desirable properties of multi-task learning. This puts emphasis on careful hyperparameter selection.
Experimentally it is seen that the ideal number of task parameters varies between problems, but the architecture can generally achieve a large degree of task adaptation with only a few task parameters (Section 5.6). Increasing the task parameter dimension is observed to have a beneficial effect in cases where all tasks are trained simultaneously (Section 5.7). However, if the shared network model is to be used for new tasks, then a smaller parameter space may be preferable. The ideal task parameter dimension will likely have to be set by the practitioner in light of domain knowledge and the desired application of the model. The architecture facilitates task parameters that capture latent properties of the tasks (Section 5.7 and Appendix D), which can enable convenient workflows for updating and maintaining such models in practice. Learned-context neural networks performed similarly to the benchmark architectures on the full datasets (Section 5.4). \begin{table} \begin{tabular}{l r r r} \hline Dataset & \(d_{\beta}=2\) & \(d_{\beta}=4\) & \(d_{\beta}=8\) \\ \hline Frequency & **1.07** & 1.56 & 1.4 \\ Sine and line & 1.28 & **1.26** & 1.72 \\ Schools & 1.06 & **1.04** & 1.05 \\ Parkinson & 1.11 & 1.05 & **1.01** \\ Bike sharing & 1.27 & **1.18** & 1.23 \\ Inflation & 2.08 & **1.74** & 2.9 \\ Obesity & 4.17 & **2.77** & 9.74 \\ Height & 2.18 & **1.67** & 2.4 \\ Death rate & **1.02** & 1.25 & 1.88 \\ Sarcos & **3.81** & 4.89 & 9.71 \\ \hline \end{tabular} \end{table} Table 5: Mean test error for hold-out tasks when trained on 100% of the training set. Errors are normalized using the learned-context network performance from Table 2. Figure 5: Average test set errors for hold-out tasks. Errors are normalized using the learned-context network performance from Table 2. Hold-out tasks are trained on 1%, 5%, 10%, 25%, 50%, 75%, and 100% of the task data, and evaluated on the full test set. Models with 2, 4, and 8 task parameters are used.
On the reduced datasets the learned-context neural network had less performance deterioration than the others (Section 5.5). Training of the learned-context networks appears to be robust, and comparable to the other architectures discussed (Section 5.4). The construction used to prove Theorem 2 gives a motivation for initializing the task parameters to zero. Randomly initialized task parameters would have a higher chance of getting stuck in local minima with "task-encoding" properties. Zero initialization, on the other hand, encourages similar tasks to follow the same trajectories during optimization, which enables a grouping of similar tasks. This likely promotes a well-behaved parameter space, and reduces the chance that multiple regions represent the same phenomena. ## 7 Conclusion The learned-context neural network is a simple, but powerful, multi-task learning architecture. Its properties make it well suited to problems with moderate amounts of data, and in situations where a low-dimensional, well-behaved task parameter space is beneficial for application and analysis of the model. The task adaptation mechanism yields universal approximation capabilities with only a single task parameter. Empirical studies show that the ideal task parameter dimension varies with the domain and model application, but the number of required task parameters is generally lower than that of comparable architectures. ## Acknowledgements This work was supported by Solution Seeker Inc. and The Research Council of Norway. ## Appendix A Example network The following describes the neural network example used in Section 4.2. The input vector is \(z^{\top}=\begin{bmatrix}x&\beta\end{bmatrix}\), with both \(x\) and \(\beta\) scalar.
The network has three layers, with weights \[W_{1} =\begin{bmatrix}1&L_{1}\\ 1&L_{2}\\ 1&L_{3}\\ 1&L_{4}\end{bmatrix},b_{1}=-\frac{1}{2}\begin{bmatrix}0\\ 1\\ 2\\ 3\end{bmatrix},\] \[W_{2} =\begin{bmatrix}2&-4&0&0\\ 0&0&2&-4\end{bmatrix},b_{2}=0,\] \[W_{3} =\begin{bmatrix}1&1\end{bmatrix},b_{3}=0.\] For simplicity, we ignore the residual connections in this case. Recall that we only consider the ReLU activation function. The first layer maps the inputs to a four-dimensional hidden layer. Ignoring the task parameter, this mapping creates four identical unit slopes starting at 0.0, 0.5, 1.0, and 1.5. The second layer adds pairs of the previous hidden layer together, creating two triangle-shaped responses as the second hidden layer. These are added together in the third layer. The output is seen as the black lines in Figure 2. ### Derivation of dilation effect We show that setting \(L=b_{1}\) in the network above creates a dilation effect. This was explored in Section 4.2. To simplify the analysis we take \(\beta\in(-1,1)\). Let \(z_{p}^{(2)}=x+L_{p}\beta+b_{1,p}\) be the \(p\)th element of the first hidden layer.
Substituting \(L_{p}=b_{1,p}=-(p-1)/2\), we get \[g(z_{p}^{(2)})=\begin{cases}x-\frac{1+\beta}{2}(p-1)&\text{ if }x\geq\frac{1+\beta}{2}(p-1)\\ 0&\text{ otherwise.}\end{cases}\] Continuing the notation for the second hidden layer, we get \[g(z_{1}^{(3)}) =\begin{cases}0,&\text{ if }x<0,\\ 2x,&\text{ if }x\in\left[0,\frac{1+\beta}{2}\right),\\ -2x+2(1+\beta),&\text{ if }x\in\left[\frac{1+\beta}{2},1+\beta\right),\\ 0,&\text{ if }x\geq 1+\beta,\end{cases}\] \[g(z_{2}^{(3)}) =\begin{cases}0,&\text{ if }x<1+\beta,\\ 2x-2(1+\beta),&\text{ if }x\in\left[1+\beta,3\frac{1+\beta}{2}\right),\\ -2x+4(1+\beta),&\text{ if }x\in\left[3\frac{1+\beta}{2},2(1+\beta)\right),\\ 0,&\text{ if }x\geq 2(1+\beta),\end{cases}\] The output is then given as \(y=g(z_{1}^{(3)})+g(z_{2}^{(3)})\), which is a piecewise linear function interpolating the points \((x,y)=(0,0),(0.5(1+\beta),1+\beta),(1+\beta,0),(1.5(1+\beta),1+\beta),(2(1+\beta),0)\). This is equivalent to a dilation of both \(x\) and \(y\) with a factor \(1+\beta\). ## Appendix B Optimizer and learning rate schedule All neural networks are implemented and trained with PyTorch (Paszke et al., 2019). They are trained on a single GPU. Optimization is done by stochastic gradient descent with momentum (Bottou et al., 2018) and a learning rate scheduler. The learning rate scheduler has two stages. Starting at \(10^{-8}\), it is first increased linearly during a warm-up stage (Nakamura et al., 2021; Arpit et al., 2019) until it reaches the peak learning rate. The second stage is to train the model over several epochs until the loss converges (Chee and Toulis, 2018; Lang et al., 2019), at which point the learning rate is reduced by half. This is repeated until the learning rate is reduced back down below \(10^{-8}\), the maximum number of epochs is reached, or the loss is sufficiently close to zero. Loss convergence is determined by inspecting a window of the last epoch losses.
A linear regression model with slope and intercept is fitted to the loss values. This is compared to a model with intercept only, using a likelihood ratio test (Wilks, 1938). Convergence is flagged if the test is above 0.51, which is an aggressive threshold. The test is implemented in Statsmodels (Seabold and Perktold, 2010). The new learning rate is kept for a minimum number of epochs equal to the window size. The number of epochs and data batches vary with dataset size. For datasets with less than 100 000 data points we use 10 000 epochs of two batches, otherwise we use 1000 epochs of 20 batches. The warm-up stage is equal to 10% of the maximum allowed epochs, and the loss convergence is found with a window size equal to 1% of the epochs. Peak learning rate and momentum are treated as hyperparameters. Neural network parameters are initialized by the He initializer (He et al., 2015). Task parameters are initialized to zero. Linear mixed effect models are implemented and optimized with Statsmodels (Seabold and Perktold, 2010). All data is transformed to approximately unit scale before training and evaluating models. Still, all figures and errors are given in the original units, unless stated otherwise. ## Appendix C Hyperparameters All three neural network architectures require the number of residual blocks, hidden layer size, and network parameter regularization factor as hyperparameters. They also require the peak optimizer learning rate. Additionally, learned-context networks and last-layer MTL networks require the number of task parameters and task parameter regularization factor. Hyperparameters are optimized using the training data in Table 1. The training data is split into two parts, using one part for training and one part for validation during the search. A final model is then trained on all training data using the hyperparameter configuration with the best validation error.
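The loss-convergence check might be sketched as below; this minimal re-implementation uses a closed-form chi-square p-value instead of Statsmodels, so it may differ in detail from the exact test used in the paper:

```python
import math
import numpy as np

def loss_converged(losses, p_threshold=0.51):
    """Flag convergence over a window of epoch losses: fit a slope+intercept
    line and an intercept-only model, compare them with a likelihood-ratio
    test, and flag convergence when the p-value exceeds p_threshold (i.e.,
    the slope adds no significant explanatory power, so the loss is flat)."""
    y = np.asarray(losses, dtype=float)
    n = len(y)
    t = np.arange(n)
    rss0 = float(((y - y.mean()) ** 2).sum())            # intercept-only RSS
    slope, intercept = np.polyfit(t, y, 1)               # slope + intercept fit
    rss1 = float(((y - (slope * t + intercept)) ** 2).sum())
    if rss1 <= 0.0:                 # perfect linear fit
        return rss0 <= 0.0          # converged only if the window is also flat
    stat = max(n * math.log(rss0 / rss1), 0.0)           # LR statistic, ~chi2(1)
    p_value = math.erfc(math.sqrt(stat / 2.0))           # chi2(1) survival function
    return p_value > p_threshold
```

A clearly falling loss window yields a large likelihood-ratio statistic and a near-zero p-value, so the learning rate is kept; a flat window triggers the halving step described above.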
Optimization is done by a variant of the LIPO solver (Malherbe and Vayatis, 2017) implemented in dlib (King, 2009). The search runs for 25 iterations in all cases. The range of values explored in the searches varies by dataset. Details are summarized in Table 6. To limit the number of hyperparameters, momentum is fixed to 0.7 and the number of residual blocks is fixed to two for all datasets and architectures. These values are found as reasonable choices for most cases. Due to a large number of experiments and hyperparameter searches, we observe that some searches arrive at an optimal learning rate that is too high to be used in training the final model. It can be due to randomness in weight initialization and batch samples that allowed a high learning rate to succeed during the trials, only to diverge during final training. To address this, we multiply the peak learning rate by 0.9 and retry the training if the loss diverges. Full hyperparameter searches are conducted for the experiments in Section 5.4 and Section 5.5. For the experiment in Section 5.6, the hyperparameters are copied from the final models in Section 5.4. For the experiment in Section 5.7, the hyperparameters are copied from the experiment using 50% training data in Section 5.5. ## Appendix D Qualitative study of task parameters This section provides additional visualizations of the learned-context neural networks trained in Section 5. The intention is to give further insight into the qualitative behavior of the task parameters. We study the Inflation, Bike sharing, and Death rate datasets because they have a low-dimensional input space that is convenient to visualize. The models with a scalar task parameter from Section 5.6 are used for all visualizations. Figure 6 illustrates the Inflation data and the fitted model.
\begin{table} \begin{tabular}{l l l} \hline Name & Min & Max \\ \hline Peak learning rate & \(10^{-4}\) & 1.5 \\ Hidden layer size & 50 & 500 \\ Shared parameter regularization & \(10^{-15}\) & \(10^{-5}\) \\ Task parameter regularization & \(10^{-15}\) & \(10^{-3}\) \\ Number of task parameters & 1 & min(25, \(m\)) \\ \hline \end{tabular} \end{table} Table 6: Summary of hyperparameter search space. The number of task parameters is limited to the number of tasks \(m\) in the dataset, up to a maximum of 25. The model appears as a smoothed version of the data. The task parameters range from -0.2 to 0.2. Higher task parameter values seem to represent countries where the increase begins in the 1990s, and lower values represent countries where the increase is well underway in the 1970s. The transition between these two categories is highly non-linear. For a task parameter equal to -0.2, the curve is initially steep and flattens towards the end. The curve then gradually transitions to an almost linear trend for values around -0.1. For even higher values, the curve starts with a flat phase that eventually ramps up, with the start time increasing along with the task parameter. The bike-sharing dataset relates the number of bike trips to weather, calendar features, and time of day, over two years. Each month is a task, yielding 24 tasks. Figure 7 illustrates the average number of bike trips and the task parameters for each month. There is an increasing number of trips over the two years, combined with a seasonal effect of more activity during the summer months. The increase in activity in the second year may be due to an increase in the capacity or popularity of the system. The task parameters nicely follow the average trips taken during peak hours. A possible interpretation is that the task parameters capture some latent state of the system, such as the number of bikes being distributed.
We emphasize that while the parameters are visualized as a connected line, they are _not_ regularized to follow a time-dependent pattern. All parameters are assumed to be independent and centered around zero. However, a time-dependent regularization between the task parameters could be considered if the model were to be used in practice. Figure 6: Inflation dataset. The left plot is the data points, where each line is the normalized inflation rate for a country (recall that each country is a task). The right plot is the fitted models evaluated at the data points. In both plots, the lines are colored by the value of the task parameter for that country. The death rate dataset studies the rate of death at different ages, as a function of country, year, and sex. Age is grouped in intervals of five years, and the maximum age included is 80. All ages above 80 are grouped together with a death rate of one, which has been removed from the dataset in this study. The data is given in five different years, 2000, 2005, 2010, 2015, and 2019. Each country and year is modeled as an individual task, making each country associated with five tasks. This relationship is _not known_ by the model. Figure 8 illustrates the task parameters, with six countries highlighted. The year 2010 is investigated in detail in Figure 9, which illustrates the data and fitted models for the highlighted countries in Figure 8. It seems that lower task parameter values relate to higher death rates in general. For most countries the task parameters seem to be incrementally increased with the years, indicating a decrease in death rates for ages below 80. For instance, task parameters for Ethiopia (ETH) show a steady increase over time. This correlates with the increase in life expectancy at birth observed over the same period (GBD 2019 Ethiopia Subnational-Level Disease Burden Initiative Collaborators, 2022). Haiti (HTI) 2010 stands out in Figure 8.
This is due to the devastating earthquake of January 2010, which had a large death toll (Cavallo et al., 2010). In Figure 9 we see that this leads to a curve shape that is quite unique, with an elevated death rate across all ages. In this case, the other tasks fail to supply the information missing from the training data, and the resulting model overshoots on the test data. For the other tasks, the gaps left by the test data are covered nicely by related tasks. For instance, the female death rate in Zambia (ZMB) is almost perfectly captured with only six data points available for training, of which only one represents ages 50 and above. Figure 7: Bike sharing dataset. The top plot is the average number of trips at different times of day for each month, with the standard deviations given as transparent bands. This plot only includes data from workdays. The bottom plot is the identified task parameter for each month. Figure 8: Death rate dataset. Visualization of task parameters. Each combination of country and year is an independent task in the model formulation, but tasks from the same country are connected by a line. Six countries are highlighted in colors. Figure 9: Death rate dataset. Data and fitted models for six countries for the year 2010. The country and corresponding task parameters for this year are given in the titles. Data is given in black and colored markers, where black is the training data and colored is the test data. Circle markers are used for males and square markers for females. Fitted models are given in dashed lines for males and solid lines for females. We emphasize that the task parameters are not forced into the relationships seen in Figure 6, Figure 7, and Figure 8. The continuous and interpretable task parameters are only motivated by regularization towards zero. The discovered structures are due to the information in the training data.
2305.05218
Graph Neural Network-based surrogate model for granular flows
Accurate simulation of granular flow dynamics is crucial for assessing various geotechnical risks, including landslides and debris flows. Granular flows involve a dynamic rearrangement of particles exhibiting complex transitions from solid-like to fluid-like responses. Traditional continuum and discrete numerical methods are limited by their computational cost in simulating large-scale systems. Statistical or machine learning-based models offer an alternative. Still, they are largely empirical, based on a limited set of parameters. Due to their permutation-dependent learning, traditional machine learning-based models require huge training data to generalize. To resolve these problems, we use a graph neural network, a state-of-the-art machine learning architecture that learns local interactions. Graphs represent the state of dynamically changing granular flows and the interaction laws, such as energy and momentum exchange between grains. We develop a graph neural network-based simulator (GNS) that takes the current state of granular flow and predicts the next state using Euler explicit integration by learning the local interaction laws. We train GNS on different granular trajectories. We then assess the performance of GNS by predicting granular column collapse. GNS accurately predicts flow dynamics for column collapses with different aspect ratios unseen during training. GNS is hundreds of times faster than high-fidelity numerical simulators. The model also generalizes to domains much larger than the training data, handling more than twice the number of particles than it was trained on.
Yongjin Choi, Krishna Kumar
2023-05-09T07:28:12Z
http://arxiv.org/abs/2305.05218v2
# Graph Neural Network-based surrogate model for granular flows ###### Abstract Accurate simulation of granular flow dynamics is crucial for assessing various geotechnical risks, including landslides and debris flows. Granular flows involve a dynamic rearrangement of particles exhibiting complex transitions from solid-like to fluid-like responses. Traditional continuum and discrete numerical methods are limited by their computational cost in simulating large-scale systems. Statistical or machine learning-based models offer an alternative. Still, they are largely empirical, based on a limited set of parameters. Due to their permutation-dependent learning, traditional machine learning-based models require huge training data to generalize. To resolve these problems, we use a graph neural network, a state-of-the-art machine learning architecture that learns local interactions. Graphs represent the state of dynamically changing granular flows and the interaction laws, such as energy and momentum exchange between grains. We develop a graph neural network-based simulator (GNS) that takes the current state of granular flow and predicts the next state using Euler explicit integration by learning the local interaction laws. We train GNS on different granular trajectories. We then assess the performance of GNS by predicting granular column collapse. GNS accurately predicts flow dynamics for column collapses with different aspect ratios unseen during training. GNS is hundreds of times faster than high-fidelity numerical simulators. The model also generalizes to domains much larger than the training data, handling more than twice the number of particles than it was trained on. keywords: graph neural network, learned physics simulator, granular column collapse, surrogate model + Footnote †: journal: Computers and Geotechnics ## 1 Introduction Landslides cause extensive material displacement and significant infrastructure damage.
Accurate modeling of granular flow runout is crucial to understanding the impact of landslides. Numerical methods, such as particle-based and continuum approaches, are often employed to assess landslide runouts. Particle-based approaches, like the Discrete Element Method (DEM) (Staron and Hinch, 2005; Kermani et al., 2015; Kumar et al., 2017), can model grain-grain interactions but are limited to representative elemental volumes. Traditional continuum approaches, such as the Finite Element Method, can predict the initiation of such failures but suffer from mesh distortions when capturing runout dynamics. Hybrid Eulerian-Lagrangian approaches like the Material Point Method (MPM) (Mast et al., 2014; Kumar et al., 2017) can simulate large-deformation flows without undergoing mesh distortions. However, the hybrid nature of MPM requires tracking both the grid and the material points, which is computationally expensive. Multiple full-scale simulations are necessary for a comprehensive evaluation of runout hazard scenarios. Similarly, a back analysis to estimate material parameters requires a broad parametric sweep involving hundreds to thousands of simulations. However, current state-of-the-art numerical methods are restricted to, at most, a few full-scale simulations, limiting our ability in scenario testing or back analysis. An alternative to numerical simulations is the development of statistical or machine learning models to evaluate landslide risks. These surrogate models build correlations between landslide risks and their influencing factors through simple empirical correlation without considering the complex granular flow dynamics. Several studies adopt probabilistic approaches, such as Monte Carlo simulation and Bayesian analysis, to evaluate the landslide runout distance based on factors including topology and geology (Gao et al., 2021; Zeng et al., 2021; Sun et al., 2021; Zhao et al., 2022). 
Machine learning models can predict the travel distance and potential path of granular flows based on the geometry and ground properties (Durante and Rathje, 2021; Ju et al., 2022; Yang and Hambleton, 2021). Although researchers have been able to correlate the runout of granular flow based on statistical or data-driven techniques, these techniques do not explicitly consider granular flow dynamics--the actual physics governing the flow behavior. Thus, due to a lack of physics, these statistical models do not generalize outside their training range in modeling other boundary conditions or geometry. Building surrogate models that replicate the entire granular flow dynamics is challenging. The surrogate model must capture complex behaviors involving highly non-linear, static, collisional, and frictional dissipation regimes (Soga et al., 2016). Learning fundamental interaction laws is crucial for generalizing beyond the training datasets. Techniques like max-pooling in convolutional neural networks learn spatially invariant behavior, i.e., they learn features irrespective of their spatial location. However, CNNs are primarily limited to mesh-based systems with fixed neighbors. Granular flow is a dynamic system where neighbor interactions evolve throughout the runout (Lajeunesse et al., 2005; Zhang et al., 2016; Soga et al., 2016). A traditional Multi-Layer Perceptron (MLP) could model such a dynamic system. However, generalizing MLPs requires an exhaustive dataset to overcome combinatorial dependence, i.e., the outputs of the models depend on the order of the inputs (Battaglia et al., 2018; Haeri and Skoniczny, 2022). Unreasonably large training datasets are needed to map the entire parameter space of particle arrangements and dynamics. 
To address these limitations, we utilize graph neural networks (GNNs), a state-of-the-art machine learning architecture that enables permutation-invariant learning (Battaglia et al., 2016, 2018; Sanchez-Gonzalez et al., 2020), to develop a data-driven surrogate model for granular flow dynamics. At any given time, the physical state of the granular system is represented as a graph. We develop a GNN-based Simulator (GNS) that operates on the graph to learn the fundamental interaction law. We demonstrate the capability of GNS in replicating granular flow dynamics by studying the collapse of a granular column. Granular column collapse is a simple physical experiment that captures the overall dynamics of large-scale runouts. GNS, trained on granular flow trajectories, successfully predicts the runout dynamics of column collapse outside its training range and generalizes to upscaled domain sizes.

## 2 Method

This section describes the individual components of GNS: graphs, graph neural networks (GNNs), and message passing.

### 2.1 Graph Neural Networks and Message Passing

#### 2.1.1 Graphs

Graphs can represent interactions in physical systems (Battaglia et al., 2016; Sanchez-Gonzalez et al., 2020). We represent the granular media as a graph \(G=(\mathbf{V},\mathbf{E})\) consisting of a set of vertices (\(\mathbf{v}_{i}\in\mathbf{V}\)) representing the soil grains or aggregations of grains, and edges (\(\mathbf{e}_{i,j}\in\mathbf{E}\)) connecting pairs of vertices (\(\mathbf{v}_{i}\) and \(\mathbf{v}_{j}\)) representing the interaction between the grains. Consider an example involving interaction between grains in a box (see fig. 1). We encode the state of the physical system, such as the kinematics of the grains and their interactions (fig. 1a and fig. 1d), as a graph (fig. 1b and fig. 1c). The vertices describe the position and velocity of the grains, and the edges describe the directional interaction between them, shown as arrows in fig. 1b and fig. 1c.
The state of grain \(i\) is represented as a vertex feature vector \(\mathbf{v}_{i}\). The vertex feature vector includes velocities, mass, and distance to the boundary. The edge feature vector \(\mathbf{e}_{i,j}\) includes information about the interaction between grains \(i\) and \(j\), such as the relative distance between the grains. Thus, we can store and process the state of granular bodies and their interactions as graphs. Graphs offer a permutation-invariant encoding of the data: because the interactions between grains are represented as edge connections, the encoding is independent of the order of the vertices or their position in Euclidean space. For example, by storing the relative positional information in \(\mathbf{e}_{i,j}\), rather than the absolute position, machine learning models operating on these graphs learn the interaction behavior for different relative distances between grains. Therefore, graphs can efficiently represent the physical state of granular flow involving multi-grain interactions.

#### 2.1.2 Graph neural networks (GNNs)

A GNN takes a graph \(G=(\mathbf{V},\mathbf{E})\) as an input, computes properties and updates the graph, and outputs an updated graph \(G^{\prime}=(\mathbf{V}^{\prime},\mathbf{E}^{\prime})\) with an identical structure, where \(\mathbf{V}^{\prime}\) and \(\mathbf{E}^{\prime}\) are the sets of updated vertex and edge features (\(\mathbf{v}^{\prime}_{i}\) and \(\mathbf{e}^{\prime}_{i,j}\)). Message passing is the process of updating the graph by propagating information through it. In the grains-in-a-box example, the GNN first takes the original graph \(G=(\mathbf{V},\mathbf{E})\) (fig. 1b) that describes the current state of the physical system (\(\mathbf{X}_{t}\)).
The GNN then updates the state of the physical system through message passing, which models the exchange of energy and momentum between the grains, and returns an updated graph \(G^{\prime}=(\mathbf{V}^{\prime},\mathbf{E}^{\prime})\) (fig. 1c). We decode \(G^{\prime}\), the output of the GNN, to extract information related to the future state of the physical system (\(\mathbf{X}_{t+1}\)), such as the next position or acceleration of the grains (fig. 1d).

#### 2.1.3 Message passing

Message passing consists of three operations: message construction (eq. (1)), message aggregation (eq. (2)), and the vertex update function (eq. (3)). \[\mathbf{e}^{\prime}_{i,j}=\phi_{\mathbf{\Theta}_{\phi}}\left(\mathbf{v}_{i},\mathbf{v}_{j}, \mathbf{e}_{i,j}\right) \tag{1}\] \[\bar{\mathbf{v}}_{i}=\Sigma_{j\in N(i)}\ \mathbf{e}^{\prime}_{i,j} \tag{2}\] \[\mathbf{v}^{\prime}_{i}=\gamma_{\mathbf{\Theta}_{\gamma}}\left(\mathbf{v}_{i},\bar{\mathbf{v}}_{i}\right) \tag{3}\] The subscripts \(\mathbf{\Theta}_{\phi}\) and \(\mathbf{\Theta}_{\gamma}\) represent sets of learnable parameters in each computation. The message construction function \(\phi_{\mathbf{\Theta}_{\phi}}\) (eq. (1)) takes the feature vectors of the receiver and sender vertices (\(\mathbf{v}_{i}\) and \(\mathbf{v}_{j}\)) and the feature vector of the edge connecting them (\(\mathbf{e}_{i,j}\)) and returns an updated edge feature vector \(\mathbf{e}^{\prime}_{i,j}\) as the output. \(\phi_{\mathbf{\Theta}_{\phi}}\) is a matrix operation involving the learnable parameters \(\mathbf{\Theta}_{\phi}\). The updated edge feature vector \(\mathbf{e}^{\prime}_{i,j}\) is the message sent from vertex \(j\) to \(i\). Figure 1(a) shows an example of constructing messages on edges directed to vertex \(0\) originating from vertices \(1\), \(2\), and \(3\) (\(\mathbf{e}^{\prime}_{0,1}\), \(\mathbf{e}^{\prime}_{0,2}\), \(\mathbf{e}^{\prime}_{0,3}\)).
Here, we define the message construction function \(\phi_{\mathbf{\Theta}_{\phi}}\) as \(((\mathbf{v}_{i}+\mathbf{v}_{j})\times\mathbf{e}_{i,j})\times\mathbf{\Theta}_{\phi}\). The updated feature vector \(\mathbf{e}^{\prime}_{0,1}\) is computed as \(((\mathbf{v}_{0}+\mathbf{v}_{1})\times\mathbf{e}_{0,1})\times\mathbf{\Theta}_{\phi}\), where \(\mathbf{v}_{0}\) and \(\mathbf{v}_{1}\) are the receiver and sender vertex feature vectors, and \(\mathbf{e}_{0,1}\) is their edge feature vector. Assuming all values of \(\mathbf{\Theta}_{\phi}\) are \(1.0\) for simplicity, we obtain \(\mathbf{e}^{\prime}_{0,1}=(([1,\ 0,\ 2]+[1,\ 3,\ 2])\times[2,\ 1,\ 0]^{T})\times 1=[4,\ 3,\ 0]\). Similarly, we can compute the messages \(\mathbf{e}^{\prime}_{0,2}=[0,\ 3,\ 9]\) and \(\mathbf{e}^{\prime}_{0,3}=[3,\ 4,\ 9]\). The next step in message passing is the message aggregation \(\Sigma_{j\in N(i)}\) (eq. (2)), where \(N(i)\) is the set of sender vertices \(j\) connected to vertex \(i\). It collects all the messages directed to vertex \(i\) and aggregates them into a single vector \(\bar{\mathbf{v}}_{i}\) with the same dimension as the messages. The aggregation rule can be an element-wise vector summation or averaging; hence it is a permutation-invariant computation. In fig. 1(a), the aggregated message \(\bar{\mathbf{v}}_{0}=[7,\ 10,\ 18]\) is the element-wise summation of the messages directed to vertex \(0\): \(\bar{\mathbf{v}}_{0}=\mathbf{e}^{\prime}_{0,1}+\mathbf{e}^{\prime}_{0,2}+\mathbf{e}^{\prime}_{0,3}\). The final step of message passing is updating the vertex features using eq. (3). It takes the aggregated message (\(\bar{\mathbf{v}}_{i}\)) and the current vertex feature vector \(\mathbf{v}_{i}\), and returns an updated vertex feature vector \(\mathbf{v}^{\prime}_{i}\), using predefined vector operations involving the learnable parameters \(\mathbf{\Theta}_{\gamma}\). Figure 1(b) shows an example of the update at vertex \(0\).
Here, we define the update function \(\gamma_{\mathbf{\Theta}_{\gamma}}\) as \(\mathbf{\Theta}_{\gamma}\left(\mathbf{v}_{i}+\bar{\mathbf{v}}_{i}\right)\). The updated feature vector \(\mathbf{v}^{\prime}_{0}\) is computed as \(\mathbf{\Theta}_{\gamma}\left(\mathbf{v}_{0}+\bar{\mathbf{v}}_{0}\right)\). Assuming all parameters in \(\mathbf{\Theta}_{\gamma}\) are \(1.0\) for simplicity, we obtain \(\mathbf{v}_{0}^{\prime}=\mathbf{v}_{0}+\bar{\mathbf{v}}_{0}=[1,\ 0,\ 2]+[7,\ 10,\ 18]=[8,\ 10,\ 20]\). Similarly, we update the other vertex features \((\mathbf{v}_{1}^{\prime},\ \mathbf{v}_{2}^{\prime},\ \mathbf{v}_{3}^{\prime})\). After message passing, the graph vertex and edge features \((\mathbf{v}_{i}\) and \(\mathbf{e}_{i,j})\) are updated to \(\mathbf{v}_{i}^{\prime}\) and \(\mathbf{e}_{i,j}^{\prime}\). The GNN may include multiple message passing steps to propagate the information further through the network. Unlike the example shown above, where we assume a constant value of \(1.0\) for the learnable parameters, in a supervised learning environment, the optimization algorithm finds the set of best learnable parameters \((\mathbf{\Theta}_{\phi},\mathbf{\Theta}_{\gamma})\) for the message passing operation.

Figure 1: An example of a graph and a graph neural network (GNN) that processes the graph (modified from Battaglia et al. (2018)): (a) A state of the current physical system (\(\mathbf{X}_{t}\)) where the grains are bouncing in a box boundary; (b) Graph representation of the physical system (\(G\)). There are three vertices representing grains and six edges representing their directional interaction shown as arrows; (c) The updated graph (\(G^{\prime}\)) that the GNN outputs through message passing; (d) The predicted future state of the physical system (\(\mathbf{X}_{t+1}\)) (i.e., the positions of the grains at the next timestep) decoded from the updated graph.
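The worked example above can be reproduced in a few lines of NumPy (a minimal sketch: element-wise products stand in for the matrix operations, and all learnable parameters are fixed at \(1.0\) as in the text):

```python
import numpy as np

# Feature vectors from the worked example.
v0 = np.array([1, 0, 2])    # receiver vertex feature
v1 = np.array([1, 3, 2])    # sender vertex feature
e01 = np.array([2, 1, 0])   # edge feature between vertices 0 and 1

# Message construction: ((v_i + v_j) x e_ij) x Theta_phi with Theta_phi = 1.0.
msg01 = (v0 + v1) * e01     # element-wise product -> [4, 3, 0]

# Messages from vertices 2 and 3, as given in the text.
msg02 = np.array([0, 3, 9])
msg03 = np.array([3, 4, 9])

# Message aggregation: element-wise summation over all senders.
v0_bar = msg01 + msg02 + msg03   # -> [7, 10, 18]

# Vertex update: Theta_gamma x (v_i + v_bar_i) with Theta_gamma = 1.0.
v0_new = v0 + v0_bar
print(v0_new)                    # [ 8 10 20]
```

In the trained GNS, the scalar parameters here are replaced by MLPs whose weights are learned from data.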
### 2.2 Graph Neural Network-based Simulator (GNS)

In this study, we use a GNN as a surrogate simulator to model granular flow behavior. Figure 3 shows an overview of the general concepts and structure of the GNN-based simulator (GNS) proposed by Sanchez-Gonzalez et al. (2020). Consider a granular flow domain represented as material points (fig. 3a), each of which represents a collection of grains. In GNS, we represent the physical state of the granular domain at time \(t\) with a set of \(\mathbf{x}_{i}^{t}\) describing the state and properties of each material point. The GNS takes the current state of the granular flow \(\mathbf{x}_{i}^{t}\in\mathbf{X}_{t}\) and predicts its next state \(\mathbf{x}_{i}^{t+1}\in\mathbf{X}_{t+1}\) (fig. 3a). The GNS consists of two components: a parameterized function approximator \(d_{\Theta}\) and an updater function (fig. 3b). The function approximator \(d_{\Theta}\) takes \(\mathbf{X}_{t}\) as an input and outputs dynamics information \(\mathbf{y}_{i}^{t}\in\mathbf{Y}_{t}\). The updater then computes \(\mathbf{X}_{t+1}\) using \(\mathbf{Y}_{t}\) and \(\mathbf{X}_{t}\). Figure 3c shows the details of \(d_{\Theta}\), which consists of an encoder, a processor, and a decoder. The encoder (fig. 3-c1) takes the state of the system \(\mathbf{X}^{t}\) and embeds it into a latent graph \(G_{0}=(\mathbf{V}_{0},\ \mathbf{E}_{0})\) to represent the relationships between material points. The vertices \(\mathbf{v}_{i}^{t}\in\mathbf{V}_{0}\) contain latent information of the current state of the material point, and the edges \(\mathbf{e}_{i,j}^{t}\in\mathbf{E}_{0}\) contain latent information of the pair-wise relationship between material points. Next, the processor (fig. 3-c2) converts the input graph \(G_{0}\) to the output graph \(G_{M}\) through \(M\) stacks of message-passing GNNs (\(G_{0}\to G_{1}\to\cdots\to G_{M}\)). The message passing computes the interaction between vertices. Finally, the decoder (fig.
3-c3) extracts the dynamics of the material points \((\mathbf{Y}^{t})\) from \(G_{M}\), such as the acceleration of the physical system. The entire simulation (fig. 3a) involves running the GNS surrogate model through \(K\) timesteps, predicting from the initial state \(\mathbf{X}_{0}\) to \(\mathbf{X}_{K}\) \(\left(\mathbf{X}_{0},\ \mathbf{X}_{1},\ \dots,\ \mathbf{X}_{K}\right)\), updating at each step (\(\mathbf{X}_{t}\rightarrow\mathbf{X}_{t+1}\)). We call this successive prediction from GNS the "rollout". In the following sections, we explain the details of our input \(\mathbf{X}^{t}\) (fig. 3a); the encoder, processor, and decoder in \(d_{\Theta}\) (fig. 3c); and how we compute \(\mathbf{X}^{t+1}\) from \(\mathbf{X}^{t}\) using the GNS updater function (fig. 3b).

#### 2.2.1 Input

The input to the GNS, \(\mathbf{x}_{i}^{t}\in\mathbf{X}^{t}\) (eq. (4)), is a vector consisting of the current material point position \(\mathbf{p}_{i}^{t}\), the material point velocity context \(\dot{\mathbf{p}}_{i}^{\leq t}\), information on boundaries \(\mathbf{b}_{i}^{t}\), and the material point type embedding \(\mathbf{f}\). The current state \(\mathbf{x}_{i}^{t}\) is used to construct the vertex feature \(\mathbf{v}_{i}^{t}\) (eq. (6)). \[\mathbf{x}_{i}^{t}=\left[\mathbf{p}_{i}^{t},\ \dot{\mathbf{p}}_{i}^{\leq t},\ \mathbf{b}_{i}^{t},\ \mathbf{f}\right] \tag{4}\] The velocity context \(\dot{\mathbf{p}}_{i}^{\leq t}\) includes the current and previous material point velocities for \(n\) previous timesteps, \(\left[\dot{\mathbf{p}}_{i}^{t-n},\cdots,\ \dot{\mathbf{p}}_{i}^{t}\right]\), i.e., \(n+1\) velocities. We use \(n=4\) to include sufficient velocity context in the vertex feature \(\mathbf{v}_{i}^{t}\); Sanchez-Gonzalez et al. (2020) show that \(n>1\) significantly improves model performance. We compute the velocities using the finite difference of the position sequence (i.e., \(\dot{\mathbf{p}}_{i}^{t}=\left(\mathbf{p}_{i}^{t}-\mathbf{p}_{i}^{t-1}\right)/\Delta t\)). \(\mathbf{b}_{i}^{t}\) is the boundary information.
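A minimal sketch of the finite-difference velocity context (positions, \(\Delta t\), and the 2D setting are illustrative values, not the authors' implementation):

```python
import numpy as np

dt = 1.0  # timestep (illustrative value)

# A sequence of six 2D material-point positions p^{t-5}, ..., p^{t}.
positions = np.array([[0.00, 0.50], [0.01, 0.48], [0.03, 0.45],
                      [0.06, 0.41], [0.10, 0.36], [0.15, 0.30]])

# Finite-difference velocities: p_dot^t = (p^t - p^{t-1}) / dt.
velocities = (positions[1:] - positions[:-1]) / dt

# Velocity context with n = 4: the current and n previous velocities,
# flattened into part of the vertex input feature x_i^t.
n = 4
velocity_context = velocities[-(n + 1):].flatten()
print(velocity_context.shape)  # (10,) -> (n + 1) velocities x 2 components
```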
For a 2D problem, \(\mathbf{b}_{i}^{t}\) has four components, each indicating the distance between the material point and one of the four walls. We normalize \(\mathbf{b}_{i}^{t}\) by the connectivity radius \(R\), which defines the interaction zone (explained in the next section), and restrict it to between \(-1.0\) and \(1.0\). \(\mathbf{b}_{i}^{t}\) is used to evaluate the boundary interaction for a material point. \(\mathbf{f}\) is a vector embedding describing the material point type. We define the interaction between material points \(i\) and \(j\) as \(\mathbf{r}_{i,j}^{t}\) using the distance and displacement of the material points in the current timestep (see eq. (5)). The former reflects the level of interaction, and the latter reflects its spatial direction. \(\mathbf{r}_{i,j}^{t}\) is used to construct the edge features (\(\mathbf{e}_{i,j}^{t}\)). \[\mathbf{r}_{i,j}^{t}=\left[\left(\mathbf{p}_{i}^{t}-\mathbf{p}_{j}^{t}\right),\ \|\mathbf{p}_{i}^{t}-\mathbf{p}_{j}^{t}\|\right] \tag{5}\]

Figure 3: The structure of the graph neural network (GNN)-based physics simulator (GNS) for granular flow (modified from Sanchez-Gonzalez et al. (2020)): (a) The entire simulation procedure using the GNS; (b) The computation procedure of GNS and its composition; (c) The computation procedure of the parameterized function approximator \(d_{\Theta}\) and its composition.

#### 2.2.2 Encoder

The vertex and edge encoders (\(\varepsilon_{\mathbf{\Theta}}^{v}\) and \(\varepsilon_{\mathbf{\Theta}}^{e}\)) convert \(\mathbf{x}_{i}^{t}\) and \(\mathbf{r}_{i,j}^{t}\) into the vertex and edge feature vectors (\(\mathbf{v}_{i}^{t}\) and \(\mathbf{e}_{i,j}^{t}\)) (eq. (6)) and embed them into a latent graph \(G_{0}=(\mathbf{V}_{0},\ \mathbf{E}_{0})\), \(\mathbf{v}_{i}^{t}\in\mathbf{V}_{0}\), \(\mathbf{e}_{i,j}^{t}\in\mathbf{E}_{0}\).
\[\mathbf{v}_{i}^{t}=\varepsilon_{\mathbf{\Theta}}^{v}\left(\mathbf{x}_{i}^{t}\right),\ \mathbf{e}_{i,j}^{t}=\varepsilon_{\mathbf{\Theta}}^{e}\left(\mathbf{r}_{i,j}^{t}\right) \tag{6}\] We use a two-layered 128-dimensional multi-layer perceptron (MLP) for each of \(\varepsilon_{\mathbf{\Theta}}^{v}\) and \(\varepsilon_{\mathbf{\Theta}}^{e}\). The MLP and the optimization algorithm search for the best candidate parameter set \(\mathbf{\Theta}\) that estimates a proper way of representing the physical state of the material points and their relationships, which is embedded into \(G_{0}\). The vertex encoder \(\varepsilon_{\mathbf{\Theta}}^{v}\) uses \(\mathbf{x}_{i}^{t}\) (eq. (4)) without the current position of the material point (\(\mathbf{p}_{i}^{t}\)), but with its velocities (\(\dot{\mathbf{p}}_{i}^{\leq t}\)), as velocity governs the momentum, and the interaction dynamics are independent of the absolute position of the material points. Rubanova et al. (2022) confirmed that including the position causes poorer model performance. We only use \(\mathbf{p}_{i}^{t}\) to predict the next position \(\mathbf{p}_{i}^{t+1}\) based on the predicted velocity \(\dot{\mathbf{p}}_{i}^{t+1}\) using explicit Euler integration. We consider the interaction between two material points by constructing edges between all pairs of vertices located within a certain distance called the connectivity radius \(R\) (see the shaded circular area in fig. 3b). The connectivity radius is a critical hyperparameter that governs how effectively the model learns local interactions. \(R\) should be sufficiently large to include the local interactions between material points and capture the simulation domain's global dynamics.

#### 2.2.3 Processor

The processor performs message passing (based on eq. (1) to eq. (3)) on the initial latent graph (\(G_{0}\)) from the encoder \(M\) times (\(G_{0}\to G_{1}\rightarrow\cdots\to G_{M}\)) and returns a final updated graph \(G_{M}\).
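The connectivity-radius edge construction described in the encoder can be sketched in NumPy (an \(O(N^{2})\) pairwise illustration with made-up positions; practical implementations use spatial data structures such as k-d trees):

```python
import numpy as np

R = 0.030  # connectivity radius in meters, as in the text

# Illustrative 2D material-point positions.
rng = np.random.default_rng(0)
points = rng.uniform(0.0, 0.2, size=(50, 2))

# Pairwise displacements p_i - p_j and distances; an edge (j -> i)
# exists whenever 0 < ||p_i - p_j|| < R.
diff = points[:, None, :] - points[None, :, :]
dist = np.linalg.norm(diff, axis=-1)
receivers, senders = np.where((dist > 0.0) & (dist < R))

# Edge features r_ij: relative displacement and distance (eq. (5)).
edge_features = np.concatenate(
    [diff[receivers, senders], dist[receivers, senders, None]], axis=1)
print(edge_features.shape)  # (num_edges, 3) in 2D
```

Because each qualifying pair appears in both directions, the edge set encodes the directional interactions described in section 2.1.1.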
We use two-layered 128-dimensional MLPs for both the message construction function \(\phi_{\mathbf{\Theta}_{\phi}}\) and the vertex update function \(\gamma_{\mathbf{\Theta}_{\gamma}}\), and element-wise summation for the message aggregation function \(\Sigma_{j\in N(i)}\) in eq. (1) to eq. (3). We set \(M=10\) to ensure sufficient message propagation through the network. These stacks of message passing model the propagation of information through the network of material points.

#### 2.2.4 Decoder

The decoder \(\delta_{\mathbf{\Theta}}^{v}\) extracts the dynamics \(\mathbf{y}_{i}^{t}\in\mathbf{Y}^{t}\) of the material points from the vertices \(\mathbf{v}_{i}^{t\prime}\) (eq. (7)) of the final graph \(G_{M}\). We use a two-layered 128-dimensional MLP for \(\delta_{\mathbf{\Theta}}^{v}\), which learns to extract the relevant dynamics for the material points from \(G_{M}\). \[\mathbf{y}_{i}^{t}=\delta_{\mathbf{\Theta}}^{v}\left(\mathbf{v}_{i}^{t\prime}\right) \tag{7}\]

#### 2.2.5 Update

We use the dynamics \(\mathbf{y}_{i}^{t}\) to predict the velocity and position of the material points at the next timestep (\(\dot{\mathbf{p}}_{i}^{t+1}\) and \(\mathbf{p}_{i}^{t+1}\)) based on Euler integration (eq. (8) and eq. (9)), which makes \(\mathbf{y}_{i}^{t}\) analogous to the acceleration \(\ddot{\mathbf{p}}_{i}^{t}\). \[\dot{\mathbf{p}}_{i}^{t+1}=\dot{\mathbf{p}}_{i}^{t}+\mathbf{y}_{i}^{t}\Delta\mathrm{t} \tag{8}\] \[\mathbf{p}_{i}^{t+1}=\mathbf{p}_{i}^{t}+\dot{\mathbf{p}}_{i}^{t+1}\Delta\mathrm{t} \tag{9}\] Based on the new position and velocity of the material points, we update \(\mathbf{x}_{i}^{t}\in\mathbf{X}^{t}\) (eq. (4)) to \(\mathbf{x}_{i}^{t+1}\in\mathbf{X}^{t+1}\). The updated physical state \(\mathbf{X}^{t+1}\) is then used to predict the position and velocity for the next timestep. The updater imposes inductive biases, such as an inertial frame, on GNS to force it to learn only the interaction dynamics, improving learning efficiency.
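The updater of eqs. (8)-(9) amounts to two lines of explicit Euler integration; a minimal sketch (the free-fall values are illustrative):

```python
import numpy as np

def gns_update(p_t, p_dot_t, y_t, dt):
    """Updater with inertial and static priors (eqs. (8)-(9));
    the network output y_t plays the role of an acceleration."""
    p_dot_next = p_dot_t + y_t * dt   # inertial prior, eq. (8)
    p_next = p_t + p_dot_next * dt    # static prior, eq. (9)
    return p_next, p_dot_next

# A single material point starting at rest under gravity.
p = np.array([0.0, 1.0])
v = np.array([0.0, 0.0])
g = np.array([0.0, -9.81])
p, v = gns_update(p, v, g, dt=0.01)
print(v)  # [ 0.     -0.0981]
```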
A traditional neural network learns both the update scheme and the interaction dynamics: \[p^{t+1}=NN(p^{t},v^{t})\,. \tag{10}\] Using an inertial prior, by contrast, we hardcode the update function and force the GNS to learn only the interaction dynamics: \[p^{t+1}=p^{t}+v^{t}\cdot\Delta t+NN(p^{t},v^{t})\,. \tag{11}\] Thus, GNS does not directly predict the next position from the current position and velocity (i.e., \(\mathbf{p}_{i}^{t+1}=GNS\left(\mathbf{p}_{i}^{t},\ \dot{\mathbf{p}}_{i}^{t}\right)\)), which would require learning both static and inertial motion. Instead, it uses (1) the inertial prior (eq. (8)), where the prediction of the next velocity \(\dot{\mathbf{p}}_{i}^{t+1}\) is based on the current velocity \(\dot{\mathbf{p}}_{i}^{t}\), and (2) the static prior (eq. (9)), where the prediction of the next position \(\mathbf{p}_{i}^{t+1}\) is based on the current position \(\mathbf{p}_{i}^{t}\). These priors make GNS focus on learning the unknown dynamics by hardcoding the known physics. Since GNS learns the dynamics of material points through interactions independent of absolute position, GNS is generalizable to other geometric conditions.

## 3 Training and Evaluation

We now train the GNS to predict granular column collapse. This section explains how we generate the training data, the details of the training process, and how we evaluate the performance of the GNS.

### 3.1 Material Point Method

We utilize the Material Point Method (MPM) to generate the GNS training dataset of granular flow simulations. The MPM is a hybrid Eulerian-Lagrangian approach designed for modeling large-deformation flows (Soga et al., 2016). In the MPM, a continuum body is represented by individual material points that traverse a static background grid. The governing equation is solved at the nodes, and the updated velocity field is subsequently mapped back to the material points.
We employ the position information stored in these material points to construct the current state \(\mathbf{X}^{t}\) in the GNS. For more information on the MPM, refer to Soga et al. (2016).

### 3.2 Datasets

The training datasets include 26 granular flow trajectories of a square-shaped granular mass in a two-dimensional box boundary, simulated by the MPM explicit time integration method using the CB-Geo MPM code (Kumar et al., 2019). Each simulation has a different initial configuration regarding the size, position, and velocity of the square granular mass. Table 1 presents the details of the training dataset generated using the MPM simulations. The datasets are published on DesignSafe (Kumar and Choi, 2023). Appendix A shows all the training trajectories with different initial configurations and initial velocities. We also create validation datasets to check whether the model experiences overfitting. These datasets include seven trajectories of randomly picked rectangular-shaped granular masses with initial configurations not included in the training datasets.

### 3.3 Training

Our GNS has a learnable parameter set \(\Theta\). We train \(\Theta\) to minimize the loss calculated as the mean squared error (MSE) between \(\mathbf{y}^{t}_{i}\) (the predicted proxy-acceleration) and the ground truth acceleration \(\ddot{\mathbf{p}}^{t}_{i}\) over all material points \(i=1,\ 2,\ \dots,\ N\), as shown in eq. (12), using the gradient-based optimizer Adam (Kingma and Ba, 2014). \[loss_{\Theta}=\frac{1}{N}\sum_{i=1}^{N}\left(\mathbf{y}^{t}_{i}-\ddot{\mathbf{p}}^{t}_{i}\right)^{2} \tag{12}\] For training the GNS, we have to set hyperparameters to learn the flow behavior from the training trajectories properly.
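The loss of eq. (12) can be sketched directly (a NumPy illustration with made-up values; the actual training computes gradients via automatic differentiation):

```python
import numpy as np

def mse_loss(y_pred, accel_true):
    """Mean squared error between predicted proxy-accelerations and
    ground-truth accelerations over all material points (eq. (12))."""
    return np.mean((y_pred - accel_true) ** 2)

# Illustrative predictions and ground truth for two material points (2D).
y_pred = np.array([[0.0, -9.7], [0.1, -9.8]])
accel_true = np.array([[0.0, -9.8], [0.0, -9.8]])
print(mse_loss(y_pred, accel_true))  # ~0.005
```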
The first key hyperparameter is the connectivity radius \(R\), which governs the model's capacity to learn the interactions of material points. We set \(R=0.030\) m, which includes about 9 to 10 material points along its diameter. The circular area defined by \(R\) can incorporate approximately 70 material points. Another important hyperparameter is the Gaussian noise added to perturb the ground truth positions in the training trajectories. Since every predicted position at each timestep is based on the previous prediction, which includes a prediction error, a rollout over many timesteps is subject to exponential error accumulation. To avoid this issue, we train the model on input positions with Gaussian noise that emulates the prediction error made by a one-step prediction (\(\mathbf{X}_{t}\rightarrow\mathbf{X}_{t+1}\)). The inclusion of noise in training leads to more rigorous long-rollout predictions.

\begin{table} \begin{tabular}{l l l} \hline \hline \multicolumn{2}{l}{Property} & Description \\ \hline \multicolumn{2}{l}{Simulation boundary} & 1.0 \(\times\) 1.0 m \\ \multicolumn{2}{l}{Mesh size} & 0.025 \(\times\) 0.025 m \\ \multicolumn{2}{l}{Material points per cell} & 16 \\ \multicolumn{2}{l}{Granular mass geometry} & 0.2 \(\times\) 0.2 m and 0.3 \(\times\) 0.3 m \\ \multicolumn{2}{l}{} & (each includes 1,024 and 2,304 material points) \\ \multicolumn{2}{l}{Simulation duration (timesteps)} & \\ \hline \multirow{7}{*}{Material property} & Model & Mohr-Coulomb \\ & Density & 1,800 \(kg/m^{3}\) \\ & Youngs modulus & 2 GPa \\ & Poisson ratio & 0.3 \\ & Friction angle & 30\({}^{\circ}\) \\ & Cohesion & 0.1 kPa \\ & Tension cutoff & 0.05 kPa \\ \hline \hline \end{tabular} \end{table} Table 1: Details of the Material Point Method (MPM) simulation geometry and material properties used for generating the training datasets.
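The noise injection can be sketched as follows (the noise standard deviation here is an illustrative value; in practice it is tuned to match the one-step prediction error):

```python
import numpy as np

def perturb_positions(positions, noise_std, rng):
    """Add zero-mean Gaussian noise to ground-truth training positions
    to emulate the error of a one-step prediction."""
    return positions + rng.normal(0.0, noise_std, size=positions.shape)

rng = np.random.default_rng(42)
clean = np.zeros((1024, 2))  # positions of 1,024 material points (2D)
noisy = perturb_positions(clean, noise_std=6.7e-4, rng=rng)
print(noisy.shape)  # (1024, 2)
```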
We use a learning rate (\(lr\)) decay with an initial value of \(10^{-4}\) and a decay rate of 0.1 (\(lr=10^{-4}\times 0.1^{step/5\times 10^{6}}\)) for more stable convergence. We use a batch size of two, i.e., \(\mathbf{X}_{t}\) from two different trajectories are used simultaneously in updating the learnable parameters. For information on the scalability of the GNS algorithm, refer to Kumar and Vantassel (2022). We investigate whether the model experiences overfitting by plotting the loss history (fig. 4) for the training and validation datasets, evaluated every 10K training steps. The training and validation losses show a drastic decrease until 2M steps. After that, the validation loss tends to remain slightly larger than the training loss. Figure 4 shows no overfitting during the training.

Figure 4: Evolution of GNS loss in training and validation with epochs.

### 3.4 GNS prediction of granular flows

We trained the GNS to predict the collapse of a granular column (as studied by Lajeunesse et al. (2004); Lube et al. (2005)). Figure 5 shows the granular column collapse experiment used to evaluate the GNS's ability to replicate granular flow dynamics. Granular column collapse is a simple physical experiment that captures the transient response of granular flow dynamics. The experiment involves the collapse of a granular column of initial height \(H_{0}\) and length \(L_{0}\) on a flat surface due to gravity. As the gate holding the column is removed, the granular material destabilizes, resulting in a runout. We measure the final deposit by its final height \(H_{f}\) and runout \(L_{f}\). The runout of the granular column is governed by the initial aspect ratio (\(a=H_{0}/L_{0}\)) (Staron and Hinch, 2005; Kumar, 2015). For short columns (\(a\lesssim 2\)) (fig. 5a), the soil mass fails along the flanks of the column above a well-defined failure surface (dashed line).
The soil mass beneath the failure surface remains in static equilibrium throughout the collapse, forming a truncated conical shape. With an increase in aspect ratio, the portion of the sliding mass above the failure surface increases, and the static part becomes smaller, forming a conical shape. For tall columns (\(a\gtrsim 2\)) (fig. 5b), the majority of the soil mass is involved in the collapse, and it initially experiences free fall due to gravitational acceleration. As the falling mass reaches the failure surface, the vertical kinetic energy is converted into horizontal acceleration, resulting in a longer runout distance than for the short column (fig. 5a). In addition, researchers (Kumar, 2015; Staron and Hinch, 2005; Kermani et al., 2015; Utili et al., 2015) observed a transition zone where the flow dynamics change from short- to tall-column behavior. The normalized runout (\(\left(L_{f}-L_{0}\right)/L_{0}\)) of a granular column is only a function of its aspect ratio (\(a\)). The normalized runout represents how far the granular mass runs out before reaching its final deposit state compared to the initial length of the column. Short columns show a linear relationship with the initial aspect ratio. In contrast, tall columns have a power-law relationship with the initial aspect ratio. The GNS was trained only on columns with an aspect ratio of 1.0. However, we evaluate its performance in predicting the runout dynamics of other aspect ratios by comparing the GNS predictions with MPM simulations. Table 2 presents the test cases for evaluating GNS performance.

## 4 Results and Discussions

We evaluate the GNS predictions of granular column collapse against the MPM simulations in terms of (1) the geometry of the sliding mass, (2) the evolution of runout and height with time, and (3) the energy evolution during the collapse. Figure 6 shows the normalized runout (\(\left(L_{f}-L_{0}\right)/L_{0}\)) predictions of GNS for different aspect ratios in comparison with MPM.
\(L_{f}\) is the distance from the left wall to the material point that runs out the farthest, as shown in fig. 5. Previous research observed a transition zone for the relationship between the normalized runout and aspect ratio that distinguishes short-column from tall-column dynamics.

Figure 5: Schematics of the granular column collapse configuration and the dependence of its behavior on the aspect ratio.

For both GNS and MPM, we observe the transition around an initial aspect ratio of \(a=1.2\) (fig. 6). Table 3 summarizes the errors between GNS predictions and MPM simulations for different aspect ratios. In general, the GNS runout prediction is within 6% of the MPM runout estimate. Figure 6 suggests that the GNS successfully captures the dependence of the final runout on the initial aspect ratio, including the transition from the short to the tall column.

### 4.1 GNS Predictions of Granular Flow Dynamics

#### 4.1.1 Short Column

We now evaluate the GNS rollout (prediction) of the granular flow dynamics with time for a short column (\(a=0.8\)).
Figure 7 shows the time evolution of granular flow for the short column collapse.

\begin{table} \begin{tabular}{l l l c l c} \hline \hline Test case & & \(H_{0}\times L_{0}\) (m) & Duration (timesteps) & Simulation boundary (m) & Number of material points \\ \hline Short columns & \(a=0.5\) & 0.2 \(\times\) 0.4 & 400 & X: 0 to 1.0, Y: 0 to 0.5 & 1956 \\ & \(a=0.8\) & 0.24 \(\times\) 0.30 & 400 & X: 0 to 1.0, Y: 0 to 0.5 & 1824 \\ & \(a=1.0\) & 0.30 \(\times\) 0.30 & 400 & X: 0 to 1.0, Y: 0 to 0.5 & 2304 \\ Tall columns & \(a=2.0\) & 0.30 \(\times\) 0.15 & 400 & X: 0 to 1.0, Y: 0 to 0.5 & 1152 \\ & \(a=3.0\) & 0.36 \(\times\) 0.12 & 400 & X: 0 to 1.0, Y: 0 to 0.5 & 1106 \\ & \(a=4.0\) & 0.35 \(\times\) 0.075 & 400 & X: 0 to 1.0, Y: 0 to 0.5 & 576 \\ Up-scaled & \(a=0.8\) & 0.36 \(\times\) 0.45 & 500 & X: 0 to 1.5, Y: 0 to 1.0 & 5120 \\ \hline \hline \end{tabular} \end{table} Table 2: Granular column collapse simulation cases for testing GNS.

\begin{table} \begin{tabular}{c c c c} \hline \hline Aspect ratio, \(a\) & \multicolumn{2}{c}{Normalized runout} & Runout error (\%) \\ \cline{2-3} & MPM & GNS & \\ \hline 0.5 & 0.831 & 0.811 & 2.48 \\ 0.8 & 1.444 & 1.445 & 0.06 \\ 1.0 & 2.071 & 2.152 & 3.78 \\ 2.0 & 3.892 & 3.682 & 5.70 \\ 3.0 & 5.620 & 5.341 & 5.23 \\ 4.0 & 5.753 & 6.070 & 5.21 \\ \hline \hline \end{tabular} \end{table} Table 3: Normalized runout from MPM and GNS for different aspect ratios and the corresponding prediction error.
We use a normalized time (\(t/\tau_{c}\)) to compare the flow evolution, where \(t\) is the physical time and \(\tau_{c}=\sqrt{H_{0}/g}\) is the critical time required for the flow to fully mobilize, with \(g\) the gravitational acceleration. In fig. 7, the collapse shows three stages. First, the flow is mobilized by the failure of the flank and reaches full mobilization around \(t/\tau_{c}=1.0\). The majority of the runout occurs until \(t/\tau_{c}=2.5\). Beyond \(t/\tau_{c}>2.5\), the spreading decelerates due to the basal friction and finally stops at around \(t/\tau_{c}=4.0\) for both the MPM and the GNS rollout (prediction). As seen in fig. 7, although the GNS has only seen an aspect ratio of \(a=1.0\) during training, it successfully captures the overall time evolution of granular flows for a short column (\(a=0.8\)). In addition to the visual comparison of profiles, we quantitatively investigate the flow dynamics of the GNS rollout of the short column by comparing the normalized runout and height evolution with the MPM. Figure 8a shows the evolution of normalized runout and height with time. The normalized runout of the MPM (see the gray line corresponding to the left axis in fig. 8a) shows the three stages of collapse. The collapse of the granular column starts with the failure of the flank and evolves slowly until the runout is fully mobilized by \(t/\tau_{c}=1.0\). As the collapse proceeds, the runout accelerates (\(t/\tau_{c}=1.0\) to \(2.5\)). After this time, the runout decelerates due to basal friction, and finally stops at \(t/\tau_{c}\approx 4.0\). Both GNS and MPM show a constant normalized height (see the gray line corresponding to the right axis in fig. 8a) as only the flank of the column collapses, leaving a static truncated-conical core.
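The normalizations used above are simple to compute; a minimal sketch using the \(a=0.8\) geometry from Table 2 (the final runout \(L_f\) below is an illustrative value consistent with Table 3):

```python
import numpy as np

def critical_time(H0, g=9.81):
    """Critical time tau_c = sqrt(H0 / g) used to normalize time."""
    return np.sqrt(H0 / g)

def normalized_runout(L0, Lf):
    """Normalized runout (Lf - L0) / L0 of the final deposit."""
    return (Lf - L0) / L0

# Short column with a = 0.8: H0 = 0.24 m, L0 = 0.30 m (Table 2).
tau_c = critical_time(0.24)
print(round(tau_c, 4))                 # 0.1564 s
print(normalized_runout(0.30, 0.73))   # ~1.43, close to Table 3
```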
GNS predicts an almost identical evolution of runout to the MPM simulation, which is noteworthy because only a small portion of the training trajectories (5 out of 26) includes the deceleration behavior in which the flow comes to rest due to basal friction before hitting the walls. Overall, the quantitative comparison shown in fig. 8a confirms that GNS can accurately model the granular flow dynamics of the short column. Figure 8b shows the energy evolutions from the GNS rollout and the MPM simulation. Based on the principle of energy conservation, the granular flow must satisfy \(E_{0}=E_{p}+E_{k}+E_{d}\), where \(E_{0}\) is the potential energy of the column before the material points start to mobilize, \(E_{p}\) is the potential energy, \(E_{k}\) is the kinetic energy, and \(E_{d}\) is the dissipation energy due to friction along the boundary and within the material. In fig. 8b, both the GNS rollout and the MPM simulation show identical energy evolutions. A significant fraction of the stored potential energy is converted to kinetic energy in the initial stages of the failure, reaching a peak value of kinetic energy at \(t/\tau_{c}=1.0\). The kinetic energy then dissipates due to basal friction, and the flow ceases at \(t/\tau_{c}=4.0\) when \(E_{k}\) is fully dissipated.

Figure 6: Normalized runout distance (\(\left(L_{f}-L_{0}\right)/L_{0}\)) with different aspect ratios (\(a\)).

Figure 7: Evolution of flow with normalized time for GNS and MPM for the short column with \(a=0.8\). Units are in \(m\). The color represents the magnitude of the displacement. Subfigure (e) shows the final deposit at the last timestep.

#### 4.1.2 Tall column

Tall columns exhibit different runout dynamics than short columns. GNS was trained only on granular masses with an aspect ratio of \(a=1.0\) and has not seen the dynamics of a tall column during training. As an example, we demonstrate the GNS prediction for a tall column with \(a=2.0\).
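The energy balance \(E_{0}=E_{p}+E_{k}+E_{d}\) used in fig. 8b (and again for the tall column below) can be evaluated directly from material-point state. A minimal numpy sketch follows; the masses, heights, and velocities are randomly generated placeholders, not actual simulation output.

```python
import numpy as np

def energy_budget(m, h, v, E0, g=9.81):
    """Potential, kinetic, and dissipation energy of a set of material points.
    E_d is obtained by conservation: E_d = E_0 - E_p - E_k."""
    Ep = np.sum(m * g * h)                         # E_p = sum_i m_i g h_i
    Ek = 0.5 * np.sum(m * np.sum(v ** 2, axis=1))  # E_k = 1/2 sum_i m_i |v_i|^2
    Ed = E0 - Ep - Ek
    return Ep, Ek, Ed

rng = np.random.default_rng(0)
n = 1824                              # material points in the a = 0.8 column
m = np.full(n, 1e-3)                  # placeholder masses [kg]
h = rng.uniform(0.0, 0.24, size=n)    # placeholder heights within H0 = 0.24 m
v = np.zeros((n, 2))                  # column initially at rest
E0 = np.sum(m * 9.81 * h)             # potential energy before motion starts
Ep, Ek, Ed = energy_budget(m, h, v, E0)   # at t = 0: Ek = 0 and Ed = 0
```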
Figure 9 shows the GNS rollout and the MPM simulation of the runout evolution for this case. The GNS rollout predicts a runout profile over normalized time identical to that of the MPM simulation. Similar to the short column, the tall column also shows three stages: initial mobilization of the flow (\(t/\tau_{c}=0\) to \(1.0\)), runout along the failure surface (\(t/\tau_{c}=1.0\) to \(2.5\)), and deceleration (\(t/\tau_{c}=2.5\) to \(4.0\)). In the tall column, however, a larger volume of sliding mass above the failure plane is mobilized. During the initial stages of the collapse, the granular mass experiences free fall under gravity dominated by collisional dissipation. As the granular mass reaches the failure surface, the vertical kinetic energy is converted to horizontal acceleration, resulting in longer runout distances. The GNS rollout shows similar behavior to the MPM runout simulation. Figure 10a shows the normalized runout and height evolution for the tall column. Although the runout evolution remains identical in the initial phase of the collapse, MPM shows a slightly larger normalized runout compared to GNS. The final height in both GNS and MPM remains the same.

Figure 8: (a) Normalized runout and height evolution with normalized time and (b) normalized energy evolution with normalized time for the short column with \(a=0.8\). \(H_{t}\) is the height from the bottom corner of the boundary to the highest part of the column at time \(t\). \(E_{p}=\sum_{i=1}^{n}m_{i}gh_{i}\) is the potential energy of the column, and \(E_{k}=\frac{1}{2}\sum_{i=1}^{n}m_{i}v_{i}^{2}\) is the kinetic energy of the column, where \(m_{i}\), \(h_{i}\), and \(v_{i}\) are the mass, height, and velocity of material point \(i\), and \(n\) is the total number of material points. \(E_{d}=E_{0}-E_{p}-E_{k}\) is the dissipation energy, where \(E_{0}\) is the potential energy before the material points start to move.

Figure 9: Evolution of flow with normalized time for GNS and MPM for the tall column with \(a=2.0\). Units are in \(m\). The color represents the magnitude of the displacement. Subfigure (e) shows the final deposit at the last timestep.

Figure 10b presents the normalized energy evolution of the GNS rollout and the MPM simulation. During the initial stages of the collapse (\(t/\tau_{c}=0\) to \(1.0\)), a large amount of the initial potential energy is converted to kinetic energy due to the free fall of the mass under gravity. Both GNS and MPM show almost identical energy profiles. GNS shows a larger potential energy loss as the flow accelerates, with an almost similar gain in kinetic energy. This indicates that GNS predicts larger frictional dissipation in tall columns, which could stem from the training data being focused on short columns, which have higher frictional dissipation than tall columns. At the final stage, MPM shows less dissipation due to the basal boundary friction, resulting in a slightly longer runout than the GNS rollout. In general, the energy dissipation behavior in GNS is consistent with MPM, showing a more significant potential energy drop and increased accumulation of dissipation energy. Overall, the GNS rollout is consistent with the MPM simulation, with a runout error of 5.7% for the tall column with \(a=2.0\), implying that GNS can capture the dynamics of granular flows in collision-dominated tall columns despite being trained only on \(a=1.0\).

#### 4.1.3 Upscaled domain

GNS is generalizable to different initial configurations of the flow simulation owing to the strong inductive bias of the GNN (Battaglia et al., 2018). A key strength of GNS surrogate models would be to train them on small-scale experiments and then predict large-scale dynamic scenarios with complex boundary conditions. We now evaluate the scalability of GNS to a larger domain including more material points than the training dataset.
Figure 10: (a) Normalized runout and height evolution with normalized time and (b) normalized energy evolution with normalized time for the tall column with \(a=2.0\).

Figure 11 shows the GNS rollout of a short column with \(a=0.8\) and 5120 material points (up to \(5\times\) more material points than the training dataset) for a larger simulation domain and a longer rollout duration than the training dataset. GNS successfully predicts the flow dynamics for the upscaled domain size, showing a runout profile similar to the MPM simulation. The GNS rollout predicts a normalized runout of 1.74 while the MPM simulation shows 1.76, an error of 1.30%. Figure 12 shows that the GNS rollout successfully replicates the energy evolution observed in the upscaled domain compared to the MPM simulation. Hence, GNS can reproduce the flow dynamics even for upscaled geometries beyond the training dataset. The primary source of GNS rollout error is not the simulation scale but the portion of material points that undergoes a large displacement during column collapse. Figure 13 shows the evolution of the mean squared error (MSE) of displacement over all \(N\) material points with time \(t\), computed as \(\frac{1}{N}\sum_{i=1}^{N}\left(\boldsymbol{p}_{i}^{t}-\boldsymbol{p}_{MPM,i}^{t}\right)^{2}\), where \(\boldsymbol{p}_{MPM,i}^{t}\) is the material point position from MPM. When we compare the MSE for \(a=0.8\) with 1,824 material points and its upscaled domain (\(2.22\times\) material points), upscaling does not alter the MSE significantly. Figure 14 shows the evolution of the squared displacement error of individual material points for the upscaled domain (\(a=0.8\)). The squared error is larger for those material points that run out further, i.e., have larger final displacements, but the proportion of the error with respect to the final runout is small enough that GNS can simulate the upscaled case without significant error.
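The per-timestep displacement MSE plotted in fig. 13 can be sketched as follows; the positions here are random placeholders standing in for GNS and MPM output, not actual simulation data.

```python
import numpy as np

def displacement_mse(p_gns, p_mpm):
    """Mean squared displacement error over all N material points at one step:
    (1/N) * sum_i ||p_i - p_MPM,i||^2."""
    return float(np.mean(np.sum((p_gns - p_mpm) ** 2, axis=1)))

rng = np.random.default_rng(42)
p_mpm = rng.uniform(0.0, 1.5, size=(5120, 2))            # placeholder MPM positions
p_gns = p_mpm + rng.normal(0.0, 1e-3, size=p_mpm.shape)  # small rollout drift
mse = displacement_mse(p_gns, p_mpm)   # close to 2 * (1e-3)^2 for this noise level
```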
Figure 11: Evolution of flow with normalized time for GNS and MPM for the upscaled case of the short column with \(a=0.8\). Units are in \(m\). The color represents the magnitude of the displacement. Subfigure (e) shows the final deposit at the last timestep.

Figure 12: (a) Normalized runout and height evolution with normalized time and (b) normalized energy evolution with normalized time for the upscaled case of the short column with \(a=0.8\). Note that the data after \(t/\tau_{c}>5.0\) are abbreviated since the flow reaches a static state.

Figure 13: Evolution of mean squared displacement error over all material points with time.

## 6 Limitations

The current implementation of the GNS surrogate model is limited by GPU memory. A GPU with 40 GB of memory can simulate up to around 50K material points (approximately 3M edge connections). This shortcoming can be mitigated by optimizing the size of the connectivity radius \(R\); our current \(R\) of 0.030 m includes more interactions between neighbors than may be needed. Multi-GPU GNS rollouts would enable scaling GNS to larger and more complex domains.

## 7 Conclusion

Traditional numerical methods are computationally intensive when simulating large-scale granular flows. Statistical or conventional machine learning-based surrogate models are not generalizable since they do not explicitly consider the underlying physics. In this study, we develop a graph neural network (GNN)-based simulator (GNS) as a generalizable surrogate for granular flow simulation. Graphs efficiently represent the physical state of interacting material points, while the message passing operation of the GNN encourages the neural network to learn the interactions between material points. The expressive power of graphs and of message passing that models the interactions between material points allows GNS to accurately predict granular flow dynamics for various conditions, including those not seen during training.
We demonstrate the performance of GNS on granular column collapse. GNS precisely simulates the different flow dynamics of columns with different initial aspect ratios and can also be applied to upscaled domains with 2 to \(5\times\) more material points and a longer simulation duration than the data provided for training. GNS also achieves a remarkable \(150\times\) speed-up in computation compared to the parallelized CPU version of MPM. The computational efficiency and generalizability of the GNS surrogate can expedite the evaluation of runout hazards that requires numerous scenarios.

Figure 14: Evolution of squared displacement error for each material point with normalized time in the upscaled case of \(a=0.8\). The line color represents the final displacement of each material point.
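On the memory limitation noted in Section 6: GPU memory is driven by the number of edges induced by the connectivity radius \(R\), which grows rapidly as \(R\) increases. A rough, self-contained illustration of how pair counts scale with \(R\) follows; it uses random placeholder points and a naive \(O(N^2)\) distance check rather than the neighbor-search structures a real GNS implementation would use.

```python
import numpy as np

def count_edges(positions, R):
    """Number of undirected neighbor pairs within connectivity radius R.
    Naive O(N^2) pairwise-distance check, for illustration only."""
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    return int(np.sum(np.triu(d <= R, k=1)))  # upper triangle excludes self-pairs

rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 1.0, size=(2000, 2))  # placeholder material points
edges_small = count_edges(pts, 0.015)
edges_large = count_edges(pts, 0.030)        # doubling R in 2D gives ~4x the pairs
```

In 2D the expected number of pairs scales with \(R^{2}\) (in 3D with \(R^{3}\)), which is why halving the connectivity radius can cut memory use several-fold.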