Parameter Estimation of LAMOST Medium-resolution Stellar Spectra

This paper investigates the problem of estimating three stellar atmospheric physical parameters and thirteen elemental abundances for medium-resolution spectra from the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST). Typical characteristics of these spectra are their huge scale, wide range of spectral signal-to-noise ratios, and uneven distribution in parameter space. These characteristics lead to unsatisfactory results on the spectra with low temperature, high temperature or low metallicity. To this end, this paper proposes a Stellar Parameter Estimation method based on Multiple Regions (SPEMR) that effectively improves parameter estimation accuracy. On the spectra with S/N ≥ 10, the precisions are 47 K, 0.08 dex and 0.03 dex for the estimation of T_ eff, log g and [Fe/H], respectively, 0.03 dex to 0.06 dex for the elements C, Mg, Al, Si, Ca, Mn and Ni, and 0.07 dex to 0.13 dex for N, O, S, K and Ti, while that of Cr is 0.16 dex. For the reference of astronomical science researchers and algorithm researchers, we release a catalog for 4.19 million medium-resolution spectra from LAMOST DR8, the experimental code, the trained model, the training data, and the test data.

methods: data analysis – methods: statistical – stars: abundances – stars: fundamental parameters.

§ INTRODUCTION

In this paper, we study the problem of estimating stellar atmospheric parameters and elemental abundances from Large Sky Area Multi-Object Fiber Spectroscopic Telescope <cit.> medium-resolution stellar spectra. LAMOST, also known as the Guo Shoujing Telescope, is a large optical-band observational facility. It is the telescope with the highest spectral acquisition rate in the world and has provided a wealth of valuable spectral data to astronomical researchers. In October 2018, LAMOST started its second-stage survey program (LAMOST II), which conducts both low- and medium-resolution spectroscopic surveys <cit.>. The wavelength coverages of the LAMOST medium-resolution spectra are [4950, 5350] Å and [6300, 6800] Å <cit.>. LAMOST DR8 released 5.53 million medium-resolution spectra, of which 4.19 million have a signal-to-noise ratio (S/N) greater than 10 <cit.>. From the large amount of spectroscopic data obtained during the LAMOST survey project, stellar parameters and elemental abundances can be estimated <cit.> for a huge number of stars. These parameters and elemental abundances can be used to infer the stars' properties and their evolutionary history <cit.>. So far, researchers have proposed many methods to estimate the stellar parameters of LAMOST spectra. In addition, the LAMOST survey project has its own stellar parameter estimation pipeline (LASP) <cit.>. The LASP works by minimizing the χ^2 distance between the observed spectrum and theoretical spectra to find the best-matching template, and accordingly gives the parameter estimate for the observed spectrum <cit.>. One limitation of this traditional method is that the computational complexity depends more on the grid that generates the theoretical spectra than on the complexity of the problem itself, which results in relatively low computational efficiency. Another limitation is its high quality requirement on the observed data.
However, the LAMOST observational spectral library is characterized by a large amount of data and a wide range of signal-to-noise ratios. This leaves a large room for improvement in the parameter estimation of LAMOST spectra. With the arrival of artificial intelligence and the big data era, researchers have tried to adopt deep learning methods to solve the problem of estimating stellar parameters from LAMOST medium-resolution spectra. <cit.> proposed a residual-like network model (SPCANet) in 2020. This model consists of three convolutional layers and three fully connected layers, and can accurately predict the stellar parameters and elemental abundances from LAMOST DR7 medium-resolution spectra. In 2022, <cit.> developed a neural network model (RRNet) by combining several residual modules and some recurrent modules. The RRNet further improved the parameter estimation accuracy on the basis of SPCANet. However, the above two methods only work effectively on spectra within a restricted parameter range. For example, the parameter estimation accuracy of SPCANet <cit.> on the spectra with T_ eff > 6500 K is significantly lower than that on the spectra with T_ eff∈ [4000, 6500] K. Therefore, SPCANet rejected the estimations for the spectra with T_ eff > 6500 K. RRNet <cit.> does not perform parameter estimation for spectra with T_ eff < 4000 K or T_ eff > 6500 K. In particular, the parameter estimation performance of RRNet is shown in Figure <ref>. It is shown that the performance of RRNet decreases appreciably on the spectra with high temperature, low temperature, or low metallicity. The performance variation of RRNet is closely related to the distribution characteristics of the observed spectra in the parameter space (more discussion can be found in Section <ref>). To deal with the above-mentioned problems, this paper proposes a Stellar Parameter Estimation method based on Multiple Regions (SPEMR), built on the distribution characteristics of LAMOST data in the parameter space. This scheme significantly improves the parameter estimation for the spectra with high temperature, low temperature, or low metallicity, in addition to improving performance on common-type spectra.

This paper is organized as follows: Section <ref> introduces the medium-resolution stellar spectra in LAMOST DR8, the reference set of this paper, and the scheme dividing the reference set into different subsets according to the distribution characteristics. Section <ref> describes the principle of SPEMR. The results of SPEMR on LAMOST DR8 are investigated in Section <ref>. Section <ref> offers concluding remarks.

§ DATA

The SPEMR model proposed in this paper needs a reference set to learn the model parameters and to test model performance. The reference set is established by cross-matching the LAMOST DR8 medium-resolution spectra with the APOGEE DR17/ASPCAP catalog. The reference set consists of a series of samples, and each sample consists of an observed spectrum of an object and its reference label. The reference spectra are obtained from the LAMOST DR8 medium-resolution spectral library, and the reference labels are the stellar physical parameters (T_ eff, log g, [Fe/H]) and chemical abundances of 13 elements ([C/H], [N/H], [O/H], [Mg/H], [Al/H], [Si/H], [S/H], [K/H], [Ca/H], [Ti/H], [Cr/H], [Mn/H], [Ni/H]) from the APOGEE DR17/ASPCAP catalog.
It is worth noting that the reference sets provided by <cit.> and <cit.> are obtained by cross-matching the LAMOST DR7 medium-resolution spectral data with the APOGEE-Payne catalog, while the reference set provided in this paper is based on the LAMOST DR8 medium-resolution spectra and the APOGEE DR17/ASPCAP catalog. The APOGEE DR17 catalog provides more reference labels, and with higher accuracy. Therefore, we used the APOGEE DR17 catalog as the source of reference labels. The reference set we finally obtained is more than twice as large as those of <cit.> and <cit.>. This bigger reference set helps to build models with better parameter estimation accuracy.

The typical characteristic of the reference set is that the data are exceedingly imbalanced in the parameter space (Figure <ref>). For example, where the effective temperature (T_ eff) is higher than 6500 K or lower than 4000 K, or the metallicity ([Fe/H]) is lower than -1.0 dex, the reference data are very sparse (Figure <ref>). This imbalance leads to a significant decrease in the accuracy of parameter estimation models <cit.>. To this end, this paper proposes a novel parameter estimation method based on multiple regions, obtained by dividing the parameter space into several sub-regions with different distribution characteristics and accordingly dividing the reference set into subsets. More on the establishment of the reference set and its pre-processing procedures is described in the next two subsections.

§.§ Reference dataset based on common observational targets of APOGEE and LAMOST

APOGEE <cit.> is a medium-high resolution (R∼22500) spectroscopic survey in the near-infrared band ([15000, 17000] Å). The APOGEE spectra were obtained using the Sloan telescope at Apache Point Observatory in New Mexico, USA. The APOGEE Stellar Parameters and Chemical Abundances Pipeline <cit.> obtained stellar parameters and elemental abundances for most of the spectra by comparing the observed spectra with a theoretical spectral library using the χ^2 distance. The APOGEE DR17 catalog publishes the stellar parameters (T_ eff, log g, [Fe/H]) and 20 elemental abundances for 475,144 stars. The ranges of the stellar atmospheric parameters in the APOGEE DR17 catalog are [3500, 7000] K for T_ eff, [-0.5, 5] dex for log g, and [-2.0, 0.5] dex for [Fe/H]. The accuracies of the three parameters are 17 K, 0.03 dex and 0.009 dex, respectively.

In this paper, we used the same method as <cit.> and <cit.> to obtain the reference dataset. We cross-matched the LAMOST DR8 medium-resolution spectra with the APOGEE DR17 catalog and obtained 75,316 common observational targets. There are 358,416 observed spectra in the LAMOST medium-resolution spectral library from these common targets. It is worth noting that some LAMOST spectra are affected by cosmic rays and other influences, which results in a large number of outliers (bad pixels) in them. Therefore, the spectra with more than 100 outliers or more than 30 consecutive outliers are rejected. In addition, to ensure the reliability of the dataset, we only kept the spectral data with S/N ≥ 10 and quality_flag = good. Finally, we obtained 73,773 common observational targets and 310,086 LAMOST DR8 medium-resolution spectra from these targets. This dataset has over 100% more data than the reference sets obtained by <cit.> and <cit.>.
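The selection cuts described above are straightforward to express in code. The following Python sketch is illustrative only: the bad-pixel mask and the snr and quality_flag field names are our assumptions, not the released pipeline.

```python
import numpy as np

def longest_run(mask: np.ndarray) -> int:
    """Length of the longest run of consecutive True values (bad pixels)."""
    best = cur = 0
    for bad in mask:
        cur = cur + 1 if bad else 0
        best = max(best, cur)
    return best

def keep_spectrum(bad_pixel_mask: np.ndarray, snr: float, quality_flag: str) -> bool:
    """Selection cuts: reject spectra with more than 100 outliers or more than
    30 consecutive outliers; keep only S/N >= 10 and quality_flag = good."""
    return (bad_pixel_mask.sum() <= 100
            and longest_run(bad_pixel_mask) <= 30
            and snr >= 10
            and quality_flag == "good")
```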
Figure <ref> shows the distribution histograms of the common sources between the APOGEE DR17 catalog and the LAMOST DR8 medium-resolution spectral library. It is shown that the data samples are sparse in the regions where T_ eff > 6500 K, T_ eff < 4000 K, log g < 2.0 dex, and [Fe/H] < -0.5 dex. To accurately predict the parameters for the spectra with high temperature, low temperature, or low metallicity, the obtained reference dataset is divided into three subsets according to the distribution characteristics of the samples in the T_ eff-[Fe/H] parameter space: reference set 1 (S_1), reference set 2 (S_2), and reference set 3 (S_3). The three reference subsets are defined as shown in Figure <ref>. Reference set 1 (S_1) is used to further improve the parameter estimation accuracy on the spectra observed with high probability. Reference set 2 (S_2) is used to improve the parameter estimation accuracy on the spectra with high temperature or low temperature. Reference set 3 (S_3) is used to improve the parameter estimation accuracy on the spectra with low metallicity ([Fe/H]). However, it turns out that the model established on the above-mentioned three subsets performs unsatisfactorily on the spectra of cool dwarf stars (T_ eff < 4500 K and log g > 4.0 dex). Therefore, a fourth reference set, S_4, is established (Figure <ref>). Its thresholds, T_ eff < 5000 K and log g > 2.5 dex, were chosen based on experimental performance.

§.§ Data pre-processing

To facilitate machine learning model optimization, the reference spectra should be pre-processed before training the parameter estimation model. The pre-processing consists of wavelength correction, spectral resampling, spectral normalization, etc.; the details of the procedure can be found in <cit.> and <cit.>. After these pre-processing procedures, the spectral data can be directly input into the SPEMR model for estimating the spectral parameters.
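As an illustration of the pre-processing steps named above, the following minimal Python sketch performs a rest-frame wavelength correction, linear resampling onto a common grid, and a crude median normalization. The concrete recipe used for SPEMR follows the cited works; the specific choices here are our assumptions.

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def preprocess(wave, flux, rv_kms, grid):
    """Illustrative pre-processing of one spectrum.
    wave, flux : observed wavelength and flux arrays (wave sorted ascending)
    rv_kms     : radial velocity used for the wavelength correction
    grid       : common wavelength grid shared by all spectra"""
    rest_wave = wave / (1.0 + rv_kms / C_KMS)      # wavelength correction
    resampled = np.interp(grid, rest_wave, flux)   # spectral resampling
    return resampled / np.median(resampled)        # spectral normalization
```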
§ STELLAR PARAMETER ESTIMATION BASED ON MULTIPLE REGIONS

In this paper, a novel method, Stellar Parameter Estimation based on Multiple Regions (SPEMR), is proposed based on the distribution characteristics of the LAMOST medium-resolution survey spectra in the T_ eff-[Fe/H] and T_ eff-log g parameter spaces. Since the parameter estimation for the spectra in each sub-region is implemented separately based on the RRNet model, the proposed scheme can be specifically abbreviated as SPEMR (RRNet) in this paper. The following sections introduce the RRNet method, the motivation and principle of the SPEMR model, and the method to obtain the final parameter estimation result for a spectrum based on SPEMR (RRNet), respectively.

§.§ RRNet model

The Residual Recurrent Neural Network (RRNet) is a convolutional neural network whose main components are a recurrent learning module and a residual learning module <cit.>. The RRNet model was proposed for the parameter estimation problem of LAMOST medium-resolution spectra. Furthermore, compared with StarNet <cit.> and SPCANet <cit.>, RRNet has advantages in accuracy and robustness. Therefore, RRNet is chosen as the backbone network in the SPEMR model. Compared to high-resolution spectroscopy, it is more challenging to discern some typical spectral line features in medium-resolution and low-resolution spectra. In these cases, it is necessary to design a parameter estimation algorithm with stronger sensitivity and detection capability for weak spectral features. To this end, the RRNet model was proposed. In RRNet, the residual learning module enhances the sensitivity to spectral features based on the driving power from the parameter labels. The extremely high spectral acquisition rate is a characteristic of the LAMOST survey, which helps to acquire a large-scale stellar spectral data set in a short period of time. However, an accompanying problem is the large amount of noise in the observed spectra. The recurrent learning module in RRNet achieves cross-band information propagation and belief enhancement by mining the correlation between spectral features in different bands, and this module can suppress the negative effects of noise in the spectra. More information about RRNet can be found in <cit.>.

§.§ Division of sub-regions and overall learning architecture

A two-stage learning scheme is used in SPEMR to improve the accuracy of parameter estimation both on high-frequency-observed-type spectra and on spectra with low temperature, high temperature, or low metallicity. The two learning stages are an overall pre-training and a personalized fine-tuning (Part A in Figure <ref>). In the first stage, RRNet is trained on the reference spectra over the entire parameter space to obtain common knowledge of the parameter estimation problem. In the second stage, four reference subsets S_1, S_2, S_3 and S_4 (Figure <ref> and Figure <ref>) are independently used to further optimize the pre-trained model in a personalized way, and four models RRNet_1, RRNet_2, RRNet_3 and RRNet_4 are obtained. This fine-tuning allows the model to better handle specific types of spectral parameter estimation problems. The four fine-tuned models RRNet_1, RRNet_2, RRNet_3 and RRNet_4 are fused into the final SPEMR parameter estimation model. More information about the two learning stages of SPEMR is presented below.

§.§.§ Overall pre-training

In the pre-training process, we randomly divide the overall reference set (see Section <ref>) into a training set, a validation set and a test set at the ratio of 7:1:2. The three data sets respectively consist of 217,379 spectra from 51,641 stars, 30,821 spectra from 7,377 stars, and 61,886 spectra from 14,755 stars. The training set is used for learning the pre-trained model parameters, the validation set is used to determine the pre-trained model hyperparameters, and the test set is used to evaluate the performance of the parameter estimation results. To accurately estimate the probability density function <cit.> of the estimated stellar parameters, six instances of the model are trained with different random initializations. The mean μ̂(𝐗) of the ensemble is determined by the average of the predicted means of these six models. The variance σ̂^2_pred(𝐗) of the ensemble is determined by the following equation:

σ̂^2_pred(𝐗) = 1/6∑_i=1^6( σ^2_θ_i(𝐗) + μ^2_θ_i(𝐗) ) - μ̂^2(𝐗),

where θ_i denotes the parameters to be optimized for the i-th model, and μ_θ_i(𝐗), σ^2_θ_i(𝐗) are the mean and variance of the prediction of the i-th model, respectively.
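The ensembling equation above is easy to implement once the six trained instances have produced their per-label means and variances; here is a minimal NumPy sketch (the array shapes are our assumption):

```python
import numpy as np

def deep_ensemble(mus: np.ndarray, variances: np.ndarray):
    """mus, variances: shape (6, n_labels), one row per trained instance.
    Returns the ensemble mean and the predictive variance sigma^2_pred."""
    mu_hat = mus.mean(axis=0)
    var_pred = (variances + mus ** 2).mean(axis=0) - mu_hat ** 2
    return mu_hat, var_pred
```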
In the RRNet model, a spectrum is divided into N_s sub-bands, and the correlation and complementarity of the spectral information between the various bands are learned through the recurrent module. There is another hyperparameter N_r in the RRNet model, which indicates the number of residual blocks. These two hyperparameters have an impact on the RRNet model performance. Therefore, we optimized them using the validation set. In this paper, some experimental explorations are conducted on the configurations N_r = 1, 2, 3, 4 and N_s = 5, 20, 40, 60 (Table <ref>). It is shown that the pre-trained model has the smallest overall error in case of N_r = 3 and N_s = 5. Therefore, N_r is set to 3 and N_s is set to 5 in the subsequent experiments. In addition, the number of training iterations and the learning rate are 30 and 10^-4, respectively, consistent with RRNet <cit.>.

§.§.§ Personalized fine-tuning

Although the Base RRNet obtained in the pre-training stage already has some parameter estimation capability over the overall parameter space, the non-uniformity of the sample distribution (Fig. <ref> and Fig. <ref>) leaves significant room for improvement in each sub-region (Fig. <ref>). Therefore, we fine-tuned the model in a targeted way for each sub-region separately. Specifically, the spectra with T_ eff∈ [4000, 6500] K and [Fe/H] ≥ -1.0 dex in the training set are used as training set 1 (the S_1 in Fig. <ref>) to fine-tune the Base RRNet, and the corresponding parameter estimation model RRNet_1 is obtained. Furthermore, the spectra with T_ eff < 4000 K or T_ eff > 6500 K in the training set are treated as training set 2 (the S_2 in Fig. <ref>) to fine-tune the Base RRNet, and the corresponding parameter estimation model RRNet_2 is computed. The spectra with [Fe/H] < -1.0 dex in the training set are considered as training set 3 (the S_3 in Fig. <ref>) to fine-tune the Base RRNet, and the corresponding parameter estimation model RRNet_3 is obtained. Finally, the spectra with T_ eff < 5000 K and log g > 2.5 dex in the training set are considered as training set 4 (the S_4 in Fig. <ref>) to fine-tune the Base RRNet, and the corresponding parameter estimation model RRNet_4 is obtained. When fine-tuning each sub-model, we kept the parameters of the convolutional layers unchanged and only re-optimized the parameters of the fully connected layers. In this way, the sub-model converges earlier and has a relatively strong spectral feature extraction ability from the beginning. In addition, the number of training iterations and the learning rate for each sub-model are respectively set to 10 and 10^-5, which further accelerates the convergence of the sub-models. After fine-tuning, we obtained four sub-models: RRNet_1, RRNet_2, RRNet_3 and RRNet_4. RRNet_1 is used to predict the stellar parameters for the stellar spectra observed with high probability. RRNet_2 is used to predict the stellar parameters for the spectra with high temperature or low temperature. RRNet_3 is used to predict the stellar parameters of spectra with low metallicity. And RRNet_4 is used to improve the parameters of the spectra of cool dwarfs.
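In PyTorch, the freezing strategy just described amounts to disabling gradients for the convolutional feature extractor and re-optimizing only the fully connected head. The sketch below is schematic: the attribute names (features, head) and the Gaussian negative log-likelihood loss are our assumptions about the RRNet implementation, not its published code.

```python
import torch

def fine_tune(base_model, loader, epochs=10, lr=1e-5):
    # Freeze the convolutional layers (hypothetical attribute name).
    for p in base_model.features.parameters():
        p.requires_grad = False
    # Re-optimize only the fully connected head.
    opt = torch.optim.Adam(base_model.head.parameters(), lr=lr)
    nll = torch.nn.GaussianNLLLoss()  # the model predicts a mean and a variance
    for _ in range(epochs):
        for x, y in loader:
            mu, var = base_model(x)
            loss = nll(mu, y, var)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return base_model
```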
§.§ Integration of the estimated results from four sub-models

In practical application, the sub-region of the parameter space to which a spectrum belongs is unknown before its parameters are estimated. This makes it impossible to determine beforehand which model should be used to predict the spectral parameters. To this end, we propose a multi-label fusion strategy to solve this problem (Part B in Figure <ref>). Based on this strategy, we input a spectrum 𝐗∈ R^1 × 7200 into each RRNet_i, i∈{1,2,3,4}. The outputs of RRNet_i are μ_i(𝐗) ∈ R^1 × 16 and σ_i^2(𝐗) ∈ R^1 × 16, where μ_ij(𝐗) is the estimation of the j-th spectral parameter from RRNet_i and σ_ij^2(𝐗) is the uncertainty estimation of μ_ij(𝐗). The final prediction of the SPEMR model, μ̂(𝐗) = (μ̂_1(𝐗), ⋯, μ̂_16(𝐗)), and its uncertainty estimate, σ̂^2(𝐗) = (σ̂^2_1(𝐗), …, σ̂^2_16(𝐗)), are obtained by fusing {μ_ij(𝐗), i=1, 2, 3, 4} and {σ_ij^2(𝐗), i=1, 2, 3, 4}. The specific fusion formula is as follows:

μ̂_j(𝐗) = μ_i(j)_0,j(𝐗), σ̂^2_j(𝐗) = σ_i(j)_0,j^2(𝐗),

where i(j)_0 = arg min_i=1,2,3,4σ_ij^2(𝐗) and j=1, ⋯, 16. That is to say, i(j)_0 denotes the model index with the smallest prediction uncertainty σ_ij^2(𝐗). Therefore, the model fusion schemes (<ref>) and (<ref>) adopt the predictions with the smallest uncertainty as the final fusion result.

§.§ Testing of the SPEMR model

After an overall pre-training and four subsequent, independent personalized fine-tunings of the RRNet (Section <ref>), four sub-models are computed: RRNet_1, RRNet_2, RRNet_3 and RRNet_4. Based on them, we can obtain the proposed SPEMR model (Section <ref>) using the multi-label fusion strategy (Section <ref>). In this section, we evaluate the performance of the SPEMR model by comparing the differences between the SPEMR estimations and the ASPCAP labels on the test set. Thus, any comparison here is not affected by biases in the ASPCAP results themselves. More comprehensive evaluations are conducted in Section <ref>.

Figure <ref> shows the distribution of the differences between the stellar atmospheric parameters predicted by SPEMR and the ASPCAP results. The deviation of the SPEMR predictions from the ASPCAP labels is small on the spectra at both low and high S/N levels. This phenomenon indicates that the SPEMR model can effectively suppress the noise effects on the spectra with low signal-to-noise ratio. For effective temperature, the corresponding residual is smallest on the spectra with T_ eff∈ [4500, 5000] K. This is mainly due to the large number of training samples in this region and their good quality. In the second row of Figure <ref>, we can see a slight underestimation of log g by SPEMR in case of log g > 4 dex. This phenomenon is consistent with the estimations of <cit.> and <cit.>. It is mainly due to the scarcity of training examples in this region of the parameter space (as shown in Fig. <ref>), which increases the prediction error for the dwarfs (log g > 4 dex). For metallicity, the best prediction results were obtained on the spectra with [Fe/H] ∈ [-0.5, 0.5] dex. This is also due to the larger number of training samples in this region. The above phenomena suggest that the ASPCAP labels provide excellent learning benchmarks for the stellar atmospheric parameters estimated by the SPEMR model.

To further evaluate the other parameters estimated by the SPEMR model, we investigated the differences between the SPEMR estimations and the ASPCAP results for the remaining elemental abundances on the test set. Figure <ref> shows the distribution of the differences between the abundances of 13 elements predicted by the SPEMR model and the ASPCAP labels on the test set. The residual and dispersion of most element abundances estimated by SPEMR are around 0.005 dex and 0.07 dex, respectively. These low residuals and dispersions indicate the good precision and accuracy of the SPEMR model. However, for the elements N, Ti and Cr, the corresponding residual and dispersion are slightly higher. Therefore, the accuracy of the SPEMR model on the elements N, Ti and Cr should be further improved.
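For concreteness, the min-uncertainty fusion rule of equations (<ref>) and (<ref>) can be written in a few lines; the (4, 16) array shapes mirror the four sub-models and 16 labels described above.

```python
import numpy as np

def fuse(mus: np.ndarray, variances: np.ndarray):
    """mus, variances: shape (4, 16), rows = sub-models, columns = labels.
    For each label, keep the prediction with the smallest uncertainty."""
    i0 = np.argmin(variances, axis=0)   # best sub-model index per label
    j = np.arange(mus.shape[1])
    return mus[i0, j], variances[i0, j]
```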
§.§ Best Fitting Template

To further explore the performance of the SPEMR model, we investigated several representative LAMOST spectra in the test set and their corresponding best-fit templates. These test spectra are selected based on their representativeness in parameter space and their spectral quality. This study increases the physical interpretability of the model and allows the reader to observe the fit of the model more intuitively. In this paper, the best-fitting template of a test spectrum is the training spectrum that minimizes the Euclidean distance, in parameter space, between the estimated stellar parameters of the test spectrum (obtained with the SPEMR model) and the reference parameters of the training spectra. The corresponding results are shown in Figure <ref>. The eight representative LAMOST spectra from top to bottom in Fig. <ref> are a low-temperature spectrum (T_ eff = 3951.45 K), a high-temperature spectrum (T_ eff = 6577.52 K), a dwarf spectrum (log g = 4.42 dex), a giant spectrum (log g = 2.71 dex), a metal-poor spectrum ([Fe/H] = -1.20 dex), a metal-rich spectrum ([Fe/H] = 0.00 dex), a low signal-to-noise ratio spectrum (S/N = 17.32), and a high signal-to-noise ratio spectrum (S/N = 117.32). It can be seen that the SPEMR model fits well on the low-temperature spectrum, the dwarf spectrum, the giant spectrum, the metal-rich spectrum, and the high signal-to-noise ratio spectrum; that is, the residuals between the test spectrum and the best-fit template almost reach zero over the whole wavelength range. However, for the high-temperature spectrum and the low-S/N spectrum, the fit of the SPEMR model is not as good as for the above spectra, especially at the blue end. This phenomenon may be related to the sparse distribution of such training samples (Fig. <ref>).

§ APPLICATION ON LAMOST DR8

In this section, we applied the SPEMR proposed in Section <ref> to the LAMOST DR8 medium-resolution spectra to obtain a LAMOST-SPEMR catalog. To assess the reliability of the LAMOST-SPEMR catalog, we compared it with other typical catalogs, performed an uncertainty analysis, and tested it on open clusters.

§.§ Parameter estimation for the medium-resolution spectra from LAMOST DR8

The four learned sub-models based on reference subsets 1, 2, 3 and 4 (S_1, S_2, S_3 in Fig. <ref> and S_4 in Fig. <ref>) can perform parameter estimation for spectra in different regions of the parameter space (Section <ref>). The four sub-models can estimate stellar parameters for four types of spectra: the spectra with T_ eff∈ [4000, 6500] K and [Fe/H] ≥ -1.0 dex, the spectra with T_ eff > 6500 K or T_ eff < 4000 K, the spectra with [Fe/H] < -1.0 dex, and the spectra with T_ eff < 5000 K and log g > 2.5 dex, respectively. The results of the four RRNet sub-models are fused using the SPEMR scheme (Section <ref>) to obtain the SPEMR (RRNet) parameter estimation. Accordingly, the LAMOST-SPEMR catalog is obtained by SPEMR. This catalog contains stellar atmospheric parameters, chemical abundances, and the corresponding 1σ uncertainties for 4,197,960 medium-resolution spectra in LAMOST DR8 estimated by SPEMR. Figure <ref> shows the T_ eff-log g distribution of the LAMOST-SPEMR catalog in different S/N intervals. The three isochrones in the figure are the MIST stellar evolutionary tracks with a stellar age of 7 Gyr <cit.>.
Compared with Figure 9 in <cit.>, it is shown that the stellar parameters estimated by SPEMR in the high-temperature spectral region fit the three MIST stellar evolutionary tracks better. In the low-temperature spectral region, the SPEMR and SPCANet estimates show a similar pattern, with an underestimation of log g at the cool end of the main sequence (T_ eff∈ [4000, 4500] K). For the spectra with low metallicity, SPEMR can also effectively estimate the stellar parameters. Compared with Fig. 5 in <cit.>, it is shown that RRNet lacks estimation results for the high-temperature spectra, while the proposed SPEMR effectively estimates the stellar parameters from such spectra, and the estimation results generally agree with the MIST stellar evolutionary tracks.

To evaluate the validity of SPEMR, we estimated the parameters for the spectra in the reference set, and the results are shown in Fig. <ref>. Compared with the estimation results of RRNet (Fig. <ref>), the performance of our model is improved to different degrees on the spectra with low temperature, high temperature, or low metallicity, and on high-frequency-observed-type spectra. Specifically, obvious improvements are shown on the spectra with low temperature and the spectra with low metallicity. However, no evident improvements are found on the spectra with high temperature. This is caused by the small number of high-temperature spectral samples in the training data. In addition, SPEMR improves the estimation results on high-frequency-observed-type spectra, for example for T_ eff, log g, Fe, Si, etc.

§.§ Some comparisons with other typical catalogs

To further verify the accuracy of the LAMOST-SPEMR catalog, we investigated the consistency between the LAMOST-SPEMR catalog and the SPCANet catalog and the GALAH DR3 catalog. <cit.> used the SPCANet model to estimate the stellar atmospheric parameters and chemical abundances for 1,472,211 medium-resolution spectra from LAMOST DR7; these results are called the SPCANet catalog for short. In order to compare the differences between the LAMOST-SPEMR catalog and the SPCANet catalog, we cross-matched the reference set in this paper with the SPCANet catalog and obtained 241,033 spectra from 53,775 common stars. It should be noted that stellar parameter and chemical abundance estimations are available simultaneously from the SPCANet model, the SPEMR model, and the APOGEE ASPCAP pipeline for each of the 241,033 spectra.

GALAH <cit.> is a large-scale high-resolution (R∼28000) spectroscopic survey project, which uses the Anglo-Australian Telescope and the HERMES spectrograph at Siding Spring Observatory in Australia to observe stellar spectra. The GALAH spectral coverage is [4713, 4903] Å, [5648, 5873] Å, [6478, 6737] Å and [7585, 7887] Å. GALAH DR3 <cit.> published stellar atmospheric parameters and elemental abundances for 588,571 stars. Among the observed stars, there are 383,088 dwarfs, 200,927 giants, and 4,556 unclassified stars. We cross-matched the LAMOST-SPEMR catalog with the GALAH DR3 catalog and obtained 110,042 LAMOST DR8 spectra from 25,519 common stars.

Compared with the SPCANet catalog (Table <ref> (SPCANet-ASPCAP, SPEMR-ASPCAP)), LAMOST-SPEMR estimates two more parameters, [K/H] and [Mn/H], and reduces the overall bias, dispersion, and MAE by 30%, 50% and 52%, respectively, on most of the stellar parameters. This indicates that SPEMR has better estimation performance than SPCANet.
However, for the elements S, Ti and Cr, the precision improvement of the SPEMR model is not significant. This may be caused by the lack of strong metal lines in the blue part of the LAMOST spectra.

To further investigate the consistency between the LAMOST-SPEMR catalog and the GALAH DR3 catalog, we evaluated the differences between the SPEMR and GALAH results. Figure <ref> shows the T_ eff-log g distribution of the GALAH catalog and the LAMOST-SPEMR catalog, colored by [Fe/H], in different S/N intervals. Compared with the GALAH catalog, the LAMOST-SPEMR catalog shows a larger dispersion for giant stars with effective temperature around 5000 K, especially in the region of low metallicity ([Fe/H] < -0.5 dex). This is mainly due to the scarcity of reference samples with (T_ eff, log g) ∼ (5000 K, 2.5 dex) (Figure <ref>), as well as the relatively sparse spectral samples with [Fe/H] < -0.5 dex (Figure <ref> and Figure <ref>). In addition, the LAMOST-SPEMR catalog shows a close correlation between the fit to the MIST stellar evolution tracks and the signal-to-noise ratio. At higher S/N, the LAMOST-SPEMR catalog shows a stronger consistency with the MIST stellar evolution tracks, and the corresponding dispersion and bias are smaller; conversely, the dispersion and bias are larger on the low signal-to-noise spectra. This is mainly due to the poor quality of the low-S/N spectra, and it indicates that it is necessary to investigate a more robust estimation method and to establish an expanded reference set with better coverage of the stellar parameter space. It is worth mentioning that <cit.> have specifically studied the parameter estimation problem for LAMOST stellar spectra with low S/N and low resolution. Therefore, we can also pay special attention to the low-S/N LAMOST medium-resolution spectral data to improve the overall parameter estimation accuracy of the model in future research.

Figure <ref> shows the comparison of the chemical element abundances in the LAMOST-SPEMR catalog with the results of the GALAH catalog. The detailed biases and standard deviations are listed in Table <ref> (SPEMR-GALAH). It is shown that the standard deviations of the difference between the LAMOST-SPEMR catalog and the GALAH catalog range from 0.13 dex to 0.15 dex for the abundances of the elements C, Mg, Al, Si, Ca, Mn, and Ni, and the overall differences are distributed around the theoretical line with little dispersion. For the elements O, K, Ti, and Cr, the corresponding differences have a relatively larger dispersion, around 0.20 dex, and the estimations of Ti and Cr show a relatively evident deviation from the theoretical line. To further explore the source of the discrepancy between the LAMOST-SPEMR catalog and the GALAH catalog, we compared the ASPCAP reference values with those of the GALAH catalog for the above elemental abundances (Figure <ref>). It is shown that the trend of the difference between the ASPCAP catalog and the GALAH catalog is basically consistent with that of the LAMOST-SPEMR catalog. This indicates that the deviation and dispersion of the difference between the LAMOST-SPEMR results and the GALAH survey largely originate from the difference between the ASPCAP catalog and the GALAH catalog.

To further evaluate the performance of the SPEMR model, we show the [X/Fe] vs. [Fe/H] distributions of all elements estimated by SPEMR for giants and dwarfs and compare them to the GALAH results for the same stars.
Figure <ref> shows the [X/Fe]-[Fe/H] distribution over the dwarfs (log g > 4 dex) for all elements of the LAMOST-SPEMR and GALAH DR3 catalogs. Figure <ref> shows the corresponding distribution for the giants (log g < 4 dex). Comparing the left and right columns in Fig. <ref> and Fig. <ref>, we can clearly find that the elemental abundance patterns of the LAMOST-SPEMR catalog are tighter than those of the GALAH DR3 catalog, both for the dwarfs and for the giants. For the dwarfs, the elemental abundances of the LAMOST-SPEMR catalog are concentrated at intermediate metallicities ([Fe/H] ∈ [-0.2, 0.3] dex). For the giants, the distribution of elemental abundances is much wider, and most of them are concentrated in [Fe/H] ∈ [-0.6, 0.4] dex. This is largely consistent with the distribution of the GALAH DR3 catalog. Comparing the left columns of Fig. <ref> and Fig. <ref>, we can find that the distributions of giants and dwarfs are inconsistent for most elements estimated by SPEMR. For example, the elements Mg, Al, Ti, and Cr of the giants show a denser distribution in [Fe/H] from -0.8 dex to 0.3 dex. However, this dense pattern is not present in the dwarfs. This is mainly due to the scarcity of main-sequence dwarfs in our training samples. This sample imbalance leads to the fact that most of the labels predicted by SPEMR are concentrated around red giants. Only the elements Cr and Ti of the dwarfs show distinct bimodal structures, while most elements of the giants show distinct bimodal structures. In addition, for the giants, the elements O, S and K show clear negative correlations with [Fe/H], while the elements N, Cr and Mn show obvious positive correlations with [Fe/H]; the other elements are closely distributed along a horizontal line. For the dwarfs, the elements C, O, and S show obvious negative correlations with [Fe/H], while the elements N, Al, Cr and Mn show obvious positive correlations with [Fe/H]; the other elements are closely distributed along a horizontal line. For most elemental abundances, the position and slope of the dwarf-star distributions are not consistent with those of the giant stars. This may be caused by the difference in the sampling spaces of the dwarf and giant stars.

§.§ Uncertainty Analysis

The SPEMR model is able to predict the PDF of the stellar parameters and gives the uncertainty σ_pred for the predicted parameters using a deep ensembling approach and equation (<ref>). In addition, in the LAMOST sky survey, some stars are observed multiple times, at different epochs and under various observing conditions. This can be used to analyze the uncertainty σ_obs caused by the observation configurations. Suppose we have n_s repeated observations {𝐱_1, ⋯, 𝐱_n_s} of a source and n_s estimations {y_1, ⋯, y_n_s} from these observations using SPEMR. The standard deviation of these n_s parameter estimations is the corresponding uncertainty σ_obs.
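Computing σ_obs from a catalog of per-spectrum estimations is a simple group-by operation; in the Python sketch below the source identifier and label column names are our assumptions.

```python
import pandas as pd

def repeat_uncertainty(catalog: pd.DataFrame, label: str) -> pd.Series:
    """sigma_obs: standard deviation of a predicted label over the repeated
    observations of each source (single-visit stars drop out as NaN)."""
    return catalog.groupby("source_id")[label].std().dropna()
```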
Figure <ref> shows the dependence of the uncertainties of the LAMOST-SPEMR catalog on S/N. The dots in this figure indicate the uncertainties σ_pred predicted by SPEMR, and the lengths of the line segments centered on the dots indicate the uncertainties σ_obs estimated from the repeated observations. On the whole, the low uncertainties indicate the strong robustness and generalization of the model. Specifically, in case of S/N ≥ 10, the σ_pred of the parameters T_ eff, log g and [Fe/H] are 134 K, 0.17 dex and 0.07 dex, respectively, and those of the remaining elements are 0.07 dex ∼ 0.19 dex. In addition, σ_pred and σ_obs decrease with increasing S/N. This is explained by the higher quality of the LAMOST spectra with high S/N: the spectra with high S/N suffer from less noise. The uncertainties in this paper are numerically different from those in Figure 7 of <cit.>. This difference is caused by the change in the parameter-space regions of the stellar spectra being processed. At the same time, RRNet and SPEMR show a similar pattern. Therefore, the SPEMR model is stable in estimating stellar parameters from LAMOST spectra.

§.§ Tests on open clusters

Open clusters have good chemical homogeneity <cit.>. Therefore, they can be used as chemical indicators to assess the quality of stellar parameter estimation. To further investigate the accuracy of the element abundances in the LAMOST-SPEMR catalog, we performed additional tests on open clusters. <cit.> analyzed the properties of many open clusters based on Gaia DR2 and LAMOST data, and provided a spectroscopic parametric catalog consisting of the stellar physical parameters of 8,811 member stars. We cross-matched these cluster member stars with the LAMOST-SPEMR catalog, and obtained a variety of open clusters, such as Melotte 22, NGC 2682, NGC 2632, NGC 2168, Melotte 20, NGC 2281, Stock 2, NGC 1750, NGC 1545, and so on. Finally, we selected the three open clusters (Melotte 22, NGC 2682 and NGC 2632) with the largest number of matches to LAMOST-SPEMR and removed the parameter estimations with large uncertainties σ_pred (Section <ref>).

To investigate the effects of effective temperature and metallicity on the elemental abundances of open clusters, we show the variation of the LAMOST-SPEMR chemical elemental abundances with effective temperature and metallicity in the three open clusters. Figure <ref> shows the dependence of the chemical elemental abundances from the LAMOST-SPEMR catalog on effective temperature (T_ eff) in the above-mentioned open clusters. In agreement with <cit.>, the chemical abundances of SPEMR do not show a significant trend with T_ eff in any of the three aforementioned clusters, and the chemical abundances show low deviation and dispersion. Figure <ref> shows the dependence of the chemical elemental abundances from the LAMOST-SPEMR catalog on metallicity ([Fe/H]) for the above-mentioned open clusters. It is shown that the [Fe/H] values estimated in the LAMOST-SPEMR catalog approximately range between -0.2 dex and 0.2 dex, and there is a different [Fe/H] spread depending on [X/H] for these open clusters. This is mainly due to the differences in age and distance among these open clusters <cit.>.

In addition, we compared the chemical abundances of the LAMOST-SPEMR catalog with those of the SPCANet and RRNet catalogs for the three above-mentioned clusters (Fig. <ref>). Figure <ref> shows that the standard deviations of the elemental abundances from the LAMOST-SPEMR and RRNet catalogs are lower than those of the SPCANet catalog on the whole. For the Melotte 22 and NGC 2632 open clusters, LAMOST-SPEMR shows an overall lower standard deviation than RRNet. For the NGC 2682 open cluster, LAMOST-SPEMR shows a lower standard deviation than RRNet on the elements S, Ca, Ti, and Cr. The overall chemical homogeneity from LAMOST-SPEMR on the three clusters is 0.054±0.022 dex, 0.055±0.016 dex and 0.067±0.024 dex, respectively.
These phenomena indicate that the LAMOST-SPEMR catalog has higher accuracy compared with SPCANet and RRNet.

§.§ LAMOST-SPEMR catalog

Finally, we published the LAMOST-SPEMR catalog of the estimated stellar atmospheric parameters and elemental abundances for 4,197,960 medium-resolution spectra from LAMOST DR8. This catalog contains the following information: the identifier of the observed spectrum (obsid), the FITS file name corresponding to the spectrum (filename), coordinate information (ra, dec), the extension name of the spectrum (extname_blue, extname_red), the signal-to-noise ratio of the spectrum (snr_blue, snr_red), effective temperature (Teff[K]), surface gravity (Logg), metallicity (Fe/H), 13 elemental abundances (X/H), and the 1σ uncertainties of the corresponding stellar parameters (X_err).

§ SUMMARY AND OUTLOOK

This paper proposed a novel method, Stellar Parameter Estimation based on Multiple Regions (SPEMR), based on the distribution characteristics of the LAMOST medium-resolution data in parameter space. We estimated the stellar atmospheric parameters, elemental abundances, and corresponding uncertainties for 4,197,960 medium-resolution spectra in LAMOST DR8 using SPEMR. In case of S/N ≥ 10, the precisions of the parameters T_ eff, log g, [Fe/H], and [Cr/H] are 47 K, 0.08 dex, 0.03 dex, and 0.16 dex, respectively, while the precisions of the other elemental abundances range from 0.03 dex to 0.13 dex. To verify the performance of SPEMR, we conducted a series of comparison experiments with other typical medium-resolution spectral parameter estimation models and other surveys. The experimental results demonstrate that the SPEMR model not only improves the parameter accuracy on high-frequency-observed-type spectra but also provides good parameter estimation on the spectra with high temperature, low temperature, or low metallicity. In addition, the SPEMR parameter estimation results are excellently consistent with other high-resolution sky surveys. In the future, we will explore the characteristics of high-temperature and low-signal-to-noise spectra, and build extended reference sets with better coverage of the stellar parameter space to further improve the parameter estimation capability of the model.

§ ACKNOWLEDGEMENTS

We are very grateful to the referee for helpful suggestions, as well as the correction of some issues, which have improved the paper significantly. This work is supported by the National Natural Science Foundation of China (Grant No. 11973022), the Natural Science Foundation of Guangdong Province (No. 2020A1515010710), and the Major Projects of the Joint Fund of Guangdong and the National Natural Science Foundation (Grant No. U1811464). LAMOST, a multi-target optical fiber spectroscopic telescope covering a large sky area, is a major national engineering project built by the Chinese Academy of Sciences. Funding for the project is provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories of the Chinese Academy of Sciences.

§ DATA AVAILABILITY

The LAMOST data employed in this article are available to users outside China after September 2022 from LAMOST DR8, at <http://www.lamost.org/dr8/>. The computed catalog for 4.19 million medium-resolution spectra from LAMOST DR8, the source code, the trained model and the experimental data are publicly available at <https://github.com/yulongzh/SPEMR>.
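As a usage example, the published catalog can be loaded and filtered with standard tools. In the sketch below the file name is a placeholder, and the exact column spellings are our assumptions based on the description in Section above.

```python
from astropy.table import Table

cat = Table.read("lamost_spemr_dr8.fits")  # placeholder file name
# Keep spectra with S/N >= 10 in both arms; column names as described above.
good = cat[(cat["snr_blue"] >= 10) & (cat["snr_red"] >= 10)]
print(good["obsid", "ra", "dec", "Teff", "Logg", "Fe/H"][:5])
```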
§ FOOTNOTES

software: Numpy <cit.>, Scipy <cit.>, Astropy <cit.>, Matplotlib <cit.>, Scikit-learn <cit.>, Pytorch <cit.>.
http://arxiv.org/abs/2312.15989v1
{ "authors": [ "Xiangru Li", "Xiaoyu Zhang", "Shengchun Xiong", "Yulong Zheng", "Hui Li" ], "categories": [ "astro-ph.SR", "astro-ph.GA", "astro-ph.IM" ], "primary_category": "astro-ph.SR", "published": "20231226104445", "title": "Parameter Estimation of LAMOST Medium-resolution Stellar Spectra" }
Fatih Cagatay Akyon^1,2, Alptekin Temizel^1
^1Graduate School of Informatics, METU, Ankara, Turkey
^2OBSS AI, OBSS Technology, Ankara, Turkey

State-of-the-Art in Nudity Classification: A Comparative Analysis

This paper presents a comparative analysis of existing nudity classification techniques for classifying images based on the presence of nudity, with a focus on their application in content moderation. The evaluation covers CNN-based models, a vision transformer, and popular open-source safety checkers from Stable Diffusion and the Large-scale Artificial Intelligence Open Network (LAION). The study identifies the limitations of current evaluation datasets and highlights the need for more diverse and challenging datasets. The paper discusses the potential implications of these findings for developing more accurate and effective image classification systems on online platforms. Overall, the study emphasizes the importance of continually improving image classification models to ensure the safety and well-being of platform users. The project page, including the demonstrations and results, is publicly available at https://github.com/fcakyon/content-moderation-deep-learning.

content moderation, nudity detection, safety, transformers

§ INTRODUCTION

The rapid increase in user-generated content online has led to a pressing need for automated systems for filtering inappropriate and harmful content <cit.> <cit.>. A recent study <cit.> revealed that parents have a high level of concern about the negative effects of inappropriate content in the media on their children's development and well-being, especially sexual content. In this context, the development of effective nudity classification systems has become essential for content moderation on online platforms to ensure user safety and well-being. While traditional machine learning models <cit.> <cit.> and convolutional neural networks (CNNs) <cit.> <cit.> <cit.> <cit.> have been widely used for nudity classification, recent transformer-based models <cit.> and modern CNN architectures <cit.> have shown promising results in image classification tasks.

With the increasing popularity and prominence of text-to-image generation systems, artificially generated unsafe and inappropriate images have become a major concern for content moderation. This has created a need for systems facilitating automated safety-check procedures, such as the Stable Diffusion safety checker <cit.> and the LAION safety checker <cit.>. The Stable Diffusion safety checker is designed to prevent unsafe image generation, while the LAION safety checker works by filtering unwanted images out of the training set to prevent diffusion models from being trained on inappropriate images. These safety checkers demonstrate the growing importance of developing effective and accurate image classification systems for content moderation on online platforms. By improving the accuracy and effectiveness of image classification models, online communities can be better protected from harmful and inappropriate content. However, the limitations of current evaluation benchmarks and datasets <cit.> <cit.> <cit.> have raised concerns about the effectiveness of these models in accurately detecting nudity.
Therefore, in this paper, we present a comparative analysis of existing nudity classification techniques for classifying images based on the presence of nudity, with a focus on their application in content moderation. We evaluate CNN-based models, recent transformer-based models, and popular open-source safety checkers, and highlight the limitations of current evaluation datasets. The findings of this study are expected to contribute to the development of more effective and culturally-sensitive image classification systems for content moderation on online platforms.

§ RELATED WORK

In this section, we provide an overview of existing research on nudity classification from images, focusing particularly on nudity classification datasets and techniques and on image classification techniques, highlighting their key findings and contributions to the field.

§.§ Nudity Classification Datasets

In this work, we use the following three datasets: the Adult content dataset, NudeNet, and LSPD (Table <ref>). The Adult content dataset <cit.> is one of the earliest datasets in this field and contains two categories: `safe' and `adult'. The NudeNet dataset <cit.> includes an intermediate label `sexy', in addition to the `safe' and `nude' labels. LSPD <cit.> consists of image and video classification branches that provide nudity and pornography detection-related annotations, available to researchers upon request. LSPD includes the most detailed classification labels among these three datasets.

§.§ Content Moderation and Nudity Classification

The problem of content moderation on online platforms has spurred a lot of research in recent years. One important aspect of content moderation is image classification, particularly detecting and classifying nudity. A system for nudity classification from images using a Mobilenetv3 image embedding model <cit.> was proposed in <cit.>. It was designed for on-device content moderation and aimed to classify images as `containing nudity'/`not containing nudity'. In <cit.>, a CNN + MLP ensemble was proposed to classify gore in images. While the study did not particularly focus on nudity detection, the approach using image embeddings from Mobilenet v2 <cit.>, Densenet <cit.>, and VGG16 <cit.> could be adapted for this purpose. A CNN + SVM model for inappropriate video scene classification, including nudity, was proposed in <cit.>. The model is based on InceptionV3 <cit.> image embeddings and was trained on a private dataset. A novel method for learning scene representations from movies using a ViT-like video encoder and an MLP was proposed in <cit.>. While the study does not focus specifically on image-based nudity classification, the ViT-like encoder used in the model can be utilized for this purpose. The proposed approach could potentially improve content moderation for online platforms by classifying video scenes based on their content rating related to nudity detection using image classification. The model was evaluated on multiple datasets, including a private dataset, and demonstrated promising results.

§ EXPERIMENTAL EVALUATION AND RESULTS

The experimental setup for this study involved evaluating six different models, including MobileNetv3 (small), MobileNetv3 (large), Inceptionv3, ConvNexT (tiny), ViT (B16), and popular open-source safety checkers from Stable Diffusion and LAION. The models were trained and tested on three different datasets, LSPD, NudeNet, and AdultContent, each containing unique challenges and limitations for nudity classification.
The training process for all the deep learning models was performed using the Adam optimizer with a learning rate of 1e-3, a cosine scheduler with 10% warmup, and a batch size of 256 for six epochs. The evaluation metrics used in the study included the label-wise F1 score, accuracy, precision, and recall for all labels in the test sets of the datasets. Additionally, overall scores are calculated by macro-averaging the label-wise scores and are denoted with the `all' subscript in the results. The evaluation focuses on comparing the performance of different models in classifying images based on the presence of nudity, emphasizing their application in content moderation.

Table <ref> presents the overall and label-wise test set results on the LSPD dataset. Fully convolutional models, including MobileNetv3, Inceptionv3, and ConvNexT, achieved high accuracy in classifying images based on the presence of nudity, with ConvNexT (tiny) achieving the highest F1 score. ViT did not achieve good results due to slow convergence. This may be due to the inductive bias present in CNNs or the limited transfer learning capability of transformer-based models. Furthermore, the table presents the zero-shot performance of the LAION safety checker. In the NudeNet and AdultContent zero-shot experiments, the outputs of ConvNexT(tiny)-LSPD (the ConvNexT model trained on the LSPD dataset) and the LAION safety checker are mapped into NudeNet and AdultContent labels as given in Table <ref> and Table <ref>, respectively.

Table <ref> presents the overall and label-wise test set results on the NudeNet dataset. Fully convolutional models, including MobileNetv3, Inceptionv3, and ConvNexT, achieved high accuracy in classifying images based on the presence of nudity, with ConvNexT achieving the highest F1 score. The zero-shot results for the LAION safety checker and ConvNexT(tiny)-LSPD are also presented. ConvNexT(tiny)-LSPD outperforms the LAION safety checker by 1.1 points in the F1_all score and by 3.3 points in F1_nude.

Table <ref> presents the overall and label-wise test set results on the AdultContent dataset. ConvNexT performs best in the supervised setting. In the zero-shot setting, ConvNexT(tiny)-LSPD outperforms the LAION and Stable Diffusion safety checkers by 1 and 4.5 points in F1_all, respectively.

Overall, the results on these three datasets show a marginal performance difference between the models, and performance is saturated. This observation highlights the need for a new fine-grained nudity classification benchmark. In addition, the dataset labels can be combined in a multi-label setting (`safe' or `nude' per image plus other sub-categories per image), which requires additional labeling effort. Moreover, the current `sexy' category in the datasets is not clearly defined and not useful in real-world settings. This is because the `sexy' label in the LSPD dataset consists of images including the `explicit nudity', `bikini', `lingerie', and `cleavage' concepts, while the `sexy' label in NudeNet does not contain images including `explicit nudity'. Furthermore, in real-world applications, it is very important to know the level of nudity and sexiness. For instance, `nudity' without explicit exposure of sexual body parts can be suitable in some cases, or sexiness with `cleavage' can be safe while `lingerie' would be inappropriate. To overcome these issues of the current datasets, more detailed hierarchical labels are required per image.
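For reproducibility, the label-wise and macro-averaged scores reported in the tables can be computed as in the following sketch (scikit-learn); the default label names here follow the NudeNet dataset described above.

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

def labelwise_and_overall(y_true, y_pred, labels=("safe", "sexy", "nude")):
    """Per-label precision/recall/F1 plus their macro averages
    (the `all' subscript used in the result tables)."""
    p, r, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, labels=list(labels), zero_division=0)
    per_label = {lab: (p[i], r[i], f1[i]) for i, lab in enumerate(labels)}
    overall = (np.mean(p), np.mean(r), np.mean(f1))
    return per_label, overall
```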
§ CONCLUSION

The evaluation of six different models on three different datasets shows that fully convolutional models, such as MobileNetv3, Inceptionv3, and ConvNexT, perform better than transformer-based models like ViT in nudity classification. The limited transfer learning capability of transformer-based models may be a contributing factor to their lower performance. Furthermore, the study highlights the inferior zero-shot performance of the popular safety checkers from Stable Diffusion and LAION and presents better alternatives. Overall, the study emphasizes the need for continual improvement of image classification models to ensure the safety and well-being of platform users. Additionally, there is a need for a new fine-grained nudity classification benchmark that can better represent the real-world challenges of nudity detection on online platforms. The project page is available at https://github.com/fcakyon/content-moderation-deep-learning and will be updated with further demonstrations and results.
http://arxiv.org/abs/2312.16338v1
{ "authors": [ "Fatih Cagatay Akyon", "Alptekin Temizel" ], "categories": [ "cs.CV", "cs.AI" ], "primary_category": "cs.CV", "published": "20231226212455", "title": "State-of-the-Art in Nudity Classification: A Comparative Analysis" }
We study the blow-up dynamics for the energy-critical 1-corotational wave maps problem with 2-sphere target. In <cit.>, Raphaël and Rodnianski exhibited a stable finite-time blow-up dynamics arising from smooth initial data. In this paper, we exhibit a sequence of new finite-time blow-up rates (quantized rates), which can still arise from well-localized smooth initial data. We closely follow the strategy of the paper <cit.> by Raphaël and Schweyer, who exhibited a similar construction of the quantized blow-up rates for the harmonic map heat flow. The main difficulty in our wave maps setting stems from the lack of dissipation and its critical nature, which we overcome by a systematic identification of correction terms in higher-order energy estimates. [2020]35B44, 35L05

§ INTRODUCTION

§.§ Wave map problem

For a map Φ: ℝ^n+1→𝕊^n, the wave maps problem is given by

∂_ttΦ - ΔΦ = Φ (|∇Φ|^2 - |∂_t Φ|^2), Φ⃗(t):=(Φ,∂_t Φ)(t) ∈𝕊^n × T_Φ𝕊^n.

(<ref>) has an intrinsic derivation from the following Lagrangian action

1/2∫_ℝ^n+1 (|∇Φ(x,t)|^2 - |∂_t Φ (x,t)|^2) dx dt,

which yields the energy conservation

E(Φ⃗(t))=1/2∫_ℝ^n |∇Φ|^2 + |∂_t Φ|^2 dx = E(Φ⃗(0)).

In particular, for the case n=2, (<ref>) is called energy-critical since the conserved energy is invariant under the scaling symmetry: if Φ⃗(t,x) is a solution to (<ref>), then Φ⃗_λ(t,x) is also a solution to (<ref>), where

Φ⃗_λ(t,x):=(Φ(t/λ, x/λ),1/λ∂_tΦ(t/λ, x/λ))

and satisfies E(Φ⃗_λ)=E(Φ⃗).

When observing a complicated model, it makes sense from a physics perspective to extract the essential dynamics of the problem by reducing the degrees of freedom. Especially for field theories such as (<ref>), the geodesic approximation, that is, a method of approximating the dynamics of the full problem as a geodesic motion over a space of static solutions, is prevalent (see <cit.>). To discuss static solutions in more detail, we focus on the solutions that have finite energy. This assumption extends the spatial domain of Φ to 𝕊^2 and allows the topological degree of Φ to be well-defined:

k= 1/|𝕊^2|∫_ℝ^2Φ^* (dw) = 1/4π∫_ℝ^2Φ· (∂_xΦ×∂_yΦ) dxdy.

Here, dw is the area form on 𝕊^2 and k is necessarily an integer. We also remark that k is conserved over time. We now consider static solutions to (<ref>):

ΔΦ + Φ |∇Φ|^2 =0,

the so-called harmonic maps. Recalling our Lagrangian action (<ref>), harmonic maps are characterized as minimizers of the Dirichlet energy:

1/2∫_ℝ^2 |∇Φ|^2 dxdy.

Assume the topological degree of a harmonic map Φ is k∈ℤ. Then we have the following inequality:

1/2∫_ℝ^2 |∇Φ|^2 dxdy =1/2∫_ℝ^2 | ∂_xΦ |^2 + |∂_yΦ |^2 dxdy = 1/2∫_ℝ^2 | ∂_xΦ±Φ×∂_y Φ |^2 dxdy ∓∫_ℝ^2∂_x Φ· (Φ×∂_y Φ) dx dy ≥±∫_ℝ^2Φ· (∂_xΦ×∂_y Φ) dx dy =4π |k|.

Hence, an energy minimizer of degree k saturates this bound and satisfies the Bogomol'nyĭ equation <cit.>

∂_x Φ±Φ×∂_y Φ =0 for ± k ≥ 0.

That is, the field equation (<ref>) can be reduced from a second-order to a first-order PDE. Via the stereographic projection, we can see that the equation (<ref>) is equivalent to the Cauchy-Riemann equation[If k is negative, we adopt the conjugate Cauchy-Riemann equation instead of the Cauchy-Riemann equation.
Thence, harmonic maps can be represented as rational maps with z̅ as a complex variable.], which clearly identifies the space of harmonic maps as the space of rational maps of degree k. Under the L^2 metric induced naturally from the kinetic energy formula, it is well known that the space of static solutions is geodesically incomplete, which leads us to expect a blow-up scenario for the low energy problem.

§.§ Corotational symmetry

For the sake of simplicity, we consider an ansatz of solutions with k-corotational symmetry:

Φ(t,r,θ) = [ sin (u(t,r)) cos kθ; sin (u(t,r)) sin kθ; cos (u(t,r)) ] .

Under such a symmetry assumption, u(t,r) satisfies

∂_tt u - ∂_rru - 1/r∂_r u + k^2f(u)/r^2 =0, u_| t=0 = u_0, ∂_t u_|t=0=u̇_0, f(u) = sin 2u/2.

It is known that the flow (<ref>) preserves the corotational symmetry (<ref>) for smooth initial data, at least locally in time; see <cit.>. Also, the energy functional (<ref>) can be rewritten as

E(u,u̇) := π∫_0^∞( |u̇|^2 + |∂_r u|^2 + k^2sin^2 u/r^2) rdr = E(u_0,u̇_0).

From the above expression, we can observe that a solution to (<ref>) with finite energy must satisfy the following boundary conditions:

lim_r→ 0 u(r) = mπ and lim_r→∞ u(r) = nπ, m,n ∈ℤ.

We have additional symmetries from the geometry of the target 𝕊^2:

-u(t,r), u(t,r) + π are also solutions to (<ref>).

Thus, we restrict our solution space to the set of functions (u,u̇) that have finite energy and satisfy the boundary conditions (<ref>) with m=0 and n=1, which provides the local well-posedness of (<ref>) (see also <cit.>).

§.§ Harmonic map

Under this restriction, the harmonic map is uniquely determined (up to scaling) and can be written explicitly as

Q(r)=2 tan^-1 r^k.

Based on the geodesic approximation, it can be said that observing the vicinity of Q under the corotational symmetry assumption facilitates the analysis of blow-up dynamics. This has been proven as a rigorous statement in several past global regularity works (see <cit.>). The above results proved that if a wave map blows up in finite time, such a singularity should be created by bubbling off of a non-trivial harmonic map (strictly) inside the backward light cone. This statement has inspired further research on the global behavior of solutions, and many of the results have been developed based on the existence of a nontrivial harmonic map.

Firstly, there is global existence, which is a consequence of the preceding blow-up criterion. If the initial data cannot form a nontrivial harmonic map, that is, if the energy is less than the ground state energy, it can be naturally predicted that the solution exists globally in time, and a mathematical proof is also contained in the previously mentioned global regularity results. This study also allows us to consider energy threshold problems (see <cit.> for the symmetric case and <cit.> for the general case). In this case, it is also important to set an appropriate threshold value, and the ground state energy is suitable for our problem setting. However, for other boundary conditions or other topological degrees, the threshold is often given as an integer multiple of E(Q,0). The heuristic reason is that the degree condition cannot be satisfied with just one bubble. This goes beyond suggesting the existence of multi-bubble solutions <cit.> and serves as a stepping stone toward the soliton resolution conjecture <cit.> (see also <cit.>). The most recent soliton resolution result <cit.> fully characterizes the profile decomposition of the solution in all equivariant classes.
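As a quick sanity check, which uses only the explicit formulas above, let us verify that Q solves the static problem. Under the corotational ansatz, the Bogomol'nyĭ equation (<ref>) (for k>0) reduces to the first-order ODE

r∂_r u = k sin u,

and Q(r)=2 tan^-1 r^k solves it: since sin Q = 2r^k/(1+r^2k) and ∂_r Q = 2kr^k-1/(1+r^2k), we indeed have r∂_r Q = 2kr^k/(1+r^2k) = k sin Q. Differentiating once more recovers the static version of (<ref>):

∂_rr Q + 1/r∂_r Q = k sin Q(kcos Q - 1)/r^2 + k sin Q/r^2 = k^2 sin Q cos Q/r^2 = k^2f(Q)/r^2.

In particular, the global results above single out the rescaled bubbles Q(r/λ) as the universal profiles of concentration.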
Thus, our interest is to observe how the scale of the profile given by the harmonic map changes over time within the lifespan of the solution. In particular, in the case of low energy, that is, when the energy is slightly greater than the ground state energy, the geodesic approximation discussed earlier leads us to focus on the situation of having only one harmonic map as the blow-up profile.

§.§ Blow-up near Q

From a methodological perspective, studies investigating the blow-up of a single bubble can be broadly divided into the backward construction starting from Krieger–Schlag–Tataru <cit.> and the forward construction inspired by Rodnianski–Sterbenz <cit.> and Raphaël–Rodnianski <cit.>.

The former work obtained a continuum of blow-up rates for the case k=1 via the iteration method and inspired other extended results such as the stability of regular perturbations <cit.> and some exotic solutions <cit.>. Beyond direct extensions of this approach, there is a classification result <cit.> via configuring radiations appropriately at the blow-up time. These constructions inevitably involve some constraints on the regularity and degeneracy of the initial data. The latter approach adopts a method that accurately describes the initial data set that drives blow-up. Although it is difficult (probably ruled out) to form a family of blow-up rates as in the previous result, the emphasis is on being able to observe smooth blow-up dynamics. Especially in <cit.>, the authors explicitly describe an initial data set that is open in the H^2 topology around Q and prove the so-called stable blow-up, in which the solutions starting from that set universally blow up at a rate that slightly misses the self-similar rate, for all k≥ 1.

We note that the initial data set in the above result does not imply a universal blow-up of all well-localized smooth data. Our main theorem says that there exist other smooth solutions that blow up in finite time with quantized rates corresponding to the excited regime.

§.§ Main theorem

We focus on the solution to (<ref>) with 1-corotational initial data, i.e. k=1. Let us restate the stable blow-up result.

There exists a constant ε_0>0 such that for all 1-corotational initial data (u_0,u̇_0) with

‖ u_0-Q, u̇_0 ‖_ℋ^2< ε_0,

the corresponding solutions to (<ref>) blow up in finite time 0<T=T(u_0,u̇_0)<∞ as follows: for some (u^*,u̇^*) ∈ℋ,

‖ u(t,r)-Q(r/λ(t))-u^*, ∂_t u(t,r) - u̇^* ‖_ℋ→ 0 as t→ T

with the universal blow-up speed:

λ(t)=2e^-1(1+o_t→ T(1))(T-t) e^-√(|log(T-t)|).

Here, ℋ, ℋ^2 are given by (<ref>), (<ref>).

In <cit.>, the authors mentioned that the nature of the harmonic map, which varies depending on whether k equals 1 or not, leads to distinctive blow-up rates. As a result of the logarithmic calculation that occurs additionally only when k=1, the universality of the blow-up rate in this case was unclear. The sharp constant 2e^-1 in (<ref>) was later obtained by Kim <cit.> using a refined modulational analysis. Nevertheless, the slowly decaying nature of the harmonic map is rather an advantage in our analysis, which allows us to exhibit the following smooth blow-up with the quantized blow-up rates corresponding to the excited regime.

For a natural number ℓ≥ 2 and an arbitrarily small constant ε_0 >0, there exists smooth 1-corotational initial data (u_0,u̇_0) with

‖ u_0-Q, u̇_0 ‖_ℋ< ε_0

such that the corresponding solution to (<ref>) blows up in finite time 0<T=T(u_0,u̇_0)<∞ and satisfies (<ref>) with the quantized blow-up speed:

λ(t)=c(u_0,u̇_0)(1+o_t→ T(1)) (T-t)^ℓ/|log (T-t)|^ℓ/(ℓ-1), c(u_0,u̇_0)>0.
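To put the two regimes side by side: the stable rate (<ref>) is (T-t)^1+o(1), since e^-√(|log(T-t)|) tends to 0 more slowly than any positive power of (T-t), while the quantized rates (<ref>) are (T-t)^ℓ+o(1) with ℓ≥ 2. For instance, the first excited rate ℓ=2 reads

λ(t)=c(u_0,u̇_0)(1+o_t→ T(1)) (T-t)^2/|log (T-t)|^2,

so the corresponding solutions concentrate strictly faster than those of the stable regime.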
The asymptotic profile (u^*,u̇^*) also has Ḣ^ℓ×Ḣ^ℓ-1 regularity in the sense that certain ℓ-fold (resp., (ℓ-1)-fold) derivatives of u^* (resp., u̇^*) belong to L^2. This is a consequence of the fact that the ℓ-th order energy of the radiative part of the solution satisfies the scaling invariance bound (ℰ_ℓ≤ C λ^2(ℓ-1); see (<ref>)), as in <cit.>.

The existence of (type-II) blow-up solutions with quantized blow-up rates has also been well studied in parabolic equations, especially for nonlinear heat equations. Starting with the discovery of formal mechanisms <cit.>, there are classification works <cit.> in the energy-supercritical regime. The proofs in this literature are based on the maximum principle (cf. <cit.>). Through modulational analysis, not relying on the maximum principle, there have been some (type-II) quantized rate constructions in critical parabolic equations, such as <cit.> for the energy-critical case and <cit.> for the mass-critical case. See also the works <cit.> relying on the inner-outer gluing method. In <cit.>, the authors expected that their modulation technique is robust enough to be propagated to dispersive models including the wave maps problem, and quantized rate constructions have been established in energy-supercritical dispersive equations <cit.>. To the best of our knowledge, Theorem <ref> provides the first rigorous quantized rate construction for energy-critical dispersive equations. We expect that our analysis can also be extended to other energy-critical dispersive equations such as the nonlinear wave equation.

In contrast to Theorem <ref>, our initial data set is of codimension ℓ-1, similar to <cit.>, due to unstable directions inherent in the ODE system driving the blow-up dynamics. This similarity follows from the fact that the wave maps problem and the harmonic map heat flow share the same ground states and linearized Hamiltonian under the 1-corotational symmetry. We also expect that the stability can be formulated by constructing a smooth manifold of initial data.

§.§ Notation

We introduce some notation needed for the proof before going into the strategy of the proof. We first use the bold notation for vectors in ℝ^2:

u:=[u; u̇ ], u(r):=[u(r); u̇(r) ].

For λ >0, the Ḣ^1 × L^2 scaling is defined by:

u_λ(r)=[ u_λ(r); λ^-1u̇_λ(r ) ]:=[ u(y); λ^-1u̇(y ) ] , y:=r/λ

and the corresponding generator is denoted by

Λu := [Λ u; Λ_0 u̇ ] := -d u_λ(r)/dλ|_λ=1= [r∂_r u(r); ( 1 +r∂_r )u̇(r) ] .

In general, we employ the Ḣ^k scaling generator

Λ_k u:= - d/dλ( λ^k-1 u_λ(r) )|_λ=1= (-k+1+r∂_r)u(r).

We now reformulate (<ref>) using the vector-valued function F:ℝ^2 →ℝ^2:

∂_tu =F(u), u_|t=0=u_0, u=u(t,r) , F(u):=[u̇; Δ u-1/r^2f(u) ].

We use two subsets of the real line:

ℝ_+={r ∈ℝ: r≥ 0}, ℝ_+^*={r ∈ℝ: r >0}.

We denote by χ a C^∞ radial cut-off function on ℝ_+:

χ(r)= 1 for r≤ 1, 0 for r≥ 2 .

We let χ_B(r):=χ(r/B) for B>0. Similarly, we denote by 1_A(y) the indicator function on the set A. In particular, 1_B≤ y ≤ 2B will be rewritten as 1_y∼ B, or simply 1_B abusively. The cut-off boundary B will often be chosen as a constant multiple of

B_0:=1/b_1, B_1:=|log b_1|^γ/b_1, b_1>0.

Later, we will choose γ=1+ℓ, where ℓ appeared in Theorem <ref>. Here, we denote the remainder of dividing i by 2 as ⟨ i ⟩, i.e., ⟨ i ⟩ = i mod 2 ∈{0,1} for an integer i. We also denote L=ℓ+⟨ℓ+1 ⟩, i.e., L is the smallest odd integer greater than or equal to ℓ.
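For instance, ⟨ 4 ⟩ = 0 and ⟨ 7 ⟩ = 1, so ℓ=2 gives L=2+⟨ 3 ⟩=3 and ℓ=3 gives L=3+⟨ 4 ⟩=3; in general, L=ℓ for odd ℓ and L=ℓ+1 for even ℓ. We also note the consistency of the scaling generators: Λ_1 = r∂_r and Λ_0 = 1+r∂_r, so that Λu=(Λ_1 u, Λ_0 u̇)^t in (<ref>).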
We also abuse the indicator notation 1_{l ≥ m } as1_{l ≥ m }= 1if l≥ m 0if l<m, l,m∈ℤ.We adopt the following L^2(ℝ^2) inner product for radial functions u,v: ⟨ u, v ⟩ := ∫_0^∞ u(r)v(r) rdr and L^2× L^2 inner product for vector-valued functions u,v: ⟨u, v⟩ := ⟨ u, v ⟩ + ⟨u̇, v̇⟩ We introduce two sobolev spaces ℋ and ℋ^2 with the following norms:‖ u,u̇‖_ℋ^2 := ∫ |∂_y u|^2 + |u|^2/y^2 + |u̇|^2,‖ u,u̇‖_ℋ^2^2 := ‖ u,u̇‖_ℋ^2 + ∫ |∂_y^2 u|^2 + |∂_yu̇|^2 + |u̇|^2/y^2 + ∫_|y|≤ 11/y^2( ∂_y u - u/y)^2.For any x:=(x_1,…,x_n)∈ℝ^n, we set |x|^2 = x_1^2+ ⋯ + x_n^2 and ℬ^n:={x∈ℝ^n, |x|≤ 1 }, 𝒮^n:=∂ℬ^n={x∈ℝ^n, |x|= 1 }.We use the Kronecker delta notation: δ_ij=1 for i=j and δ_ij=0 for i≠ j.§.§ Strategy of the proofOur proof is based on the general modulational analysis scheme developed by Raphaël–Rodnianski <cit.>, Merle–Raphaël–Rodnianski <cit.> and Raphaël–Schweyer <cit.>, which also have difficulties arising from energy-critical nature and the small equivariance index, including logarithmic computations. We closely follow the main strategy of <cit.>. However, notable differences stem from the lack of dissipation in the higher-order (H^L+1, L ≫ 1) energy estimates due to the dispersive nature of our problem. We overcome this difficulty by carefully correcting the higher-order energy functional to uncover the repulsive property (to identify terms with good sign), generalizing the computation in the H^2 energy estimates of <cit.>.Given an odd integer L≥ 3, we first construct the blow-up profile Q_b of the form Q_b := Q + α_b:= [ Q; 0 ] + ∑_i=1^L b_i T_i + ∑_i=2^L+2S_iwhere b=(b_1,…,b_L) is a set of modulation parameters and T_i, S_i are deformation directions so that (Q_b(t))_λ(t) solves (<ref>) approximately. Equivalently, Q_b satisfies∂_s Q_b-F(Q_b) -λ_s/λΛ Q_b ≈ 0 ,ds/dt = 1/λ(t).From the imposed relations (<ref>), the blow-up dynamics is determined by the evolution of the modulation parameters b=(b_1,…,b_L). The leading dynamics of b and T_i are determined by considering the linearized flow of (<ref>) near Q:0≈∂_s Q_b-F(Q_b) -λ_s/λΛ Q_b= ∂_s (Q_b-Q)-F(Q_b) + F(Q) -λ_s/λΛ Q_b≈∂_s α_b + Hα_b -λ_s/λΛ(Q + α_b)where H denotes the linearized Hamiltonian H:= [0 -1;H0 ],H= -Δ + f'(Q)/y^2.After defining T_i inductivelyHT_i+1=- T_i, T_0:=Λ Q,(<ref>) and asymptotics Λ T_i ∼ (i-1)T_i yield the leading dynamics of b:-λ_s/λ=b_1, (b_k)_s=b_k+1-(k-1)b_1b_k, b_L+1:=0, 1≤ k ≤ L. S_i appears to correct (<ref>) to (<ref>) containing some radiative terms from the difference Λ T_i - T_i and the nonlinear effect from F(Q_b) - F(Q) + Hα_b. Then b drives the following ODE system (b_k)_s=b_k+1-(k-1+1/(1+δ_1k)log s)b_1b_k, b_L+1:=0, 1≤ k ≤ L.We then choose a special solution of (<ref>) depending on ℓ = L, L-1:b_1(s) ∼ℓ/ℓ-1( 1/s - (ℓ-1)^-1/slog s),which leads (<ref>) from the relations -λ_t=b_1 and ds/dt=1/λ. We control the unstable directions in the vicinity of these special solutions to ODE system (<ref>) by Brouwer's fixed point theorem.Now, we decompose the solution u=u(t,r) to (<ref>) as followsu= ( Q_b(t) + ε)_λ(t)= (Q_b(t))_λ(t) + w,⟨H^iε,Φ_M ⟩ = 0,0≤ i ≤ Lwhere Φ_M is defined in (<ref>). The orthogonality conditions in (<ref>) uniquely determine the decomposition by the implicit function theorem. Then we derive the evolution equation of ε from (<ref>), which contains the formal modulation ODE (<ref>) with some errors in terms of ε. To justify the formal modulation ODE (<ref>), we need sufficient smallness of ε and we need to propagate it. 
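Before turning to the analytical core, let us record how the special solution (<ref>) formally produces the rate of Theorem <ref>. Keeping only the leading term b_1(s) ≈ c_1/s with c_1 = ℓ/(ℓ-1), the relation -λ_s/λ=b_1 gives λ(s) ≈ s^-c_1, and then ds/dt = 1/λ yields

T-t = ∫_s^∞λ(s') ds' ≈ s^1-c_1/(c_1-1) = (ℓ-1) s^-1/(ℓ-1),

i.e. s ≈ ((ℓ-1)/(T-t))^ℓ-1 and hence λ≈ s^-ℓ/(ℓ-1)≈ C(T-t)^ℓ; the logarithmic correction in (<ref>) comes from the second term of (<ref>). It remains to justify this formal picture along the full flow, which requires the smallness of ε to be propagated in suitably strong norms.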
For this purpose, we consider the higher-order energy associated to the linearized Hamiltonian H:ℰ_L+1 = ⟨ H^L+1/2ε, H^L+1/2ε⟩ + ⟨ H H^L-1/2ε̇, H^L-1/2ε̇⟩.This energy is coercive thanks to the orthogonality conditions in (<ref>). Thus, our analysis comes down to estimating the time derivative of ℰ_L+1. Unlike in <cit.>, we cannot employ dissipation to control the time derivative of ℰ_L+1 due to the dispersive nature of our problem. Instead, we use the repulsive property of the (super-symmetric) conjugated Hamiltonian H of H observed in <cit.> and <cit.>. To illuminate the repulsive property in the energy estimate, we consider the linearized flow in terms of w from w=(w,ẇ) and the well-known factorization:w_tt + H_λ w=0,H_λ =A_λ^* A_λ, A_λ=-∂_r + sin Q_λ/r.Defining the higher-order derivatives adapted to H_λ and its corresponding operatorw_k:=𝒜_λ^k w,𝒜_λ = A_λ, 𝒜_λ^2 = A_λ^*A_λ,⋯, 𝒜_λ^k = ⋯ A_λ^*A_λ A_λ^* A_λ_ktimes, the higher-order energy (<ref>) can essentially be written as follows: ℰ_L+1 ≈λ^2L(⟨ w_L+1,w_L+1⟩ + ⟨∂_t w_L, ∂_t w_L ⟩)=λ^2L(⟨H_λ w_L,w_L⟩ + ⟨∂_t w_L, ∂_t w_L ⟩) where H_λ=A_λ A_λ^* is the conjugated Hamiltonian of H_λ. As an advantage of the adoption of the Leibniz rule notation between an operator and a function∂_t (P f) = ∂_t(P)f + P f_t,∂_t(P):=[∂_t, P],we can express the energy estimate for ℰ_L+1 succinctly:d/dt{ℰ_L+1/2λ^2L} ≈1/2⟨∂_t(H_λ) w_L,w_L⟩ + ⟨H_λ w_L,∂_t w_L⟩ + ⟨∂_tt w_L, ∂_t w_L ⟩≈1/2⟨∂_t(H_λ) w_L,w_L⟩ + 2 ⟨∂_t w_L, ∂_t (𝒜_λ^L)w_t ⟩.Integrating by parts in time, we getd/dt{ℰ_L+1/2λ^2L - 2⟨w_L, ∂_t (𝒜_λ^L)w_t ⟩} ≈1/2⟨∂_t(H_λ) w_L,w_L⟩ + 2 ⟨w_L, ∂_t (𝒜_λ^L)w_2⟩.In <cit.>, the authors exhibited the repulsive property by directly calculating the following identity with the advantage of L=1: ⟨w_1, ∂_t (𝒜_λ)w_2⟩=1/2⟨∂_t(H_λ) w_1,w_1⟩≤ 0However, this computation does not seem to be directly extended to our case L≥ 3. We overcome this problem by first pulling out the repulsive term using Leibniz rule⟨w_L, ∂_t (𝒜_λ^L)w_2⟩ = ⟨w_L, ∂_t (H_λ)w_L⟩ + ⟨H_λ w_L, ∂_t (𝒜_λ^L-2)w_2⟩≈⟨w_L, ∂_t (H_λ)w_L⟩ - ⟨∂_tt w_L, ∂_t (𝒜_λ^L-2)w_2⟩.Again integrating by parts in time, we obtaind/dt{ℰ_L+1/2λ^2L - 2 ( ⟨w_L, ∂_t (𝒜_λ^L)w_t ⟩ -⟨∂_t w_L, ∂_t (𝒜_λ^L-2)w_2⟩ + ⟨w_L, ∂_t (𝒜_λ^L-2)∂_t w_2⟩) }≈5/2⟨∂_t(H_λ) w_L,w_L⟩ + 2 ⟨w_L, ∂_t (𝒜_λ^L-2)w_4⟩.Repeating the above correction procedure, we arrive at the term with good sign:d/dt{ℰ_L+1/2λ^2L + corrections} ≈2L-1/2⟨∂_t(H_λ) w_L,w_L⟩ + 2 ⟨w_L, ∂_t (𝒜_λ)w_L+1⟩≈2L+1/2⟨∂_t(H_λ) w_L,w_L⟩≤ 0.In the actual energy estimate, there are also error terms such as the profile equation error and nonlinear terms in ε. For these nonlinear terms, we also estimate the intermediate energies ℰ_k, which can be defined similarly to ℰ_L+1. Especially for ℰ_ℓ, we detect subtle corrections arising from a different criticality than ℰ_L+1.§.§ Organization of the paper In section 2, we construct the approximate blow-up profile with the description of the ODE dynamics of the modulation equations. Section 3 is devoted to the decomposition of the solution into the blow-up profile constructed in the previous section and the remaining error. We also introduce the bootstrap setting to control the error and establish a Lyapounov-type monotonicity for the higher-order energy with respect to such error. Section 4 provides the proof of Theorem <ref> by closing the bootstrap with some standard topological arguments.§.§ AcknowledgementsThe author appreciates Kihyun Kim and Soonsik Kwon for helpful discussions and suggestions for this work. 
The author is partially supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF-2019R1A5A1028324 and NRF-2022R1A2C109149912).

§ CONSTRUCTION OF THE APPROXIMATE SOLUTION

In this section, we construct the approximate blow-up profile Q_b, represented by a deformation of the harmonic map Q through the modulation parameters b=(b_1,…,b_L). We also establish the ODE dynamics of b, which leads to our desired blow-up rate.

§.§ The linearized dynamics

It is natural for us to look into the linearized dynamics of our system near the stationary solution Q. Let u=Q+ε where Q=(Q,0)^t and u is the solution to (<ref>). Then ε satisfies

∂_tε = F(Q+ε)-F(Q)=[ ε̇; Δε - 1/r^2 (f(Q+ε) - f(Q)) ] =[ε̇; Δε - r^-2f'(Q)ε ] - 1/r^2[0; f(Q+ε)-f(Q)-f'(Q)ε ].

Ignoring the higher-order terms in ε and setting λ =1 (i.e. r=y), we roughly obtain the linearized system:

∂_tε + Hε =0, Hε=[0 -1;H0 ][ε; ε̇ ]

where H is the Schrödinger operator with the explicitly computable potential f'(Q) from (<ref>) and (<ref>):

H:=-Δ +V/y^2, V=f'(Q)=y^4-6y^2+1/(y^2+1)^2.

Due to the scaling invariance, we have HΛ Q=0 where

Λ Q=2y/1+y^2.

However, Λ Q just barely fails to belong to L^2(ℝ^2), so we call Λ Q the resonance of H. The positivity of Λ Q on ℝ_+^* allows us to factorize H:

H=A^*A, A= -∂_y + Z/y, A^*= ∂_y + 1+Z/y, Z(y)=sin Q=1-y^2/1+y^2.

The above factorization facilitates examining the formal kernel of H on ℝ_+^*, denoted by Ker(H). More precisely, the equivalent forms

Au = -∂_y u + ∂_y(logΛ Q)u =-Λ Q∂_y(u/Λ Q), A^*u = 1/y∂_y(yu) + ∂_y(logΛ Q)u = 1/yΛ Q∂_y(uyΛ Q)

yield that, for y>0, Ker(H) is given by

Ker(H)=Span(Λ Q,Γ), Γ(y)=Λ Q∫_1^ydx/x(Λ Q(x))^2 = O(1/y) as y→ 0, y/4 + O(log y/y) as y→∞.

From the variation of parameters, we obtain the formal inverse of H:

H^-1f= Λ Q∫_0^y fΓ xdx-Γ∫_0^y fΛ Qxdx,

so the inverse of H is given by

H^-1:=[0 H^-1; -10 ].

We remark that the inverse formula (<ref>) is uniquely determined by the boundary condition at the origin: for any smooth function f, H^-1f=O(y^2) near the origin. On the other hand, the super-symmetric conjugate operator H̃ is given by

H̃:=AA^*=-Δ +Ṽ/y^2, Ṽ(y)=(1+Z)^2-Λ Z = 4/y^2+1.

We note that H̃ has a repulsive property represented by its potential:

Ṽ = 4/y^2+1 >0, ΛṼ = -8y^2/(y^2+1)^2≤ 0.

Based on the commutative relation

A H = H̃ A,

we can naturally define the higher-order derivatives adapted to the linearized Hamiltonian H inductively:

f_0:=f, f_k+1:=A f_k for k even , A^* f_k for k odd .

For the sake of simplicity, we denote the corresponding operator as follows:

𝒜:=A, 𝒜^2:=A^* A, 𝒜^3:= AA^*A, ⋯, 𝒜^k:=⋯ A^*A A^* A_k times.

We observe that the above adapted derivatives cannot be taken for general smooth functions defined on ℝ_+. More precisely, for any smooth function f, (<ref>) implies f_1=Af∼ -y ∂_y(y^-1f) near y=0. Thus, f must degenerate near the origin as f=cy+O(y^2), and so Af=c'y + O(y^2). Here, the leading term c'y comes from the cancellation

Ay=O(y^2),

which is a direct consequence of (<ref>). However, f_2 does not degenerate near the origin like f, since A^* does not have any cancellation like (<ref>). Hence, f should be more degenerate near the origin, namely f=cy+c'y^3 + O(y^4). Furthermore, if f_k is to be well-defined for all k∈ℕ, then f must satisfy the following condition: for all p∈ℕ, f has a Taylor expansion near the origin as

f(y) = ∑_k=0^p c_k y^2k+1 + O(y^2p+3).

It is known that for any (well-localized) smooth 1-corotational map Φ(r,θ) (see (<ref>)), the associated u is a smooth function satisfying (<ref>) (one can find the proof in Appendix A of <cit.>).
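For concreteness, the integral defining Γ can be evaluated in closed form, which makes the asymptotics recorded in (<ref>) explicit: with Λ Q = 2y/(1+y^2),

∫_1^ydx/x(Λ Q(x))^2 = ∫_1^y(1+x^2)^2/4x^3 dx = y^2/8 + log y/2 - 1/8y^2,

so that

Γ(y) = 2y/1+y^2( y^2/8 + log y/2 - 1/8y^2),

which behaves like -1/(4y) + O(y|log y|) as y→ 0 and like y/4 + O(log y/y) as y→∞, in agreement with (<ref>).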
§.§ Admissible functions

As mentioned earlier, the leading dynamics of the blow-up are determined by the leading growth of the tails of the blow-up profile. In order to construct appropriate tails, several conditions related to the adapted derivatives (<ref>) are required in addition to the origin condition (<ref>). In the same way as in <cit.> and <cit.>, we first define an "admissible" vector-valued function characterized by three indices, which represent the behavior near the origin, the behavior near infinity, and the position of the nonzero coordinate.

We say that a smooth vector-valued function f:ℝ_+ →ℝ^2 is admissible of degree (p_1,p_2,ι) ∈ℕ×ℤ×{0,1} if

(i) f is situated on the (ι+1)-th coordinate, i.e.

f=[ f; 0 ] if ι=0 and f=[ 0; f ] if ι=1 .

In such a case, we use f and f interchangeably.

(ii) We can expand f near y=0: for all 2p≥ p_1,

f(y)=∑_k=p_1-ι, k is even^2p c_ky^k+1+ O(y^2p+3) .

(iii) The adapted derivatives f_k have the following bounds: for all k ≥ 0 and y≥ 1,

|f_k(y)|≲ y^p_2-1-ι-k(1+|log y|1_p_2-k-ι≥ 1).

The logarithm term in (<ref>) comes from integrating y^-1. The next lemma says that admissible functions are designed to be compatible with the linearized operator H.

Let f be an admissible function of degree (p_1,p_2,ι). Then

(i) For all k ∈ℕ, H^k f is admissible of degree (max(p_1-k,⟨ι+k ⟩),p_2-k,⟨ι+k ⟩).

(ii) For all k ∈ℕ and p_2≥ 0, H^-kf is admissible of degree (p_1+k,p_2+k,⟨ι+k ⟩).

(i) This claim directly comes from the facts

H=[0 -1;H0 ], H^2=[ -H0;0 -H ].

More precisely, the maximum choice max(p_1-k,⟨ι+k ⟩) appears from the cancellation (<ref>) near the origin. Near infinity, the degree condition p_2-k is a consequence of the simple relation Hf=f_2.

(ii) It suffices to calculate the case k=1 by induction. For ι=0,

H^-1f=[0 H^-1; -10 ][ f; 0 ]=[0; -f ],

so H^-1f is admissible of degree (p_1+1,p_2+1,1). For ι=1, we have

H^-1f=[0 H^-1; -10 ][ 0; f ]=[ H^-1f; 0 ].

Instead of using the formal inverse formula (<ref>) directly, we utilize the relation (<ref>) as

AH^-1f=1/yΛ Q∫_0^y f Λ Q x dx,

and the relation (<ref>) as

H^-1f=-Λ Q ∫_0^y AH^-1f/Λ Qdx.

Near the origin, (<ref>) gives the expansion for AH^-1f:

AH^-1f = ∑_k=p_1-1,even^2pc̃_k y^k+2 + O(y^2p+4),

thus H^-1f satisfies the Taylor expansion

H^-1f = ∑_k=p_1-1,even^2pc̃_k y^k+3 + O(y^2p+5)=∑_k=p_1+1-0,even^2pc̃_k y^k+1 + O(y^2p+3).

For y≥ 1, (<ref>) and (<ref>) imply

|A H^-1f|≲∫_0^y |f| dx ≲∫_1^y x^p_2-2(1+|log x|1_p_2 ≥ 2) dx ≲ y^(p_2+1)-1-0 -1(1+|log y|1_p_2 ≥ 1), |H^-1f|≲1/y∫_0^y |x AH^-1f| dx≲1/y∫_1^y x^p_2(1+|log x|1_p_2 ≥ 1) dx ≲ y^(p_2+1)-0-1(1+|log y|1_p_2 ≥ 0),

so we obtain (<ref>) for f and f_1. The bounds for the higher derivatives come from H(H^-1f)=f. Hence, H^-1f is admissible of degree (p_1+1,p_2+1,0).

From (<ref>), we can easily check that Λ Q=(Λ Q,0)^t is admissible of degree (0,0,0). Thus, Lemma <ref> yields the existence of the admissible functions which formally generate the generalized null space of H: For each i ≥ 0, we define an admissible function T_i of degree (i,i,⟨ i ⟩) as follows:

T_i:=(-H)^-iΛ Q.

By the definition of the admissible functions, we will use the notation T_i as a scalar function.

§.§ b_1-admissible functions

We will keep track of the logarithmic weight |log b_1| in the blow-up profiles to be constructed later. In this sense, the logarithmic loss of T_i hinders our analysis, so we settle this problem by introducing a new class of functions.

We say that a smooth vector-valued function f:ℝ_+^*×ℝ_+ →ℝ^2 is b_1-admissible of degree (p_1,p_2,ι)∈ ℕ ×ℤ×{0,1} if (i) f is situated on the (ι+1)-th coordinate (so we use f and f interchangeably).
(ii) f=f(b_1,y) can be expressed as a finite sum of the smooth functions of the form h(b_1)f̃(y), where f̃(y) has a Taylor expansion (<ref>) and h(b_1) satisfies ∀ l≥ 0, |∂^l h_j/∂ b_1^l|≲1/b_1^l, b_1 >0. (iii) f and its adapted derivatives f_k given by (<ref>) have the following bounds: there exists a constant c_p_2>0 such that for all k ≥ 0 and y≥ 1,|f_k(b_1,y)| ≲ y^p_2-k-1-ι(g_p_2-k-ι(b_1,y)+|log y|^c_p_2/y^2 + 1_{p_2 ≥ k+3+ι, y≥ 3B_0}y^2 b_1^2|log b_1|), and for all l ≥ 1|∂^l/∂ b_1^lf_k(b_1,y)|≲ y^p_2-k-1-ι/b^l_1|log b_1|( g̃_p_2-k-ι(b_1,y)+|log y|^c_p_2/y^2+ 1_{p_2 ≥ k+3+ι,y≥ 3B_0}y^2b_1^2).where B_0 is given by (<ref>) and g_l, g̃_l are defined asg_l(b_1,y)=1+|log (b_1y)|1_{l≥ 1}|log b_1|1_y≤ 3B_0,g̃_l(b_1,y)= 1+|log y|1_{l≥ 1}|log b_1|1_y≤ 3B_0.One may think that the asymptotics (<ref>) and (<ref>) are quite artificial, the function g_ℓ(b_1,y) and g̃_ℓ(b_1,y) will appear in the construction of the radiation, Lemma <ref>. Then the indicator part 1_p_2 ≥ k+3+ι,y≥ 3B_0 comes from integrating g_ℓ in the region 1≤ y ≤ 3B_0 to take H^-1, which can be seen in more detail in the proof of the following lemma.Let f be a b_1-admissible function of degree (p_1,p_2,ι). Then(i) for all k ∈ℕ, H^k f is b_1-admissible of degree (max(p_1-k,ι),p_2-k,ι+k). (ii) for all k ∈ℕ and p_2 ≥ 0, H^-kf is b_1-admissible of degree (p_1+k,p_2+k,ι+k). (iii) The operators Λ : f↦Λf and b_1 ∂/∂ b_1 : f↦ b_1 ∂f/∂ b_1 preserve the degree.(i) We can borrow the proof of Lemma <ref> since b_1 is independent of H.(ii) Similar to the proof of Lemma <ref>, it suffices to calculate the case ι=1 and k=1. Near the origin, we still use (<ref>) and (<ref>) since the b_1-admissible function has the same power expansion condition as the one of admissible function with some b_1-dependent coefficients.However for y≥ 1, we need a subtle calculation to integrate the terms containing g_l and g̃_l, defined in (<ref>). More precisely, (<ref>) implies for 1≤ y≤ 3B_0, |AH^-1f|≲∫_1^yx^p_2-2 g_p_2-1(b_1,x)+x^p_2-4|log x|^c_p_2 dx ≲∫_1^y x^p_2-21+|log (b_1x)|1_{p_2≥ 2}|log b_1| dx + y^p_2-3|log y|^1+c_p_2≲1/b_1^p_2-1|log b_1|∫_0^b_1 y x^p_2-2(1+|log x|1_{p_2≥ 2}) dx + y^p_2-3|log y|^1+c_p_2≲ y^p_2-11+|log (b_1 y)|1_{p_2≥ 1}/|log b_1| + y^p_2-3|log y|^1+c_p_2= y^(p_2+1)-1-1-0( g_(p_2+1)-1(b_1,y) + |log y|^1+c_p_2/y^2),and for y≥ 3B_0, |AH^-1f|≲∫_1^yx^p_2-2g_p_2-1(b_1,x)+x^p_2-4|log x|^c_p_2 + x^p_2-41_{p_2 ≥ 4 ,x≥ 3B_0 }b_1^2|log b_1| dx ≲1/b_1^p_2-1|log b_1| +y^p_2-31_{p_2 ≥ 4 }b_1^2|log b_1|+y^p_2-3|log y|^1+c_p_2≲ y^(p_2+1)-1-1-0( 1_{p_2 ≥ 1+3, y≥ 3B_0 }y^2b_1^2|log b_1|+ |log y|^1+c_p_2/y^2).Once again, (<ref>) and (<ref>) yield for 1≤ y≤ 3B_0,|H^-1f|≲1/y∫_1^yx^p_2 g_p_2(b_1,x)+x^p_2-3|log x|^1+c_p_2 dx= y^(p_2+1)-1-0( g_p_2+1(b_1,y) + |log y|^2+c_p_2/y^2),and (<ref>) implies for y≥ 3B_0,|H^-1f|≲1/y∫_1^y x^p_2-2|log x|^1+c_p_2 + x^p_2-21_{p_2 ≥ 4 ,x≥ 3B_0 }b_1^2|log b_1| dx ≲ y^(p_2+1)-1-0( 1_{p_2 ≥ 3, y≥ 3B_0 }y^2b_1^2|log b_1|+ |log y|^2+c_p_2/y^2),we obtain (<ref>) for f and f_1. The higher derivatives results come from H(H^-1f)=f. We can easily prove (<ref>) by replacing g_l to g̃_l and dividing b_1^l |log b_1|. Hence, H^-1f is b_1-admissible of degree (p_1+1,p_2+1,0). 
(iii) Note thatΛ f=(Λ f, 0)^tif ι=0, (0,Λ_0 f)^tif ι=1,and Λ_0 f = f+ Λ f, we get the desired result since Λ preserve the parity of f and its adapted derivative satisfies the bound|(Λ f)_k| ≲ |y f_k+1| + |f_k| + y^p_2-k-3-ι , y≥ 1,which established in <cit.>.Near the origin, the property of the operator b_1 ∂/∂ b_1 comes from the fact that b_1 ∂/∂ b_1 preserves the parity of f. For y≥ 1, (<ref>) multiplied by b_1 with l=1 is bounded to (<ref>) from the following boundg̃_l(b_1,y)/|log b_1|≲g_l(b_1,y). §.§ Control of the extra growthThe elements of the null space of H, which was defined in (<ref>), serves as a kind of tails in our blow-up profile. Since we basically plan a bubbling off blow-up by scaling, the situation where the scaling generator Λ is taken by the tails T_i naturally emerges. Especially for i≥ 2, the leading asymptotics of Λ T_i matches that of (i-1)T_i and determines the leading dynamical laws. However, the extra growth of Λ T_i - (i-1)T_i is inadequate to close our analysis, we will eliminate it by adding some radiations, which were first introduced in <cit.>.We now define the radiation situated on the first coordinate as follows: for small b_1>0,Σ_b_1=[ Σ_b_1; 0 ],Σ_b_1 = H^-1{-c_b_1χ_B_0/4Λ Q + d_b_1 H[(1-χ_B_0)Λ Q]}wherec_b_1 = 4/∫χ_B_0/4 (Λ Q)^2=1/|log b_1|+O( 1/|log b_1|^2),d_b_1 =c_b_1∫_0^B_0χ_B_0/4Λ Q Γ y dy = O(1/b_1^2 |log b_1|).From the inverse formula (<ref>), we obtain the asymptotics near origin and infinity:Σ_b_1=c_b_1T_2fory ≤B_0/44Γ fory ≥ 3B_0.To deal with T_1, which is radiative itself, we further definec̃_b_1:=⟨Λ_0 Λ Q, Λ Q ⟩/⟨χ_B_0/4Λ Q, Λ Q ⟩=1/2|log b_1| + O( 1/|log b_1|^2). For i≥ 1, let Θ_i be Θ_1 :=ΛT_1 - c̃_b_1χ_B_0/4T_1for i≥ 2, Θ_i :=ΛT_i - (i-1)T_i -(-H)^-i+2Σ_b_1where T_i is given by (<ref>). Then Θ_i is b_1-admissible of degree (i,i,i). As mentioned earlier, our radiation Σ_b_1 cancels the extra growth of Λ T_2 - T_2∼ y from the asymptoticsT_2 = ylog y + cy + O( |log y|^2/y),Λ T_2 = ylog y + (c+1)y + O( |log y|^2/y)by 4Γ in (<ref>). Since T_2 and Γ are elements of the generalized null space of H, the above cancellation holds for all Θ_i, i ≥ 2.Step 1: i=1. Note that Θ_1=(0,Θ_1)^t andΘ_1=Λ_0Λ Q - c̃_b_1Λ Q χ_B_0/4,Θ_1 is b_1-admissible of degree (1,1,1) from the explicit formulaeΛ Q(y) =2y/1+y^2,Λ_0 Λ Q (y) =4y/(1+y^2)^2 and the bounds for l≥ 1,|∂^l c_b_1/∂ b_1^l|+ |∂^l c̃_b_1/∂ b_1^l|≲1/b_1^l |log b_1|^2,|∂^l d_b_1/∂ b_1^l|≲1/b_1^l+2|log b_1| ,|∂^l χ_B_0/∂ b_1^l|≲1_y∼ B_0/b_1^l.Step 2: i=2. Now, we use induction on i≥ 2. For i=2, (<ref>) and the admissibility of T_2 imply that Θ_2 satisfies the desired condition near zero (<ref>) since Θ_2=[ Θ_2; 0 ]= [ Λ T_2 - T_2 -Σ_b_1;0 ]. To exhibit the behavior near infinity, we deal with the case 1≤ y ≤ 3B_0 and y≥ 3B_0 separately. 
The inverse formula (<ref>) yields for 1≤ y ≤ 3B_0, Σ_b_1(y) =Γ∫_0^y c_b_1χ_B_0/4 (Λ Q)^2 xdx - Λ Q∫_0^y c_b_1χ_B_0/4Λ Q Γ xdx + d_b_1 (1-χ_B_0) Λ Q=y∫_0^yχ_B_0/4(Λ Q)^2 x/∫χ_B_0/4(Λ Q)^2x+O(1+y/|log b_1|), Θ_2(y)= y+ O( |log y|^2/y)- y∫_0^yχ_B_0/4(Λ Q)^2 x/∫χ_B_0/4(Λ Q)^2+O(1+y/|log b_1|)=y∫_y^B_0χ_B_0/4(Λ Q)^2 x/∫χ_B_0/4(Λ Q)^2+O(1+y/|log b_1|)+O(|log y|^2/y) = O ( 1+y/|log b_1|(1+|log (b_1 y) |) ).For y≥ 3B_0, (<ref>) impliesΣ_b_1(y) = Γ∫_0^y c_b_1χ_B_0/4 (Λ Q)^2 xdx = y+ O( log y/y).Hence, for y≥ 1, Θ_2 satisfies (<ref>) for the case k=0 as|Θ_2(y)| ≲ y^2-0-1-0g_2(b_1,y) + y^2-0-3-0(log y)^2.The higher derivatives, namely f_k and ∂^l f_k/∂ b_1^l can be also estimated by using (<ref>), the bounds of the coefficients (<ref>), (<ref>), (<ref>) and the commutator relationA(Λ f) = Af + Λ Af -Λ Z/y f, H(Λ f)=2Hf + Λ Hf - Λ V/y^2fwhere Z and V are given by (<ref>) and (<ref>). Here, we can easily check that Λ Z /y is an odd function and Λ V /y^2 is an even function. Furthermore for y≥ 1,|∂^k/∂ y^k( Λ Z/y) |≲1/1+y^k+3,|∂^k/∂ y^k( Λ V/y) |≲1/1+y^k+4.Therefore, Θ_2 is b_1-admissible of degree (2,2,0).Step 3: Induction on i. Suppose that Θ_i is b_1-admissible of degree (i,i,i). For even i, Θ_i+1 is b_1-admissible of degree (i+1,i+1,i+1) sinceΘ_i+1 = [ 0; Λ_0 T_i+1 - iT_i+1 - (-H)^-i/2+1Σ_b_1 ] = [ 0; Λ T_i - (i-1)T_i - (-H)^-i/2+1Σ_b_1 ] = [ 0; Θ_i ].For odd i, we haveH Θ_i+1 =[ 0 1; H 0 ][ Θ_i+1; 0 ] =[ 0; H Λ T_i+1 - i HT_i+1 -H(-H)^-(i+1)/2+1Σ_b_1 ]= [ 0; Λ HT_i+1 - (i-2) HT_i+1 -y^-2Λ V T_i+1 + (-H)^-(i-1)/2+1Σ_b_1 ]=-[ 0; Λ T_i - (i-2) T_i- (-H)^-(i-1)/2+1Σ_b_1+y^-2Λ V T_i+1 ]= -[ 0; Λ_0 T_i - (i-1) T_i- (-H)^-(i-1)/2+1Σ_b_1 ] + [ 0; y^-2Λ V T_i+1 ]= -Θ_i +[ 0; y^-2Λ V T_i+1 ],the desired result comes from Lemma <ref> with (<ref>) as𝒜^k ( Λ V/y^2 T_i+1) ≲∑_j=0^k 1/y^j+4 y^i-(k-j)|log y|^c_i≲ y^i-3-k-1|log y|^c_i.§.§ Adapted norms of b_1 admissible functionsThe next lemma yields some suitable norms corresponding to the adapted derivatives of b_1-admissible functions.For i≥ 1, a b_1-admissible function f of degree (i,i,i) has the following bounds: (i) Global bounds:‖f_k-i‖_L^2(|y|≤ 2B_1)≲b_1^k-i|log b_1|^γ(i-k-2)-1 ifk≤ i-3b_1^k-i|log b_1| ifk=i-2,i-11 ifk≥ i (ii) Logarithmic weighted bounds:∑_k=0^m ‖1+log y/1+y^m-kf_k-i‖_L^2(|y|≤ 2B_1)≲b_1^m-i |log b_1|^Cform≤ i-1|log b_1|^Cform≥ i (iii) Improved global bounds:∑_j=0^k-i‖ y^-(k-i-j) f_j‖_L^2(y ∼ B_1)≲ b_1^k-i |log b_1|^γ(i-k-2)-1. Here, B_1 is given by (<ref>). Due to the growth in (<ref>), it is indispensable to restrict the integration domain taking L^2 norm. Later, we will attach a cutoff function χ_B_1 to the profile modifications. Considering Leibniz's rule, the adapted derivative 𝒜^k can be taken on such modifications or the cutoff function. Then the global bounds (<ref>) yield some estimates for the former case and (<ref>) give those for the latter case. The choice of cutoff region B_1 will be determined by the localization of our blow-up profile, which can be seen in more detail in Proposition <ref>. (i) From (<ref>), f_k- i satisfies the following estimate for y≥ 2: |f_k-i|≲y^i-k-1(g_i-k(b_1,y)+|log y|^c_p_2/y^2 + 1_{i ≥ k+3, y≥ 3B_0}y^2 b_1^2|log b_1|). 
Therefore, we obtain (<ref>) for i≥ k+1, ‖f_k-i‖_L^2(|y|≤ 2B_1) ≲‖1_|y|≤ 2‖_L^2 +‖y^i-k-11+|log (b_1y)|/|log b_1|‖_L^2 (2≤ |y|≤ 3B_0) + ‖y^i-k-3|log y|^c_i‖_L^2 (2≤ |y|≤ 2B_1)+‖y^i-k-31_{i≥ k+3}/b_1^2|log b_1|‖_L^2 (3B_0≤ |y|≤ 2B_1)≲ 1 + b_1^k-i/|log b_1|+b_1^(k-i+2)1_{i≥ k+2}|log b_1|^C + B_1^i-k-2/b_1^2 |log b_1|1_{i≥ k+3}≲b_1^k-i|log b_1||log b_1|^γ(i-k-2)1_{i≥ k+3}, and the case i≤ k also holds similarly.(ii) The logarithmic weighted bounds (<ref>) are nothing but (<ref>) multiplied by the logarithmic loss |log b_1|^C with the fact |log y|/|log b_1| ≲ 1 on 2≤ |y| ≤ 3B_0. (iii) We can prove (<ref>) from pointwise estimate in the region y∼ B_1: |y^-(k-i -j) f_j | ≲ y^i-k-3( |log y|^C + 1_{i≥i + j +3}/b_1^2 |log b_1|) ≲y^i-k-1/|log b_1|^2γ + 1.§.§ Approximate blow-up profilesNow, we construct the blow-up profiles based on the generalized kernels T_i. To be more specific, our blow-up scenario is done by bubbling off of Q via scaling and adding b_i T_i, the evolution of λ is determined by the dynamical laws of the system b=(b_1,…,b_L). Here, we are faced with unnecessary growth made by linear and nonlinear terms. To minimize this growth, we define the homogeneous functions, which do not affect the evolution of b (i.e. b_i T_i). We note that this kind of construction was introduced in <cit.>.Denote J=(J_1,…,J_L) and |J|_2=∑_k=1^L k J_k. We say that a smooth vector-valued function S(b,y)=S(b_1,…,b_L,y) is homogeneous of degree (p_1,p_2,ι,p_3) ∈ℕ×ℤ×{0,1}×ℕ if it can be expressed as a finite sum of the smooth functions of the form ∏_i=1^L b_i^J_iS_J(y), where S_J(y) is a b_1-admissible functions of degree (p_1,p_2,ι) with |J|_2=p_3.Given a large constant M>0, there exists a small constant 0<b^*(M) ≪ 1 such that a C^1 map b : s ↦ (b_1(s),…,b_L(s))∈ℝ^*_+ ×ℝ^L-1verifies the existence of a slowly modulated profile Q_b given byQ_b:=Q+α_b,α_b:=∑_i=1^L b_iT_i+∑_i=2^L+2S_i, which drives the following equation∂_s Q_b-F(Q_b) + b_1 Λ Q_b = 𝐌𝐨𝐝(t) +ψ_b.where 𝐌𝐨𝐝(t) establishes the dynamical law of b:𝐌𝐨𝐝(t) = ∑_i=1^L ((b_i)_s +(i-1 + c_b_1,i)b_1b_i - b_i+1) (T_i + ∑_j=i+1^L+2∂S_j/∂ b_i),withb_L+1=0, c_b_1,i = c̃_b_1 fori=1 c_b_1 fori≠ 1Here, T_i is given by (<ref>) and S_i is a homogeneous function of degree (i,i,i,i) satisfiesS_1=0,∂S_i/∂ b_j=0 for2≤ i≤ j ≤ L+2.Moreover, the restriction |b_k|≲ b_1^k and 0<b_1 < b^*(M) yield the estimates below for ψ_b=(ψ_b,ψ̇_b)^t,(i) Global bound: for 2≤ k ≤ L-1,‖𝒜^kψ_b ‖_L^2(|y|≤ 2B_1) + ‖𝒜^k-1ψ̇_b ‖_L^2(|y|≤ 2B_1) ≲ b_1^k+1|log b_1|^C , ‖𝒜^Lψ_b ‖_L^2(|y|≤ 2B_1) + ‖𝒜^L-1ψ̇_b ‖_L^2(|y|≤ 2B_1) ≲b_1^L+1/|log b_1|^1/2 ‖𝒜^L+1ψ_b ‖_L^2(|y|≤ 2B_1) + ‖𝒜^Lψ̇_b ‖_L^2(|y|≤ 2B_1) ≲b_1^L+2/|log b_1|. (ii) Logarithmic weighted bound: for m≥ 1 and 0≤ k ≤ m,‖1+log y /1+y^m-k𝒜^k ψ_b ‖_L^2(|y|≤ 2B_1) ≲ b_1^m+1|log b_1|^C , ‖1+log y /1+y^m-k𝒜^k ψ̇_b ‖_L^2(|y|≤ 2B_1) ≲ b_1^m+2|log b_1|^C. (iii) Improved local bound: ^∀ 2≤ k ≤ L+1,‖𝒜^kψ_b ‖_L^2(|y|≤ 2M) + ‖𝒜^k-1ψ̇_b ‖_L^2(|y|≤ 2M)≲C(M) b_1^L+3. As can be seen in the following proof, the homogeneous profile S_i is eventually derived from the b_1-admissible function Θ_i-1 with some nonlinear effects. Step 1: Linearization. We pull out the modulation law of b from linearizing the renormalized equation. 
Since F(Q)=0, we have∂_s Q_b+b_1ΛQ_b -F(Q_b)= ∂_s α_b + b_1 Λ (Q + α_b ) - (F(Q+ α_b) -F(Q)) =: b_1 ΛQ + (∂_s + b_1 Λ)α_b + Hα_b + N(α_b)where N denotes the higher-order terms:N(α_b) := 1/y^2[0; f(Q+α_b)-f(Q)-f'(Q)α_b ],α_b = [α_b; α̇_b ].Note that ∂_s α_b =∑_i=1^L[ (b_i)_s T_i + ∑_j=i+1^L+2(b_i)_s ∂S_j/∂ b_i]=∑_i=1^L[ (b_i)_s T_i + ∑_j=1^i-1(b_j)_s ∂S_i/∂ b_j] + ∑_i=1^L (b_i)_s ∂S_L+1/∂ b_i + ∑_i=1^L (b_i)_s ∂S_L+2/∂ b_i.Rearranging the linear terms to the degree with respect to b_1 using the fact HT_i+1=-T_i for 1≤ i ≤ L-1,b_1 ΛQ + (∂_s + b_1 Λ)α_b + Hα_b=∑_i=1^L[(b_i)_s T_i + b_1b_iΛT_i-b_i+1T_i] + ∑_i=1^L [ HS_i+1+b_1ΛS_i+∑_j=1^i-1(b_j)_s ∂S_i/∂ b_j]+b_1 ΛS_L+1 +HS_L+2 + ∑_i=1^L (b_i)_s ∂S_L+1/∂ b_i +b_1ΛS_L+2 + ∑_i=1^L (b_i)_s ∂S_L+2/∂ b_i.From Lemma <ref>, (b_1)_s T_1 + b_1^2ΛT_1 - b_2 T_1 = ((b_1)_s + b_1^2c̃_b_1-b_2)T_1 - b_1^2c̃_b_1(1-χ_B_0/4) T_1 + b_1^2Θ_1and for 2≤ i≤ L,(b_i)_s T_i + b_1b_iΛT_i - b_i+1T_i= ((b_i)_s + (i-1 + c_b_1)b_1b_i -b_i+1)T_i+ b_1b_i(-H)^i+2 (Σ_b_1-c_b_1T_2) + b_1b_iΘ_i.Hence, we can separate 𝐌𝐨𝐝(t) from the RHS of (<ref>):𝐌𝐨𝐝(t)- b_1^2c̃_b_1(1-χ_B_0/4) T_1 + ∑_i=2^L b_1b_i(-H)^i+2 (Σ_b_1-c_b_1T_2)+ ∑_i=1^L [ HS_i+1 +b_1b_i Θ_i+b_1ΛS_i-∑_j=1^i-1((j-1+c_b_1,j)b_1b_j-b_j+1)∂S_i/∂ b_j]+ HS_L+2+b_1 ΛS_L+1-∑_i=1^L ((i-1+c_b_1,i)b_1b_i-b_i+1)∂S_L+1/∂ b_i + b_1 ΛS_L+2-∑_i=1^L ((i-1+c_b_1,i)b_1b_i-b_i+1)∂S_L+2/∂ b_i.Step 2: Construction of S_i. One can observe that the second and third lines of (<ref>) provide the definition of homogeneous profiles S_i inductively. Previously, we need to pull out the additional homogeneous functions from N(α_b)=(0,N(α_b))^t via Taylor theorem: N(α_b) = ∑_i=1^L+1/2P_2i/y^2 + R/y^2,P_i := ∑_j=2^L+1/2∑_|J|_1=j^|J|_2=i c_j,J∏_k=1^L-1/2 (b_2kT_2k)^J_2k∏_k=1^L+1/2 S_2k^J̃_2k,R:= ∑_j=2^L+1/2∑_|J|_1=j ^|J|_2 ≥ L+3 c_j,J∏_k=1^L-1/2 (b_2kT_2k)^J_2k∏_k=1^L+1/2 S_2k^J̃_2k + N_0(α_b)α_b^L+3/2 where J:=(J_2,J_4,…,J_L-1,J̃_2,J̃_4,…,J̃_L+1), |J|_1 := ∑_k=1^L-1/2 J_2k + ∑_k=1^L+1/2J̃_2k , |J|_2 := ∑_k=1^L-1/2 2kJ_2k + ∑_k=1^L+1/2 2kJ̃_2k, c_j,J= f^(j)(Q)/∏_k=1^L-1/2 J_2k! ∏_k=1^L+1/2J̃_2k!, N_0(α_b) =∫_0^1 (1-τ)^L+1/2f^(L+3/2)(Q+τα_b) dτ/((L+1)/2)!.We claim that P_2i/y^2 = (0,P_2i/y^2) is homogeneous of degree (2i-1,2i-1,1,2i) for 1≤ i ≤L+1/2. The case i=1 is trivial since P_2=0. For 2≤ i ≤L+1/2, we recall that P_2i/y^2 is a linear combination of the following monomials: for |J|_1=j, |J|_2=2i and 2≤ j ≤ i,f^(j)(Q)/y^2∏_k=1^i (b_2kT_2k)^J_2k∏_k=1^i S_2k^J̃_2k.Near the origin, we observe that T_2k, S_2k are odd functions and the parity of a function f^(j)(Q) is determined by the parity of j, each monomial is either an odd or even function. Hence, it suffices to calculate the leading power of the Taylor expansion of each function constituting the monomial: T_2k∼ y^2k+1, S_2k∼ O(b_1^2k) y^2k+1 and f^(j)(Q) ∼ y^j+1, the leading power of each monomial is given by b_1^∑_k=1^i 2k J_2k· b_1^∑_k=1^i 2k J̃_2k = b_1^2i, y^-2 y^j+1y^∑_k=1^i (2k+1)J_2ky^∑_k=1^i (2k+1)J̃_2k=y^2i+j-1-j.Therefore, the Taylor expansion condition (<ref>) comes from j-1-j≥ 1 is an odd number since j≥ 2. Similarly for y≥ 1, |T_2k| ≲ y^2k-1log y, |S_2k|≲ b_1^2k y^2k-1 and |f^(j)(Q)|≲ y^-1 + j imply |f^(j)(Q)/y^2∏_k=1^i b_2k^J_2kT_2k^J_2k∏_k=1^i S_2k^J̃_2k|≲ b_1^2i |y^-3 + j| ∏_k=1^i |y^2k-1log y|^J_2k∏_k=1^i |y^2k-1|^J̃_2k≲ b_1^2i y^2i-j-3 + j |log y|^C ≲ b_1^2i y^2i-5 |log y|^Cwith the fact j-j≥ 2. 
We can easily estimate the higher derivatives of each monomial.Under the setting P_2k+1 := (0,0)^t for k ∈ℕ, we obtain the final definition of S_i: S_1:=0 and for i=1,…,L+1,S_i+1:= (-H)^-1(b_1b_i Θ_i+b_1ΛS_i + P_i+1/y^2-∑_j=1^i-1((j-1+c_b_1,j)b_1b_j-b_j+1)∂S_i/∂ b_j).From the homogeneity of P_i/y^2 established above and Lemma <ref>, Lemma <ref>, we can prove S_i is homogeneous of degree (i,i,i, i) for 1≤ i ≤ L+2 with (<ref>) via induction. To sum up, we get (<ref>) by collecting remaining errors into ψ_b:ψ_b := -b_1^2c̃_b_1(1-χ_B_0/4)T_1 +∑_i=2^Lb_1b_i (-H)^-i+2Σ_b_1 + b_1 ΛS_L+2-∑_i=1^L ((i-1+c_b_1,i)b_1b_i-b_i+1)∂S_L+2/∂ b_i +R/y^2where Σ_b_1:=Σ_b_1-c_b_1T_2 and R=(0,R)^t from (<ref>). Step 3: Error bounds. Now, it remains to prove the sobolev bounds: (<ref>) to (<ref>). We can treat the errors involving S_L+2 in (<ref>) easily. Since S_L+2 is homogeneous of degree (L+2,L+2,1,L+2), Lemma <ref> ensures that the functions containing S_L+2 are homogeneous of degree (L+2,L+2,1,L+3) and thus the desired bounds come from Lemma <ref>. The other errors require separate integration to conclude. We first visit the RHS of (<ref>). Note that T_1=(0,T_1)^t and Λ Q ∼ 1/y on y≥ 1, we have for k≥ 0,|𝒜^k(1-χ_B_0/4) T_1|≲ y^-(k+1)1_y ≥ B_0/4,which imply (<ref>), (<ref>) and (<ref>): for 2≤ k ≤ L+1,‖ b_1^2 c̃_b_1𝒜^k-1 (1-χ_B_0/4) T_1 ‖_L^2(|y|≤ 2B_1) ≲b_1^2/|log b_1|‖y^-k‖_L^2(B_0/4 ≤ |y|≤ 2B_1)≲b_1^k+1/|log b_1|.For 2≤ i ≤ L, we rewrite (-H)^i+2Σ_b_1 =((-H)^-i/2 + 1Σ_b_1,0 )^t for eveni (0,-(-H)^-i-1/2 + 1Σ_b_1 )^t for oddifrom the fact H^-2=-H^-1. Moreover,supp(Σ_b_1) ⊂{|y| ≥ B_0/4} and for k≥ 0, we have the crude bound: for B_0/4 ≤ y≤ 2B_1,|𝒜^k-i H^-i - i/2+1Σ_b_1| ≲y^i-k-1|log y|/|log b_1|≲ y^i-k-1.Hence for 1≤ k < i ≤ L, we obtain (<ref>) from the following estimation‖ b_1 b_i 𝒜^k-i H^-i - i/2+1Σ_b_1‖_L^2(|y|≤ 2B_1) ≲b_1^i+1‖y^i-k-1‖_L^2(B_0/4 ≤ |y|≤ 2B_1)≲ b_1^k+1 |log b_1|^γ(i-k).We also observe for k ≥ i,𝒜^k-i H^-i - i/2+1Σ_b_1= 𝒜^k-i H Σ_b_1,the sharp bounds|H Σ_b_1| ≲1_y≥ B_0/4/|log b_1|1/y ,|𝒜^j H Σ_b_1| ≲1_y∼ B_0/B_0^j+1 |log b_1| , j≥ 1imply (<ref>), (<ref>) and (<ref>):‖ b_1 b_i 𝒜^k-i H Σ_b_1‖_L^2(|y|≤ 2B_1)≲b_1^i+1/|log b_1|‖y^i-k-1‖_L^2(B_0/4 ≤ |y|≤ 2B_1)≲b_1^k+1/|log b_1|^1/2, ‖ b_1 b_i 𝒜^L+1-i H Σ_b_1‖_L^2(|y|≤ 2B_1)≲b_1^i+1/B_0^L+1-i |log b_1|≲b_1^L+2/|log b_1|.The logarithmic weighted bounds (<ref>), (<ref>) come from the above estimation with the trivial bound |log y /log b_1| ≲ 1 on B_0/4≤ y≤ 2B_1 and the fact that the errors in the RHS of (<ref>) are supported in y≥ B_0/4. This support property also yields the improved local bound (<ref>) by choosing b^*(M) small enough.Now, we move to the last error: R/y^2. Recall (<ref>), we observe that R/y^2=(0,R/y^2) has two parts: sum of monomials like P_2i/y^2 and nonlinear terms 1/y^2N_0(α_b) α_b^L+3/2. For the monomial part, we can borrow the calculation of P_2i/y^2: (<ref>) and (<ref>). Under the range |J|_1=j, |J|_2 ≥ L+3, 2≤ j ≤L+1/2, those k-th suitable derivatives (i.e. 𝒜^k) have the pointwise boundsb_1^L+3 for y≤ 1, b_1^|J|_2 y^|J|_2 -k-5|log y|^Cfor 1≤ y≤ 2B_1,we simply obtain from (<ref>) to (<ref>) via integrating the above bound. It remains to estimate the nonlinear term. For y≤ 1, we utilize the parity of f^(L+3/2)(Q) and α_b. We already know that α_b is an odd function with the leading term O(b_1^2) y^3 and the parity of f^(L+3/2)(Q) is opposite of that of L+3/2, N_0(α_b)α_b^L+3/2/y^2 is an odd function with the leading term O(b_1^L+3) y^3L+3/2-1 -L+3/2. 
Hence for 1≤ k ≤ L,‖𝒜^k(N_0(α_b) /y^2α_b^L+3/2) ‖_L^∞(y≤ 1)≲ b_1^L+3.For 1≤ y≤ 2B_1, the simple bound|∂_y^k (Q+ τα_b)| ≲|log b_1|^C/y^k+1,k≥ 1 implies |N_0(α_b)|≲ 1,|∂_y^k N_0(α_b)| ≲|log b_1|^C/y^k+1fork≥ 1. From the Leibniz rule and the crude bound |∂_y^k α_b| ≲ b_1^2 |log b_1| y^1-k, we have |𝒜^k(N_0(α_b) /y^2α_b^L+3/2) |≲∑_j=0^k |∂_y^j (N_0(α_b) α_b^L+3/2) |/y^2+k-j≲ b_1^L+3 |log b_1|^C y^L+3/2-2-kfor 0≤ k ≤ L, the above pointwise bound yields from (<ref>) to (<ref>) via integration. §.§ Localization of the approximate profileIn the previous construction, we observe that the blow-up profile does not approximate the solution of (<ref>) on the region y≥ 2B_1. Hence, it is necessary to cut off the overgrowth of each tails. Consider the assumptions of Proposition <ref> and assume moreover the a priori bounds|(b_1)_s| ≲ b_1^2, |b_L| ≲b_1^L/|log b_1| when ℓ=L-1.Then the localized profile Q̃_b given byQ̃_b = Q + χ_B_1α_bdrives the following equation:∂_s Q̃_b-F(Q̃_b) + b_1 ΛQ̃_b = χ_B_1𝐌𝐨𝐝(t) +ψ̃_bwhere 𝐌𝐨𝐝(t) was defined in (<ref>) and ψ̃_b=(ψ̃_b,ψ̇̃̇_b)^t satisfies the bounds:(i) Global bound:^∀ 2≤ k ≤ L-1,‖𝒜^kψ̃_b ‖_L^2 + ‖𝒜^k-1ψ̇̃̇_b ‖_L^2 ≲ b_1^k+1|log b_1|^C,‖𝒜^Lψ̃_b ‖_L^2 + ‖𝒜^L-1ψ̇̃̇_b ‖_L^2 ≲ b_1^L+1|log b_1|,‖𝒜^L+1ψ̃_b ‖_L^2 + ‖𝒜^Lψ̇̃̇_b ‖_L^2 ≲b_1^L+2/|log b_1|. (ii) Logarithmic weighted bound: for m≥ 1 and 0≤ k ≤ m,‖1+log y /1+y^m-k𝒜^k ψ̃_b ‖_L^2 ≲ b_1^m+1|log b_1|^C , ‖1+log y /1+y^m-k𝒜^k ψ̇̃̇_b ‖_L^2 ≲ b_1^m+2|log b_1|^C . (iii) Improved local bound:^∀ 2≤ k ≤ L+1,‖𝒜^kψ̃_b ‖_L^2(|y|≤ 2M) + ‖𝒜^k-1ψ̇̃̇_b ‖_L^2(|y|≤ 2M)≲ C(M) b_1^L+3.This proposition says that our cutoff function χ_B_1 does not affect the estimates from (<ref>) to (<ref>) in Proposition <ref>. Although such bounds came from integrating over the region |y|≤ 2B_1, there are two main reasons why this is possible. First, we do not need to keep track of logarithmic weight |log b_1| except for (<ref>) corresponding to the highest order derivative. Second, (<ref>) was derived from the sharp pointwise bound (<ref>), which only depends on B_0. Thus, B_1=|log b_1|^γ/b_1 just needs to be large enough to obtain (<ref>) by raising γ.Note that ψ̃_b=ψ_b on |y|≤ B_1, (<ref>) directly implies the local bound (<ref>). For the other estimates, we will prove the global bounds (<ref>), (<ref>) first, and the less demanding logarithmic weighted bounds (<ref>), (<ref>) later. By a straightforward calculation, ψ̃_b is given byψ̃_b = χ_B_1ψ_b + (∂_s(χ_B_1)+ b_1(yχ')_B_1)α_b +b_1(1-χ_B_1)ΛQ-[0; Δ (χ_B_1α_b)-χ_B_1Δ(α_b) ] -1/y^2[ 0; f(Q̃_b)-f(Q)-χ_B_1(f(Q_b)-f(Q)) ].Before we estimate χ_B_1ψ_b in the RHS of (<ref>), we introduce a useful asymptotics of cutoff:𝒜^k(χ_B_1 f) = χ_B_1𝒜^k f + 1_y∼ B_1∑_j=0^k-1 O(y^-(k-j)) 𝒜^j f. Applying the above asymptotics to χ_B_1ψ_b, we get from Proposition <ref> that we only need to estimate the errors localized in y∼ B_1. From (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>), we obtain the following pointwise bounds: for y∼ B_1 and 0≤ j ≤ k,|y^-(k-j)𝒜^jψ_b_1 | ≲∑_i=1^L-1/2 b_1^2i+1 y^2i-k-1≲ b_1^k+1|log b_1|^γ(L-1-k) B_1^-1and|y^-(k-1-j)𝒜^jψ̇_b_1 |≲∑_i=1^L+1/2 b_1^2i y^2i-k-2 + b_1^L+3y^L+1-k/|log b_1|^2γ + 1+ (b_1^k+4 + b_1^L+3/2 + k+1 ) |log b_1|^C ≲ b_1^k+1|log b_1|^γ(L-k) B_1^-1.These pointwise bounds directly imply the global bounds (<ref>), (<ref>) and (<ref>) if we choose γ≥ 1. 
For the second term in the RHS of (<ref>), we recallα_b = [α_b; α̇_b ] = [ ∑_i=1,even^L b_iT_i+ ∑_i=2,even^L+2S_i;∑_i=1,odd^L b_iT_i+ ∑_i=2,odd^L+2 S_i ].From the a priori bound |b_1,s|≲ b_1^2, |∂_s(χ_B_1)+ b_1(yχ')_B_1 |≲(|b_1,s|/b_1 +b_1 ) |(y χ')_B_1|≲ b_1 1_y ∼ B_1.One can easily check that (<ref>) still holds even if we replace the cutoff function χ_B_1 to other cutoff functions supported in y∼ B_1. Hence, the cutoff asymptotics (<ref>) and the admissibility of T_i imply for 1≤ i ≤ L,‖ b_i 𝒜^k-i (∂_s(χ_B_1) + b_1(yχ')_B_1 ) T_i‖_L^2 ≲∑_j=0^k-i b_1|b_i| ‖ y^-(k-j-i)𝒜^j T_i‖_L^2 (y∼ B_1)≲b_1|b_i| ‖ y^i-k-1|log y|‖_L^2 (y∼ B_1)≲b_1^k+1-i|b_i| |log b_1|^γ(i-k)+1,and for 2≤ i ≤ L+2, Lemma <ref> implies‖𝒜^k-i (∂_s(χ_B_1) + b_1(yχ')_B_1 )S_i‖_L^2 ≲b_1 ∑_j=0^k-i‖ y^-(k-j-i)𝒜^j S_i‖_L^2 (y∼ B_1)≲b_1^k+1 |log b_1|^γ(i-k-2)-1,we obtain the global bounds (<ref>) and (<ref>). Here, we cannot cancel log y from T_i, the additional |log b_1| appears in (<ref>). Thus, we need to choose γ = 1+ℓ for the case (k,i)=(L+1,L), which corresponds to (<ref>). The third term in (<ref>) can be estimated‖ b_1 𝒜^k (1-χ_B_1) Λ Q ‖_L^2≲ b_1‖ y^-k-1‖_L^2(y≥ B_1)≲b_1^k+1/|log b_1|^γ k. Finally, we compute (<ref>)Δ (χ_B_1α_b)-χ_B_1Δ(α_b) = (Δχ_B_1)α_b + 2 ∂_y(χ_B_1)∂_y(α_b), f(Q̃_b)-f(Q)-χ_B_1(f(Q_b)-f(Q)) = χ_B_1α_b ∫_0^1 [f'(Q+τχ_B_1α_b) -f'(Q+τα_b) ] dτ,each term is localized in y∼ B_1. In this region, the rough bounds |f^(k)| ≲ 1 and |∂_y^k Q| + |∂_y^kχ_B_1|≲ y^-k yield |∂^k/∂ y^k( Δ (χ_B_1α_b)-χ_B_1Δ(α_b) + f(Q̃_b)-f(Q)-χ_B_1(f(Q_b)-f(Q)) /y^2) |≲|α_b|/y^k+2,we can borrow the estimation of ∂_s(χ_B_1)α_b, namely (<ref>) and (<ref>). The logarithmic weighted bounds (<ref>), (<ref>) basically come from the fact |log y| ∼ |log b_1| on y∼ B_1, we further use the decay property |log y|^C/y → 0 as y →∞ for the third term in the RHS of (<ref>). We also introduce another localization that depends on ℓ to verify the further regularity in Remark <ref>.Consider the assumptions of Proposition <ref>. Then the localized profile Q̂_b given byQ̂_b =Q̃_b + ζ_b :=Q̃_b + (χ_B_0 - χ_B_1 ) b_L T_Ldrives the following equation:∂_s Q̂_b-F(Q̂_b) + b_1 ΛQ̂_b = 𝐌𝐨𝐝(t) +ψ̂_bwhere 𝐌̂𝐨̂𝐝̂(t) is given by𝐌𝐨𝐝(t) = χ_B_1𝐌𝐨𝐝(t) + (χ_B_0-χ_B_1) ( (b_L)_s + (L-1 + c_b,L)b_1b_L)T_L and ψ̂_b=(ψ̂_b,ψ̇̂̇_b)^t satisfies the bounds:‖𝒜^L(ψ̂_b -(χ_B_1-χ_B_0) b_L T_L-1 )‖_L^2 ≲ b_1^L+1 ‖𝒜^L-1(ψ̇̂̇_b -(∂_s χ_B_0 + b_1(yχ')_B_0)b_L T_L)‖_L^2 ≲ b_1^L+1Note that F(Q̃_b+ζ_b )-F(Q̃_b )=(χ_B_0-χ_B_1)b_L T_L-1. From (<ref>) and (<ref>), we have∂_s Q̂_b-F(Q̂_b) + b_1 ΛQ̂_b= χ_B_1𝐌𝐨𝐝(t) + ψ̃_b + ∂_s ζ_b - (F(Q̃_b+ζ_b )-F(Q̃_b ))+ b_1 Λζ_b = 𝐌𝐨𝐝(t)+ b_1b_L(χ_B_0-χ_B_1){(-H)^L+2Σ_b_1 + θ_L}+ ψ̃_b -(∂_s(χ_B_1)+ b_1(yχ')_B_1)b_L T_L+ (∂_s(χ_B_0) + b_1(yχ')_B_0)b_L T_L + (χ_B_1-χ_B_0)b_L T_L-1.From the above identity, we can see that (<ref>) is exactly subtracted from ψ̂_b in (<ref>) and (<ref>). Hence,we need to estimate the second term of (<ref>) and (<ref>). We point out that the logarithm weight |log b_1| in (<ref>) comes from the estimate (<ref>) when i=L, which is eliminated in (<ref>). For the second term of (<ref>), we can borrow the bound (<ref>) and Lemma <ref>. Consider the assumptions of Proposition <ref>. 
Then the localized profile Q̂_b given byQ̂_b =Q̃_b + ζ_b :=Q̃_b + (χ_B_0 - χ_B_1 )(b_L-1T_L-1 +b_L T_L )drives the following equation:∂_s Q̂_b-F(Q̂_b) + b_1 ΛQ̂_b = 𝐌𝐨𝐝(t) +ψ̂_bwhere 𝐌̂𝐨̂𝐝̂(t) is given by𝐌𝐨𝐝(t) = χ_B_1𝐌𝐨𝐝(t)+ (χ_B_0-χ_B_1) ( (b_L-1)_s + (L-2 + c_b,L-1)b_1b_L-1)T_L-1+ (χ_B_0-χ_B_1) ( (b_L)_s + (L-1 + c_b,L)b_1b_L)T_L and ψ̂_b=(ψ̂_b,ψ̇̂̇_b)^t satisfies the bounds:‖𝒜^L-1(ψ̂_b -(∂_s χ_B_0 + b_1(yχ')_B_0)b_L-1 T_L-1 -(χ_B_1-χ_B_0) b_L T_L-1 )‖_L^2 ≲ b_1^L ‖𝒜^L-2(ψ̇̂̇_b -(∂_s χ_B_0 + b_1(yχ')_B_0)b_L T_L + b_L-1H(χ_B_1-χ_B_0)T_L)‖_L^2 ≲ b_1^LNote that F(Q̃_b+ζ_b )-F(Q̃_b )=-Hζ_b - NL(ζ_b) - L(ζ_b) whereNL(ζ_b)= [ 0; NL(ζ_b) ] := 1/y^2[0; f(Q̃_b+ζ_b) -f(Q̃_b) - f'(Q̃_b)ζ_b ],L(ζ_b)=[0; L(ζ_b) ] := 1/y^2[0; (f'(Q̃_b) -f'(Q))ζ_b ].From (<ref>) and (<ref>), we have∂_s Q̂_b-F(Q̂_b) + b_1 ΛQ̂_b= χ_B_1𝐌𝐨𝐝(t) + ψ̃_b + ∂_s ζ_b - (F(Q̃_b+ζ_b )-F(Q̃_b ))+ b_1 Λζ_b = 𝐌𝐨𝐝(t) + b_1b_L-1(χ_B_0-χ_B_1){(-H)^L+1Σ_b_1 + θ_L-1}+ b_1b_L(χ_B_0-χ_B_1){(-H)^L+2Σ_b_1 + θ_L} + NL(ζ_b) + L(ζ_b) + ψ̃_b -(∂_s(χ_B_1)+ b_1(yχ')_B_1)(b_L-1T_L-1+b_L T_L) + (∂_s(χ_B_0) + b_1(yχ')_B_0)b_L T_L + (χ_B_1-χ_B_0)b_L T_L-1+ Hζ_b.Based on the proof of the previous proposition, it suffices to show that‖𝒜^L-2NL(ζ_b)‖_L^2+‖𝒜^L-2L(ζ_b)‖_L^2≲ b_1^L,which come from the following crude pointwise bounds in B_0 ≤ y ≤ 2 B_1: for k ≥ 0,|𝒜^k NL(ζ_b)|≲ b_1^2L-2 y^2L-6-k|log b_1|^C,|𝒜^k L(ζ_b)|≲ b_1^L y^L-4-k|log b_1|^C.§.§ Dynamical laws of b=(b_1,…,b_L)As previously mentioned, the blow-up rate is determined by the evolution of the system b, we figure out its dynamical laws from (<ref>): for 1≤ k ≤ L,(b_k)_s=b_k+1-(k-1+1/(1+δ_1k)log s)b_1b_k, b_L+1=0.One can check that the above system has L linearly independent solutions characterized by the number of nonzero coordinate: for 1≤ k ≤ L, b=(b_1,…,b_k,0,…,0). Here, we adopt two special solutions among them.For ℓ=L,L-1, a system of functionsb_k^e(s)=c_k/s^k+d_k/s^klog s for1≤ k≤ℓ,b_k^e≡ 0fork>ℓsolves (<ref>) approximately: for 1≤ k ≤ L,(b_k^e)_s+(k-1+1/(1+δ_1k)log s)b^e_1b^e_k-b^e_k+1=O(1/s^k+1(log s)^2)where the sequence (c_k,d_k)_k=1,…,ℓ is given byc_1= ℓ/ℓ-1,c_k+1=-ℓ-k/ℓ-1 c_k, 1≤ k ≤ℓand for 2≤ k ≤ℓ-1,d_1= -ℓ/(ℓ-1)^2, d_2=-d_1+1/2c_1^2, d_k+1=-ℓ-k/ℓ-1d_k+ℓ(ℓ-k)/(ℓ-1)^2 c_k.The recurrence relations (<ref>) and (<ref>) are obtained by substituting (<ref>) for (<ref>) and comparing the coefficients of s^-k and (s^klog s)^-1, which yields the proof.For our b system to drive like the special solution b^e, we should control the fluctuationU_k(s)/s^k(log s)^β:=b_k(s)-b_k^e(s) for1≤ k ≤ℓ, U_k(s) =0fork >ℓ.Here, (<ref>) and (<ref>) restricts the range of β to 1<β<2, we will choose β=5/4 later. The next lemma provides the evolution of U=(U_1,…,U_ℓ) from (<ref>).Let U=(U_1,…,U_ℓ) be given by (<ref>). Then (<ref>) is equivalent to s(U)_s= A_ℓU + O ( 1/(log s)^2-β + |U|+|U|^2/log s) ,where the ℓ×ℓ matrix A_ℓ has of the form:A_ℓ=[ 1 1;-c_2 ℓ-2/ℓ-1 1 (0); -2c_3 ℓ-3/ℓ-1 1; ⋮ ⋱ ⋱; -(ℓ-2)c_ℓ-1 (0) 1/ℓ-1 1; -(ℓ-1)c_ℓ 0 ].Moreover, there exists an invertible matrix P_ℓ such that A_ℓ=P_ℓ^-1D_ℓP_ℓ withD_ℓ=[-1; 2/ℓ-1 (0); 3/ℓ-1; ⋱; (0) 1; ℓ/ℓ-1 ]. Observing the relation(k-1)c_1-k=(k-1)ℓ/ℓ-1-k=-ℓ-k/ℓ-1,we obtain (<ref>) and (<ref>) since(b_k)_s+(k-1+1/(1+δ_1k)log s)b_1b_k-b_k+1=1/s^k+1(log s)^β[ s(U_k)_s - kU_k + O(|U|/log s) ] + O(1/s^k+1(log s)^2) +1/s^k+1(log s)^β[ (k-1)c_kU_1 + (k-1)c_1U_k - U_k+1 + O(|U|+|U|^2/log s) ] =1/s^k+1(log s)^β[ s(U_k)_s+(k-1)c_kU_1 - ℓ-k/ℓ-1U_k -U_k+1] + O( 1/s^k+1(log s)^2+|U|+|U^2|/s^k+1(log s)^1+β). 
(<ref>) is obtained by substituting α=1 in the result of Lemma 2.17 in <cit.>. Since the above work can be seen as linearizing (<ref>) around our special solution b^e, the appearance of a matrix A_ℓ is quite natural. We also note that the ℓ-1 unstable directions corresponding to the ℓ-1 positive eigenvalues lead to the restriction on our initial data.

§ THE TRAPPED SOLUTIONS

Our goal is to decompose the solution u as the blow-up profile plus the error, i.e. u=(Q̃_b+ε)_λ=Q̃_b,λ + w. For the term "error" to be meaningful, we need to control the "direction" and "size" of w=ε_λ. First, ε must be orthogonal to the directions that provoke blow-up from Q̃_b,λ. Such orthogonality conditions determine the system of modulation equations for the dynamical parameters b as designed in subsection <ref>. In this process, ε appears as an error measured in suitable norms. In order not to perturb the evolution of the modulation parameters, the smallness of ε is required. We describe the set of initial data and the trapped conditions, represented by bootstrap bounds for such suitable norms, i.e., the higher-order energies. We also establish a Lyapounov type monotonicity of the higher-order energies to close our bootstrap assumptions.

§.§ Decomposition of the flow

We recall the approximate direction Φ_M which was defined in <cit.>. For a large constant M>0, we define Φ_M=∑_p=0^L c_p,MH^*p(χ_M ΛQ) , H^*= [ 0 H; -1 0 ] where c_p,M is given by c_0,M=1, c_k,M=(-1)^k+1∑_p=0^k-1 c_p,M⟨H^*p(χ_M ΛQ),T_k ⟩/⟨χ_M ΛQ, ΛQ⟩ , 1≤ k ≤ L. One can easily verify (see section 3.1.1 in <cit.>) that H^* is an adjoint operator of H in the sense ⟨Hu,v⟩=⟨u,H^*v⟩, and that Φ_M=(Φ_M,0) satisfies ⟨Φ_M , Λ Q⟩ =⟨χ_M Λ Q , Λ Q⟩∼ 4log M, |c_p,M| ≲ M^p , ||Φ_M||_L^2^2 ∼ c log M. We then obtain our desired decomposition by imposing a collection of orthogonal directions, which approximates the generalized kernel defined in Definition <ref>. Let u(t) be a solution to (<ref>) starting close enough to Q in ℋ. Then there exist C^1 functions λ(t) and b(t)=(b_1,…,b_L) such that u can be decomposed as u=(Q̃_b(t)+ε)_λ(t) where Q̃_b is given in Proposition <ref> and ε satisfies the orthogonality conditions ⟨ε,H^*iΦ_M⟩=0 for 0≤ i ≤ L, and the orbital stability: |b(t)|+‖ε‖_ℋ≪ 1. (<ref>) says that {H^*iΦ_M}_i ≥ 0 is a suitable substitute for {T_i}_i ≥ 0 in terms of the L^2 × L^2 inner product (<ref>). It is clear that H^i T_j=0 for i>j. For 0≤ i ≤ j, ⟨Φ_M, H^i T_j ⟩ = (-1)^i⟨Φ_M,T_j-i⟩ = (-1)^i∑_p=0^j-i-1 c_p,M⟨H^*p(χ_M ΛQ),T_j-i⟩ + (-1)^j c_j-i,M⟨χ_M ΛQ, ΛQ⟩ =(-1)^j⟨χ_M ΛQ, ΛQ⟩δ_i,j . Now, we consider ε:=u_1/λ - Q̃_b as a map in the (λ,b,u) basis. By the implicit function theorem, (<ref>) is deduced from the non-degeneracy of the following Jacobian: |(∂/∂(λ,b)⟨ε, H^*iΦ_M ⟩)_0≤ i ≤ L|_(λ,b,u)=(1,0,Q) =(-1)^L+1|(⟨T_j, H^*iΦ_M ⟩)_0≤ i,j ≤ L| = |(⟨Φ_M , H^iT_j⟩)_0≤ i,j ≤ L| = |( (-1)^j⟨χ_M ΛQ, ΛQ⟩δ_i,j)_0≤ i,j ≤ L| =(-1)^(L+1)/2⟨χ_M ΛQ, ΛQ⟩^L+1≠ 0 .
§.§ Equation for the errorBased on the previously established decompositionu=Q̃_b(t),λ (t)+w=(Q̃_b(s)+ε (s))_λ(s),(<ref>) turns into the evolution equation of ε:∂_sε -λ_s/λΛε + Hε =- ( ∂_s Q̃_b - λ_s/λΛQ̃_b) + F(Q̃_b + ε )+ Hε= -( ∂_s Q̃_b - F(Q̃_b) + b_1 ΛQ̃_b) + (λ_s/λ+b_1)ΛQ̃_b + F(Q̃_b + ε )-F(Q̃_b)+ Hε=-𝐌𝐨𝐝(t)- ψ̃_b -NL(ε) -L(ε),where𝐌𝐨𝐝(t):=χ_B_1𝐌𝐨𝐝(t) -(λ_s/λ+b_1)ΛQ̃_b , NL(ε):= 1/y^2[0; f(Q̃_b+ε) -f(Q̃_b) - f'(Q̃_b)ε ],L(ε):= 1/y^2[0; (f'(Q̃_b) -f'(Q))ε ].For later analysis, we also employ the evolution equation of w:∂_t w+H_λw= 1/λℱ_λ,ℱ= -𝐌𝐨𝐝(t) -ψ̃_b - NL(ε)-L(ε),where:H_λ=[ 0-1; H_λ 0 ]:=[0 -1; -Δ +r^-2 f'(Q_λ)0 ],We notice that the NL and L terms are situated on the second coordinate:NL(ε)=[ 0; NL(ε) ],L(ε)=[0; L(ε) ].We also introduce another decompositionu=Q̂_b(t),λ (t)+ŵ=(Q̂_b(s)+ε̂ (s))_λ(s)which depends on whether ℓ=L (Proposition <ref>) or ℓ=L-1 (Proposition <ref>). The evolution equation of ε̂ is given by∂_sε̂ -λ_s/λΛε̂ + Hε̂ = -𝐌𝐨𝐝'(t)- ψ̂_b -NL(ε̂) -L(ε̂),where𝐌𝐨𝐝'(t):=𝐌𝐨𝐝(t) -(λ_s/λ+b_1)ΛQ̂_b , NL(ε̂):= 1/y^2[0; f(Q̂_b+ε̂) -f(Q̂_b) - f'(Q̂_b)ε̂ ],L(ε̂):= 1/y^2[ 0; (f'(Q̂_b) -f'(Q))ε̂ ].We also employ the evolution equation of ŵ:∂_t ŵ+H_λŵ= 1/λℱ_λ,ℱ= -𝐌𝐨𝐝'(t) -ψ̂_b - NL(ε̂)-L(ε̂). §.§ Initial data setting for the bootstrapIn this subsection, we describe our initial data and the bootstrap assumption in terms of the fluctuation (<ref>) and the adapted higher-order energies given byℰ_k:=⟨ε_k,ε_k⟩ + ⟨ε̇_k-1,ε̇_k-1⟩ ,2≤ k ≤ L+1.We set our s(t)-variable as follows: for a large enough s_0 ≫ 1, y=r/λ(t),s(t)=s_0+∫_0^t d τ/λ(τ) .For the sake of simplicity, we use a transformed fluctuation V=(V_1(s),…,V_ℓ(s)),V=P_ℓ U where P_ℓ yields the diagonalization (<ref>). Then we illustrate the modulation parameters b as a sum of the exact solutions b^e(s) and V(s): for ℓ=L-1 or L,b(s)=b^e(s)+((P_ℓ^-1V(s))_1/s(log s)^β,…,(P_ℓ^-1V(s))_ℓ/s^ℓ(log s)^β,b_ℓ+1(s),…,b_L(s)).Now, we assume some smallness conditions for our initial data u_0 (s_0) = (u_0,u̇_0) as follows: for large constants M=M(L), K=K(L,M), s_0=s_0(L,M,K), we set the initial data u_0 = u(s_0) asu_0 = (Q̃_b(s_0) + ε(s_0))_λ(s_0),where ε(s_0) satisfies the orthogonality conditions (<ref>), the smallness of higher-order energiesℰ_k (s_0)≤ b_1^2L+4(s_0) and b(s_0) satisfies the smallness of the stable modes: |V_1(s_0)|≤1/4and|b_L(s_0)|≤1/s_0^(L-1)c_1 (log s_0)^3/2forℓ= L-1 . Furthermore, we may assume λ(s_0)=1 up to rescaling. Given u(s_0) of the form (<ref>) satisfying (<ref>), (<ref>) and (<ref>), there exists an initial direction of the unstable modes (V_2(s_0),...,V_ℓ(s_0)) ∈ℬ^ℓ-1such that the corresponding solution to (<ref>) satisfies the following bounds for ^∀ s≥ s_0, * Control of the higher-order energies: for 2≤ k ≤ℓ-1,ℰ_k(s) ≤ b_1^2(k-1)c_1 |log b_1|^K, ℰ_L+1(s)≤ K b_1^2L+2/|log b_1|^2,ℰ_L(s) ≤ K λ^2(L-1) for ℓ=L,b_1^2L|log b_1|^Kfor ℓ=L-1, ℰ_L-1(s) ≤ K λ^2(L-2)for ℓ=L-1. * Control of the stable modes: |V_1(s)|≤ 1,|b_L(s)|≤1/s^L (log s)^β, for ℓ=L-1.* Control of the unstable modes:(V_2(s),…,V_ℓ(s))∈ℬ^ℓ-1.Under the initial setting of (ε(s_0),V(s_0),b_ℓ+1(s_0),…,b_L(s_0)), We define an exit times^*=sup{ s≥ s_0 : (<ref>), (<ref>), (<ref>) and (<ref>) hold on[s_0,s] }.We will prove Proposition <ref> in Section 4 by contradiction, assume that s^* < ∞for all (V_2,…,V_ℓ)∈ℬ^ℓ-1 . 
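Though not used anywhere in the proof, the modulation-parameter system for b recorded above is finite-dimensional and can be integrated numerically. The sketch below is ours and purely illustrative (Python, with the hypothetical choices ℓ=L=3, s_0=10^3, and initial data taken on b^e without the d_k-corrections); the products s^k b_k start near c_k and slowly drift away, a numerical manifestation of the ℓ-1 positive eigenvalues of A_ℓ and of why the modes (V_2,…,V_ℓ)(s_0) must be selected by a topological argument.

import numpy as np
from scipy.integrate import solve_ivp

ell = 3                                            # illustrative value of ell = L
c = {1: ell / (ell - 1)}                           # c_1 = ell/(ell-1)
for k in range(1, ell):
    c[k + 1] = -(ell - k) / (ell - 1) * c[k]       # c_{k+1} = -((ell-k)/(ell-1)) c_k

def rhs(s, b):
    # (b_k)_s = b_{k+1} - (k - 1 + 1/((1+delta_{1k}) log s)) b_1 b_k,  b_{ell+1} = 0
    db = np.zeros(ell)
    for k in range(1, ell + 1):
        b_next = b[k] if k < ell else 0.0
        db[k - 1] = b_next - (k - 1 + 1.0 / ((2.0 if k == 1 else 1.0) * np.log(s))) * b[0] * b[k - 1]
    return db

s0, s1 = 1.0e3, 1.0e5
b_init = [c[k] / s0**k for k in range(1, ell + 1)]   # b^e without the d_k-corrections
sol = solve_ivp(rhs, (s0, s1), b_init, rtol=1e-10, atol=1e-30, dense_output=True)
for s in np.geomspace(s0, s1, 5):
    vals = "  ".join(f"s^{k} b_{k} = {s**k * sol.sol(s)[k - 1]:+.3f}" for k in range(1, ell + 1))
    print(f"s = {s:9.0f}   {vals}")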
§.§ Modulation equations Now we provide the evolution of the modulation parameters from the orthogonality conditions (<ref>).The modulation parameters (λ,b_1,…,b_L) satisfy the following bound |λ_s/λ + b_1 | + ∑_i=1^L-1|(b_i)_s +(i-1 + c_b_1,i)b_1b_i - b_i+1| ≲ M^C b_1(√(ℰ_L+1)+b_1^L+2), | (b_L)_s +(L-1 + c_b_1,L)b_1b_L | ≲√(ℰ_L+1)/√(log M)+ M^C b_1^L+3.(<ref>) and (<ref>) allow us to obtain the a priori assumption (<ref>) under the trapped region. Step 1: Modulation identity. Denote D(t)=(D_0(t),…,D_L(t)) where D_i(t) is given byD_0(t):=-(λ_s/λ + b_1), D_i(t):=(b_i)_s +(i-1 + c_b_1,i)b_1b_i - b_i+1 , b_L+1=0.We take the vector-valued inner product (<ref>) of (<ref>) with H^*kΦ_M for 0≤ k ≤ L, we have the following identity ⟨𝐌𝐨𝐝(t), H^*kΦ_M ⟩ +⟨Hε, H^*kΦ_M ⟩ = λ_s/λ⟨Λε,H^*kΦ_M⟩- ⟨ψ̃_b,H^*kΦ_M⟩ - ⟨NL(ε) + L(ε) ,H^*kΦ_M⟩. Step 2: Estimates for each terms in (<ref>). We claim that the LHS of (<ref>) gives the main contribution to prove (<ref>) and (<ref>).(i) 𝐌𝐨𝐝(t) terms. First, χ_B_1α_b = α_b holds on |y|≤ 2M for small enough b_1. We also have the pointwise bound|Λα_b|+ ∑_i=1^L ∑_j=i+1^L+2|∂S_j/∂ b_i|≲ b_1 M^C for|y| ≤ 2Mfrom our blow-up profile construction.Hence, we estimate the 𝐌𝐨𝐝(t) term in (<ref>) by the transversality (<ref>) and the compact support property of Φ_M ⟨𝐌𝐨𝐝(t), H^*kΦ_M ⟩= D_0(t) ⟨ΛQ_b, H^*kΦ_M ⟩ +∑_i=1^L D_i(t) ⟨T_i + ∑_j=i+1^L+2∂S_j/∂ b_i, H^*kΦ_M ⟩= ∑_i=0^L D_i(t)⟨T_i , H^*kΦ_M ⟩ +⟨ D_0 (t) Λα_b +∑_i=1^L ∑_j=i+1^L+2 D_i(t) ∂S_j/∂ b_i , H^*kΦ_M ⟩=(-1)^k D_k(t) ⟨ΛQ, Φ_M ⟩ + O(M^C b_1 |D(t)|).(ii) Linear terms. For 0≤ k ≤ L-1, we have⟨Hε, H^*kΦ_M ⟩ = ⟨ε, H^*(k+1)Φ_M ⟩=0from the orthogonal conditions (<ref>). For k=L, Cauchy-Schwarz inequality implies|⟨ε, H^*(L+1)Φ_M ⟩ |=|⟨H^L+1ε, Φ_M ⟩ | ≲√(log M)√(ℰ_L+1).(iii) Scaling terms. We can estimate the scaling term in (<ref>) from the compact support property of Φ_M and the coercivity bound (<ref>)|λ_s/λ⟨Λε, H^*kΦ_M⟩| ≤(b_1 + |D_0(t)| ) | ⟨Λε, H^*kΦ_M⟩|≲(b_1 + |D_0(t)| ) M^C √(ℰ_L+1).(iv) ψ̃_b terms. Here, the improved local bound (<ref>) implies|⟨ψ̃_b,H^*kΦ_M⟩ |≲ M^C b_1^L+3.(v) NL(ε) and L(ε) terms.Using the coercivity bound (<ref>) with the crude bound |NL(ε)|≲ |ε|^2/y^2 and |L(ε)|≲ b_1^2|ε|/y,|⟨NL(ε),H^*iΦ_M⟩ | ≲ M^C ℰ_L+1, |⟨L(ε),H^*iΦ_M⟩| ≲ M^C b_1^2 √(ℰ_L+1).Step 3: Conclusion.Injecting the estimates from (<ref>) to (<ref>) into (<ref>), we obtain (-1)^k D_k(t) ⟨ΛQ, Φ_M ⟩ + O(M^C b_1 |D(t)|)= O(√(log M)√(ℰ_L+1))δ_kL+ O(M^C b_1 ( √(ℰ_L+1)+b_1^L+2 ))for 0≤ k ≤ L. We then divide them above equation by ⟨ΛQ, Φ_M ⟩, (<ref>) impliesD_k(t)+ O(M^C b_1 |D(t)|)= O(√(ℰ_L+1)/√(log M))δ_kL + O(M^C b_1 ( √(ℰ_L+1)+b_1^L+2 )),which yields (<ref>) and (<ref>).§.§ Improved modulation equation of b_LAt first glance, (<ref>) seems sufficient to close the modulation equation for b_L because of the presence of √(log M). However, our desired blow-up scenario comes from the exact solution b_L^e, (<ref>) is inadequate to close the bootstrap bounds for stable/unstable modes V(s). Thus, we need to obtain a further logarithm room by adding some correction to b_L.Let B_δ=B_0^δ and b̃_L=b_L + (-1)^L⟨H^Lε, χ_B_δΛQ⟩/4δ |log b_1|.for some small enough universal constant 0<δ≪ 1. 
Then b̃_L satisfies|b̃_L-b_L| ≲ b_1^L+1-Cδand |(b̃_L)_s + (L-1 + c_b,L)b_1b̃_L| ≲√(ℰ_L+1)/√(|log b_1|).We point out that b̃_L is well-defined at time s=s_0, since b̃_L-b_L only depends on b_1 and ε.We knowd/ds⟨H^Lε, χ_B_δΛQ⟩ =⟨H^Lε_s, χ_B_δΛQ⟩ + ⟨H^Lε, (χ_B_δ)_s ΛQ⟩We compute the last inner product (<ref>) from the coercivity bound (<ref>) and (<ref>)|⟨H^Lε, (χ_B_δ)_s ΛQ⟩|=|δ (b_1)_s b_1^-1| |⟨H^Lε, (y∂_yχ)_B_δΛQ⟩| = |δ (b_1)_s b_1^-1| | ⟨ H^L-1/2ε̇, (y∂_y χ)_B_δΛ Q ⟩| ≲ C(M)δ b_1^1-δ√(ℰ_L+1).Using (<ref>), we obtain the following identity similar to (<ref>)⟨H^Lε_s, χ_B_δΛQ⟩ =-⟨H^L𝐌𝐨𝐝(t), χ_B_δΛQ⟩- ⟨H^L+1ε, χ_B_δΛQ⟩+λ_s/λ⟨H^LΛε, χ_B_δΛQ⟩-⟨H^Lψ̃_b, χ_B_δΛQ⟩ - ⟨H^LNL(ε), χ_B_δΛQ⟩- ⟨H^LL(ε), χ_B_δΛQ⟩Considering the support of χ_B_δΛ Q, we can borrow all estimates in Step 2 by substituting the weight log M and M^C to |log b_1| and b_1^-Cδ, respectively. Hence, Lemma <ref> and (<ref>) give a "B_δ version" of (<ref>)d/ds⟨H^Lε, χ_B_δΛQ⟩ =(-1)^L+1D_L(t) ⟨ΛQ, χ_B_δΛ Q⟩ + O( b_1^1-Cδ |D(t)|) + O(√(|log b_1|)√(ℰ_L+1)) + O( b_1^1-Cδ ( √(ℰ_L+1)+b_1^L+2 )) =(-1)^L+14δ |log b_1| D_L(t) +O(√(|log b_1|)√(ℰ_L+1)).Similar to when we estimated (<ref>), we obtain (<ref>)|⟨H^Lε, χ_B_δΛQ⟩|≲ C(M)δ b_1^-Cδ√(ℰ_L+1)≲ b_1^L+1-Cδ, and (<ref>) as follows:|(b̃_L)_s + (L-1 + c_b,L)b_1b̃_L|≲ |⟨H^Lε, χ_B_δΛQ⟩| | b_1+d/ds{1/4δlog b_1}| + √(ℰ_L+1)/√(|log b_1|)≲√(ℰ_L+1)/√(|log b_1|) +b_1^L+2-Cδ.§.§ Lyapounov monotonicity for ℰ_L+1 A simple way to control the adapted higher-order energy ℰ_L+1 is to estimate its time derivative. However, we cannot obtain enough estimates to close the bootstrap bound (<ref>) with ℰ_L+1 by itself, i.e. with b_1 ∼ -λ_t,d/dt{ℰ_L+1/λ^2L}≤ C b_1ℰ_L+1/λ^2L, ℰ_L+1(t)/λ^2L(t) ≤ℰ_L+1(0)/λ^2L(0) + C∫_0^t b_1(τ)ℰ_L+1(τ)/λ^2L(τ) dτ≤K∫_0^tb_1(τ)/λ^2L(τ)b_1^2(L+1)(τ)/|log b_1(τ)|^2dτ≲ K/λ^2L(t)b_1^2(L+1)(t)/|log b_1(t)|^2.Thus, we use the repulsive property of the conjugated Hamiltonian H of H observed in <cit.> and <cit.> with some additional integration by parts to pull out the accurate corrections. We have the following bound:d/dt{ℰ_L+1/λ^2L + O( b_1 C(M) ℰ_L+1/λ^2L)}≤ C b_1/λ^2L+1[ b_1^L+1/|log b_1|√(ℰ_L+1)+ℰ_L+1/√(log M)] Step 1: Evolution of adapted derivatives. 
We start by introducing the rescaled version of the operators A and A^*A_λ :=-∂_r + Z_λ/r , A^*_λ :=∂_r + 1+Z_λ/r, Z_λ(r)=Z(r/λ)=1-(r/λ)^2/1+(r/λ)^2 .We also recall H_λ in (<ref>) and define its conjugate operator H_λ as the rescaled version of the linearized operator H and its conjugate H:H_λ := A^*_λA_λ=-Δ + V_λ/r^2,V(y)=y^4-6y^2+1/(y^2+1)^2,H_λ := A_λA^*_λ=-Δ + V_λ/r^2,V(y) =4/y^2+1.In the same manner as (<ref>), we denote the rescaled version of the adapted derivative operator𝒜_λ:=A_λ,𝒜_λ^2:=A_λ^* A_λ,𝒜_λ^3:= A_λA_λ^*A_λ, ⋯,𝒜_λ^k:=⋯ A_λ^*A_λ A_λ^* A_λ_ktimes,so the higher-order derivatives of w=(w,ẇ)^t adapted to the Hamiltonian H_λ are given by w_k:=𝒜_λ^k w,ẇ_k:=𝒜_λ^k ẇ.One can easily check that w_k=(ε_k)_λ/λ^k andẇ_k=(ε̇_̇k̇)_λ/λ^k+1, our target energy can be written asℰ_L+1/λ^2L=⟨w_L+1,w_L+1⟩ + ⟨ẇ_L, ẇ_L ⟩ = ⟨H_λ w_L,w_L ⟩ + ⟨ẇ_L, ẇ_L ⟩.To describe the evolution of w_k and ẇ_k, we first rewrite the flow (<ref>) of w=(w,ẇ) component-wisely:w_t -ẇ = ℱ_1ẇ_t + H_λ w = ℱ_2,[ ℱ_1; ℱ_2 ] := 1/λℱ_λ=1/λ[ℱ; ℱ̇ ]_λ .Taking 𝒜_λ^k given by (<ref>) into (<ref>), we obtain the evolution equation of w_k:∂_t w_k -ẇ_k = [∂_t,𝒜_λ^k]w +𝒜_λ^kℱ_1∂_tẇ_k + w_k+2 = [∂_t,𝒜_λ^k]ẇ +𝒜_λ^kℱ_2 .Lastly, we employ the following notation: for any time-dependent operator P,∂_t(P):=[∂_t, P],which yields the Leibniz rule between the operator and function:∂_t(Pf)=∂_t(P)f + P f_t.Step 2: First energy identity.Recall (<ref>), we compute the energy identity:∂_t (ℰ_L+1/2λ^2L) = 1/2⟨∂_t(H_λ) w_L,w_L ⟩ + ⟨H_λ w_L,∂_t w_L ⟩ +⟨ẇ_L,∂_t ẇ_L ⟩= 1/2⟨∂_t(H_λ) w_L,w_L ⟩ +⟨H_λ w_L, ∂_t(𝒜_λ^L)w ⟩ + ⟨ẇ_L,∂_t(𝒜_λ^L)ẇ⟩+ ⟨H_λ w_L, 𝒜_λ^L ℱ_1 ⟩ + ⟨ẇ_L,𝒜_λ^Lℱ_2 ⟩.We will check that (<ref>) satisfies the desired bound (<ref>) later. Unlike (<ref>), when (<ref>) and (<ref>) are estimated using coercivity (<ref>) directly, we obtain the following insufficient boundb_1/λ^2L+1 C(M) ℰ_L+1.One can employ repulsive property (<ref>) for (<ref>) with the modulation equation (<ref>):∂_t(H_λ)=-λ_t/λ(ΛV)_λ/r^2 = -b_1+O(b_1^L+2)/λ^38/(1+y^2)^2⇒⟨∂_t(H_λ) w_L,w_L ⟩ <0.We claim that (<ref>) is eventually negative like (<ref>) by adding some corrections. For this,we start by employing (<ref>) to exchange H_λw_L for -∂_tẇ_L,⟨H_λ w_L, ∂_t(𝒜_λ^L)w ⟩=-⟨∂_t ẇ_L, ∂_t(𝒜_λ^L)w ⟩+ ⟨∂_t(𝒜_λ^L) ẇ, ∂_t(𝒜_λ^L)w ⟩ + ⟨𝒜_λ^L ℱ_2, ∂_t(𝒜_λ^L)w ⟩,we can treat (<ref>) via integration by parts in time with (<ref>),-⟨∂_t ẇ_L, ∂_t(𝒜_λ^L)w ⟩ +∂_t ⟨ẇ_L, ∂_t(𝒜_λ^L)w ⟩ = ⟨ẇ_L, ∂_tt(𝒜_λ^L)w ⟩ + ⟨ẇ_L, ∂_t(𝒜_λ^L)w_t ⟩=⟨ẇ_L, ∂_t(𝒜_λ^L)ẇ⟩ + ⟨ẇ_L, ∂_tt(𝒜_λ^L)w ⟩ + ⟨ẇ_L, ∂_t(𝒜_λ^L) ℱ_1 ⟩.In short, we add a correction in (<ref>) to the first inner product in (<ref>) to transform it into the second inner product in (<ref>) up to some errors (<ref>), (<ref>):⟨H_λ w_L, ∂_t(𝒜_λ^L)w ⟩+ ∂_t D_0,1,1 =⟨ẇ_L, ∂_t(𝒜_λ^L)ẇ⟩+ E_0,1,1 + E_0,1,2 + F_0,1,1 + F_0,1,2whereD_0,1,1 =⟨ẇ_L, ∂_t(𝒜_λ^L)w ⟩ , E_0,1,1 =⟨ẇ_L, ∂_tt(𝒜_λ^L)w ⟩ , E_0,1,2= ⟨∂_t(𝒜_λ^L) ẇ, ∂_t(𝒜_λ^L)w ⟩, F_0,1,1 = ⟨ẇ_L, ∂_t(𝒜_λ^L) ℱ_1 ⟩ , F_0,1,2=⟨𝒜_λ^L ℱ_2, ∂_t(𝒜_λ^L)w ⟩ .However, the second inner product in (<ref>) is also not small enough to close our bootstrap by itself. 
Thus, we use (<ref>) again to exchange ẇ_L for ∂_t w_L,⟨ẇ_L, ∂_t(𝒜_λ^L)ẇ⟩ = ⟨∂_tw_L, ∂_t(𝒜_λ^L)ẇ⟩- ⟨∂_t(𝒜_λ^L)w, ∂_t(𝒜_λ^L)ẇ⟩ - ⟨𝒜_λ^L ℱ_1, ∂_t(𝒜_λ^L)ẇ⟩.Integrating by parts in time once more,⟨∂_tw_L, ∂_t(𝒜_λ^L)ẇ⟩ - ∂_t⟨w_L, ∂_t(𝒜_λ^L)ẇ⟩ =-⟨w_L, ∂_tt(𝒜_λ^L)ẇ⟩-⟨w_L, ∂_t(𝒜_λ^L)ẇ_t ⟩=⟨w_L, ∂_t(𝒜_λ^L)w_2 ⟩- ⟨w_L, ∂_tt(𝒜_λ^L)ẇ⟩ -⟨w_L, ∂_t(𝒜_λ^L)ℱ_2⟩.To sum it up, we obtain a relation similar to (<ref>):⟨ẇ_L, ∂_t(𝒜_λ^L)ẇ⟩ + ∂_t D_0,2,1 = ⟨w_L, ∂_t(𝒜_λ^L)w_2 ⟩ + E_0,2,1 + E_0,2,2 + F_0,2,1 + F_0,2,2whereD_0,2,1 =-⟨w_L, ∂_t(𝒜_λ^L)ẇ⟩ , E_0,2,1 = - ⟨w_L, ∂_tt(𝒜_λ^L)ẇ⟩ , E_0,2,2= - ⟨∂_t(𝒜_λ^L)w, ∂_t(𝒜_λ^L)ẇ⟩, F_0,2,1 = - ⟨𝒜_λ^L ℱ_1, ∂_t(𝒜_λ^L)ẇ⟩ , F_0,2,2=-⟨w_L, ∂_t(𝒜_λ^L)ℱ_2⟩.In <cit.> (the case L=1), the authors directly checked that ⟨ w_1,∂_t(𝒜_λ^L)w_2⟩ < 0. In contrast, when L≥ 3, we cannot obtain similar information from ⟨ w_L, ∂_t(𝒜_λ^L)w_2⟩ by itself. We pull out the repulsive terms using the Leibniz rule,⟨ w_L , ∂_t(𝒜_λ^L)w_2⟩ = ⟨ w_L, ∂_t(H_λ)w_L⟩ + ⟨w_L, H_λ∂_t(𝒜_λ^L-2)w_2⟩=⟨ w_L, ∂_t(H_λ)w_L⟩ + ⟨H_λ w_L, ∂_t(𝒜_λ^L-2)w_2⟩ .We observe that the second inner product in (<ref>) has the same form as the first inner product in (<ref>), we can iterate integration by parts, which leads to the following recurrence equations: for 0≤ k ≤L-1/2,⟨H_λ w_L, ∂_t(𝒜_λ^L-2k)w_2k⟩+ ∂_t D_k,1,1 =⟨ẇ_L, ∂_t(𝒜_λ^L-2k)ẇ_2k⟩ + E_k,1,1 + E_k,1,2 + F_k,1,1 + F_k,1,2whereD_k,1,1 = ⟨ẇ_L, ∂_t(𝒜_λ^L-2k) w_2k⟩,E_k,1,1= ⟨ẇ_L, ∂_tt(𝒜_λ^L-2k)w_2k⟩,E_k,1,2 = ⟨∂_t(𝒜_λ^L) ẇ, ∂_t(𝒜_λ^L-2k)w_2k⟩ +⟨ẇ_L, ∂_t(𝒜_λ^L-2k) ∂_t(H_λ^k)w ⟩,F_k,1,1= ⟨ẇ_L, ∂_t(𝒜_λ^L-2k) H_λ^kℱ_1⟩,F_k,1,2= ⟨𝒜_λ^L ℱ_2, ∂_t(𝒜_λ^L-2k)w_2k⟩and⟨ẇ_L, ∂_t(𝒜_λ^L-2k)ẇ_2k⟩ + ∂_t D_k,2,1 = ⟨w_L, ∂_t(𝒜_λ^L-2k)w_2k+2⟩ + E_k,2,1 + E_k,2,2 + F_k,2,1 + F_k,2,2whereD_k,2,1 =-⟨ w_L, ∂_t(𝒜_λ^L-2k) ẇ_2k⟩,E_k,2,1=- ⟨w_L, ∂_tt(𝒜_λ^L-2k)ẇ_2k⟩ ,E_k,2,2 = - ⟨∂_t(𝒜_λ^L)w, ∂_t(𝒜_λ^L-2k)ẇ_2k⟩ - ⟨w_L, ∂_t(𝒜_λ^L-2k)∂_t(H_λ^k)ẇ⟩,F_k,2,1=- ⟨𝒜_λ^L ℱ_1, ∂_t(𝒜_λ^L-2k)ẇ_2k⟩ ,F_k,2,2= -⟨w_L, ∂_t(𝒜_λ^L-2k)ℱ_2⟩.We can also pull out the repulsive term like (<ref>) from (<ref>): for 0≤ k ≤L-3/2, ⟨ w_L , ∂_t(𝒜_λ^L-2k)w_2k+2⟩ = ⟨ w_L, ∂_t(H_λ)w_L⟩ + ⟨H_λ w_L,∂_t(𝒜_λ^L-2k-2)w_2k+2⟩ ,which allows us to iterate our recurrence relations. For k=L-1/2, we can verify that (<ref>) is negative from the fact ∂_t(A_λ)=∂_t(A_λ^*)=-λ_t/λ(ΛZ)_λ/r,⟨∂_t(H_λ) w_L,w_L ⟩ = ⟨∂_t(A_λA_λ^*) w_L,w_L ⟩= ⟨∂_t(A_λ)A_λ^* w_L,w_L ⟩ + ⟨ A_λ∂_t(A_λ^* )w_L,w_L ⟩ = 2⟨∂_t(A_λ) w_L+1,w_L ⟩ .Hence, we decompose (<ref>) as follows: ⟨H_λ w_L, ∂_t(𝒜_λ^L)w ⟩ + ∑_k=0^L-1/2∑_i=1^2 ∂_t D_k,i,1 = L/2⟨∂_t(H_λ) w_L,w_L ⟩ + ∑_k=0^L-1/2∑_i,j=1^2(E_k,i,j+F_k,i,j ).Together with (<ref>) and (<ref>), we obtain the following initial identity of ℰ_L+1:∂_t {ℰ_L+1/2λ^2L+∑_k=0^L-1/2∑_i=1^2 (2-δ_k,0δ_i,1) D_k,i,1} =2L+1/2⟨∂_t(H_λ) w_L,w_L ⟩ +⟨H_λ w_L, 𝒜_λ^L ℱ_1 ⟩ + ⟨ẇ_L,𝒜_λ^Lℱ_2 ⟩+∑_k=0^L-1/2∑_i,j=1^2 (2-δ_k,0δ_i,1) (E_k,i,j+F_k,i,j ).Step 3: Second energy identity. We find out another corrections from E_k,i,1, which contains ∂_tt(𝒜_λ^L-2k). More precisely from Lemma <ref>,E_k,1,1 = ⟨ẇ_L, ∂_tt(𝒜_λ^L-2k)w_2k⟩=∑_m=2k^L-1λ_tt/λ^L+1-m⟨ (Φ_m,L,k^(1))_λ w_m, ẇ_L ⟩+ ∑_m=2k^L-1 O(b_1^2)/λ^L+2-m⟨ (Φ_m,L,k^(2))_λ w_m, ẇ_L ⟩andE_k,2,1 = -⟨w_L, ∂_tt(𝒜_λ^L-2k)ẇ_2k⟩=-∑_m=2k^L-1λ_tt/λ^L+1-m⟨ (Φ_m,L,k^(1))_λẇ_m, w_L ⟩- ∑_m=2k^L-1 O(b_1^2)/λ^L+2-m⟨ (Φ_m,L,k^(2))_λẇ_m, w_L ⟩where Φ_m,L,k^(j_1)(y):= Φ_m-2k,L-2k^(j_1)(y) with j_1=1,2, so that |Φ_m,L,k^(j_1)(y)| ≲1/1+y^L+2-m. Here, we cannot treat λ_tt directly because we do not have the exact relation λ_t=-b_1. 
Thus, we add (b_1)_t to λ_tt and use (<ref>),λ_tt/λ^L+1-m⟨ (Φ_m,L,k^(1))_λ w_m, ẇ_L ⟩ =(λ_t+b_1)_t/λ^L+1-m⟨ (Φ_m,L,k^(1))_λ w_m, ẇ_L ⟩+O(b_1^2)/λ^L+2-m⟨ (Φ_m,L,k^(1))_λ w_m, ẇ_L ⟩.We then correct (<ref>) via integration by parts in time with (<ref>):(λ_t + b_1)_t/λ^L+1-m⟨ (Φ_m,L,k^(1))_λ w_m , ẇ_L ⟩ - ∂_t (λ_t + b_1/λ^L+1-m⟨ (Φ_m,L,k^(1))_λ w_m , ẇ_L ⟩)= (λ_t + b_1)⟨∂_t(λ^m-(L+1)(Φ_m,L,k^(1))_λ) w_m , ẇ_L ⟩ + λ_t + b_1/λ^L+1-m[⟨ (Φ_m,L,k^(1))_λ∂_tw_m , ẇ_L ⟩ + ⟨ (Φ_m,L,k^(1))_λ w_m , ∂_tẇ_L ⟩] = -λ_t(λ_t + b_1)/λ^L+2-m⟨ ( Λ_m-LΦ_m,L,k^(1))_λ w_m , ẇ_L ⟩- λ_t + b_1/λ^L+1-m⟨ (Φ_m,L,k^(1))_λ (ẇ_m + ∂_t(𝒜_λ^m)w + 𝒜_λ^m ℱ_1) , ẇ_L ⟩ +λ_t + b_1/λ^L+1-m⟨ (Φ_m,L,k^(1))_λ w_m , w_L+2 - ∂_t(𝒜_λ^L)ẇ - 𝒜_λ^L ℱ_2⟩.We can also obtain the same correction for E_k,2,1:(λ_t + b_1)_t/λ^L+1-m⟨ (Φ_m,L,k^(1))_λẇ_m , w_L ⟩ - ∂_t (λ_t + b_1/λ^L+1-m⟨ (Φ_m,L,k^(1))_λẇ_m , w_L ⟩) = -λ_t(λ_t + b_1)/λ^L+2-m⟨ (Λ_m-LΦ_m,L,k^(1))_λẇ_m , w_L ⟩- λ_t + b_1/λ^L+1-m⟨ (Φ_m,L,k^(1))_λ ( w_m+2 - ∂_t(𝒜_λ^m)ẇ - 𝒜_λ^m ℱ_2) , w_L ⟩+ λ_t + b_1/λ^L+1-m⟨ (Φ_m,L,k^(1))_λẇ_m , ẇ_L + ∂_t(𝒜_λ^L)w + 𝒜_λ^L ℱ_1⟩.Rearranging the existing errors E_k,i,j, F_k,i,j with introducing a new correction notation D_k,i,2 and new error notation E^*_k,i,j, F^*_k,i,j for 0≤ k ≤L-1/2 and i=1,2:E_k,i,1 - ∂_t D_k,i,2+ E_k,i,2 + F_k,i,1 + F_k,i,2 = E^*_k,i,1 + E^*_k,i,2 + F^*_k,i,1 + F^*_k,i,2whereD_k,1,2= ∑_m=2k^L-1λ_t + b_1/λ^L+1-m⟨ (Φ_m,L,k^(1))_λ w_m , ẇ_L ⟩ , E_k,1,1^* = -∑_m=2k^L-1λ_t(λ_t + b_1)/λ^L+2-m⟨ (Λ_m-LΦ_m,L,k^(1))_λ w_m , ẇ_L ⟩-∑_m=2k^L-1λ_t + b_1/λ^L+1-m⟨ (Φ_m,L,k^(1))_λ (ẇ_m + ∂_t(𝒜_λ^m)w) , ẇ_L ⟩+ ∑_m=2k^L-1λ_t + b_1/λ^L+1-m⟨ (Φ_m,L,k^(1))_λ w_m , w_L+2 - ∂_t(𝒜_λ^L)ẇ⟩,E_k,1,2^*= E_k,1,2+∑_m=2k^L-1O(b_1^2)/λ^L+2-m⟨ (Φ_m,L,k^(2))_λ w_m , ẇ_L ⟩,F_k,1,1^*=F_k,1,1- ∑_m=2k^L-1λ_t + b_1/λ^L+1-m⟨ (Φ_m,L,k^(1))_λ𝒜_λ^m ℱ_1 , ẇ_L ⟩F_k,1,2^*=F_k,1,2 - ∑_m=2k^L-1λ_t + b_1/λ^L+1-m⟨ (Φ_m,L,k^(1))_λ w_m , 𝒜_λ^L ℱ_2 ⟩andD_k,2,2 = -∑_m=2k^L-1λ_t + b_1/λ^L+1-m⟨ (Φ_m,L,k^(1))_λẇ_m , w_L ⟩ , E_k,2,1^*= ∑_m=2k^L-1λ_t(λ_t + b_1)/λ^L+2-m⟨ (Λ_m-LΦ_m,L,k^(1))_λẇ_m , w_L ⟩ + ∑_k=2m^L-1λ_t + b_1/λ^L+1-m⟨ (Φ_m,L,k^(1))_λ ( w_m+2 - ∂_t(𝒜_λ^m)ẇ) , w_L ⟩ - ∑_m=2k^L-1λ_t + b_1/λ^L+1-m⟨ (Φ_m,L,k^(1))_λẇ_m , ẇ_L + ∂_t(𝒜_λ^L)w ⟩,E_k,2,2^*= E_k,2,2 -∑_m=2k^L-1O(b_1^2)/λ^L+2-m⟨(Φ_m,L,k^(2))_λẇ_m , w_L ⟩,F_k,2,1^*=F_k,2,1- ∑_m=2k^L-1λ_t + b_1/λ^L+1-m⟨ (Φ_m,L,k^(1))_λẇ_m , 𝒜_λ^L ℱ_1⟩F_k,2,2^*=F_k,2,2 - ∑_m=2k^L-1λ_t + b_1/λ^L+1-m⟨ (Φ_m,L,k^(1))_λ𝒜_λ^m ℱ_2, w_L ⟩,we obtain the following modified energy identity:∂_t {ℰ_L+1/2λ^2L+∑_k=0^L-1/2∑_i,j=1^2 (2-δ_k,0δ_i,1) D_k,i,j} =2L+1/2⟨∂_t(H_λ) w_L,w_L ⟩ +⟨H_λ w_L, 𝒜_λ^L ℱ_1 ⟩ + ⟨ẇ_L,𝒜_λ^Lℱ_2 ⟩+∑_k=0^L-1/2∑_i,j=1^2 (2-δ_k,0δ_i,1) (E^*_k,i,j+F^*_k,i,j ).Step 4: Error estimation. All we need is to estimate all inner products except the repulsive one ⟨∂_t(H_λ) w_L, w_L ⟩. We can classify such inner products into two main categories: quadratic terms with respect to w (i.e. D_k,i,j and E^*_k,i,j), those involving ℱ_i, i=1,2 (i.e. F^*_k,i,j and (<ref>)).(i) D_k,i,j terms. From (<ref>) and Lemma <ref>, all inner products of D_k,i,j can be written as sums of terms of the form: 0≤ m ≤ L-1,O(b_1)/λ^2L⟨Φ_m,Lε_m,ε̇_L ⟩,O(b_1)/λ^2L⟨Φ_m,Lε̇_m ,ε_L ⟩, |Φ_m,L(y)| ≲1/1+y^L+2-m.Indeed, the Φ_m,L's included in each of the above inner products are different functions (ex. Φ_m-2k,L-2k^(j_1), Φ_m,L,k^(j_2), Λ_m-LΦ_m,L,k^(1)…), but we abuse the notation because they are all rational functions with the same asymptotics. 
From the coercive property (<ref>), we obtain the desired bound for the correction in (<ref>):| ⟨Φ_m,Lε_m,ε̇_L ⟩ |≲‖ε_m/1+y^L+2-m‖_L^2√(ℰ_L+1)≲ C(M) ℰ_L+1, | ⟨Φ_m,Lε̇_m ,ε_L ⟩ |≲‖1+log y/1+y^L+1-mε̇_m ‖_L^2√(ℰ_L+1)≲ C(M) ℰ_L+1.(ii) E^*_k,i,j terms. Similarly, all inner products of E^*_k,i,j can be written as sums of terms of the form: for 0≤ m,n ≤ L-1, O(b_1^2)/λ^2L+1⟨Φ_m,Lε_m,ε̇_L ⟩ ,O(b_1^2)/λ^2L+1⟨Φ_m,Lε̇_m ,ε_L ⟩ ,O(b_1^2)/λ^2L+1⟨Φ_m,Lε̇_m , Φ_n,Lε_n ⟩ O(b_1^2)/λ^2L+1⟨Φ_m,Lε̇_m,ε̇_L ⟩ ,O(b_1^2)/λ^2L+1⟨Φ_m,Lε_m ,ε_L+2⟩ ,O(b_1^2)/λ^2L+1⟨Φ_m,Lε_m+2 ,ε_L ⟩,which are bounded byb_1^2/λ^2L+1 C(M) ℰ_L+1.(iii) F^*_k,i,j and (<ref>). Recall ℱ_1 = λ^-1ℱ_λ and ℱ_2 = λ^-2ℱ̇_λ, all inner products of F^*_k,i,j can be written as sums of terms of the form: for 0≤ m ≤ L-1O(b_1)/λ^2L+1⟨Φ_m,L𝒜^m ℱ ,ε̇_L ⟩ ,O(b_1)/λ^2L+1⟨Φ_m,Lε̇_m , 𝒜^L ℱ⟩ , O(b_1)/λ^2L+1⟨Φ_m,Lε_m , 𝒜^L ℱ̇⟩, O(b_1)/λ^2L+1⟨Φ_m,L𝒜^m ℱ̇ ,ε_L ⟩ ,1/λ^2L+1⟨ε_L+1, 𝒜^L+1ℱ⟩ ,1/λ^2L+1⟨ε̇_L,𝒜^Lℱ̇⟩.We claim that ℱ and ℱ̇ satisfy the following estimates: for 0≤ k ≤ L-1, ‖𝒜^L+1ℱ‖_L^2 +‖𝒜^L ℱ̇‖_L^2 ≲ b_1 [ b_1^L+1/|log b_1| + √(ℰ_L+1/log M)],‖1+log y/1+y^L+1-k𝒜^k ℱ‖_L^2 ≲ b_1^L+2|log b_1|^C,‖1+log y/1+y^L+1-k𝒜^k ℱ̇‖_L^2 ≲b_1^L+1/|log b_1| + √(ℰ_L+1/log M).Together with the coercivity (<ref>), the three inner products in (<ref>) are bounded byb_1/λ^2L+1 C(M) b_1^L+2|log b_1|^C√(ℰ_L+1) .For the three inner products in (<ref>), we obtain the sharp boundb_1/λ^2L+1(b_1^L+1/|log b_1| + √(ℰ_L+1/log M))√(ℰ_L+1) from (<ref>), (<ref>) and the sharp coercivity bound ‖ε_L/y(1+|log y|)‖_L^2^2 ≤ C⟨Hε_L, ε_L ⟩≤ Cℰ_L+1.Hence, it remains to prove (<ref>), (<ref>) and (<ref>).Step 5: Proof of (<ref>), (<ref>) and (<ref>). Recall (<ref>), we have ℱ=(ℱ,ℱ̇)^t and[ℱ; ℱ̇ ]= -𝐌𝐨𝐝(t) -ψ̃_b - NL(ε)-L(ε),NL(ε) = [ 0; NL(ε) ],L(ε) = [0; L(ε) ].Thus, we will estimate each of the above four errors.(i) ψ̃_b term. It directly follows from the global and logarithmic weighted bounds of Proposition <ref>.(ii) 𝐌𝐨𝐝(t) term. Recall (<ref>), we have𝐌𝐨𝐝(t)= -(λ_s/λ+b_1 )( ΛQ + ∑_i=1^L b_iΛ( χ_B_1T_i) +∑_i=2^L+2Λ(χ_B_1S_i) ) + ∑_i=1^L ( (b_i)_s+(i-1 + c_b,i)b_1b_i -b_i+1) χ_B_1( T_i +∑_j=i+1^L+2∂S_j/∂ b_i) .Due to Lemma <ref>, the logarithmic weighted bounds (<ref>) and (<ref>) are derived from the finiteness of the following integrals∫|1+log y/1+y^L+1-k𝒜^k [ Λ Q + ∑_i=1^L b_iΛ_1-i(χ_B_1T_i)+∑_i=2^L+2Λ_1-i(χ_B_1S_i) ] |^2 ≲ 1 ∑_i=1^L∫|1+log y/1+y^L+1-k𝒜^k [ χ_B_1T_i+ χ_B_1∑_j=i+1^L+2∂S_j/∂ b_i] |^2 ≲ 1,which comes from the admissibility of T_i and Lemma <ref>. For the global bounds (<ref>), we need to gain one extra b_1 as follows: since A Λ Q=0, the admissibility of T_i and Lemma <ref> imply∫|𝒜^L+1Λ Q + ∑_i=1^L b_i𝒜^L+1-i[Λ_1-i(χ_B_1T_i)]+∑_i=2^L+2𝒜^L+1-i[Λ_1-i(χ_B_1S_i)]|^2 ≲∑_i=1^L b_1^i∫_y≤ 2B_1|(1+log y)y^i-2/1+y^L|^2 + ∑_i=2^L+1 b_1^2i + b_1^2(L+1)/|log b_1|^2≲ b_1^2.For (<ref>), we additionally use the cancellation 𝒜^L T_i = 0 for 1≤ i ≤ L to estimate∑_i=1^L ∫ |𝒜^L+1-i (χ_B_1 T_i) |^2≲∑_i=1^L∫_y∼ B_1|y^i-2log y/y^L|^2 ≲b_1^2. ∑_j=i+1^L+2∫|𝒜^L+1-i[χ_B_1∂S_j/∂ b_i] |^2≲∑_j=i+1^L+2 b_1^2(j-i) + b_1^2(L+1-i)/|log b_1|^2≲ b_1^2.Hence, (<ref>) comes from Lemma <ref>:‖𝒜^L+1Mod(t) ‖_L^2+‖𝒜^L Ṁȯḋ(t) ‖_L^2≲ b_1[ b_1^L+1/|log b_1| + √(ℰ_L+1/log M)].For the remaining two terms, NL(ε) and L(ε), we follow the approach developed in <cit.>. We deal with the case y≤ 1 and y≥ 1 separately.(iii) NL(ε) term:(a) y≤ 1. 
From a Taylor Lagrange formula in Lemma <ref>, NL(ε) also satisfies a Taylor Lagrange formulaNL(ε)= ∑_i=0^L-1/2c_i y^2i+1 + r_ε,where|c_i|≲ C(M) ℰ_L+1, |𝒜^k r_ε| ≲ y^L-k|log y|C(M) ℰ_L+1,0≤ k ≤ L.Since the expansion part of NL(ε) is an odd function, that of 𝒜^k NL(ε) also has a single parity from the cancellation A(y)=O(y^2). Using (<ref>), we obtain|𝒜^k NL(ε)(y)| ≲ C(M)|log y| ℰ_L+1, 0≤ k ≤ L,and thus we conclude‖𝒜^L NL(ε) ‖_L^2(y≤ 1)+‖1+|log y|^C/1+y^L+1-k𝒜^k NL(ε) ‖_L^2(y≤ 1)≲ C(M) ℰ_L+1≲ b_1^2L+1.(b) y≥ 1. LetNL(ε)=ζ^2 N_1(ε), ζ=ε/y, N_1(ε) = ∫_0^1 (1-τ)f”(Q̃_b+τε)dτ .We have the following bounds for i≥ 0, j≥ 1 and 1≤ i+j ≤ L,‖∂_y^i ζ/y^j-1‖_L^∞(y≥ 1) + ‖∂_y^i ζ/y^j‖_L^2(y≥ 1) ≲ |log b_1|^C √(ℰ_i+j+1), ‖ζ‖_L^2(y≥ 1)≲ 1 |N_1(ε)| ≲ 1 , |∂_y^k N_1(ε)|≲ |log b_1|^C [ 1/y^k+1 + √(ℰ_k+1)],1≤ k ≤ L .The estimates (<ref>) are consequences of Lemma <ref> and the orbital stability (<ref>). The estimates (<ref>) come from the crude bound|∂_y^k Q̃_b | ≲ |log b_1|^C [ 1/y^k+1 + ∑_i=1^L+1/2 b_1^2i y^2i-1-k 1_y≤ 2B_1] ≲|log b_1|^C/y^k+1.We have the trivial bound|1+|log y|^C/y^L+1-k𝒜^k NL(ε) |≲|𝒜^k NL(ε)/y^L-k|,for 0≤ k ≤ L,|𝒜^k NL(ε)/y^L-k| ≲∑_k=0^L |∂_y^k NL(ε)|/y^L-k≲∑_k=0^L 1/y^L-k∑_i=0^k|∂_y^i ζ^2| |∂_y^k-iN_1(ε)| ≲∑_k=0^L |log b_1|^C/y^L-k[ |∂_y^k ζ^2| + ∑_i=0^k-1√(ℰ_k-i+1)|∂_y^i ζ^2|] ≲∑_k=0^L |log b_1|^C/y^L-k[ ∑_i=0^k |∂_y^i ζ||∂_y^k-iζ| + ∑_i=0^k-1∑_j=0^i √(ℰ_k-i+1)|∂_y^j ζ||∂_y^i-jζ|].Denote I_1=k-i, I_2=i, there exists J_2 ∈ℕ such thatmax(0,1-i) ≤ J_2 ≤min(L+1-k,L-i), J_1=L+1-k-J_2,we have1≤ I_1+J_1 ≤ L,1≤ I_2+J_2 ≤ L, I_1+I_2+J_1+J_2=L+1.Thus‖∂_y^iζ·∂_y^k-iζ/y^L-k‖_L^2(y≥ 1) ≤‖∂_y^I_1ζ/y^J_1-1‖_L^∞(y≥ 1)‖∂_y^I_2ζ/y^J_2‖_L^2(y≥ 1)≲ |log b_1|^C √(ℰ_I_1+J_1+1ℰ_I_2+J_2+1).If I_1+J_1<L-1 and I_2+J_2<L-1, (L+1)c_1>L+2 implies√(ℰ_I_1+J_1+1ℰ_I_2+J_2+1)≲ |log b_1|^C(K) b_1^(L+1)c_1≲ b_1^δ(L) b_1^L+2.If either I_1+J_1=L-1 or I_2+J_2=L-1, L + 2c_1 > L+2 implies√(ℰ_I_1+J_1+1ℰ_I_2+J_2+1)≲ |log b_1|^C(K) b_1^L+2c_1≲ b_1^δ(L) b_1^L+2.If either I_1+J_1=L or I_2+J_2=L, L+1 + c_1 > L+2 implies√(ℰ_I_1+J_1+1ℰ_I_2+J_2+1)≲ |log b_1|^C(K) b_1^L+1+c_1≲ b_1^δ(L) b_1^L+2.We calculate the latter term similarly except for the case k=L and 0≤ i=j ≤ k-1. Here, we use the energy bound ‖ζ‖_L^2(y≥ 1)≲ 1, √(ℰ_L-i+1)‖∂_y^i ζ·ζ‖_L^2(y≥ 1) ≲√(ℰ_L-i+1)‖∂_y^i ζ‖_L^∞(y≥ 1)≲ |log b_1|^C(K) b_1^(L+1)c_1 if0<i<L-1 |log b_1|^C(K) b_1^L+2c_1 ifi=1,L-2 |log b_1|^C(K) b_1^L+1 + c_1 ifi=0,L-1≲ b_1^δ(L) b_1^L+2.The remaining case can be estimated by the following inequalities: since k-i≥ 1, I_1+J_1≥ 1, I_2+J_2≥ 1 and I_1+I_2+J_1+J_2=L+1-(k-i), √(ℰ_k-i+1ℰ_I_1+J_1+1ℰ_I_2+J_2+1) ≲ |log b_1|^C(K) b_1^(L+1)c_1 ifk-i <L-1 |log b_1|^C(K) b_1^L+2c_1 ifk-i=L-1≲ b_1^δ(L) b_1^L+2.(iv) L(ε) term :(a) y≤ 1. Similar to the case NL(ε), we obtain a Taylor Lagrange formula for L(ε):L(ε)=b_1^2[∑_i=0^L-1/2c̃_i y^2i+1 + r̃_ε],where|c̃_i|≲ C(M) √(ℰ_L+1), |𝒜^k r̃_ε| ≲ y^L-k|log y|C(M) √(ℰ_L+1),0≤ k ≤ L.Using the cancellation A(y)=O(y^2) and (<ref>), we obtain|𝒜^k L(ε)(y)| ≲ C(M)b_1^2|log y| √(ℰ_L+1), 0≤ k ≤ L,and thus we conclude‖𝒜^L L(ε) ‖_L^2(y≤ 1)+‖1+|log y|^C/1+y^L+1-k𝒜^k L(ε) ‖_L^2(y≤ 1)≲ C(M) b_1^2 √(ℰ_L+1).(b) y≥ 1. Let L(ε)=ε N_2(α_b), N_2(α_b)=f'(Q̃_b)-f'(Q)/y^2=χ_B_1α_b/y^2∫_0^1 f”(Q+τχ_B_1α_b)dτ.Similar to (<ref>), we have the bound|∂_y^k N_2|≲b_1^2 |log b_1|^C/y^k+1, 0≤ k ≤ L,this yields the desired result since L(ε) satisfies the pointwise bound|𝒜^k L(ε)/y^L-k|≲∑_i=0^k |∂_y^i ε||∂_y^k-i N_2|/y^L-k≲ b_1^2 |log b_1|^C ∑_i=0^k |∂_y^iε|/y^L+1-i.§ PROOF OF THE MAIN THEOREM §.§ Proof of Proposition <ref>Step 1: Control of the scaling law. 
We have the bound-λ_s/λ=c_1/s + d_1/slog s + O( 1/s(log s)^β).We rewrite as|d/ds(log( s^c_1(log s)^d_1λ(s) ))|≲1/s (log s)^β,integration givesλ(s)=s_0^c_1(log s_0)^d_1/s^c_1(log s)^d_1(1+O( 1/(log s)^β-1) ).Note thatd/ds(b_1^2n (log b_1)^2m/λ^2k-2) = 2 b_1^2n-1 (log b_1)^2m/λ^2k-2[ (k-1)b_1^2 + b_1s( n + m/log b_1) +O(b_1^L+2) ] .From Lemma <ref> with (<ref>), (<ref>) and (<ref>),(k-1)b_1^2 + b_1s( n + m/log b_1) =(k-1)b_1^2 + ( b_2 - c_b_1,1 b_1^2 ) ( n + m/log b_1) + O(b_1^L+2 )= (k-1)b_1^2 + n b_2 + 2mb_2-nb_1^2/2log b_1 + O( b_1^2/(log b_1)^2) = (k-1)c_1^2 + n c_2/s^2 +2(k-1)c_1d_1 - nd_2 - mc_2 + n/2c_1^2 /s^2 log s+ O( 1/s^2 (log s)^β).The recurrence relations (<ref>) and (<ref>) imply(k-1)c_1^2 + nc_2 = c_1( (k-1)ℓ/ℓ-1 -n )and2(k-1)c_1d_1 -nd_2+ n/2c_1^2 = d_1(2(k-1)c_1 +n ) <0.Hence, if we set n=L+1 and m=-1 for k=L+1, c_1 ≥L/L-1 implies(k-1)b_1^2 + b_1s( n + m/log b_1) ≥1/s^2(c_1/L-1 + O( 1/log s) ) > 0 and if we set n=(k-1)c_1 and large enough m=m(k,L) for k≤ L,(k-1)b_1^2 + b_1s( n + m/log b_1) ≥c_1/s^2log s( m/2 + O(1/(log s)^β-1)) > 0for all s∈ [s_0,s^*) with sufficiently large s_0. Thus, b_1^2(L+1)(0)/(log b_1(0))^2 λ^2L(0)≤b_1^2(L+1)(t)/(log b_1(t))^2 λ^2L(t)andb_1^2(k-1)c_1(0)|log b_1(0)|^m/λ^2(k-1)(0)≤b_1^2(k-1)c_1(t)|log b_1(t)|^m/λ^2(k-1)(t).Step 2: Improved bound on ℰ_L+1.We integrate the Lyapounov monotonicity (<ref>) and inject the bootstrap bounds (<ref>) and (<ref>),ℰ_L+1(t)≲λ^2L(t)/λ^2L(0)(1+b_1C(M))ℰ_L+1(0) + b_1C(M) ℰ_L+1(t) + [ K/√(log M) +√(K)] λ^2L(t) ∫_0^t b_1/λ^2L+1b_1^2(L+1)/|log b_1|^2 dτ≲b_1^2(L+1)(t)/|log b_1(t)|^2 + [K/√(log M) +√(K)] λ^2L(t) ∫_0^t b_1/λ^2L+1b_1^2(L+1)/|log b_1|^2.To deal with the integral in (<ref>), one can directly replace λ and b_1 with functions of s using (<ref>) and (<ref>). However, the fact that s_0 in (<ref>) depends on the bootstrap constant K requires carefulness in direct substitution. On behalf of this approach, we employ integration by parts with(<ref>), (<ref>), (<ref>) and the fact c_1 ≥ L/(L-1),∫_0^t b_1/λ^2L+1b_1^2(L+1)/|log b_1|^2= - ∫_0^t λ_t/λ^2L+1b_1^2(L+1)/|log b_1|^2 + ∫_0^t O( b_1^L+2)b_1^2(L+1)/λ^2L+1|log b_1|^2 = 1/2L[b_1^2(L+1)(t)/λ^2L(t)|log b_1(t)|^2-b_1^2(L+1)(0)/λ^2L(0)|log b_1(0)|^2] -1/2L∫_0 ^t 1/λ^2L(b_1^2(L+1)/|log b_1|^2)_t+∫_0^t O( b_1^L+2)b_1^2(L+1)/λ^2L+1|log b_1|^2≤b_1^2(L+1)(t)/λ^2L(t)|log b_1(t)|^2 + ∫_0^t b_1/λ^2L+1( L^2-1/L^2 + C/|log b_1|) b_1^2(L+1)/|log b_1|^2,we obtain the bound∫_0^t b_1/λ^2L+1b_1^2(L+1)/|log b_1|^2≲b_1^2(L+1)(t)/λ^2L(t)|log b_1(t)|^2and therefore,ℰ_L+1(t) ≲[ 1+ K/√(log M) +√(K)] b_1^2(L+1)(t)/|log b_1(t)|^2≤K/2b_1^2(L+1)(t)/|log b_1(t)|^2 .Step 3: Improved bound on ℰ_k. We now claim the improved bound on the intermediate energiesℰ_k≤b_1^2(k-1)c_1 |log b_1|^C+K/2.This follows from the monotonicity formula for 2≤ k ≤ L,d/dt{ℰ_k/λ^2k-2}≤ C b_1|log b_1|^C/λ^2k-1(√(ℰ_k+1)+b_1^k+b_1^δ(k)+(k-1)c_1)√(ℰ_k)for some universal constants C, δ>0 independent of the bootstrap constant K. We integrate the above monotonicity formula (K/2 comes from √(ℰ_k)),ℰ_k ≲ b_1^2(k-1)c_1|log b_1|^C+K/2 + λ^2k-2(t) ∫_0^t b_1^1+ 2(k-1)c_1/λ^2k-1 |log b_1|^C+K/2 In this case, we directly substitute λ and b_1 with functions of s since the possible large coefficient can be absorbed by |log b_1|^C. 
From (<ref>), (<ref>) and (<ref>),λ^2k-2(t) ∫_0^t b_1^1+ 2(k-1)c_1/λ^2k-1 |log b_1|^C+K/2 dτ= λ^2k-2(s) ∫_s_0^sb_1^1+ 2(k-1)c_1/λ^2k-2 |log b_1|^C+K/2 d σ≲(log s)^C+K/2/s^2(k-1)c_1∫_s_0^s 1/σ dσ≲ b_1^2(k-1)c_1|log b_1|^C + K/2.However, these improved bounds (<ref>) are inadequate to close the bootstrap bounds when ℓ=L (<ref>) and when ℓ=L-1(<ref>). In these cases, we employ alternative energies defined byℰ_ℓ:=⟨ε̂_ℓ, ε̂_ℓ⟩ + ⟨ε̇̂̇_ℓ-1, ε̇̂̇_ℓ-1⟩.We can easily check thatℰ_ℓ =ℰ_ℓ + O(b_1^2ℓ |log b_1|^2)Then we have the following monotonicity formulae d/dt{ℰ_ℓ/λ^2ℓ-2 + O(b_1^2ℓ|log b_1|^2/λ^2ℓ-2)} ≤b_1^ℓ+1|log b_1|^δ/λ^2ℓ-1 ( b_1^ℓ|log b_1|+√(ℰ_ℓ)).Integrating (<ref>), the initial bounds (<ref>) and the bootstrap bounds (<ref>), (<ref>) implyℰ_ℓ(t)/λ^2(ℓ-1)(t) ≲b_1^2ℓ|log b_1|^2(t)/λ^2ℓ-2(t) + ℰ_ℓ(0) + b_1^2ℓ(0)|log b_1(0)|^2/λ^2(ℓ-1)(0) + ∫_0^tb_1^ℓ+1|log b_1|^δ/λ^2ℓ-1 ( b_1^ℓ|log b_1|+√(ℰ_ℓ)) dτ≲ 1 + ∫_0^tb_1^ℓ+1|log b_1|^δ'/λ^ℓdτ≲ 1 + ∫_s_0^s1/σ (logσ)^ℓ/ℓ-1-δ'dσ≲K/2.The monotonicity formulae (<ref>), (<ref>) are proved in Appendix <ref>.We remark that the exponent 1+2(k-1)c_1 of b_1 in (<ref>) can be replaced by 1+δ+2(k-1)c_1 for some small δ>0 when 2≤ k ≤ℓ-1, so we can improve the bound (<ref>) to b_1^2(k-1)c_1 + δ|log b_1|^C. Hence for 2≤ k ≤ℓ, we get the uniform boundsℰ_k≲λ^2k-2. Step 4: Control of stable/unstable parameters. We use the modified modulation parameters b̃=(b_1,…,b_L-1,b̃_L) with b̃_L given by (<ref>) and the corresponding fluctuation V=P_ℓU where U=(U_1,…,U_ℓ) is defined byU_k/s^k (log s)^β =b̃_k-b_k^e ,1≤ k ≤ℓ. We note that the existence of V(s_0) in Proposition <ref> is equivalent to the existence of V(s_0) from remark <ref> and (<ref>) in terms of |V-V| ≲ s^L|log s|^β|b_L-b̃_L|≲ s^L|log s|^β b_1^L+1-Cδ≲1/s^1/2. Hence, we can replace V for all V of the initial assumptions (<ref>), (<ref>) and bootstrap bounds (<ref>), (<ref>) in subsection <ref>. In particular, we replace the assumption (<ref>) as s̃^* <∞for all (V_2,…,V_ℓ)∈ℬ^ℓ-1 . where s̃^* denotes the modified exit time to indicate that V has been changed to V. We start by closing the bootstrap bounds for the stable parameters b_L (for the case ℓ=L-1) and V_1, then we rule out the assumption of the unstable parameters (V_2(s),…,V_ℓ(s)) via showing a contradiction by Brouwer's fixed point theorem. (i) Stable parameter b_L when ℓ=L-1: Recall Lemma <ref>, we have |(b̃_L)_s + (L-1 + c_b,L)b_1b̃_L| ≲√(ℰ_L+1)/√(|log b_1|).Note that c_1=(L-1)/(L-2) and b_1∼ c_1/s+d_1/(slog s). Then from (<ref>) and (<ref>),d/ds( s^(L-1)c_1(log s)^3/2b̃_L ) = s^(L-1)c_1-1(log s)^3/2((L-1)c_1 + 3/2/log s)b̃_L-s^(L-1)c_1(log s)^3/2( (L-1 + c_b,L)b_1 b̃_L + O( √(ℰ_L+1)/√(|log b_1|)) ) =s^(L-1)c_1-1(log s)^3/2 O( 1/s^L (log s)^1+β + 1/s^L (log s)^3/2) =O( s^(L-1)c_1-L-1).We integrate the above equation and estimate using the initial condition (<ref>)|b_L(s)| ≲ b_1^L+1-Cδ + s_0^(L-1)c_1 (log s_0)^3/2|b̃_L(s_0)|/s^(L-1)c_1 (log s)^3/2 + 1+(s_0/s)^(L-1)c_1-L/s^L (log s)^3/2≤1/2/s^L(log s)^βwith the fact (L-1)c_1 > L. Here, we choose β=5/4.To control the modes V, we rewrite (<ref>) for our b̃ as follows:s(U)_s- A_ℓU = O ( 1/(log s)^3/2-β)using (<ref>), Lemma <ref> and Lemma <ref>. Here, the reduced exponent 3/2 comes from (<ref>).By the definition of V, (<ref>) is equivalent tos(V)_s-D_ℓV = O(1/(log s)^3/2-β)where D_ℓ is given by (<ref>). 
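Before treating the modes separately, we record the elementary scalar model behind this system (a standard computation, not specific to our problem): for a∈ℝ and c>0, the equation s v_s = a v + O((log s)^-c) integrates, via (s^-a v)_s = s^-a-1 O((log s)^-c), to

v(s) = (s/s_0)^a v(s_0) + s^a ∫_s_0^s σ^-a-1 O((log σ)^-c) dσ .

For a=-1, the eigenvalue of the V_1-direction in D_ℓ, the data term decays like s_0/s and the forcing contributes at most O((log s_0)^-c), which is the improvement obtained for V_1 in (ii) below. For a>0, the case of the directions V_2,…,V_ℓ, the data term is amplified by (s/s_0)^a, so smallness cannot be propagated forward in time; these modes are instead selected by the topological argument of (iii).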
(ii) Stable mode V_1: the first coordinate of (<ref>) can be written ass(V_1)_s + V_1=(sV_1)_s =O (1/(log s)^3/2-β).Hence, we improve the bound for V_1(s) from the initial assumption (<ref>):|V_1(s)| ≲|s_0V_1(s_0)|/s + 1/s∫_s_0^s dτ/(logτ)^3/2-β≤1/2.(iii) Unstable mode V_k, 2≤ k ≤ℓ: Our goal is to construct a continuous map f:ℬ^ℓ-1→𝒮^ℓ-1 as f(V_2(s_0),…,V_ℓ(s_0)) = (V_2(s̃^*),…,V_ℓ(s̃^*)).The assumption (<ref>) yields that f can be well-defined on ℬ^ℓ-1 and the improved bootstrap bounds gives the exit condition (V_2(s̃^*),…,V_ℓ(s̃^*)) ∈𝒮^ℓ-1. We obtain the outgoing behavior of the flow map s↦ (V_2,…,V_ℓ) from (<ref>): for all time s ∈ [s_0,s̃^*] such that ∑_i=2^ℓV_i^2 ≥ 1/2,d/ds(∑_i=2^ℓV_i^2 )= 2 ∑_i=2^ℓ (V_i)_sV_i= 2/s∑_i=2^ℓ[ i/ℓ-1V_i^2 + O (1/(log s)^3/2-β) ] >0.We note that (<ref>) implies two key results. First, (<ref>) allows us to prove the continuity of f by showing the continuity of the map(V_2(s_0),…,V_ℓ(s_0)) ↦s̃^* with some standard arguments (see Lemma 6 in <cit.>).Second, if we choose s=s_0 and (V_2(s_0),…,V_ℓ(s_0)) ∈𝒮^ℓ-1, ∑_i=2^ℓV_i^2(s) >1 for any s>s_0, so s̃^*=s_0. Hence, f is an identity map on 𝒮^ℓ-1 itself, which contradicts to Brouwer's fixed point theorem.§.§ Proof of Theorem 1.1Recall that there exists c(u_0,u̇_0)>0 such thatλ(s)= c(u_0,u̇_0)/s^c_1(log s)^d_1[ 1+O( 1/(log s)^β-1)].Using T-t= ∫_s^∞λ(s) ds <∞, we have T<∞ and(T-t)^ℓ-1 =c'(u_0,u̇_0) s^-1 (log s)^ℓ/(ℓ-1)[ 1+o_t→ T( 1)] =c”(u_0,u̇_0)λ(s)^ℓ-1/ℓ( log s) [ 1+o_t→ T( 1)].Therefore, we obtainλ(t)=c(u_0,u̇_0)(T-t)^ℓ/|log (T-t)|^ℓ/(ℓ-1)[1+o_t→ T(1)].The strong convergence (<ref>) follows as in <cit.>. § COERCIVE PROPERTIESWe recall that Φ_M=(Φ_M,0)^t, the orthogonality conditions (<ref>) are equivalent to⟨ε, H^iΦ_M ⟩ =⟨ε̇, H^iΦ_M ⟩ = 0,0≤ i ≤L-1/2.In this section, we claim that the above equivalent orthogonality conditions yield the coercive property of the higher-order energy ℰ_k+1ℰ_k+1 = ⟨ε_k+1, ε_k+1⟩ + ⟨ε̇_k, ε̇_k⟩, 1≤ k ≤ L.Our desired result is deduced from the coercivity of {‖ v_m ‖_L^2^2 }_m=1^L+1 under the following orthogonality conditions⟨ v, H^i Φ_M⟩ =0 , 0≤ i ≤⌊m-1/2⌋ .First, we restate Lemma B.5 of <cit.>, which established the coercivity of ‖ v_m ‖_L^2^2 when m is even. Let 0≤ k ≤L-1/2 and M=M(L)>0 be a large constant. Then there exists C(M)>0 such that the following holds true. Let v satisfies (denote v_-1=0)∫ |v_2k+2|^2 + ∫|v_2k+1|^2/y^2(1+y^2)+ ∑_i=0^k∫|v_2i-1|^2/y^6(1+|log y|^2)(1+y^4(k-i)) + |v_2i|^2/y^4(1+|log y|^2)(1+y^4(k-i))< ∞ and (<ref>) for m=2k+2. Then ∫ |v_2k+2|^2 ≥ C(M) {∫|v_2k+1|^2/y^2(1+|log y|^2) + ∑_i=0^k∫[|v_2i-1|^2/y^6(1+|log y|^2)(1+y^4(k-i)) + |v_2i|^2/y^4(1+|log y|^2)(1+y^4(k-i))] }. We additionally prove the coercivity of ‖ v_m ‖_L^2^2 when m is odd, which is an unnecessary step in <cit.>.Let 1≤ k ≤L-1/2 and M=M(L)>0 be a large constant. Then there exists C(M)>0 such that the following holds true. Let v satisfies (denote v_-1=0)∫ |v_2k+1|^2 + ∫|v_2k|^2/y^2 + ∫|v_2k-1|^2/y^4(1+|log y|^2)+ ∑_i=0^k-1∫|v_2i-1|^2/y^6(1+|log y|^2)(1+y^4(k-i)-2) + |v_2i|^2/y^4(1+|log y|^2)(1+y^4(k-i)-2)< ∞ and (<ref>) for m=2k+1. Then∫ |v_2k+1|^2 ≥ C(M) {∫|v_2k|^2/y^2(1+|log y|^2) +|v_2k-1|^2/y^4(1+|log y|^2) + ∑_i=0^k-1∫[|v_2i-1|^2/y^6(1+|log y|^2)(1+y^4(k-i)-2) + |v_2i|^2/y^4(1+|log y|^2)(1+y^4(k-i)-2)] }.The case k=0 is nothing but the coercivity of H, described in Lemma B.1 of <cit.>. 
In order to use induction on k, we have to prove the case k=1 first, and the case k → k+1.Based on the proof of Lemma B.5 of <cit.>, the latter case can be deduced from the following weighted coercive bounds: for n≥ 1 and radially symmetric u with ∫|u|^2/y^4(1+|log y|^2)(1+y^4n+2) + |Au|^2/y^6(1+|log y|^2)(1+y^4n-2)< ∞and⟨ u, Φ_M ⟩=0,we have∫|Hu|^2/y^4(1+|log y|^2)(1+y^4n-2)≥ C(M) {∫|u|^2/y^4(1+|log y|^2)(1+y^4n+2) + |Au|^2/y^6(1+|log y|^2)(1+y^4n-2)}.We can prove (<ref>) easily by imitating the proof of Lemma B.4 of <cit.>. We can also verify the case k=1 by deriving the weighted coercive bound similar to (<ref>) as∫|Hu|^2/y^2(1+|log y|^2)≥ C(M) {∫|u|^2/y^4(1+|log y|^2)(1+y^2) + |Au|^2/y^4(1+|log y|^2)}.We remark that (<ref>) required some cautious estimates in the region y≥ 1: roughly speaking, we have∫_y≥ 1|Hu|^2/y^2(1+|log y|^2) ≥∫_y≥ 1|∂_y (y ∂_y u)|^2/y^4(1+|log y|^2) - ∫_y≥ 1 |u|^2 Δ( V/y^4 (1+|log y|^2))+ ∫_y≥ 1V^2 |u|^2/ y^6 (1+|log y|^2) - C ∫_1≤ y ≤ 2 [|∂_y u|^2 + |u|^2]where V(y)=1-8y^2/(1+y^2)^2 is the potential part of H. In the proof of Lemma B.4 of <cit.>, the author used the sharp logarithmic Hardy inequality to prove∫_y≥ 1|∂_y (y ∂_y u)|^2/y^4k+4(1+|log y|^2) - ∫_y≥ 1 |u|^2 Δ( 1/y^4k+4 (1+|log y|^2)) ≥[(4k+4)^4/16 - (4k+4)^2] ∫_y≥ 1 |u|^2/ y^4k+6 (1+|log y|^2)- C ∫_1≤ y ≤ 2 [|∂_y u|^2 + |u|^2],which implies the desired result when k > 0. However, such estimate is not applicable in our case since (4k+4)^4/16=(4k+4)^2 for k=0. In this case, we employ the additional positive term in (<ref>) with the asymptotics of the potential V(y)=1+O(y^-2) for y≥ 1,∫_y≥ 1V^2 |u|^2/y^6(1+|log y|^2)≥ 1^- ∫_y≥ 1|u|^2/y^6(1+|log y|^2) - C ∫|u|^2/1+y^8.From the previous lemmas, we obtain the coercivity of ℰ_k+1.Let 1≤ k ≤ L and M=M(L)>0 be a large constant. Then there exists C(M)>0 such that ℰ_k+1= ⟨ε_k+1 , ε_k+1⟩ + ⟨ε̇_k , ε̇_k⟩ ≥ C(M)[∑_i=0^k∫|ε_i|^2/y^2(1+y^2(k-i))(1+|log y|^2)+ ∑_i=0^k-1∫|ε̇_i|^2/y^2(1+y^2(k-1-i))(1+|log y|^2)].The finiteness assumptions (<ref>), (<ref>) and (<ref>) for (<ref>) are satisfied from the well-localized smoothness of 1-corotational map (Φ, ∂_t Φ) (see Lemma A.1 in <cit.>). § INTERPOLATION ESTIMATES In this section, we provide some interpolation estimates for ε, i.e. the first coordinate part of ε. We will employ these bounds to deal with NL(ε) and L(ε) terms in the evolution equation of ε (<ref>).(ii) For y≤ 1, ε has a Taylor-Lagrange expansionε=∑_i=1^L+1/2 c_i T_L+1-2i + r_εwhere T_2i is the first coordinate part of T_2i and|c_i| ≲ C(M) √(ℰ_L+1), |∂_y^k r_ε| ≲ C(M) y^L-k|log y| √(ℰ_L+1),0≤ k ≤ L.(iii) For y≤ 1, ε satisfies the following pointwise bounds|ε_k|≲ C(M) y^1+k|log y| √(ℰ_L+1), 0≤ k ≤L-1, |ε_L|≲ C(M) √(ℰ_L+1),|∂_y^kε|≲ C(M) y^k+1|log y| √(ℰ_L+1), 0≤ k ≤ L.(iv) For 1≤ k ≤ L and 0≤ i ≤ k,∫1+|log y|^C/1+y^2(k-i+1)(|ε_i|^2 + |∂_y^i ε |^2) +‖∂_y^i ε/y^k-i‖^2_L^∞(y≥ 1) ≲ |log b_1|^C ℰ_k+1.It is provided from the proof of Lemma C.1 in <cit.>. § LEIBNIZ RULE FOR 𝒜^KUnlike <cit.>, we encounter some terms in which ∂_t is taken more than once in 𝒜_λ^k, such as ∂_tt(𝒜_λ^k), ∂_t(𝒜_λ^i)∂_t(H_λ^j), etc. To control those terms, we recall the following asymptotics∂_t(𝒜_λ^k)f_λ(r)= λ_t/λ^k+1∑_i=0^k-1Φ_i,k^(1)(y) f_i (y), |Φ_i,k^(1)(y)| ≲1/1+y^k+2-i,which was introduced in the Appendix D and E of <cit.>. We note that near the origin, Φ_i,k^(1) satisfies Φ_i,k^(1)(y) = ∑_p=0^N c_i,k,p y^2p + O(y^2N+2) k-iis even ∑_p=0^N c_i,k,p y^2p+1 + O(y^2N+3) k-iis odd. Based on the above facts, we can obtain the following lemma.Let 1≤ k≤ (L-1)/2. 
Then∂_tt(𝒜_λ^k)f_λ(r) =λ_tt/λ^k+1∑_i=0^k-1Φ_i,k^(1)(y) f_i (y)+ O(b_1^2)/λ^k+2∑_i=0^k-1Φ_i,k^(2)(y) f_i (y)∂_t (𝒜_λ^L-2k)∂_t (H_λ^k)f_λ(r)= O(b_1^2)/λ^L+2∑_i=0^L-1Φ_i,L^(3)(y) f_i(y)where|Φ_i,k^(2)(y)| ≲1/1+y^k+2-i,|Φ_i,L^(3)(y)| ≲1/1+y^L+3-i.Recall ∂_tt(𝒜_λ^k)f_λ = [∂_t, ∂_t(𝒜_λ^k)]f_λ andλ_t/λ^k+1Φ_i,k^(1)(y) f_i(y) = λ_t/λ^k+1-i (Φ_i,k^(1))_λ(r) 𝒜_λ^i f_λ(r),∂_t Φ_λ = -λ_t/λ (ΛΦ)_λ,we get (<ref>) since[∂_t, λ_t/λ^k+1-i (Φ_i,k^(1))_λ𝒜_λ^i ]f_λ = λ_tt/λ^k+1-i (Φ_i,k^(1))_λ𝒜_λ^i f_λ-(λ_t)^2/λ^k+2-i (Λ_i-kΦ_i,k^(1))_λ𝒜_λ^i f_λ + λ_t/λ^k+1-i (Φ_i,k^(1))_λ∂_t(𝒜_λ^i )f_λ= λ_tt/λ^k+1Φ_i,k^(1)(y) f_i (y) +O(b_1^2)/λ^k+2∑_j=0^iΦ_i,j,k(y) f_j(y)where|Φ_i,j,k(y)| ≲1/1+y^k+2-j.Moreover, we can easily check that Φ_i,k^(2) satisfies (<ref>) because the scaling generator Λ preserves the asymptotics near origin as well as infinity. To prove (<ref>), we need to justify the terms of the form 𝒜^i ∘Φ𝒜^j. When j is an even number, we can use the Leibniz rule from the Appendix D of <cit.>. However, when j is odd, terms such as A∘Φ A appear, making the problem a bit more tricky. Fortunately, our Φ from the terms of the form 𝒜^i ∘Φ𝒜^2j+1 have an expansionΦ(y)=∑_p=0^N c_p y^2p+1 + O(y^2N+3)near the origin since each Φ𝒜^2j+1 comes from ∂_t(H_λ^k) or ∂_tt(H_λ^k), satisfies (<ref>). Hence(A ∘Φ𝒜^2j+1)f=(AΦ) f_2j+1 - Φ∂_y f_2j+1=(-∂_y + 1+2Z/y)Φ· f_2j+1 - Φf_2j+2 =: Φ_1 f_2j+1 - Φ f_2j+2where Φ_1 satisfies Φ_1(y)=∑_p=0^N c_p y^2p + O(y^2N+2)near the origin. If we take A^* here,(H∘Φ𝒜^2j+1 )f = A^* (Φ_1 f_2j+1 - Φ f_2j+2) = (∂_y Φ_1) f_2j+1 + (Φ_1 -A^* Φ)f_2j+2 - Φ∂_y f_2j+2=(∂_y Φ_1) f_2j+1 + (Φ_1 -∂_y Φ - 1+2Z/yΦ) f_2j+2 + Φ f_2j+3,we can justify 𝒜^i ∘Φ𝒜^2j+1 by iterating above calculation.§ MONOTONICITY FOR THE INTERMEDIATE ENERGYLet 2≤ k ≤ L. We haved/dt{ℰ_k/λ^2k-2}≤b_1|log b_1|^C(k)/λ^2k-1(√(ℰ_k+1)+b_1^k+b_1^δ(k)+(k-1)c_1)√(ℰ_k)where C(k),δ(k)>0 are constants that depend only on k,L. We compute the energy identity:∂_t (ℰ_k/2λ^2(k-1)) =⟨∂_tw_k, w_k⟩ + ⟨∂_tẇ_k-1,ẇ_k-1⟩= ⟨∂_t (𝒜^k_λ) w, w_k ⟩ + ⟨∂_t (𝒜^k-1_λ) ẇ ,ẇ_k-1⟩+⟨𝒜^k_λℱ_1, w_k ⟩ + ⟨𝒜^k-1_λℱ_2,ẇ_k-1⟩.We can directly estimate (<ref>) by Lemma <ref>|⟨∂_t(𝒜_λ^k)w , w_k⟩|≲b_1/λ^2k-1∑_m=0^k-1 |⟨Φ_m,k^(1)ε_m, ε_k ⟩ | ≲b_1/λ^2k-1∑_m=0^k-1‖ε_m/1+y^k+2-m‖_L^2√(ℰ_k)≲b_1 C(M)/λ^2k-1√(ℰ_k+1ℰ_k), |⟨∂_t(𝒜_λ^k-1)ẇ , ẇ_k-1⟩|≲b_1/λ^2k-1∑_m=0^k-2 |⟨Φ_m,k-1^(1)ε̇_m, ε̇_k-1⟩ | ≲b_1 C(M)/λ^2k-1√(ℰ_k+1ℰ_k).Then we conclude (<ref>) from the following bounds:‖𝒜^kℱ‖_L^2 +‖𝒜^k-1ℱ̇‖_L^2 ≲ b_1 |log b_1|^C [ b_1^k + b_1^δ(k) + (k-1)c_1],(<ref>) is bounded byb_1|log b_1|^C/λ^2k-1(b_1^k+b_1^δ(k)+(k-1)c_1)√(ℰ_k).Now, it remains to prove (<ref>) and we address it by separating ℱ=(ℱ,ℱ̇)^t into four types, as we did for Step 5 in the proof of Proposition <ref>. (i) ψ̃_b terms. The contribution of ψ̃_b terms to the above inequalities is estimated from the global weighted bounds of Proposition <ref>.(ii) 𝐌𝐨𝐝(t) terms. Similar to (ii) of Step 5 in the proof of Proposition <ref> with the cancellation 𝒜^kT_i=0 for 1≤ i ≤ k and Lemma <ref>, we obtain ∫|∑_i=1^L b_i𝒜^k-i[ Λ_1-i(χ_B_1T_i)] +∑_i=2^L+2𝒜^k-i[ Λ_1-i(χ_B_1S_i) ] |^2 ≲ b_1^2 ∑_i=1^L∫|𝒜^k-i[ χ_B_1T_i+ χ_B_1∑_j=i+1^L+2∂S_j/∂ b_i] |^2 ≲ b_1^2(k-L)|log b_1|^2γ(L-k)+2Hence, Lemma <ref> and the bootstrap bound (<ref>) implies:‖𝒜^kMod(t) ‖_L^2+‖𝒜^k-1Ṁȯḋ(t) ‖_L^2 ≲b_1^k-L|log b_1|^γ(L-k)+1b_1^L+1/|log b_1|≲b_1^k+1|log b_1|^γ(L-k). (iii) NL(ε) term: We can utilize the bound (<ref>) near origin. 
For y≥ 1, we recall the calculation and estimates from (iii) of Step 5 in the proof of Proposition <ref>, ‖𝒜^k-1NL(ε) ‖_L^2(y≥ 1) is bounded by|log b_1|^C √(ℰ_I+1ℰ_J+1) + |log b_1|^C √(ℰ_X+1ℰ_Y+1ℰ_Z+1)where I,J,X,Y,Z ≥ 1, I+J=k and X+Y+Z=k. From the bootstrap bounds (<ref>), (<ref>) and the fact that c_1 >1, we obtain‖𝒜^k-1NL(ε) ‖_L^2(y≥ 1)≲ |log b_1|^C(K) b_1^k c_1≲ b_1^1+δ(k)+(k-1)c_1. (iv) L(ε) term: With some modifications (replace L to k-1, for instance), it is proved by (<ref>) and (<ref>). In step (iii) when k=L, we can avoid the case that either I=L-1 or J=L-1 by estimating ‖∂_y^L-1N_1(ε) ‖_L^2(y≥ 1) instead of ‖∂_y^L-1N_1(ε) ‖_L^∞(y≥ 1).Recall the modified higher order energiesℰ_ℓ:=⟨ε̂_ℓ, ε̂_ℓ⟩ + ⟨ε̇̂̇_ℓ-1, ε̇̂̇_ℓ-1⟩.We rewrite the flow (<ref>) component-wisely: for 1≤ k ≤ℓ,∂_t ŵ_k -ŵ̇_k = ∂_t(𝒜_λ^k)ŵ +𝒜_λ^kℱ_1∂_tŵ̇_k + ŵ_k+2 = ∂_t(𝒜_λ^k)ŵ̇ +𝒜_λ^kℱ_2 , [ ℱ_1; ℱ_2 ] := 1/λℱ_λ=1/λ[ℱ; ℱ̇ ]_λ .Let ℓ=L. Then we haved/dt{ℰ_L/λ^2L-2 + O(b_1^2L|log b_1|^2/λ^2L-2)}≤b_1^L+1|log b_1|^δ/λ^2L-1 ( b_1^L|log b_1|+√(ℰ_L))where 0<δ≪ 1 is a sufficient small constant that depend only on L. We compute the energy identity:∂_t (ℰ_L/2λ^2(L-1)) = ⟨∂_t (𝒜^L_λ) ŵ, ŵ_L ⟩ + ⟨∂_t (𝒜^L-1_λ) ŵ̇ ,ŵ̇_L-1⟩+⟨𝒜^L_λℱ_1, ŵ_L ⟩ + ⟨𝒜^L-1_λℱ_2,ŵ̇_L-1⟩.We can directly estimate (<ref>) from the bounds (<ref>), (<ref>) and the fact ε-ε̂=ζ_b, we obtain the bound| (<ref>) | ≲b_1 C(M)/λ^2L-1√(ℰ_L+1ℰ_L) +b_1^L+3|log b_1|^C/λ^2L-1√(ℰ_L) + b_1^2L+3|log b_1|^C/λ^2L-1.We can borrow step (ii), (iii) and (iv) in the proof of Proposition <ref> to estimate (<ref>) except ψ̂_b terms. Also by Proposition <ref>, all the inner products we have to deal with are:b_L⟨𝒜^L (χ_B_1-χ_B_0)T_L-1 , ε̂_L ⟩ , b_L⟨𝒜^L-1 (∂_sχ_B_0+b_1(yχ')_B_0)T_L , ε̇̂̇_L-1⟩ .From the fact ε̂ = ε and 𝒜^L-1 T_L-1= (-1)^L-1/2Λ Q, we obtain𝒜^L-1 (χ_B_1-χ_B_0)T_L-1 = (-1)^L-1/2(χ_B_1-χ_B_0)Λ Q + (1_y∼ B_1+ 1_y∼ B_0) O(y^-1|log y|).Hence, the bootstrap bound (<ref>) yields|⟨𝒜^L (χ_B_1-χ_B_0)T_L-1 , ε̂_L ⟩| = |⟨𝒜^L-1 (χ_B_1-χ_B_0)T_L-1 , ε̂_L+1⟩| ≤ |⟨y^-11_B_0 ≤ y ≤ 2B_1 + (1_y∼ B_1+ 1_y∼ B_0) y^-1|log y| ,ε_L+1⟩ |≤ (|log b_1|^1/2+|log b_1|)√(ℰ_L+1)≤ b_1^L+1 |log b_1|^δ.Note that ε̇̂̇= ε̇ + b_L (χ_B_1-χ_B_0)T_L. The asymptotics (<ref>) implies|⟨𝒜^L-1 (∂_sχ_B_0+b_1(yχ')_B_0)T_L , ε̇_L-1⟩|≤b_1|⟨𝒜^L-2 (1_y∼ B_0y^L-2|log y| ), ε̇_L⟩|≤ |log b_1|√(ℰ_L+1)≤ b_1^L+1 |log b_1|^δ.To estimate the last inner product, we employ the sharp asymptoticsb_1 (yχ')_B_0= -c_1 ∂_sχ_B_0 + O( b_1 1_y∼ B_0/|log b_1|)from the fact (b_1)_s =b_2 + O(b_1^2/|log b_1|). Using the cancellation 𝒜^L T_L =0 and χ_B_1=1 on y ∼ B_0, the remaining inner product can be written as1/L-1 b_L^2 ⟨𝒜^L-1∂_s(χ_B_0T_L) ,𝒜^L-1(χ_B_0T_L) ⟩ + O ( b_1^2L+1/|log b_1|‖𝒜^L-1 (1_y∼ B_0T_L) ‖_L^2^2).We can easily check that the second term in (<ref>) is bounded by b_1^2L+1|log b_1|. For the first term in (<ref>), we use integration by parts in time to find out the correction for ℰ_L:b_L^2/λ^2L-1⟨𝒜^L-1∂_s(χ_B_0T_L) ,𝒜^L-1(χ_B_0T_L) ⟩= b_L^2/2λ^2L-1∂_s ⟨𝒜^L-1 (χ_B_0T_L) ,𝒜^L-1(χ_B_0T_L) ⟩=b_L^2/2λ^2L-2∂_t‖𝒜^L-1 (χ_B_0T_L)‖_L^2^2,by Lemma (<ref>), we conclude (<ref>):b_L^2/2λ^2L-2∂_t‖𝒜^L-1 (χ_B_0T_L)‖_L^2^2-∂_t ( b_L^2/2λ^2L-2‖𝒜^L-1 (χ_B_0T_L)‖_L^2^2 )= - ∂_t ( b_L^2/2λ^2L-2)‖𝒜^L-1 (χ_B_0T_L)‖_L^2^2 = ( (L-1)b_L^2 λ_t/λ^2L-1 -b_L (b_L)_t/λ^2L-2)‖𝒜^L-1 (χ_B_0T_L)‖_L^2^2 =-b_L/λ^2L-1((b_L)_s + (L-1 )b_1b_L )O(|log b_1|^2) = O(b_1^2L+1/λ^2L-1|log b_1|).Let ℓ=L-1. 
Then we haved/dt{ℰ_L-1/λ^2L-4 + O(b_1^2L-2|log b_1|^2/λ^2L-4)}≤b_1^L|log b_1|^δ/λ^2L-3 ( b_1^L-1|log b_1|+√(ℰ_L-1))where 0<δ≪ 1 is a sufficient small constant that depend only on L. Based on the proof of Proposition <ref> with Proposition <ref>, all the inner products we have to deal with are:b_L⟨𝒜^L-1 (χ_B_1-χ_B_0)T_L-1 , ε̂_L-1⟩ , b_L-1⟨𝒜^L-1(∂_s χ_B_0 + b_1(yχ')_B_0) T_L-1 , ε̂_L-1⟩b_L-1⟨𝒜^L-2H (χ_B_1-χ_B_0)T_L , ε̇̂̇_L-2 .⟩, b_L⟨𝒜^L-2 (∂_sχ_B_0+b_1(yχ')_B_0)T_L , ε̇̂̇_L-2⟩.By additionally considering ε̂= ε + b_L-1 (χ_B_1-χ_B_0)T_L-1, we can estimate the above inner products similarly to (<ref>) due to the derivative gain 𝒜^L-2H = 𝒜^L and the logarithmic gain |log b_1|^-β from the bootstrap bound (<ref>) for b_L when ℓ=L-1. The exact correction term is given by-∂_t ( b_L-1^2/2(L-2)λ^2L-4‖𝒜^L-1 (χ_B_0T_L-1)‖_L^2^2 ). abbrv
http://arxiv.org/abs/2312.16452v1
{ "authors": [ "Uihyeon Jeong" ], "categories": [ "math.AP", "35B44, 35L05" ], "primary_category": "math.AP", "published": "20231227073649", "title": "Quantized slow blow-up dynamics for the energy-critical corotational wave maps problem" }
§ INTRODUCTION

The connection between the thermal origin of dark matter (DM) and its impact on searches today has become an interesting topic to explore. In particular, when the yield of one DM candidate presents a bouncing effect before it freezes out, i.e., a transient period of exponential growth, this can leave intriguing imprints on present-day observations <cit.>. In this work, we study a thermal multicomponent DM scenario consisting of the introduction of two gauge singlets: a fermion and a complex scalar field. After spontaneous and explicit symmetry breaking, the low energy theory gives rise to a second Higgs boson, a stable fermion and, in a certain parameter space of the model, a stable pseudo-Nambu-Goldstone boson (pNGB). Under certain conditions, it is the pNGB which presents this intriguing bouncing effect, due to the presence of the singlet fermion. We explore the impact of this thermal effect on today's DM observables such as indirect detection (ID).

§ MODEL

Aside from the SM particle content, we add two SM singlets, a Dirac fermion ψ and a complex scalar S, both transforming under an approximate chiral global symmetry U(1)_V× U(1)_A:

U(1)_V : S → S , ψ_L→ e^iβ_V/2ψ_L , ψ_R→ e^iβ_V/2ψ_R,
U(1)_A : S → e^iβ_A S , ψ_L→ e^iβ_A/2ψ_L , ψ_R→ e^-iβ_A/2ψ_R ,

with β_V,A arbitrary constants. This symmetry and particle content give rise to the Lagrangian:

ℒ_BSM = ψ̅i∂ψ + (∂_μ S)^†∂^μS - g_ψψ̅_Lψ_R S - g_ψ^*ψ̅_Rψ_L S^† - V(H , S),

with the potential given by

V(H , S) = -μ_H^2/2 |H|^2 - μ_S^2/2 | S |^2 + λ_H/2 | H |^4 + λ_S/2 | S |^4 + λ_HS| H |^2 | S |^2 + V_soft,

where H is the Higgs field and μ_S^2>0. The U(1)_A soft breaking term is given by

V_soft = -m_χ^2/2 (S^2 + S^*2).

A Z_2 subgroup of U(1)_A remains unbroken (i.e. β_A = π), in such a way that only soft-breaking terms with even powers of S are allowed. After the SSB of both S and H, and considering S = (v_s + s + iχ)/√(2), the low energy Lagrangian is given by

ℒ⊃ - g_ψ/√(2) ψ̅ψ(-h_1 sinθ + h_2 cosθ) - g_ψ/√(2) iψ̅γ^5ψχ - V(h_1 , h_2 , χ),

with (h_1,h_2) the physical Higgs bosons, related to the original ones through h = cosθ h_1 + sinθ h_2 and s = -sinθ h_1 + cosθ h_2, with θ a mixing angle which satisfies

tan 2θ = 2λ_HS v_h v_s/(λ_S v_s^2 - λ_H v_h^2),

with v_h = 246 GeV. The potential V(h_1 , h_2 , χ) is shown explicitly in <cit.>. We identify h_1 with the 125 GeV Higgs boson. We choose the free parameters of the model as {m_ψ , m_χ, m_h_2 , g_ψ , θ}. In the case in which m_χ < 2m_ψ, the model presents two DM candidates: ψ and χ.

§ RELIC ABUNDANCES AND CROSS SECTIONS

We assume that in the early Universe both DM candidates were in thermal equilibrium with the SM particles. We solve the evolution of the individual singlet abundances Y_i ≡ n_i/s, with i=ψ , χ, as a function of the temperature variable x ≡ μ/T, with μ = m_ψ m_χ /(m_ψ + m_χ), using the code of <cit.>. We distinguish two hierarchies between the DM particles which can make the yield behavior quite different, particularly for χ: (i) m_ψ > m_χ, which we call the normal hierarchy, and (ii) m_ψ < m_χ, the inverse hierarchy. In the former case, the freeze-out of the DM particles follows the standard freeze-out of two interacting DM particles, each one decoupling from the SM plasma at x ≈ 15 - 20 (see Fig. <ref>(left)).
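To fix ideas, the standard freeze-out behavior quoted above can be reproduced with a few lines of Python. The sketch below is ours and purely schematic: it assumes Maxwell-Boltzmann statistics, a constant g_* = g_*s = 90, two internal degrees of freedom and a constant ⟨σv⟩ ≈ 3×10^-26 cm^3 s^-1; the results of this work come instead from the full coupled system solved with the code cited above.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import kn

m, g = 500.0, 2.0                 # DM mass in GeV and internal d.o.f. (assumed values)
gstar = 90.0                      # effective relativistic d.o.f., held constant
Mpl = 1.22e19                     # Planck mass in GeV
sigmav = 2.6e-9                   # <sigma v> in GeV^-2, roughly 3e-26 cm^3/s
lam = np.sqrt(np.pi / 45.0) * np.sqrt(gstar) * m * Mpl * sigmav

def Yeq(x):                       # Maxwell-Boltzmann equilibrium yield, x = m/T
    return 45.0 * g / (4.0 * np.pi**4 * gstar) * x**2 * kn(2, x)

def rhs(x, u):                    # u = ln Y keeps the stiff problem well-scaled
    Y = np.exp(u[0])
    return [-(lam / x**2) * (Y - Yeq(x)**2 / Y)]

sol = solve_ivp(rhs, (5.0, 400.0), [np.log(Yeq(5.0))], method="LSODA",
                rtol=1e-8, atol=1e-12, dense_output=True)
for x in (10, 15, 20, 30, 100):
    print(f"x = {x:4d}   Y/Yeq = {np.exp(sol.sol(x)[0]) / max(Yeq(x), 1e-300):.3g}")
print("Omega h^2 ~", 2.744e8 * m * np.exp(sol.y[0, -1]))   # O(0.1) for these toy inputs

The printed ratio Y/Y_eq stays near one until x ≈ 15 - 20 and then grows rapidly, reproducing the decoupling epoch mentioned above.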
On the other hand, in the inverse hierarchy at high temperatures, with ψ and χ in thermal equilibrium with the SM plasma, both particles have vanishing chemical potentials, μ_ψ = μ_χ = 0. Since m_χ > m_ψ, the fermion is less Boltzmann suppressed than χ, and assuming that ψ does not rise in abundance and that they keep the same chemical potential, one has that n_χ = (n_χ,e/n_ψ,e) n_ψ ∼ e^-(m_χ - m_ψ)/T n_ψ, i.e., the abundance of χ decreases as T decreases. After chemical decoupling from the SM sector, the DM particles develop chemical potentials, n_i ≈ n_i,e e^μ_i/T, in such a way that the yield of χ may increase for some time.

After the dark particles decouple chemically from the SM thermal bath, the processes sustaining chemical equilibrium within the dark sector are the semi-annihilations ψψ̅↔χ h_i, with i=1,2, which enforce μ_χ≈ 2μ_ψ. The resulting effect is shown in Fig. <ref> (middle) for a specific choice of parameters, where the rise of Y_χ takes place over a finite interval of temperature. The size of the yield increment is model-dependent and, as can be noted in Fig. <ref> (middle), the height of the bouncing depends on m_h_2. In Fig. <ref> (right), we show the non-zero values taken by the chemical potentials of the two stable particles for each value of m_h_2 as a function of x, fulfilling μ_χ≈ 2μ_ψ.

§.§ Cross sections

In order to quantify the cross sections at low velocities relevant for ID, we run two random scans, one considering m_χ = 300 GeV and the other m_χ = 700 GeV. In both scans we have considered m_ψ = 500 GeV, m_h_2 = [50, 2000] GeV, g_ψ = [0.1, 10] and tanθ = [10^-2, 10^1]. We have selected all the points which match the observed relic abundance. As shown in the first row of Fig. <ref>, we have projected the points onto different planes, with the color of each point indicating the corresponding value of tanθ. As expected in the normal hierarchy m_ψ > m_χ, the first two plots corroborate the fact that the weighted cross sections never surpass the thermal cross section reference (pink regions). The two plots on the right of the first row show the corresponding values of the couplings of the model.

In the lower row of Fig. <ref>, we project the random scan for the inverse hierarchy m_ψ < m_χ. Contrary to the previous case, here a number of points surpass the thermal reference value in the first two plots, although with clear differences depending on the value acquired by θ. In conclusion, the necessary condition for the existence of the bouncing is m_χ > m_ψ, with some weighted cross sections at low velocities surpassing the canonical thermal cross section value, which is relevant for ID observables.

§ PHENOMENOLOGY

§.§ Constraints

The relic abundance measured today is given by the most updated Planck result <cit.>, Ω_c h^2 = 0.12. In our scenario we set Ω_c h^2 = Ω_ψ h^2 + Ω_χ h^2. The direct detection (DD) spin-independent cross section for the pNGB DM candidate vanishes due to its Goldstone nature <cit.>. On the contrary, ψ is subject to sizable constraints arising from scattering through t-channel exchange of h_1 and h_2. We take bounds from the LUX-ZEPLIN (LZ) experiment <cit.>. In addition, a second Higgs is constrained through two parameters, (m_h_2, θ). Collider searches set θ≲ 0.3 for m_h_2 > 100 GeV <cit.>. In what we present in these proceedings, Higgs-to-invisible constraints do not play any role.
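Before turning to the results, the bouncing mechanism described at the beginning of this section can be made quantitative in a back-of-the-envelope way (the estimate is ours). Once ψ is frozen out and only the semi-annihilations are active, μ_χ≈ 2μ_ψ translates into Y_χ ≈ Y_χ^eq (Y_ψ/Y_ψ^eq)^2, so that ln Y_χ grows like (2m_ψ - m_χ)/T as T drops; within this toy relation, a bounce occurs precisely when m_ψ < m_χ < 2m_ψ. The snippet below evaluates it for illustrative masses and a fictitious freeze-out point x_ψ ≈ 20.

import numpy as np
from scipy.special import kn

def Yeq(x, g=1.0, gstar_s=90.0):   # Maxwell-Boltzmann equilibrium yield, x = m/T
    return 45.0 * g / (4.0 * np.pi**4 * gstar_s) * x**2 * kn(2, x)

m_psi, m_chi = 500.0, 800.0        # GeV; inverse hierarchy with m_psi < m_chi < 2 m_psi
Y_psi = Yeq(20.0)                  # pretend psi froze out at m_psi/T ~ 20, i.e. T = 25 GeV
for T in (25.0, 20.0, 15.0, 12.0, 10.0):
    Y_chi = Yeq(m_chi / T) * (Y_psi / Yeq(m_psi / T))**2   # mu_chi ~ 2 mu_psi
    print(f"T = {T:5.1f} GeV   Y_chi = {Y_chi:.3e}")       # rises as T drops: the bounce

For m_χ > 2m_ψ the same relation would make Y_χ fall instead (and in the model χ would no longer be stable in that regime).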
§.§ Results

[In the full version of this work <cit.>, we explore the viability of having pNGB DM below 50 GeV, finding parameter regions in which this possibility is achieved, contrary to the model in which the fermion is absent at low energies <cit.>.]

Considering that the two-DM scenario presents sizable average annihilation cross sections at low temperatures, we focus on the (semi)annihilation processes ψψ̅→χ h_i and χχ→ XX, with X an SM state, relevant for ID. The partial cross sections turn out to be highly dependent on the parameters of the model. As we exemplify in the upper row of Fig. <ref>, considering m_ψ = 500 GeV, m_h_2 = 600 GeV and g_ψ taking the appropriate value to match the correct relic abundance, the cross sections not only vary by orders of magnitude depending on m_χ, but as θ decreases, the parameter space available to obtain the correct relic abundance shrinks, allowing only m_χ≈ m_h_2/2; otherwise an overabundance is obtained. In this way, in the normal hierarchy and for small mixing angles it is possible to obtain sizable cross sections, as shown by the orange solid line and the dashed curves in the left plot of the upper row of Fig. <ref>, but only in a reduced parameter space. On the contrary, higher tanθ values, e.g. tanθ=0.1, imply less suppression for (semi)annihilation processes into SM states including h_1 in the final state, presenting strong cross sections especially in the case m_χ > m_ψ, where the bouncing effect is present. This can be seen in the third plot of the first row of Fig. <ref>, showing a wider allowed range of m_χ. For completeness, we also present the case with tanθ = 10^-2.

We confront the resulting zero-velocity relic-weighted cross sections with LZ for points fulfilling the correct relic abundance. In Fig. <ref> (below), we show the results as a function of the singlet-doublet mixing angle assuming m_ψ = 500 GeV, m_χ = 800 GeV, and m_h_2 = (130, 300, 600) GeV (from left to right, respectively). LZ data rule out the shaded region in each plot. In the left plot of Fig. <ref> (below), LZ bounds are relaxed due to the algebraic cancellation between the nearly degenerate h_1 and h_2 contributions. As m_h_2 deviates away from m_h_1, LZ bounds become noticeable and strong, as shown by the middle and right plots, even for θ≪ 1. In this way, all the cross sections with values above the thermal value obtained in the case m_ψ < m_h_2 < m_χ are disfavored by LZ, although the model still presents sizable ID signatures to be tested by future experiments.

§ CONCLUSIONS

We have studied a simple extension of the SM presenting two DM candidates simultaneously. We have found a peculiar yield behavior for the pNGB when it is heavier than the fermion singlet: it bounces. This model is one of the first scenarios in which the bouncing effect is exemplified in detail. We have explored the zero-velocity average annihilation cross sections relevant for ID, finding parameter space regions in which both the fermion semi-annihilation and the pNGB annihilation today present values above the canonical thermal value. DD and collider constraints force the mixing angle θ to be small, e.g. θ≲ 0.1, in such a way that the strongest signals relevant for ID become disfavored in this scenario. However, the model still presents testable ID signals in the ballpark of, for instance, CTA.

§ ACKNOWLEDGMENTS

B.D.S. has been funded by ANID (ex CONICYT) Grant No. 74200120. B.D.S. also wants to thank DESY and the Cluster of Excellence Quantum Universe.
By analyzing (27.12±0.14)×10^8 ψ(3686) events collected with the BESIII detector operating at the BEPCII collider, the decay processes χ_cJ→ 3(K^+K^-) (J=0,1,2) are observed for the first time with statistical significances of 8.2σ, 8.1σ, and 12.4σ, respectively. The product branching fractions of ψ(3686)→γχ_cJ, χ_cJ→ 3(K^+K^-) are presented and the branching fractions of χ_cJ→ 3(K^+K^-) decays are determined to be ℬ_χ_c0→ 3(K^+K^-)=(10.7±1.8±1.1)×10^-6, ℬ_χ_c1→ 3(K^+K^-)=(4.2±0.9±0.5)×10^-6, and ℬ_χ_c2→ 3(K^+K^-)=(7.2±1.1±0.8)×10^-6, where the first uncertainties are statistical and the second are systematic.Observation of χ_cJ→ 3(K^+K^-)M. Ablikim^1, M. N. Achasov^4,c, P. Adlarson^75, O. Afedulidis^3, X. C. Ai^80, R. Aliberti^35, A. Amoroso^74A,74C, Q. An^71,58,a, Y. Bai^57, O. Bakina^36, I. Balossino^29A, Y. Ban^46,h, H.-R. Bao^63, V. Batozskaya^1,44, K. Begzsuren^32, N. Berger^35, M. Berlowski^44, M. Bertani^28A, D. Bettoni^29A, F. Bianchi^74A,74C, E. Bianco^74A,74C, A. Bortone^74A,74C, I. Boyko^36, R. A. Briere^5, A. Brueggemann^68, H. Cai^76, X. Cai^1,58, A. Calcaterra^28A, G. F. Cao^1,63, N. Cao^1,63, S. A. Cetin^62A, J. F. Chang^1,58, G. R. Che^43, G. Chelkov^36,b, C. Chen^43, C. H. Chen^9, Chao Chen^55, G. Chen^1, H. S. Chen^1,63, H. Y. Chen^20, M. L. Chen^1,58,63, S. J. Chen^42, S. L. Chen^45, S. M. Chen^61, T. Chen^1,63, X. R. Chen^31,63, X. T. Chen^1,63, Y. B. Chen^1,58, Y. Q. Chen^34, Z. J. Chen^25,i, Z. Y. Chen^1,63, S. K. Choi^10A, G. Cibinetto^29A, F. Cossio^74C, J. J. Cui^50, H. L. Dai^1,58, J. P. Dai^78, A. Dbeyssi^18, R.  E. de Boer^3, D. Dedovich^36, C. Q. Deng^72, Z. Y. Deng^1, A. Denig^35, I. Denysenko^36, M. Destefanis^74A,74C, F. De Mori^74A,74C, B. Ding^66,1, X. X. Ding^46,h, Y. Ding^34, Y. Ding^40, J. Dong^1,58, L. Y. Dong^1,63, M. Y. Dong^1,58,63, X. Dong^76, M. C. Du^1, S. X. Du^80, Z. H. Duan^42, P. Egorov^36,b, Y. H. Fan^45, J. Fang^59, J. Fang^1,58, S. S. Fang^1,63, W. X. Fang^1, Y. Fang^1, Y. Q. Fang^1,58, R. Farinelli^29A, L. Fava^74B,74C, F. Feldbauer^3, G. Felici^28A, C. Q. Feng^71,58, J. H. Feng^59, Y. T. Feng^71,58, M. Fritsch^3, C. D. Fu^1, J. L. Fu^63, Y. W. Fu^1,63, H. Gao^63, X. B. Gao^41, Y. N. Gao^46,h, Yang Gao^71,58, S. Garbolino^74C, I. Garzia^29A,29B, L. Ge^80, P. T. Ge^76, Z. W. Ge^42, C. Geng^59, E. M. Gersabeck^67, A. Gilman^69, K. Goetzen^13, L. Gong^40, W. X. Gong^1,58, W. Gradl^35, S. Gramigna^29A,29B, M. Greco^74A,74C, M. H. Gu^1,58, Y. T. Gu^15, C. Y. Guan^1,63, Z. L. Guan^22, A. Q. Guo^31,63, L. B. Guo^41, M. J. Guo^50, R. P. Guo^49, Y. P. Guo^12,g, A. Guskov^36,b, J. Gutierrez^27, K. L. Han^63, T. T. Han^1, X. Q. Hao^19, F. A. Harris^65, K. K. He^55, K. L. He^1,63, F. H. Heinsius^3, C. H. Heinz^35, Y. K. Heng^1,58,63, C. Herold^60, T. Holtmann^3, P. C. Hong^34, G. Y. Hou^1,63, X. T. Hou^1,63, Y. R. Hou^63, Z. L. Hou^1, B. Y. Hu^59, H. M. Hu^1,63, J. F. Hu^56,j, S. L. Hu^12,g, T. Hu^1,58,63, Y. Hu^1, G. S. Huang^71,58, K. X. Huang^59, L. Q. Huang^31,63, X. T. Huang^50, Y. P. Huang^1, T. Hussain^73, F. Hölzken^3, N Hüsken^27,35, N. in der Wiesche^68, J. Jackson^27, S. Janchiv^32, J. H. Jeong^10A, Q. Ji^1, Q. P. Ji^19, W. Ji^1,63, X. B. Ji^1,63, X. L. Ji^1,58, Y. Y. Ji^50, X. Q. Jia^50, Z. K. Jia^71,58, D. Jiang^1,63, H. B. Jiang^76, P. C. Jiang^46,h, S. S. Jiang^39, T. J. Jiang^16, X. S. Jiang^1,58,63, Y. Jiang^63, J. B. Jiao^50, J. K. Jiao^34, Z. Jiao^23, S. Jin^42, Y. Jin^66, M. Q. Jing^1,63, X. M. Jing^63, T. Johansson^75, S. Kabana^33, N. Kalantar-Nayestanaki^64, X. L. Kang^9, X. S. Kang^40, M. Kavatsyuk^64, B. C. Ke^80, V. 
Khachatryan^27, A. Khoukaz^68, R. Kiuchi^1, O. B. Kolcu^62A, B. Kopf^3, M. Kuessner^3, X. Kui^1,63, N.  Kumar^26, A. Kupsc^44,75, W. Kühn^37, J. J. Lane^67, P.  Larin^18, L. Lavezzi^74A,74C, T. T. Lei^71,58, Z. H. Lei^71,58, M. Lellmann^35, T. Lenz^35, C. Li^43, C. Li^47, C. H. Li^39, Cheng Li^71,58, D. M. Li^80, F. Li^1,58, G. Li^1, H. B. Li^1,63, H. J. Li^19, H. N. Li^56,j, Hui Li^43, J. R. Li^61, J. S. Li^59, Ke Li^1, L. J Li^1,63, L. K. Li^1, Lei Li^48, M. H. Li^43, P. R. Li^38,l, Q. M. Li^1,63, Q. X. Li^50, R. Li^17,31, S. X. Li^12, T.  Li^50, W. D. Li^1,63, W. G. Li^1,a, X. Li^1,63, X. H. Li^71,58, X. L. Li^50, X. Z. Li^59, Xiaoyu Li^1,63, Y. G. Li^46,h, Z. J. Li^59, Z. X. Li^15, C. Liang^42, H. Liang^71,58, H. Liang^1,63, Y. F. Liang^54, Y. T. Liang^31,63, G. R. Liao^14, L. Z. Liao^50, J. Libby^26, A.  Limphirat^60, C. C. Lin^55, D. X. Lin^31,63, T. Lin^1, B. J. Liu^1, B. X. Liu^76, C. Liu^34, C. X. Liu^1, F. H. Liu^53, Fang Liu^1, Feng Liu^6, G. M. Liu^56,j, H. Liu^38,k,l, H. B. Liu^15, H. M. Liu^1,63, Huanhuan Liu^1, Huihui Liu^21, J. B. Liu^71,58, J. Y. Liu^1,63, K. Liu^38,k,l, K. Y. Liu^40, Ke Liu^22, L. Liu^71,58, L. C. Liu^43, Lu Liu^43, M. H. Liu^12,g, P. L. Liu^1, Q. Liu^63, S. B. Liu^71,58, T. Liu^12,g, W. K. Liu^43, W. M. Liu^71,58, X. Liu^38,k,l, X. Liu^39, Y. Liu^80, Y. Liu^38,k,l, Y. B. Liu^43, Z. A. Liu^1,58,63, Z. D. Liu^9, Z. Q. Liu^50, X. C. Lou^1,58,63, F. X. Lu^59, H. J. Lu^23, J. G. Lu^1,58, X. L. Lu^1, Y. Lu^7, Y. P. Lu^1,58, Z. H. Lu^1,63, C. L. Luo^41, M. X. Luo^79, T. Luo^12,g, X. L. Luo^1,58, X. R. Lyu^63, Y. F. Lyu^43, F. C. Ma^40, H. Ma^78, H. L. Ma^1, J. L. Ma^1,63, L. L. Ma^50, M. M. Ma^1,63, Q. M. Ma^1, R. Q. Ma^1,63, X. T. Ma^1,63, X. Y. Ma^1,58, Y. Ma^46,h, Y. M. Ma^31, F. E. Maas^18, M. Maggiora^74A,74C, S. Malde^69, Y. J. Mao^46,h, Z. P. Mao^1, S. Marcello^74A,74C, Z. X. Meng^66, J. G. Messchendorp^13,64, G. Mezzadri^29A, H. Miao^1,63, T. J. Min^42, R. E. Mitchell^27, X. H. Mo^1,58,63, B. Moses^27, N. Yu. Muchnoi^4,c, J. Muskalla^35, Y. Nefedov^36, F. Nerling^18,e, L. S. Nie^20, I. B. Nikolaev^4,c, Z. Ning^1,58, S. Nisar^11,m, Q. L. Niu^38,k,l, W. D. Niu^55, Y. Niu ^50, S. L. Olsen^63, Q. Ouyang^1,58,63, S. Pacetti^28B,28C, X. Pan^55, Y. Pan^57, A.  Pathak^34, P. Patteri^28A, Y. P. Pei^71,58, M. Pelizaeus^3, H. P. Peng^71,58, Y. Y. Peng^38,k,l, K. Peters^13,e, J. L. Ping^41, R. G. Ping^1,63, S. Plura^35, V. Prasad^33, F. Z. Qi^1, H. Qi^71,58, H. R. Qi^61, M. Qi^42, T. Y. Qi^12,g, S. Qian^1,58, W. B. Qian^63, C. F. Qiao^63, X. K. Qiao^80, J. J. Qin^72, L. Q. Qin^14, L. Y. Qin^71,58, X. S. Qin^50, Z. H. Qin^1,58, J. F. Qiu^1, Z. H. Qu^72, C. F. Redmer^35, K. J. Ren^39, A. Rivetti^74C, M. Rolo^74C, G. Rong^1,63, Ch. Rosner^18, S. N. Ruan^43, N. Salone^44, A. Sarantsev^36,d, Y. Schelhaas^35, K. Schoenning^75, M. Scodeggio^29A, K. Y. Shan^12,g, W. Shan^24, X. Y. Shan^71,58, Z. J Shang^38,k,l, J. F. Shangguan^55, L. G. Shao^1,63, M. Shao^71,58, C. P. Shen^12,g, H. F. Shen^1,8, W. H. Shen^63, X. Y. Shen^1,63, B. A. Shi^63, H. Shi^71,58, H. C. Shi^71,58, J. L. Shi^12,g, J. Y. Shi^1, Q. Q. Shi^55, S. Y. Shi^72, X. Shi^1,58, J. J. Song^19, T. Z. Song^59, W. M. Song^34,1, Y.  J. Song^12,g, Y. X. Song^46,h,n, S. Sosio^74A,74C, S. Spataro^74A,74C, F. Stieler^35, Y. J. Su^63, G. B. Sun^76, G. X. Sun^1, H. Sun^63, H. K. Sun^1, J. F. Sun^19, K. Sun^61, L. Sun^76, S. S. Sun^1,63, T. Sun^51,f, W. Y. Sun^34, Y. Sun^9, Y. J. Sun^71,58, Y. Z. Sun^1, Z. Q. Sun^1,63, Z. T. Sun^50, C. J. Tang^54, G. Y. Tang^1, J. Tang^59, Y. A. Tang^76, L. Y. Tao^72, Q. T. Tao^25,i, M. 
Tat^69, J. X. Teng^71,58, V. Thoren^75, W. H. Tian^59, Y. Tian^31,63, Z. F. Tian^76, I. Uman^62B, Y. Wan^55,S. J. Wang ^50, B. Wang^1, B. L. Wang^63, Bo Wang^71,58, D. Y. Wang^46,h, F. Wang^72, H. J. Wang^38,k,l, J. J. Wang^76, J. P. Wang ^50, K. Wang^1,58, L. L. Wang^1, M. Wang^50, Meng Wang^1,63, N. Y. Wang^63, S. Wang^12,g, S. Wang^38,k,l, T.  Wang^12,g, T. J. Wang^43, W.  Wang^72, W. Wang^59, W. P. Wang^35,71,o, X. Wang^46,h, X. F. Wang^38,k,l, X. J. Wang^39, X. L. Wang^12,g, X. N. Wang^1, Y. Wang^61, Y. D. Wang^45, Y. F. Wang^1,58,63, Y. L. Wang^19, Y. N. Wang^45, Y. Q. Wang^1, Yaqian Wang^17, Yi Wang^61, Z. Wang^1,58, Z. L.  Wang^72, Z. Y. Wang^1,63, Ziyi Wang^63, D. H. Wei^14, F. Weidner^68, S. P. Wen^1, Y. R. Wen^39, U. Wiedner^3, G. Wilkinson^69, M. Wolke^75, L. Wollenberg^3, C. Wu^39, J. F. Wu^1,8, L. H. Wu^1, L. J. Wu^1,63, X. Wu^12,g, X. H. Wu^34, Y. Wu^71,58, Y. H. Wu^55, Y. J. Wu^31, Z. Wu^1,58, L. Xia^71,58, X. M. Xian^39, B. H. Xiang^1,63, T. Xiang^46,h, D. Xiao^38,k,l, G. Y. Xiao^42, S. Y. Xiao^1, Y.  L. Xiao^12,g, Z. J. Xiao^41, C. Xie^42, X. H. Xie^46,h, Y. Xie^50, Y. G. Xie^1,58, Y. H. Xie^6, Z. P. Xie^71,58, T. Y. Xing^1,63, C. F. Xu^1,63, C. J. Xu^59, G. F. Xu^1, H. Y. Xu^66, M. Xu^71,58, Q. J. Xu^16, Q. N. Xu^30, W. Xu^1, W. L. Xu^66, X. P. Xu^55, Y. C. Xu^77, Z. P. Xu^42, Z. S. Xu^63, F. Yan^12,g, L. Yan^12,g, W. B. Yan^71,58, W. C. Yan^80, X. Q. Yan^1, H. J. Yang^51,f, H. L. Yang^34, H. X. Yang^1, Tao Yang^1, Y. Yang^12,g, Y. F. Yang^43, Y. X. Yang^1,63, Yifan Yang^1,63, Z. W. Yang^38,k,l, Z. P. Yao^50, M. Ye^1,58, M. H. Ye^8, J. H. Yin^1, Z. Y. You^59, B. X. Yu^1,58,63, C. X. Yu^43, G. Yu^1,63, J. S. Yu^25,i, T. Yu^72, X. D. Yu^46,h, Y. C. Yu^80, C. Z. Yuan^1,63, J. Yuan^34, L. Yuan^2, S. C. Yuan^1, Y. Yuan^1,63, Y. J. Yuan^45, Z. Y. Yuan^59, C. X. Yue^39, A. A. Zafar^73, F. R. Zeng^50, S. H.  Zeng^72, X. Zeng^12,g, Y. Zeng^25,i, Y. J. Zeng^59, X. Y. Zhai^34, Y. C. Zhai^50, Y. H. Zhan^59, A. Q. Zhang^1,63, B. L. Zhang^1,63, B. X. Zhang^1, D. H. Zhang^43, G. Y. Zhang^19, H. Zhang^80, H. Zhang^71,58, H. C. Zhang^1,58,63, H. H. Zhang^34, H. H. Zhang^59, H. Q. Zhang^1,58,63, H. R. Zhang^71,58, H. Y. Zhang^1,58, J. Zhang^80, J. Zhang^59, J. J. Zhang^52, J. L. Zhang^20, J. Q. Zhang^41, J. S. Zhang^12,g, J. W. Zhang^1,58,63, J. X. Zhang^38,k,l, J. Y. Zhang^1, J. Z. Zhang^1,63, Jianyu Zhang^63, L. M. Zhang^61, Lei Zhang^42, P. Zhang^1,63, Q. Y. Zhang^34, R. Y Zhang^38,k,l, Shuihan Zhang^1,63, Shulei Zhang^25,i, X. D. Zhang^45, X. M. Zhang^1, X. Y. Zhang^50, Y.  Zhang^72, Y.  T. Zhang^80, Y. H. Zhang^1,58, Y. M. Zhang^39, Yan Zhang^71,58, Yao Zhang^1, Z. D. Zhang^1, Z. H. Zhang^1, Z. L. Zhang^34, Z. Y. Zhang^76, Z. Y. Zhang^43, Z. Z.  Zhang^45, G. Zhao^1, J. Y. Zhao^1,63, J. Z. Zhao^1,58, Lei Zhao^71,58, Ling Zhao^1, M. G. Zhao^43, N. Zhao^78, R. P. Zhao^63, S. J. Zhao^80, Y. B. Zhao^1,58, Y. X. Zhao^31,63, Z. G. Zhao^71,58, A. Zhemchugov^36,b, B. Zheng^72, B. M. Zheng^34, J. P. Zheng^1,58, W. J. Zheng^1,63, Y. H. Zheng^63, B. Zhong^41, X. Zhong^59, H.  Zhou^50, J. Y. Zhou^34, L. P. Zhou^1,63, S.  Zhou^6, X. Zhou^76, X. K. Zhou^6, X. R. Zhou^71,58, X. Y. Zhou^39, Y. Z. Zhou^12,g, J. Zhu^43, K. Zhu^1, K. J. Zhu^1,58,63, K. S. Zhu^12,g, L. Zhu^34, L. X. Zhu^63, S. H. Zhu^70, S. Q. Zhu^42, T. J. Zhu^12,g, W. D. Zhu^41, Y. C. Zhu^71,58, Z. A. Zhu^1,63, J. H. Zou^1, J. 
Zu^71,58 (BESIII Collaboration)^1 Institute of High Energy Physics, Beijing 100049, People's Republic of China^2 Beihang University, Beijing 100191, People's Republic of China^3 BochumRuhr-University, D-44780 Bochum, Germany^4 Budker Institute of Nuclear Physics SB RAS (BINP), Novosibirsk 630090, Russia^5 Carnegie Mellon University, Pittsburgh, Pennsylvania 15213, USA^6 Central China Normal University, Wuhan 430079, People's Republic of China^7 Central South University, Changsha 410083, People's Republic of China^8 China Center of Advanced Science and Technology, Beijing 100190, People's Republic of China^9 China University of Geosciences, Wuhan 430074, People's Republic of China^10 Chung-Ang University, Seoul, 06974, Republic of Korea^11 COMSATS University Islamabad, Lahore Campus, Defence Road, Off Raiwind Road, 54000 Lahore, Pakistan^12 Fudan University, Shanghai 200433, People's Republic of China^13 GSI Helmholtzcentre for Heavy Ion Research GmbH, D-64291 Darmstadt, Germany^14 Guangxi Normal University, Guilin 541004, People's Republic of China^15 Guangxi University, Nanning 530004, People's Republic of China^16 Hangzhou Normal University, Hangzhou 310036, People's Republic of China^17 Hebei University, Baoding 071002, People's Republic of China^18 Helmholtz Institute Mainz, Staudinger Weg 18, D-55099 Mainz, Germany^19 Henan Normal University, Xinxiang 453007, People's Republic of China^20 Henan University, Kaifeng 475004, People's Republic of China^21 Henan University of Science and Technology, Luoyang 471003, People's Republic of China^22 Henan University of Technology, Zhengzhou 450001, People's Republic of China^23 Huangshan College, Huangshan245000, People's Republic of China^24 Hunan Normal University, Changsha 410081, People's Republic of China^25 Hunan University, Changsha 410082, People's Republic of China^26 Indian Institute of Technology Madras, Chennai 600036, India^27 Indiana University, Bloomington, Indiana 47405, USA^28 INFN Laboratori Nazionali di Frascati , (A)INFN Laboratori Nazionali di Frascati, I-00044, Frascati, Italy; (B)INFN Sezione diPerugia, I-06100, Perugia, Italy; (C)University of Perugia, I-06100, Perugia, Italy^29 INFN Sezione di Ferrara, (A)INFN Sezione di Ferrara, I-44122, Ferrara, Italy; (B)University of Ferrara,I-44122, Ferrara, Italy^30 Inner Mongolia University, Hohhot 010021, People's Republic of China^31 Institute of Modern Physics, Lanzhou 730000, People's Republic of China^32 Institute of Physics and Technology, Peace Avenue 54B, Ulaanbaatar 13330, Mongolia^33 Instituto de Alta Investigación, Universidad de Tarapacá, Casilla 7D, Arica 1000000, Chile^34 Jilin University, Changchun 130012, People's Republic of China^35 Johannes Gutenberg University of Mainz, Johann-Joachim-Becher-Weg 45, D-55099 Mainz, Germany^36 Joint Institute for Nuclear Research, 141980 Dubna, Moscow region, Russia^37 Justus-Liebig-Universitaet Giessen, II. 
Physikalisches Institut, Heinrich-Buff-Ring 16, D-35392 Giessen, Germany^38 Lanzhou University, Lanzhou 730000, People's Republic of China^39 Liaoning Normal University, Dalian 116029, People's Republic of China^40 Liaoning University, Shenyang 110036, People's Republic of China^41 Nanjing Normal University, Nanjing 210023, People's Republic of China^42 Nanjing University, Nanjing 210093, People's Republic of China^43 Nankai University, Tianjin 300071, People's Republic of China^44 National Centre for Nuclear Research, Warsaw 02-093, Poland^45 North China Electric Power University, Beijing 102206, People's Republic of China^46 Peking University, Beijing 100871, People's Republic of China^47 Qufu Normal University, Qufu 273165, People's Republic of China^48 Renmin University of China, Beijing 100872, People's Republic of China^49 Shandong Normal University, Jinan 250014, People's Republic of China^50 Shandong University, Jinan 250100, People's Republic of China^51 Shanghai Jiao Tong University, Shanghai 200240,People's Republic of China^52 Shanxi Normal University, Linfen 041004, People's Republic of China^53 Shanxi University, Taiyuan 030006, People's Republic of China^54 Sichuan University, Chengdu 610064, People's Republic of China^55 Soochow University, Suzhou 215006, People's Republic of China^56 South China Normal University, Guangzhou 510006, People's Republic of China^57 Southeast University, Nanjing 211100, People's Republic of China^58 State Key Laboratory of Particle Detection and Electronics, Beijing 100049, Hefei 230026, People's Republic of China^59 Sun Yat-Sen University, Guangzhou 510275, People's Republic of China^60 Suranaree University of Technology, University Avenue 111, Nakhon Ratchasima 30000, Thailand^61 Tsinghua University, Beijing 100084, People's Republic of China^62 Turkish Accelerator Center Particle Factory Group, (A)Istinye University, 34010, Istanbul, Turkey; (B)Near East University, Nicosia, North Cyprus, 99138, Mersin 10, Turkey^63 University of Chinese Academy of Sciences, Beijing 100049, People's Republic of China^64 University of Groningen, NL-9747 AA Groningen, The Netherlands^65 University of Hawaii, Honolulu, Hawaii 96822, USA^66 University of Jinan, Jinan 250022, People's Republic of China^67 University of Manchester, Oxford Road, Manchester, M13 9PL, United Kingdom^68 University of Muenster, Wilhelm-Klemm-Strasse 9, 48149 Muenster, Germany^69 University of Oxford, Keble Road, Oxford OX13RH, United Kingdom^70 University of Science and Technology Liaoning, Anshan 114051, People's Republic of China^71 University of Science and Technology of China, Hefei 230026, People's Republic of China^72 University of South China, Hengyang 421001, People's Republic of China^73 University of the Punjab, Lahore-54590, Pakistan^74 University of Turin and INFN, (A)University of Turin, I-10125, Turin, Italy; (B)University of Eastern Piedmont, I-15121, Alessandria, Italy; (C)INFN, I-10125, Turin, Italy^75 Uppsala University, Box 516, SE-75120 Uppsala, Sweden^76 Wuhan University, Wuhan 430072, People's Republic of China^77 Yantai University, Yantai 264005, People's Republic of China^78 Yunnan University, Kunming 650500, People's Republic of China^79 Zhejiang University, Hangzhou 310027, People's Republic of China^80 Zhengzhou University, Zhengzhou 450001, People's Republic of China^a Deceased^b Also at the Moscow Institute of Physics and Technology, Moscow 141700, Russia^c Also at the Novosibirsk State University, Novosibirsk, 630090, Russia^d Also at the NRC "Kurchatov 
Institute", PNPI, 188300, Gatchina, Russia^e Also at Goethe University Frankfurt, 60323 Frankfurt am Main, Germany^f Also at Key Laboratory for Particle Physics, Astrophysics and Cosmology, Ministry of Education; Shanghai Key Laboratory for Particle Physics and Cosmology; Institute of Nuclear and Particle Physics, Shanghai 200240, People's Republic of China^g Also at Key Laboratory of Nuclear Physics and Ion-beam Application (MOE) and Institute of Modern Physics, Fudan University, Shanghai 200443, People's Republic of China^h Also at State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing 100871, People's Republic of China^i Also at School of Physics and Electronics, Hunan University, Changsha 410082, China^j Also at Guangdong Provincial Key Laboratory of Nuclear Science, Institute of Quantum Matter, South China Normal University, Guangzhou 510006, China^k Also at MOE Frontiers Science Center for Rare Isotopes, Lanzhou University, Lanzhou 730000, People's Republic of China^l Also at Lanzhou Center for Theoretical Physics, Lanzhou University, Lanzhou 730000, People's Republic of China^m Also at the Department of Mathematical Sciences, IBA, Karachi 75270, Pakistan^n Also at Ecole Polytechnique Federale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland^o Also at Helmholtz Institute Mainz, Staudinger Weg 18, D-55099 Mainz, GermanyJanuary 14, 2024 ===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ INTRODUCTION

Experimental studies of charmonium states and their decay properties provide important tests of quantum chromodynamics (QCD) models and QCD-based calculations. In the quark model, the χ_cJ (J = 0, 1, 2) mesons are identified as the ^3P_J charmonium states. Unlike the vector charmonium states J/ψ and ψ(3686), however, the χ_cJ mesons cannot be directly produced in e^+e^- collisions due to parity conservation, and our knowledge of their decays is relatively limited. These P-wave charmonium mesons are produced abundantly via radiative ψ(3686) decays, with branching fractions of about 9%, thereby offering a good opportunity to study various χ_cJ decays. Theoretical studies indicate that the color octet mechanism (COM) <cit.> may substantially influence the decays of the P-wave charmonium states, but discrepancies between these theoretical calculations and experimental measurements have been reported in Refs. <cit.>. Therefore, intensive measurements of exclusive χ_cJ hadronic decays are highly desirable to understand the underlying χ_cJ decay dynamics. In this paper we present the first observation and branching fraction measurements of χ_cJ→ 3(K^+K^-), obtained by analyzing (27.12±0.14)×10^8 ψ(3686) events <cit.> collected with the BESIII detector <cit.>.

§ BESIII DETECTOR AND MONTE CARLO SIMULATION

The BESIII detector <cit.> records symmetric e^+e^- collisions provided by the BEPCII storage ring <cit.> in the center-of-mass energy range from 2.0 to 4.95 GeV, with a peak luminosity of 1 × 10^33 cm^-2 s^-1 achieved at √(s) = 3.77 GeV. The cylindrical core of the BESIII detector covers 93% of the full solid angle and consists of a helium-based multilayer drift chamber (MDC), a plastic scintillator time-of-flight system (TOF), and a CsI(Tl) electromagnetic calorimeter (EMC), which are all enclosed in a superconducting solenoidal magnet providing a 1.0 T magnetic field. The solenoid is supported by an octagonal flux-return yoke with resistive plate counter muon identification modules interleaved with steel.
The charged-particle momentum resolution at 1 GeV/c is 0.5%, and the dE/dx resolution is 6% for electrons from Bhabha scattering. The EMC measures photon energies with a resolution of 2.5% (5%) at 1 GeV in the barrel (end cap) region. The time resolution in the TOF barrel region is 68 ps, while that in the end cap region is 110 ps. The end-cap TOF system was upgraded in 2015 using multi-gap resistive plate chamber technology, providing a time resolution of 60 ps <cit.>.

Simulated data samples produced with a geant4-based <cit.> Monte Carlo (MC) package, which includes the geometric description of the BESIII detector and the detector response, are used to determine detection efficiencies and to estimate backgrounds. The simulation models the beam energy spread and initial state radiation (ISR) in the e^+e^- annihilations with the generator kkmc <cit.>. The inclusive MC sample includes the production of the ψ(3686) resonance, the ISR production of the J/ψ, and the continuum processes incorporated in kkmc <cit.>. All particle decays are modelled with evtgen <cit.> using branching fractions either taken from the Particle Data Group (PDG) <cit.>, when available, or otherwise estimated with lundcharm <cit.>. Final state radiation (FSR) from charged final state particles is incorporated using the photos package <cit.>. An inclusive MC sample containing 2.7×10^9 generic ψ(3686) decays is used to study backgrounds. To account for the effect of intermediate resonance structure on the efficiency, each of these decays is modeled by the corresponding mixed signal MC samples, in which the dominant decay modes containing ϕ resonances are mixed with the phase-space (PHSP) signal MC samples. The mixing ratios are determined by examining the corresponding invariant mass spectra, as discussed in Section <ref>.

§ EVENT SELECTION

We reconstruct events containing the charmonium transition ψ(3686)→γχ_cJ followed by the hadronic decay χ_cJ→ 3(K^+K^-). The signal events are required to have at least six charged tracks and at least one photon candidate. All charged tracks detected in the MDC are required to be within a polar angle (θ) range of |cosθ|<0.93, where θ is defined with respect to the z-axis, the symmetry axis of the MDC. The distance of closest approach to the interaction point (IP) must be less than 10 cm along the z-axis, |V_z|, and less than 1 cm in the transverse plane, |V_xy|. Particle identification (PID) for charged tracks combines measurements of the energy deposited in the MDC (dE/dx) and the flight time in the TOF to form likelihoods ℒ(h) (h=p,K,π) for each hadron hypothesis h. Tracks are identified as protons when the proton hypothesis has the greatest likelihood (ℒ(p)>ℒ(K) and ℒ(p)>ℒ(π)); otherwise, charged kaons and pions are separated by comparing the likelihoods for the kaon and pion hypotheses, and tracks with ℒ(K)>ℒ(π) are assigned as kaon candidates.

Photon candidates are identified using showers in the EMC. The deposited energy of each shower must be more than 25 MeV in the barrel region (|cosθ|< 0.80) and more than 50 MeV in the end cap region (0.86 <|cosθ|< 0.92). To exclude showers that originate from charged tracks, the angle subtended by the EMC shower and the position of the closest charged track at the EMC must be greater than 10 degrees as measured from the IP. To suppress electronic noise and showers unrelated to the event, the difference between the EMC time and the event start time is required to be within [0, 700] ns.
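To make the charged-track selection concrete, here is a minimal sketch of the cuts described above. The Track container and its field names are hypothetical stand-ins, not the BESIII offline software API.

```python
# Minimal sketch of the track selection and kaon PID assignment described
# above; the Track class and its fields are hypothetical stand-ins.
from dataclasses import dataclass

@dataclass
class Track:
    cos_theta: float   # polar angle w.r.t. the MDC symmetry axis
    vz: float          # closest approach to the IP along z (cm)
    vxy: float         # closest approach in the transverse plane (cm)
    lik_p: float       # PID likelihood, proton hypothesis
    lik_K: float       # PID likelihood, kaon hypothesis
    lik_pi: float      # PID likelihood, pion hypothesis

def select_kaons(tracks):
    """Apply acceptance, vertex, and PID requirements; return kaon candidates."""
    kaons = []
    for t in tracks:
        if abs(t.cos_theta) >= 0.93:                 # MDC acceptance
            continue
        if abs(t.vz) >= 10.0 or abs(t.vxy) >= 1.0:   # vertex requirements
            continue
        if t.lik_p > t.lik_K and t.lik_p > t.lik_pi: # proton hypothesis wins
            continue
        if t.lik_K > t.lik_pi:                       # kaon vs. pion hypothesis
            kaons.append(t)
    return kaons
```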
A four-momentum conservation (4C) kinematic fit is applied to the candidate events. If more than one combination survives in an event, the one with the smallest χ^2_4C value of the 4C fit is retained. Figure <ref> shows the χ^2_4C distributions of the accepted candidate events for data and MC samples. The requirement on χ^2_4C is optimized with the figure of merit

FOM = S/√(S+B),

where S denotes the number of events from the signal MC sample, normalized according to the pre-measured branching fractions, and B denotes the number of background events from the inclusive MC sample, normalized to the data size. After optimization, we choose χ^2_4C<50 as the nominal requirement.

§ BACKGROUND ANALYSIS

The continuum data collected at √(s) = 3.650 and 3.682 GeV, corresponding to an integrated luminosity of 800 pb^-1 <cit.>, are used to estimate the QED background. No event survives the same selection criteria as applied to the ψ(3686) data. Furthermore, the inclusive MC sample is used to study all potential backgrounds from ψ(3686) decays, and no event is observed in the χ_cJ signal regions. Consequently, all peaking background components are treated as negligible in this analysis.

§ DATA ANALYSIS

The distribution of the invariant mass of the 3(K^+K^-) combination, M_3(K^+K^-), of the accepted candidate events is shown in Fig. <ref>. Clear χ_c0, χ_c1, and χ_c2 signals are observed. The signal yields of χ_cJ→ 3(K^+K^-) are obtained from an unbinned maximum likelihood fit to this distribution. In the fit, the signal shape of each χ_cJ is described by a Breit-Wigner function convolved with a Gaussian. The widths and masses of the Breit-Wigner functions are fixed to the PDG averages <cit.> for χ_c0,1,2, while the parameters of the Gaussian are floated. From this fit, the signal yields of χ_c0, χ_c1, and χ_c2, N_χ_cJ^obs, are obtained to be 37.4±6.3, 24.6±5.2, and 46.3±7.0, respectively. The statistical significances are estimated to be 9.5σ, 9.0σ, and 13.7σ for χ_c0, χ_c1, and χ_c2, respectively, determined by comparing the fit likelihood values with and without each χ_cJ signal component.

§ DETECTION EFFICIENCY

The efficiencies of detecting ψ(3686)→γχ_cJ with χ_cJ→ 3(K^+K^-) are determined with the mixed signal MC samples, with the fractions of the components χ_cJ→ 2ϕ K^+K^-, χ_cJ→ϕ 2(K^+K^-), and χ_cJ→ 3(K^+K^-) derived from a three-dimensional fit to the three K^+K^- invariant mass spectra of the data events. Table <ref> shows the fractions of the sub-resonant decays; the variations of these fractions are taken as systematic uncertainties. The obtained detection efficiencies for χ_cJ→ 3(K^+K^-) are (13.3 ± 0.1)× 10^-3, (22.3 ± 0.1)× 10^-3, and (25.0 ± 0.2)× 10^-3, respectively, including detector acceptance as well as reconstruction and selection efficiencies.

§ BRANCHING FRACTION

For each decay ψ(3686)→γχ_cJ, χ_cJ→ 3(K^+K^-), about 10.8×10^5 signal MC events are generated using a 1+λcos^2θ distribution, where θ is the angle between the radiative photon and beam directions, and λ=1, -1/3, 1/13 for J=0, 1, 2, in accordance with the expectations for electric dipole transitions <cit.>. The intrinsic width and mass values in the PDG <cit.> are used to simulate the χ_cJ states.
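The angular model is easy to reproduce at generator level. Below is a minimal accept-reject sketch for sampling cosθ from 1+λcos^2θ; the function and dictionary names are illustrative, and this only mimics the quoted E1 distributions rather than the BESIII production code.

```python
# Minimal sketch: accept-reject sampling of cos(theta) from the E1
# distribution 1 + lambda_J * cos^2(theta) for psi(3686) -> gamma chi_cJ.
import random

LAMBDA = {0: 1.0, 1: -1.0 / 3.0, 2: 1.0 / 13.0}   # lambda_J for chi_c0,1,2

def sample_cos_theta(J, rng=random):
    """Draw cos(theta) from 1 + lambda_J * cos^2(theta) on [-1, 1]."""
    lam = LAMBDA[J]
    f_max = 1.0 + max(lam, 0.0)   # envelope: max of the density on [-1, 1]
    while True:
        c = rng.uniform(-1.0, 1.0)
        if rng.uniform(0.0, f_max) < 1.0 + lam * c * c:
            return c

# Example: generate 10^5 photon angles for the chi_c0 transition.
angles = [sample_cos_theta(0) for _ in range(100_000)]
```

For χ_c0 (λ=1) the photon is enhanced along the beam axis, while for χ_c1 (λ=-1/3) it is mildly suppressed there.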
The product of the branching fractions of ψ(3686)→γχ_cJ and χ_cJ→ 3(K^+K^-) is calculated as

ℬ_χ_cJ→ 3(K^+K^-)·ℬ_ψ(3686)→γχ_cJ = N^obs_χ_cJ/(N_ψ(3686)·ϵ),

where ϵ is the detection efficiency and N_ψ(3686) is the total number of ψ(3686) events in data. Combining these with the branching fractions of the ψ(3686)→γχ_cJ decays quoted from the PDG <cit.>, the branching fractions of χ_cJ→ 3(K^+K^-) are determined. The obtained results are summarized in Table <ref>.
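As a quick cross-check of the quoted numbers, the following sketch evaluates the product branching fractions from the fitted yields, efficiencies, and N_ψ(3686) given above, propagating only the statistical uncertainty of the yields; it reproduces the Summary values up to rounding of the quoted efficiencies.

```python
# Product branching fraction B(psi(3686)->gamma chi_cJ) * B(chi_cJ->3(K+K-))
# = N_obs / (N_psi(3686) * eps), with the yields and efficiencies quoted
# in the text; only the statistical uncertainty of N_obs is propagated.
N_PSI = 27.12e8                                            # psi(3686) events

yields = {0: (37.4, 6.3), 1: (24.6, 5.2), 2: (46.3, 7.0)}  # N_obs, stat. err.
eps = {0: 13.3e-3, 1: 22.3e-3, 2: 25.0e-3}                 # detection eff.

for J in (0, 1, 2):
    n, dn = yields[J]
    bf = n / (N_PSI * eps[J])
    dbf = dn / (N_PSI * eps[J])
    print(f"chi_c{J}: ({bf * 1e7:.1f} +/- {dbf * 1e7:.1f}) x 10^-7")
```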
§ SYSTEMATIC UNCERTAINTY

The systematic uncertainties in the branching fraction measurements originate from several sources, as summarized in Table <ref>. They are estimated and discussed below.

The total number of ψ(3686) events in data has been measured to be N_ψ(3686)=(27.12±0.14)×10^8 with the inclusive hadronic data sample, as described in Ref. <cit.>; the uncertainty of N_ψ(3686) is 0.5%. The systematic uncertainty of the K^± tracking and PID efficiencies is assigned as 1.0% per K^± <cit.>, estimated with control samples of J/ψ→ K^*K̅. The systematic uncertainty of the photon detection is assigned as 1.0% per photon, based on the control sample J/ψ→π^+π^-π^0 <cit.>.

To estimate the systematic uncertainty of the MC model for the χ_cJ→ 3(K^+K^-) decays, we compare the nominal efficiencies with those determined from signal MC events generated after varying the relative fractions of the sub-resonant decays χ_cJ→ 2ϕ K^+K^-, χ_cJ→ϕ 2(K^+K^-), and χ_cJ→ 3(K^+K^-) by ±1 standard deviation. The relative changes of the efficiencies, 3.3%, 0.8%, and 2.4% for the χ_c0, χ_c1, and χ_c2 decays, respectively, are assigned as the corresponding systematic uncertainties.

The systematic uncertainty of the fit to the M_3(K^+K^-) spectrum includes three parts:

* The first is the background shape, estimated by allowing a slope in the background. The changes of the fitted signal yields, 1.4% for χ_c0, 6.5% for χ_c1, and 4.4% for χ_c2, are taken as the corresponding systematic uncertainties.

* The second is the signal shape, estimated by varying the width of each χ_cJ state by ±1 standard deviation. The change of the fitted signal yield of each decay is negligible.

* The third is the fit range, estimated with the alternative ranges [3.225, 3.635], [3.225, 3.615], [3.215, 3.625], [3.235, 3.625], and [3.225, 3.625] GeV/c^2. The maximum changes of the fitted signal yields, 3.0% for χ_c0, 2.8% for χ_c1, and 2.8% for χ_c2, are taken as the corresponding systematic uncertainties.

Combining these three contributions in quadrature, the systematic uncertainty resulting from the M_3(K^+K^-) fit is determined to be 3.3% for χ_c0, 7.0% for χ_c1, and 5.2% for χ_c2.

The systematic uncertainty of the 4C kinematic fit comes from the inconsistency of the track-helix parameters between data and MC simulation. We apply helix parameter corrections and take the difference between the efficiencies with and without the corrections as the systematic uncertainty; the resulting uncertainties are 3% for all χ_cJ→ 3(K^+K^-) (J=0,1,2) decays. The systematic uncertainties due to the limited statistics of the MC samples are 1.6%, 1.2%, and 1.1% for the χ_c0, χ_c1, and χ_c2 decays, respectively. The systematic uncertainties from the branching fractions of the ψ(3686)→γχ_cJ decays quoted from the PDG <cit.> are 2.0%, 2.4%, and 2.0% for the χ_c0, χ_c1, and χ_c2 decays, respectively. Assuming all systematic uncertainties to be independent, we combine them in quadrature to obtain the total systematic uncertainty for each decay.

§ SUMMARY

By analyzing (27.12±0.14)×10^8 ψ(3686) events with the BESIII detector, the product branching fractions of ψ(3686)→γχ_cJ, χ_cJ→ 3(K^+K^-) are determined to be ℬ_ψ(3686)→γχ_c0·ℬ_χ_c0→ 3(K^+K^-)=(10.5±1.8)×10^-7, ℬ_ψ(3686)→γχ_c1·ℬ_χ_c1→ 3(K^+K^-)=(4.1±0.9)×10^-7, and ℬ_ψ(3686)→γχ_c2·ℬ_χ_c2→ 3(K^+K^-)=(6.8±1.1)×10^-7, where the uncertainties are statistical. The decays χ_cJ→ 3(K^+K^-) are observed for the first time with statistical significances of 8.2σ, 8.1σ, and 12.4σ, respectively. We measure the branching fractions of χ_cJ→ 3(K^+K^-) to be ℬ_χ_c0→ 3(K^+K^-)=(10.7±1.8±1.1)×10^-6, ℬ_χ_c1→ 3(K^+K^-)=(4.2±0.9±0.5)×10^-6, and ℬ_χ_c2→ 3(K^+K^-)=(7.2±1.1±0.8)×10^-6, where the first uncertainties are statistical and the second systematic. These results provide additional input for understanding the decay mechanisms of the χ_cJ states.

§ ACKNOWLEDGMENTS

The BESIII Collaboration thanks the staff of BEPCII and the IHEP computing center for their strong support. This work is supported in part by National Key R&D Program of China under Contracts Nos. 2020YFA0406300, 2020YFA0406400; National Natural Science Foundation of China (NSFC) under Contracts Nos. 11635010, 11735014, 11835012, 11935015, 11935016, 11935018, 11961141012, 12025502, 12035009, 12035013, 12061131003, 12192260, 12192261, 12192262, 12192263, 12192264, 12192265, 12221005, 12225509, 12235017; the Chinese Academy of Sciences (CAS) Large-Scale Scientific Facility Program; the CAS Center for Excellence in Particle Physics (CCEPP); Joint Large-Scale Scientific Facility Funds of the NSFC and CAS under Contract No. U1832207; CAS Key Research Program of Frontier Sciences under Contracts Nos. QYZDJ-SSW-SLH003, QYZDJ-SSW-SLH040; 100 Talents Program of CAS; The Institute of Nuclear and Particle Physics (INPAC) and Shanghai Key Laboratory for Particle Physics and Cosmology; European Union's Horizon 2020 research and innovation programme under Marie Sklodowska-Curie grant agreement under Contract No. 894790; German Research Foundation DFG under Contracts Nos. 455635585, Collaborative Research Center CRC 1044, FOR5327, GRK 2149; Istituto Nazionale di Fisica Nucleare, Italy; Ministry of Development of Turkey under Contract No. DPT2006K-120470; National Research Foundation of Korea under Contract No. NRF-2022R1A2C1092335; National Science and Technology fund of Mongolia; National Science Research and Innovation Fund (NSRF) via the Program Management Unit for Human Resources & Institutional Development, Research and Innovation of Thailand under Contract No. B16F640076; Polish National Science Centre under Contract No. 2019/35/O/ST2/02907; The Swedish Research Council; U. S. Department of Energy under Contract No. DE-FG02-05ER41374.
References

[com] G. T. Bodwin, E. Braaten and G. P. Lepage, Phys. Rev. D 51, 1125 (1995), https://journals.aps.org/prd/abstract/10.1103/PhysRevD.51.1125; H. W. Huang and K. T. Chao, Phys. Rev. D 54, 6850 (1997), https://journals.aps.org/prd/abstract/10.1103/PhysRevD.54.6850; A. Petrelli, Phys. Lett. B 380, 159 (1996), https://doi.org/10.1016/0370-2693(96)00459-5; J. Bolz, P. Kroll and G. A. Schuler, Eur. Phys. J. C 2, 705 (1998), https://link.springer.com/article/10.1007/s100529800716.
[ref::theroy1] R. G. Ping, B. S. Zou and H. C. Chiang, Eur. Phys. J. A 23, 129 (2005), https://link.springer.com/article/10.1140/epja/i2004-10069-9.
[ref::theroy2] X. H. Liu and Q. Zhao, J. Phys. G 38, 035007 (2011), https://iopscience.iop.org/article/10.1088/0954-3899/38/3/035007.
[ref::theroy3] S. M. H. Wong, Eur. Phys. J. C 14, 643 (2000), https://link.springer.com/article/10.1007/s100520000376.
[ref::pdg2022] R. L. Workman et al. (Particle Data Group), PTEP 2022, 083C01 (2022), https://academic.oup.com/ptep/article/2022/8/083C01/6651666.
[ref::psip-num-inc] M. Ablikim et al. (BESIII Collaboration), Chin. Phys. C 42, 023001 (2018), https://iopscience.iop.org/article/10.1088/1674-1137/42/2/023001. With the same method, the total number of ψ(3686) events collected in 2009, 2012 and 2021 is determined to be 27.12×10^8 with an uncertainty of 0.5% as a preliminary result.
[ref::BesIII] M. Ablikim et al. (BESIII Collaboration), Nucl. Instrum. Methods Phys. Res. A 614, 345 (2010), https://www.sciencedirect.com/science/article/pii/S0168900209023870.
[ref::collider] C. H. Yu et al., Proceedings of IPAC2016, Busan, Korea, 2016, https://accelconf.web.cern.ch/ipac2016/doi/JACoW-IPAC2016-TUYA01.html.
[Tof1] X. Li et al., Radiat. Detect. Technol. Methods 1, 13 (2017), https://link.springer.com/article/10.1007/s41605-017-0014-2.
[Tof2] Y. X. Guo et al., Radiat. Detect. Technol. Methods 1, 15 (2017), https://link.springer.com/article/10.1007/s41605-017-0012-4.
[Tof3] P. Cao et al., Nucl. Instrum. Meth. A 953, 163053 (2020), https://www.sciencedirect.com/science/article/pii/S0168900219314068.
[Geant4] S. Agostinelli et al. (geant4 Collaboration), Nucl. Instrum. Methods Phys. Res. A 506, 250 (2003), https://www.sciencedirect.com/science/article/pii/S0168900203013688.
[Jadach01] S. Jadach, B. F. L. Ward and Z. Was, Phys. Rev. D 63, 113009 (2001), https://journals.aps.org/prd/abstract/10.1103/PhysRevD.63.113009.
[Lange01] D. J. Lange, Nucl. Instrum. Methods Phys. Res. A 462, 152 (2001), https://www.sciencedirect.com/science/article/pii/S0168900201000894; R. G. Ping, Chin. Phys. C 32, 599 (2008), https://iopscience.iop.org/article/10.1088/1674-1137/32/8/001.
[Lundcharm00] J. C. Chen, G. S. Huang, X. R. Qi, D. H. Zhang and Y. S. Zhu, Phys. Rev. D 62, 034003 (2000), https://journals.aps.org/prd/abstract/10.1103/PhysRevD.62.034003.
[PHOTOS] E. Richter-Was, Phys. Lett. B 303, 163 (1993), https://www.sciencedirect.com/science/article/pii/037026939390062M.
[lum] M. Ablikim et al. (BESIII Collaboration), Chin. Phys. C 37, 123001 (2013), https://iopscience.iop.org/article/10.1088/1674-1137/37/12/123001.
[ref::generate] W. M. Tanenbaum et al., Phys. Rev. D 17, 1731 (1978), https://journals.aps.org/prd/abstract/10.1103/PhysRevD.17.1731.
[ref::tracking] M. Ablikim et al. (BESIII Collaboration), Phys. Rev. D 102, 092006 (2020), https://journals.aps.org/prd/abstract/10.1103/PhysRevD.102.092006.
[ref::gamma-recon] M. Ablikim et al. (BESIII Collaboration), Phys. Rev. D 86, 052011 (2012), https://journals.aps.org/prd/abstract/10.1103/PhysRevD.86.052011.
Latent electronic (anti-)ferroelectricity in BiNiO_3

Subhadeep Bandyopadhyay and Philippe Ghosez
Theoretical Materials Physics, Q-MAT, University of Liège, B-4000 Sart-Tilman, Belgium
(subha.7491@gmail.com; Philippe.Ghosez@uliege.be)

January 14, 2024
====================================================

BiNiO_3 exhibits an unusual metal-insulator transition from Pnma to P1̅ that is related to charge ordering at the Bi sites, intriguingly distinct from the charge ordering at the Ni sites usually observed in related rare-earth nickelates. Here, using first-principles calculations, we first rationalize the phase transition from Pnma to P1̅, revealing an overlooked intermediate P2_1/m phase and a very unusual phase-transition mechanism. Going further, we point out that the charge ordering at the Bi sites in the P1̅ phase is not unique. We highlight an alternative polar ordering giving rise to a ferroelectric Pmn2_1 phase that is nearly degenerate in energy with P1̅ and shows an in-plane electric polarization of 53 μC/cm^2 directly resulting from the charge ordering. The close energies of the Pmn2_1 and P1̅ phases, together with the low energy barrier between them, make BiNiO_3 a potential electronic antiferroelectric in which the field-induced transition from non-polar to polar would rely on non-adiabatic inter-site electron transfer. We also demonstrate the possibility of stabilizing an electronic ferroelectric ground state through strain engineering in thin films, using an appropriate substrate.

Nickelate perovskites (RNiO_3 with R = Y or a rare-earth element) have generated significant interest over the last years due to their fascinating electronic, magnetic and structural properties, potentially linked to a wide variety of functional applications <cit.>. RNiO_3 compounds (except R = La) undergo a metal-insulator transition (MIT) with an associated structural phase transition from the high-temperature orthorhombic Pnma structure to the low-temperature monoclinic P2_1/n structure <cit.>. The critical temperature of the MIT decreases with increasing R^3+ ionic radius and is finally suppressed for LaNiO_3, which exhibits a distinct metallic R3̅c phase at all temperatures. For smaller R^3+ ions, the MIT is driven by a breathing distortion of the NiO_6 octahedra, which creates two inequivalent Ni sites and a subsequent charge ordering (CO), 2Ni^3+ → Ni^2+ + Ni^4+ <cit.>. At the electronic level, considering the Ni-O hybridizations, this formal transition is often better reformulated in terms of oxygen holes (L): 2(Ni^2+L^1) → Ni^2+ + Ni^2+L^2 <cit.>. At the structural level, it has been shown that the breathing distortion is triggered by the oxygen octahedra rotations (OOR) inherent to the Pnma phase <cit.>. This behavior is ubiquitous amongst the RNiO_3 compounds, making them a distinct and well-defined family of materials.

We might naturally expect BiNiO_3 to belong to this class of compounds. In view of the similar size of the Bi^3+ and La^3+ cations, it is questionable why BiNiO_3 does not behave like LaNiO_3 <cit.>. However, relying instead on bond-valence analysis <cit.>, it appears that BiNiO_3 has a Goldschmidt tolerance factor <cit.> very similar to that of SmNiO_3 (see Fig. S1(a)).
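For readers who want to reproduce this kind of estimate, the bond-valence tolerance factor replaces tabulated ionic radii by ideal A-O and B-O bond lengths derived from bond-valence parameters <cit.>. A minimal sketch is given below; the R_0 values are illustrative placeholders to be taken from the Brese-O'Keeffe tables, and the formula assumes the twelve A-O and six B-O bonds of the ideal cubic perovskite.

    import math

    def ideal_bond_length(R0, valence, n_bonds, b=0.37):
        # Bond-valence ideal length: each of the n_bonds carries valence/n_bonds,
        # inverted from the bond-valence expression s = exp((R0 - d)/b).
        return R0 - b * math.log(valence / n_bonds)

    def tolerance_factor(R0_AO, vA, R0_BO, vB):
        d_AO = ideal_bond_length(R0_AO, vA, 12)  # A site: 12-fold coordination
        d_BO = ideal_bond_length(R0_BO, vB, 6)   # B site: 6-fold coordination
        return d_AO / (math.sqrt(2.0) * d_BO)

    # Placeholder R0 values (Angstrom), to be replaced by tabulated ones:
    t = tolerance_factor(R0_AO=2.09, vA=3, R0_BO=1.69, vB=3)
    print(f"bond-valence tolerance factor ~ {t:.3f}")   # ~0.95 with these inputs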
In line with that, BiNiO_3 shows a metallic Pnma phase with OOR amplitudes comparable to those of SmNiO_3 (see SI). Like the latter, it then exhibits an insulating ground state, but instead of crystallizing in the same insulating P2_1/n phase with CO at the Ni sites, it is reported in an unusual P1̅ phase combining an unexpected Ni^2+ state with CO at the Bi sites (Bi^3+Ni^3+ → Bi^3+_1/2 Bi^5+_1/2 Ni^2+) <cit.>. Although the P2_1/n phase has been theoretically predicted to be metastable <cit.>, it has never been experimentally observed. A temperature versus pressure phase diagram has been reported experimentally, suggesting a direct phase transition from Pnma to P1̅ at a critical temperature decreasing linearly with increasing pressure <cit.>.

Various studies have discussed the MIT in BiNiO_3, focusing mainly on the electronic properties. Dynamical mean-field theory calculations <cit.> reproduce the CO of Bi^3+ and Bi^5+ in the insulating phase, assuming Bi^4+ to be a valence skipper with an attractive Hubbard interaction, while the formal Bi^3+Ni^3+ occupancy makes BiNiO_3 a metal in the Pnma phase. This integer-valence description is too simple to reflect the exact electronic configurations, and X-ray absorption spectroscopy finds a charge state away from Ni^3+ <cit.> in the metallic state. Paul et al. <cit.> then proposed a better description of the form (Bi^3+L^δ)(Ni^2+L^{1-δ}) → Bi^3+_1/2 (Bi^3+L^{2(1-δ)})_1/2 (Ni^2+L^δ), involving oxygen holes L, and explained the pressure dependence of the MIT by changes of the Bi-O and Ni-O hybridizations. Although this alternative view is likely more accurate, we continue hereafter using an integer description of the Bi valence, which provides a simplified but qualitatively correct global picture.

Here, we report a detailed first-principles study of BiNiO_3 addressing together the electronic and structural aspects. Our approach accurately reproduces the CO and the P1̅ ground state. First, we unveil the existence of an intermediate P2_1/m phase along the path from the high-temperature Pnma phase to the P1̅ ground state, and an unusual transition mechanism from P2_1/m to P1̅ involving only stable modes. Then, we point out that the CO of the P1̅ phase is not unique and identify an alternative CO giving rise to a ferroelectric Pmn2_1 phase of comparable energy. We clarify that the ferroelectricity in that phase is electronic in nature and discuss the practical implications of our findings in terms of electronic (anti-)ferroelectricity.

Our calculations are performed using a DFT+U approach, relying on the PBEsol <cit.> exchange-correlation functional, as implemented in the ABINIT software <cit.> (see SI). U and J corrections are included for the Ni 3d states <cit.>. We checked the results for different (U, J) values and found that (6, 1) eV provides an excellent theoretical description of the experimental P1̅ ground state (see SI Table ST1 and Fig.<ref>(a)). For too small U, P1̅ cannot be stabilized, consistently with Ref. <cit.>. Symmetry-adapted mode analysis is performed using Isodistort <cit.>. The phase to which each symmetry label refers is identified with a subscript: c for cubic (Pm3̅m), o for orthorhombic (Pnma) and m for monoclinic (P2_1/m). The connection between the symmetry labels of the three phases is reported in Table ST3. Non-adiabatic charge transfer is probed using constrained DFT, as implemented in ABINIT <cit.>.

P1̅ ground state – Starting from the experimental P1̅ structure, we first carry out full structural optimizations for different collinear magnetic configurations of Ni. Comparing ferromagnetic (FM) with A-, C- and G-type antiferromagnetic (AFM) spin orders, we find G-AFM to be energetically the most favorable order (Table ST2), with a theoretical unit cell volume (233 Å^3) comparable to experiment (233-235 Å^3) <cit.>.
Since the G-AFM spin configuration also remains the most favorable in the other phases, it is kept all along this work. Relying on symmetry-adapted mode analysis <cit.>, we point out in Fig.<ref>a the excellent agreement between the optimized and experimental <cit.> atomic distortions of the P1̅ structure, with respect to the Pm3̅m cubic reference. Amongst these distortions, some are already inherent to the intermediate Pnma phase <cit.>: primary in-phase (M_2,c^+) and anti-phase (R_5,c^-) NiO_6 octahedra rotations, together with secondary anti-polar motions of the Bi atoms (X_5,c^- and R_4,c^-) and a more negligible Jahn-Teller distortion (M_3,c^+). Then, additional M_1,c^+ and M_5,c^+ distortions (Fig. S3) are also present, which together explain the lowering of symmetry from Pnma to P1̅: M_1,c^+ motions of the O atoms in the ab-plane induce a breathing-like distortion of the BiO_12 polyhedra, and M_5,c^+ anti-phase motions of the O atoms along c distort the polyhedra further. This gives rise to large (Bi_L, 51.07 Å^3) and small (Bi_S, 47.24 Å^3) Bi sites that order according to a C-type pattern, in which Bi_L and Bi_S alternate along two directions and are preserved along the third one (Fig.<ref>b). Small R_3,c^-, M_4,c^+ and X_3,c^- distortions are also present. The negligible contribution of the R_2,c^- mode confirms the absence of the breathing distortion at the Ni sites, which is dominant in the insulating P2_1/n phase of the other RNiO_3 perovskites <cit.>.

The partial density of states (PDOS) in Fig.<ref>(c) reveals dominant antibonding Bi 6s + O 2p contributions around the Fermi energy (E_f), whereas the bonding states lie much deeper (i.e., ~10 eV below E_f). In the P1̅ phase, a splitting between the antibonding Bi 6s + O 2p states opens a band gap of 0.5 eV, in line with experiment <cit.>. Distinct Bi_L and Bi_S contributions, with occupied (unoccupied) 6s levels near E_f, are consistent with Bi^3+ and Bi^5+ (or Bi^3+L^2) states, giving rise to CO according to a C-type pattern <cit.>. This is confirmed by charge-density plots of the top valence electrons (Fig. S4a, Fig.<ref>c), highlighting the presence of a Bi 6s lone pair at the Bi_L sites only. These lone pairs point along the pseudo-cubic diagonal in each ab-plane; they lie on the same side of the Bi atoms in a given ab-plane and on opposite sides in consecutive layers, in line with the anti-polar motion of the Bi atoms and the inversion symmetry of the system. Also, the PDOS of the Ni 3d states show that the t_2g states are occupied for both spin channels, while the e_g states are occupied (empty) for the majority (minority) spin channel. This confirms a high-spin Ni^2+ (t_2g^6 e_g^2) state, consistent with the calculated magnetic moment of ~1.67 μ_B/Ni. Small differences in the Ni magnetic moments result in an uncompensated ferrimagnetic (FiM) net magnetization of 0.01 μ_B. Such a weak magnetization is also observed experimentally, but as the result of a canted G-AFM ordering <cit.>.

Pnma phase – The Pnma phase lies 61 meV/f.u. higher in energy than P1̅. Its relaxed unit cell volume (228 Å^3) is ~2.4 % smaller than that of P1̅, consistently with the ~2.5 % volume shrinkage observed experimentally during the P1̅-Pnma transition at 3.5 GPa <cit.>. Structurally, the Pnma phase (a^-b^+c^- in Glazer's notation) shows large out-of-phase and in-phase NiO_6 octahedra rotations of 9.6° and 11.2°, which remain similar in the P1̅ phase (Fig.<ref>a). At the electronic level, the PDOS (Fig.<ref>d) point out a metallic character, with partially occupied Bi 6s and O 2p antibonding states at E_f.
The significant occupancy of the Ni 3d states and the Ni magnetic moment of 1.65 μ_B indicate a charge state closer to high-spin Ni^2+ than to Ni^3+, in line with experimental observations <cit.>. Consequently, the nominal charge state of Bi should be Bi^4+, which suggests a strong tendency to electronic instability, since Bi^4+ is a valence skipper <cit.>. Accordingly, the Pnma phase shows two unstable phonon modes at Γ: a Γ_4,o^+ mode (310i cm^-1) and a Γ_2,o^- mode (149i cm^-1), which both induce CO at the Bi sites. Condensing the Γ_4,o^+ mode lowers the symmetry to P2_1/m and gives rise to a relaxed insulating metastable phase located 55 meV/f.u. below the Pnma phase (Fig.<ref>a). Inspection of the PDOS highlights a band gap of 0.46 eV and confirms the charge disproportionation at the Bi sites (Fig. S5). This P2_1/m phase shows a C-type CO and lone-pair orientations similar to P1̅ (Fig.<ref>b-c). Condensing instead the Γ_2,o^- mode lowers the symmetry to Pmn2_1 and gives rise to another insulating metastable phase located 60.5 meV/f.u. below the Pnma phase (i.e., only 0.3 meV/f.u. above P1̅). Inspection of the PDOS also shows charge disproportionation at the Bi sites (Fig. S5), but the Bi_L 6s states are much broader at the conduction level (compared to P2_1/m and P1̅), indicating stronger Bi 6s - O 2p hybridizations and resulting in a smaller band gap of 0.3 eV. Moreover, the Bi^3+ and Bi^5+ sites now alternate along the three directions, giving rise to a G-type CO that breaks the inversion symmetry, in line with the polar character of the Pmn2_1 phase. Interestingly, the mere appearance of a C-type (resp. G-type) CO in Pnma already lowers the symmetry to P2_1/m (resp. Pmn2_1). Together with the close energies of the Pmn2_1, P2_1/m and P1̅ phases, this emphasizes that the major driving force destabilizing the Pnma structure is the CO itself, whatever the resulting order.

Path to the ground state – Remarkably, the P2_1/m and Pmn2_1 phases are both dynamically stable. The natural path from Pnma to P1̅ should preferably go through P2_1/m, which already condenses the Γ_4,o^+ distortion. In the monoclinic P2_1/m phase, none of the modes is unstable, but the additional condensation of the low-frequency Γ_2,m^+ mode (50 cm^-1) properly brings the system to the P1̅ ground state. Doing so requires, however, overcoming an energy barrier of 4 meV/f.u. In order to clarify the mechanism of this unusual phase transition condensing a stable mode, we studied the energy landscape around the P2_1/m phase from a Landau-type expansion (up to 4th order) involving the Γ_2,m^+ = Γ_3,o^+ ⊕ Γ_2,o^+ and Γ_1,m^+ = Γ_1,o^+ ⊕ Γ_4,o^+ lattice modes, as well as the η_Γ_1,m^+ and η_Γ_2,m^+ macroscopic strain degrees of freedom. The expansion coefficients have been adjusted on a training set of DFT data including 300 configurations (Fig. S6) and are reported in Table ST5. Amongst the various coupling terms, we find that the 3rd-order coupling Q_Γ_1,m^+ Q_Γ_2,m^+^2 is the most significant in lowering the energy (-456 meV/f.u.). Then, the strain couplings Q_Γ_2,m^+ η_Γ_2,m^+ (-100 meV/f.u.) and Q_Γ_2,m^+^2 η_Γ_1,m^+ (-232 meV/f.u.) are also significant. This highlights a rather complex and unusual phase-transition mechanism in which many anharmonic couplings of Γ_2,m^+ with Γ_1,m^+, η_Γ_1,m^+ and η_Γ_2,m^+ cooperate to lower the energy and produce the P1̅ ground state.
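For clarity, the structure of such a Landau model can be made explicit. The form below is an illustrative reconstruction built only from the terms named above (odd powers of Q_2 are forbidden by symmetry except through its bilinear coupling to the strain η_2 of the same symmetry, while the fully symmetric Q_1 may enter linearly); the complete list of symmetry-allowed terms and the fitted coefficients are in Table ST5, and the quoted -456, -100 and -232 meV/f.u. are the energy lowerings associated with the three coupling terms, not the bare coefficients:

\begin{align*}
E(Q_1,Q_2,\eta_1,\eta_2) \simeq{}& E_0 + a_2 Q_2^2 + a_4 Q_2^4 + b_1 Q_1 + b_2 Q_1^2 \\
 &+ \gamma\, Q_1 Q_2^2 + \delta\, Q_2 \eta_2 + \epsilon\, Q_2^2 \eta_1
  + \tfrac{1}{2} C_1 \eta_1^2 + \tfrac{1}{2} C_2 \eta_2^2 ,
\end{align*}

with Q_1 ≡ Q_Γ_1,m^+ and Q_2 ≡ Q_Γ_2,m^+.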
Competing polar phase and electronic ferroelectricity – Being only 0.3 meV/f.u. higher in energy than the observed P1̅ ground state, the Pmn2_1 phase emerges as a close and competing phase. As previously discussed, its G-type CO (Fig.<ref>) breaks the inversion symmetry, yielding a spontaneous polarization along x, P_x^s. Further, the direction of P_x^s can be reversed by reversing the charge ordering (i.e., condensing Γ_2,o^- in the opposite direction). Together, this makes Pmn2_1 a conceptual electronic ferroelectric phase, as long as switching is practically achievable. Estimating P_x^s is not trivial: a Berry-phase calculation in the Pmn2_1 phase delivers a set of values P_x^s = -20.52 + nQ_P μC/cm^2 (with n an integer and Q_P = 36.76 μC/cm^2 the polarization quantum), without clarifying which value of n is appropriate. Using a nudged elastic band (NEB) technique, we identified an insulating low-energy path from the non-polar P1̅ to the polar Pmn2_1 phase (with an energy barrier of 35 meV/f.u., Fig.<ref>a). From this, we can follow the evolution of P_x^s along the path, as illustrated in Fig.<ref>a. This shows first that the spontaneous polarization of the Pmn2_1 phase is P_x^s = 53 μC/cm^2, which is even larger than that of a conventional ferroelectric like BaTiO_3. Then, it clarifies that the change of polarization is strongly non-linear, with a jump of about 40 μC/cm^2. This jump can be assigned to the change from C-type to G-type CO, as highlighted by the PDOS of Bi in Fig.<ref>b. It is also compatible (see SI) with the transfer of 2 electrons between Bi sites in one layer (z = 1/2 in Fig.<ref>b), confirming that P_x^s mainly originates from the electronic CO.

The non-polar character of the P1̅ ground state, combined with the very close energy of the Pmn2_1 ferroelectric phase (ΔE = 0.3 meV/f.u.), makes BiNiO_3 a potential antiferroelectric. Applying an electric field E_T = ΔE/(Ω_0 P_x^s) ≈ 15 kV/cm should be enough to stabilize Pmn2_1 thermodynamically against the P1̅ phase. However, achieving the field-induced transition would a priori require a much larger field E_A, to overcome the adiabatic energy barrier between the two phases (ΔE_A ≈ 35 meV/f.u. at zero field and zero kelvin). Alternatively, one may ask whether non-adiabatic electron transfer would eventually be possible. Following the scheme proposed by Qi and Rabe <cit.> (see Fig.<ref>c and SI), we estimate the field required for a non-adiabatic transition to be E_NA = ΔE_NA/(Ω_0 P_x^s) ≈ 800 kV/cm (ΔE_NA = 15 meV/f.u.). As discussed by Qi and Rabe, this should not be taken as an exact value, but rather as an estimate allowing one to compare distinct compounds. Our computed E_NA is larger than that of Fe_3O_4 <cit.>, which shows a similar band gap and in which ferroelectric switching has been observed experimentally <cit.>. It is, however, significantly smaller than in other electronic ferroelectrics like SrVO_3/LaVO_3 or LuFe_2O_4 <cit.>. As such, BiNiO_3 remains a plausible candidate for electronic antiferroelectricity, with a field-induced non-polar to polar transition potentially accessible and driven by non-adiabatic electron transfer.
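For readers who wish to check these orders of magnitude, the threshold fields follow from simple unit conversions of the numbers quoted above, as in the short sketch below; treating Ω_0 as the per-formula-unit volume of the quoted 233 Å^3 cell (i.e., 233/4 Å^3 for the 4-formula-unit perovskite cell) is our assumption.

    # Minimal sketch: convert the quoted energy differences into threshold
    # fields via E = dE / (Omega_0 * P_x^s). Omega_0 is assumed to be the
    # per-f.u. volume, 233/4 A^3 (4-f.u. cell, see text).
    e = 1.602176634e-19            # elementary charge (C); numerically also J/eV
    omega0 = (233.0 / 4) * 1e-30   # per-f.u. volume in m^3
    P = 53e-6 / 1e-4               # 53 uC/cm^2 -> C/m^2

    E_T  = 0.3e-3 * e / (omega0 * P)   # Delta E    = 0.3 meV/f.u.
    E_NA = 15e-3  * e / (omega0 * P)   # Delta E_NA = 15  meV/f.u.
    print(f"E_T  ~ {E_T /1e5:.1f} kV/cm")   # ~15 kV/cm, as quoted
    print(f"E_NA ~ {E_NA/1e5:.0f} kV/cm")   # ~800 kV/cm, as quoted

    # Consistency checks on the Berry-phase numbers quoted above:
    a_x = (36.76e-6/1e-4) * 233e-30 / e    # Q_P = e*a_x/Omega_cell -> a_x ~ 5.3 A
    dx  = (40e-6/1e-4) * 233e-30 / (2*e)   # 40 uC/cm^2 jump vs 2-electron transfer
    # -> dx ~ 2.9 A, an interatomic-scale transfer distance (our reading)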
Strain engineering – Interestingly, the lattice parameters (along a and c) of the Pmn2_1 phase are significantly different from those of the P1̅ ground state (see Table ST7), which opens the perspective of using strain engineering to stabilize a ferroelectric ground state. It appears that the lattice parameters of NdGaO_3, a widely used substrate for the growth of perovskite oxide films, perfectly match those of the Pmn2_1 phase. Comparing then the energies of different possible orientations of the P1̅ and Pmn2_1 phases epitaxially strained on commercially available (110)_o and (001)_o NdGaO_3 substrates (Tables ST8 and ST9), it appears that the ferroelectric Pmn2_1 phase is always elastically favored. In the case of the (110)_o substrate, the strained Pmn2_1 ferroelectric phase lies 7 meV/f.u. below the strained P1̅ phase and, moreover, aligns its long axis in plane with that of the substrate, which makes it a likely case to be realized experimentally. Polarization switching in such a marginally strained polar Pmn2_1 phase would require reversing the CO and, as such, electron transfer in each of the two layers (z = 0 and 1/2 of Fig.<ref>d). According to Fig.<ref>c, it should be accessible from non-adiabatic electron transfer at the same reasonable field E_NA ≈ 800 kV/cm as before, making the system a potential electronic ferroelectric.

Conclusions – BiNiO_3 behaves differently from other nickelate perovskites, which show CO at the Ni sites. The charge transfer Bi^3+Ni^3+ → Bi^4+Ni^2+, yielding the Bi^4+ valence-skipper state, is the starting point of the electronic instability of the metallic Pnma phase, which is then further stabilized by CO at the Bi sites in the insulating non-polar P1̅ ground state and in the close ferroelectric Pmn2_1 phase. TlMnO_3 <cit.> is another perovskite hosting a P1̅ ground state: interestingly, it also shows a metallic Pnma to insulating P1̅ phase transition, but arising instead from orbital ordering at the Mn^3+ sites. We want to stress that ferroelectricity in BiNiO_3 is distinct from that in other BiMO_3 perovskites (M = Fe, Co, In) <cit.>, in which only Bi^3+ is present and the polarization is driven by the Bi^3+ lone pair. In BiNiO_3, the polarization arises from the G-type Bi^3+/Bi^5+ CO and is electronic in nature. Electronic ferroelectricity has been reported in non-perovskite Fe_3O_4 <cit.>, AFe_2O_4 compounds <cit.> and perovskite oxide superlattices <cit.>, but remains a rare phenomenon. Stabilizing the polar Pmn2_1 phase of BiNiO_3 by electric field or strain engineering appears as a promising platform to probe further the intriguing concept of electronic (anti-)ferroelectricity.

Acknowledgement: SB thanks He Xu for useful discussions and technical support. This work was supported by F.R.S.-FNRS Belgium under PDR grant T.0107.20 (PROMOSPAN). The authors acknowledge the use of the CECI supercomputer facilities funded by the F.R.S-FNRS (Grant No. 2.5020.1) and of the Tier-1 supercomputer of the Fédération Wallonie-Bruxelles funded by the Walloon Region (Grant No. 1117545).

References

[1] Y. Zhou et al., Nature 534, 231 (2016).
[2] J. Shi, S. D. Ha, Y. Zhou, F. Schoofs, and S. Ramanathan, Nat. Commun. 4, 2676 (2013).
[3] J. A. Alonso et al., J. Am. Chem. Soc. 121, 4754 (1999).
[4] I. I. Mazin et al., Phys. Rev. Lett. 98, 176406 (2007).
[5] J. Varignon, M. N. Grisolia, J. Íñiguez, A. Barthélémy, and M. Bibes, npj Quantum Mater. 2, 21 (2017).
[6] S. Johnston, A. Mukherjee, I. Elfimov, M. Berciu, and G. A. Sawatzky, Phys. Rev. Lett. 112, 106404 (2014).
[7] V. Bisogni et al., Nat. Commun. 7, 13017 (2016).
[8] A. Mercy, J. Bieder, J. Íñiguez, and P. Ghosez, Nat. Commun. 8, 1677 (2017).
[9] R. D. Shannon, Acta Crystallogr. A 32, 751 (1976).
[10] M. Lufaso, https://www.unf.edu/~michael.lufaso/spuds/radii-alpha.pdf
[11] N. E. Brese and M. O'Keeffe, Acta Crystallogr. B 47, 192 (1991).
[12] V. M. Goldschmidt, Naturwissenschaften 14, 477 (1926).
[13] S. Ishiwata et al., J. Mater. Chem. 12, 3733 (2002).
[14] M. Azuma et al., J. Am. Chem. Soc. 129, 14433 (2007).
[15] N. Cohen and O. Diéguez, Phys. Rev. B 104, 064111 (2021).
[16] M. Azuma et al., Nat. Commun. 2, 347 (2011).
[17] M. Naka, H. Seo, and Y. Motome, Phys. Rev. Lett. 116, 056402 (2016).
[18] S. Kojima, J. Nasu, and A. Koga, Phys. Rev. B 94, 045103 (2016).
[19] M. Mizumaki et al., Phys. Rev. B 80, 233104 (2009).
[20] A. Paul, A. Mukherjee, I. Dasgupta, A. Paramekanti, and T. Saha-Dasgupta, Phys. Rev. Lett. 122, 016404 (2019).
[21] J. P. Perdew et al., Phys. Rev. Lett. 100, 136406 (2008).
[22] X. Gonze et al., Comput. Mater. Sci. 25, 478 (2002).
[23] X. Gonze, Z. Kristallogr. 220, 558 (2005).
[24] M. Torrent, F. Jollet, F. Bottin, G. Zérah, and X. Gonze, Comput. Mater. Sci. 42, 337 (2008).
[25] A. I. Liechtenstein, V. I. Anisimov, and J. Zaanen, Phys. Rev. B 52, R5467 (1995).
[26] H. T. Stokes, D. M. Hatch, and B. J. Campbell, ISOTROPY Software Suite, https://iso.byu.edu
[27] X. Gonze et al., J. Chem. Theory Comput. 18, 6099 (2022).
[28] S. J. Carlsson et al., J. Solid State Chem. 181, 611 (2008).
[29] J. Varignon, M. N. Grisolia, D. Preziosi, P. Ghosez, and M. Bibes, Phys. Rev. B 96, 235106 (2017).
[30] S. Ishiwata et al., J. Mater. Chem. 12, 3733 (2002).
[31] C. M. Varma, Phys. Rev. Lett. 61, 2713 (1988).
[32] Y. Qi and K. M. Rabe, Phys. Rev. B 106, 125131 (2022).
[33] K. Yamauchi, T. Fukushima, and S. Picozzi, Phys. Rev. B 79, 212404 (2009).
[34] Non-adiabatic paths have been obtained by fixing the atomic geometry to λ = 0 or λ = 1 and then constraining the electronic charge at each point to its value along the adiabatic path, using constrained-DFT calculations.
[35] W. Yi et al., Inorg. Chem. 53, 9800 (2014).
[36] J. B. Neaton, C. Ederer, U. V. Waghmare, N. A. Spaldin, and K. M. Rabe, Phys. Rev. B 71, 014113 (2005).
[37] K. Oka et al., J. Am. Chem. Soc. 132, 9438 (2010).
[38] A. A. Belik, S. Y. Stefanovich, B. I. Lazoryak, and E. Takayama-Muromachi, Chem. Mater. 18, 1964 (2006).
[39] N. Ikeda et al., Nature 436, 1136 (2005).
[40] K. Fujiwara et al., Sci. Rep. 11, 4277 (2021).
[41] S. Y. Park, K. M. Rabe, and J. B. Neaton, Proc. Natl. Acad. Sci. USA 116, 23972 (2019).
[42] S. Y. Park, A. Kumar, and K. M. Rabe, Phys. Rev. Lett. 118, 087602 (2017).
Interference-Resilient OFDM Waveform Design with Subcarrier Interval Constraint for ISAC Systems

Qinghui Lu, Zhen Du, Member, IEEE, and Zenghui Zhang, Senior Member, IEEE

This work was supported in part by the National Natural Science Foundation of China under Grants 62271311 and 62301264, and in part by the Natural Science Foundation of Jiangsu Province under Grant BK20230416. Qinghui Lu and Zenghui Zhang are with the Shanghai Key Laboratory of Intelligent Sensing and Recognition, Shanghai Jiao Tong University, Shanghai 200240, China (e-mail: zenghui.zhang@sjtu.edu.cn). Zhen Du is with the School of Electronic and Information Engineering, Nanjing University of Information Science and Technology, Nanjing 210044, China.

====================================================

Conventional orthogonal frequency division multiplexing (OFDM) waveform design in integrated sensing and communications (ISAC) systems usually selects the channels with high frequency responses to transmit communication data, without fully considering the possible interference in the environment. To mitigate these adverse effects, we propose an optimization model that weights the peak sidelobe level against the communication data rate, under power and communication subcarrier interval constraints. To tackle the resultant nonconvex problem, an iterative adaptive cyclic minimization (ACM) algorithm is developed, in which an adaptive iteration factor is introduced to improve the convergence. Subsequently, the least squares algorithm is used to reduce the coefficient of variation of envelopes by further optimizing the phase of the OFDM waveform. Finally, numerical simulations demonstrate the interference-resilient ability of the proposed OFDM strategy and the robustness of the ACM algorithm.

Index Terms – Integrated sensing and communications, OFDM, subcarrier interval constraint, interference resilience.

§ INTRODUCTION

Integrated sensing and communications (ISAC), as an enabler for synergistically designing sensing and communications (S&C) functionalities, can improve the utilization efficiency of both hardware and wireless resources <cit.>, and has been envisioned as a promising technology for numerous emerging applications in 6G networks, such as intelligent transportation, activity recognition, and smart cities <cit.>.

To attain excellent S&C performance, waveform design approaches are desired that facilitate both the communication data rate (CDR) and sensing capabilities such as target detection, estimation, and tracking. Consequently, one of the best waveform candidates is orthogonal frequency division multiplexing (OFDM), owing to its simple discrete Fourier transform (DFT) structure, its large bandwidth enabling high CDR and fine range resolution, and its frequency diversity <cit.>.
For instance, the authors in <cit.> designed a peak-to-average power ratio (PAPR) reduction scheme under the principle of uniform power allocation, which only optimizes the integrated sidelobe level of the autocorrelation function and therefore yields limited communication performance. To this end, in <cit.>, a power-minimization-based joint subcarrier assignment and power allocation (SAPA) model is formulated while guaranteeing specified S&C constraints. However, this subcarrier assignment strategy transmits data through the communication channels with a high signal-to-noise ratio (SNR), so its performance may be degraded by potential interference in practical scenarios. An integrated OFDM waveform method that reduces the peak sidelobe level (PSL) while meeting the CDR requirement is considered in <cit.>. Nevertheless, the algorithm proposed in <cit.> is a heuristic approach whose results are susceptible to the initial feasible points (IFPs), so it is not robust.

From the aforementioned discussions, existing OFDM waveform design methods for ISAC systems still lack a comprehensive model and a robust algorithm. In this letter, we present an OFDM waveform optimization strategy with two steps. First, a joint SAPA method optimizing the autocorrelation PSL and the CDR under power and communication subcarrier interval constraints is established. To solve this nonconvex problem, a modified adaptive cyclic minimization (ACM) algorithm is proposed, with an iteration factor introduced to release the effect of the IFPs. Then, taking the coefficient of variation of envelopes (CVE) <cit.> as the objective function to minimize, we optimize the remaining phases, except those occupied by the communication symbols, in order to mitigate the envelope fluctuation. Simulation results demonstrate the effectiveness of the proposed method.

The rest of this letter is organized as follows. Section <ref> introduces the OFDM signal model and the S&C metrics. We propose the joint SAPA method in Section <ref>. In Section <ref>, we model the CVE reduction problem and develop an iterative algorithm to solve it. In Section <ref>, we evaluate the performance of the proposed method by numerical simulations. The letter is concluded in Section <ref>.

Notation: The transpose and Hermitian operators are denoted by (·)^T and (·)^H, respectively. The modulus of a complex number is denoted by |·|, the Euclidean norm by ‖·‖, and ‖·‖_1 is the l_1-norm. ℝ and ℂ represent the real and complex sets, respectively. ⊙ denotes the Hadamard product, 𝔼[·] the mathematical expectation, and diag[·] the diagonal matrix.

§ OFDM SIGNAL MODEL AND S&C METRICS

In this section, we first discuss the system model, then introduce a generic OFDM waveform structure in which the communication frequency bins are allocated over a large contiguous radar band, and finally introduce the S&C metrics. Consider the system model in a typical vehicular scenario presented in Fig. 1. Vehicle 1 is equipped with an ISAC transceiver that radiates an integrated OFDM waveform for radar detection and communication transmission. Specifically, Vehicle 1 can send communication symbols to the communication receiver of Vehicle 2 and estimate target information such as the range and speed of Vehicle 2.
§.§ Signal Model

The integrated OFDM waveform, yielding N subbands in the frequency domain, can be defined as <cit.>

s = F^H x = F^H [Uc + (I - U)r],

where F ∈ ℂ^{N×N} represents the DFT matrix with F_{k,p} = e^{-j2πkp/N}, k, p = 0, …, N-1. Here, x = Uc + (I - U)r collects the frequency-domain symbols, where the phase part of c ∈ ℂ^{N×1} bears binary communication data modulated by a phase shift keying (PSK) modulator, and the phase part of r ∈ ℂ^{N×1} is reserved to mitigate the envelope fluctuations. U = diag[u] selects the subcarriers for communication, in which the selection variable u ∈ ℝ^{N×1} is binary, with entry one denoting that the corresponding subcarrier is selected for communication and entry zero that it is discarded.

We also apply the ISAC signal processing structure of <cit.>. In particular, the sensing echo of s is received and processed for target detection. For communication reception, the symbols corresponding to the phase part of c are extracted and PSK-demodulated to recover the binary data.

§.§ Sensing Metric: Autocorrelation PSL

To suppress interference and improve the target detection capability, a low autocorrelation PSL is highly desirable; it is expressed as <cit.>

PSL = max_{k∈Θ} |R_s[k]| = max_{k∈Θ} |∑_{n=0}^{N-1} |x_n|^2 e^{jπnk/K}|,

where R_s[k], k ∈ [-K+1, K-1], denotes the (2K-1) samples of the autocorrelation function, and Θ = [-K+1, -Υ] ∪ [Υ, K-1] is the sidelobe region, with Υ the mainlobe boundary.

§.§ Communication Metric 1: CDR

In a frequency-selective fading channel, the CDR is regarded as a significant communication metric, which can be optimized by selecting appropriate subcarriers and allocating the corresponding transmit power. The CDR is defined as

CDR = ∑_{n=0}^{N-1} log_2[1 + u_n |c_n|^2 |h_n|^2 / σ_c^2],

where h_n is the frequency response of the n-th subcarrier and σ_c^2 denotes the noise power of the communication channel.

§.§ Communication Metric 2: CVE

Generally speaking, the amplitude of the OFDM signal fluctuates wildly, which leads to signal distortion and increases the bit error rate (BER). The authors in <cit.> mitigate this shortcoming by reducing the PAPR of the OFDM waveform, defined as the ratio between the maximum power and the average power. It is worth noting that this criterion only seeks to decrease the peaks of s. Herein, we adopt the CVE as the other communication metric, which accounts for both peaks and valleys and is defined as <cit.>

CVE = 𝔼[(|s_n| - 𝔼[|s_n|])^2] / (𝔼[|s_n|])^2.

Notably, the autocorrelation PSL and the CDR depend only on the transmit power p_n = |x_n|^2 and the communication power p_{c,n} = |c_n|^2. This inspires us to devise a joint SAPA method to improve the ISAC performance, and to reduce the CVE by further optimizing the phase of r.
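To make the signal model and the three metrics concrete, a minimal numpy sketch is given below; the random channel, symbols, subcarrier selection, and uniform power allocation are placeholders of our own choosing, not the optimized quantities of the following sections.

    import numpy as np

    rng = np.random.default_rng(0)
    N, Nr, sigma_c2 = 128, 16, 1.0

    u = np.zeros(N); u[rng.choice(N, Nr, replace=False)] = 1          # subcarrier selection
    p = np.full(N, 2.0)                                               # uniform power (P_total = 256 W)
    phase_c = 2*np.pi*rng.integers(8, size=N)/8                       # 8-PSK communication phases
    phase_r = 2*np.pi*rng.random(N)                                   # reserved phases
    x = np.sqrt(p)*(u*np.exp(1j*phase_c) + (1-u)*np.exp(1j*phase_r))  # frequency-domain symbols
    s = np.fft.ifft(x)*N                                              # s = F^H x (unnormalized DFT)

    # PSL of the autocorrelation; mainlobe |k| <= 1 excluded, matching the
    # sidelobe region used later in the simulations.
    R = np.correlate(s, s, mode="full")        # numpy conjugates the 2nd argument
    k = np.arange(-(N-1), N)
    psl = np.max(np.abs(R[np.abs(k) >= 2]))

    h = (rng.standard_normal(N) + 1j*rng.standard_normal(N))/np.sqrt(2)  # placeholder channel
    cdr = np.sum(np.log2(1 + u*p*np.abs(h)**2/sigma_c2))                 # here |c_n|^2 = p_n
    env = np.abs(s)
    cve = np.mean((env - env.mean())**2)/env.mean()**2                   # CVE definition above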
§ JOINT SUBCARRIER ASSIGNMENT AND POWER ALLOCATION STRATEGY

In this section, we investigate a comprehensive joint design optimization problem for ISAC systems and propose a new ACM algorithm to solve it.

§.§ Problem Formulation

In this subsection, the transmit power p_n and the communication subcarrier indicator u_n are optimized. Specifically, we build up an optimization problem jointly addressing the autocorrelation PSL and the CDR, formulated as

min_{p_n, u_n}  ρ max_{k∈Θ} |∑_{n=0}^{N-1} p_n e^{jπnk/K}| / R_max - (1-ρ) ∑_{n=0}^{N-1} log_2[1 + u_n p_n |h_n|^2/σ_c^2] / C_max

s.t.  C_1: ∑_{n=0}^{N-1} p_n = P_total,
      C_2: ∑_{n=0}^{N-1} u_n p_n ≤ P_c,
      C_3: p_n ≥ 0, n = 0, …, N-1,
      C_4: u_n ∈ {0, 1},
      C_5: ∑_{n=0}^{N-1} u_n = N_r,
      C_6: u_i + u_{i+1} + ⋯ + u_{i+L} ≤ 1, i = 0, …, N-L-1,

where R_max and C_max denote the maximum autocorrelation sidelobe level before optimization and the maximum achievable CDR, respectively. ρ ∈ [0, 1] is a weighting coefficient striking a trade-off between the autocorrelation PSL and the CDR, thereby balancing the S&C performance. P_total denotes the total transmit power over all subcarriers, and P_c represents the threshold of the maximum communication power. Constraints C_4 and C_5 indicate that only N_r subcarriers are allocated for communication. Notably, the minimum-interval constraint C_6 imposes that the index interval between two adjacent communication subcarriers is no less than (L+1). We exploit C_6 to reduce the probability of interference, mainly targeting the case in which non-cooperators impose interference based on the channel characteristics.

§.§ Adaptive Cyclic Minimization Algorithm

Since the variable u_n is binary, the resulting joint optimization model is a mixed-integer nonconvex problem. Since the two blocks of variables u_n and p_n can be solved for separately <cit.> in each iteration, a tailored ACM algorithm is utilized to tackle the problem by solving the subproblems of u_n and p_n in sequence.

To be more specific, we first introduce an auxiliary variable η and rewrite the problem in the following form:

min_{p_n, u_n, η}  ρη/R_max - (1-ρ) ∑_{n=0}^{N-1} log_2[1 + |h_n|^2 u_n p_n/σ_c^2] / C_max
s.t.  C_1, C_2, C_3, C_4, C_5, C_6,
      C_7: |∑_{n=0}^{N-1} p_n e^{jπnk/K}| ≤ η, k ∈ Θ.

Due to the constraints C_4 and C_5, a suboptimal value of u_n can be obtained by either exhaustive search or a heuristic search method <cit.>. Subsequently, for a specified value of u_n, the transmit power p_n can be determined. Assuming that p_n^(t), u_n^(t), η^(t) are obtained at the t-th iteration, the optimization variables at the (t+1)-th iteration can be updated via the following two steps.

§.§.§ Step 1 – Updating u_n^(t+1)

Ignoring irrelevant terms, the subproblem with respect to u_n simplifies to

min_{u_n}  -∑_{n=0}^{N-1} log_2[1 + |h_n|^2 u_n p_n^(t)/σ_c^2]
s.t.  C_2, C_5, C_6, C_4: u_n ∈ {0, 1}.

The nonconvex constraint C_4 is equivalent to <cit.>

min_u  u^T(1 - u)   s.t.  0 ≤ u_n ≤ 1, ∀n.

However, this penalty term is concave and difficult to handle. Resorting to its first-order Taylor expansion around u^(t+1,m-1) (the value of u^(t+1) at the (m-1)-th inner iteration), the subproblem can be approximated as

min_{u_n}  -∑_{n=0}^{N-1} log_2[1 + |h_n|^2 u_n p_n^(t)/σ_c^2] + λ[u^T(1 - 2u^(t+1,m-1)) + (u^(t+1,m-1))^T u^(t+1,m-1)]
s.t.  C_2, C_5, C_6, C̄_4: 0 ≤ u_n ≤ 1, ∀n,

where λ represents a weight factor. This convex problem can be solved iteratively via the interior point method (IPM), e.g., as implemented in the CVX toolbox <cit.>. Since the IFP of u_n and the value of λ significantly affect the convergence, we ameliorate this shortcoming by adaptively updating λ according to the iteration formula <cit.>

λ^(t+1,m) = λ^(t+1,m-1),      if α^(t+1,m-1) ≤ ξ_1 α^(t+1,m-2),
λ^(t+1,m) = ξ_2 λ^(t+1,m-1),  otherwise,

where ξ_1 < 1, ξ_2 > 1, and α^(t+1,m) = (u^(t+1,m))^T(1 - 2u^(t+1,m-1)) + (u^(t+1,m-1))^T u^(t+1,m-1). As a consequence, α^(t+1,m) tends to 0 during the iterations and manual parameter tuning can be avoided. In addition, the inner-loop exit condition is defined as α^(t+1,m) ≤ ε_u.
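The inner loop of Step 1 can be summarized in a few lines of Python; the routine solve_relaxed below stands for the convex solve of the linearized subproblem (handled with CVX in the setting above) and is a placeholder of ours, not part of the original method.

    import numpy as np

    def step1_inner_loop(u0, solve_relaxed, lam0=1e-2, xi1=0.9, xi2=2.0,
                         eps_u=1e-4, max_inner=100):
        """Penalty-based relaxation of C_4 with the adaptive weight update above.

        solve_relaxed(u_prev, lam) must return the minimizer of the convex
        subproblem linearized around u_prev (placeholder for a CVX call).
        """
        u_prev, lam = u0, lam0
        alpha_prev = np.inf
        for m in range(max_inner):
            u = solve_relaxed(u_prev, lam)
            # Linearized binary-penalty value; tends to 0 as u approaches {0,1}^N.
            alpha = u @ (1 - 2*u_prev) + u_prev @ u_prev
            if alpha <= eps_u:                      # exit condition
                return np.round(u)
            if not (alpha <= xi1 * alpha_prev):     # insufficient decrease
                lam *= xi2                          # strengthen the penalty
            u_prev, alpha_prev = u, alpha
        return np.round(u_prev)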
§.§.§ Step 2 – Updating p_n^(t+1), η^(t+1)

With a determined u^(t+1), the optimization problem with respect to p_n and η can be expressed as

min_{p_n, η}  ρη/R_max - (1-ρ) ∑_{n=0}^{N-1} log_2[1 + |h_n|^2 u_n^(t+1) p_n/σ_c^2] / C_max
s.t.  C_1, C_3, C_7, C̄_2: ∑_{n=0}^{N-1} u_n^(t+1) p_n ≤ P_c.

Evidently, this is a convex optimization problem that can be solved using the CVX toolbox <cit.>.

Finally, we repeat Step 1 and Step 2 until the maximum number of iterations t_max is reached, or until Δr_c^(t) ≤ ε_c and Δr_a^(t) ≤ ε_a are satisfied simultaneously, where ε_c and ε_a are the maximum tolerance errors of the CDR and PSL, respectively. The residuals Δr_c and Δr_a are defined as

Δr_c^(t+1) = |(Obj_c^(t+1) - Obj_c^(t)) / Obj_c^(t)|,  Obj_c^(t) = ∑_{n=0}^{N-1} log_2[1 + |h_n|^2 u_n^(t) p_n^(t)/σ_c^2],
Δr_a^(t+1) = |(Obj_a^(t+1) - Obj_a^(t)) / Obj_a^(t)|,  Obj_a^(t) = max_{k∈Θ} |∑_{n=0}^{N-1} p_n^(t) e^{jπnk/K}|.

For the sake of completeness, the main steps of the proposed ACM algorithm are summarized in Algorithm <ref>.

§ CVE DESIGN METHOD

In this section, we consider optimizing the CVE for ISAC systems. The phase part of r is optimized under the premise that the power of each subcarrier and the modulated communication information are known. Omitting the denominator of the CVE definition, the CVE-minimization problem can be designed as

min_{Φ_r}  𝔼[(|s_n| - 𝔼[|s_n|])^2]
s.t.  s = F^H[U(√p ⊙ e^{jΦ_c}) + (I - U)(√p ⊙ e^{jΦ_r})],

where Φ_r represents the optimizable phases of r, and e^{jΦ_c} collects the communication symbols drawn from the PSK constellation. To proceed, the problem can be reformulated as

min_w  (F^H w + v - β e^{jΦ})^H (F^H w + v - β e^{jΦ})
s.t.  β = ‖F^H w + v‖_1 / N,
      Φ = ∠(F^H w + v),
      |w| = (I - U) ⊙ √p,

where ∠(·) denotes the angle of a complex value and v = F^H U(√p ⊙ e^{jΦ_c}) is the known communication part. At the (i+1)-th iteration, the variables can be updated via the least squares (LS) algorithm <cit.>:

β^(i) = ‖F^H w^(i) + v‖_1 / N,
Φ^(i) = ∠(F^H w^(i) + v),
w^(i+1) = (I - U) ⊙ √p ⊙ [-F(c - β^(i) e^{jΦ^(i)})] / |F(c - β^(i) e^{jΦ^(i)})|.
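A compact numpy sketch of this LS iteration is given below. It follows the update structure above literally, interpreting the vector subtracted in the last update as the known communication part v; the assignment, power allocation, and initialization are placeholders, so this should be read as an illustration rather than a tuned implementation.

    import numpy as np

    rng = np.random.default_rng(1)
    N, Nr = 128, 16
    F = np.exp(-2j*np.pi*np.outer(np.arange(N), np.arange(N))/N)  # DFT matrix

    u = np.zeros(N); u[::N//Nr] = 1                 # example assignment (placeholder)
    p = np.full(N, 2.0)                             # example power allocation
    c_sym = np.sqrt(p)*np.exp(2j*np.pi*rng.integers(8, size=N)/8)
    v = F.conj().T @ (u*c_sym)                      # known communication part
    mag_w = (1-u)*np.sqrt(p)                        # fixed magnitudes of w

    w = mag_w*np.exp(2j*np.pi*rng.random(N))        # random initial reserved phases
    for i in range(200):                            # LS iterations
        t = F.conj().T @ w + v                      # current time-domain signal
        beta = np.sum(np.abs(t))/N                  # beta update (l1 norm / N)
        target = beta*np.exp(1j*np.angle(t))        # constant-envelope target
        g = F @ (target - v)                        # residual back in frequency domain
        w = mag_w*g/(np.abs(g) + 1e-12)             # keep magnitudes, update phases

    env = np.abs(F.conj().T @ w + v)
    print(f"CVE after LS: {np.mean((env - env.mean())**2)/env.mean()**2:.4f}")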
§ SIMULATION RESULTS

In this section, we evaluate the proposed method. The parameters are listed as follows: N = 128, N_r = 16, P_total = 256 W, P_c = P_total/4, L = 5, ρ = 0.5, t_max = 10^3, ε_c = ε_a = 10^-4, and Θ = [-(N-1):1:-2, 2:1:(N-1)]. Gaussian white noise with power σ_c^2 = 1 is employed, and the communication symbols are drawn randomly from the 8-PSK constellation.

§.§ Interference-Resilient Performance Evaluation

Two types of communication channels, namely a Rayleigh-distributed channel and a standard-normal-distributed channel, are employed to evaluate the SAPA performance of the proposed approach. The multi-tone narrowband interference from <cit.> is randomly added to the first N_r subcarriers with the best channel responses in each simulation, and 2000 Monte Carlo trials are carried out to produce average performance curves. The two cases of channel responses with interference are depicted in Fig. <ref> (a) and Fig. <ref> (a), respectively. The traditional high-response SAPA (HSAPA) method is chosen as the baseline for comparison; it selects the first N_r subcarriers with the best channel responses to transmit information <cit.>, and its power allocation problem is the same as in the proposed method. Its optimization results are shown in Fig. <ref> (b) and Fig. <ref> (b), where dark-blue bars represent the optimized subcarrier power and red circles indicate the selected communication channels. The HSAPA method selects subcarrier channels with high responses for communication, achieving high CDR values of 58.5574 bps/Hz and 60.3964 bps/Hz for the two cases. Our approach incorporates the constraint on the communication subcarrier interval, yielding the results in Fig. <ref> (c) and Fig. <ref> (c). As can be seen, the selected communication channels are more evenly dispersed across the entire bandwidth. While the CDR values of our approach decrease to 52.3644 bps/Hz and 51.3620 bps/Hz for the above two cases, the interference-resilient performance clearly surpasses that of the HSAPA method.

Fig. <ref> (a) shows the comparison of BER results under different SNR values when the interference-to-signal ratio (ISR) is 30 dB. It can be seen that the BER performance of the proposed method is superior to that of HSAPA, owing to the reduced probability of interference. Fig. <ref> (b) depicts the BER performance versus the ISR when the SNR is 10 dB, indicating that the proposed method has a robust interference-resilient ability.

Further, to analyze the envelope of the optimized waveform, we use a quadruple sampling rate so that the discrete-time envelope approximates the continuous-time envelope well <cit.>, and the average results of 500 Monte Carlo trials are considered, in which the SAPA result of Fig. <ref> (c) is used. In Fig. <ref>, the complementary cumulative distribution functions (CCDFs) of the PAPR and CVE are examined by comparison with the design method via the l-norm cyclic algorithm (LNCA) in <cit.> and the random phase method (RPM). Evidently, the proposed method optimizes the CVE and thereby indirectly reduces the PAPR, so its PAPR result is better than that of the RPM and close to that of <cit.>. Fig. <ref> (b) highlights that the CVE result of the proposed method outperforms the other two methods. Moreover, considering the lack of communication performance optimization in <cit.>, this illustrates the comprehensiveness of the proposed method.

§.§ Impact of IFPs and Parameters

In this part, we set different IFPs and adaptive iteration factors under the communication channel displayed in Fig. <ref> (a) to illustrate the enhancements of the ACM algorithm. To study the influence of different IFPs on the algorithm, three cases are considered: Case 1 – u_n^(0) = 1, n = 0 : N/N_r : (N_r-1)N/N_r; Case 2 – u_n^(0) = 1, n = 0 : (L+1) : (L+1)(N_r-1); Case 3 – u_n^(0) = 1, n = 2 : N/N_r : 2+(N_r-1)N/N_r. The average CDR and PSL results of 50 trials are summarized in Table <ref>. Evidently, different IFPs lead to similar S&C results, indicating that the proposed algorithm is effective. In addition, we set ξ_1 = 0.9, ξ_2 = 2, λ^(0) = 10^-4, 10^-2, 10^0, as well as a fixed λ = 10^-4, to examine the convergence of the proposed algorithm. Fig. <ref> shows that different initial values λ^(0) affect the convergence speed, but the proposed algorithm remains convergent. In contrast, if the weight factor is fixed at λ = 10^-4, the algorithm may not converge.
Therefore, the proposed algorithm is more robust.

§ CONCLUSION

In this letter, we have devised a two-step joint SAPA method for OFDM waveforms in ISAC systems to enhance S&C performance. Firstly, an integrated OFDM model that minimizes the autocorrelation PSL and maximizes the CDR under power and communication subcarrier interval constraints has been introduced. We have presented the ACM algorithm to solve the nonconvex optimization problem, and introduced an adaptive iterative factor to improve the convergence. Secondly, by further optimizing the phase of the complex waveform, the CVE optimization problem has been effectively solved via the LS algorithm. Finally, the effectiveness of the proposed method has been verified by numerical simulations.

ISAC1 F. Liu, L. Zheng, Y. Cui, C. Masouros, A. P. Petropulu, H. Griffiths and Y. C. Eldar, “Seventy years of radar and communications: the road from separation to integration," IEEE Signal Process. Mag., vol. 40, no. 5, pp. 106-121, Jul. 2023.
ISAC4 G. Song, J. Bai and G. Wei, “An OTFS-DFRC waveform design method based on phase perturbation," IEEE Commun. Lett., vol. 27, no. 10, pp. 2578-2582, Oct. 2023.
ea2 X. Li, Y. Cui, J. A. Zhang, F. Liu, D. Zhang and L. Hanzo, “Integrated human activity sensing and communications," IEEE Commun. Mag., vol. 61, no. 5, pp. 90-96, May 2023.
fe Z. Du, F. Liu, Y. Li, W. Yuan, Y. Cui, Z. Zhang, C. Masouros, and B. Ai, “Towards ISAC-Empowered Vehicular Networks: Framework, Advances, and Opportunities," arXiv:2305.00681, 2023.
papr1 Y. Huang, S. Hu, S. Ma, Z. Liu and M. Xiao, “Designing low-PAPR waveform for OFDM-based RadCom systems," IEEE Trans. Wireless Commun., vol. 21, no. 9, pp. 6979-6993, Sept. 2022.
ofdm1 C. Shi, F. Wang, S. Salous and J. Zhou, “Joint subcarrier assignment and power allocation strategy for integrated radar and communications system based on power minimization," IEEE Sens. J., vol. 19, no. 23, pp. 11167-11179, Dec. 2019.
ofdm2 Y. Chen, G. Liao, Y. Liu, H. Li and X. Liu, “Joint subcarrier and power allocation for integrated OFDM waveform in RadCom systems," IEEE Commun. Lett., vol. 27, no. 1, pp. 253-257, Jan. 2023.
ofdm4 C. D. Ozkaptan, E. Ekici and O. Altintas, “Adaptive waveform design for communication-enabled automotive radars," IEEE Trans. Wireless Commun., vol. 21, no. 6, pp. 3965-3978, Jun. 2022.
ofdm5 Y. Liu, G. Liao, Z. Yang and J. Xu, “Multiobjective optimal waveform design for OFDM integrated radar and communication systems," Signal Process., vol. 141, pp. 331-342, Dec. 2017.
papr2 T. Huang and T. Zhao, “Low PMEPR OFDM radar waveform design using the iterative least squares algorithm," IEEE Signal Process. Lett., vol. 22, no. 11, pp. 1975-1979, Nov. 2015.
suan C. Shi, Y. Wang, F. Wang, S. Salous and J. Zhou, “Joint optimization scheme for subcarrier selection and power allocation in multicarrier dual-function radar-communication system," IEEE Syst. J., vol. 15, no. 1, pp. 947-958, Mar. 2021.
apro X. Wang, A. Hassanien and M. G. Amin, “Dual-function MIMO radar communications system design via sparse array optimization," IEEE Trans. Aerosp. Electron. Syst., vol. 55, no. 3, pp. 1213-1226, Jun. 2019.
CVX1 M. Grant and S. Boyd, “CVX package," Feb. 2012. [Online]. Available: http://www.cvxr.com/cvx.
adp Q. Lu, G. Cui, R. Liu and X. Yu, “Beampattern synthesis via first-order iterative convex approximation," IEEE Antennas Wireless Propag. Lett., vol. 20, no. 8, pp. 1493-1497, Aug. 2021.
aj H. Wang, X. Zhang and S. Wang, “Narrowband interference suppression in OFDM systems," in Proc. Intl. Conf. Wireless Commun. Sig. Process., Yangzhou, China, 2016, pp. 1-6.
http://arxiv.org/abs/2312.16006v1
{ "authors": [ "Qinghui Lu", "Zhen Du", "Zenghui Zhang" ], "categories": [ "eess.SP" ], "primary_category": "eess.SP", "published": "20231226112633", "title": "Interference-Resilient OFDM Waveform Design with Subcarrier Interval Constraint for ISAC Systems" }
Inter-X: Towards Versatile Human-Human Interaction Analysis
===========================================================

[Figure: An overview of the data and task taxonomy of our proposed Inter-X dataset, which is a large-scale human-human interaction MoCap dataset with ∼11K interaction sequences and more than 8.1M frames. The fine-grained textual descriptions, semantic action categories, interaction order, and relationship and personality annotations allow for 4 categories of downstream tasks.]

^*Corresponding authors

The analysis of the ubiquitous human-human interactions is pivotal for understanding humans as social beings. Existing human-human interaction datasets typically suffer from inaccurate body motions and lack hand gestures and fine-grained textual descriptions. To better perceive and generate human-human interactions, we propose Inter-X, currently the largest human-human interaction dataset, with accurate body movements and diverse interaction patterns, together with detailed hand gestures. The dataset includes ∼11K interaction sequences and more than 8.1M frames. We also equip Inter-X with versatile annotations of more than 34K fine-grained human part-level textual descriptions, semantic interaction categories, interaction order, and the relationship and personality of the subjects. Based on the elaborate annotations, we propose a unified benchmark composed of 4 categories of downstream tasks from both the perceptual and generative directions. Extensive experiments and comprehensive analysis show that Inter-X serves as a testbed for promoting the development of versatile human-human interaction analysis. Our dataset and benchmark will be publicly available for research purposes.

§ INTRODUCTION

The ability to perceive and generate human-human interactions is fundamental in constructing intelligent digital human systems, which have numerous applications in surveillance, AR/VR, games, and robotics. However, this task is challenging due to the complex and diverse interaction patterns, as well as self-occlusions. Although impressive progress has been made in perception tasks, e.g., skeleton-based interaction recognition <cit.>, and generation tasks, e.g., action/text-conditioned interaction generation <cit.>, they remain sub-optimal due to the lack of a comprehensive dataset covering all aspects of this task.

The advancement of human-human interaction analysis is accompanied by the construction of human-human interaction datasets <cit.>, as listed in <ref>. However, we believe that all the previous datasets remain unsatisfactory in the following aspects: 1) Expressive ability: dexterous hand gestures play important roles in human-human interactions, like “shaking hands”, “grabbing”, “waving”, etc. However, to the best of our knowledge, there is no large-scale dataset providing high-fidelity finger movements for human-human interactions. 2) Fine-grained text descriptions: text-driven generative tasks are promising for practical applications and have attracted much attention. Unlike coarse text annotations like “one person approaches the other and embraces her/him”, fine-grained descriptions with human part-level semantics enable controllable interaction generation and better alignment <cit.> between motion and text modalities, spatiotemporally. 3) Interaction order: during a causal human-human interaction period such as “kicking”, the actor and reactor are asymmetric.
However, the asymmetry property of human-human interactions is not considered in previous datasets. 4) Relationship and personality: the intimacy level and social relationships between individuals, together with their personalities, intuitively affect the interaction patterns and should be considered.

To address the aforementioned limitations of existing datasets, we build a large-scale human-human interaction dataset, called Inter-X, as depicted in <ref>, with precise, diverse human-human interaction sequences and detailed hand gestures. To capture Inter-X, we first build a MoCap system that combines an optical scheme to capture accurate body movement with an inertial solution to record hand gestures against occlusion. Inter-X covers 40 daily interaction categories and ∼11K motion sequences with more than 8.1M frames. We recruited 89 distinct subjects with different social relationships, i.e., strangers, friends, lovers, schoolmates, and family members. We also collect their familiarity levels and their individual Big Five personalities <cit.>.

With our proposed high-precision human-human interaction dataset and the versatile annotations, as illustrated in <ref>, we empower 4 categories of downstream tasks, half of them generative and the rest perceptive: 1) Texts enable not only controllable human interaction generation from natural language <cit.> but also the human interaction captioning task <cit.>; 2) Action categories facilitate action-conditioned human interaction generation <cit.> together with the human interaction recognition task <cit.>; 3) Interaction order enables causal human reaction generation <cit.> and the causal order inference task, e.g., detecting the perpetrator in surveillance scenarios; 4) Relationship and personality make stylized interaction generation <cit.> and personality assessment possible. We formulate our Inter-X dataset as a unified testing ground for all the downstream tasks. For the existing tasks, we extensively evaluate the state-of-the-art methods on Inter-X's test set with extensive discussions. We also build up the baseline methods and evaluation metrics for the remaining tasks.

In summary, our contributions are as follows: 1) We collect the currently largest human-human interaction dataset with accurate human body movements, diverse interaction patterns, and expressive hand gestures; 2) We complement Inter-X with fine-grained human part-level textual descriptions, semantic action categories, causal interaction order annotations, and relationship and personality information; 3) We propose a unified human-human interaction benchmark with 4 categories of downstream tasks to enable extensive research directions.

§ RELATED WORK

§.§ Human motion datasets

Compared to RGB videos, human motion representation is high-level, efficient, privacy-friendly and robust to illumination <cit.>. Human motion datasets with action labels <cit.> and text descriptions <cit.> facilitate the development of human motion understanding. Datasets accompanied by audio signals <cit.> and scene/object conditions <cit.> are also produced for real-world human-centric tasks.

§.§ Human-human interaction datasets

Besides the single-human motion datasets, many human-human interaction datasets have been proposed <cit.>, as listed in <ref>, with various sizes, modalities and functionalities. In particular, InterHuman <cit.> was recently built as a large-scale human-human interaction dataset with textual annotations.
However, as aforementioned, our Inter-X dataset still maintains advantages with respect to motion quality, fine-grained textual annotation, detailed hand gestures, and comprehensive annotation modalities.

§.§ Perceptive tasks for human motion

Skeleton-based human action recognition has been a long-standing problem <cit.>. Human interaction recognition <cit.> is a sub-field of it that relies on modeling the semantic correlations between humans. Besides human action recognition, human motions contain biometric cues about human subjects <cit.>. Gait recognition <cit.> aims to identify individuals from human motions, and other works like <cit.> regard human movements as personality predictors. Our Inter-X dataset with large-scale action-motion and text-motion pairs will promote the development of human action recognition. We also take a significant step forward in assessing human-human relationships and personalities from human motions.

§.§ Generative tasks for human motion

The goal of human motion generation is to generate plausible and diverse motion data based on different guidance. Human motion generation from action labels <cit.>, textual descriptions <cit.> and audio <cit.> has emerged in recent years. Besides single-person human motion generation, <cit.> attempt to generate multi-person interactions. In addition, a few works <cit.> tackle the problem of generating one person's reaction in two-person interactions. To enhance the expressibility of the generated motions, <cit.> address motion style transfer and stylized motion generation tasks. Our Inter-X dataset can be utilized for action- or text-conditioned human interaction generation tasks. The explicit interaction order annotations greatly facilitate the reaction generation task. At the same time, personalities and relationships can serve as factors for stylized human interaction generation.

§.§ Multimodality in vision

The world surrounding us involves multiple modalities <cit.>, and so do the ubiquitous human-human interactions. Many multimodal datasets <cit.> related to human motions have emerged in recent years. Based on our multimodal Inter-X dataset, we unify several categories of downstream tasks towards a deeper understanding of human-human interactions.

§ THE INTER-X DATASET

We present the large-scale Inter-X dataset towards versatile human-human interaction analysis, which consists of 11,388 interaction sequences and more than 8.1M frames, covering 40 daily interaction categories and 89 subjects.

§.§ Data Capturing System

Most of the previous datasets adopt multi-view RGB-based technologies <cit.>, i.e., extracting the human motion from RGB videos. Though natural RGB images are captured, these datasets suffer from severe occlusions and penetrations, and the subtle finger movements are hard to obtain precisely. In the trade-off between accuracy and natural RGB images <cit.>, we prioritize accuracy and thus choose an optical MoCap system for body movements. Additionally, we adopt inertial gloves to capture the finger gestures, which are robust to occlusions. The overview of our capturing system is illustrated in <ref>.

The length, width, and height of our MoCap venue are 8.5 meters, 5.4 meters, and 3.3 meters, respectively, which is sufficient to cover most daily human-human interactions. We deploy the OptiTrack MoCap system <cit.> with 20 PrimeX 22 infrared cameras.
Each camera captures at a resolution of 2048×1088 and a frame rate of 120 fps. The optical motion capture scheme ensures a ±0.15 mm error, much lower than that of RGB camera schemes. To capture the dexterous hand gestures without occlusion, we adopt the inertial solution of the commercial Noitom Perception Neuron Studio (PNS) gloves <cit.>. The subtle finger movements can be captured in real time, regardless of self-occlusion and occlusion by the other person during the interactions. We also re-calibrate the PNS gloves frequently to mitigate error accumulation.

Each group of two volunteers wears MoCap suits with 41 reflective markers and the inertial gloves, as depicted in <ref>(a),(b). Both are carefully calibrated before they perform the interactions. We provide timecodes for the OptiTrack MoCap system and the PNS gloves so that the body and hands can be temporally synchronized. For each batch of the shoot, we arrange five action categories with five repetitions for variability, which improves efficiency and also ensures the continuity of the volunteers' actions. The volunteers pause for several seconds between two interaction snippets to ease the subsequent segmentation. More details of the data capturing process can be found in the supplementary materials.

§.§ Data Postprocessing

The crux of the postprocessing is the alignment between the body poses from the OptiTrack MoCap system and the finger gestures from the inertial gloves. Temporally, we retrieve the intersection of the body pose and hand pose sequences. Spatially, they are naturally integrated through the shared wrist rotation from the triangular locating bracket. Given the spatiotemporally aligned motion sequences, the annotators segment the start and end frames of each atomic interaction snippet. We collect and check the temporal segmentation results, and then trim the long recorded motion sequences into atomic segments.

§ DATASET TAXONOMY

We enrich the high-precision human-human interaction sequences with multifaceted modalities, resulting in 11,388 pairs of SMPL-X <cit.> motion sequences, 273,312 synthetic multi-view RGB videos, 34,164 detailed text descriptions, 40 semantic action categories with diverse action/reaction patterns, interaction order labels, and the relationship for 59 groups and personality for 89 volunteers. <ref> shows some characteristics of the Inter-X dataset.

§.§ Interaction data

MoCap Data. We adopt the SMPL-X parametric model for its expressivity for human body poses and articulated hand poses, and its generality for various downstream tasks. Formally, the SMPL-X parameters are composed of the body pose parameters θ∈ℝ^N×55×3, the shape parameters β∈ℝ^N×10 and the translation parameters t∈ℝ^N×3, where N is the number of frames. We initialize the shape parameters β based on the height and the weight of the volunteer, as in <cit.>. Then an optimization algorithm is well-tuned to fit the SMPL-X parameters based on the captured key points:

E(θ, t) = λ_1 (1/N) ∑_j∈𝒥 λ_p ‖J_j(𝕄(θ, t)) − g_j‖_2^2 + λ_2 ‖θ‖_2^2,

where 𝒥 denotes the joint set, 𝕄 is the SMPL-X parametric model, J_j is the joint regressor function for joint j, and g is the skeleton captured from the MoCap system. λ_1, λ_2 and λ_p are weights, and we apply different weights for different body parts. Please refer to the supplementary materials for more details.

Rendered RGB. Synthetic data has broad applications for human motions <cit.>.
To enrich our Inter-X dataset with the RGB modality, we utilize the Unreal Engine to render multi-view 2D videos similar to <cit.>. We download free character models from Renderpeople <cit.>, and then retarget our full-body interaction data to the rigged characters. We select realistic scene models from the Unreal Engine Store and then place the Renderpeople models into them. We capture multi-view videos with 6 surrounding cameras, with a resolution of 1920×1080 and a frame rate of 30 fps. Ultimately, 273,312 synthesized RGB videos covering 11,388 interaction sequences, 4 different scenes and 6 viewpoints are generated.

§.§ Action categories

We choose the action categories by referring to the existing human-human interaction datasets <cit.> and large language models <cit.>. We finally settle on 40 daily human-human interaction categories which, to the best of our knowledge, provide the broadest coverage of interaction categories. We instruct each volunteer to perform naturally and diversely. For diversity, the volunteers can perform 1) diverse actions, e.g., raising the left hand, the right hand, or both hands when “raising hands”; 2) diverse reactions, e.g., resisting, taking a few steps back or falling down when being “pushed”; 3) diverse human body states, e.g., standing, sitting, crouching or even lying on the ground. Each interaction is repeated five times for variability.

§.§ Text descriptions

Textual descriptions, especially fine-grained ones, empower various practical applications for better perception and generation. We implement an annotation tool based on <cit.> so that the annotators can zoom and rotate the view through 360 degrees to observe the details of the interactions. For each interaction sequence, we ask 3 distinct annotators to describe it at the human part level, covering 1) the coarse body movements, 2) the finger movements, and 3) the relative orientations. We correct the typos of the collected textual descriptions with GPT-3.5 <cit.> and then spot-check the results. Upon analysis, the average length of our textual descriptions is ∼35 words, which significantly surpasses existing action datasets, reflecting the fine-grained nature of our texts.

§.§ Interaction Order

The study of causal relationships, where one person acts and the other reacts, could help extend the understanding of human-human interactions <cit.>. We ask the volunteers to explicitly annotate the order of the actor and reactor for each atomic interaction sequence.

§.§ Relationship & Personality

Exploring the correspondence between human motion and personality is a niche research area <cit.>, and the essence lies in disentangling the personality factors from motions. We adopt the dominant paradigm of the Big-Five Personality Model <cit.>. The participants are asked to fill out the NEO Five-Factor Inventory <cit.> to measure their personalities from the openness, conscientiousness, extraversion, agreeableness and neuroticism perspectives. Besides, the volunteers fill out a questionnaire to rank their familiarity level from 1 to 4, and to declare their social relationship among 5 categories, i.e., strangers, friends, lovers, schoolmates, and family.

§ TASK TAXONOMY

Our high-precision human-human interaction MoCap data with dexterous hand details bring vitality and challenges to existing tasks. Moreover, we also propose different downstream tasks with practical applications tailored to the versatile annotations.
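Before formalizing these tasks, it helps to picture how one Inter-X sample bundles the modalities above. The following dataclass is a schematic sketch only; the field names are ours and do not reflect the released file format.

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class InterXSample:
    """Illustrative bundle of one Inter-X interaction record."""
    pose_x: np.ndarray     # SMPL-X pose of person x, shape (N, 55, 3)
    transl_x: np.ndarray   # global translation of person x, shape (N, 3)
    pose_y: np.ndarray     # SMPL-X pose of person y, shape (N, 55, 3)
    transl_y: np.ndarray   # global translation of person y, shape (N, 3)
    texts: List[str]       # 3 part-level descriptions from distinct annotators
    action: int            # index into the 40 interaction categories
    x_is_actor: bool       # interaction order: True if person x acts first
    relationship: str      # "strangers", "friends", "lovers", ...
    familiarity: int       # familiarity level, 1..4
    big_five_x: np.ndarray # Big-Five personality scores of person x, shape (5,)
    big_five_y: np.ndarray # Big-Five personality scores of person y, shape (5,)
```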
Formally, we denote each human-human interaction sequence as m = <x, y>, and the annotations as action category l_a, text description l_t, causal interaction order l_c, relationship l_r and personalities l_p = <l_p_x, l_p_y>.

§.§ Texts related Tasks

Text-conditioned human interaction generation. Text-conditioned single-person human motion generation has been widely explored with various datasets <cit.> and models. Our fine-grained textual annotations pose opportunities for controllable human-human interaction generation <cit.>, as well as challenges in synthesizing the subtle hand gestures and aligning human part-level textual descriptions with interactions. The task can be represented as learning a function F_t2m: F_t2m(l_t) ↦ m.

Human interaction captioning. Human interaction captioning is a newly proposed task <cit.>: given a human-human interaction sequence, the goal is to generate corresponding textual descriptions rather than recognizing the action category, which can boost the alignment between texts and motion data and automatically generate diverse and reasonable textual descriptions. This task can be formulated as: F_m2t(m) ↦ l_t.

§.§ Actions related Tasks

Action-conditioned human interaction generation. Given an action label, F_a2m(·) aims to generate diverse and plausible human-human interaction sequences <cit.>. With our proposed Inter-X, we can generate more realistic and detailed interactions with fingers: F_a2m(l_a) ↦ m.

Human interaction recognition. Human interaction recognition has practical applications for visual surveillance <cit.>. We believe that integrating the fine hand movements will enhance the recognition ability of current models. We formulate this task as: F_m2a(m) ↦ l_a.

§.§ Interaction-order related Tasks

Human reaction generation. Human reaction generation <cit.> is a less explored problem, yet with broad applications in AR/VR and gaming. Explicit annotations of the actor-reactor order will advance research on the asymmetry of the different roles within human-human interactions: F_c2m(l_c, x) ↦ y.

Causal order inference. F_m2c(·) aims to differentiate the actor and reactor given a human interaction sequence, which will benefit intelligent surveillance and sports: F_m2c(m) ↦ l_c.

§.§ Relationship & Personality related Tasks

Stylized human interaction generation. The relationship between two participants and their personalities can serve as stylization factors for customized human interaction generation. The large number of participants, each with long sequences of motion data, enables us to accomplish this task. We formulate this task as: F_s2m(l_a, l_r, l_p) ↦ m.

Personality assessment. Previous works <cit.> regard the body movements of participants as personality predictors. Leveraging our Inter-X dataset, we propose a new task of personality and relationship assessment, which is vital for education, medicine, sports, etc. Specifically, F_m2s(m) ↦ {l_r, l_p}.

§ EXPERIMENTS

We extensively evaluate the state-of-the-art methods on the Inter-X dataset for the proposed downstream tasks with detailed discussion and analysis. In the main manuscript, we present four appealing tasks: 1) text-conditioned human interaction generation; 2) action-conditioned human interaction generation; 3) human reaction generation; and 4) human interaction recognition. The remaining experiments are presented in the supplementary materials.

§.§ Text-conditioned Interaction Generation

The detailed textual annotations combined with the human-human interaction sequences allow for text-conditioned human interaction generation.
We extensively evaluate 6 state-of-the-art text-to-motion models, i.e., TEMOS <cit.>, T2M <cit.>, MDM <cit.>, MDM-GRU <cit.>, ComMDM <cit.> and InterGen <cit.>. We modify the input and output dimensions to extend the single-person models to two-person settings and change the motion representation to SMPL-X <cit.> parameters.

Experiment setup. We adopt the same protocol as <cit.> to split our dataset into training, test, and validation sets with a ratio of 0.8, 0.15, and 0.05. Following <cit.>, we directly use the SMPL-X parameters of Inter-X rather than the manually designed motion representation of <cit.>. Different from single-person motion sequences that are canonicalized to the first frame, we keep the global translation of the interacting persons so that their relative positions are preserved. For all the methods, we adopt the 6D continuous rotation representation <cit.>, as in previous works <cit.>. For the diffusion-based models <cit.>, we train them with 1,000 noising timesteps and run 5 DDIM <cit.> sampling steps. Each model is trained on 4 NVIDIA A100 GPUs.

Evaluation metrics. We follow <cit.> to adopt the Frechet Inception Distance (FID) <cit.> to measure the latent distance between real and generated samples, diversity to measure latent variance, multimodality (MModality) to measure the diversity of the generated results for the same text, R Precision to measure the top-1, top-2 and top-3 accuracy of retrieving the ground-truth description from 31 randomly mismatched descriptions, and MultiModal distance (MM Dist) to calculate the latent distance between generated motions and texts. We train a motion feature extractor together with a text feature extractor in a contrastive manner to better align the features of texts and motions. We run all the evaluations 20 times (except MModality, 5 times) and report the averaged results with the confidence interval at 95%.

Quantitative results. The experimental results are depicted in <ref>. We observe that InterGen <cit.> achieves state-of-the-art performance except for the MM Dist metric, while ComMDM <cit.> achieves the worst R Precision scores. One possible explanation could be that ComMDM requires extra pre-training. From the results, we conclude that our Inter-X dataset leaves room for further exploration.

Qualitative results. We demonstrate the human-human interaction results generated by InterGen <cit.>, together with the generated results for the InterHuman dataset, for visual comparison in <ref>. The visualization results show that with our Inter-X, the expressibility of the human-human interaction is highly enhanced with detailed hand movements. Since InterHuman does not provide dexterous hand gestures, its generated results for “Handshake”, “Wave” and “Shoulder to shoulder” are implausible. Besides, the synthesized results of InterHuman contain occlusions and penetrations, while ours are much more precise. Please refer to the supplementary materials for more visual comparisons and video results.

§.§ Action-conditioned Interaction Generation

Inter-X contains 40 semantic action categories, currently the largest number among human-human interaction datasets. We conduct experiments on action-conditioned human interaction generation with the state-of-the-art methods, i.e., Action2Motion <cit.>, ACTOR <cit.>, MDM <cit.>, MDM-GRU <cit.> and Actformer <cit.>. As with the text-conditioned methods, we re-implement these methods to adapt to our dataset format.
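Since all compared models share the 6D continuous rotation parameterization mentioned above, a minimal numpy sketch of the standard matrix/6D round trip is given here for reference. This is our illustration of the cited representation, not code from any of the evaluated repositories.

```python
import numpy as np

def matrix_to_rot6d(R):
    """6D rep = the first two columns of a 3x3 rotation matrix."""
    return np.concatenate([R[:, 0], R[:, 1]])

def rot6d_to_matrix(d6):
    """Recover a valid rotation from the 6D rep via Gram-Schmidt."""
    a1, a2 = d6[:3], d6[3:]
    b1 = a1 / np.linalg.norm(a1)
    b2 = a2 - np.dot(b1, a2) * b1
    b2 = b2 / np.linalg.norm(b2)
    b3 = np.cross(b1, b2)
    return np.stack([b1, b2, b3], axis=1)   # columns are b1, b2, b3
```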
We adopt the same dataset split protocol and pose representation as for the text-conditioned methods.

Evaluation metrics. Similar to previous works <cit.> on human motion generation, we adopt the Frechet Inception Distance (FID) <cit.>, action recognition accuracy, diversity, and multi-modality for evaluation. For all these metrics, we train an action recognition model <cit.> for feature extraction, as in previous works. We generate 1,000 samples 20 times and report the average score with a confidence interval of 95%.

Quantitative results. From the experimental results in <ref>, Actformer <cit.> achieves the best FID and action recognition accuracy, MDM <cit.> achieves the best multi-modality score and MDM-GRU <cit.> yields the best diversity score. Although the interaction transformer is designed to model the interaction between persons, there is still substantial potential for further improvement.

§.§ Human Reaction Generation

We explicitly annotate the interaction order for causal human interactions, which enables human reaction generation. We select the MDM <cit.>, MDM-GRU <cit.>, RAIG <cit.> and AGRoL <cit.> models for evaluation. We modify the architecture of all these methods so that the motion of the actor serves as the input condition to the model, and the output is the human reaction.

Quantitative results. We report the quantitative results in <ref>. We observe that AGRoL <cit.> yields the best performance on all the evaluation metrics, while the GRU architecture achieves the worst results.

§.§ Human Interaction Recognition

Inter-X is built from the MoCap system with accurate 3D skeleton data. We evaluate five state-of-the-art skeleton-based action recognition models, i.e., ST-GCN <cit.>, 2s-AGCN <cit.>, HD-GCN <cit.>, CTR-GCN <cit.> and MS-G3D <cit.>, and report the Top-1 and Top-5 recognition accuracy in <ref>. Note that for simplicity, we only employ the skeleton joint stream without ensembling with the bone and motion streams <cit.>.

Quantitative results.
A possible alternative is referring to natural outdoor scenes or professional actors to explore the correlation between emotion and interactions;2) Atomic interactions: The Inter-X dataset contains 11,388 atomic human-human interaction sequences, rather than long human-human interaction sequences. We acknowledge that real-world interactions are much more complicated with longer durations and frequent transitions. However, we believe that our dataset with high precision and diversity can still serve as a cornerstone for more complicated human-human interaction analysis.Boarder impacts. With our proposed Inter-X dataset, one can facilitate the generative models for synthesizing human-human interaction sequences given detailed textual descriptions with plenty of applications in AR/VR and gaming. For perceptual tasks of human action recognition, one can also build intelligent models for intelligent surveillance.Acknowledgments: This work is supported by NSFC (62201342, 62101325), Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102), NSFC under Grant 62302246 and ZJNSFC under Grant LQ23F010008.ieeenat_fullname[ Inter-X: Towards Versatile Human-Human Interaction Analysis*Appendix** ]tablesection figuresection § EXTRA EXPERIMENTS In this section, we report the results for the remaining four settings of 1) Human interaction captioning; 2) Causal order inference; 3) Stylized human interaction generation, and 4) Personality assessment. §.§ Human interaction captioning Human interaction captioning aims to generate precise and diverse textual descriptions given the human interaction sequences. We follow <cit.> and evaluate for motion captioning models, , RAEs <cit.>, Seq2Seq <cit.>, SeqGAN <cit.> and TM2T <cit.>. Similar to the text-conditioned interaction generation task, we simply modify the input and output dimensions to extend these models to two-person settings and also change the motion representations to SMPL-X <cit.> representations.We follow the same protocol as text-conditioned interaction generation to split our dataset into training, testing and validation sets. Following <cit.>, we also adopt the R Precision and multimodal distance, together with the Bleu <cit.>, Rouge <cit.>, Cider <cit.> and BertScore <cit.> to extensively evaluate the performance of the motion captioning models.The quantitative results are demonstrated in <ref>. We can conclude that TM2T <cit.> achieves state-of-the-art performance for all the metrics. RAEs <cit.> fails to model long-term dependencies between human-human interaction sequences and texts, thus leading to low R Precision and linguistic evaluation metrics. Seq2seq <cit.> and SeqGAN <cit.> perform better than RAEs <cit.> by introducing the attention operation and the adversarial learning paradigm. §.§ Causal order inference Causal order inference aims to determine the order of the actor and the reactor in the interaction sequences. Similar to the human interaction recognition task, we adopt the models of ST-GCN <cit.>, 2s-AGCN <cit.>, HD-GCN <cit.>, CTR-GCN <cit.> and MS-G3D <cit.> as the backbone and model this problem as a binary classification task. From the quantitative results in <ref>, we can derive that MS-G3D <cit.> yields state-of-the-art performance over all the other methods. However, we found that this task is not that simple, and the performance is far from satisfactory, , only 76.8%. 
§.§ Stylized human interaction generationWe implement the stylized human interaction generation based on the vanilla human interaction generations models, , Action2Motion <cit.>, ACTOR <cit.>, MDM <cit.>, MDM-GRU <cit.> and Actformer <cit.>. We add the familiarity level as a style code injected into the model as in <cit.>. We also report the Frechet Inception Distance (FID) <cit.>, action recognition accuracy, diversity, and multi-modality in <ref>. From <ref>, we can derive that Actformer <cit.> achieves the best FID score and Accuracy, and MDM <cit.> achieves the best Diversity and Multimodality score. §.§ Personality assessmentPersonality assessment is to automatically obtain personalities through human interactions. Different from the previous dataset splitting methods, we split the train/test/val sets by person IDs with the ratio of 0.8, 0.15 and 0.05. We also adopt the models of ST-GCN <cit.>, 2s-AGCN <cit.>, HD-GCN <cit.>, CTR-GCN <cit.> and MS-G3D <cit.> as the backbone and model this problem as a regression task. We report the R^2 values for each personality element. From the quantitative results in <ref>, we can derive that MS-G3D <cit.> achieves the best performance over all the other methods, except for the element of “Agreeableness”, and CTR-GCN <cit.> achieves the best R^2 score for the “Agreeableness”. § SMPL-X OPTIMIZATION DETAILS Formally, our SMPL-X parameters consist of the body pose parameters θ∈ℝ^N×55× 3, translation t∈ℝ^N×3 and the shape parameters β∈ℝ^N×10, where N is the number of frames. We initialize the subjects' shape β based on their height and weight as <cit.>. Then a two-stage SMPL-X optimization algorithm is adopted to our Mocap data to obtain the SMPL-X parameters.In the first stage, we only optimize the pose parameters except that of fingers. The joint energy term 𝔼_j=1/N∑_i=0^N∑_j∈𝒥J_j^i(𝕄(θ_b,t)-g_j^i_2^2aims to fit the SMPL-X joints to our captured skeleton data, where 𝒥 denotes the joint set, 𝕄 is the SMPL-X parametric model, J_j^i is the joint regressor function for joint j at i-th frame, θ_b is the pose parameters excluding fingers, g is the Mocap skeleton data. A smoothing term 𝔼_smooth=1/N-1∑_i=0^N-1∑_j∈𝒥J_j^i+1-J_j^i_2^2alleviates the pose jittering between frames. A regularization term 𝔼_r=θ_b_2^2constrains the pose parameters from deviating too much. In total, our optimization objective at the first stage is:𝔼_1=λ_j𝔼_j+λ_smooth𝔼_smooth+λ_r𝔼_r,and we set λ_j,λ_smooth,λ_r=1,0.1,0.01.For the second stage, we append the finger pose parameters and jointly optimize the whole-body pose parameters. We especially emphasize fingers' optimization, thus we separate fingers' pose parameters from the body part. Our optimization objective in the second stage is summarized as:𝔼_b= λ_j𝔼_j+λ_smooth𝔼_smooth+λ_r𝔼_r,𝔼_h=λ_j_h𝔼_j_h+λ_smooth_h𝔼_smooth_h+λ_r_h𝔼_r_h,𝔼_2=𝔼_b+𝔼_h,we set λ_j,λ_smooth,λ_r=1,0.1,0.01 for the body part and λ_j_h,λ_smooth_h,λr_h=10,0.01,0.001 for fingers.§ THE ACTION CATEGORIES We provide the names of the 40 human-human interaction categories in <ref>.§ SAMPLES OF TEXTUAL ANNOTATIONSWe provide some samples of the textual annotations of our Inter-X dataset in <ref>.§ MORE VISUALIZATION RESULTSWe provide the rendered RGB frames based on the Unreal Engine in <ref>. We also provide more visualization samples of Inter-X in the supplementary video.
http://arxiv.org/abs/2312.16051v1
{ "authors": [ "Liang Xu", "Xintao Lv", "Yichao Yan", "Xin Jin", "Shuwen Wu", "Congsheng Xu", "Yifan Liu", "Yizhou Zhou", "Fengyun Rao", "Xingdong Sheng", "Yunhui Liu", "Wenjun Zeng", "Xiaokang Yang" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226133605", "title": "Inter-X: Towards Versatile Human-Human Interaction Analysis" }
Liman Wang^* and Hanyang Zhong^* (^*co-first authors)
University of York; {lw2391, hanyang.zhong}@york.ac.uk

LLM-SAP: Large Language Model Situational Awareness Based Planning
==================================================================

This work pioneers evaluating emergent planning capabilities based on situational awareness in large language models. We contribute (i) novel benchmarks and metrics for standardised assessment; (ii) a unique dataset to spur progress; and (iii) demonstrations that prompting and multi-agent schemes significantly enhance planning performance on context-sensitive planning tasks. Positioning this within situated-agent and automated-planning research, we highlight an inherent reliability challenge: efficiently mapping world states to actions without environmental guidance remains open despite advances in simulated domains. Although out of scope here, limitations around validation methodology and data availability indicate exciting directions, including fine-tuning on expanded planning corpora and optimizations for triggering fast latent planning. By conclusively demonstrating current methods' promise and limitations via rigorous comparison, we catalyze the investigation of reliable goal-directed reasoning for situated agents. The dataset is available at https://github.com/HanyangZhong/Situational_Planning_datasets.

Keywords: Large Language Model, Planning, Situational Awareness, Situated Agent, Multi-Agent Reasoning

§ INTRODUCTION

Developing AI agents that can make flexible decisions is challenging, as they need to handle unpredictability in the real world <cit.>. Humans determine appropriate interventions through situational awareness, while inadequate situational awareness has been identified as a primary cause of accidents due to human error <cit.>. As Yadav <cit.> notes, rigorously examining situational awareness in large language models (LLMs) is crucial to steer their safe and reliable development. Without broader awareness, any seemingly beneficial action risks unintended harm <cit.>. For example, an autonomous agent may need nuanced judgements to avoid potential harm in situations like a toddler reaching for a pot or playing with a knife.

This research shows LLMs can exhibit human-like planning capacities based on situational awareness. As Fig. <ref> illustrates, when prompted for situational awareness based planning spanning perception, comprehension and projection <cit.>, or when provided reasoning feedback, the LLM displays improved deductive reasoning requiring perspective-taking and contemplating potential outcomes. This study distinguishes itself from previous AI reasoning and planning research in significant ways. Much existing work evaluates task-specific planning or makes assumptions about predefined steps under strict conditions <cit.>. Additionally, various agents reactively generate plans only after explicit instructions or demands <cit.>, rather than proactively. In contrast, our approach centres on assessing and enhancing models' capacities for proactive situational planning when faced with real, open-world dilemmas. Without rigid constraints or environmental feedback, the models must leverage deductive logic to map conjectured actions while weighing consequences using only a descriptive starting scenario and prompts.
By prioritizing latent planning that mirrors human cognition, namely situational awareness based rather than domain-limited planning, our method marks a break from previous assumptions and evaluation practices. The remainder of this paper overviews the methodology in Section 2, details experiments in Section 3, analyzes results in Section 4 and concludes with a summary and future directions in Section 5.

§ METHODOLOGY

In this section, we outline the key challenges and methodological components that enable collaborative multi-agent reasoning to enhance LLMs' situational planning capacities.

§.§ Task Formulation and Key Challenges

We formulate situational awareness based planning as grounded inference over a dynamic hazard scenario s, with s ∈ S where S denotes the hazard situation space. The input comprises an unordered set of concepts x = {c_1, c_2, ..., c_k} ⊆ C encapsulating entities, events, and temporal evolution within s, with C the overall conceptual vocabulary. The output is a step-wise plan π: S → A with actions a_1, a_2, ... ∈ A, where A denotes the action space of possible interventions. Learning the planning policy π: S → A requires overcoming two intrinsic challenges:

* Failing to achieve high levels of situational awareness - lacking perception, comprehension, and projection of the hazard environment when deducing appropriate state-action mappings;
* Struggling to anticipate potential downstream consequences of planned actions on human safety and property, due to an inadequate understanding of hazardous situation dynamics.

By formulating hazard remediation as a conceptual planning task requiring strong situational awareness, we evaluate the multidimensional latent reasoning essential for reliable situated agents operating in hazardous environments.

§.§ Multi-AI Agents Enhance Reasoning and Accuracy

Recent work has shown that employing multiple LLMs in a cooperative framework, whether collaborative or adversarial, can enhance reasoning and factual accuracy. As Du et al. discuss, debate between agents allows them to critique each other's logic, correcting flaws <cit.>. Similarly, Liang et al. find disagreement motivates broader reasoning as agents try to outdo each other <cit.>. In these cases, collaboration complements individual strengths <cit.>. In this work, we use two LLM agents - LLM_gen for plan generation and LLM_eval for critical evaluation. We rely on the closed-loop promotion between these complementary roles to improve latent planning.

§.§ State-based Planning with Feedback

Current AI systems that rely solely on rigid, context-insensitive rules risk unintended outcomes when deployed in complex, real-world environments <cit.>. To enable more reliable and ethical decision-making, architectures should aspire to model interdependent variables and causal relationships, similar to fluid human reasoning processes. One approach is to have LLM agents iteratively generate and assess potential solutions before implementation. As an example, we frame the design of a finite state machine (FSM) <cit.> as a collaborative process between two models. A latent FSM plan can be defined by a tuple M = (S, T, A), comprising a set of states S, transitions T, and actions A. The process begins by representing the plan's reasoning as R: the generator agent's reasoning (R_LLM_gen) produces plans, and the evaluator agent's reasoning (R_LLM_eval) assesses them. R_LLM_gen proposes a candidate FSM plan M̂, which R_LLM_eval then scores and provides feedback f on. R_LLM_gen incorporates this feedback into the next proposal.
This iterative loop continues until the score of M̂ is higher than that of M^* (the benchmark plan), at which point M̂ is adopted as the new optimal plan M^*. This approach, shown in Algorithm <ref>, allows for tight refinement loops resembling human reasoning. By assessing solutions before real-world implementation, unintended outcomes can potentially be anticipated and mitigated.

Fig. <ref> visually depicts the iterative process between LLM_gen and LLM_eval. Consider a scenario where a young child attempts to touch a hot pot on an active stove, posing a safety risk. The housekeeper robot observing this scene starts planning appropriate interventions. First, the instruction-prompted LLM_gen leverages its reasoning capacity to imagine potential outcomes and draft candidate FSM plans to mitigate harm. For instance, abruptly obstructing the child may startle them, suggesting a gentle approach or distraction with toys instead. If the child refuses intervention and gets burned, emergency responses may become necessary. LLM_gen passes its proposed plan M̂, together with the comparison plan (a human demonstration in the first round and the previous plan thereafter), to LLM_eval for evaluative scoring and feedback f. LLM_gen then incorporates this f into the next round of planning. Over one or more proposal-evaluation iterations, the agents converge on a new optimal FSM plan M^* whose score surpasses that of the benchmark plan and which can morally and robustly handle edge cases through situational inference. Thus, by allowing the models to "think ahead" via deductive reasoning before real-world deployment, unintended consequences can potentially be anticipated and addressed. As LLM capabilities continue advancing, such techniques appear promising for instilling reliability and ethics when developing AI systems for physical-world interaction.

§.§ Formation of Prompts

As depicted in Fig. <ref>, the prompts provided to the generative model (LLM_gen) contain the scene description, the situational awareness based planning prompt, the actions list, and an exemplar plan. The situational awareness based planning prompt (SAP prompt) aims to stimulate sophisticated reasoning by directing the model to deeply consider the diverse needs and potential interactions among people, animals and objects within the scene. By explicitly prompting inference of other entities' requirements and prediction of how situations may dynamically unfold, the prompt promotes empathy and holistic thinking, which are integral when designing comprehensive plans. The one-shot exemplar further demonstrates the desired plan structure in code without revealing solutions tailored to the specific evaluation scenario. In contrast, the prompt for the evaluative model (LLM_eval) contains a generated FSM plan from LLM_gen, a high-quality benchmark plan, and the scoring criteria descriptions to assess plan quality over iterative refinements (see Appendix Fig. <ref>). Initially, benchmark plans comprise manually authored solutions, while subsequent iterations utilize the top-scoring auto-generated plan from the prior round.

§ EXPERIMENTS

To systematically assess LLM planning capacities, standardized benchmark scenarios are developed along with quantitative scoring methodologies.

§.§ Evaluation Scenarios

We first collected a dataset of over 500 hazardous scenarios situated in home environments. From this dataset, we systematically sampled 24 representative vignettes showcasing common home hazards across four reasoning complexity levels, as categorized in Table <ref> and Fig. <ref>.
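As a compact reference for the generate-evaluate loop of Algorithm <ref>, the following Python sketch captures its control flow. Here `llm_generate` and `llm_evaluate` are hypothetical stand-ins for prompted calls to LLM_gen and LLM_eval, not an actual API.

```python
def closed_loop_planning(scene, benchmark_plan, llm_generate, llm_evaluate,
                         max_rounds=5):
    """Sketch of the iterative FSM refinement between LLM_gen and LLM_eval.

    llm_generate(scene, feedback)      -> candidate FSM plan (text)
    llm_evaluate(candidate, reference) -> (score, feedback)
    """
    best_plan = benchmark_plan
    best_score, _ = llm_evaluate(best_plan, benchmark_plan)
    feedback = None
    for _ in range(max_rounds):
        candidate = llm_generate(scene, feedback)             # LLM_gen proposes M_hat
        score, feedback = llm_evaluate(candidate, best_plan)  # LLM_eval scores + critiques
        if score > best_score:           # exit once M_hat beats the benchmark M*
            return candidate             # adopt M_hat as the new optimal plan M*
    return best_plan
```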
High-resolution textual descriptions of these 24 evaluation scenes were then produced using GPT-4 <cit.>. Accompanying gold standard solutions, formally encoding state-based hazard interventions, were manually constructed and verified by experts for each scene. This standardized 24-scene benchmark with fluent human reasoning enables quantitative analysis of LLM planning capacities relative to human proficiency. By summarizing the key complexity dimensions and gold standard solutions for 24 systematically selected scenes, we aim to evaluate model planning capacities on a wide distribution of common situations requiring hazard intervention. The full dataset will be released; it allows pushing beyond the scope of the defined benchmark towards continual learning of planning skills for household robots.

§.§ Action Set

Because the purpose of this study is to quantify the complex planning ability of LLMs, certain limitations are placed on the action set of the AI agents to facilitate fairness and consistency in subsequent evaluations. The action set encompasses 56 distinct robot behaviours commonly utilized in domestic scenarios; representative actions are illustrated in the action enumeration diagram to the left of the central region in Fig. 2 (for more details see Appendix <ref>). It provides a thoughtful baseline of functionality, referencing some of the leading intelligent robotics projects <cit.>.

§.§ Evaluation Dimensions

Seven scoring dimensions have been developed, as shown in Table <ref>. These seven dimensions provide a comprehensive methodology for assessing the latent planning, i.e., the FSM designs. Touching on coverage, complexity, safety, reusability, user experience and coherence, the framework allows evaluating structured completeness, validation requirements, real-world reliability, adaptability, human factors and solution integrity. Using these lenses together promotes designs that are robust, dependable, future-proof, ethical and aligned to specifications. The dimensions offer multi-faceted technical and operational insight. Scoring FSMs across the seven key dimensions on a scale from 0 to 10 enables impartial quantitative evaluation of overall plan quality while revealing relative strengths and weaknesses to prioritize refinements. The overall score is calculated as the mean of the seven dimension scores.

§.§ Evaluation Metrics

Motivated by discussions of inconsistent human evaluation in Iskender et al. <cit.> and the inadequate quality of automatic metrics highlighted in Sottano et al. <cit.>, we introduce a rank-based scoring (RBS) method to help mitigate potential reliability issues when evaluating FSM plans. This aims to increase consistency compared to absolute scoring methods prone to rater variability. The RBS score provides an objective aggregation by comparing models pairwise on each evaluation scenario and assigning differential rankings based on relative performance. This eliminates variability from subjective absolute scoring. The head-to-head comparisons also allow powerful models like GPT-4 to participate in the evaluation: rather than requiring predefined output standards, GPT-4 can provide comparative judgments on model outputs. Given two models M = {M_1, M_2} evaluated on N scenarios with D scoring dimensions, the models were compared pairwise for each scenario i, and scores s_ijl were assigned from 0 to 10 across dimensions j for each model l.
Models were ranked r_ik ∈ {1, 2} per scenario based on total score:

r_ik = 1 if k = arg max_{l∈{1,2}} ∑_{j=1}^{D} s_ijl, and r_ik = 2 otherwise.

The higher scoring model was assigned rank 1 (1 point), and the lower scoring model rank 2 (2 points). If the two models had equal total scores for a scenario, both were assigned a mid-point rank of 1.5 (1.5 points). After evaluating all scenarios, the ranking scores were aggregated to produce an RBS score R_k per model:

R_k = (1/N) ∑_{i=1}^{N} r_ik.

The RBS score indicates relative performance, with scores closer to 1 denoting better performance compared to the other model. By relying on comparative judgments between model outputs rather than absolute scores, the RBS methodology aims to provide a more reliable approach for evaluating text that requires subjective human judgment. Furthermore, the comparative nature allows the incorporation of evaluative models like GPT-4.

§ RESULTS

To systematically evaluate LLMs' planning capacities, we conduct experiments assessing model performance on a standardized benchmark of 24 home hazard scenarios across four reasoning complexity levels.

§.§ LLM Selection

This experiment tests the commercial models GPT-4, GPT-3.5 and Claude-2 <cit.> and the open-source options Llama-2 <cit.>, LLaVA <cit.>, Vicuna <cit.>, MiniGPT-4 <cit.> and CodeLlama <cit.> on hazard planning using scene-informed one-shot prompts. The analysis finds many open-source models struggled to leverage the examples, with long contexts causing attention drift and impairing scene comprehension. In contrast, GPT-4, GPT-3.5 and Claude-2 show more robust mapping between examples and planning tasks. Quantitative and qualitative testing evidences stronger scene understanding in the commercial models despite drift risks. Thus, GPT-4, GPT-3.5, and Claude-2 are selected for further hazard planning evaluation based on their superior grounding capabilities.

§.§ Impact of the SAP Prompt

An experiment is conducted to evaluate the effect of the SAP prompt on the quality of planning. As shown in Table <ref>, three LLMs, GPT-4, GPT-3.5, and Claude-2, are tested with or without the SAP prompt on the benchmark scenarios across four complexity levels. The RBS methodology is utilized, where models are compared pairwise on each scenario and differentially ranked. Adding the SAP prompt leads to improved RBS scores for all three models compared to their counterparts without it, indicating enhanced planning. Specifically, GPT-4 with the SAP prompt achieves the best overall RBS score of 1.21, significantly outperforming GPT-4 without the prompt, which has an RBS score of 2.04. Analysis of the reasoning level 3 scenarios, involving dangerous interactions with children, the elderly, and pets, shows GPT-4 with the SAP prompt substantially exceeded the second-best model. This suggests the prompt provides particular value for complex, nuanced planning situations requiring perspective-taking and contemplating potential outcomes (see Appendix <ref> for ablation studies) <cit.>.
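Because all the comparisons in this section rely on RBS, a compact numpy sketch of the pairwise ranking and aggregation defined above is included here. It is an illustration of the two equations, not the authors' evaluation script.

```python
import numpy as np

def rbs_scores(s):
    """Rank-based scoring for a pair of models.

    s : array of shape (N, D, 2) holding the 0-10 score s_ijl for
        scenario i, dimension j and model l.
    Returns R_k for each model; values closer to 1 are better.
    """
    totals = s.sum(axis=1)                            # (N, 2) per-scenario totals
    ranks = np.where(totals == totals.max(axis=1, keepdims=True), 1.0, 2.0)
    ranks[totals[:, 0] == totals[:, 1]] = 1.5         # ties get the mid-point rank
    return ranks.mean(axis=0)                         # R_k = (1/N) * sum_i r_ik
```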
Rankings are then compared to expert human rank orders to quantify accuracy. Tests find both GPT-4 and Claude-2 could rank FSM pairs with 75.7% agreement to human ranking, evidencing reliability for comparative evaluation. However, accuracy drops significantly for ranking 4 or more FSMs. Table <ref> and Fig. <ref> show that GPT-4 and Claude-2 aligned best with human judgment when evaluating outputs from the models themselves. For example, GPT-4's scoring of GPT-4 FSMs closely matches expert ranking. This demonstrates LLMs can provide sound comparative assessments, particularly for their own model family's outputs. The experiments illustrate LLMs' potential to serve as evaluators that mimic human appraisals of planning formalisms. By relying on relative rather than absolute assessments, variability is reduced.§.§ Multi-Agent ImprovementA closed-loop experiment is conducted to quantify improvements in planning through iterative generation and evaluation between two agents. GPT-3.5 with the SAP prompt is utilized as the generative model (LLM_gen) and Claude-2 served as the evaluative model (LLM_eval). LLM_gen first proposes an FSM, which LLM_eval then scores and provides feedback. Incorporating this feedback, LLM_gen produced an enhanced FSM in the next round. Testing shows the updated FSM design exceeded the initial round's quality with a higher RBS score after just one iteration. As shown in Table <ref>, the closed-loop FSM surpasses even GPT-4 with the SAP prompt, previously the top standalone performer. Further analysis reveals the feedback-improved output contains more planning details, leading to increased RBS scores. This demonstrates two weaker models can effectively boost each other’s deficiencies through collaborative dialogue. The results empirically prove interactive cycles of generative prototyping and critical evaluation between LLMs can unlock latent strengths. By leveraging complementary capacities, the multi-agent approach strengthens deductive reasoning and planning beyond individual model limits.§ CONCLUSION Our analysis provides initial evidence that situational awareness prompts and multi-agent feedback improve planning reliability and humaneness. Experiments on a home hazard benchmark show gains when prompting relationship reasoning and consequence consideration, as exemplified by the planning outputs in Appendix <ref>. Results specifically demonstrate that GPT-3.5, GPT-4, and Claude-2 have superior deduction and mapping with open-ended prompts, especially for complex safety scenarios. Additionally, a complete closed-loop experiment proves that multi-agent collaboration boosts quality. While advances are promising, efficiently condensing fluid cognition absent environmental feedback remains an open challenge. Important future work includes expanded planning corpus training and coupling deduction with simulation. As capabilities progress, introduced techniques appear useful for instilling robustness. However, oversight guardrails remain vital for ensuring ethical alignment before real-world instantiation. 
By assessing generative planning capacities and limitations, this research steers progress at the intersection of deduction, awareness, and embodiment while balancing innovation with ethical caution. § APPENDIX SUPPLEMENT §.§ Action Set Supplement §.§ Human Annotator EvaluationBased on these scenarios with different levels of complexity, three human annotators evaluate each of the 24 scenarios, and the average of their evaluations is rounded to obtain the final score for each scenario.As shown in Tables <ref>, <ref> and <ref>, the results are divided into three parts, displaying the detailed human rankings for each question and the mean of the overall rankings. For more specific content, such as the best demonstrations written by humans and detailed descriptions of the scenes, please see our GitHub repository. The annotation "None" indicates that the model could not produce an output in that case due to its limitations. §.§ Planning Qualitative Analysis A review of the state machines produced by large language models on this dataset reveals several common shortcomings compared to human-authored finite state machines. GPT-4 demonstrates reasonably coherent state transitions and actions in constrained situations but still falls short of human planning, especially regarding long-term motivations. GPT-4 can understand scenarios comparably to GPT-4+SAP, with logical reasoning, but lacks care-oriented foresight, risking harm. GPT-3.5+SAP and GPT-3.5 show tendencies toward unclear situational comprehension, chaotic logic, and state machine instability compared to GPT-4. GPT-3.5's compliance suffers with respect to action specificity, a weakness partially mitigated by SAP. While Claude-2+SAP attempts simple state changes, safety awareness and self-limitation handling are often lacking; SAP improves this somewhat, but results remain inconsistent. Claude-2 displays straightforward but limited state transitions, with unconventional logic and unresolved scenarios upon analysis.§ ABLATION STUDIES This section conducts experiments removing parts of the SAP prompt to validate the performance boost from the full SAP prompt compared to other prompts for this situational inference task. §.§ Ablation Study of One-shot ImpactThe SAP prompt typically includes a one-shot example to improve language model code generation. However, as these hand-crafted examples may introduce extra information beyond formatting, this study investigates removing the one-shot example and using only abstract code format descriptions. By progressively eliminating the one-shot example while retaining the formatting guidelines, we can isolate the contributions of each component. As shown in Table <ref>, even without one-shot examples and prompted only with the target formatting, adding the SAP prompt substantially improves GPT-4 performance as scored by the Claude-2 evaluator. We can conclude that the SAP prompt enhances language model code synthesis capabilities regardless of one-shot demonstrations. §.§ Ablation Study of Other PromptingTargeted ablation studies are conducted to rigorously evaluate the proposed SAP prompt against existing methods for situational inference planning tasks. Specifically, the differential impact on performance is assessed using various key prompts, including Zero_shot_COT <cit.> and the parts of EmotionPrompt <cit.> related to social effect and self-esteem (EP05 and EP09). As shown in Table <ref>, configurations employing the full SAP prompt attain better RBS evaluation scores from the GPT-4 evaluator than all other prompt combinations tested.
Hence, the comprehensive improvements yielded by the SAP prompt for situational inference capabilities are empirically demonstrated on this representative task, validating its effectiveness over prevailing approaches.§ EXAMPLES FOR DEMONSTRATING Notes: Since these images are produced by DALL-E and may diverge somewhat from the textual description, the text portrayal should take priority over the visual depiction for this experiment.§.§ Example 1 in scene 10Scene description: Fig. <ref>, result 1: Fig. <ref>, result 2: Fig. <ref>.§.§ Example 2 in scene 18Scene description: Fig. <ref>, result 1: Fig. <ref>, result 2: Fig. <ref>.§.§ Example 3 in scene 6Scene description: Fig. <ref>, result 1: Fig. <ref>, result 2: Fig. <ref>. §.§ Example 4 in scene 20 from closed-loop multi-agentScene description: Fig. <ref>, result 1: Fig. <ref>, result 2: Fig. <ref>, result 3: Fig. <ref>.
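To complement Example 4, the closed-loop generate, evaluate and refine protocol can be summarized in a few lines. This is an illustrative sketch of ours; llm_generate and llm_evaluate are hypothetical placeholders, not the authors' implementation or a real API.

def llm_generate(scene, prior=None, feedback=None):
    # Hypothetical wrapper around the generative model (e.g. GPT-3.5 + SAP).
    # A real implementation would call a chat-completion API here.
    raise NotImplementedError

def llm_evaluate(scene, fsm):
    # Hypothetical wrapper around the evaluative model (e.g. Claude-2),
    # returning a (score, natural-language feedback) pair.
    raise NotImplementedError

def closed_loop_fsm(scene, rounds=1):
    """Generate -> evaluate -> refine cycle between the two agents."""
    fsm = llm_generate(scene)                        # LLM_gen proposes an FSM
    for _ in range(rounds):
        score, feedback = llm_evaluate(scene, fsm)   # LLM_eval scores and critiques
        fsm = llm_generate(scene, prior=fsm, feedback=feedback)  # refined FSM
    return fsm

In the experiments reported above, a single round of this loop already suffices to surpass the best standalone configuration.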
http://arxiv.org/abs/2312.16127v3
{ "authors": [ "Liman Wang", "Hanyang Zhong" ], "categories": [ "cs.AI" ], "primary_category": "cs.AI", "published": "20231226171909", "title": "LLM-SAP: Large Language Model Situational Awareness Based Planning" }
: Bringing Performance Profiles into Integrated Development Environments

Qidong Zhao (North Carolina State University, Raleigh, USA, qzhao24@ncsu.edu), Milind Chabbi (Scalable Machines Research, San Francisco, USA, milind@scalablemachines.org), Xu Liu (North Carolina State University, Raleigh, USA, xliu88@ncsu.edu)

January 14, 2024
==========================================================================================================================================================================================================================================================

We study the conormal geometry of theta divisors of certain singular bielliptic curves. We apply these results to the boundary components ℬ_𝐝 of the bielliptic Prym locus. We obtain results on the Gauss map, and compute the Chern-Mather class and the characteristic cycle of the intersection complex of the corresponding Prym theta divisor.§ INTRODUCTION Let 𝒜_g denote the moduli space of g-dimensional principally polarized abelian varieties (ppav's for short) over the complex numbers. The bielliptic Prym locus ℬ_g⊂𝒜_g is defined as the closure in 𝒜_g of the locus of Prym varieties of étale double covers of bielliptic curves, i.e. curves admitting a double cover to an elliptic curve E. In <cit.>, the authors introduce the boundary components ℬ_𝐝⊂ℬ_g for 𝐝=(d_1,…,d_n) with |𝐝| ≔ d_1+⋯+d_n=g. These correspond to degenerations of the above situation where the elliptic curve E degenerates to an n-cycle of ℙ^1's, i.e. E has n components, which are rational, and its dual graph is the cyclic n-graph. By <cit.>, two cases are of particular interest from the point of view of the Schottky problem: when 𝐝=(g) or 𝐝=(1,g-1), for a general (P,Ξ)∈ℬ_𝐝, the degree of the Gauss map is the same as for non-hyperelliptic Jacobians (and these are the only multidegrees 𝐝 for which this happens). In the present paper, we carry out a detailed study of the conormal geometry of the Prym theta divisor in these two cases. The results will enable us to show in a subsequent paper that the Tannakian representation associated to these ppav's (as in <cit.>) differs from that of Jacobians. Note that Pryms in ℬ_𝐝 are never Jacobians when g≥ 4 by <cit.>. Let (A,Θ)∈𝒜_g and Z⊂ A a subvariety. We define the conormal variety to Z as the closureΛ_Z ≔ {(x,ξ)∈ T^∨ A | x∈ Z_sm , ξ(T_x Z)=0 }^–⊂ T^∨ A.The projectivized conormal variety is the projectivization of Λ_Z in ℙ(T^∨ A). Translations induce a canonical trivialization of the cotangent bundle T^∨ A=A× V, where V ≔ T^∨_0 A. We define the Gauss map attached to Z as the projectionγ_Z: Λ_Z →ℙV.Let q:A×ℙV → A be the projection onto the first factor and h ≔ c_1(𝒪_ℙV(1))∈ H^2(ℙV,ℤ). For r≥0, the r-th Chern-Mather class of Z is defined asc_M,r(Z) ≔ q_∗( h^r∩ [Λ_Z] ) ∈ H_2r(A,ℤ),where h is pulled back to ℙ(T^∨ A) in the obvious way. The degree of the Gauss map is by definition the degree of the 0-th Chern-Mather class. We extend the result of <cit.> to Chern-Mather classes of higher dimension: [<ref>] Let (P,Ξ)∈ℬ_(g)∪ℬ_(1,g-1); then the Chern-Mather classes of Ξ are given byc_M,r(Ξ)= ξ^g-r/(g-r)! \binom{2g-2r-2}{g-r-1}∩ [P]∈ H_2r(P,ℤ)for r≥ 1, where ξ=c_1(𝒪_P(Ξ))∈ H^2(P,ℤ).This coincides with the expected Chern-Mather classes for Jacobians. The Gauss map γ_Ξ:Λ_Ξ→ℙV is not finite in general, but we have the following bound on the dimension of the locus above which finiteness fails: [<ref>] Let (P,Ξ)∈ℬ_(g)∪ℬ_(1,g-1); then away from a subset S⊂ℙV of codimension at least 3, γ_Ξ is finite. Recall that the characteristic cycle of the intersection complex, CC(IC_Θ), is irreducible for a non-hyperelliptic Jacobian (JC,Θ) <cit.>. As a consistency check on the comparison with Jacobians, we record the degree count implied by the theorem above before stating our analogue.
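The following one-line degree count is our gloss, not a statement from the paper: extrapolating the displayed formula to r=0 and using that a principal polarization satisfies ∫_P ξ^g/g! = 1, one obtains

\[
\deg \gamma_\Xi \;=\; \deg c_{M,0}(\Xi) \;=\; \int_P \frac{\xi^{g}}{g!}\binom{2g-2}{g-1} \;=\; \binom{2g-2}{g-1},
\]

which is precisely the degree of the Gauss map of the theta divisor of a non-hyperelliptic Jacobian of genus g, consistent with the Schottky-theoretic remark above. (The theorem itself is stated for r≥1, so this extrapolation is only a plausibility check.)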
We have an analogous result for the loci ℬ_𝐝, apart from a correction term in the odd-dimensional case: [<ref>] Let g≥ 4 and (P,Ξ)∈ℬ_(g)∪ℬ_(1,g-1). If g is even, then CC(IC_Ξ)=Λ_Ξ. If g is odd, then CC(IC_Ξ)=Λ_Ξ+∑_x 2 Λ_x, where x runs through the set of isolated singularities of Ξ described below. Note that for a general (P,Ξ)∈ℬ_𝐝, this set of isolated singularities is empty. We will give in Section <ref> a very explicit description of it in terms of the ramification points of the double cover to E. By <ref>, these points are isolated quadratic singularities of maximal rank of Ξ, which explains why they appear in the characteristic cycle depending on the parity of g. All of these results follow from the study of a particular type of singular bielliptic curves, namely those that admit a double cover π:C→ E to a cycle of ℙ^1's, such that the corresponding involution on C fixes the singular points but does not exchange the branches at these points. An essential tool in the study of theta divisors of smooth curves is the Abel-Jacobi map. We construct an analogue of the Abel-Jacobi map in our particular setting (which works more generally in the setting of cyclic curves). Note that if 𝐝=(g), then C is irreducible and our construction coincides, over the locus of line bundles, with already existing generalizations of the Abel-Jacobi map using the Hilbert scheme, as in <cit.>. In the reducible case, we replace the Hilbert scheme with a blowup of the symmetric product. This construction has the advantage of being very explicit and allows computations in cohomology.In Section <ref> we recall the construction and basic properties of the loci ℬ_𝐝. In Section <ref> we study singular bielliptic curves, construct the Abel-Jacobi map and derive some of its properties. We then apply the results to the Prym varieties in Section <ref>, where we prove Theorems <ref>, <ref> and <ref>. We work over the field of complex numbers. § THE BIELLIPTIC PRYM LOCUSIn this section we recall general facts on bielliptic Prym varieties. The references are <cit.>, <cit.> and <cit.>. The curves considered will always be complete connected nodal curves over ℂ. By the genus of a curve C we mean the arithmetic genus p_a(C) ≔ 1- χ(C,𝒪_C). Let π:C̃→ C be a double cover of nodal curves, corresponding to an involution σ:C̃→C̃. At a nodal point x∈C̃, π is of one of the following three types (see <cit.>): * The involution doesn't fix x.* The involution fixes x and exchanges the two branches.* The involution fixes x but preserves each branch. For I⊂{1,2,3}, we say that π is of type I if it is of type (i) for some i∈ I at every singular point of C̃. We say that π is of type (∗) if π is of type (3) and moreover étale away from the singular locus. This corresponds to the (∗) condition in <cit.>. We define the ramification divisor R of π as the Cartier divisor on C̃ defined by the exact sequence0 →π^∗ω_C →ω_C̃→𝒪_R → 0 , where ω_C, ω_C̃ are the dualizing bundles.The bielliptic Prym locus ℬ_g⊂𝒜_g is defined as the closure in 𝒜_g of the locus of Prym varieties (P,Ξ)=Prym(C̃/C) where π:C̃→ C is of type (∗), and C is a curve of genus g+1 admitting a double cover p:C→ E to a genus 1 curve. Suppose that the Galois group of the composition p∘π is (ℤ/2)^2. Then the two other intermediate quotients induce a tower of curves: C̃ dominates C, C' and C” via π, π' and π” respectively, and these three curves map to E via p, p' and p”.We can assume g(C')=t+1 ≤ g(C”)=g-t+1 for some 0≤ t≤ g/2. Denote by ℬ_g,t' the set of Pryms obtained in this way with the additional assumption that E is smooth, and by ℬ_g,t ⊂𝒜_g its closure in 𝒜_g.
It is well-known <cit.>, <cit.> that for g≥ 5, the loci ℬ_g,t for 0≤ t≤ g/2 are the ⌊ g/2 ⌋ irreducible components of ℬ_g. The set of bielliptic Pryms where the Galois group of p∘π is ℤ/4 is contained in ℬ_g,0 by <cit.>. In <cit.>, the authors define the following subloci of ℬ_g,t: suppose that we have the above situation, but that E is an n-cycle of ℙ^1's, i.e. the n irreducible components of E are rational and the dual graph of E is the cyclic n-graph. Let Δ∈ E_sm,2g be the branch locus of p (a divisor of degree 2g supported on the smooth locus of E) and 𝐝 ≔ deg(Δ)/2, the multidegree taken componentwise. Assume moreover that Ram(p')=0. Then ℬ_𝐝 is defined as the set of Prym varieties Prym(C̃/C) obtained in this way. It turns out <cit.> that the two types of loci defined above cover all of ℬ_g apart from the intersection with the Jacobian locus 𝒥_g and the locus of decomposable ppav's 𝒜_g^dec:ℬ_g=(ℬ_g∩(𝒥_g∪𝒜_g^dec)) ∪⋃_t=0^⌊ g/2 ⌋ℬ_g,t' ∪⋃_|𝐝| = gℬ_𝐝 .The goal of the present paper is to carry out an in-depth study of the geometry of the Prym theta divisor for Prym varieties in ℬ_𝐝.Fix 𝐝 with |𝐝|=g and (P,Ξ)=Prym(C̃/C)∈ℬ_𝐝. We keep the notations of <ref>.Because π is of type (3), p is of type (1,2) and p” is of type (3). Thus C” has exactly one component sitting above each component of E. We can thus identify the multidegrees on both curves. As p' is unramified, we have Branch(p”)=Branch(p)=Δ. Moreover p is flat and we can thus associate to it the line bundle δ ≔ (det p_∗𝒪_C)^-1∈ Pic^𝐝(E) verifying Δ∈ |δ^⊗ 2| (see <cit.>). LetP” ≔ {L∈ Pic^𝐝(C”)| Nm_p”(L)=δ} ,and Ξ” ≔ Θ”∩ P” ,where Θ” ≔ {L∈ Pic^𝐝(C”) | h^0(L)>0 }⊂ Pic^𝐝(C”) is the theta divisor as defined in <cit.>. Note that (P”,Ξ”) is not principally polarized but of type (1,…,1,2). By <cit.>, the pullback induces a degree 2 isogenyπ”^∗: P”→ P, with(π”^∗)^∗Ξ=Ξ” .This isogeny is the quotient by the two-torsion point p”^∗δ (-R”)∈ JC”, where R” is the ramification divisor of p”. Thus the geometry of (P,Ξ) can be completely understood through the study of (P”,Ξ”). Since p”:C”→ E is a morphism to a (singular) elliptic curve, it is substantially easier to study than π. Another big advantage is that Ξ” is defined as a scheme-theoretic intersection, in contrast to Ξ, which is defined as an intersection only up to multiplicity. Thus from now on, we will forget about the double covering π:C̃→ C, and study the following situation: * A nodal curve C” with a double covering p”:C”→ E of type (3), where E is a cycle of ℙ^1's.* A fixed δ∈ Pic^𝐝(E) with Δ∈ |δ^⊗ 2|, where Δ is the branch divisor of p”.We define (P”,Ξ”) by <ref>, and obtain a principally polarized abelian variety after quotienting by ⟨ p”^∗δ(-R”)⟩. Thus Theorems <ref>, <ref> and <ref> will follow immediately from Theorems <ref>, <ref> and <ref> respectively.§ SINGULAR BIELLIPTIC CURVES §.§ PreliminariesWe start by setting some notation. From now on, E will be a cycle of ℙ^1's, i.e. the normalization of E has n components E_1,…,E_n, each isomorphic to ℙ^1, and the dual graph of E is a cyclic graph. We assume that for i∈ℤ/n, E_i intersects E_i-1 at Q_i^0 and E_i+1 at Q_i^∞. Let Q_i∈ E be the image of Q_i^0 (i.e. the intersection of E_i and E_i-1). We fix an identification of E_i with ℙ^1 where Q_i^0 is identified with 0 and Q_i^∞ is identified with ∞. This identification also gives coordinates near Q_i^0 and Q_i^∞ coming from ℙ^1. With these coordinates we can identify the group of Cartier divisors supported at Q_i with ℂ^∗×ℤ×ℤ (see <cit.>). Let C be a stable nodal curve of genus g+1 with a double covering π:C→ E of type (3). It follows that the associated involution τ:C→ C preserves each irreducible component, fixes the singular points and is not the identity on any component.
This implies that C has n component, one above each component of E and the dual graph is cyclic. Let β: N_1∪⋯∪ N_n → C be the normalization, and π_i:N_i→ E_i≃^1 be the induced morphism. C [d,"π"'] N [l,"β"'] [dl,"π_N"'] N_i [l,hook'] [d,"π_i"] E E_i [l,hook'] [r, phantom, "≃ "]_1Let P_i^0 (resp. P_i^∞) be the point in N_i sitting above Q_i^0 (resp. Q_i^∞). By assumption the morphism π_i is ramified at R_i+P_i^0+P_i^∞ for some R_i⊂ N_i. LetRR_1+⋯+R_n ⊂ C ,Δ π_∗(R) ⊂ E , (d_1,…,d_n) , with d_i R_i /2 .Both R and Δ are reduced an non-singular divisors, and are the ramification and branch locus of π, respectively. Let δ_i be the hyperelliptic bundle on N_i. There is an exact sequence0→^∗→^d(C) β^∗⟶^d(N) → 0. C is hyperelliptic if and only if d=(1) or d=(1,1). We use the usual notion of hyperellipticity for singular nodal curves (see <cit.>). The stability assumption on C implies d_i>0 for all i. Suppose C is hyperellitpic and let σ:C→ C be the hyperelliptic involution. If σ exchanges components, these must we isomorphic to ^1, contradicting the stability of . Thus σ preserves the components. σ can't be of type (3) at any node because then the node would have to be separating. Thus the only possibilities left are * C has one component, σ exchanges both branches at the node, thus N/σ=^1 but σ≠τ (since τN preserves P_1^0 and P_1^∞ and σ exchanges them). Since the hyperellitpic involution is unique when the p_a(N)>1, this implies d=(1).* C has two components, and the two nodes are exchanged by σ. Again σN exchanges P_1^0 and P_1^∞ (resp. P_2^0 and P_2^∞). Thus σ≠τ, thus p_a(N_1)=p_a(N_2)=1 and d=(1,1).Recall the Brill-Noether varieties defined byW^r_ (C) {L∈ |^ (C)| ^0(C,L)≥ r+1 }⊂^(C).We have the following “Martens Theorem" type resultSuppose 0≤≤(ω_C)/2, and 0<2r≤ d. ThenW^r_(C) = d-2r-1.Recall there is an exact sequence0 →_C →β_∗_N →⊕_i=1^n _Q_i→ 0,where Q_i are the singular points of C. From this we derive the exact sequence0 →^0(C,L)→^0(N,β^∗ L) ψ⟶^n.ψ depends on the gluing of the line bundle above the nodes, and on whether or not the P_i^0 and P_i^∞ are base points for |LN_i|. We will now give an explicit basis of this (ψ). Let Γ be the dual graph of C. For each i, if ^0(N_i,LN_i)=0, delete the vertex (N_i) and the edges to it from Γ. If ^0(N_i,LN_i)>0 we can write in a unique wayLN_i=k_i δ_i + _N_i(a_i^0 P_i^0+a_i^∞ P_i^∞ + D),where a_i^0,a_i^∞∈{0,1}, and D is τ-simple (recall that 2P_i^0∼ 2P_i^∞∼δ_i). Note also that if g(N_i)=1, then d_i≤ 1 by assumption. We have ^0(N_i,LN_i)=k_i+1. If a_i^0=a_i^∞=1, then ^0(N_i,LN_i)⊂ (ψ). Delete the vertex corresponding to N_i and the edges to this vertex from the graph Γ. Else, if k_i=0, and a_i^0=1 (resp. a_i^∞=1), mark the vertex (i) with {0} (resp. {∞}). In this case, the space of sections is generated by a section vanishing at P_i^0 (resp. P_i^∞) and not at P_i^∞ (resp. P_i^0) (if a_i^=a_i^∞=0 mark the vertex with {0,∞}). If k_i=0 and a_i^0=a_i^∞=0 the space of sections is generated by a section vanishing neither at P_i^0 nor at P_i^∞. If k_i>0, and a_i=0=a_i^∞ =0, then mark the vertex (i) with {0,∞}. In this case, ^0(N_i,LN_i) is generated by k_i-1 sections vanishing at P_i^0 and P_i^∞, a section vanishing at P_i^0 but not P_i^∞ and a section vanishing at P_i^∞ but not P_i^0. If k_i>0 and a_i^0=1 but a_i^∞ = 0 (resp. the opposite), mark the vertex (i) with {0} (resp. {∞}). 
The space of sections of L supported on single component is of dimension⊕_i=1^n ^0(N_i,LN_i(-P_i^0-P_i^∞))=∑_i=1^n max(0, k_i+a_i^0+a_i^∞ -1).The other sections are generated in the folowing way: start with a vertex of Γ marked with 0. that corresponds to a section s_i^0 vanishing at P_i^0 but non-zero at P_i^∞. This imposes a non-zero value at P_i+1^0. If (i+1) is marked with ∞ we have a new section. If (i+1) isn't marked at all that imposes a coefficient on the section of ^0(N_i+1,LN_i+1) not vanishing at P_i+1^0 and P_i+1^∞ and we move on to (i+2). If (i+1) is marked with 0 but not ∞ it is impossible to complete s_i^0 to a section. We repeat this process on the whole graph.Thus, for each segment (i,i+1,…,j) of the modified graph Γ such that i is marked with 0, j is marked with ∞, and i+1,…,j-1 are not marked, there is an additional section. In particular, since increasing k_i by one or having a pair (P_i^0,P_j^∞) imposes a condition of codimension two on W^0_d(N), we see that W^r_d(C) ≤ d-2r-1Moreover it is clear that choosing a_i^0 and a_i^∞ properly we achieve this bound. Our proof also shows that for r≥ 1, the varieties W^r_(C) are the preimages by β^∗ of certain varieties in ^(N). As in <cit.>, we define the Θ-divisor in ^d(C) byΘ{ L ∈^d(C) | ^0(C,L)>0 }⊂^d(C) .Recall the Riemann Singularity Theorem, who is due in this form to Kempf in the irreducible context and Beauville <cit.> in the reducible context:Let C be a connected nodal curve, and assume (ω_C)=2 is even. Let L∈Θ={M∈^d(C) | ^0(M)>0 } and consider the pairingϕ: ^0(C,L) ⊗^0(C,ω_C-L)→^0(C,ω_C) .Let (s_i) and (t_j) be a basis of ^0(C,L) and ^0(C,ω_C-L) respectively. Thenmult_L Θ≥^0(C,L),with equality if and only if (ϕ(s_i⊗ t_j)) is non-zero, in which case it gives the tangent cone of Θ at L.We will call a singularity of Θ exceptional if the equality doesn't hold above. We define a relation on triples (i,j,k) byi ≺ j ≺ k i< j <k , ori≥ k , and j∉ [[k,i]] ..i ≼ j ≺ k(i<j<k) or (i=jandj≠ k).We also define i ≺ j ≼ k and i≼ j ≼ k in the obvious way. We then haveThe singular locus of Θ is(Θ)=(β^∗)^-1(𝒜∪ℬ∪ℬ'∪𝒞∪𝒞' ) ,where𝒜 = ⋃_i,j{δ_i +δ_j +α_N(N_d-2e_i-2e_j )} , ℬ = ⋃_i≼ j≼ k{_N(P_i^0+P_k^∞)+δ_j + α_N(N_d-e_i-2e_j-e_k) } ,ℬ' = ⋃_i≺ j≺ k{_N(P_i^∞+P_k^0)+δ_j + α_N(N_d-e_i-2e_j-e_k) } ,𝒞 = ⋃_i≼ j ≺ k ≼ l {_N(P_i^0+P_j^∞+P_k^0+P_l^∞)+α_N(N_d-e_i-e_j-e_k-e_l) } , 𝒞' = ⋃_i≺ j ≼ k ≺ l {_N(P_i^0+P_j^0+P_k^∞+P_l^∞)+α_N(N_d-e_i-e_j-e_k-e_l) } .A general point of (β^∗)^-1(𝒜∪ℬ∪𝒞) is not an exceptional singularity, and a general point in (β^∗)^-1(ℬ'∪𝒞') is an exceptional singularity. More precisely, if d=(g), then ℬ'=𝒞=𝒞'=∅, all singularities are non exceptional, and we have_k(Θ){x∈Θ | _x Θ≥ k}=(β^∗)^-1(𝒜_k∪ℬ_k)with 𝒜_k ={kδ_N+α_N(N_g-2k) } ,ℬ_k ={_N(P^0+P^∞)+(k-1)δ_N+α_N(N_g-2k)} .If d=(1,g-1), then ℬ'=𝒞=𝒞'=∅, all singularities are non exceptional, and we have_k(Θ)=(β^∗)^-1(𝒜_k∪ℬ_k)with 𝒜_k ={^1(N_1)+kδ_2+α_N_2(N_2,g-2k-1) } ,ℬ_k ={^1(N_1)+ _N_2(P_2^0+P_2^∞)+(k-1)δ_2+α_N_2(N_2,g-2k-1)}∪{_N(P_1^0+P_2^∞)+(k-1)δ_2+ α_N_2(N_2,g-2k)}∪{_N(P_1^∞+P_2^0)+(k-1)δ_2+ α_N_2(N_2,g-2k)} . By <ref>, a point L∈(Θ) either verifies ^0(C,L)≥ 2 or ^0(C,L)=1 and st=0 where ^0(C,L)=⟨ s⟩ and ^0(C,ω_C-L)=⟨ t ⟩. From the proof of <ref> it is clear that any L with ^0(C,L)≥ 2 has to be in 𝒜,ℬ, or 𝒞. It is also straightforward to check that an exceptional singularity has to be in ℬ' or 𝒞': indeed if L is an exceptional singularity, there is a section s∈^0(C,L) such that s·^0(C,ω_C-L)=0. 
This implies that we can find i,j such that s is zero say on components l with i≺ l ≺ j and non-zero on i and j. There are also i',j' such that ^0(C,ω_C-L) is supported on components l with i'≺ l ≺ j', and i ≼ i' ≼ j' ≼ j.If i'=j' we are in ℬ', else we are in 𝒞'. When d=(g) or d=(1,g-1), ℬ'=𝒞=𝒞'=∅ for degree reasons. It is immediate that any line bundle in 𝒜_k or ℬ_k is not exceptional and thus the assertion about the multiplicities of the singularities follow from <ref>. §.§ The Abel-Jacobi mapRecall that sections of ω_C are 1-forms ω on N which can have poles at P^0_i and P^∞_i, subjected to the conditionsRes_P^∞_iω + Res_P^0_i+1ω = 0, for i∈/n .We thus have an inclusion of _C-modulesβ_∗ω_N ⊂ω_C”⊂β_∗ω_N(∑_i P^0_i+P^∞_i).From what precedes we have^0(N,ω_N) ⊂^0(C,ω_C) ⊂^0(N,ω_N(∑_i=1^n P_i^0+P_i^∞ )) . Let s_E be a generator of ^0(E,ω_E). As a 1-form, s_E is given on E_i by dz/z for a coordinate z centered at 0. Let s_R=p^∗ s_E be the pullback as a 1-form. π_i:N_i→ E_i is ramified at R_i+P_i^0+P_i^∞ thus (s_R)=R as a section of ω_C. For dimension reasons we have^0(C,ω_C)=^0(N,ω_N)⊕⟨ s_R⟩ .We see from the above discussion that ^0(N,ω_N) (resp. ⟨ s_R⟩) is the -1 (resp. +1) eigenspace for the action of τ on ^0(C,ω_C). We define|ω_C|^0(C,ω_C), |ω_C|^-^0(C,ω_C)^-, |ω_N| ^0(N,ω_N).We define a divisor to be singular if it intersects with the singular locus. The following lemma is very simple, but crucial:With the above notations, a divisor H∈ |ω_C| is singular if and only if H∈ |ω_C|^-, and in that case∑_i=0^∞ P_i^0+P_i^∞≤β^∗ H .Let H= ( λ s_R+s)∈ |ω_C|, where s∈^0(C,ω_C)^- and λ∈. By what precedes, s comes from a section of ^0(N,ω_N). Sections of ω_N are holomorphic 1-forms, thus immediately verify <ref>. As sections of ω_C, they vanish at the singular points. s_R is non-zero at the singular points thus H is singular if and only if λ=0. In that case, H vanishes at all the singular points.We thus have a canonical identification ρ: |ω_C|^- ∼⟶ |ω_N| corresponding on the locus of non-singular divisors toρ(H)= β^∗ H - ∑_i=1^n (P_i^0+P_i^∞).The Abel map is well known in the case of smooth, or singular irreducible curves. But for singular reducible curves the situation is much more technical. We will now show how to construct a candidate for the Abel map in the case of cyclic curves. In that case JC sits in an exact sequence0 →^∗→ JC → JN → 0It is well known (see <cit.>) that(JN,^∗)≃JN≃ JN ,and that under this identification, by <cit.>, the extension defining JC corresponds to the line bundleη_N (∑_i=1^n P_i^0-P_i^∞) ∈ JN.The corresponding line bundle on JN isL^ηℒ⊗τ_ηℒ^-1=τ_η_0ℒ⊗τ_η_∞ℒ^-1∈JN ,where ℒ is the principal polarization on JN, τ_x is the translation by x andη_0_N(P_1^0+⋯+P_n^0) , η_∞_N(P_1^∞+⋯+P_n^∞).The corresponding extension is JC ≃ L^η∖ JNwhere JN↪ L^η embeds as the 0 section. We define JC(L^η⊕_JN )= (τ_η_0ℒ⊕τ_η_∞ℒ)be the associated ^1-bundle. τ_η_0ℒ and τ_η_∞ℒ canonically define bundles on ^(N), thus we will seeas a ^1-bundle on ^(N) from now on. This is of course not the usual compactification of the Picard scheme, but this will be the convient compactification for our computations. Let α_N:N_→^(N)be the Abel-Jacobi map, where N_ N_1,d_1×⋯ N_n,d_nis the product of the symmetric product of the curves N_1,…,N_n. We have for k∈{0,∞}α_N^∗τ_η_kℒ = _N_(B^k), with B^k ∑_i=1^n (P_i^k+N_i,d_i-1)∏_j≠ i N_j,d_j .Let s^0,s^∞ be the sections oncorresponding to B^0 and B^∞ respectively. 
Let×_^(N) .We have the following commutative diagram[d] [r][d] [u,dashed, bend left=30, "(s^0 , s^∞)"] [r,"α_N"']^(N)Let b:Ñ_d_B N_d→ be the blowup at B B^0∩ B^∞. This resolves the indeterminancy of (s^0,s^∞)Ñ_d[dr ,"b"'] [rr, bend left=20, "α"] [r,hook,"i_"'][r] [d,"q_N"][d,"q"] N_d[r,"α_N"']^(N), and α is the Abel-Jacobi map we were looking for. By standard intersection theory we have[] =x_1+⋯+x_n+h'∈^2(,),where h'=c_1(_(1))∈^2(,) is the hyperplane section coming from the ^1-bundle structure and x_i=[N_i,d_i-1]∈^2(N_i,d_i,) (we make the abuse of notation of omitting the pullback notation when it is clear). For k∈{0,∞} letB_i^kP_i^k+ N_-e_i⊂ N_ ,and s_i^k∈^0(,_(B_i^k)) the corresponding section. By definition we haveB=⋃_i,j B_i^0∩ B_j^∞ .In particular, locallyis defined insideby the vanishing ofλ s_1^0 ⋯ s_n^0-μ s_1^∞⋯ s_n^∞ ,where (q_N,λ:μ):U→ U×^1 is a local trivialisation of the ^1-bundle on an open set U⊂.Above non-singular divisors,is smooth. Let D̃=(D,λ:μ)∈ be a point above a singular divisor, where we use the notations of <ref>. Letk #{i |P_i^0≤ D} + δ_λ,0 , l #{i |P_i^∞≤ D} + δ_μ,0 ,where δ_λ,0=1 if λ=0 and 0 otherwise. We then have a local analytic isomorphism(,D̃) ≃ (V(x_1 x_2… x_k-x_k+1x_k+2⋯ x_k+l),0)⊂ (^g+1,0) .Above non-singular divisors, the blowup b:→ is a local isomorphism, thusis smooth. Above singular divisors, the assertion follows from <ref> and the fact that for k∈{0,∞} and 1≤ i≤ n, the divisors B^k_i= s^k_i are smooth normal crossing divisors on .Let Θ be the closure of Θ in . Clearly we have a surjection α:→Θ. Although α is not a resolution of singularities, the singularities ofare much simpler than those of Θ. §.§ The conormal variety to thetaRecall that the (projectivised) conormal variety is defined byΛ_Θ{ (x,H)∈ T^∨ JC|x∈Θ_sm , T_x ⊂ H}⊂ T^∨ JC .Since the cotangent space to JC is trivial and canonically identified with JC×^0(C,ω_C), we will from now on view Λ_Θ inside JC× |ω_C|. We define the projectionsN_i,d_i [l]× |ω_C|[l,"p"] [ll,bend right=15, "p_i"'] [r,"γ"]|ω_C|for all 1≤ i ≤ n . We definine Λ_⊂× |ω_C| as the vanishing locus (i.e. the 0-th determinantal variety) of the following composition of maps of vector bundlesγ^∗_|ω_C|(-1) ↪^0(C,ω_C)↪⊕_i=1^n ^0(N_i,ω_N_i(P_i^0+P_i^∞)) ⊕ev_i⟶⊕_i=1^n p_i^∗ E_K,i ,where the vector spaces are identified with the corresponding trivial vector bundles, and E_K,i are the evaluation bundles on N_i,d_i associated to the line bundle ω_N_i(P_i^0+P_i^∞), and ev_i are the evaluation maps (see <cit.> for the definition of E_K,i). Thus set-theoretically we haveΛ_N_={ (D,H)∈× |ω_C| |D≤β^∗ H } .By <cit.>, for all r≥ 0, we havec_r(E_K,i)=∑_k=0^r rk x_i^k θ_i^r-k/(r-k)!∈^2r(N_i,d_i) .We also make the following computations: using Poincaré's Forumla <cit.> we haveα_N_i,∗(c_r(E_K,i)) =θ_i^r/r!∑_k rkrk=θ_i^r/r!2rr∈^2r(JN_i,) ,α_N_i,∗(x_i c_r(E_K,i)) =θ_i^r+1/(r+1)!∑_k rkr+1k+1 = θ_i^r+1/(r+1)!2r+1r+1 ,α_N_i,∗(x_i^2 c_r(E_K,i)) = θ_i^r+2/(r+2)!2r+2r+2 .We have the followingSuppose g≥ 3, then the projection Λ_→ is birational. In particular Λ_ is irreducible of dimension g. We have [Λ_] = c_g( γ^∗_|ω_C|(1) ⊗⊕_i=1^n p_i^∗ E_K,i ) = ∑_r=0^g h^r c_g-r(⊕_i=1^n p_i^∗ E_K,i ) ∈_2g(× |ω_C|,) .The corollary follows from intersection theory. The vector bundle on the right in the definition of Λ_ is of rank g, thus all components of Λ_ are of dimension at least g. Let [s]∈ |ω_C|, and let s_i=β_i^∗ s. 
The fiber of Λ_ above [s] is_i |s_i ≠ 0 {D∈ N_i,d_i |D≤ s_i }×_i |s_i=0 N_i,d_i .Thus p_2:Λ_→ |ω_C| is fibered above⋃_i (⊕_j≠ i^0(N_j,ω_N_j)) ⊂ |ω_C|The fiber above this locus is of dimension g-1, thus every irreducible component surjects onto |ω_C| and is of dimension g. A general divisor in |ω_C| is non-singular. The fiber above a non-singular D∈ is ^0(C,ω_C(-D)) is of dimension r(D)=^0(C,D)-1. Thus by <ref>, every irreducible component of Λ_ surjects onto . But a general point inhas a unique preimage, thus Λ_ is birational to .Let b':Λ_→Λ_ be the strict transform of Λ_ along the blowup × |ω_C|→× |ω_C|. We have the following commutative diagram[d,"b"']Λ_[l,"p̃"] [rr,"γ_",bend left=15] [d,"b'"] [r,phantom,"⊂"]×|ω_C|[d,"b× Id"] [r ] |ω_C|[d,phantom, "=" rotate=90] Λ_[l,"p"] [rr,"γ_"',bend right=15] [r,phantom, "⊂"]× |ω_C|[r]|ω_C|We have the following:The locus above which the fibers of b' are positive-dimensional is the set (D,H)∈Λ_ such that P_i^0+P_j^∞≤ D≤β^∗ H and P_i^0+P_j^∞≤β^∗ H-D for some 1≤ i,j≤ n. Recall that b' is the blowup of B'Λ_∩(B× |ω_C|) where B={D∈ |P_i^0+P_j^∞≤ D, for some 1≤ i,j≤ n} .Let (D,H)∈ B', then H must be singular and by <ref> we have H∈ |ω_C|^-. Let H̃=ρ(H)∈ |ω_N|. Suppose first that for all i such that P_i^0≤ D, the multiplicity of P_i^0 in D and β^∗ H is the same. Fix i_0,j_0 such that P_i_0^0+P_j_0^∞≤ D. Let X_P_j_0^∞⊂ be the set of divisors containing P_j_0^∞. Then locally near (D,H) we haveB'=Λ_∩ (X_P_j_0^∞× |ω_C|) .Indeed, locally near (D,H) we have Λ_∩ (X_P_j_0^∞× |ω_C|) ⊂× |ω_C|^- thus for any (D',H')∈Λ_∩ (X_P_j_0^∞× |ω_C|) near (D,H), we haveP_i^0≤ H', D' must contain P_i_0^0 and thus (D',H')∈ B'. Thus B' is locally a Cartier divisor and b' is a local isomorphism. The same reasoning applies if for all 1≤ j ≤ n, the multiplicity of P_j^∞ in D and β^∗ H is the same. Conversely, assume that P_i^0+P_j^∞≤ D≤β^∗ H and P_i^0+P_j^∞≤β^∗ H-D for some 1≤ i,j≤ n. Let a^0 (resp. a^∞) be the multiplicity of P_i^0 (resp. P_i^∞) in D. For any local parametrization P_i^0(t),P_j^∞(t) we can find a parametrization H(t)∈ |ω_C|^- such that a^0 P_i^0(t)+b^0P_j^∞(t)≤β^∗ H(t), and thus a family (D(t),H(t))∈Λ_ such that a^0P_i^0(t)+a^∞ P_j^∞≤ D(t). Thus the strict transform Λ_ contains the whole fiber of the blowup b at D∈.We then have:The projectionγ_: Λ_→ |ω_C|is finite above |ω_C|∖ |ω_C|^-. Let H∈ |ω_C|^-, assume ρ(H)= s with s=s_1+⋯+s_n∈⊕_i^0(N_i,ω_N_i). The fiber above H is positive-dimensional in only the two following cases: * s_i=0 for some 1≤ i≤ n. Then the fiber is(b')^-1( ∏_i |s_i≠ 0{ D∈ N_i,d_i |D≤ P_i^0+P_i^∞+ s_i }×∏_i , s_i=0 N_i,d_i×{H}) . * P_i^0+P_j^∞≤ s for some 1≤ i,j≤ n. For all such i,j, and for all D∈ such thatP_i^0+P_j^∞≤ D≤β^∗ H -P_i^0-P_j^∞ ,D×{H}⊂Λ_ is in the fiber above H.The projection decomposes asΛ_b'⟶ Λ_γ_⟶ |ω_C|. The first case are the positive-dimensional fibers of γ_ and follows from the proof of <ref>. The second case corresponds to the positive-dimensional fibers of b' and follows from <ref>.Consider the inclusion ^(C)× |ω_C|⊂× |ω_C|. LetΛ_Θ⊂× |ω_C|denote the closure of Λ_Θ. With the above notations, we have Λ_Θ=(α× Id)_∗(Λ_) . Both are reduced, irreducible and agree on an open dense subset. We have the following:Suppose =(g) or =(1,g-1). 
Then above the locus of line bundles ^(C) ⊂,parameterizes line bundles together with a “divisor"^(C)≃{(L,[s]) |L∈^(C) ,[s]∈^0(C,L) } .Recall from <ref> the following commutative diagramÑ_d [dr ,"b"'] [rr, bend left=20, "α"] [r,hook,"i_"'] [r] [d,"q_N"] [d,"q"] N_d[r,"α_N"'] ^(N).Given a point in x∈^(C), we thus have a line bundle L_xα(x)∈^(C) and a divisor D_x b(x)∈ N_. If D_x is non-singular it corresponds immediately to a unique Cartier divisor. We now assume D_x to be singular. Suppose first that =(g). A Cartier divisor on C is given byD=(λ,a,b)_Q+D'where D' is a non-singular divisor on C and (λ,a,b)_Q∈^∗×× is a Cartier divisor supported on the unique singular point Q∈ C. a,b and D' are determined uniquely by D_x and for a given a,b and D' there is a unique λ∈^∗ such that _C(D)=L_x. Suppose =(1,g-1). We have D_x=(D_1,D_2)∈ N_1× N_2,g-1. Suppose first that P_2^0+P_2^∞≤ D_2.Since D_1 is of degree 1, it can't contain both P_1^0 and P_1^∞. Thus any section of L_x vanishing at D_2 must vanish on N_1. Thus up to scalar, there is a unique section s∈^0(C,L_x) vanishing at D_2∪ N_1. We now assume P_2^0+P_2^∞≰ D_2. Assume for instance D_2=a· P_2^0+D'_2 with D'_2 non-singular. By assumption α(x)=L_x is a line bundle. This implies D_1=P_1^∞ (this comes from the description ofas a blow-up). For the same reason as in the irreducible case, there is now a unique λ∈^∗ such thatD=(λ,1,a)_Q_1+D'_2corresponds to L_x, where Q_1 is the singular point corresponding to P_2^0 and P_1^∞. Finally given L∈^(C) and D∈^0(C,L), then (L,β^∗ D)∈ is inand this gives the inverse of the map constructed above. Suppose =(g) or =(1,g-1). Let Λ_^∗{ (L,[s_1],[s_2]) | (L,[s_1])∈^(C) , [s_2]∈^0(C,ω_C⊗ L^-1) } .The mapΛ_^∗ ↪× |ω_C| (L,[s_1],[s_2])↦ (L,[s_1]),[s_1⊗ s_2]identifies Λ_^∗ with Λ_^(C). By <ref>, the projection Λ_^∗→ is birational. In particular, Λ_^∗ is irreducible. From the case of smooth curves we know that Λ_^(C) and Λ_^∗ coincide over the open locus of non-singular divisors <cit.>. Since both are irreducible, they are equal. Suppose =(1,g-1), let M=[s]∈ |ω_C|^- such that sN_2=0. Then the fiber of Λ_→ |ω_C| above M is supported above ∖^(C). Suppose the contrary. By <ref> there is (L,[s_1],[s_2])∈Λ_^∗ such that s_1 ⊗ s_2=s vanishes on N_2. But neither s_1 nor s_2 can vanish on all of N_2: Since the degree of the restriction of s_1 and s_2 to N_1 is 1, if they vanish at both P_1^0 and P_1^∞ they would be zero on N_1 as well.We end this section by introducing the following involution on Λ_^∗:ω_Λ : Λ_^∗ →Λ_^∗(L,[s_1],[s_2])↦ (ω_C⊗ L^-1,[s_2],[s_1]),and τ_Λ: Λ_^∗ →Λ_^∗ (L,[s_1],[s_2])↦ (τ^∗ L, [τ^∗ s_1], [τ^∗ s_2] ) .By abuse of notation denote by τ the involution induced by τ on |ω_C|. Clearly, we have γ_∘ω_Λ = γ_ , ∘ω_Λ(-)= 2·δ - (-) ,γ_∘τ_Λ = τ∘γ_ , ∘τ_Λ =.§.§ Chern-Mather class of the theta divisorWe now prove the following:Suppose d=(g) or =(1,g-1), then [Λ_Θ] = ∑_r=0^g h^r+1θ^g-r/(g-r)!2g-2r-2g-r-1∩[ T^∨ JC] ∈_2g( T^∨ JC,),where θ is the pullback of the polarization on JN and h is the hyperplane class in T^∨_0 JC. Our proof gives a recipe to do the above computation for a general , but as the computation would become much more cumbersome, we restrict to these cases. We expect the formula to be more complicated in the general case.Case =(g). Let x=[N_g-1]∈^2(N_g,) and θ∈^2(JN,) denote the class of the polarization, h∈^2(|ω_C|,) and h'∈^2(,) denote the respective hyperplane classes. 
By <ref>, <ref>, <ref> and <ref> we have in _2g(× |ω_C| ,)[(b×)^∗Λ_] =(∑_r=0^g h^r c_g-r( E_K) )∩[× |ω_C|]=(x+h')(∑_r=0^g h^r c_g-r( E_K) ) ∩[× |ω_C|].Recall from <ref> that the center of the blowup b:→ isB= { P^0+P^∞ +N_g-2}⊂ .Let B'=(Λ_∩ (B× |ω_C|)), thenB'={ (D,H)∈ N_g-2× |ω_C| |P^0+P^∞ + D ≤β^∗ H }A canonical divisor containing P^0 must be in |ω_C|^-⊂ |ω_C|. Thus under the identification ρ:|ω_C|^-≃|ω_N| the above is equal to the vanishing locus of the composition of maps of vector bundles on N_g-2× |ω_N|_|ω_N|(-1) →^0(N,ω_N) → E_K,N_g-2 ,where E_K,N_g-2 is the corresponding evaluation bundle on N_g-2. By <cit.> we havec_r(E_K,N_g-2 )∩ [N_g-2]= ∑_k=0^r rk x ^k θ^r-k/(r-k)!∩ [N_g-2] = x^2 c_r(E_K) ∩ [N_g].Thus[B'] = ∑_r h^r c_g-r-2(E_K) ∩ [N_g-2× |ω_N|] =x^2 h∑_r h^r c_g-r-2(E_K) ∩ [N_× |ω_C|] = x^2 ∑_r h^r c_g-r-1(E_K)∩ [N_× |ω_C|] ∈_2g-2(× |ω_C|,) .By the blowup formula <cit.> and <ref> we have(α×)_∗ [Λ_] = (α×)_∗( (b×)^∗ [Λ_]- (q_N×)^∗ [ B' ] )=∑_r h^r ( h' θ^g-r/(g-r)!2g-2rg-r +θ^g-r+1/(g-r+1)!2g-2r+1g-r+1..-θ^g-r+1/(g-r+1)!2g-2rg-r+1) = ∑_r h^r h' θ^g-r/(g-r)!2g-2rg-r +h^r θ^g-r+1/(g-r+1)!2g-2rg-r= ∑_r h^r (θ + h')^g-r+1/(g-r+1)!2g-2rg-r∩[ × |ω_C| ].Case d=(1.g-1). For i=1,2 let x_i=[N_i,d_i-1]∈^2(N_i,d_i,) and θ_i ∈^2(JN_i,) denote the class of the polarization. By <ref>, <ref> and <ref> we have[(b×)^∗Λ_] =(x_1+x_2+h')(∑_r=0^g h^r c_g-r( E_K,1⊕ E_K,2) ) = (x_1+x_2+h')(∑_r=0^g h^r 2 x_1 c_g-r-1 + c_g-r)∈_2g(^1×× |ω_C|, ),where we denote c_r(E_K,2) by c_r. Recall from <ref> that the center of the blowup → isB=B_12∪ B_21∪ B_22⊂ ,with B_12 ={P_1^0}×{ P_2^∞ + N_2,d_2-1} , B_21 ={P_1^∞}×{ P_2^0+ N_2,d_2-1} , B_22 = N_1 ×{ P_2^0+P_2^∞ +N_2,d_2-2} .Let B'_ij=(Λ_∩ (B_ij× |ω_C|)), thenB'_12={ D,H∈ N_2,d_2-1× |ω_C| |P_1^0+P_2^∞ + D ≤β^∗ H }Since a canonical divisor containing P_1^0 must be in |ω_N|≃ |ω_C|^-⊂ |ω_C|, the above is equal to the vanishing locus of the composition of maps of vector bundles on N_2,d_2-1× |ω_N|_|ω_N|(-1) →^0(N,ω_N) → E_ω_N_2(P_2^0) ,where E_ω_N_2(P_2^0) is the corresponding evaluation bundle on N_2,d_2-1. Notice thatc_r(E_ω_N_2(P_2^0) )∩ N_2,d_2-1 = ∑_k=0^r rk x_2 ^k θ_2^r-k/(r-k)!∩ [N_2,d_2-1] = x_2 c_r(E_K,2) ∩ [N_2,d_2]thus[B'_12] = ∑_r h^r c_g-r-2(E_ω_N_2(P_2^0)) ∩ [N_2,d_2-1× |ω_N|] = x_1 x_2∑_r h^r+1 c_g-r-2∩ [× |ω_C|]∈_2g-2(× |ω_C|,) .Clearly[B'_12]=[B'_21].In the same way, we have that B'_22 is N_1 times the vanishing locus of the composition of morphism of vector bundles on N_2,d_2-2_|ω_N_2|(-1) →^0(N_2,ω_N_2) → E_ω_N_2 .Again c_r(E_ω_N_2)∩ [N_2,d_2-2]=x_2^2 c_r ∩ [N_2,d_2], thus[B'_22] = ∑_r h^r c_g-r-3(E_ω_N_2) ∩ [N_1× N_2,d_2-2× |ω_N_2|] = x_2^2 ∑_r h^r+2 c_g-r-3∩ [× |ω_C|]∈_2g-2(× |ω_C|,) .By the blowup formula <cit.>, we have in _2g(× |ω_C|, )[Λ_] =(b×)^∗ [Λ_]- (q_N×)^∗[B'] =(b×)^∗ [Λ_]- (q_N×)^∗(2[B'_12]-[B'_22]) =(x_1+x_2+h')(∑_r=0^g h^r (2x_1 c_g-r-1 + c_g-r)) - 2x_1x_2∑_r h^r c_g-r-1 - x_2^2∑_r h^r c_g-r-1=∑_r h^r ( h'(2x_1 c_g-r-1+c_g-r) + (x_1+x_2) c_g-r-x_2^2 c_g-r-1)) ∩ [× |ω_C|] .Thus by <ref> we have in _2g(×|ω_C|,)(α×)_∗[Λ_] = ∑_r h^r ( h'(2 θ_1 θ_2^g-r-1/(g-r-1)!2g-2r-2g-r-1+ θ_2^g-r/(g-r)!2g-2rg-r) .. +θ_1 θ_2^g-r/(g-r)!2g-2rg-r+θ_2^g-r+1/(g-r+1)!2g-2r+1g-r+1 - θ_2^g-r+1/(g-r+1)!2g-2rg-r+1)=h'∑_r h^r( 2 θ_1 θ_2^g-r-1/(g-r-1)!2g-2r-2g-r-1+ θ_2^g-r/(g-r)!2g-2rg-r)+ ∑_r h^r (θ_1+θ_2)^g-r+1/(g-r+1)!2g-2rg-r=(∑_r h^r (θ_1+θ_2+h')^g-r+1/(g-r+1)!2g-2rg-r-h^rh' θ_1 θ_2^g-r/(g-r)!2g-2r-2g-r).The Lemma then follows from <ref>, and the fact that [h'JC]=0. § PRYMS ASSOCIATED TO BIELLITPIC CURVESWe keep the notations of Section <ref>, i.e. 
C is a nodal curve of genus g+1, π:C→ E is a double covering of type (3), E is a cycle of n ^1's, Δ is the branch locus of π and (Δ)/2. Moreover we now fix δ∈^(E) with Δ∈ |δ^⊗ 2|. We defineP{L∈^(C) | (L)=δ}⊂^(C),Ξ Θ∩ P ⊂ P.These notations are fixed for the remainder of Section <ref>. We have the following commutative diagram, whose rows and columns are exact <cit.>0 [d] 0 [d] 0 [d]0 [r]/2[d,hook] [r] P [d,hook] [r]^(N) [d] [r] 0 0 [r]^∗[r] [d, "z ↦ z^2" lablrot] [d]^(C) [r,"β^∗"] [d,""]^(N) [d] [r] 00 [r]^∗[r] [d]^(E) [d] [r] 0 0 0.In particular, there is a degree 2 isogeny of polarized abelian varieties P→^(N). We thus have an identificationT^∨_0 P ≃ T^∨_0 JN = ^0(N,ω_N) .LetW×_^(E){δ} , andW̃×_^(E){δ} .Let R be the ramification divisor of π: C→ E, and{D∈ W |D≤ R } ,b^-1()⊂W̃ , Recall that R is non-singular thus b is a local isomorphism near . The Abel-Jacobi map α restricts to a map ϕαW̃: W̃→Ξ. We have the following: The singular locus of W̃ is(W̃)=(b^-1(B^0∞)∩W̃)∪ ,whereB^0∞ { D∈ |P_i^0+P_j^0+P_k^∞+P_l^∞≤ D,for some i≠ j and k≠ l.} .Moreover, at a point x∈, W̃ has a quadratic singularity of maximal rank, i.e. locally analytically we have(W̃,x) ≃ (V(x_1^2+⋯+x_g^2),0) ⊂ (^g,0).Let D̃∈W̃, and D=b(D̃). Recall that b:→ is the blowup at B{ D∈ |P_i^0+P_j^∞≤ D }. Step 1: Suppose D∉ B. Then b is a local isomorphism at D, and thus induces a local isomorphism (W̃,D̃)→ (W,D). Suppose that D≤ R. We can assume that D=P_1+⋯+P_g. For 1≤ i ≤ g, the morphism π_N:N→ E is ramified at P_i thus there are local coordinates z_i on N centered at P_i such thatπ_N(z_i)=z_i^2+Q_i, where Q_i=π(P_i) .Moreover z_1,…,z_g define coordinates onlocally near D. On JE=^∗ the group law is multiplicative thus the condition to map to δ byreduces locally near D to ∏_i=1^g (z_i^2+Q_i) - Q_1⋯ Q_g =0,where we view the points α_N(Q_i)∈^1(E)≃^∗ as complex numbers by abuse of notations. The Hessian of the above function is non-degenerate, thus by the Morse Lemma this is a quadratic singularity of maximal rank. We now assume D≰ R. Then D=D'+P_0 for some point P_0≰ R. Locally near P_0 there is the embeddingi_D' : N↪P ↦ P+D'.The composite ∘ i_D' has non-zero differential at P_0 thus :^1 has non-zero differential at D. Thus W (resp. W̃) is smooth at D (resp. D̃). Step 2: Suppose D∈ B. Let U⊂ be an open set and (q_NU,λ,μ):U→ U×^1 be a local trivialization ofsuch thatis the vanishing locus ofλ s_1^0 s_2^0⋯ s_n^0 - μ s_1^∞ s_2^∞⋯ s_n^∞as in <ref>, where for k∈{0,∞}, 1≤ i ≤ n,s_i^k= B_i^k= P_i^k+ N_-e_i⊂ N_ .Above the trivialization U, the norm map becomes: U×^1→^1 (D,λ:μ)↦ (λ^2:μ^2).Under this identification we have δ∈^1∖{0,∞}. The divisors B_i^k are normal crossing divisors, thus the result follows. If =(g) or =(1,g-1), then (W̃)=. In these two cases the set B^0∞ is empty for degree reasons. Let ϕ() ⊂Ξ . The points ofare isolated singularities of maximal rank of Ξ. These correspond to the additional isolated singularities of <cit.> (hence the notation).For a line bundle L∈, we have ^0(N_i,LN_i)=1 for 1≤ i ≤ n thus ^0(C,L)=1 by the proof of <ref>. Thus ϕ:W̃→Ξ is a local isomorphism near L by <ref> and the result follows from the lemma above. §.§ Chern-Mather class of the Prym theta divisorWe keep the notations of the previous section. LetΛ_Ξ⊂ T^∨ P =P×^0(N,ω_N)be the conormal variety to Ξ, and Λ_Ξ⊂ P× |ω_N| the projectivization. Consider the following compositeℱ: |ω_C||ω_C|^- ρ⟶ |ω_N|,where |ω_C| |ω_C|^- is the projection from R∈ |ω_C|. 
We have the followingWith the above notations, we haveΛ_Ξ=(×ℱ)_∗( Λ_ΘP).Recall that we have a canonical identification T^∨ JC = JC× |ω_C|. It follows from <cit.> that for a smooth point x∈Ξ we have_Ξ(x)=ℱ∘_Θ(x) ,where _Ξ:Ξ |ω_N| and _Θ:Θ |ω_C| are the respective Gauss maps. The proposition follows since a general point in Λ_Θδ lies above a smooth point of Ξ, and Λ_Ξ is irreducible.Suppose =(g) or =(1,g-1), then[Λ_Ξ]= ∑_r=0^g-1 h^r ξ^g-r/(g-r)!2g-2r-2g-r-1∩ [T^∨ P]∈_2g(T^∨ P,) ,where h is the pullback of the hyperplane class in T^∨_0 P and ξ corresponds to the pullback of Ξ. In particular, the Chern-Mather classes of Λ_Ξ arec_M,r(Λ_Ξ)= ξ^g-r/(g-r)!2g-2r-2g-r-1∈_2r(P,).Follows from <ref> and <ref>, and the fact thatθ∩ [P]=ξ , andℱ_∗ h_C^r+1=h_N^r ,where h_C and h_N are the hyperplane classes on |ω_C| and |ω_N| respectively.§.§ The fibers of the Gauss mapWe will now study the fibers of the Gauss map γ_Ξ:Λ_Ξ→ |ω_N| in the cases =(g) and =(1,g-1). The main result is the following:Suppose =(g) or =(1,g-1), then away from a subset S⊂ |ω_N| of codimension at least 3, γ_Ξ is finite.We fix the following notationsM =[s_1+…+s_n]∈ |ω_N|=(⊕_i ^0(N_i,ω_N_i)) , H =ρ^-1(M) V_M =ℱ^-1(M)=⟨ H , R ⟩⊂ |ω_C| , Z_M =Λ^∗_V_M .From <ref> and <ref> we have Λ_Ξ = (α×ℱ)_∗ (Λ^∗_∩^-1(δ)),thus positive-dimensional fibers of γ_Ξ above M correspond to components Z of Z_M such that (Z)=δ. §.§.§ Step 1: The case of components not finite onto V_M.Suppose that there is a component Z of Z_M that is not finite onto V_M, such that (Z)=δ. By <ref> we have γ_(Z)= H. Suppose that we are in the second case of Prop. <ref>. Then Z⊂D×{H}≃^1 for some D. Then the norm map restricted to D is of degree 2 thus only finitely many points lie above δ, contradicting (Z)=δ.Suppose now that we are in the first case of Prop. <ref>. Then necessarily we must be in the case =(1,g-1). Suppose M=[s_1+s_2]. We thus have either s_1=0 or s_2=0. If s_2=0, then by <ref> we have (Z)⊂{0,∞} which contradicts (Z)=δ. We now assume s_1=0. Let H_2 HN_2= s_2+P_2^0+P_2^∞. By <ref>, we haveb'(Z) ⊂ N_1×{ D_2}×{ H}for some D_2≤ H_2. Suppose first that P_2^0+P_2^∞≰ D_2. Then b':Z→ b'(Z) is generically finite by <ref> (thus finite) and for a general point x∈ Z, we have(x)=(b'(x))≠δwhich is a contradiction. We thus have P_2^0+P_2^∞≤ D_2. Consider Y=ω_Λ(Z). Then by <ref> we have γ_(Y)=γ_(Z)=H and (Y)=δ thus Y is a positive-dimensional fiber of γ_. We haveb'(Y)=N_1×{H_2-D_2} ,and by the above reasoning applied to Y we haveP_2^0+P_2^∞≤ H_2-D_2.Thus we must have M∈ |ω_N_2(-P_2^0-P_2^∞)|⊂ |ω_N| which is of codimension 3. §.§.§ Step 2: The case of components finite onto V_M.Let Z be the union of all positive-dimensional components of Λ_^∗V_M that are finite above V_M, and are mapped to δ by . Note that if _δ(C/E)=0, Z is empty because by assumption no subdivisor of R lies above δ. This section is thus relevant only in the case _δ(C,E)>0. We use the notations of Fig. <ref>. Let π_∗:→ E_ be the pushforward of points. Let Yπ_∗∘ b ∘p̃(Z). As a general divisor in V_M is non-singular, so is a general divisor in Y and we thus have en embeddingj:Y↪ |δ|=^0(E,δ).The involutions ω_Λ and τ_Λ from <ref> and <ref> induce involutions on Z and Y, which we denote by ω and τ by abuse of notation. The action of τ on |ω_C| induces an involution on V_M as well which we denote by τ. We thus have the following commutative diagramZ [r] [d] V_M ≃^1 [d]|δ |Y [r] [l,hook',"j"] [d,hook,"(j,j∘ω)"]V_M/τ≃^1 [d,hook,"i"]|δ|× |δ|[r,"m"]|Δ| ,where m:|δ|×|δ|→ |Δ| is the multiplication map. 
We then have the following: [Y] = ( Z→ V_M)/2,where [Y]= [Y]∩ c_1(_|δ|(1)). The morphism Z→ Y and V_M→ V_M/τ are generically of degree 2, thus(Z→ V_M)=(Y→ V_M/τ)k.By definition we haveV_M={ (λ s_R+μπ ^∗ s) |(λ:μ)∈^1 }⊂ |ω_C| ,where s_R=R and M=[s]. Thus i_∗ V_M/τ = {( λ s_Δ+μ s^2)|(λ,μ)∈^1 }⊂ |Δ|,where s_Δ=Δ. Thus i_∗ [V_M/τ] is of degree 1. The multiplication map m is the composition of the Segre embedding and a linear projection, it is thus of bidegree (1,1). If d= j_∗[Y], then (j,j∘ω) is of bidegree (d,d), thusk=2d.There is a closed set S⊂ |ω_N| of codimension at least 3, such that for all M∈ |ω_N|∖ S, we have (Z→ V_M)≤ 4,where Z is the union of all components of Λ^∗_M that are finite onto V_M and mapped to δ by . For every (L,[s_1],[s_2])∈ ZH⊂Λ^∗_H such that (L)=δ, we have τ (L)=ω(L) and(ω_C⊗ L^-1)=δ .Thus points above H that map to δ come in pairs. It is not complicated to see that having 3 such pairs above H imposes a condition of codimension 3 on M.We can now complete the proof of Theorem <ref>. By the above lemma, away from a set S of codimension at least 3, we have (Z → V_M)≤ 4. By <ref> we then have [Y]=(Z → V_M)/2 =2 thus Y≃^1 is a rational curve. Recall that ω and τ commute. Consider the following tower of double coverings of curves Z [dl,"π_ω "'] [d,"π_τ"] [dr,"π_ωτ"]Y_ω[dr,"p_ω" '] Y [d,"p_τ"] Y_ωτ[dl,"p_ωτ"] ^1 ,where Y_ω (resp. Y_ωτ) is Z/ω (resp. Z/ωτ). Since Y≃^1, the lower curve has to be ^1. The fixed points of ω correspond to theta-nulls. Moreover ω doesn't fix the points in Z_M above R∈ V_M. Thus away from a finite locus in |ω_N| we can assume that p_ω is étale. For all L∈ P we have L+τ L=π^∗δ, thusω_C- τ L=L+ω_C-π^∗δ≠ L.Thus ωτ acts fixed point free on Z. By the above diagram this implies that Y→^1 is étale which is impossible by Riemann-Hurwitz. §.§ The characteristic cycleLet j:Ξ_↪Ξ be the embedding and _Ξ j_!∗_Ξ_[g-1]∈Perv(P) be the intersection complex associated to Ξ. We now compute the characteristic cycle CC(_Ξ) for =(g) and =(1,g-1). The proof is inspired from Bressler and Brylinski's proof of the irreducibility of the characteristic cycle of the theta divisor of non-hyperelliptic Jacobians <cit.>. Recall that the restriction of the Abel-Jacobi map α:→Θ induces a mapϕαW̃:W̃→Ξ . Let W̃^oW̃∖, Ξ^oΞ∖, and ϕ^o:W̃^o→Ξ^o be the restriction. By <ref>, W̃^o is smooth if =(g) or =(1,g-1). Moreover a general line bundle L∈Ξ verifies ^0(C,L)=1, thus ϕ is birational by <ref>. We start with the following:Suppose =(g) or =(1,g-1), and D∈W̃^o, then( ^t _D ϕ ) ≤^0(N,ω_N(-β^∗ D))+1.If moreover D is non-singular, then(^t ϕ)=(^0(C,ω_C(-D))+⟨ s_R ⟩)∩^0(N,ω_N).Recall the following commutative diagram defining ϕW̃[rrrr,bend left=15, "ϕ"] [d,"bW"'] [r,hook][r,"α"] [d,"b"][d,"β^∗"] P [l,hook']Ξ[l,hook'] [dl,"β^∗Ξ"] W [r,hook] [rrr,bend right=15, "ϕ_N"'][r,"α_N"]^(N)Ξ' [l,hook'].Let D∈W̃, with D≰ R. β^∗P:P→^(N) is a degree 2 isogeny, so composing ϕ with it doesn't change the codifferential. By <cit.> we have(^tα_N)=^0(N,ω_N(-β^∗ D)).Localy near D, W̃ is smooth of codimension 1 in , and W̃→ W is the normalization, thus the codifferential is injective, thus(^tϕ)≤(^t α_N)+1=^0(N,ω_N(-β^∗ D))+1.Now suppose D non-singular. 
Then the proof of Lemma 2.3 page 171 in <cit.> can be repeated at verbatim locally near D and thus(^t _D α)=^0(C,ω_C(-D)).We then have the following commutative diagram T^∨_D W̃T^∨_D [l,two heads,"^tι_W̃"] T^∨_α(D)[d,equal] [l,"^tα"] T^∨_α(D) P [l,hook'] [lll,"^tϕ"',bend right=15] [d,equal] ^0(C,ω_C)^0(N,ω_N) [l,hook'].Thus0≠⟨^tα(s_R) ⟩ = ( ^t ι_W̃ ) ,and(^t ϕ)=(^0(C,ω_C(-D))+⟨ s_R ⟩)∩^0(N,ω_N).We have the following:Suppose =(g) or =(1,g-1). If g is even, then CC(_Ξ)=Λ_Ξ .If g is odd, thenCC(_Ξ)=Λ_Ξ+∑_x∈ 2 Λ_x ,where ϕ(b^-1({D∈ W |D≤ R})) and Λ_x=N^∨_x P is the conormal variety to the point x∈ P. Let =(g) or =(1,g-1). By <ref>, <ref> and <ref> we know that ϕ^o:W̃^o→Ξ^o is a small resolution of singularities. Thus by <cit.> we haveCC(_Ξ^o) ⊆ϕ_π(^t ϕ^-1 (N^∨_W̃^oW̃^o))where ^tϕ is the codifferential and ϕ_π is the projectionT^∨W̃^o ^t ϕ⟵W̃^o×_Ξ^o T^∨ P ϕ_π⟶ T^∨ P,and N^∨_W̃^oW̃^o⊂ T^∨W̃^o is the zero section. Let Λ=^t ϕ^-1(N^∨_W̃^oW̃^o). We define the following stratification of W^oW∖ * W_k⊂ W^o is the locus of non-singular divisors D∈ W which can be written asD=π^∗ M+F ,with M∈ E_k and F π-simple. We have W_k=g-k-1.* V_k⊂ W^o is the locus of singular divisors D∈ W^o which can be written asD =π^∗ M+F,with M∈ E_k, and F π-simple. We have V_k=g-k-2.For a locus Z⊂ W^o, denote ΛZ the fiber of Λ above Z. By Lemma <ref> we have * The fibers of ΛW_0→ W_0 are of dimension 1, Thus ΛW_0 is of dimension g.* Let 0<k≤ g-1, and D=π^∗ M+F∈ W_k. An element of ^0(C,ω_C(-D)) vanishes at two conjugate points thus must be in the (+)-eigenspace of τ, ^0(C,ω_C)^+=^0(N,ω_N). Thus^0(C,ω_C(-D))=^0(N,ω_N(-D)) ,thus (^t _Dϕ)=^0(N,ω_N(-D)) which is of dimension k by Riemann-Roch. Thus ΛW_k=g-1.* Let 0≤ k ≤ g-2 and D∈ V_k. By <ref> we have(^t_D ϕ)≤^0(N,ω_N(-β^∗ D))+1=k+1,using Riemann-Roch. Thus ΛV_k≤ g-1.It follows that Λ, and thus ϕ_π(Λ) is irreducible of dimension g, which proves the theorem away from . Finally, the points inare isolated quadratic singularities of maximal rank by <ref>. For such a singularity, it is well-known that the characteristic cycle is irreducible if g is even, and contains the conormal variety to the singular points with multiplicity 2 if g is odd.
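The parity mechanism invoked in the last sentence can be made explicit; the following classical facts are our gloss, not part of the paper, and use the local model from the lemma quoted above. For f(x_1,…,x_g)=x_1^2+⋯+x_g^2 on (ℂ^g,0), the Milnor fiber is homotopy equivalent to a sphere and the local monodromy is a sign:

\[
F_f \simeq S^{g-1}, \qquad T = (-1)^g \ \text{ on } \widetilde{H}^{g-1}(F_f;\mathbb{Q}),
\]

by the Thom–Sebastiani theorem, since each summand x_i^2 has Milnor fiber two points, hence Milnor number 1 and monodromy -1 on reduced cohomology. The local monodromy at such a point is therefore trivial exactly when g is even, which is the source of the parity dichotomy in the multiplicity of Λ_x.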
http://arxiv.org/abs/2312.16591v1
{ "authors": [ "Constantin Podelski" ], "categories": [ "math.AG" ], "primary_category": "math.AG", "published": "20231227143511", "title": "The boundary of the bielliptic Prym locus" }
Convergence results for the solutions of (p,q)-Laplacian double obstacle problems on irregular domains

Raffaela Capitanelli, Dipartimento di Scienze di Base e Applicate per l'Ingegneria, "Sapienza" Università di Roma, Via A. Scarpa 16, 00161, Roma, Italy, raffaela.capitanelli@uniroma1.it

Salvatore Fragapane, "Sapienza" Università di Roma, salvatore.fragapane@uniroma1.it

MSC: 28A80, 35J87, 35J65, 35B65, 35B40

In this paper we study double obstacle problems involving (p,q)-Laplace type operators. In particular, we analyze the asymptotics of the solutions on fractal and pre-fractal boundary domains.
=======================

§ INTRODUCTION In this work we deal with double obstacle problems involving a family of (p,q)-Laplace type operators in fractal and pre-fractal boundary domains in ℝ^2. Our motivation is due to the fact that (p,q)-Laplace type operators and irregular domains are useful for the study of many real problems. In fact, the p-Laplace operator appears in the study of many concrete problems such as non-Newtonian fluid mechanics and flows through porous media (see <cit.> and the references therein). The (p,q)-Laplacian has a wide range of applications in the physical and related sciences, e.g. in biophysics, quantum physics, plasma physics, solid state physics, chemical reaction design, and reaction-diffusion systems (see, for example, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.> and the references therein). The limit operator (the ∞-Laplacian) also plays a leading role in problems like mass transport (see <cit.> and <cit.>) and torsional creep (see, f.i., <cit.> and <cit.>). Moreover, obstacle problems appear in many other contexts: fluid filtration in porous media, elasto-plasticity, optimal control, and financial mathematics (see, for example, <cit.> and <cit.>). With regard to fractals, what we mainly want to underline is their capacity to describe natural objects in a better way; in particular, they can represent the irregular structure of several media (coasts, elements of the human body, etc.) more appropriately than the classical "smooth" structures, and therefore they find application in the modelization of various phenomena (see, for example, <cit.> and <cit.>). In this paper, for p>q, q∈[2,∞) and k∈ℝ, we consider the following double obstacle problem on a fractal domain Ω_α (see, for the definition, Section <ref>):min_v∈ℋ_p J_p,q(v), (𝒫ℳ_p,q)withJ_p,q(v)=1/p∫_Ω_α(k^2+|∇ v|^2)^p/2+1/q∫_Ω_α(k^2+|∇ v|^2)^q/2-∫_Ω_α fv, ℋ_p={v∈ W_g^1,p(Ω_α): φ_1≤ v≤φ_2 in Ω_α},and where f∈ L^1(Ω_α), g∈ W^1,∞(Ω_α), φ_1,φ_2∈ C(Ω_α) are given (here W^1,p_g(Ω):={v∈ W^1,p(Ω) : v=g on ∂Ω}).As in <cit.>, where the authors analyzed the homogeneous case without obstacles, we study the asymptotic behavior of the solutions u_p,q as p→∞, showing that, along subsequences, they converge to a solution of the following problem:min_v∈ℋ J_q(v),withJ_q(v)=1/q∫_Ω_α(k^2+|∇ v|^2)^q/2-∫_Ω_α fv,if L^2+k^2≤1, (𝒫ℳ_q) min_v∈ℋ ||∇ v||_∞,if L^2+k^2>1, (𝒫ℳ_q,L)where L is the Lipschitz constant of g andℋ={v∈ W_g^1,∞(Ω_α): φ_1≤ v≤φ_2 in Ω_α, ||∇ v||_∞≤max{1, √(L^2+k^2)}}(see Theorem <ref>).
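We note in passing (this remark is ours, under the natural compatibility assumption φ_1≤ g≤φ_2 in Ω_α) that the limit convex set ℋ is non-empty, since the boundary datum itself belongs to it:

\[
g\in W^{1,\infty}_g(\Omega_\alpha),\qquad
\|\nabla g\|_{\infty}\le L\le\max\{1,\sqrt{L^2+k^2}\},
\]

so g∈ℋ. The same competitor shows that ℋ_p≠∅ for every finite p.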
We recall that asymptotic results for the p-Laplacian as p goes to ∞ have been studied, for example, in <cit.>, <cit.>, <cit.> and <cit.>. Since the previous problems are defined on the domain Ω_α, which can be seen as the limit of appropriate domains Ω_α^n (see Section <ref>), it becomes natural to consider the corresponding approximating problems, that is, the problems on the approximating domains Ω_α^n. For these problems, it is possible to prove analogous convergence results as p→∞. Anyway, the introduction of the approximating problems opens another issue concerning the behavior of the solutions with respect to n, namely whether it is possible to obtain a solution to Problems (<ref>) and (<ref>) or (<ref>) as a limit, with respect to n, of the solutions of such problems. The behavior of the solutions with respect to n has been studied by many authors (see, f.i., <cit.>, <cit.>, <cit.>, <cit.>, <cit.> and <cit.>). Nevertheless, as far as we know, the authors of <cit.>, considering a double obstacle problem, were the first to analyze simultaneously the behavior with respect to both p and n. In fact, their analysis raises a question about the possibility of exchanging the order of the limits while obtaining the same limit solution. Unfortunately, the lack of a uniqueness result for the case p=∞ does not allow the authors to affirm it (see <cit.>, <cit.>, <cit.> and the references quoted in them). A first step in this direction is done in <cit.>, where the authors give sufficient conditions which allow one to obtain the convergence of the whole sequence. Then, in <cit.> these conditions are used to state uniqueness results (in the case p=∞) for the same unilateral obstacle problem. The purpose of this paper is to give a complete answer to the asymptotic behavior of the solutions to the problems considered. Indeed, after the analysis of the behavior with respect to p, we will study the one with respect to n, showing that results analogous to the ones stated in <cit.> and <cit.> hold also in the case examined here.We stress the fact that in the proof of the convergence as n→∞ a necessary step is the construction of a sequence of functions, belonging to the approximating convex sets (ℋ_p,n or ℋ_n), which converges to an element chosen in the corresponding final convex set (ℋ_p or ℋ). The hard part in this construction is to obtain functions satisfying all the conditions required by the convex set, and this is due also to the irregular nature of the domains considered. In this context, we emphasize how the introduction of suitable coefficient functions and an integrability result for the gradient of the solutions will be fundamental tools in order to state our results. More precisely, for p fixed and finite, the coefficient functions allow us to obtain a sequence of functions which converges to a solution of the problem on Ω_α, preserving some properties, as long as its gradient has a higher summability than the one of the natural space in which we look for solutions. In this framework, the summability results we quoted play a crucial role; in particular, the approach and the techniques are the ones of <cit.>, <cit.> and <cit.>. For reasons of completeness, we specify that, beyond a greater summability of the gradient, further regularity results for the solution are present in the literature.
In particular, regularity results for p-Laplacian obstacle problems in pre-fractal domains have been given in <cit.> (see also the references quoted there) and, in the same paper, these results are applied to give an optimal error estimate for the corresponding FEM problem, following the approach used in <cit.>. Finally, we point out that it is possible to extend the results of the present paper to other domains, possibly with pre-fractal and fractal boundaries, provided these domains are "Sobolev admissible domains" (see <cit.>, <cit.>).

The plan of the paper is the following. In Section 2 we introduce the construction of fractal and pre-fractal boundary domains. In Section 3 the problem is introduced, and a corresponding integrability result for the gradient of the solution is obtained in Section 4. Section 5 is devoted to the asymptotic analysis with respect to p, and Section 6 to the asymptotic analysis with respect to n.

§ FRACTAL AND PRE-FRACTAL BOUNDARY DOMAINS

In order to introduce the domains Ω^n_α and Ω_α, which are the ones we will use in this paper, we recall how the construction of the Koch curve and of the corresponding approximating pre-fractal curves works. Let us recall the procedure which yields the n-th pre-fractal K^n_α, n∈ℕ, of the Koch curves (see <cit.> for details and proofs). Let us start by considering, for instance, the line segment K^0 with endpoints A(0,0) and B(1,0), and let us introduce a family of four contractive similitudes Ψ_α = {ψ_{1,α}, …, ψ_{4,α}} having α^{-1} as contraction factor, with 2<α<4, defined as follows:

ψ_{1,α}(z) = z/α, ψ_{2,α}(z) = (z/α) e^{iθ(α)} + 1/α, ψ_{3,α}(z) = (z/α) e^{-iθ(α)} + 1/2 + i√(1/α − 1/4), ψ_{4,α}(z) = (z−1)/α + 1,

with

θ(α) = arcsin( √(α(4−α))/2 ).

The first iteration produces a polygonal curve of four line segments; Figure <ref> shows this first step. In general, at every step each segment of the polygonal curve is replaced with a rescaled copy of the one in the basic step (see Figure <ref>). Then, for each n∈ℕ, we set

K_α^n = ⋃_{i=1}^4 ψ_{i,α}(K_α^{n−1}) = ⋃_{i|n} K_α^{i|n}, with K_α^{i|n} = ψ_{i|n,α}(K^0),

where ψ_{i|n,α} = ψ_{i_1,α} ∘ ψ_{i_2,α} ∘ ⋯ ∘ ψ_{i_n,α} is the map associated with an arbitrary n-tuple of indices i|n = (i_1, i_2, …, i_n) ∈ {1,…,4}^n, for each integer n>0, and ψ_{i|n,α} = id in ℝ^2 if n=0. Figure <ref> shows the result of some steps of the procedure just recalled and the final curve.

As n→∞, the curves K_α^n converge to the fractal curve K_α in the Hausdorff metric. Moreover, K_α is the unique compact set which is invariant under Ψ_α, and d_f = ln4/lnα is its Hausdorff dimension.

Now, we denote by Ω^n_α the domain obtained starting from any regular polygon Ω^0 (triangle, square, etc.) and replacing each of its sides with the n-th pre-fractal Koch curve K_α^n. These domains are non-convex and polygonal, with an increasing number of sides, and, in the limit, they develop the fractal geometry of Ω_α, i.e. the domain having a fractal boundary formed by the union of Koch curves (see Figure <ref>).

§ DOUBLE OBSTACLE PROBLEM

Let p>q, with q∈[2,∞) fixed. Given f∈ L^1(Ω_α), g∈ W^{1,∞}(Ω_α) and φ_1, φ_2 ∈ C(Ω_α), Problem (<ref>) is equivalent to the following variational inequality:

find u_{p,q} ∈ ℋ_p : a_p(u_{p,q}, v−u_{p,q}) + a_q(u_{p,q}, v−u_{p,q}) − ∫_{Ω_α} f(v−u_{p,q}) ⩾ 0, ∀ v∈ℋ_p, (𝒫_{p,q})

where

a_p(u,v) = ∫_{Ω_α} (k^2+|∇ u|^2)^{(p−2)/2} ∇ u ∇ v,

and ℋ_p is defined in (<ref>). Moreover (see, for instance, <cit.>), if ℋ_p is non-empty, then, since the functional J_{p,q} is convex, weakly lower semicontinuous and coercive, Problem (<ref>) has a minimizer u_{p,q} in ℋ_p.

Problem (<ref>) admits a unique solution.

Let u_1 and u_2 be solutions to Problem (<ref>).
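As a computational aside before carrying out the proof, we note that the construction of Section <ref> is straightforward to implement. The following is a minimal Python sketch (our illustration only, not part of the analysis; the choices α=3 and n=4 are hypothetical parameters): at each step the four similitudes ψ_{i,α} are applied to the whole previous curve, in accordance with K_α^n = ⋃_{i=1}^4 ψ_{i,α}(K_α^{n−1}).

```python
import numpy as np

def koch_prefractal(alpha: float, n: int) -> np.ndarray:
    """Vertices of the n-th pre-fractal Koch curve K^n_alpha, 2 < alpha < 4."""
    theta = np.arcsin(np.sqrt(alpha * (4.0 - alpha)) / 2.0)
    rot = np.exp(1j * theta)                        # rotation e^{i theta(alpha)}
    apex = 0.5 + 1j * np.sqrt(1.0 / alpha - 0.25)   # psi_3(0)
    psi = [
        lambda z: z / alpha,
        lambda z: (z / alpha) * rot + 1.0 / alpha,
        lambda z: (z / alpha) * np.conj(rot) + apex,
        lambda z: (z - 1.0) / alpha + 1.0,
    ]
    pts = np.array([0.0 + 0.0j, 1.0 + 0.0j])   # K^0: segment from A(0,0) to B(1,0)
    for _ in range(n):
        pieces = [f(pts) for f in psi]          # K^n = union of four copies of K^{n-1}
        # concatenate, dropping the duplicated junction points
        pts = np.concatenate([p[:-1] for p in pieces[:-1]] + [pieces[-1]])
    return pts                                   # 4**n + 1 points in the complex plane

curve = koch_prefractal(alpha=3.0, n=4)  # alpha = 3 gives the classical Koch curve
print(len(curve))                        # 257 = 4**4 + 1
```

We now return to the proof of uniqueness, with u_1 and u_2 as above.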
It holds thata_p(u_1,u_2-u_1)+a_q(u_1,u_2-u_1)-∫_Ω_αf(u_2-u_1)≥ 0and-a_p(u_2,u_2-u_1)-a_q(u_2,u_2-u_1)+∫_Ω_αf(u_2-u_1)≥ 0.So, by the definition of a_p(u,v) and summing previous relations, we get∫_Ω_α(k^2+|∇ u_1|^2)^p-2/2∇ u_1∇(u_2-u_1)-∫_Ω_α(k^2+|∇ u_2|^2)^p-2/2∇ u_2∇(u_2-u_1)+ +∫_Ω_α(k^2+|∇ u_1|^2)^q-2/2∇ u_1∇(u_2-u_1)-∫_Ω_α(k^2+|∇ u_2|^2)^q-2/2∇ u_2∇(u_2-u_1)≥0,that is∫_Ω_α[(k^2+|∇ u_2|^2)^p-2/2∇ u_2∇(u_2-u_1)-(k^2+|∇ u_1|^2)^p-2/2∇ u_1∇(u_2-u_1)]+ +∫_Ω_α[(k^2+|∇ u_2|^2)^q-2/2∇ u_2∇(u_2-u_1)-(k^2+|∇ u_1|^2)^q-2/2∇ u_1∇(u_2-u_1)]≤0.Thus, by Lemma 2.1 in <cit.>, we deduce that there exist C=C(k)>0 such thatC(||∇(u_2-u_1)||_p+||∇(u_2-u_1)||_q)≤0⟺||∇(u_2-u_1)||_p+||∇(u_2-u_1)||_q≤0and thenu_1(x)=u_2(x)+c,c∈By taking y∈∂Ω_α, we have u_1(y)=u_2(y)=g(y) and theng(y)=g(y)+c⟺ c=0.Hence, u_1=u_2. Let us consider u_p,q∈ℋ_p solution to the problemfindu∈ℋ_p:a_p(u,v-u)+a_q(u,v-u)-∫_Ω_αf(v-u)⩾ 0,∀ v∈𝒦_p.𝒫_p,q Denoting u̅_E=1/|E|∫_Eu,with E⊆Ω_α, we have that (𝒫_p,q) is equivalent toa_p(u_p,q-u̅_p,q,E,v-u̅_p,q,E+u̅_p,q,E-u_p,q)+a_q(u_p,q-u̅_p,q,E,v-u̅_p,q,E+u̅_p,q,E-u_p,q)+ -∫_Ω_αf(v-u̅_p,q,E+u̅_p,q,E-u_p,q)⩾ 0,∀ v∈𝒦_p.Hence, definingv̂=v-u̅_p,q,E,φ̂_i=φ_i-u̅_p,q,E,i=1,2 ,andû_p,q=u_p,q-u̅_p,q,E,we get that û_p,q∈ℋ_p,-u̅_p,q,E={v̂∈ W_g-u̅_p,E^1,p(Ω_α): φ̂_1≤v̂≤φ̂_2in Ω_α} is solution to problemfindw∈ℋ_p,-u̅_p,q,E:a_p(w,v̂-w)+a_q(w,v̂-w)-∫_Ω_αf(v̂-w)⩾ 0,∀v̂∈𝒦_p,-u̅_p,q,E.§ INTEGRABILITY RESULTA first step to do in order to deal with the asymptotic behavior, as n→∞, consists in an integrability result. The issue of the integrability of the gradient was widely studied by many authors; here, in particular, we refer to the approach and the results of <cit.>, <cit.> and <cit.>. On the one hand, in <cit.> and <cit.> the authors gave a global integrability result for obstacle problems, assuming that the boundary was p-Poincaré thick. On the other hand, in <cit.> the authors obtained an analogous result for aproblem without obstacle and involving an operator satisfying the same assumptions, but requiring a weaker condition on the boundary. In the following subsection, we simply recalltheprevious quotedresults making only the necessary changes or assumptions to adapt them to our case. §.§ Preliminary tools Let us recall some definitions and crucial lemmas which are preparatory to the integrability result.We point out that from now on, we adopt the following notation:· B_ρ(x)={x∈^2 :||x||<ρ},ρ>0; · B_ρ a generic ball having radius equal to ρ. (Poincaré's inequalities)Let Ω be a bounded open subset of ^2, with diameter d, and let p≥1 be. (i) If u∈ W_0^1,p(Ω), then ||u||_p≤d/p^1/p||∇ u||_p≤ d||∇ u||_p,(see Theorem 12.17 in <cit.>). (ii) If Ω is convex and u∈ W^1,p(Ω), then ∃ C=C(p)>0 such that||u-u̅_Ω||_p,Ω≤ Cd||∇ u||_p,Ω,(see Theorem 12.30 in <cit.>). (Sobolev-Poincaré's inequalities)Let us consider p∈[1,2) and u∈ W^1,p(^2), then there exists C=C(p)>0 such that ||u-u̅_B_r||_p^*,B_r≤ Cr||∇ u||_p,B_r,withp^*=2p/2-p,for every B_r⊂^2 (see Theorems 3.16 and 3.20 in <cit.>). For the following definitions and lemmas (and more informations about them) we refer, for instance, to <cit.>, <cit.>, <cit.>, <cit.> and the references quoted there. The p-capacity of a compact set K⊂Ω in Ω is defined as _p(K;Ω)=inf_u∈ C_0^∞(Ω) u=1inK{∫_Ω|∇ u|^p} and for an arbitrary set A⊂Ω is defined as _p(A;Ω)=inf_A⊂ E⊂ΩE opensup_K⊂ E Kcompact{_p(K;Ω)}, Since we are in ^2,_p(B_r;B_2r)=Cr^2-p, where C=C(p)>0. 
We say that a function u is p-quasicontinuous in Ω if, for any ε>0, there exists an open set A with _p(A;Ω)<ε such that u restricted to Ω∖ A is continuous. For any v∈ W^1,p(Ω), p>1, there exists a p-quasicontinuous fuction w∈ W^1,p(Ω) such that v=w q.e. in Ω. Moreover, this representative w is unique, in the sense that every other is equal to it except at most in a set of p-capacity equal to zero. We say that a set Ω⊂^m, m∈ is uniformly p-thick, 1<p<∞, if there exist positive constants C and r such that _p(Ω∩B_r(x);B_2r(x))≥ C_p(B_r(x);B_2r(x)) whenever x∈Ω and 0<r<r. In our case this condition is not restrictive, since it is automatically satisfied; indeed if p>m each non-empty set is uniform p-thick. Moreover, the complement of any proper simply connected subdomain of ^2 is uniformly p-thick for all p>1.From now on, we denote· _Ωu:=1/|Ω|∫_Ωu,where Ω⊂^2, u:Ω→. Let B⊂^2 be a fixed ball and let g,h∈ L^s(B) be non-negative functions. If for some s>1 _B_rg^s≤ C(_B_2rg)^s+_B_2rh^s for each ball B_r, r>0, with B_2r⊂ B, then there exists ε=ε(C,s)>0 such that for all t∈[s,s+ε] it holds (_B_2rg^t)^1/t≤ C(_B_2rg^s)^1/s+(_B_2rh^t)^1/t (see Lemma 2.1 in <cit.>).Let u be a t-quasicontinuous function in W^1,t(B_r), where t>1 and B_r⊂^2, r>0. Let N(u)={x∈ B_r: u(x)=0}. Then(_B_ru^α t)^1/α t≤ C(1/_t(N(u);B_2r)∫_B_r|∇ u|^t)^1/t,where C=C(t)>0 and α=2/2-t,if1<t<22,ift≥2. (see Lemma 3.1 in <cit.>).This following last technical results is crucial in the proof of the integrability. We point out that it is an adaptation to our case of the analogousones proved in <cit.>, <cit.> and <cit.>, and its proof uses the same arguments. Let p>q, q∈[2,∞), f∈ L^p'(Ω_α), with 1/p+1/p'=1, φ_1,φ_2∈ W^1,p(Ω_α) and u solution to Problem (<ref>). Let r>0 such that B_r⊂Ω_α and ψ∈ C_0^∞(Ω_α), with 0≤ψ≤1. Then there exists C=C(k,p,q,Ω_α)>0 such that:∫_Ω_α|ψ|^p|∇ u|≤ C[∫_Ω_α|ψ|^p(|∇φ_1|^p+|∇φ_2|^p)+∫_Ω_α|ψ|^p/p-1(|∇φ_1|^p/p-1+|∇φ_2|^p/p-1)+ +∫_Ω_α|ψ|^p/p-q+1(|∇φ_1|^p/p-q+1+|∇φ_2|^p/p-q+1)+ ∫_Ω_α|∇ψ|^p(|u-u̅_B_r|^p+|φ_1-φ_1_B_r|^p+ +|φ_2-φ_2_B_r|^p)+∫_Ω_α|∇ψ|^p/p-1(|u-u̅_B_r|^p/p-1+|φ_1-φ_1_B_r|^p/p-1+|φ_2-φ_2_B_r|^p/p-1)+ +∫_Ω_α|∇ψ|^p/p-q+1(|u-u̅_B_r|^p/p-q+1+|φ_1-φ_1_B_r|^p/p-q+1+|φ_2-φ_2_B_r|^p/p-q+1)+∫_Ω_α|f|^p/p-1|ψ|^p/p-1]; ∫_Ω_α|ψ|^p|∇ u|≤ C[∫_Ω_α|ψ|^p(|∇φ_1|^p +|∇ g|^p+|∇φ_2|^p)+∫_Ω_α|ψ|^p/p-1(|∇φ_1|^p/p-1+|∇ g|^p/p-1+|∇φ_2|^p/p-1)+ +∫_Ω_α|ψ|^p/p-q+1(|∇φ_1|^p/p-q+1+|∇ g|^p/p-q+1+|∇φ_2|^p/p-q+1)+∫_Ω_α|∇ψ|^p|u-w|^p+∫_Ω_α|∇ψ|^p/p-1|u-w|^p/p-1+ +∫_Ω_α|∇ψ|^p/p-q+1|u-w|^p/p-q+1+∫_Ω_α|f|^p/p-1|ψ|^p/p-1],where w=(φ_1∨ g)φ_2, with g∈ W^1,∞(Ω_α). Let us prove (<ref>). The proof is not difficult, but a little tangled. Since u∈ℋ_p is solution to Problem (<ref>), thanks to Remark <ref>, we have∫_Ω_α(k^2+|∇û|^2)^p-2/2∇û∇(v̂-û) +∫_Ω_α(k^2+|∇û|^2)^q-2/2∇û∇(v̂-û)-∫_Ω_αf(v̂-û)⩾ 0∀v̂∈ℋ_p,-u̅_B_r, with û=u-u̅_B_r, û∈ℋ_p,-u̅_B_r. Now, let us considerv̂=u-u̅_B_r-ψ^p(u-u̅_B_r)+ψ^pw=(1-ψ^p)(u-u̅_B_r)+ψ^pw, with w=(φ_1-u̅_B_r)^+(φ_2-u̅_B_r)^+-(φ_2-u̅_B_r)^-= φ_1-u̅_B_r,if φ_1> u̅_B_r0,if φ_1≤u̅_B_r≤φ_2 φ_2-u̅_B_r,if φ_2< u̅_B_r.So, we deduce that v̂∈ℋ_p,-u̅_B_r.Moreover, it holds|w|≤ |φ_1-φ_1_B_r|,if φ_2≥u̅_B_r|φ_2-φ_2_B_r|,if φ_2< u̅_B_rand then |w|≤ |φ_1-φ_1_B_r|+|φ_2-φ_2_B_r|. Furthermore, by (<ref>), we deduce that |∇ w|≤ |∇φ_1|+|∇φ_2|.With this choice of v̂ we have ∇(v̂-û)=∇(-ψ^p(u-u̅_B_r)+ψ^pw)=pψ^p-1∇ψ (-(u-u̅_B_r)+w)-ψ^p∇ u+ψ^p∇ w. 
Then, by (<ref>) and previous observations, we have:0≤∫_Ω_α(k^2+|∇ u|^2)^p-2/2∇ u·[pψ^p-1∇ψ (-(u-u̅_B_r)+w)-ψ^p∇ u+ψ^p∇ w]+ +∫_Ω_α(k^2+|∇ u|^2)^q-2/2∇ u·[pψ^p-1∇ψ (-(u-u̅_B_r)+w)-ψ^p∇ u+ψ^p∇ w]+ - ∫_Ω_αf(-ψ^p(u-u̅_B_r)+ψ^pw)≤≤-∫_Ω_α(k^2+|∇ u|^2)^p-2/2|∇ u|^2|ψ|^p+∫_Ω_α(k^2+|∇ u|^2)^p-2/2|∇ u|(|∇φ_1|+|∇φ_2|)|ψ|^p+ +p∫_Ω_α(k^2+|∇ u|^2)^p-2/2|∇ u||∇ψ||ψ|^p-1(|u-u̅_B_r|+|φ_1-φ_1_B_r|+|φ_2-φ_2_B_r|)+ -∫_Ω_α(k^2+|∇ u|^2)^q-2/2|∇ u|^2|ψ|^p+ +∫_Ω_α(k^2+|∇ u|^2)^q-2/2|∇ u|(|∇φ_1|+|∇φ_2|)|ψ|^p+ +p∫_Ω_α(k^2+|∇ u|^2)^q-2/2|∇ u||∇ψ||ψ|^p-1(|u-u̅_B_r|+|φ_1-φ_1_B_r|+|φ_2-φ_2_B_r|)+ +∫_Ω_α|fψ||ψ|^p-1(|u-u̅_B_r|+|φ_1-φ_1_B_r|+|φ_2-φ_2_B_r|).Now, since∀ a,b≥0,∀ r≥1∃C=C(r)>0:(a+b)^r≤ C(a^r+b^r),and by the fact that |ψ|^a≤|ψ|^b, for any a≥ b>0, we get0≤-∫_Ω_α|∇ u|^p|ψ|^p+C_k,p∫_Ω_α|∇ u|(|∇φ_1|+|∇φ_2|)|ψ|^2+C_p∫_Ω_α|∇ u|^p-1(|∇φ_1|+|∇φ_2|)|ψ|^p+ +pC_k,p∫_Ω_α|∇ u||∇ψ||ψ|(|u-u̅_B_r|+|φ_1-φ_1_B_r|+|φ_2-φ_2_B_r|)+pC_p∫_Ω_α|∇ u|^p-1|∇ψ||ψ|^p-1(|u-u̅_B_r|+|φ_1-φ_1_B_r|+|φ_2-φ_2_B_r|)+ -∫_Ω_α|∇ u|^q|ψ|^p+C_k,q∫_Ω_α|∇ u|(|∇φ_1|+|∇φ_2|)|ψ|^2+C_q∫_Ω_α|∇ u|^q-1(|∇φ_1|+|∇φ_2|)|ψ|^q+ +pC_k,q∫_Ω_α|∇ u||∇ψ||ψ|(|u-u̅_B_r|+|φ_1-φ_1_B_r|+|φ_2-φ_2_B_r|)+pC_q∫_Ω_α|∇ u|^q-1|∇ψ||ψ|^q-1(|u-u̅_B_r|+ +|φ_1-φ_1_B_r|+|φ_2-φ_2_B_r|)+ +∫_Ω_α|fψ||ψ|(|u-u̅_B_r|+|φ_1-φ_1_B_r|+|φ_2-φ_2_B_r|).Applying Young's inequality with conjugate exponents p and p/p-1 or p/q-1 and p/p-q+1, we obtain:0≤-∫_Ω_α|∇ u|^p|ψ|^p+σ/p∫_Ω_α|∇ u|^p|ψ|^p+C_k,p(p-1)σ^-1/p-1/p∫_Ω_α(|∇φ_1|^p/p-1+|∇φ_2|^p/p-1)|ψ|^p/p-1+ +σ_1(p-1)/p∫_Ω_α|∇ u|^p|ψ|^p+σ_1^-(p-1)/pC_p∫_Ω_α(|∇φ_1|^p+|∇φ_2|^p)|ψ|^p+σ_2∫_Ω_α|∇ u|^p|ψ|^p+ +C_k,p(p-1)σ_2^-1/p-1∫_Ω_α|∇ψ|^p/p-1(|u-u̅_B_r|^p/p-1+|φ_1-φ_1_B_r|^p/p-1+|φ_2-φ_2_B_r|^p/p-1)+ +σ_3(p-1)∫_Ω_α|∇ u|^p|ψ|^p+ +C_pσ_3^-(p-1)∫_Ω_α|∇ψ|^p(|u-u̅_B_r|^p+|φ_1-φ_1_B_r|^p+|φ_2-φ_2_B_r|^p)-∫_Ω_α|∇ u|^q|ψ|^q+ +σ_4/p∫_Ω_α|∇ u|^p|ψ|^p+ +C_k,p,q(p-1)σ_4^-1/p-1/p∫_Ω_α(|∇φ_1|^p/p-1+|∇φ_2|^p/p-1)|ψ|^p/p-1+σ_5(q-1)/p∫_Ω_α|∇ u|^p|ψ|^p+ +σ_5^-q-1/p-q+1C_p,q(p-q+1)/p∫_Ω_α(|∇φ_1|^p/p-q+1+|∇φ_2|^p/p-q+1)|ψ|^p/p-q+1+σ_6∫_Ω_α|∇ u|^p|ψ|^p+ +C_k,p,q(p-1)σ_6^-1/p-1∫_Ω_α|∇ψ|^p/p-1(|u-u̅_B_r|^p/p-1+|φ_1-φ_1_B_r|^p/p-1+|φ_2-φ_2_B_r|^p/p-1)+ +σ_7(q-1)∫_Ω_α|∇ u|^p|ψ|^p+ +σ_7^-q-1/p-q+1C_p,q(p-q+1)∫_Ω_α|∇ψ|^p/p-q+1(|u-u̅_B_r|^p/p-q+1+|φ_1-φ_1_B_r|^p/p-q+1+|φ_2-φ_2_B_r|^p/p-q+1)+ +||fψ||_p'|||ψ(u-u̅_B_r)|+|ψ(φ_1-φ_1_B_r)|+|ψ(φ_2-φ_2_B_r)|||_p,where for the last term we used Hölder's inequality.Now, thanks to Poincaré's inequality in the last term of (<ref>), we have:||fψ||_p'|||ψ(u-u̅_B_r)|+|ψ(φ_1-φ_1_B_r)|+|ψ(φ_2-φ_2_B_r)|||_p≤ ≤||fψ||_p'||ψ(u-u̅_B_r)||_p+||fψ||_p'||ψ(φ_1-φ_1_B_r)||_p+||fψ||_p'||ψ(φ_2-φ_2_B_r)||_p≤ ≤||fψ||_p'd||∇(ψ(u-u̅_B_r))||_p+||fψ||_p'd||∇(ψ(φ_1-φ_1_B_r))||_p+||fψ||_p'd||∇(ψ(φ_2-φ_2_B_r))||_p= =d||fψ||_p'[||∇ψ(u-u̅_B_r)+ψ∇ u||_p+||∇ψ(φ_1-φ_1_B_r)+ψ∇φ_1||_p+||∇ψ(φ_2-φ_2_B_r)+ψ∇φ_2||_p]≤ ≤ d||fψ||_p'(||∇ψ(u-u̅_B_r)||_p+||∇ψ(φ_1-φ_1_B_r)||_p+||∇ψ(φ_2-φ_2_B_r)||_p+||ψ∇ u||_p+||ψ∇φ_1||_p+||ψ∇φ_2||_p).Eventually, using Young's inequality, term by term, with conjugate exponents p and p/p-1, we obtain:||fψ||_p'|||ψ(u-u̅_B_r)|+|ψ(φ_1-φ_1_B_r)|+|ψ(φ_2-φ_2_B_r)|||_p≤σ_8/p||ψ∇ u||^p_p+σ_8/p||ψ∇φ_1||^p_p+σ_8/p||ψ∇φ_2||^p_p+ +σ_8/p||∇ψ(u-u̅_B_r)||^p_p+σ_8/p||∇ψ(φ_1-φ_1_B_r)||^p_p+σ_8/p||∇ψ(φ_2-φ_2_B_r)||^p_p+d^p/p-1σ_8^-1/p-1(p-1)/p||fψ||^p'_p'.Hence, putting together (<ref>) and (<ref>) and suppressing the term -||ψ∇ u||_q^q, we have(1-σ/p-σ_1+σ_1/p-σ_2-σ_3p+σ_3-σ_4/p-σ_5q/p+σ_5/p-σ_6-σ_7q+σ_7-σ_8/p)∫_Ω_α|ψ|^p|∇ u|^p≤ +C_k,p(p-1)σ^-1/p-1/p∫_Ω_α(|∇φ_1|^p/p-1+|∇φ_2|^p/p-1)|ψ|^p/p-1+σ_1^-(p-1)/pC_p∫_Ω_α(|∇φ_1|^p+|∇φ_2|^p)|ψ|^p+ 
+C_k,p(p-1)σ_2^-1/p-1∫_Ω_α|∇ψ|^p/p-1(|u-u̅_B_r|^p/p-1+|φ_1-φ_1_B_r|^p/p-1+|φ_2-φ_2_B_r|^p/p-1)+ +C_pσ_3^-(p-1)∫_Ω_α|∇ψ|^p(|u-u̅_B_r|^p+|φ_1-φ_1_B_r|^p+|φ_2-φ_2_B_r|^p)+ +C_k,p,q(p-1)σ_4^-1/p-1/p∫_Ω_α(|∇φ_1|^p/p-1+|∇φ_2|^p/p-1)|ψ|^p/p-1+ +σ_5^-q-1/p-q+1C_p,q(p-q+1)/p∫_Ω_α(|∇φ_1|^p/p-q+1+|∇φ_2|^p/p-q+1)|ψ|^p/p-q+1+ +C_k,p,q(p-1)σ_6^-1/p-1∫_Ω_α|∇ψ|^p/p-1(|u-u̅_B_r|^p/p-1+|φ_1-φ_1_B_r|^p/p-1+|φ_2-φ_2_B_r|^p/p-1)+ +σ_7^-q-1/p-q+1C_p,q(p-q+1)∫_Ω_α|∇ψ|^p/p-q+1(|u-u̅_B_r|^p/p-q+1+|φ_1-φ_1_B_r|^p/p-q+1+|φ_2-φ_2_B_r|^p/p-q+1)+ +σ_8/p∫_Ω_α|ψ|^p|∇φ_1|^p+σ_8/p∫_Ω_α|ψ|^p|∇φ_2|^p+d^p/p-1σ_8^-1/p-1(p-1)/p∫_Ω_α|f|^p/p-1|ψ|^p/p-1+ +σ_8/p∫_Ω_α|∇ψ|^p|u-u̅_B_r|^p+σ_8/p∫_Ω_α|∇ψ|^p|φ_1-φ_1_B_r|^p+σ_8/p∫_Ω_α|∇ψ|^p|φ_2-φ_2_B_r|^p.Lastly, choosing σ=σ_1=σ_7=1/5pq, σ_4=σ_5=1/5q, σ_2=σ_3=1/5p^2, σ_6=1/5p, σ_8=1/5and considering as C=C(k,p,q,Ω) the maximum among all the coefficients of the integrals at the right-hand side of the last inequality, we get:∫_Ω_α|ψ|^p|∇ u|≤ C[∫_Ω_α|ψ|^p(|∇φ_1|^p+|∇φ_2|^p)+ +∫_Ω_α|ψ|^p/p-1(|∇φ_1|^p/p-1+|∇φ_2|^p/p-1)+∫_Ω_α|ψ|^p/p-q+1(|∇φ_1|^p/p-q+1+|∇φ_2|^p/p-q+1)+ +∫_Ω_α|∇ψ|^p(|u-u̅_B_r|^p+|φ_1-φ_1_B_r|^p+|φ_2-φ_2_B_r|^p)+ +∫_Ω_α|∇ψ|^p/p-1(|u-u̅_B_r|^p/p-1+|φ_1-φ_1_B_r|^p/p-1+|φ_2-φ_2_B_r|^p/p-1)+ +∫_Ω_α|∇ψ|^p/p-q+1(|u-u̅_B_r|^p/p-q+1+|φ_1-φ_1_B_r|^p/p-q+1+|φ_2-φ_2_B_r|^p/p-q+1)+∫_Ω_α|f|^p/p-1|ψ|^p/p-1],where C=C(k,p,q,Ω)=2p/p-1C. So, the desired relation is proved. For completeness, let us show (<ref>). The proof is completely analogous.Let us consider w:=(φ_1∨ g)φ_2= φ_1, ifg<φ_1g, if φ_1≤ g≤φ_2 φ_2, ifg>φ_2 and v=u-ψ^p(u-w)=(1-ψ^p)u+ψ^pw. We have that v∈ W_g^1,p(Ω_α).Moreover, since φ_1≤ w≤φ_2 by definition, we get that φ_1≤ v≤φ_2. Then v∈ℋ_p.With this choice of v we have ∇(v-u)=∇(-ψ^pu+ψ^pw)=pψ^p-1∇ψ (-u+w)-ψ^p∇ u+ψ^p∇ w. Analogously to the proof of part (i), it hold that |∇ w|≤ |∇φ_1|+|∇ g|+|∇φ_2|. 
Then, substituting in (<ref>), we have:0≤∫_Ω_α(k^2+|∇ u|^2)^p-2/2∇ u·[pψ^p-1∇ψ (-u+w)-ψ^p∇ u+ψ^p∇ w]+ +∫_Ω_α(k^2+|∇ u|^2)^q-2/2∇ u·[pψ^p-1∇ψ (-u+w)-ψ^p∇ u+ψ^p∇ w]+ ∫_Ω_αfψ^p(u-w)≤≤-∫_Ω_α(k^2+|∇ u|^2)^p-2/2|∇ u|^2|ψ|^p+∫_Ω_α(k^2+|∇ u|^2)^p-2/2|∇ u|(|∇φ_1|+|∇ g|+|∇φ_2|)|ψ|^p+ +p∫_Ω_α(k^2+|∇ u|^2)^p-2/2|∇ u||∇ψ||ψ|^p-1|u-w|-∫_Ω_α(k^2+|∇ u|^2)^q-2/2|∇ u|^2|ψ|^p+ +∫_Ω_α(k^2+|∇ u|^2)^q-2/2|∇ u|(|∇φ_1|+|∇ g|+|∇φ_2|)|ψ|^p+ +p∫_Ω_α(k^2+|∇ u|^2)^q-2/2|∇ u||∇ψ||ψ|^p-1|u-w|+ +∫_Ω_α|fψ||ψ|^p-1|u-w|.So, with the same arguments of before, we have:0≤-∫_Ω_α|∇ u|^p|ψ|^p+C_k,p∫_Ω_α|∇ u|(|∇φ_1|+|∇ g|+|∇φ_2|)|ψ|^2+ +C_p∫_Ω_α|∇ u|^p-1(|∇φ_1|+|∇ g|+|∇φ_2|)|ψ|^p+ +pC_k,p∫_Ω_α|∇ u||∇ψ||ψ||u-w|+pC_p∫_Ω_α|∇ u|^p-1|∇ψ||ψ|^p-1|u-w|+ -∫_Ω_α|∇ u|^q|ψ|^p+C_k,q∫_Ω_α|∇ u|(|∇φ_1|+|∇ g|+|∇φ_2|)|ψ|^2+ +C_q∫_Ω_α|∇ u|^q-1(|∇φ_1|+|∇ g|+|∇φ_2|)|ψ|^q+ +pC_k,q∫_Ω_α|∇ u||∇ψ||ψ||u-w|+pC_q∫_Ω_α|∇ u|^q-1|∇ψ||ψ|^q-1|u-w|+∫_Ω_α|fψ||ψ||u-w|≤≤-∫_Ω_α|∇ u|^p|ψ|^p+σ/p∫_Ω_α|∇ u|^p|ψ|^p+C_k,p(p-1)σ^-1/p-1/p∫_Ω_α(|∇φ_1|^p/p-1+|∇ g|^p/p-1+|∇φ_2|^p/p-1)|ψ|^p/p-1+ +σ_1(p-1)/p∫_Ω_α|∇ u|^p|ψ|^p+σ_1^-(p-1)/pC_p∫_Ω_α(|∇φ_1|^p+|∇ g|^p+|∇φ_2|^p)|ψ|^p+σ_2∫_Ω_α|∇ u|^p|ψ|^p+ +C_k,p(p-1)σ_2^-1/p-1∫_Ω_α|∇ψ|^p/p-1|u-w|^p/p-1+σ_3(p-1)∫_Ω_α|∇ u|^p|ψ|^p+C_pσ_3^-(p-1)∫_Ω_α|∇ψ|^p|u-w|^p+ -∫_Ω_α|∇ u|^q|ψ|^q+σ_4/p∫_Ω_α|∇ u|^p|ψ|^p+C_k,p,q(p-1)σ_4^-1/p-1/p∫_Ω_α(|∇φ_1|^p/p-1+|∇ g|^p/p-1+|∇φ_2|^p/p-1)|ψ|^p/p-1 +σ_5(q-1)/p∫_Ω_α|∇ u|^p|ψ|^p+σ_5^-q-1/p-q+1C_p,q(p-q+1)/p∫_Ω_α(|∇φ_1|^p/p-q+1+|∇ g|^p/p-q+1+|∇φ_2|^p/p-q+1)|ψ|^p/p-q+1+ +σ_6∫_Ω_α|∇ u|^p|ψ|^p+C_k,p,q(p-1)σ_6^-1/p-1∫_Ω_α|∇ψ|^p/p-1|u-w|^p/p-1+σ_7(q-1)∫_Ω_α|∇ u|^p|ψ|^p+ +σ_7^-q-1/p-q+1C_p,q(p-q+1)∫_Ω_α|∇ψ|^p/p-q+1|u-w|^p/p-q+1+||fψ||_p'||ψ(u-w)||_p.Now, using Poincaré's inequality in the last term of (<ref>), we have:||fψ||_p'||ψ(u-w)||_p≤||fψ||_p'd||∇ψ(u-w)+ψ∇ u-ψ∇ w||_p≤ ≤ d||fψ||_p'(||∇ψ(u-w)||_p+||ψ∇ u||_p+||ψ∇ w||_p).Applying Young's inequality, term by term, with conjugate exponents p and p/p-1, we obtain:||fψ||_p'||ψ(u-w)||_p≤σ_8/p||ψ∇ u||^p_p+σ_8/p||ψ∇ w||^p_p+σ_8/p||∇ψ(u-w)||^p_p+d^p/p-1σ_8^-1/p-1(p-1)/p||fψ||^p'_p'.Putting together (<ref>) and (<ref>), and with the same choice of the parameters σ and σ_i, i=1…8, we get∫_Ω_α|ψ|^p|∇ u|≤ C[∫_Ω_α|ψ|^p(|∇φ_1|^p +|∇ g|^p+|∇φ_2|^p)+∫_Ω_α|ψ|^p/p-1(|∇φ_1|^p/p-1+|∇ g|^p/p-1+|∇φ_2|^p/p-1)+ +∫_Ω_α|ψ|^p/p-q+1(|∇φ_1|^p/p-q+1+|∇ g|^p/p-q+1+|∇φ_2|^p/p-q+1)+∫_Ω_α|∇ψ|^p|u-w|^p+∫_Ω_α|∇ψ|^p/p-1|u-w|^p/p-1+ +∫_Ω_α|∇ψ|^p/p-q+1|u-w|^p/p-q+1+∫_Ω_α|f|^p/p-1|ψ|^p/p-1],where C=C(k,p,q,Ω)>0.§.§ Integrability of the gradientNow, thanks to the technical lemmas just seen, it is possible to prove the following summability result.Let f∈ L^p'(Ω_α), φ_1,φ_2∈ W^1,p(Ω_α), g∈ W^1,∞(Ω_α) and u solution to Problem (<ref>). Then there exists δ>0, depending only on p , q and the p-thickness constant of ^2∖Ω_α such that |∇ u|∈ L^p+ε(Ω_α), ∀ε∈(0,δ).Since Ω_α is bounded, we can consider a ball B_2a, , such that Ω_α⊂⊂ B_a. Then, let us fix r>0 and let us consider a ball B_2r⊂ B_2a.Two cases are possible:(i) B_2r⊂Ω_α;(ii) B_2r∩(^2∖Ω_α)≠∅.Case (i): Let us consider ψ∈ C_0^∞(B_2r), with 0≤ψ≤1, |∇ψ|≤4/r, and ψ=1 in B_r. 
Then, by relation (<ref>) of Lemma <ref> we have:∫_B_r|∇ u|≤ C[∫_B_2r (|∇φ_1|^p+|∇φ_2|^p+|∇φ_1|^p/p-1+|∇φ_2|^p/p-1+|∇φ_1|^p/p-q+1+|∇φ_2|^p/p-q+1)+ +2^2pr^-p∫_B_2r(|u-u̅_B_r|^p+|φ_1-φ_1_B_r|^p+|φ_2-φ_2_B_r|^p)+2^2p/p-1r^-p/p-1∫_B_2r(|u-u̅_B_r|^p/p-1+|φ_1-φ_1_B_r|^p/p-1+|φ_2-φ_2_B_r|^p/p-1)+ +2^2p/p-q+1r^-p/p-q+1∫_B_2r(|u-u̅_B_r|^p/p-q+1+|φ_1-φ_1_B_r|^p/p-q+1+|φ_2-φ_2_B_r|^p/p-q+1)+∫_B_2r|f|^p/p-1].Hence, considering p=2t/2-t, that is t=2p/p+2,by using relation (<ref>) of Lemma <ref>, Hölder' inequality and Poincaré's inequality, by previous relation (<ref>), we have:_B_r|∇ u|^p≤ C[_B_2r (|∇φ_1|^p+|∇φ_2|^p+|∇φ_1|^p/p-1+|∇φ_2|^p/p-1+|∇φ_1|^p/p-q+1+|∇φ_2|^p/p-q+1)+ +2^2pr^-p_B_2r(|u-u̅_B_r|^p+|φ_1-φ_1_B_r|^p+|φ_2-φ_2_B_r|^p)+2^2p/p-1r^-p/p-1_B_2r(|u-u̅_B_r|^p/p-1+|φ_1-φ_1_B_r|^p/p-1+|φ_2-φ_2_B_r|^p/p-1)+ +2^2p/p-q+1r^-p/p-q+1_B_2r(|u-u̅_B_r|^p/p-q+1+|φ_1-φ_1_B_r|^p/p-q+1+|φ_2-φ_2_B_r|^p/p-q+1)+_B_2r|f|^p/p-1]≤≤ C_B_2r (|∇φ_1|^p+|∇φ_2|^p+|∇φ_1|^p/p-1+|∇φ_2|^p/p-1+|∇φ_1|^p/p-q+1+|∇φ_2|^p/p-q+1)+ +C2^2pr^-pr^p(_B_2r|∇ u|^t)^p/t+C2^2p/p-1(r^-p_B_2r|u-u̅_B_r|^p)^1/p-1+C2^2p/p-q+1(r^-p_B_2r|u-u̅_B_r|^p)^1/p-q+1+ +C2^2pr^-pr^p_B_2r|∇φ_1|^p+C2^2pr^-pr^p_B_2r|∇φ_2|^p+C2^2p/p-1r^-p/p-1r^p/p-1_B_2r|∇φ_1|^p/p-1+C2^2p/p-1r^-p/p-1r^p/p-1_B_2r|∇φ_2|^p/p-1+ +C2^2p/p-q+1r^-p/p-q+1r^p/p-q+1_B_2r|∇φ_1|^p/p-q+1+C2^2p/p-q+1r^-p/p-q+1r^p/p-q+1_B_2r|∇φ_2|^p/p-q+1+C+_B_2r|f|^p/p-1≤≤ C(_B_2r|∇ u|^p(1-ε_1))^1/1-ε_1+C[(_B_2r|∇ u|^p(1-ε_1))^1/1-ε_1]^1/p-1+C[(_B_2r|∇ u|^p(1-ε_1))^1/1-ε_1]^1/p-q+1+ + C_B_2r (|∇φ_1|^p+|∇φ_2|^p+|∇φ_1|^p/p-1+|∇φ_2|^p/p-1+|∇φ_1|^p/p-q+1+|∇φ_2|^p/p-q+1+|f|^p/p-1)with 0<ε_1≤p/p+2.Since, by Young's inequality, we have that a^1/p≤a/p^1/p+p-1/p≤ a+1,∀ a≥0and ∀ p≥1,we get_B_r|∇ u|^p≤ C(_B_2r|∇ u|^p(1-ε_1))^1/1-ε_1+_B_2r[C^1/p (|∇φ_1|+|∇φ_2|+|∇φ_1|^1/p-1+|∇φ_2|^1/p-1+|∇φ_1|^1/p-q+1+|∇φ_2|^1/p-q+1+|f|^1/p-1+2^1/p)]^p. Case (ii): First of all, let Ext_J(g) be the extension of g on ^2, which is possible since Ω_α is a (ϵ,δ)-domain (see <cit.> and <cit.>, f.i.). So, we extend u, w and f on ^2 setting that they are equal to Ext_J(g) in ^2∖Ω_α. Let us consider ψ as in case (i) and let D=B_2r∩Ω_α be. By relation (<ref>) of Lemma <ref>, we have:∫_D|ψ|^p|∇ u|≤ C[∫_D|ψ|^p(|∇φ_1|^p +|∇ g|^p+|∇φ_2|^p)+∫_D|ψ|^p/p-1(|∇φ_1|^p/p-1+|∇ g|^p/p-1+|∇φ_2|^p/p-1)+ +∫_D|ψ|^p/p-q+1(|∇φ_1|^p/p-q+1+|∇ g|^p/p-q+1+|∇φ_2|^p/p-q+1)+∫_B_2r|∇ψ|^p|u-w|^p+∫_B_2r|∇ψ|^p/p-1|u-w|^p/p-1 +∫_B_2r|∇ψ|^p/p-q+1|u-w|^p/p-q+1+∫_D|f|^p/p-1|ψ|^p/p-1].Let us consider q=p(1-ε_2), with0<ε_2≤min{p-2/2,1/2}. So, definingα=2/2-q,if1<q<22,ifq≥2 , we have α q≥ p and by lemma <ref> we get:(_B_2r|u-w|^p|∇ψ|^p)^1/p≤4r^-1(_B_2r|u-w|^p)^1/p≤ ≤ 4r^-1(_B_2r|u-w|^α q)^1/α q≤ C(r^2-q/_q(N(u-g);B_4r)_B_2r|∇(u-w)|^q)^1/q,with N(u-w)={x∈ B_2r : u=w}; in ^2∖Ω_α we have u=w=Ext(g). 
We point out that w=g on ∂Ω_α, since φ_1≤ g≤φ_2 on ∂Ω_α.Since p>2⟹ q>1, so by Remark <ref>, we have that_q(N(u-w);B_4r)≥_q(B_2r∖Ω_α;B_4r)≥ Cr^2-q.Since, by Hölder's inequality, we have that_B_2r|u-w|^p/p-1|∇ψ|^p/p-1≤(_B_2r|u-w|^p|∇ψ|^p)^1/p-1;_B_2r|u-w|^p/p-q+1|∇ψ|^p/p-q+1≤(_B_2r|u-w|^p|∇ψ|^p)^1/p-q+1,then, by relations (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>), we get:r^-2∫_D|ψ|^p|∇ u|≤ ≤ C{r^-2∫_D|ψ|^p(|∇φ_1|^p +|∇ g|^p+|∇φ_2|^p)+r^-2∫_D|ψ|^p/p-1(|∇φ_1|^p/p-1+|∇ g|^p/p-1+|∇φ_2|^p/p-1)+ +r^-2∫_D|ψ|^p/p-q+1(|∇φ_1|^p/p-q+1+|∇ g|^p/p-q+1+|∇φ_2|^p/p-q+1)+(_B_2r|∇(u-w)|^q)^p/q+[(_B_2r|∇(u-w)|^q)^p/q]^1/p-1 +[(_B_2r|∇(u-w)|^q)^p/q]^1/p-q+1+r^-2∫_D|f|^p/p-1|ψ|^p/p-1}≤ ≤ C{r^-2∫_D|ψ|^p(|∇φ_1|^p +|∇ g|^p+|∇φ_2|^p)+r^-2∫_D|ψ|^p/p-1(|∇φ_1|^p/p-1+|∇ g|^p/p-1+|∇φ_2|^p/p-1)+ +r^-2∫_D|ψ|^p/p-q+1(|∇φ_1|^p/p-q+1+|∇ g|^p/p-q+1+|∇φ_2|^p/p-q+1)+3(_B_2r|∇(u-w)|^q)^p/q+2+r^-2∫_D|f|^p/p-1|ψ|^p/p-1}.Observing that |D|≤|B_2r|=4π r^2 and(_B_2r|∇(u-w)|^q)^p/q= C(r^-2∫_D|∇(u-w)|^q)^p/q≤ C(r^-2∫_D|∇ u|^q)^p/q+C(r^-2∫_D|∇ w|^q)^p/q≤ ≤ C(r^-2∫_D|∇ u|^q)^p/q+Cr^-2∫_D|∇ w|^p≤ C(r^-2∫_D|∇ u|^q)^p/q+r^-2C∫_D(|∇φ_1|^p+|∇ g|^p+|∇φ_2|^p),finally, we getr^-2∫_B_r∩Ω_α|∇ u|^p≤ C(r^-2∫_D|∇ u|^p(1-ε_2))^1/1-ε_2+ +r^-2∫_D[C^1/p (|∇φ_1|+|∇φ_2|+|∇φ_1|^1/p-1+|∇φ_2|^1/p-1+|∇φ_1|^1/p-q+1+|∇φ_2|^1/p-q+1+|f|^1/p-1+2^1/p)]^p.Now, let g(x)= |∇ u|^p(1-ε), x∈Ω_α0, x∈^2∖Ω_α, h(x)=[C^1/p (|∇φ_1|+|∇φ_2|+|∇φ_1|^1/p-1+|∇φ_2|^1/p-1+|∇φ_1|^1/p-q+1+|∇φ_2|^1/p-q+1+|f|^1/p-1+2^1/p)]^p(1-ε), x∈Ω_α0, x∈^2∖Ω_αand s=1/1-ε, with ε=min{ε_1,ε_2} such that (<ref>) and (<ref>) hold.Thus, we get the following_B_rg^s≤ C(_B_2rg)^s+_B_2rh^sdx,for any B_2r⊂ B_2a. Hence, by lemma <ref> we have the thesis and the proof is over.§ ASYMPTOTICS AS P→∞ AND N FIXEDIn this section we focus our attention on the asymptotic behavior of the solutions. In particular, we prove an analogous result to the one presented in <cit.> for the homogeneous case without obstacles (see <cit.>, <cit.>, <cit.>, <cit.> and <cit.>for the p-Laplacian case).Let us assume f∈ L^1(Ω_α), φ_1,φ_2∈ C(Ω_α), g∈ W^1,∞(Ω_α) (with Lipschitz constant L),ℋ={v∈ W_g^1,∞(Ω_α): φ_1≤ v≤φ_2 in Ω_α, ||∇ v||_∞≤max{1,√(L^2+k^2)}}≠∅ and u_p,q the solution to Problem (<ref>).Then for any subsequence u_p_k,q there exists a subsubsequence, still denoted with u_p_k,q, such that, as k→∞, u_p_k,q→ u_∞,q uniformly in C(Ω_α) and weakly in W^1,t(Ω_α)where the limit u_∞,qbelongs to W^1,t(Ω_α) and verifies||∇ u_∞,q||_∞≤max{1,√(L^2+k^2)}. Moreover, if L^2+k^2≤1, then u_∞,q is the unique solution to the following variational problemmin_v∈ℋJ_q(v),withJ_q(v)=1/q∫_Ω_α(k^2+|∇ v|^2)^q/2-∫_Ω_αfv𝒫_qandif L^2+k^2>1 thenu_∞,qis a minimal Lipschitz extension, that is, u_∞,qis a solution tomin_v∈ℋ||∇ v||_∞. 𝒫_q.L Let w∈ℋ and u_p,q solution to Problem (<ref>). By the equivalence between this problem and Problem (<ref>), we have1/p||∇ u_p,q||^p_p≤1/p∫_Ω_α(k^2+|∇ u_p,q|^2)^p/2+1/q∫_Ω_α(k^2+|∇ u_p,q|^2)^q/2≤ 1/p∫_Ω_α(k^2+|∇ v|^2)^p/2+1/q∫_Ω_α(k^2+|∇ v|^2)^q/2+∫_Ω_αfu_p,q-∫_Ω_αfv≤ ≤ |Ω_α|[(k^2+L^2)^p/2/p+(k^2+L^2)^q/2/q]+C(f,φ_1,φ_2).So||∇ u_p,q||_p≤ p^1/p{|Ω_α|[(k^2+L^2)^p/2/p+(k^2+L^2)^q/2/q]+C}^1/p,that is {u_p,q}_p>q is bounded in W^1,p(Ω_α). Moreover, we getlim sup_p→∞||∇ u_p,q||_p≤max{1,√(L^2+k^2)}.Now, for any p>t>q, by Hölder's inequality, it holds||∇ u_p,q||_t≤|Ω_α|^1/t-1/p||∇ u_p,q||_p.Then, we obtainlim sup_p→∞||∇ u_p,q||_t≤|Ω_α|^1/tmax{1,√(L^2+k^2)}.Hence, by Ascoli-Arzelà compactness criterion there exist a subsequence {u_p_k,q}_k∈ converging to u_∞,q weakly in W^1,t(Ω_α) and uniformly in Ω_α. 
Thus, we get||∇ u_∞,q||_∞≤lim_t→∞||∇ u_∞,q||_t≤lim_t→∞lim inf_p→∞||∇ u_p,q||_t≤max{1,√(L^2+k^2)}, that is u_∞,q∈ℋ.Finally, if k^2+L^2≤1, for any v∈ℋ, we obtain1/q∫_Ω_α(k^2+|∇ u_p,q|^2)^p/2-∫_Ω_αfu_p,q≤1/p∫_Ω_α(k^2+|∇ v|^2)^p/2+ +1/q∫_Ω_α(k^2+|∇ v|^2)^q/2-∫_Ω_αfv≤ ≤|Ω_α|/p+1/q∫_Ω_α(k^2+|∇ v|^2)^q/2-∫_Ω_αfv.Hence, passing to the limit as p_k→∞, p_k subsequence of p, we get that u_∞,q solvesmin_v∈ℋJ_q(v),withJ_q(v)=1/q∫_Ω_α(k^2+|∇ v|^2)^q/2-∫_Ω_αfv.In the case k^2+L^2>1, by the fact that ||∇ u_∞,q||_∞≤√(k^2+L^2), it follows that u_∞,q solvesmin_v∈ℋ||∇ v||_∞.Until now, we have considered the problem in the setting of fractal boundary domain Ω_α. However, it is possible to consider the corresponding approximating problems, that is the problems on the pre-fractal approximating domains Ω_α^n.We point out that the introduction of the following Problems (<ref>), besides being interesting in itself, is justified also from the possibility to perform on it numerical analysis.Let p>q be, with q∈[2,∞) fixed as before. Given f_n∈ L^1(Ω_α), g∈ W^1,∞(Ω_α) and φ_1,n,φ_2,n∈ C(Ω_α), let us introduce the following problems: findu_p,q,n∈ℋ_p,n such that a_p,n(u_p,q,n,v-u_p,q,n)+a_q,n(u_p,q,n,v-u_p,q,n)-∫_Ω^n_αf_n(v-u_p,q,n)⩾ 0,∀ v∈ℋ_p,n, 𝒫_p,q,nwherea_p,n(u,v)=∫_Ω^n_α(k^2+|∇ u|^2)^p-2/2∇ u∇ vand ℋ_p,n={v∈ W_g^1,p(Ω^n_α): φ_1,n≤ v≤φ_2,n in Ω^n_α}is assumed to be non-empty.Thanks to the equivalence between this problem and the analogous of Problem (<ref>) and Proposition <ref>, adapted to Ω_α^n, we have existence anduniqueness of the solution for the approximating Problem (<ref>).For this problem, the following analogous result to Theorem <ref> holds.Let us assume f_n∈ L^1(Ω_α), φ_1,n,φ_2,n∈ C(Ω_α),g∈ W^1,∞(Ω_α) (with Lipschitz constant L),ℋ_n={v∈ W_g^1,∞(Ω^n_α): φ_1,n≤ v≤φ_2,n in Ω^n_α, ||∇ v||_∞,Ω^n_α≤max{1,√(L^2+k^2)}}≠∅ and u_p,q,n the solution to Problem (<ref>).Then for any subsequence u_p_k,q,n there exists a subsubsequence, still denoted with u_p_k,q,n, such that, as k→∞, u_p_k,q,n→ u_∞,q,n uniformly in C(Ω^n_α) and weakly in W^1,t(Ω^n_α), being u_∞,q,nsolution of min_v∈ℋ_nJ_q(v),withJ_q(v)=1/q∫_Ω^n_α(k^2+|∇ v|^2)^q/2-∫_Ω^n_αfv,ifk^2+L^2≤1,𝒫_q,n min_v∈ℋ_n||∇ v||_∞,Ω^n_α,ifk^2+L^2>1𝒫_q,L,n § ASYMPTOTICS AS N→∞ AND P FIXEDAs recalled is Section<ref>, the sets Ω_α^n give at the limit Ω_α. Then, it makes sense to ask whether the solutions to the approximating problems converge in some sense to a solution of the corresponding problem on Ω_α.As far as we know, this double study on convergence, that is the analysis of the behavior with respect to n as well as on p, was done for first in <cit.>, and then in <cit.>. Nevertheless, the study of the asymptotic behavior with respect to n was done by many authors for different problems (see, f.i., <cit.>, <cit.>, <cit.> and <cit.>).In order to prove the following result, let us consider u_p,q,n solutions of Problems (<ref>) and defineũ_p,q,n(x):= u_p,q,n(x), x∈Ω^n_αg(x), x∈Ω_α∖Ω^n_α .Let f_n,f∈ L^p'(Ω_α), g∈ W^1,∞(Ω_α), φ_i,n,φ_i∈ W^1,p(Ω_α), for i=1,2. Moreover let us assume ℋ_p,n≠∅, ℋ_p≠∅ and, as n→∞,f_n→ finL^1(Ω_α)andφ_i,n→φ_i, i=1,2, inW^1,p(Ω_α). 
Then the sequence ũ_p,q,n defined in (<ref>) strongly converge, as n→∞, in W^1,p(Ω_α) to the solution to Problem (<ref>).Before the proof we need some preliminary results.Let us recall how to construct a suitable array of fibers Σ^n around the boundary of Ω_α^n (see, for instance, <cit.> and <cit.>).To show how this construction works, we start considering the open triangle of verticesA(0,0), B(1,0) and C(1/2,-√(3)/2).Denoting with T^+_0the open triangle of verticesA(0,0), B(1,0) and D^+(1/2,δ_+/2), with δ_+= tan(ϑ/2) and ϑ the rotation angle defined in (<ref>), we have that T^+_0 satisfies the open set condition with respect to the family of maps Ψ_α; that is ψ_i|n,α(T^+_0)⊂ T^+_0 for every i|n and ψ_i|n,α(T^+_0)∩ψ_j|n,α(T^+_0)=∅ for every i|n≠ j|n. Furthermore, with T^-_0 we denotethe open triangle of vertices A(0,0), B(1,0) and S^-(1/2,-δ_-/2), where δ_-=tan(ϑ^-), with 0<ϑ^-≤min{π/2-ϑ,ϑ/2}. So, we obtain the fiber Σ^0_1 corresponding to the side AB setting Σ_1^0=T^+_0⋃ T^-_0⋃ K^0. Now, applying the maps ψ_i|n=ψ_i_1∘ψ_i_2∘⋯∘ψ_i_n, for any integer n>0, Σ_1^0 is iteratively transformed into increasingly fine arrays. In particular, for every n≥ 1, we set Σ^n_1= Σ^n_1,+⋃Σ^n_1,-⋃ K^n withΣ^n_1,+=⋃_i|nΣ_1,+^i|n,Σ_1,+^i|n=ψ_i|n(T^+_0), Σ^n_1,-=⋃_i|nΣ_1,-^i|n,Σ_1,-^i|n=ψ_i|n(T^-_0) . Denoting by Σ^n_2,+, Σ^n_3,+, Σ^n_2,- andΣ^n_3,- the corresponding arrays of fibers obtained applying the same procedure to the others sides of the starting domain, we haveΣ^n=⋃_j=1,2,3Σ^n_j,Σ_+^n= ⋃_j=1,2,3Σ^n_j,+,Σ_-^n=⋃_j=1,2,3Σ^n_j,-. Hence, we define the setsΩ̂_α^n= int(Ω_α^n⋃Σ^n_+)andΩ̆_α^n=Ω_α^n∖Σ^n_-.In particular, we observe that for these sets it holds thatΩ̆_α^n ⊂Ω_α^n⊂Ω̂_α^n, Ω̂_α^n+1⊂Ω̂_α^n and Ω̆_α^n⊂Ω̆_α^n+1. Figure <ref> shows first iterations of the procedure just described.Let us introduce a suitable function, which plays the role of coefficient of a convex combination. It allows us to construct an appropriate sequence of functions.For every n∈, given P(x_1,x_2)∈Σ^n_-, we define P_⊥(x_1^⊥,x_2^⊥)∈∂Ω^n_α as the orthogonal projection of (x_1,x_2) on ∂Ω^n_α. Then, with P_-(x_1^-,x_2^-) we indicate the intersection of the straight line passing through (x_1,x_2) and (x_1^⊥,x_2^⊥) with ∂Σ^n_-∖ K^n, where the symbol - indicates the inner intersection. Hence, we defineλ_n(x)=1,x∈Ω̆_α^n |x_1^⊥-x_1|+|x_2^⊥-x_2|/|x_1^⊥-x_1^-|+|x_2^⊥-x_2^-|,x∈Σ^n_-0,x∈Ω_α∖Ω_α^nwith x(x_1,x_2).Now, let us state and prove a result which will play a central role in the analysis of the asymptotic behavior, as n→∞. Let u be in W_0^1,r(Ω_α), r>2. Then, the function w_n(x)=λ_n(x)· u(x), where λ_n(x) is defined in (<ref>), has the following properties: (i) w_n(x)∈ W_0^1,s(Ω_α), ∀ 2<s<r; (ii) ||w_n||_1,s,Ω_α≤ C,withCindependent onn; (iii) w_n→ uinW_0^1,s(Ω_α),asn→∞. To prove (i) and (ii), let us consider 2<s<p and ||w_n||_1,s,Ω_α=||w_n||^s_1,s.||w_n||^s_1,s= ||w_n||^s_s+||∇ w_n||^s_s≤ ||u||^s_s+||(∇λ_n)u+λ_n∇ u||^s_s≤ ≤ ||u||^s_s+2^s-1(||(∇λ_n)u||_s^s+||λ_n∇ u||^s_s)≤ ||u||^s_s+2^s-1||∇ u||^s_s+2^s-1||(∇λ_n)u||_s^s≤ ≤ 2^s-1(||u||^s_1,s+∫_Ω_α|(∇λ_n)u|^s).By definition of λ_n, we get:||w_n||^s_1,s≤2^s-1(||u||^s_1,s+∫_Σ_-^n|(∇λ_n)u|^s)=2^s-1(||u||^s_1,s+∑_i=1^3·4^n∫_T^n_i|(∇λ_n)u|^s),were T^n_i indicate the i-th triangle of the internal fiber (see the definition of Σ^i|n_1,-).Now, let us focus our attention on an half-fiber triangle (that we indicate with T_n) having a vertex on the point A(0,0) and a side on the abscissa axis (see Figure <ref>). 
By rotation and translation, the conclusions hold also for the other verteces of Ω^n_α.In our model case λ_n(x) and T_n have the following forms:λ_n(x)=x_2/x_1a, x∈Σ^n_-, with a=tanθ^-, andT_n={(x_1,x_2)∈^2 :0≤ x_1≤1/2·3^n, 0≤ x_2≤ ax_1}.By (<ref>) we have∇λ_n=(-x_2/ax_1^2,1/ax_1)and then|∇λ_n|=√(x_2^2/a^2x_1^4+1/a^2x_1^2)=√(x_2^2+x_1^2/a^2x_1^4)=1/|x_1|√(x_2^2/a^2x_1^2+1/a^2)≤1/|x_1|√(1+1/a^2)Moreover, since u(A)=0, applying Morrey's inequality, we obtain that:|u(x)|=|u(x)-u(A)|≤ C||∇ u||_r,T_n(x_1^2+x_2^2)^1/2-1/r≤ C||∇ u||_r,T_n(x_1^2+a^2x_1^2)^1/2-1/r= C||∇ u||_r,T_n(1+a^2)^1/2-1/r|x_1|^1-2/r.So, by (<ref>), (<ref>) and non-negativity of x_1, we get∫_T^n|(∇λ_n)u|^sdx≤ C_a||∇ u||^s_r,T_n∫_0^1/2· 3^n(∫_0^ax_1x_1^-2s/r_2)_1=aC_a||∇ u||^s_r,T_n∫_0^1/2· 3^nx_1^1-2s/r_1= =aC_a/2-2s/r||∇ u||^s_r,T_n(1/2· 3^n)^2-2s/r=C_a||∇ u||_r,T_n^s· (3^n)^-2+2s/r,with C_a=C/(2-2s/r)·2^2-2s/ra(1+1/a^2)^s/2(1+a^2)^s/2-s/r.Thus, putting together (<ref>) and (<ref>), we obtain||w_n||^s_1,s≤2^s-1||u||^s_1,s+C(3^n)^-2+2s/r∑_i=1^3·4^n(∫_T^n_i|∇ u|^r)^s/r,with C=2^s-1C_a.Now, applyingHölder inequality for sums, with conjugate exponents r/s and r/r-s, we have||w_n||^s_1,s≤2^s-1||u||^s_1,s+C(3^n)^-2+2s/r(∑_i=1^3·4^n∫_T^n_i|∇ u|^r)^s/r· (3·4^n)^1-s/r= =2^s-1||u||^s_1,s+3^1-s/rC(3^n)^-2+2s/r(∑_i=1^3·4^n∫_T^n_i|∇ u|^r)^s/r· (3^n)^d_f(1-s/r) =2^s-1||u||^s_1,s+3^1-s/rC(∫_Σ_-^n|∇ u|^r)^s/r· (3^n)^-2+2s/r+d_f-s/rd_f.By observing the second term in the last member of the previous chain, we have that it goes to 0, as n→∞; in fact:∫_Σ_-^n|∇ u|^rdx→0,asn→∞,since |Σ_-^n|→0and -2+2s/r+d_f-s/rd_f=-2(1-s/r)+d_f(1-s/r)=-(2-d_f)(1-s/r)<0.Thus w_n∈ W_0^1,s(Ω_α), ∀ 1<s<r and ||u||_1,s,Ω_α≤ C, with C independent on n. To show (iii), we have that||w_n-u||^s_1,s,Ω_α=||w_n-u||^s_1,s≤ C||∇(w_n-u)||^s_s=C∫_Ω_α|∇(w_n-u)|^s= =C∫_Σ_-^n|∇(λ_nu-u)|^s+C∫_Ω_α∖Ω_α^n|-∇ u|^s→0, as n→∞,since |Ω_α∖Ω_α^n|→0, as n→∞. It is possible to prove that:(i) if u∈ C_0(Ω_α), then w_n∈ C_0(Ω_α);(ii) if u∈ W^1,∞_0(Ω_α), then w_n∈ W^1,∞_0(Ω_α).Moreover, we observe that w_n in (ii) has a (possibly) different Lipschitz constant with respect to the one of u and it is again independent on n.Let u be in W_g^1,r(Ω_α), r>2, with g∈ W^1,∞(Ω_α). Then, the function z_n(x)=λ_n(x)· u(x)+(1-λ_n(x))g(x), where λ_n(x) is defined in (<ref>), has the following properties: (i) z_n(x)∈ W_g^1,s(Ω_α), ∀ 2<s<r; (ii) ||z_n||_1,s,Ω_α≤ C,withCindependent onn; (iii) z_n→ uinW^1,s(Ω_α),asn→∞. By its definition, we can write z_n(x)=g(x)+λ_n(x)v(x), with v(x)=u(x)-g(x), for each x∈Ω_α.Since v(x)∈ W^1,p_0(Ω_α), we get our thesis applying Theorem <ref>. It is possible to prove that:(i) if u∈ C_g(Ω_α), then z_n∈ C_g(Ω_α);(ii) if u∈ W^1,∞_g(Ω_α), then z_n∈ W^1,∞_g(Ω_α).Moreover, we observe that z_n in (ii) has a (possibly) different Lipschitz constant with respect to the sum of the ones of u and g and it is again independent on n.By using the previous result, we now prove Theorem <ref>.Applying the same procedure of Theorem 3.1 in <cit.> (for instance) we get that the sequence {ũ_p,q,n}_n∈ is bounded in W_g^1,p(Ω_α). Then, there exists v∈ W_g^1,p(Ω_α) and a subsequence of ũ_p,q,n, that we denote again with ũ_p,q,n, such that ũ_p,q,n→ v weakly in W_g^1,p(Ω_α). 
So, we haveJ_p,q(v)=1/p∫_Ω_α(k^2+|∇ v|^2)^p/2+1/q∫_Ω_α(k^2+|∇ v|^2)^q/2-∫_Ω_αfv≤ ≤lim inf_n→∞(1/p∫_Ω_α(k^2+|∇ũ_p,q,n|^2)^p/2+1/q∫_Ω_α(k^2+|∇ũ_p,q,n|^2)^q/2-∫_Ω_αf_nũ_p,q,n)= =lim inf_n→∞(1/p∫_Ω^n_α(k^2+|∇ u_p,q,n|^2)^p/2+1/q∫_Ω^n_α(k^2+|∇ u_p,q,n|^2)^q/2-∫_Ω^n_αf_nu_p,q,n+ +1/p∫_Ω_α∖Ω^n_α(k^2+|∇ g|^2)^p/2+1/q∫_Ω_α∖Ω^n_α(k^2+|∇ g|^2)^q/2-∫_Ω_α∖Ω^n_αf_ng)≤ ≤lim sup_n→∞(1/p∫_Ω^n_α(k^2+|∇ u_p,q,n|^2)^p/2+1/q∫_Ω^n_α(k^2+|∇ u_p,q,n|^2)^q/2-∫_Ω^n_αf_nu_p,q,n+ +1/p∫_Ω_α∖Ω^n_α(k^2+|∇ g|^2)^p/2+1/q∫_Ω_α∖Ω^n_α(k^2+|∇ g|^2)^q/2-∫_Ω_α∖Ω^n_αf_ng)≤ ≤lim sup_n→∞J_p,q,n(u_p,q,n)+ +lim sup_n→∞(1/p∫_Ω_α∖Ω^n_α(k^2+|∇ g|^2)^p/2+1/q∫_Ω_α∖Ω^n_α(k^2+|∇ g|^2)^q/2-∫_Ω_α∖Ω^n_αf_ng)= =lim sup_n→∞min_w∈ℋ_p,nJ_p,q,n(w)Since, u_p,q is the unique solution to Problem (<ref>), if we show that J_p,q(v)=J_p,q(u_p,q), we will get our thesis.Now, thanks to the Theorem <ref>, for x∈Ω_α, let us consider the functionsv_n(x)=(w_n(x)∨φ_1,n(x))φ_2,n(x),withw_n(x)=λ_n(x)u_p,q(x)+(1-λ_n(x))g(x) and let us show that: (a)v_n∈ℋ_p,n; (b)v_n→ u_p,q strongly inW^1,p(Ω_α). Let us prove (a).v_n∈ W_g^1,p(Ω^n_α) by Corollary <ref> and the fact that φ_1,n≤ g≤φ_2,n on ∂Ω_α^n. Finally, the fact that φ_1,n≤ v_n≤φ_2,n follows by the definition of v_n.Now, let us prove (b).It follows by (iii) of Corollary <ref>, the fact that φ_i,n→φ_i∈ W^1,p(Ω_α), for i=1,2, and the fact that φ_1≤ u_p,q≤φ_2 in Ω_α. Hence, we havelim sup_n→∞min_w∈ℋ_p,nJ_p,n(w)≤lim sup_n→∞J_p,q,n(v_n)=J_p,q(u_p,q).Thus, by (<ref>), (<ref>) and the fact that J_p(u_p,q)≤ J_p(v), we get that v=u_p,q and then the whole sequence ũ_p,q,n converge to u_p,q. Furthermore, we obtain thatJ(u_p,q)=lim_n→∞J_p,q,n(u_p,q,n)and the proof is over.We can also perform the asympotic analysis for u_∞,q,n solutions to Problems (<ref>) or (<ref>)when n goes to ∞. In this case the convergence result is achieved by considering theproblems separately: more precisely for thesolutions to Problems (<ref>) we can apply the convergence result ofTheorem 4.1 of<cit.> andfor thesolutions to Problems (<ref>) we can apply the convergence result ofTheorem 4.2 of<cit.>.Now, let us consider u_∞,q,n solutions to Problems (<ref>) or (<ref>)respectively and defineũ_∞,q,n(x):= u_∞,q,n(x), x∈Ω^n_αg(x), x∈Ω_α∖Ω^n_α . Let f_n,f∈ L^p'(Ω_α), g∈ W^1,∞(Ω_α), φ_i,n,φ_i∈ W^1,p(Ω_α), for i=1,2. Moreover let us assume ℋ_p,n≠∅, ℋ_p≠∅ and, as n→∞,f_n→ finL^1(Ω_α)andφ_i,n→φ_i, i=1,2, inW^1,p(Ω_α). Then ũ_∞,q,n(x) defined in (<ref>) admit a subsequence which *-weakly converge to solution to Problem(<ref>) or(<ref>) respectively.We briefly discuss about uniqueness. Beside being interesting in itself,it is a crucial issue in order to obtain the possibility to switch the order of the limits with respect to n and p. In <cit.>, uniqueness results for p-Laplacian unilateral problems are stated (see also <cit.>, <cit.>, <cit.> and the references quoted there for the problem of the uniqueness).In our situation, we point out that for the case L^2+k^2>1 the issue of the uniqueness is still an open problem both for the fractal and pre-fractal case. We point out that it is possible to extend the present result to other domains with prefractaland fractal boundarieslike, for example, quasi-filling fractal layers or random snowflakes (see <cit.> and the reference therein); the key tool is that the domains havegood extension" properties (see <cit.>).Moreover, itis possible to perform asymptotic analysis also in the so-called Sobolev admissible domains" (see <cit.>, <cit.>). 
We remark that it is also possible to consider these problems on fractal structures like, for example, the Sierpinski gasket, where a notion of infinity harmonic functions has been introduced recently (see <cit.>).

§.§ Acknowledgment

The corresponding author is a member of GNAMPA (INdAM) and is partially supported by "Ateneo Sapienza" Grants 2022.

References

[AST] Y. Achdou, C. Sabot, N. Tchou, Diffusion and propagation problems in some ramified domains with a fractal boundary. M2AN Math. Model. Numer. Anal. 40 (2006), no. 4, 623–652.
[ACJ] G. Aronsson, M. G. Crandall, P. Juutinen, A tour of the theory of absolutely minimizing functions. Bull. Amer. Math. Soc. (N.S.) 41 (2004), 439–505.
[BCM] P. Baroni, M. Colombo, G. Mingione, Regularity for general functionals with double phase. Calc. Var. Partial Differential Equations 57 (2018), no. 2, 1–48.
[BDM] T. Bhattacharya, E. DiBenedetto, J. Manfredi, Limits as p→+∞ of Δ_p u_p=f and related extremal problems. Some topics in nonlinear PDEs (Turin, 1989). Rend. Sem. Mat. Univ. Politec. Torino 1989, Special Issue, 15–68 (1991).
[BPP] D. Bonheure, P. d'Avenia, A. Pomponio, On the electrostatic Born–Infeld equation with extended charges. Comm. Math. Phys. 346 (2016), 877–906.
[BJ] D. Bonheure, J. D. Rossi, The behavior of solutions to an elliptic equation involving a p-Laplacian and q-Laplacian for large p. Nonlinear Analysis 150 (2017), 104–113.
[CAF] L. A. Caffarelli, The obstacle problem revisited. J. Fourier Anal. Appl. 4 (1998), no. 4-5, 383–402.
[CCV] F. Camilli, R. Capitanelli, M. A. Vivaldi, Absolutely minimizing Lipschitz extensions and infinity harmonic functions on the Sierpinski gasket. Nonlinear Anal. 163 (2017), 71–85.
[C] R. Capitanelli, Asymptotics for mixed Dirichlet–Robin problems in irregular domains. J. Math. Anal. Appl. 362 (2010), 450–459.
[CDO] R. Capitanelli, M. D'Ovidio, Fractional Cauchy problem on random snowflakes. J. Evol. Equ. 21 (2021), no. 2, 2123–2140.
[CF] R. Capitanelli, S. Fragapane, Asymptotics for quasilinear obstacle problems in bad domains. Discrete Contin. Dyn. Syst. Ser. S 12 (2019), no. 1, 43–56.
[CFV] R. Capitanelli, S. Fragapane, M. A. Vivaldi, Regularity results for p-Laplacians in pre-fractal domains. Adv. Nonlinear Anal. 8 (2019), no. 1, 1043–1056.
[CV3] R. Capitanelli, M. A. Vivaldi, Reinforcement problems for variational inequalities on fractal sets. Calc. Var. 54 (2015), 2751–2783.
[CV1] R. Capitanelli, M. A. Vivaldi, FEM for quasilinear obstacle problems in bad domains. ESAIM: M2AN 51 (2017), 2465–2485.
[CV2] R. Capitanelli, M. A. Vivaldi, Limit of p-Laplacian obstacle problems. Adv. Calc. Var. 15 (2022), no. 2, 265–286.
[CI] L. Cherfils, Y. Il'yasov, On the stationary solutions of generalized reaction diffusion equations with p&q-Laplacian. Commun. Pure Appl. Anal. 4 (2005), no. 1, 9–22.
[Dek] A. Dekkers, A. Rozanova-Pierrat, A. Teplyaev, Mixed boundary valued problems for linear and nonlinear wave equations in domains with fractal boundaries. Calc. Var. Partial Differential Equations 61 (2022), no. 2, Paper No. 75, 44 pp.
[DIA] J. I. Diaz, Nonlinear partial differential equations and free boundaries. Vol. I. Elliptic equations. Research Notes in Mathematics 106, Pitman, Boston, MA, 1985.
[FOP] D. Fortunato, L. Orsina, L. Pisani, Born–Infeld type equations for electrostatic fields. J. Math. Phys. 43 (2002), no. 11, 5698–5706.
[F] S. Fragapane, ∞-Laplacian obstacle problems in fractal domains. SEMA SIMAI Springer Series, 2021.
[FR] A. Friedman, Variational principles and free-boundary problems. John Wiley & Sons, Inc., New York, 1982.
[GM] L. Gongbao, O. Martio, Local and global integrability of gradients in obstacle problems. Ann. Acad. Sci. Fenn. Ser. A I Math. 19 (1994), 25–34.
[GM2] L. Gongbao, O. Martio, Stability and higher integrability of derivatives of solutions in double obstacle problems. J. Math. Anal. Appl. 272 (2002), 19–29.
[G] P. Grisvard, Elliptic Problems in Nonsmooth Domains. Monogr. Stud. Math. 24, Pitman, Boston, 1985.
[HU] J. E. Hutchinson, Fractals and self-similarity. Indiana Univ. Math. J. 30 (1981), no. 5, 713–747.
[IL] H. Ishii, P. Loreti, Limits of solutions of p-Laplace equations as p goes to infinity and related variational problems. SIAM J. Math. Anal. 37 (2005), no. 2, 411–437.
[HKM] J. Heinonen, T. Kilpeläinen, O. Martio, Nonlinear Potential Theory of Degenerate Elliptic Equations. Oxford University Press, Oxford, 1993.
[Hinz] M. Hinz, A. Rozanova-Pierrat, A. Teplyaev, Non-Lipschitz uniform domain shape optimization in linear acoustics. SIAM J. Control Optim. 59 (2021), no. 2, 1007–1032.
[K2] B. Kawohl, On a family of torsional creep problems. J. Reine Angew. Math. 410 (1990), 1–22.
[KK] T. Kilpeläinen, P. Koskela, Global integrability of the gradients of solutions to partial differential equations. Nonlinear Anal. 23 (1994), no. 7, 899–909.
[K] J. Kinnunen, Sobolev spaces. Department of Mathematics and Systems Analysis, Aalto University, 2020.
[J2] R. Jensen, Uniqueness of Lipschitz extensions: minimizing the sup norm of the gradient. Arch. Rational Mech. Anal. 123 (1993), 51–74.
[J] P. W. Jones, Quasiconformal mapping and extendability of functions in Sobolev spaces. Acta Math. 147 (1981), 71–88.
[LV] M. R. Lancia, M. A. Vivaldi, Asymptotic convergence of transmission energy forms. Adv. Math. Sci. Appl. 13 (2003), no. 1, 315–341.
[L] G. Leoni, A First Course in Sobolev Spaces. Graduate Studies in Mathematics 105, American Mathematical Society, 2009.
[LB] W. B. Liu, J. W. Barrett, Quasi-norm error bounds for the finite element approximation of some degenerate quasilinear elliptic equations and variational inequalities. RAIRO Modél. Math. Anal. Numér. 28 (1994), no. 6, 725–744.
[MAN] B. B. Mandelbrot, The Fractal Geometry of Nature. W. H. Freeman & Co, 1982.
[MAN2] B. B. Mandelbrot, Fractals and Scaling in Finance. Springer, 1997.
[MM] S. A. Marano, S. J. N. Mosconi, Some recent results on the Dirichlet problem for (p,q)-Laplace equations. Discrete Contin. Dyn. Syst. Ser. S 11 (2018), no. 3, 279–291.
[Mar] P. Marcellini, Regularity and existence of solutions of elliptic equations with p,q-growth conditions. J. Differential Equations 90 (1991), no. 1, 1–30.
[MRT] J. M. Mazón, J. D. Rossi, J. Toledo, Mass transport problems for the Euclidean distance obtained as limits of p-Laplacian type problems with obstacles. J. Differential Equations 256 (2014), 3208–3244.
[MV] U. Mosco, M. A. Vivaldi, Layered fractal fibers and potentials. J. Math. Pures Appl. (9) 103 (2015), no. 5, 1198–1227.
[T] G. M. Troianiello, Elliptic Differential Equations and Obstacle Problems. The University Series in Mathematics, Plenum Press, New York, 1987.
[V] C. Villani, Optimal Transport. Old and New. Grundlehren der Mathematischen Wissenschaften 338, Springer-Verlag, Berlin, 2009.
[Z] W. P. Ziemer, Weakly Differentiable Functions: Sobolev Spaces and Functions of Bounded Variation. Graduate Texts in Mathematics 120, Springer, New York, 1989.
[Z2] V. V. Zhikov, Averaging of functionals of the calculus of variations and elasticity theory. Izv. Akad. Nauk SSSR Ser. Mat. 50 (1986), no. 4, 675–710, 877.
Randomized Signature Methods in Optimal Portfolio Selection

January 14, 2024

Erdinç Akyildirim (Department of Mathematics, ETH, Zurich, Switzerland; Department of Banking and Finance, University of Zurich, Zurich, Switzerland), erdinc.akyildirim@bf.uzh.ch
Matteo Gambara (Department of Mathematics, ETH, Zurich, Switzerland), matteo.gambara@gmail.com
Josef Teichmann (Department of Mathematics, ETH, Zurich, Switzerland), josef.teichmann@math.ethz.ch
Syang Zhou (Department of Mathematics, ETH, Zurich, Switzerland), syang.zhou@math.ethz.ch

Abstract. We present convincing empirical results on the application of Randomized Signature Methods for non-linear, non-parametric drift estimation for a multi-variate financial market. Even though drift estimation is notoriously ill defined due to the small signal-to-noise ratio, one can still try to learn optimal non-linear maps from data to future returns for the purposes of portfolio optimization. Randomized Signatures, in contrast to classical signatures, allow for a high market dimension and provide features on the same scale. We do not contribute to the theory of Randomized Signatures here, but rather present our empirical findings on portfolio selection in real-world settings including real market data and transaction costs.

Keywords: Machine Learning; Randomized Signature; Drift estimation; Returns forecast; Portfolio Optimization; Path-dependent Signal. JEL: C21, C22, G11, G14, G17.

§ INTRODUCTION

Optimal portfolio construction is one of the most fundamental problems in quantitative finance. It refers to selecting and allocating assets to achieve a balance between risk and return. An optimal portfolio aligns with an investor's specific objectives, risk tolerance, and time horizon. In that sense, "optimal" means achieving the best trade-off between expected return and risk; different investors will have different optimal portfolios depending on their specific goals and risk tolerances. Notice that a priori neither the precise optimization problem nor the underlying model for the evolution of the market is known to the investor. The former requires a quantification of risk tolerance and time horizon; the latter requires an estimation of model parameters.

There are several fundamental methods for constructing an optimal portfolio. Modern Portfolio Theory (MPT), developed by Harry Markowitz in his seminal work <cit.>, provides a foundational approach given a model. MPT uses mean-variance optimization to construct the portfolio that maximizes the expected return while minimizing the associated risk (expressed in terms of variance or standard deviation), or, equivalently, minimizes the risk for a given level of expected return. The Capital Asset Pricing Model (CAPM) <cit.> is another cornerstone in portfolio management, which helps investors determine the expected return of an investment, particularly for individual stocks or portfolios of stocks. The Capital Market Line (CML) and the Security Market Line (SML) represent portfolios derived from MPT: the former shows the relationship between risk and return for a portfolio of all possible investments, while the latter illustrates the relationship between the expected return and the systematic risk of an individual asset or portfolio of assets. Recently, <cit.> show that a parsimonious factor model mitigates idiosyncratic noise in historical data for portfolio optimization. They also prove that a combination of the factor model and forward-looking returns improves out-of-sample performance.
Another method, called the Maximum Sharpe Ratio Portfolio, finds the optimal weights by maximizing the portfolio's Sharpe ratio, where the Sharpe ratio measures the excess portfolio return over the risk-free rate relative to its standard deviation. As an enhancement of traditional MPT, the Black–Litterman model <cit.> was developed to incorporate investors' subjective views and market equilibrium returns into optimal asset allocation in a portfolio. Similarly, factor-based portfolio optimization is used to improve on CAPM by systematically considering factors that are believed to affect asset returns. As opposed to the Black–Litterman model, Risk-Parity distributes portfolio risk equally across assets and does not look at investor views or expected return projections. Especially after the Global Financial Crisis of 2008, Risk-Parity became a widely followed strategy <cit.>. Another, by now classical, approach for optimal portfolio allocation is the use of Monte Carlo simulations. This involves generating a large number of random scenarios to model the range of possible future returns for different assets; of course, this again presupposes a given model. By repeatedly simulating portfolio performance under various market conditions, investors can assess the distribution of potential outcomes and make informed decisions (<cit.>, <cit.>, <cit.>). A further methodology, which takes non-normality and fat-tailed distributions into account, is bootstrapping, which involves drawing random samples (with replacement) from historical returns data to estimate the distribution of returns. <cit.> propose a portfolio optimization methodology with equilibrium, views, and resampling. Another stream in the literature is Stochastic Portfolio Theory (SPT), developed by <cit.>. Unlike earlier theories such as MPT and the CAPM, which prescribe how portfolios should be constructed under relatively strong model assumptions, SPT is more descriptive in nature and assumes less about the underlying model. It aligns closely with actually observed market behavior. In SPT, the normative assumptions that underpin MPT and CAPM are absent; in particular, one does not assume knowledge of hard-to-observe quantities. SPT employs a continuous-time random process, specifically a continuous semi-martingale, to model the prices of individual securities. Additionally, it incorporates processes that account for discontinuities, such as jumps, in its theoretical framework. More recently, Machine Learning (ML) approaches have become significantly instrumental in optimal portfolio allocation due to their ability to handle large datasets, complex relationships, and non-linear patterns. In one of the pioneering studies, <cit.> introduce performance-based regularization (PBR) and performance-based cross-validation for the portfolio optimization problem. They show that PBR with performance-based cross-validation is highly effective at improving the finite-sample performance of the data-driven portfolio decision. More specifically, <cit.> investigate the application of deep learning hierarchical models in the context of financial prediction and classification tasks. In particular, they show that applying deep learning methods to constructing portfolios can produce more convincing results than standard methods in finance.
We refer the reader to <cit.> and references therein for a review of data-driven methods and machine learning–based models for portfolio optimization.

Our methodology can be categorized within the broader context of signature methods, which play a central role in rough path theory. The signature method, as an ML technique, has found a wide range of application areas. For instance, <cit.> and <cit.> (discrete and continuous time, respectively) demonstrate that signature payoffs can be exploited to price and hedge exotic derivatives non-parametrically in case one has access to price data for other exotic payoffs. The resulting algorithm is claimed to be computationally tractable and accurate for pricing and hedging using market prices of a basket of exotic derivatives. There is a great deal of flexibility as to how signature methods can be applied. <cit.> provides a general approach by unifying the variations on the signature method, establishing a standard set of options that serve as a domain-neutral initial reference point. An empirical study on 26 datasets shows competitive performance against current benchmarks for multivariate time series classification. Recently, <cit.> propose a new approach for solving optimal stopping problems utilizing signature methods. In particular, their method can be used for American-type option pricing in fractional models on financial or electricity markets.

In the current paper, we employ the method of randomized signatures, which has been introduced to the literature by <cit.> and <cit.>. One application of this methodology can be found in <cit.>, where the authors employ this new technique to deliver state-of-the-art results in anomaly detection for pump-and-dump schemes with cryptocurrencies.

Thanks to this novel methodology, we make several contributions to the literature. Our first contribution lies in the non-linear drift estimation of assets using randomized signatures. The goal of this traditional predict-then-optimize approach is to forecast the future (expected) returns and, thus, to maximize the Sharpe ratio, as other conventional methods in supervised learning would do (see, for example, <cit.>). The novelty and beauty of our framework, though, rely on a non-linear estimator that can (potentially) incorporate the full trajectory of the price processes, extracting features such as volatility, which has been advocated as being mainly path-dependent in <cit.>, or autocorrelation and other time-dependencies that are usually partially disregarded. Even though the drift cannot be precisely inferred, we believe that the use of a non-linear, non-parametric and robust estimator can make a difference in the resulting performance. The estimator is non-linear because (randomized) signatures are able to capture non-linearities and geometric information lying in the path of a stochastic process; non-parametric since it does not involve parameters that need to be fitted, but rather hyper-parameters (such as the time series length) which depend on the framework we want to adapt to; and robust because it is model-independent.
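To make the predict-then-optimize pipeline concrete, the following minimal Python sketch (our illustration only; it is not the exact implementation used later in the paper, and all numbers are hypothetical) computes unconstrained maximum-Sharpe-ratio (tangency) weights from a vector of predicted mean returns and a covariance estimate. In our setting, the mean forecast would come from a linear readout of randomized signatures and the covariance from a shrinkage estimator.

```python
import numpy as np

def max_sharpe_weights(mu_hat: np.ndarray, sigma_hat: np.ndarray,
                       rf: float = 0.0) -> np.ndarray:
    """Unconstrained maximum-Sharpe (tangency) portfolio.

    w is proportional to Sigma^{-1} (mu - rf); weights are normalized
    to sum to one, and long-short positions are allowed.
    """
    excess = mu_hat - rf
    w = np.linalg.solve(sigma_hat, excess)   # Sigma^{-1} (mu - rf)
    return w / np.sum(w)

# toy example with three assets (hypothetical numbers)
mu_hat = np.array([0.04, 0.06, 0.05])
sigma_hat = np.array([[0.040, 0.006, 0.004],
                      [0.006, 0.090, 0.010],
                      [0.004, 0.010, 0.060]])
w = max_sharpe_weights(mu_hat, sigma_hat)
sharpe = (w @ mu_hat) / np.sqrt(w @ sigma_hat @ w)
print(w, sharpe)
```

Any real deployment would add constraints (leverage limits, long-only restrictions, transaction costs), which is where the adjustments discussed below come into play.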
We show empirically that our trading strategy, based on such an estimator, performs well compared to standard benchmark portfolios, both on synthetic data and in a real-world setting.

The use of signature-based strategies for mean-variance portfolios is not yet very widespread in the literature, although two remarkable papers have been published in recent months. Compared to the paper by <cit.>, we can also embed exogenous information in the supervised learning framework, e.g. market factors; moreover, we consider proportional transaction costs and show how our strategy can be adjusted to still obtain very good performance in this setting. While in the former paper there is an elegant analytical expression for the optimized mean-variance portfolio involving signatures, this cannot easily be applied in real life because of the curse of dimensionality inherent in signatures. In our case, by leveraging the Johnson–Lindenstrauss lemma (see <cit.> and <cit.>), we can consider numbers of assets (and potentially many market factors) that are normally prohibitive for the standard signature algorithm.

Another recently published paper is <cit.>, where the authors use exact and randomized signatures for stochastic portfolio optimization. In their article, the authors show that, under certain relatively mild assumptions on the market conditions, the growth-optimal portfolio prescribed by SPT can be regarded as a path-functional portfolio, which can be approximated by signature portfolios; hence, the portfolio weights can be calculated using an optimization on the randomized or truncated exact signatures. In contrast to our approach, the authors do not use the signatures to predict the (expected) returns, but direct their numerical experiments towards an efficient approximation of the growth-optimal portfolio, whereas the focus of the present paper is on the non-linear prediction of mean returns. In this sense, our article might be of interest for many practitioners as well.

The remainder of this paper is organized as follows. Section <ref> gives the theoretical foundation and description of our methodology. Section <ref> introduces transaction costs and how these are modeled, while Section <ref> describes the classical benchmarks we will employ in the rest of the work. Section <ref> describes the simulated and real-world data we are using and, eventually, Section <ref> gives a summary of the results.

§ METHODOLOGY

In this section, we recap the theory of randomized signatures, which provide several efficient regression bases for path space functionals.
In finance, as well as in other areas, it is an important problem to approximate path-space functionals efficiently. One insight from neural network technology is that an ungraded regression basis appears to be superior to a graded one, in which the number of basis elements up to grade N depends exponentially on the dimension of the input signal. More concretely, classical neural networks perform better than polynomial regression bases, in particular on high-dimensional input spaces. In this section, we introduce the counterpart of neural networks for signature methodologies. We also emphasize the relation to reservoir computing, an area of machine learning where random, possibly recurrent networks are used to efficiently construct regression bases on path space. In particular, the focus is not on training the weights between the connections of the network, but rather on training a static and memory-less readout map, such as a linear regression, between the generated random basis and the specific output. We summarize the notation used in this section in Table <ref>. One possible choice for the construction of reservoirs is the use of signatures, as in <cit.>, which yields an infinite-dimensional system. Signatures, originating from rough path theory, have a rich theoretical background and proven properties which make them very suitable as features for machine learning purposes. We refer the reader to <cit.> for a discussion of path integrals and their properties, together with general information on signatures. However, signature features suffer from disadvantages similar to those of polynomials, such as the curse of dimensionality or different scales across signature components. Both can be overcome by randomized signatures. Instead of the previous (fixed) infinite-dimensional system, we search for a better alternative to construct a reservoir. We first fix an activation function σ, a set of hyper-parameters θ ∈ Θ, and a dimension r_d. Then, depending on θ, we choose random matrices A_0, …, A_d on ℝ^r_d × r_d as well as shifts b_0, …, b_d such that maximal non-integrability holds at a starting point x ∈ ℝ^d+1. One can tune the hyper-parameters θ ∈ Θ and the dimension r_d such that paths of

dZ_t = ∑_i=0^d σ(A_i Z_t + b_i) dX^i_t, Z_0 = z,

approximate path-space functionals of (X_s)_s≤t via a linear readout up to arbitrary precision. Notice that we typically take X^0_t = t. The process (X_t)_t≥0 is the driving path along which we compute the randomized signature (Z_t)_t≥0 and can include endogenous as well as exogenous information.

§.§ Problem statement

In a financial market with n ∈ ℕ stocks (n ≤ d), prices are denoted here by 𝒮 = (S^1, …, S^n) and are expressed through the values of a factor process X_t = (X_t^1, …, X_t^n) for t ∈ [0,T]. The factor process can be the price itself, the log-price, or the first differences of the log-price. One of the fundamental questions is how to construct an optimal portfolio for a time horizon T > 0. As a possible solution, economic theory suggests choosing preferences and setting up an optimization problem given knowledge about the financial market. Already in this simple setting, two immediate downsides appear: neither preferences nor the stochastic model for the financial market are easy to specify. In our context, we assume that the preferences are known and that the stochastic process 𝒮 is determined by an Itô semi-martingale with continuous trajectories; we let [-T_obs, 0] be the observation period and (s ⟼ X_s)_s ∈ [-T_obs, 0] be the observed data trajectory.
Then the question is to find a predictor for the law of the stocks' factor processes dX_t = μ_t dt + ∑_i=1^n σ^i_t dB^i_t, when both μ and the σ^i are unknown quantities with full path dependence on the observation σ-algebra generated by X. Once all factor processes are considered, the quadratic covariation is calculated as follows:

[X,X]_s,t(ω) = ∑_i=1^n ∫_s^t σ^i_v ⊗ σ^i_v dv,

where σ^i_v ⊗ σ^i_v is the instantaneous covariance matrix along t ⟼ X_t(ω), which is observable along trajectories of X. We estimate the instantaneous covariance using the shrinkage covariance estimator described in Section <ref>, which is essentially an adjusted sample covariance estimator incorporating prior beliefs. Next we consider the task of drift estimation (under a mild martingale assumption on the stochastic integral):

E[X_t − X_s | ℱ_s] = E[∫_s^t μ_v dv | ℱ_s] + 0,

which requires a lot of trajectories in order to estimate

F_Δ(X_[0,s]) = 𝔼[X_s+Δ − X_s | ℱ_s] = ∫_s^s+Δ 𝔼[μ_v | ℱ_s] dv,

where F_Δ is a path-space functional, i.e. almost surely defined on C([0,s]; ℝ^n) with values in ℝ^n. The necessity of a large number of trajectories (only obtainable through long-term observations) in order to capture the drift within a certain interval is obvious and is outlined in the following. This fundamental fact does not depend on the simplicity of the underlying model. Thus, for ease of exposition, assume that a single stock follows a geometric Brownian motion with drift μ ∈ ℝ and volatility σ > 0, so that we can write dS_t/S_t = μ dt + σ dW_t for t > 0 with S_0 = s_0 > 0. If we collect observations from this stock every Δt > 0 over a time span T such that N·Δt = T, then we observe N points. In this setting, an (optimal) unbiased estimator for the drift is given by

μ̂ := 1/(N Δt) ∑_i=1^N ΔS_i/S_i-1 = 1/T ∑_i=1^N ΔS_i/S_i-1,

where ΔS_i = S_i − S_i-1. From Equation (<ref>), we can compute the main contribution to the variance of μ̂ as

Var(μ̂) = 1/T² ∑_i=1^N Var(σ ΔW_i) = σ²/T,

which implies a standard deviation of σ/√(T). Consequently, if we want a 95% confidence interval that pins down the drift within a 1% window (i.e. ± 0.5%), then we need q(α) σ/√(T) ≤ 0.5%, which leads to T ≥ (1.96σ / 0.005)². That is, for σ = 20% we should wait for more than 6'146 years in order to get an unbiased estimator of the drift at this precision. This is clearly unfeasible. Note also that we are tacitly assuming that the law of the process remains constant during the entire period, which is unlikely to be the case in reality.
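To make the orders of magnitude tangible, the following short Python sketch (our own illustration, with assumed parameter values, not code from an actual implementation) checks both the σ/√(T) standard deviation of μ̂ by Monte Carlo and the required waiting time T ≥ (1.96σ/0.005)².

```python
import numpy as np

# Minimal sketch: verify Var(mu_hat) = sigma^2 / T for the GBM drift estimator
# and reproduce the ~6'146-year waiting time. Parameter values are assumptions.
rng = np.random.default_rng(0)
mu, sigma = 0.05, 0.20      # true drift and volatility
T, dt = 20.0, 1.0 / 252     # 20 years of daily observations
n_paths = 2000

n_steps = int(T / dt)
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
# Delta S_i / S_{i-1} = mu*dt + sigma*dW_i under an Euler discretization
simple_returns = mu * dt + sigma * dW
mu_hat = simple_returns.sum(axis=1) / T

print("empirical std of mu_hat  :", mu_hat.std())        # ~ 0.0447
print("theoretical sigma/sqrt(T):", sigma / np.sqrt(T))
# horizon needed for a 95% CI of half-width 0.5%:
print("required T in years      :", (1.96 * sigma / 0.005) ** 2)  # ~ 6146.6
```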
Given a set of N stocks, either simulated or consisting of real daily stock data as described in Section <ref>, our goal is to use reservoir computing to generate portfolio weights that can beat standard benchmarks. As a benchmark prediction, we use the running average of the returns over the previous t_w days, i.e.

R̄_𝚝 = 1/t_w ∑_{𝚜=𝚝−t_w}^{𝚝−1} R_𝚜.

§.§ Drift Estimation using reservoir computing

We start our analysis with a set of n stocks, 𝒮 = (S^1, …, S^n), such that S^j_t represents the price of stock j on day t. After normalizing each stock's price by its initial value, we take the first differences of the log-prices to obtain the log-returns, denoted by LR_t^j = log(S_t^j) − log(S_t-1^j). In our model, the evolution of the randomized signature of these log-returns is determined by

dR_t = ∑_i=0^d σ(A_i R_t + b_i) dX^i_t, R_0 ∼ 𝒩(0, I_r_d),

where σ is the activation function and d+1 represents the input dimension, with n ≤ d (see <cit.> or <cit.> for more information). Here we have followed the standard approach of setting time as the 0th dimension. Moreover, the A_i ∈ ℝ^r_d × r_d are linear random projection operators whose entries follow a normal distribution with mean r_m and variance r_v, and the entries of the random biases b_i ∈ ℝ^r_d follow a standard normal distribution. Note that the A_i and b_i are only generated once per “simulation”, which means that they are constant across time but vary across different experiments. Given the stochastic nature of such experiments, we then take the average to obtain the mean behavior. The initial value R_0 of the randomized signature is drawn as a standard normally distributed vector in ℝ^r_d. Finally, X provides the driving increments in Equation (<ref>) and contains the log-returns of the prices, LR_t, or the log-prices themselves, log S_t. Note that the information content is the same whether one chooses log S_t or LR_t as input. In the following, we adopt the usual convention in finance and use LR_t for our experiments to make our results comparable to the standard benchmarks, but we experimented with log S_t and obtained similar results[Results are available from the authors upon request]. Finally, the data is augmented with additional inputs as described in Section <ref> and X^0_t = t, so that we end up with X = (t, LR^1, …, LR^n, a_1, …, a_d-n) ∈ ℝ^d+1, where the a_i denote the additional inputs described in Section <ref>, as our final input for computing the reservoirs.
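As an illustration of this input construction, the following sketch (our own, with the column ordering and daily time scaling as assumptions) builds the driving path X from a matrix of daily prices.

```python
import numpy as np

def build_input(prices: np.ndarray) -> np.ndarray:
    """Build the driving path X = (t, LR^1, ..., LR^n) from daily prices.

    prices: array of shape (T, n) of daily closing prices.
    Returns an array of shape (T-1, n+1) whose 0th column is time.
    """
    prices = prices / prices[0]                     # normalize by initial value
    log_returns = np.diff(np.log(prices), axis=0)   # LR_t = log S_t - log S_{t-1}
    t = np.arange(1, len(prices))[:, None] / 252.0  # time as the 0th dimension
    return np.hstack([t, log_returns])
```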
Notation: in the following, we use type-written characters to denote indices (integer numbers) on the time grid and normal characters to denote the associated instants of time. For example, if we uniformly discretize the time horizon with a step of length h, we write T = 𝚝·h, where T denotes the instant of time and 𝚝 the time index. For any time 𝚝 ∈ {𝚝_w, …, 𝚃}[We start from 𝚝_w because we need at least 𝚝_w data points to have a valid data window for our algorithm.], we calculate the randomized signature R_𝚝 using the previous 𝚝_w log-returns as input. This is done so that the input data X for our prediction method is always the same as for the momentum benchmark of Section <ref>. Hence, in the typical case where X_t ≡ LR_t, any single reservoir entry R_𝚝 depends on {LR_𝚝−𝚝_w, …, LR_𝚝−1} and on the hyper-parameters (r_d, r_m, r_v). Let

ℱ^r_d, r_m, r_v : ℝ^𝚝_w × d ⟶ ℝ^r_d

be a numerical scheme solving Equation (<ref>), such as the forward Euler scheme.
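A minimal forward-Euler implementation of such a scheme might look as follows; this is a sketch under our own conventions, with σ = tanh and default hyper-parameter values chosen for illustration (they mirror one of the configurations reported later).

```python
import numpy as np

def randomized_signature(X, r_d=70, r_m=0.0, r_v=0.03, seed=0):
    """Forward-Euler solution of dR_t = sum_i sigma(A_i R_t + b_i) dX^i_t.

    X: driving path of shape (T, d+1), e.g. the output of build_input above.
    Returns the reservoir states R of shape (T+1, r_d).
    """
    rng = np.random.default_rng(seed)
    d1 = X.shape[1]                                          # d + 1 channels
    A = rng.normal(r_m, np.sqrt(r_v), size=(d1, r_d, r_d))   # random projections
    b = rng.normal(0.0, 1.0, size=(d1, r_d))                 # random shifts
    R = np.zeros((len(X) + 1, r_d))
    R[0] = rng.normal(0.0, 1.0, size=r_d)                    # R_0 ~ N(0, I)
    dX = np.diff(X, axis=0, prepend=X[:1])                   # path increments
    for t in range(len(X)):
        drift = np.einsum('irs,s->ir', A, R[t]) + b          # shape (d+1, r_d)
        R[t + 1] = R[t] + np.tanh(drift).T @ dX[t]           # Euler step
    return R
```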
It is important to note that the evolution of the randomized signature is determined by the increments of the time series X. As mentioned above, the standard approach is to add a time dimension to the log-returns and use them in Equation (<ref>). Our numerical experiments also showed that normalizing the returns by their first value yields better results[Note that the normalization of the stock prices becomes redundant if the returns are normalized. However, in our experiments we still work with the normalized price data because the same time series is then used for the benchmark algorithms.]. In principle, it is possible to add information beyond the simple log-returns and time in order to enhance the prediction accuracy. For instance, we experimented with adding realized standard deviations and obtained results similar to those with time as an extra dimension[Results for the standard deviation are available from the authors upon request]. The only shortcoming of this approach is that it increases the dimension of the input time series. However, this does not create any problem in our methodology, since we exploit random projections, which are easily generated and do not suffer from any curse of dimensionality. Our goal is to learn stock behaviour from past observations, hence we apply a supervised learning algorithm. Since a minimum amount of data is necessary for learning, we split the dataset into two groups: the “burn-in” set, denoted by I_burn(t_s), and the train set, denoted by I_train(t_s), where t_s is the time of separation. As Table <ref> shows, the two subsets I_burn(t_s) and I_train(t_s) are always non-overlapping and connected regions of the ordered interval {𝚝_w, …, 𝚃}. Hence I_burn(t_s) is a static region that is always used in its entirety, while I_train(t_s) is a gradually expanding training set.
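In code, the split reduces to simple index bookkeeping; the sketch below assumes the 10% burn-in fraction used later in our numerical example.

```python
def split_indices(T_idx: int, t_w: int, burn_frac: float = 0.10):
    """Static burn-in block and expanding train set, as index ranges."""
    t_s = max(int(burn_frac * T_idx), t_w + 1)   # ensure t_s > t_w
    burn = range(t_w, t_s + 1)                   # I_burn(t_s): always used fully
    def train(t):                                # I_train(t_s) grows with t
        return range(t_s + 1, t + 1)
    return burn, train
```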
As is standard for reservoir systems, we only need to learn the read-out map from the randomized signature controlled by X, given by Equation (<ref>), to the future log-returns; in our numerical application, we try to predict one day ahead. The read-out map is obtained from a classical ridge regression with regularization parameter α = 10^-3[We chose this particular value for the smoothing parameter because, in our experiments, it results in regression coefficients in a reasonable range.]. This map is obtained for every time index and is computed on input spaces of increasing dimension: at each time instant, a new randomized signature element in ℝ^r_d is added to the previous ones. The increase in the sample space could a priori be considered a downside of the method; however, we emphasize that this does not represent a practical obstacle, given the linear nature of the regression. In our numerical example, we chose t_s such that t_s = T/10 > t_w, which implies that we only look at indices 𝚝 in the interval {𝚝_s+1, …, 𝚃}. Once 𝚝 is fixed, we compute the randomized signature of X over the interval {𝚝_w, …, 𝚝} and map it against the log-returns {LR_𝚝_w+1, …, LR_𝚝+1}. The readout, an L²-regularized linear regression, is then deployed on the input sample R_𝚝+2 to obtain the prescribed output LR_𝚝+2. The process is repeated for all 𝚝 until 𝚃 − 1 is reached.
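The expanding-window readout can be sketched as follows with scikit-learn's Ridge; the exact index alignment between reservoir states and returns is our reading of the procedure above.

```python
import numpy as np
from sklearn.linear_model import Ridge

def one_day_ahead_predictions(R, LR, t_s):
    """Expanding-window ridge readout from reservoir states to next-day returns.

    R:  reservoir states, shape (T, r_d), with R[t] built from data up to day t;
    LR: log-returns, shape (T, n). Returns a dict {day: predicted log-return}.
    """
    preds = {}
    for t in range(t_s, len(R) - 1):
        model = Ridge(alpha=1e-3)              # regularization as in the text
        model.fit(R[:t], LR[1 : t + 1])        # learn the map R_s -> LR_{s+1}
        preds[t + 1] = model.predict(R[t : t + 1])[0]   # forecast LR_{t+1}
    return preds
```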
§.§ Covariance Estimation

In the previous section, we discussed our methodology for estimating the expected returns. The remaining ingredient required to apply Markowitz portfolio optimization is an estimate of the covariance matrix of the returns. It is well known that the standard sample covariance estimator,

Σ̂_𝚝 = 1/(t_w − 1) ∑_{𝚜=𝚝−t_w}^{𝚝−1} (R_𝚜 − R̄_𝚝)(R_𝚜 − R̄_𝚝)^⊤,

is not appropriate for this task. The main reason is that the parameter space can be too large compared to the sample size, which results in an unstable and unreliable estimator <cit.>. To overcome this difficulty, various techniques have been proposed, such as shrinkage estimators <cit.>. All linear shrinkage estimators follow a Bayesian approach, in which the sample covariance matrix, Equation (<ref>), is shifted towards a prior belief about the covariance structure, for instance a diagonal matrix containing only the sample variances of the single stocks. The resulting matrix can be seen as a convex combination of the sample covariance and a target matrix, usually a scalar multiple of the identity matrix. In the limit, when the number of samples grows to infinity, the shrinkage estimator converges to the sample covariance estimator, which is in line with the intuition that we slowly move away from our prior belief the more data we receive. <cit.> propose an improvement over the simple linear shrinkage estimator by using a nonlinear shrinkage estimator, which allows shrinking with a different intensity for each eigenvalue of the sample covariance matrix (while keeping the same eigenvectors) and, most importantly, does not require any prior knowledge of a target covariance matrix. Furthermore, they prove the optimality of their estimator within the class of rotation-equivariant estimators (that is, estimators that do not modify the sample covariance eigenvectors). Note that both linear and nonlinear shrinkage techniques give rise to positive definite, hence invertible, covariance matrices. The use of shrinkage makes the covariance estimator more robust in general, which means that it varies less across time. As covariance estimation is not the main focus of this article, we refrain from formally stating the assumptions and the theoretical results and refer to <cit.> for a detailed description of the nonlinear shrinkage estimator. In our experiments, we use an implementation of the estimator in R, a statistical programming language (package provided by <cit.>).

§.§ Portfolio weights generation

To assess the quality of our predictions, we construct the maximum Sharpe ratio portfolio first proposed in <cit.>. Additionally, we impose holding constraints on the weights by not allowing short selling and by capping the weight of any single asset at 20% of the total portfolio. Hence, at each time step we solve the Markowitz optimization problem

max_w (w^⊤ μ̂ − r_f) / √(w^⊤ Σ̂ w), subject to ∑_i |w^i| = 1, 0 ≤ w^i ≤ 0.2.

Here μ̂ is given by the estimator described in Subsection <ref>, Σ̂ is the covariance matrix obtained by non-linearly “shrinking” the unbiased sample estimator as described in Subsection <ref>, and r_f is the risk-free rate. For the sake of simplicity, we do not consider transaction costs in our experiments on simulated data and compare the annualized returns and annualized Sharpe ratios of our methodology to those of the benchmark portfolios described below. For the real-world data, we compare the annualized returns and the annualized Sharpe ratios under different levels of transaction costs.
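A sketch of this optimization step is given below. Note two simplifications that are ours, not the paper's: we use scikit-learn's linear Ledoit-Wolf shrinkage as a convenient stand-in for the nonlinear shrinkage estimator, and SLSQP as an off-the-shelf solver. Since the weights are constrained to be non-negative, the constraint ∑_i |w^i| = 1 is equivalent to ∑_i w^i = 1.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.covariance import LedoitWolf

def max_sharpe_weights(mu_hat, returns_window, r_f=0.0, w_max=0.2):
    """Constrained maximum Sharpe ratio portfolio (sketch).

    mu_hat: predicted mean returns, shape (n,);
    returns_window: recent returns, shape (m, n), used for the covariance.
    """
    n = len(mu_hat)
    Sigma = LedoitWolf().fit(returns_window).covariance_   # shrunk covariance

    def neg_sharpe(w):
        return -(w @ mu_hat - r_f) / np.sqrt(w @ Sigma @ w)

    # with w >= 0 this equality constraint equals sum(w) = 1
    cons = [{"type": "eq", "fun": lambda w: np.sum(np.abs(w)) - 1.0}]
    bounds = [(0.0, w_max)] * n          # no short selling, 20% cap per asset
    w0 = np.full(n, 1.0 / n)
    res = minimize(neg_sharpe, w0, method="SLSQP",
                   bounds=bounds, constraints=cons)
    return res.x
```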
§.§ Additional input information

As mentioned before, we can in principle feed our methodology with any other kind of available information coming from the market or with user-generated data. For instance, we tried our algorithm with the inclusion of the following:

* Randomly generated portfolios. The idea is to add new artificial time series obtained by fixing some randomly chosen weights at the starting time and keeping them throughout the entire process. The intuition is that this new information should increase the signal-to-noise ratio[For stochastic quantities, the signal-to-noise ratio (SNR) is defined as the ratio between the second moments of the random variables describing the signal S and the noise N, that is, SNR = 𝔼[S²]/𝔼[N²].] and help to identify the stocks based on their first moment (drift).

* Volatility of the mean returns. In this case, we computed the mean of all returns and then the volatility of this average. The inspiration comes from the fact that in signature applications it is often advisable to add another time series obtained as a transformation of the original one, namely via the so-called lead-lag transformation, which allows the quadratic variation of the process to be taken into account (see <cit.> on this topic).

* Volatility of each stock's returns. The idea is similar to the previous one, but this time the volatility of each stock is computed separately, avoiding the information loss caused by the initial averaging across stocks.

* Futures prices on the VIX. We extend the time series with the first-month VIX futures closing price, so as to allow for investing in VIX-related products when market volatility increases, since it is not possible to trade the VIX index directly. In practice, our algorithm invested in VIX futures on only 5 days, without a substantial increase in the Sharpe ratio.

Note that adding all this information does not cause any bottleneck for our algorithm, because we rely on a random basis and the training only concerns the linear readout.

§ TRANSACTION COSTS

In our experiments with S&P 500 market data, we additionally investigate how proportional transaction costs influence the performance. Transaction costs are modeled as

TC_t = λ ∑_i |SH^i_t − SH^i_t-1|,

where λ is the proportional transaction cost and SH^i_t is the number of shares of stock i held in the portfolio at time t. Hence, we incur costs both when buying and when selling stocks. The portfolio value is adjusted by subtracting the transaction cost at each time step. As our methodology described so far is purely based on drift and covariance estimation to optimize the Sharpe ratio, the portfolio weights can vary widely between trading days, leading to bad outcomes in a trading environment that includes transaction costs. To lower transaction costs, we post-process the portfolio weights so that our trading strategy trades less and only on days with a strong trading signal. We achieve this by first smoothing the weights with a moving average of the past weights and by introducing a threshold τ, which has to be breached before trading occurs. Specifically, given non-post-processed portfolio weights w_t, predictions R̂_t, and stock prices S_t, we update the portfolio shares held during the next trading period as

SH_t+1 = SH_t if |R̂_t+1 − R̂_t| < τ, and SH_t+1 = w_t+1 P_t / S_t otherwise,

where P_t denotes the portfolio value at time t.
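The following sketch combines this threshold rule with the moving-average smoothing detailed just below; the element-wise reading of the threshold condition and the renormalization convention are our assumptions.

```python
import numpy as np

def postprocess_shares(SH_prev, w_new, P_t, S_t, pred_new, pred_old,
                       SH_hist, tau=0.01, k=5):
    """Threshold-and-smooth update of the holdings (sketch).

    SH_hist: array of the last k-1 (or more) share vectors, shape (m, n).
    """
    if np.max(np.abs(pred_new - pred_old)) < tau:
        SH_next = SH_prev                    # weak signal: do not trade
    else:
        SH_next = w_new * P_t / S_t          # convert target weights to shares
    # moving average over the last k share vectors ...
    past = np.sum(SH_hist[-(k - 1):], axis=0) if k > 1 else 0.0
    SH_k = (SH_next + past) / k
    # ... and renormalize so the holdings are worth exactly P_t at prices S_t
    return SH_k * P_t / np.sum(SH_k * S_t)
```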
Given this preliminary update, to further reduce trading costs, we take a moving average over the last k days and re-normalize to obtain the final share holdings for the next trading period:

SH^k_t+1 = 1/k (SH_t+1 + ∑_{i=t−k+2}^{t} SH_i), SH_t+1 = SH^k_t+1 P_t / ∑_i SH^k,i_t+1 S^i_t.

For our numerical experiments, following the experiments performed in <cit.>, we let the proportional trading costs range between 0% and 1%.

§ BENCHMARKS' DESCRIPTION

In this section, we introduce three benchmark portfolios against which we compare our methodology.

§.§ Linear Regression Portfolio

As a direct comparison to our methodology, we consider a portfolio based on a linear estimation of the drift. We use the same parameters as for our methodology and estimate the drift by a linear regression in which, at each time step, the past t_w log-returns are used to predict the mean of the next log-return. Subsequently, we use the linear predictions in exactly the same way to construct the portfolios.

§.§ Momentum Portfolio

Following market trends as an investment strategy has existed for a very long time. The general hypothesis of momentum-style portfolios is that stock market trends have “momentum”, i.e. that, on average, stock prices that have been going up continue to go up, and vice versa. There are many momentum-based strategies; for a classical overview of momentum strategies, please refer to <cit.>. In more recent work, <cit.> constructed equally weighted combinations of momentum strategies over various time intervals. If the past excess return over such a time horizon is positive, this is considered an up trend and hence a long position is taken; vice versa, a negative past excess return is considered a down trend and a short position is taken. In their extensive analysis, the authors have shown a robust market over-performance of momentum-following strategies, even taking into account transaction costs and management fees. To make the momentum strategy more comparable to ours, we choose a momentum strategy with the same time interval as our strategy. Hence, the strategy is the same as described in Section <ref>, only with the average return used as estimator instead of the estimators described in Section <ref>.

§.§ 1/n Portfolio

The story behind the 1/n portfolio is worth telling. As stated in <cit.>, although Markowitz was awarded a Nobel prize for having defined the mean-variance portfolio, a portfolio that maximizes the gain (mean) for a given risk or minimizes the risk (variance) for a given return, as a young economist he decided to invest with a simple rule of thumb that can be called “1/n”: allocate your money equally to each of n funds. During an interview, he said: I thought, ‘You know, if the stock market goes way up and I'm not in it, I'll feel stupid. And if it goes way down and I'm in it, I'll feel stupid. So I went 50–50.' In practice, from a numerical perspective, this translates into keeping the portfolio weights equal to 1/n at all times, where n is the number of stocks we pick for our portfolio. The popularity of this approach, which does not depend on complex strategies or on the availability of data, grew substantially when <cit.> empirically showed that the 1/n approach can outperform many other approaches on out-of-sample data. The importance of this approach as a valid benchmark stems from <cit.> as well. There, the 1/n portfolio is shown, for example, to outperform the other portfolios (e.g.
the market portfolio and the entropy-weighted portfolio) in the case of zero transaction costs. For these reasons, we include the 1/n portfolio as a benchmark for our method.

§ DATA

The empirical analysis in this paper is based on both simulated and real data. Both data sets consist of daily stock prices covering roughly 20 years. The next sections give a brief overview of the data used.

§.§ Simulated data

For our simulated data, we assume that the asset price dynamics are given by

dS^i_t = S^i_t μ^i cos(0.3 · ∑_j=1^10 S^j_t) dt + S^i_t σ^i dW^i_t,

where i ∈ {1, …, 10} and the W^i_t are correlated Brownian motions with correlation matrix R. For all i, we set S^i_0 = 100 and simulate 5040 time steps, corresponding to 20 years of daily data. The parameters μ and σ are given by

μ = (−0.1, 0.2, −0.25, 0.25, −0.35, 0.22, −0.45, 0.25, −0.6, 0.28)^⊤, σ = (0.1, 0.2, 0.25, 0.3, 0.35, 0.2, 0.4, 0.35, 0.5, 0.25)^⊤.

With this choice of dynamics, we aim to simulate stock data with a non-linear drift. Because of the cosine term, the drift of each stock can fluctuate between positive and negative values, and hence it is not easy to determine which stocks are best to invest in. As we can extract the true drift term of the process (<ref>), we can compute the true trend evolution starting at some arbitrary value S_0; hence we can also compare the predicted mean with the true mean of the process as derived from (<ref>).

§.§ S&P 500 data

For the real data, we collected daily closing prices for a random selection of 50 stocks in the S&P 500 index from January 2000 until July 2022. In the selection, we avoid stocks that do not have a complete time series over our observation window; apart from that, the stocks are picked completely at random, and the list of the tickers used can be found in <ref>. Additionally, for our methodology, we perform data augmentation as described in Section <ref>. Note that this way of selecting stocks introduces survivorship bias, as stocks that have fallen out of the S&P 500 within our observation period are never picked. The results of our experiments should not be significantly affected, as we compare against benchmark portfolios on the same stock selection, which therefore benefit from the same advantages. We did not perform any experiments aimed at quantifying this impact, as it is not obvious how to deal with stock price series of varying length within our methodology, and we leave this point open for future research.

§ RESULTS

In this section we present the empirical results of our algorithm with two different portfolio construction methodologies, using both real and simulated prices, under the constraint given in Equation (<ref>). We mainly compare the performance measured in annualized returns and in the Sharpe ratio, which we calculate as

r_a = (1 + r)^252/t − 1 and s_a = (r_a − r_f)/σ_a,

respectively. As benchmarks for our methodology, we use two separate portfolios. First, we compare our methodology with the performance generated using the benchmark log-return prediction (see Section <ref>) and the same portfolio construction methodology; we call this strategy the momentum benchmark. As a second benchmark, we compare our results with those obtained from a naive 1/n portfolio, meaning that we invest equal weights in each asset and re-balance to maintain this equal weighting every trading day (Section <ref>).
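For concreteness, a sketch of an Euler-Maruyama simulation of these dynamics, together with the annualization formulas just stated, is given below; the time step, the seed handling, and the helper signatures are our own choices.

```python
import numpy as np

def simulate_prices(mu, sigma, corr, n_steps=5040, dt=1/252, s0=100.0, seed=0):
    """Euler-Maruyama scheme for the cosine-drift dynamics above (sketch)."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(corr)          # correlate the Brownian motions
    S = np.empty((n_steps + 1, len(mu)))
    S[0] = s0
    for t in range(n_steps):
        dW = L @ rng.normal(0.0, np.sqrt(dt), len(mu))
        drift = S[t] * mu * np.cos(0.3 * S[t].sum())
        S[t + 1] = S[t] + drift * dt + S[t] * sigma * dW
    return S

def annualized_metrics(total_return, n_days, vol_annual, r_f=0.0):
    """Annualized return r_a and Sharpe ratio s_a from the formulas above."""
    r_a = (1.0 + total_return) ** (252.0 / n_days) - 1.0
    return r_a, (r_a - r_f) / vol_annual
```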
In our methodology, there are many specific parameter choices, such as the amount of “burn-in” data or the regularization constant chosen for the ridge regression. A full table of all parameters used can be found in Table <ref>. We choose the same parameters for the experiments with the simulated data and with the S&P data.

§.§ Results with simulated data

Since we know the dynamics of the stochastic process for the simulated data, we can directly compare our predictions with the true drifts, denoted by μ^*,i for i ∈ {1, 2, …, 10}:

μ^*,i = S^i_t μ^i cos(0.3 · ∑_j=1^10 S^j_t).

We then calculate the information coefficient, i.e. the correlation between forecast and realized returns, both for our estimates and for the true mean. Figure <ref> displays the information coefficient for the various stocks. As is clear from the figure, almost all values for our estimates are positive, which provides empirical evidence for the validity of our approach. Furthermore, we can see that for most simulated stock prices the information coefficient is very low even for the true drift μ^*,i, which indicates that, for the simulated stocks, one cannot perform much better than random picking. On average, the information coefficient of our predictions is 3.7%. Although this may initially appear very low, one has to keep in mind that the problem itself is very difficult, so that much better results should not be expected. For instance, <cit.> show that an information coefficient of 3.7% is already sufficient to expect out-performance of the corresponding Markowitz portfolio compared to the 1/n portfolio. Figure <ref> displays the performance of our strategy compared to the benchmark portfolios. The figure shows the price evolution of the portfolios, restarted at 1 every two years, using the methodology described in Section <ref>. The full result, which includes the compounding effect, can be found in <ref> (see Figure <ref>). Both pictures start from year 2, since the first two years (10% of the data) are used as a “burn-in” period for the algorithm. As described in the methodology part, we generate n_s predictions for different seeds and use their average to generate the price path. The bold blue line shows the result of the portfolio constructed in this way. Additionally, we also display the price paths for each of the n_s predictions (pale blue lines); averaging over the predictions results in a more stable outcome. To better compare the results, we restart the price paths at 1 every two years, as otherwise, due to the compounding effect of the returns, the results are difficult to compare; this is denoted by the black solid line in Figure <ref>. One can see a clear out-performance of our methodology compared to all benchmarks in most of the path segments: in 7 out of 9 of the two-year segments, our strategy outperforms all benchmarks. In the last temporal segments, our strategy under-performs against two of the benchmark portfolios. Due to the low signal-to-noise ratio, it is expected that over-performance cannot always be achieved. Still, the overall results show an over-performance of our strategy across the full simulation period.
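The information coefficient used above is simply a correlation; a minimal helper (our own) for a single stock reads:

```python
import numpy as np

def information_coefficient(forecasts, realized):
    """Correlation between forecast and realized returns for one stock."""
    f, r = np.asarray(forecasts), np.asarray(realized)
    return np.corrcoef(f, r)[0, 1]
```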
In particular, if we consider the 1/n strategy as a competitor, from Figure <ref> we observe that our methodology over-performed it in 67.28% of the monthly returns.

§.§ Results on S&P 500 data

As our methodology depends on the concrete choice of the reservoir hyper-parameters (r_d, r_m, r_v), we present our results for a grid of different values. In our experiments, we choose r_m ∈ {0, 0.05, 0.1}, r_v ∈ {0.01, 0.03, 0.05, 0.3, 1.0} and r_d ∈ {50, 60, 70, 100}. The resulting grid of results compared to the momentum benchmark portfolio can be found in Table <ref>; the corresponding grids compared to the 1/n portfolio and to the linear regression portfolio can be found in Tables <ref> and <ref>, respectively. The tables show the differences in r_a and s_a. As can be seen from the results, our strategy outperforms all benchmark portfolios. As a robustness check for our random signature algorithm, we again observe that positive over-performance numbers are obtained for all parameter values considered. In addition to supporting our empirical results from the simulated prices, the case study on real prices also helps us to recognize a pattern in the parameter values yielding the highest over-performance statistics. First, we observe that, for a given level of the mean r_m, the over-performance in both r_a and s_a tends to increase with higher values of the variance r_v. Additionally, we observe that the results depend heavily on the concrete choice of the hyper-parameters: in Table <ref>, the over-performance in r_a ranges from 1.51% for our worst-performing model to 6.2% for our best-performing model. Naturally, this leads to a significant difference over the evaluation period. We also check the robustness of our results by displaying all n_s results for a given hyper-parameter configuration (r_d, r_m, r_v). Figure <ref> shows the monthly returns for the configuration (r_d, r_m, r_v) = (70, 0.0, 0.03) for each of the portfolios. As is clear from the figure, the shape of the distribution is similar for each of the portfolios, and that of our strategy is indeed slightly shifted upwards relative to each benchmark portfolio, with the 1/n benchmark being the closest contender. For the hyper-parameter configuration (r_d, r_m, r_v) = (60, 0.02, 0.03), we also check the temporal robustness by comparing the quarterly returns of our methodology with those of the 1/n benchmark portfolio. Visual inspection of Figure <ref> shows that our method's quarterly over-performance does not concentrate in specific periods. Comparing the full period for this specific hyper-parameter configuration, our methodology outperforms the 1/n benchmark in 59.43% of the quarters.

§.§ Comparisons

Elaborating on our results, we observe a larger over-performance of our portfolio on the real data than on the simulated data. Specifically, on the simulated data we only get an over-performance of 2.6% compared to the strongest benchmark portfolio, which is less than the average over-performance of 3.5% obtained on the real data against the strongest benchmark (Table <ref>). While this might seem counter-intuitive at first glance, it can be attributed to the fact that the simulated data have lower dimensionality (10 versus 50 time series), have a precise correlation structure (which is constant
and invertible over time) and thus present fewer arbitrage opportunities than the real data. Note, however, that the covariance matrix used in the portfolio construction was obtained with the same procedure for both the simulated and real datasets. We postulate that randomized signatures can better detect arbitrage opportunities when the signal-to-noise ratio is low enough. To corroborate our hypothesis, we performed another analysis on the simulated data: we use the same stochastic simulations of the Brownian motions, but we increase the standard deviations by a factor of 2. The new simulations can be seen in Figure <ref>. In this case, randomized signatures do not perform as well as before, as can be seen in Figure <ref>, where again we re-normalize to unit value every two years (for the total compounded result, see Figure <ref>). Under this setting, the 1/n portfolio performs better than our strategy based on randomized signatures; in particular, our strategy generates monthly returns that are higher than those obtained by 1/n only 45.16% of the time (see Figure <ref>). We also try the opposite, namely decreasing the standard deviations of the Brownian motions by a factor of 0.5 while keeping the same realizations of the normal random variables. The purpose is to increase the signal-to-noise ratio and see how the different methodologies perform in this environment. In this case, the randomized signatures can capture the signal only when the variance r_v used for the construction of the random matrices A_i and random biases b_i, i = 0, …, d, in Equation (<ref>) is increased (we used a factor of 4, i.e. the reciprocal of the squared factor applied to the standard deviations). The results are shown in Figure <ref>. As expected, a simple linear regression can also perform well in this low-noise context; moreover, our method is consistently better than the linear regression, as can also be seen in Figure <ref>, which displays the entire portfolio history (with compounding). Our strategy's monthly returns over-perform those of the 1/n strategy 94.93% of the time (see Figure <ref>). We emphasize that this result is only possible by changing the reservoir: using the same reservoir as before yields generally lower performance, in line with the 1/n strategy. This seems to result from the fact that those random features are not rich enough to filter the correct future returns.

§.§ Transaction costs

To make our strategy applicable in practice, we have to consider transaction costs. As mentioned before, we have to adjust our trading strategy in an environment with proportional transaction costs. Following the methodology described in Section <ref>, we introduce two additional hyper-parameters, τ and k, corresponding to the threshold and to the number of days over which we take the moving average, respectively. Figure <ref> shows the effect of different levels of transaction costs on the portfolio performance; the x-axis shows different values of k and the y-axis different levels of τ. As can be seen, in the environment without transaction costs the best hyper-parameter configurations are in the upper left corner, whereas with increasing transaction costs the best-performing configurations move more and more towards the lower right corner.
This is in line with intuition: as transaction costs increase, we want to trade less frequently. Finally, we fix τ = 1% and k = 5 and compare our portfolio with the benchmark portfolios, where we adjust all benchmark portfolios except the 1/n portfolio in the same way as described in Section <ref>. The reason why we do not similarly adjust the 1/n portfolio is that for this portfolio we have no predictions, and hence the criterion (<ref>) cannot be evaluated. Figure <ref> shows the portfolio paths under transaction costs varying between 0% and 0.5%. Each of the subplots compares the performance of our portfolio with the benchmark portfolios under a different level of transaction costs. Similarly to Figure <ref>, to better compare the performance, we restart the price paths every three years. The subfigures show that the performance deteriorates in particular for the linear regression portfolio; for all other portfolios, too, the performance deteriorates with increasing transaction costs. Overall, one can see that our strategy outperforms the benchmark portfolios over most time segments and trading cost levels. In particular, looking at the performance across the full time period (see, e.g., Figure <ref>), one can see that our portfolio out-performs all benchmark portfolios for all transaction costs considered.

§ CONCLUSION

In this article we investigate whether using non-linear estimators, namely randomized signatures, for return predictions in Markowitz portfolios can outperform a selection of benchmark portfolios. We analyze the performance of our portfolio against a portfolio obtained using linear predictions, one using mean returns, and finally a portfolio with equal weights at each time step. We compare the annualized performance and Sharpe ratios on both a simulated data set and a real-world data set. Overall, our empirical findings show that randomized signatures with carefully chosen hyper-parameters can successfully be used as non-linear, non-parametric drift estimators in order to optimize portfolio selection for given metrics. Furthermore, our methodology is robust, as it does not rely on any model assumptions about the market. Our experiments on simulated data confirm the intuition that both linear and non-linear predictions of the mean returns yield better results when the signal-to-noise ratio improves. They also suggest that there is a lower bound on the signal-to-noise ratio below which all predictions break down and the 1/n portfolio becomes the best-performing portfolio. In our experiments, we show that for signal-to-noise ratios above this lower bound, the portfolio using our non-linear prediction method yields the best results.
For the real-world data set, our results show a significant over-performance of our portfolio compared to each benchmark portfolio, with the 1/n portfolio being the strongest benchmark (see <cit.>). Even though the results vary significantly over the hyper-parameter space used for our experiments, we show that our portfolio over-performs in both annualized return and annualized Sharpe ratio even in the worst hyper-parameter configuration. Last but not least, we consider the impact of transaction costs on our strategy. As the original optimization does not take transaction costs into account, it is not surprising that all portfolios show very poor performance when a sufficiently high proportional transaction cost is applied. However, we have shown that adjusting our trading strategy with a higher signal threshold recovers the over-performance described in the previous paragraph. The relationship between the hyper-parameters used for the randomized signatures and the quality of the predictions does not follow a clear pattern and, to the best of our knowledge, is not well understood. For future research, it would be interesting to investigate whether a theoretical link can be established between features of the input space and successful hyper-parameter configurations. Also, in all of our experiments we have used a single hyper-parameter configuration throughout the whole timeline; it would be interesting to see in future research whether our results can be improved by varying the hyper-parameter configuration along the timeline, potentially using some back-testing strategy.

§ APPENDICES

§ NOTATION

§ LIST OF STOCKS FROM S&P 500 (TICKERS)

PWR UN, ITW UN, SEE UN, IEX UN, ESS UN, BIIB UW, LUV UN, DD UN, PENN UW, ABMD UW, LUMN UN, BSX UN, DLTR UW, MTD UN, ZBRA UW, CB UN, XRAY UW, TJX UN, AAPL UW, BBY UN, PSA UN, CL UN, REGN UW, NEE UN, DRI UN, PNC UN, BEN UN, MMC UN, DHR UN, TECH UW, DIS UN, ROK UN, L UN, CHRW UW, IPG UN, TSN UN, EFX UN, PCAR UW, EA UW, UNP UN, BKNG UW, TFX UN, WHR UN, NLOK UW, CMA UN, K UN, WST UN, AON UN, VRTX UW, CVS UN

§ NEW RESULTS

In the following tables, we show the percentage over-performance (OP) of our model compared to the linear regression benchmark (Table <ref>), the momentum benchmark (Table <ref>) and the 1/n benchmark (Table <ref>). On the left-hand side we report the over-performance with respect to the annualized returns, and on the right-hand side with respect to the Sharpe ratio.

§ OLD RESULTS

§ ADDITIONAL FIGURES
{ "authors": [ "Erdinc Akyildirim", "Matteo Gambara", "Josef Teichmann", "Syang Zhou" ], "categories": [ "q-fin.PM", "cs.AI", "cs.LG", "q-fin.PR" ], "primary_category": "q-fin.PM", "published": "20231227072700", "title": "Randomized Signature Methods in Optimal Portfolio Selection" }
A Polarization and Radiomics Feature Fusion Network for the Classification of Hepatocellular Carcinoma and Intrahepatic Cholangiocarcinoma

Jia Dong*, Yao Yao*, Liyan Lin, Yang Dong, Jiachen Wan, Ran Peng, Chao Li and Hui Ma

Both authors contributed equally: Jia Dong; Yao Yao. Corresponding authors: Chao Li (lichao3501@163.com); Hui Ma (mahui@tsinghua.edu.cn). Jia Dong is currently with the Department of Statistical Science at University College London (jia.dong.23@ucl.ac.uk). Jia Dong, Yang Dong, Jiachen Wan, and Hui Ma are with the Guangdong Engineering Center of Polarization Imaging and Sensing Technology, Shenzhen Key Lab for Minimally Invasive Medical Technologies, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China. Yao Yao, Yang Dong, Jiachen Wan, and Hui Ma are with the Tsinghua-Berkeley Shenzhen Institute, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China. Liyan Lin, Ran Peng, and Chao Li are with the Department of Pathology, Fujian Medical University Cancer Hospital, Fujian Cancer Hospital, Fuzhou, 350014, China. This work was supported in part by the National Natural Science Foundation of China (NSFC) (Grant Nos. 61527826 and 11974206) and the Shenzhen Bureau of Science and Innovation (Grant No. JCYJ20170412170814624).

January 14, 2024

Classifying hepatocellular carcinoma (HCC) and intrahepatic cholangiocarcinoma (ICC) is a critical step in treatment selection and prognosis evaluation for patients with liver diseases. Traditional histopathological diagnosis poses challenges in this context. In this study, we introduce a novel polarization and radiomics feature fusion network, which combines polarization features obtained from Mueller matrix images of liver pathological samples with radiomics features derived from the corresponding pathological images to classify HCC and ICC. Our fusion network integrates a two-tier fusion approach, comprising early feature-level fusion and late classification-level fusion. By harnessing the strengths of polarization imaging techniques and image-feature-based machine learning, the proposed fusion network significantly enhances classification accuracy.
Notably, even at reduced imaging resolutions, the fusion network maintains robust performance thanks to the additional information provided by polarization features, which need not align with human visual perception. Our experimental results underscore the potential of this fusion network as a powerful tool for computer-aided diagnosis of HCC and ICC, showcasing the benefits and prospects of integrating polarization imaging techniques into the current image-intensive digital pathological diagnosis. This approach offers fresh insights and valuable tools in the fields of medical imaging and cancer diagnosis. By introducing polarization imaging into liver cancer classification, we demonstrate its interdisciplinary potential in addressing challenges in medical image analysis, promising advancements in medical imaging and cancer diagnosis.
Hepatocellular carcinoma, intrahepatic cholangiocarcinoma, pathological aided diagnosis, polarization and radiomics feature fusion network, polarization imaging. § INTRODUCTION Liver cancer is one of the most common malignant tumors worldwide, with approximately 906,000 new cases of liver cancer and 830,000 liver cancer-related deaths globally each year. Hepatocellular carcinoma (HCC) accounts for 75-85% of primary liver cancer and intrahepatic cholangiocarcinoma (ICC) for 10-15%. These trends made liver cancer the sixth leading cause of cancer morbidity and the third leading cause of cancer mortality in 2020 <cit.>. Although HCC and ICC have different epidemiologic, etiological, and clinical characteristics, it is difficult to distinguish them completely. ICC is more invasive than HCC, which calls for different treatment plans <cit.>, <cit.>. Therefore, accurate differentiation between HCC and ICC in pathological diagnosis enables clinicians to select appropriate treatments, assists clinical decision-making, and improves patient prognosis and survival. Clinically, the diagnosis of HCC and ICC is conducted by experienced pathologists who observe and evaluate Hematoxylin and Eosin (H&E)-stained sections, combined with immunohistochemistry (IHC)-stained sections, of liver pathological tissues under a high-resolution optical microscope <cit.>. In IHC detection, the specific immunostains Hep Par-1 and Arg-1 are markers for HCC, and CK19 can help to distinguish ICC from other cancers <cit.>. However, for poorly differentiated cancer, the detection ability of immunohistochemistry is limited, since some poorly differentiated HCCs may lose or only focally express hepatocyte-specific markers. Moreover, studies of HCC have demonstrated that substantial CK19 immunostaining is associated with a higher recurrence rate and a higher rate of lymph node metastasis <cit.>. Therefore, it is necessary to explore objective and rapid diagnostic methods that can effectively identify HCC and ICC without IHC. The development of imaging techniques and artificial intelligence methods provides new approaches to the classification of HCC and ICC. Relatively few studies have distinguished the two types of liver cancer using pathological images, whereas related studies have been carried out with computed tomography (CT) <cit.>, ultrasound (US) <cit.>, and magnetic resonance imaging (MRI) <cit.> images. In the clinic, pathology is the gold standard of cancer diagnosis.
Digital pathology has made great strides in recent years, including the digitization of pathological sections <cit.> and the feature extraction and analysis methods based on the digitized images <cit.>. The former can generate high-resolution pathological images by using whole-slide scanning equipment, while the latter takes Artificial Intelligence (AI) as the main image analysis method to automate the pathological diagnosis process, reduce the workload of pathologists, and improve the objectivity and accuracy of pathological diagnosis <cit.>. Currently, AI-based image analysis technologies for computer-aided diagnosis are becoming the core of digital pathology <cit.>. However, challenges remain in promoting wide application of AI approaches in digital pathology. For example, extracting effective pathological information requires large amounts of high-resolution color images of stained pathological tissue sections, whereas the number of pathological sections available from hospital patients is limited. Therefore, to obtain more dimensions of information from pathological sections and expand the variety of input data, Machine Learning (ML) models combined with optical imaging techniques have been proposed to enhance the extraction of structural information and assist pathological diagnosis <cit.>. For example, Dong et al. <cit.> proposed a dual-modality machine learning framework for the cervical intraepithelial neoplasia grading task, which identifies the macro-structure and segments the target region in pathological images with a deep learning-based image analysis method, and then extracts the micro-structural information of the target region with the emerging polarization imaging technique. Polarization microscopy imaging obtains specific features and effective information of pathological sections through the change of polarization state during the scattering propagation of polarized light. In particular, it is more sensitive to scattering by sub-wavelength-scale microstructures <cit.>. The Mueller matrix, as a comprehensive description of the polarization-related properties of a sample, contains abundant information on its microstructural and optical characteristics <cit.>. To interpret the information encoded in the Mueller matrix, sets of polarization parameters with physical meanings have been derived from the Mueller matrix by several methods, such as Mueller matrix polar decomposition (MMPD) and Mueller matrix transform (MMT) <cit.>. These polarization parameters have shown good potential in assisting the diagnosis of various pathological tissues, such as liver fibrosis <cit.>, breast cancer <cit.>, skin cancer <cit.>, and colon cancer <cit.>. However, the existing polarization parameters have limited ability to recognize more specific and finer pathological microstructures. Based on machine learning methods, Dong et al. <cit.> proposed a linear discriminant analysis (LDA) based method to derive polarimetry feature parameters, composed of the existing polarization parameters, for quantitative characterization of specific microstructures in various breast pathological tissues. Furthermore, Liu et al. <cit.> investigated the degree of correlation between the polarization parameters of breast pathological tissue samples and the texture features of the corresponding pathological images. However, these two kinds of information are not only correlated, but also complementary.
Specifically, it has been demonstrated that the contrast mechanism of polarization images depends on the polarization characteristics of the sample and less on the imaging resolution, enabling polarization imaging to reveal what is invisible to the human eye. Therefore, in the emerging polarization imaging, each pixel can be used as an independent sample with multi-dimensional polarization features, which contains high-resolution pathological features encoded in the low-resolution imaging and reveals abundant sub-wavelength-scale micro-structural information at the pixel level. On the other hand, in traditional microscopic imaging, high-resolution color pathological images carry rich visual information that is consistent with pathologists’ observation. The image feature parameters derived from pathological images quantify the relationship between pixels and their spatial distribution, and may provide macro-structural information within the target lesion area for histopathology at the image level. Image feature parameters can be automatically extracted by various data-characterization algorithms, one of which is radiomics <cit.>. Radiomics is an emerging field that can fully mine the hidden information in medical images and output a large number of image feature parameters for clinical applications such as tumor typing, treatment selection, efficacy assessment, and prognostic evaluation <cit.>. It is also feasible to apply radiomics to pathological images, converting a pathological image into a high-dimensional feature space that includes texture features, statistical features, and shape features. Overall, polarization features at the low-resolution pixel level and image features of the corresponding color pathological images at the high-resolution image level are complementary. Combining features of the two different modalities increases the dimension of information and makes the description of tissue characteristics more comprehensive. In this paper, we present a dual-modality ML framework designed to classify HCC and ICC within H&E-stained pathological sections of liver tissues. This framework combines polarization features derived from Mueller matrix images with radiomics features extracted from corresponding H&E pathological images. Differing from prior work where the discriminative feature was solely the polarization feature, the proposed fusion network encompasses two levels of fusion, conducting early fusion at the feature level and late fusion at the classification level, using both polarization features and image features as discriminative characteristics. Experimental results confirm that our proposed polarization and radiomics feature fusion network (PRFFN) outperforms single-modality machine learning classifiers in the classification of HCC and ICC. Moreover, as imaging resolution decreases, the accuracy of the PRFFN is more robust than that of radiomics feature-based classifiers. This study underscores the synergy between the polarization properties and image features of pathological samples and highlights the benefits of incorporating polarization imaging technology into contemporary image-rich digital pathology.
The approach promises to offer multidimensional information and a comprehensive description of pathological samples, ultimately enhancing the accuracy and objectivity of computer-aided pathological diagnosis. In summary, this work represents a significant advancement in the field by introducing dual-modality classification, fusing polarization features with image features to distinguish HCC and ICC in liver tissues, thus contributing to a more accurate and effective diagnostic tool for liver diseases. § METHODS §.§ Liver Cancer Pathological Samples The 5-μm-thick H&E-stained pathological slides of liver tissues used in this study were obtained from the Fujian Medical University Cancer Hospital. The pathological samples consisted of a total of 28 slides from 28 patients, including 14 cases of HCC and 14 cases of ICC, which were resected without any preoperative treatment and confirmed by postoperative pathological examination. The specimens were processed conventionally, and for each case a wax block was selected from tumor tissue without necrosis. According to the pathological sampling specification, the size of each block is about (1.5–2) cm × 1 cm × 0.2 cm, and each block was cut into slices about 5 μm thick for regular H&E staining. Several pathologists discussed and selected the regions-of-interest (ROIs) containing a large number of significant HCC cell structures, ICC cell structures, and non-cancerous structures in each sample, and then used MATLAB Graphical User Interfaces (GUIs) to manually label the three structures in the selected ROIs. The GUI calls the imfreehand function to enable the expert pathologist to label the target microstructures on the H&E image of the ROI and generates a binary image as the ground truth for the training and testing of the model. We classified the three structures by collecting and analyzing data from 148 ROIs (53 for HCC cells, 53 for ICC cells, 42 for non-cancerous structures) selected from the 28 pathological samples. This work was approved by the Ethics Committee of Fujian Medical University Cancer Hospital. §.§ Data Acquisition §.§.§ Experimental Setup Using the Mueller matrix microscope on H&E-stained sections, we obtained each sample’s Mueller matrix image and H&E pathological image <cit.>. The dual division of focal plane (DoFP) polarimeters-based Mueller matrix microscope (DoFPs MMM) is established by adding two compact modules, i.e., a polarization state generator and a polarization state analyzer (PSG and PSA), into a commercial transmission microscope (L2050, Guangzhou LISS Optical Instrument Co., Ltd., China), as shown in Fig. 1(a). Collimated light from the LED (633 nm, Δλ = 20 nm) is modulated by the PSG, which consists of a fixed-angle linear polarizer (P1) and a rotatable quarter-wave plate (R1), and then transmits through the tissue sample. Passing through the objective lens, the scattered light is detected by the PSA, which consists of two 16-bit DoFP polarimeters (PHX050S-PC, Lucid Vision Labs Inc., Canada; DoFP-CCD1 and DoFP-CCD2), a 50:50 non-polarized beam splitter prism, and a fixed-angle phase retarder (R2). During a measurement, R1 rotates to four preset angles so that the PSG generates four independent polarization states S_in. When R1 arrives at each angle, the two DoFP-CCDs conduct data acquisition simultaneously, so that a total of four acquisitions are carried out. The instrument matrix A_PSA of the PSA is pre-calibrated by measuring the Mueller matrices of standard samples, such as air and a retarder, and is calculated pixel by pixel before being applied to pathological samples. After calibration, the maximum error of the DoFPs MMM is about 1%. The polarization state S_out of the outgoing light can be obtained according to:
S_out = A_PSA^-1 I,
where I represents the intensities of the polarization component images recorded by DoFP-CCD1 and DoFP-CCD2. Therefore, the Mueller matrix of the sample can be reconstructed by the equation:
M_sample = [ S_out ][ S_in ]^-1.
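For concreteness, the per-pixel reconstruction implied by the two equations above can be sketched in a few lines of Python (a minimal NumPy sketch under our own assumptions about array shapes; in particular, A_PSA is treated as constant over pixels here, whereas the actual instrument calibrates it pixel by pixel):

import numpy as np

def reconstruct_mueller(I, A_psa, S_in):
    # I: (H, W, K, 4) stack of K analyzer readings for each of the 4 input states
    # A_psa: (K, 4) calibrated instrument matrix of the PSA
    # S_in: (4, 4) matrix whose columns are the generated input Stokes vectors
    A_pinv = np.linalg.pinv(A_psa)        # S_out = A_PSA^{-1} I (pseudo-inverse if K > 4)
    S_in_inv = np.linalg.inv(S_in)
    H, W = I.shape[:2]
    M = np.empty((H, W, 4, 4))
    for y in range(H):
        for x in range(W):
            S_out = A_pinv @ I[y, x]      # columns are the output Stokes vectors
            M[y, x] = S_out @ S_in_inv    # M_sample = [S_out][S_in]^{-1}
    return M / M[..., 0:1, 0:1]           # normalize by the element m11, as in Fig. 1(b)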
Fig. 1(b) presents the Mueller matrix measurement of a liver cancer pathological sample under a 4 × objective lens. The Mueller matrix elements are normalized by the element m11, which is the intensity image. In addition, we can obtain the corresponding H&E pathological image of the sample from a color CCD under a 20 × objective lens, as shown in Fig. 1(c). §.§.§ Polarimetry Basis Parameters The Mueller matrix encodes the complete polarization properties of samples. Due to the lack of explicit connections between individual Mueller matrix elements and the microstructural characteristics of samples, polarimetry basis parameters (PBPs) with explicit physical meanings were decoded from the Mueller matrix to characterize the microstructure. The MMPD method proposed by Lu and Chipman <cit.> derives linear retardation δ, depolarization Δ, optical rotation ψ, and diattenuation D. In our previous studies, we proposed the MMT parameters <cit.>, including normalized anisotropy A, polarizance b, circular birefringence, and anisotropy degree t_1; the Mueller matrix rotation invariant parameters <cit.>, including linear diattenuation D_L, linear polarizance P_L, circular diattenuation D_C, circular polarizance P_C, the linear birefringence related parameters r_L and q_L, and k_C, which has different physical meanings in pure depolarization and linear retarder systems; the Mueller matrix linear birefringence identity parameters (P_1, P_2, P_3, and P_4), based on the Mueller matrix of linear retardance; and the Mueller matrix linear diattenuation identity parameters (P_5, P_6, P_7, and P_8), based on the Mueller matrix of linear diattenuation <cit.>. In this study, we used PBPs composed of the above parameters as the input polarization features of the classifiers for the classification of HCC cell structures, ICC cell structures, and non-cancerous structures. §.§.§ Radiomics Features Radiomics is a comprehensive method of medical image analysis that improves diagnostic, prognostic, and predictive accuracy. A total of 93 radiomic features <cit.> were used in this study. As with the polarization features, the radiomics features extracted from H&E images of the liver tissues are used as input features of the classifiers. These radiomic features of H&E images quantify the characteristics of the target microstructures and are subdivided into two classes: intensity and texture. Intensity-based features are obtained by estimating the first-order statistics of the intensity histogram, and describe the distribution of pixel intensities within the ROI image through commonly used metrics, including maximum, minimum, mean, standard deviation, variance, etc. Texture-based features are derived from the gray level co-occurrence matrix, which describes the second-order joint probability function of an ROI image; the gray level run length matrix, which quantifies gray level runs in an ROI image; the neighboring gray tone difference matrix, which quantifies the difference between a gray value and the average gray value of its neighbors; the gray level size zone matrix, which quantifies gray level zones in an ROI image; and the gray level dependence matrix, which quantifies gray level dependencies in an ROI image. The texture-based features describe the spatial distribution of grayscale values and quantify heterogeneity.
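To illustrate the texture branch, the following is a minimal scikit-image sketch (using the graycomatrix/graycoprops names of scikit-image 0.19+) of a few gray level co-occurrence matrix descriptors for a grayscale H&E block; it is only a stand-in for the full 93-feature radiomics pipeline used in the study, and the distance and angle settings are our own choices:

import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(block_gray):
    # block_gray: 2-D uint8 image, e.g., a 100 x 100 block around a target pixel
    glcm = graycomatrix(block_gray, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    # average each second-order descriptor over the four angles
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}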
§.§ Overview Fig. 2 outlines the steps from the input of the Mueller matrix image and H&E image of the liver tissues to the output of the classification results of HCC cell structures, ICC cell structures, and non-cancerous structures. Firstly, we obtained the liver tissue samples’ Mueller matrix images under a 4 × objective and the corresponding H&E images under a 20 × objective. The ROI that maximizes the inclusion of the three target microstructures in each sample is selected by pathologists. The affine transformation method <cit.> was adopted for pixel-by-pixel image registration between the Mueller matrix and H&E images. The mask labelled by the pathologist on the H&E images is mapped to the corresponding polarization images at the pixel level. After registration, we calculated the polarization parameters for each target pixel and the radiomics features from 100 × 100 H&E image blocks around the target pixel. The polarization features and radiomics features of each target pixel were treated as the two-modality input data of the PRFFN for classification. We also investigated the performance of three ML classifiers—a polarization features-based ML classifier, a radiomics features-based ML classifier, and the PRFFN combining the two features—as the imaging resolution decreases. §.§ Data Preprocessing In this study, the input data of the PRFFN for classification were the target pixels with different polarization and radiomics features. In order to map the mask labelled by the pathologist on the H&E images to the corresponding polarization images pixel by pixel and extract the polarization and radiomics features of the same pixel, the affine transformation method was adopted for the image registration between the two images. The element m11 in the Mueller matrix represents the intensity image of the sample. As shown in Fig. 2, the H&E image of the sample, as the moving image, was transformed to match the corresponding m11 image, as the fixed image, for pixel-level registration. In MATLAB, we conducted the registration by selecting control points common to both images and inferring the affine transformation matrix T that aligns the control points. After selecting control points interactively by calling the cpselect function, the transformation matrix T that best aligns the moving and fixed points was produced by calling the fitgeotrans function with the transformation type set to “affine”. By applying the transformation matrix T to the H&E images and the pathologists’ masks with the imwarp function, we mapped the masks onto the PBP images to select target pixels as polarization features and calculated the radiomics features from the registered H&E images, as input data for the fusion network.
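A Python analogue of this MATLAB control-point workflow could look as follows (a hedged sketch using scikit-image; the (x, y) point convention and the nearest-neighbor mask resampling are our own choices, and the control points are assumed to have been picked manually, as with cpselect):

import numpy as np
from skimage.transform import estimate_transform, warp

def register_to_m11(he_image, mask, src_pts, dst_pts, out_shape):
    # src_pts: (N, 2) control points on the moving H&E image, as (x, y)
    # dst_pts: (N, 2) matching points on the fixed m11 intensity image
    tform = estimate_transform("affine", src_pts, dst_pts)   # moving -> fixed
    he_reg = warp(he_image, inverse_map=tform.inverse, output_shape=out_shape)
    mask_reg = warp(mask.astype(float), inverse_map=tform.inverse,
                    output_shape=out_shape, order=0)          # keep labels crisp
    return he_reg, mask_reg.astype(bool)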
§.§ Network Algorithm Architecture As depicted in Fig. 3, the proposed fusion network comprises two levels of fusion: early feature-level fusion and late classification-level fusion. In the early feature-level fusion, we employ a weighted summation approach to combine features from both modalities, integrating polarization features and image features by learning the appropriate weights. In the late classification-level fusion, we apply a similar operation to the fused features from different layers and depths, as in the feature-level fusion. Additionally, we use a classifier to determine which layer's predictions are more crucial in order to perform the final fusion of predictions from different layers. In both levels of fusion, we utilize a simple structure involving a linear layer followed by a softmax operation. This approach not only simplifies the structure but also provides a degree of interpretability, allowing the learned weights to indicate the relative importance of different feature categories. In this context, assuming the presence of two types of features, with polarization features denoted as x_P and image features as x_R, the classification results using only polarization features and image features are represented as y_P = f(x_P) and y_R = f(x_R), respectively. The classifier f employed in this context is a multi-layer perceptron (MLP). The expression for each layer in the latent space can be defined as follows:
y_Pi = g(W_Pi x_Pi + b_Pi),  i = 1, …, k,
y_Ri = g(W_Ri x_Ri + b_Ri),  i = 1, …, k,
where x_Pi, y_Pi, W_Pi, and b_Pi are the input, output, weight, and bias of the i-th layer in the MLP model using only polarization features. Similarly, x_Ri, y_Ri, W_Ri, and b_Ri represent the input, output, weight, and bias of the i-th layer in the MLP model utilizing only image features. The activation function g(h) employed here is the Rectified Linear Unit (ReLU), defined as g(h) = max(0, h). Subsequently, early fusion is performed on the features from each layer in the latent space:
y_i = a_Pi y_Pi + a_Ri y_Ri.
The weights at the feature level {a_Pi, a_Ri} are computed by the attention layer as follows:
{a_Pi, a_Ri} = softmax( Linear( {y_Pi, y_Ri} ) ).
The classification results obtained by using the fused features as input are represented as X_i = f(y_i). Finally, late fusion at the classification level is performed on the classification results from each layer in the latent space:
Y = ∑_i^k b_i X_i.
The weights at the classification level {b_1, …, b_k} are computed by the attention layer as follows:
{b_1, …, b_k} = softmax( Linear( {X_1, …, X_k} ) ).
The dataset for this study comprises 53 regions of measurement from HCC cells, 53 regions of measurement from ICC cells, and 42 regions of measurement from non-cancerous areas, all annotated by pathologists. From each of these collections, 50,000 pixels were randomly sampled for use as input data in training multiple MLP classifiers. Each pixel is characterized by a 23-dimensional polarization feature and a 93-dimensional image feature. Subsequently, we performed mean and variance normalization on each feature. The model parameters of the PRFFN were fine-tuned through grid search-based parameter optimization via cross-validation to maximize the classification accuracy of HCC cell structures, ICC cell structures, and non-cancerous structures. We implemented all classifiers using the open-source library Scikit-learn in Python version 3.8. For the MLP model using only polarization features, the hidden layer sizes were set as (512, 256, 128), with a learning rate of 0.01. Similarly, for the MLP model using only image features, the hidden layer sizes were (512, 256, 128), with a learning rate of 0.01.
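The two-level fusion defined by the equations above can be rendered compactly in code. The following PyTorch sketch is our own simplified re-expression (the paper implements its classifiers with Scikit-learn), with the layer sizes borrowed from the stated (512, 256, 128) configuration:

import torch
import torch.nn as nn

class PRFFNSketch(nn.Module):
    def __init__(self, d_pol=23, d_img=93, dims=(512, 256, 128), n_cls=3):
        super().__init__()
        self.pol, self.img = nn.ModuleList(), nn.ModuleList()
        self.attn, self.heads = nn.ModuleList(), nn.ModuleList()
        in_p, in_i = d_pol, d_img
        for d in dims:
            self.pol.append(nn.Sequential(nn.Linear(in_p, d), nn.ReLU()))
            self.img.append(nn.Sequential(nn.Linear(in_i, d), nn.ReLU()))
            self.attn.append(nn.Linear(2 * d, 2))     # feature-level weights {a_P, a_R}
            self.heads.append(nn.Linear(d, n_cls))    # X_i = f(y_i)
            in_p = in_i = d
        self.layer_attn = nn.Linear(len(dims) * n_cls, len(dims))  # weights {b_i}

    def forward(self, x_pol, x_img):
        preds = []
        for lp, li, at, head in zip(self.pol, self.img, self.attn, self.heads):
            x_pol, x_img = lp(x_pol), li(x_img)
            w = torch.softmax(at(torch.cat([x_pol, x_img], dim=-1)), dim=-1)
            y = w[..., :1] * x_pol + w[..., 1:] * x_img   # early feature-level fusion
            preds.append(head(y))
        X = torch.stack(preds, dim=1)                      # (batch, k, n_cls)
        b = torch.softmax(self.layer_attn(X.flatten(1)), dim=-1)
        return (b.unsqueeze(-1) * X).sum(dim=1)            # late classification-level fusion

logits = PRFFNSketch()(torch.randn(8, 23), torch.randn(8, 93))  # toy usage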
§ RESULTS AND DISCUSSION §.§ Comparison with Other Methods Employing different fusion strategies for integrating features from different modalities can significantly impact the classification performance of the model. To validate the classification performance of the proposed fusion strategies and determine whether the fusion of multimodal features has the expected effects, we compared the recognition results of our designed model with single-modal classification models and other fusion strategies. For this comparative analysis, we employed a leave-one-patient-out cross-validation approach, where the PRFFN model and other classifiers using the same input data and labels were compared through 14-fold cross-validation. As shown in Figure 3.6, in the comparison of single- and dual-modal information, we compared our proposed method with the following two approaches: (1) a classifier that uses only polarization features from liver pathological tissue sections as input for HCC and ICC recognition and (2) a classifier that uses only H&E image features from liver pathological tissue sections as input for HCC and ICC recognition. These two methods serve as single-modal feature classifiers for comparison. In the comparison of classifiers with different fusion strategies, we compared our proposed method with the following two approaches: (1) direct early fusion of polarization and image features, combining both features to form a new feature without further feature extraction, and using the new fused feature as input for HCC and ICC recognition and (2) late fusion of the prediction results of a classifier using only polarization features as input and a classifier using only image features as input, where the results from the two modalities are combined to produce the final prediction. These two methods serve as other fusion classifiers for comparison. In the same manner, we considered four performance evaluation metrics: accuracy, precision, recall, and F1-score. These four metrics were employed as evaluation indicators for analyzing the classification results. During the cross-validation process, a random sample from HCC, a random sample from ICC, and a random sample from non-cancerous structures were grouped together. This process created a total of fourteen sets for classifying these three types of pathological tissue structures. In each iteration of cross-validation, one of these groups served as the test dataset, while the remaining groups were used for training. This cross-validation process was repeated 14 times. During each iteration of cross-validation, we input the true values of the test data and the features of the target microstructures into the trained model. This allowed us to predict which category each pixel in the test sample belongs to. Subsequently, we computed accuracy, precision, recall, and F1-score. Following the cross-validation process, we calculated the average values of these four metrics as the quantitative evaluation indicators.
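This protocol maps directly onto scikit-learn's LeaveOneGroupOut; a minimal sketch, assuming a pixel-level feature matrix X, pixel labels y, and a group label (0-13) recording which sample triplet each pixel comes from (model_factory is a hypothetical helper returning a fresh classifier):

import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

def cross_validate(model_factory, X, y, groups):
    scores = []
    for tr, te in LeaveOneGroupOut().split(X, y, groups):
        model = model_factory()
        model.fit(X[tr], y[tr])
        pred = model.predict(X[te])
        scores.append([accuracy_score(y[te], pred),
                       precision_score(y[te], pred, average="macro"),
                       recall_score(y[te], pred, average="macro"),
                       f1_score(y[te], pred, average="macro")])
    return np.mean(scores, axis=0)   # metrics averaged over the 14 folds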
Table I summarizes the classification performance results of each network mentioned, after 14-fold cross-validation, for complex liver pathological tissue sections, including HCC cell structures, ICC cell structures, and non-cancerous structures. From Table I, the following conclusions can be drawn: (1) The classification performance of the dual-modal PRFFN is superior to that of the single-modal feature classifiers. This indicates that the two types of features complement each other in distinguishing HCC cell structures, ICC cell structures, and non-cancerous structures, making the structural information of the pathological tissues more complete. In high-resolution H&E-based pathological diagnosis, it is challenging for pathologists to differentiate HCC and ICC, resulting in lower accuracy for classifiers that use only H&E image features from liver pathological tissue sections as input. On the other hand, classifiers using only polarization features from liver pathological tissue sections as input achieve higher accuracy, suggesting that the sub-wavelength microstructural polarization features, which are not visible to the human eye, play a significant role in HCC and ICC classification. (2) The proposed PRFFN outperforms classifiers that directly perform early fusion of polarization and image features, as well as classifiers that perform late fusion of the prediction results from classifiers using only polarization features and classifiers using only image features. This demonstrates the superiority of the fusion approach proposed here, which combines the advantages of both early and late fusion. This fusion involves two types of fusion: feature fusion between the two modalities and fusion of prediction results from different layers and depths. It is the deep fusion involving both of these aspects that leads to higher prediction accuracy. (3) By comparing the evaluation metrics of each network, it is evident that the proposed PRFFN significantly outperforms all the other compared networks for the classification of HCC cell structures, ICC cell structures, and non-cancerous structures. It achieves an accuracy of 87.67%, a precision of 88.39%, a recall of 87.65%, and an F1-score of 86.26%. Using an effective fusion method that combines polarization features and H&E image information, the classification performance for HCC and ICC is substantially improved. This indicates that the PRFFN could be a powerful tool for automatic identification of the two types of cancer cells under a high-resolution microscope, eliminating the need for immunohistochemical staining and manual observation by pathologists, which could alleviate the burden on medical professionals and reduce the diagnostic complexity to some extent. §.§ Quantitative Characterization Results The ROIs selected and labelled by experienced pathologists from the test samples consist of HCC cells and non-cancerous structures (as shown in Fig. 5(a)) or ICC cells and non-cancerous structures (as shown in Fig. 5(c)). The Mueller matrix images and H&E images of the selected ROIs were obtained with the DoFPs MMM, from which the polarization features and image features were calculated as input to the proposed PRFFN. Fig. 5 summarizes the output of our network on the test ROIs, presented as pseudo-color images: the brown, red, and white pixels represent HCC cell structures, ICC cell structures, and non-cancerous structures, respectively. The PRFFN determines which class each pixel in the ROIs belongs to. In Fig. 5(a) and (c), the corresponding cancer cell regions were labelled by pathologists with black solid lines as the ground truth of the classification. The identification results of the network in the corresponding ROIs are shown in Fig. 5(b) and (d), from which we can observe that: (1) In the ROIs with HCC cell structures and non-cancerous structures, the brown pixels predicted by the network indicate the positions of HCC cells.
Meanwhile, there are almost no red pixels representing ICC. (2) In the ROIs composed of ICC cell structures and non-cancerous structures, the 2D output images of the network have a large number of red pixels at the cell positions, indicating that there are few HCC cells. §.§ Validation of the Stability of PRFFN with Decreasing Image Resolution By sliding average filters with gradually increasing window sizes over the PBP images and H&E images, the image resolution is gradually reduced. We evaluated the accuracy of classifiers that use only polarization features from liver pathological tissue sections as input for HCC and ICC recognition, classifiers that use only H&E image features, and the PRFFN in multi-resolution cases for 12 patients. As shown in Fig. 6, for the classification of HCC cell structures, ICC cell structures, and non-cancerous structures, the accuracy of the classifier that uses only H&E image features decreases significantly from 77.3% to 66.9% with decreasing resolution of the H&E images. At the same time, the accuracy of the classifier that uses only polarization features remains stable with decreasing resolution of the PBP images. This means that classifiers using only polarization features take full advantage of polarization imaging, whose imaging mechanism depends on each pixel’s polarization characteristics and less on the imaging resolution, and which can provide effective information that may not be visible to human eyes. Due to the addition of the polarization information, which performs well and steadily in classifying the target microstructures in multi-resolution cases, the accuracy of the PRFFN was only reduced from 89.3% to 87.3%, an obviously smaller drop than that of the classifier using only H&E image features. This proves that the proposed PRFFN has a relatively stable and satisfactory performance in the classification of HCC and ICC under low image resolution, which paves the way for automated and rapid screening of different liver cancer cells in a low-resolution and wide-field system.
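The resolution-degradation step itself is simple to emulate; a minimal SciPy sketch (the window sizes and the boundary mode are our own illustrative choices, not specified in the text, and extract_features stands in for the feature pipeline above):

import numpy as np
from scipy.ndimage import uniform_filter

def degrade(img, window):
    # average filter over a single-channel PBP image or a grayscale H&E channel;
    # for an RGB image, use size=(window, window, 1)
    return uniform_filter(img.astype(float), size=window, mode="nearest")

# e.g., re-extract features and re-evaluate the classifiers for growing windows:
# for w in (1, 3, 5, 9, 17):
#     features = extract_features(degrade(image, w))   # hypothetical helper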
§ CONCLUSION In this paper, we proposed a dual-modality PRFFN to fuse polarization features derived from low-resolution Mueller matrix images and radiomics features derived from high-resolution H&E images, and demonstrated the application potential of Mueller matrix microscopy in the classification of HCC and ICC. The Mueller matrix image is a complete description of the sample's polarization characteristics, which contains rich information on the microstructure and optical properties of the sample. Radiomics features enable quantifying the relationship between pixels and the spatial distribution of structures on H&E images. The technique takes advantage of the sub-wavelength microstructural information reflected by polarization features at each pixel and the spatial structure information decoded by radiomics features. We input polarization features and radiomics features to the PRFFN and output the classification of HCC cell structures, ICC cell structures, and non-cancerous structures at each pixel. In the designed fusion model, two levels of fusion are incorporated: initially, an early fusion of features from different modalities, followed by a fusion of the prediction results from different layers. The experimental results show that the classification performance of our proposed network is superior to that of single-modality feature classifiers and dual-modality fusion networks with other fusion strategies. In particular, when the resolution of the H&E images is reduced, the classification performance of the PRFFN remains stable and satisfactory due to the addition of polarization features. This technique provides a potential tool for computer-aided diagnosis of HCC and ICC on H&E pathological samples, paves the way for automated and rapid screening of different liver cancer cells under a low-resolution and wide-field system, and demonstrates the necessity and advantages of integrating polarization imaging methods into current image-based digital pathological diagnosis.
http://arxiv.org/abs/2312.16607v1
{ "authors": [ "Jia Dong", "Yao Yao", "Liyan Lin", "Yang Dong", "Jiachen Wan", "Ran Peng", "Chao Li", "Hui Ma" ], "categories": [ "eess.IV", "cs.CV", "stat.ML" ], "primary_category": "eess.IV", "published": "20231227151604", "title": "A Polarization and Radiomics Feature Fusion Network for the Classification of Hepatocellular Carcinoma and Intrahepatic Cholangiocarcinoma" }
Hybrid Precoder Design for Angle-of-Departure Estimation with Limited-Resolution Phase Shifters Huiping Huang, Member, IEEE, Musa Furkan Keskin, Member, IEEE, Henk Wymeersch, Fellow, IEEE, Xuesong Cai, Senior Member, IEEE, Linlong Wu, Member, IEEE, Johan Thunberg, Fredrik Tufvesson, Fellow, IEEE. This paper is supported by the Vinnova B5GPOS Project under Grant 2022-01640. H. Huang, M. F. Keskin, and H. Wymeersch are with the Department of Electrical Engineering, Chalmers University of Technology, 41296 Gothenburg, Sweden (e-mail: {huiping; furkan; henkw}@chalmers.se). X. Cai, J. Thunberg, and F. Tufvesson are with the Department of Electrical and Information Technology, Lund University, 22100 Lund, Sweden (e-mail: {xuesong.cai; johan.thunberg; fredrik.tufvesson}@eit.lth.se). L. Wu is with the Interdisciplinary Centre for Security, Reliability and Trust (SnT), University of Luxembourg, 1855 Luxembourg, Luxembourg (e-mail: linlong.wu@uni.lu). Received ...; accepted...
=======================================================================================
Hybrid analog-digital beamforming stands out as a key enabler for future communication systems with a massive number of antennas. In this paper, we investigate the hybrid precoder design problem for angle-of-departure (AoD) estimation, where we take into account the practical constraint on the limited resolution of phase shifters. Our goal is to design a radio-frequency (RF) precoder and a base-band (BB) precoder to estimate the AoD of the user with high accuracy. To this end, we propose a two-step strategy where we first obtain the fully digital precoder that minimizes the angle error bound, and then the resulting digital precoder is decomposed into an RF precoder and a BB precoder, based on alternating optimization and the alternating direction method of multipliers. Besides, we derive the quantization error upper bound and analyse the convergence behavior of the proposed algorithm. Numerical results demonstrate the superior performance of the proposed method over state-of-the-art baselines.
Hybrid beamforming, hybrid precoder, phase shifter, angle-of-departure estimation, alternating optimization, alternating direction method of multipliers. § INTRODUCTION Millimeter wave (mmWave) and terahertz (THz) bands have been proven to play an important role in future wireless systems, because they can provide ultra-high data rates <cit.>. However, high carrier frequencies result in severe path loss.
Large-scale antenna systems, which are equipped with hundreds or even thousands of antennas, have emerged as a crucial technology for addressing this problem <cit.>. It is not feasible for large-scale antenna systems to employ fully digital beamforming at mmWave/THz, since fully digital beamforming requires as many radio-frequency (RF) chains (including digital-to-analog converters, mixers, etc.) as antennas, leading to prohibitive hardware costs and power consumption <cit.>. On the contrary, hybrid beamforming, where only a small number of RF chains are needed, is a promising solution to this problem <cit.>. The RF chains are connected to the antennas via phase shifters with a finite number of quantized phases <cit.>. Numerous works have been devoted to hybrid beamformer (precoder and/or combiner) design with practical constraints <cit.>. Among them, the following four methods attract much attention. (i) The authors in <cit.> proposed a hybrid beamforming algorithm with 1-bit resolution phase shifters, which is based on the alternating optimization framework and the Babai algorithm <cit.> (termed “Alt-Babai”). (ii) An iterative hybrid transceiver design approach using alternating optimization and the coordinate descent method (CDM) was developed in <cit.> (termed “Alt-CDM”). (iii) <cit.> exploited the spatial structure of mmWave channels and proposed a method for optimal unconstrained precoders and combiners, which employs sparse representation and orthogonal matching pursuit (termed “Spa-OMP”). (iv) Another hybrid precoding method was presented in <cit.>, which is based on manifold optimization <cit.> (termed “ManiOpt”). All the above-mentioned hybrid beamforming design methods are developed from the communications perspective. In contrast, much less work has focused on hybrid beamforming design for the estimation of channel parameters (such as angles, delays, Dopplers, etc.) and for positioning. Although optimal beamforming design for positioning has been investigated in, e.g., <cit.>, these works considered fully digital beamforming rather than hybrid beamforming. Note that the existing hybrid beamformer design methods in <cit.> can be applied to positioning. However, these methods do not guarantee good performance in positioning (since they are proposed for the purpose of communications). Therefore, there is a lack of dedicated hybrid beamforming designs for the purpose of channel parameter estimation and positioning. To fill this research gap, in this paper we delve into the problem of hybrid precoder design for angle-of-departure (AoD) estimation, accounting for the practical limitation of finite-resolution phase shifters. Our objective is to derive a solution comprising an RF precoder and a base-band (BB) precoder that not only adheres to this practical constraint but also facilitates precise estimation of the user's AoD. To achieve this goal, we present a two-step approach. We first find a fully digital precoder that minimizes the angle error bound, which is a theoretical lower bound on the accuracy of AoD estimation. Then, we decompose the resulting digital precoder into an RF precoder and a BB precoder, using the alternating optimization framework and the alternating direction method of multipliers (ADMM). The numerical results show that the proposed method outperforms existing state-of-the-art approaches while incurring lower complexity.
The main contributions of this work are listed as follows: * The problem of hybrid beamformer design under practical constraints has not yet been considered for positioning (specifically, AoD estimation). We consider such a problem and develop an efficient algorithm to obtain the RF precoder and BB precoder. * In the existing literature on hybrid beamforming with limited-resolution phase shifters, e.g., <cit.>, no theoretical results are available regarding the quantization error bound. In this paper, we derive such an error bound. * We provide convergence analyses of the proposed algorithm. Our analyses differ from the related works in <cit.>, since our algorithm involves a quantization operation, which is not the case in the related works. * The convergence analyses presented in this paper go beyond our previous works in <cit.>, as they additionally reveal that the point sequence produced by the proposed algorithm is a Cauchy sequence and converges to a fixed point after a finite number of iterations. The remainder of this paper is organized as follows. The system model is described in Section <ref>. Section <ref> presents the proposed method for hybrid precoder design for AoD estimation. Section <ref> analyzes the quantization error bound and the convergence behavior of the proposed algorithm. Various numerical examples are provided in Section <ref> to demonstrate the effectiveness of the proposed approach, followed by conclusions in Section <ref>. § SYSTEM MODEL We consider a mmWave downlink positioning scenario as in <cit.>, shown in Fig. <ref>, where the base station (BS) consists of a BB precoder, an RF precoder, and a uniform linear array (ULA) of N_Tx antennas, while the user equipment (UE) consists of a single antenna. The RF precoder is implemented by limited-resolution phase shifters. The BS transmits M pilot symbols sequentially with identical power, denoted as s_m, m = 1, 2, ⋯, M. Employing a two-timescale hybrid precoding approach <cit.>, we adopt a transmission model in which the analog RF precoder is optimized at a slower time scale compared to the digital BB precoder. This prevents high hardware costs (attributed to rapid adaptation of the analog precoder) and reduces computational complexity, along with minimizing signaling overhead <cit.>. In particular, each symbol is first precoded by a dedicated BB precoder vector, f_BB, m ∈ ℂ^N_RF, and then precoded by an RF precoder constant for all symbols, F_RF ∈ ℂ^N_Tx × N_RF, where N_RF ≤ N_Tx denotes the number of RF chains. Considering highly directional mmWave transmissions, we assume a line-of-sight (LOS)-only channel[The LOS path is resolvable from the non-line-of-sight paths due to channel sparsity, the large number of antennas, and the large bandwidth in mmWave/THz wireless communication systems <cit.>.]. Thus, corresponding to the transmitted signal s_m, the received signal at the single-antenna UE can be modeled as
y_m = β a^T(θ) F_RF f_BB, m s_m + n_m,   m = 1, 2, ⋯, M,
where β ∈ ℂ is the complex amplitude of the path, θ is the AoD, n_m is the complex additive white Gaussian noise with zero mean and variance σ_n^2, and the steering vector is a(θ) = [1, e^{-j 2π d/λ sinθ}, ⋯, e^{-j 2π d/λ (N_Tx − 1) sinθ}]^T, with j = √(−1), d being the element spacing of the ULA, and λ denoting the transmit signal wavelength. The signal model (<ref>) can be written in vector form as
y = β S ( F_RF F_BB )^T a(θ) + n,
where y = [y_1, y_2, ⋯, y_M]^T ∈ ℂ^M, n = [n_1, n_2, ⋯, n_M]^T ∈ ℂ^M, F_BB = [f_BB, 1, f_BB, 2, ⋯, f_BB, M] ∈ ℂ^N_RF × M, and S = diag{[s_1, s_2, ⋯, s_M]}, with diag{·} representing the diagonal matrix operator. In addition, n ∼ 𝒞𝒩(0, σ_n^2 I) with known noise power σ_n^2. Our goal is to design an RF precoder and a BB precoder such that the accuracy of the AoD estimate is maximized, under the BS transmit power constraint and the hardware constraint on the limited resolution of the phase shifters.
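For illustration, the received pilot model above can be simulated in a few lines of NumPy (all parameter values here are arbitrary toy choices, not the simulation settings used later in the paper):

import numpy as np

rng = np.random.default_rng(0)
N_tx, N_rf, M, d_over_lam = 16, 4, 20, 0.5           # half-wavelength spacing
theta, beta, sigma_n = np.deg2rad(20.0), 1.0 + 0.5j, 0.1

def steer(t):                                         # ULA steering vector a(theta)
    return np.exp(-2j * np.pi * d_over_lam * np.arange(N_tx) * np.sin(t))

B = 2                                                 # 2-bit phase shifters
F_rf = np.exp(2j * np.pi * rng.integers(0, 2**B, (N_tx, N_rf)) / 2**B) / np.sqrt(N_tx)
F_bb = rng.standard_normal((N_rf, M)) + 1j * rng.standard_normal((N_rf, M))
S = np.diag(np.ones(M))                               # unit-power pilots
n = sigma_n / np.sqrt(2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
y = beta * S @ (F_rf @ F_bb).T @ steer(theta) + n     # vector-form received signal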
§ PROPOSED METHOD §.§ CRB-Based Performance Metric Define ỹ ≜ β S ( F_RF F_BB )^T a(θ). Then, the Fisher information matrix (FIM) J(F_RF, F_BB; x) ∈ ℝ^{3 × 3} can be computed by using the Slepian-Bangs formula <cit.> as
[ J ]_ij = 2/σ_n^2 Re{ ( ∂ỹ/∂[ x ]_i )^H ( ∂ỹ/∂[ x ]_j ) },
where x = [θ, β_R, β_I]^T contains all the unknown parameters, [ J ]_ij is the entry of J in the i-th row and j-th column, and [ x ]_i is the i-th entry of x. In addition, β_R and β_I denote the real and imaginary parts of β, respectively. The derivative of ỹ with respect to (w.r.t.) [ x ]_i is calculated as in Appendix <ref>. The corresponding Cramér-Rao bound (CRB) matrix is defined as
C = J^{-1}.
To quantify the AoD estimation accuracy, we adopt the angle error bound (AEB) as our performance metric, computed as (<ref>), displayed at the top of the next page, where σ_s is the signal power, D ≜ diag{0, 1, ⋯, N_Tx − 1}, F = F_RF F_BB, and we have employed the block matrix inversion lemma <cit.> as detailed in Appendix <ref>.
AEB( F_RF, F_BB; x ) = √([ C ]_11) = (σ_n/σ_s) · (λ/(2√(2) π d)) × √( a^H(θ) F^* F^T a(θ) / a^H(θ) F^* F^T [ |β|^2 a(θ) a^H(θ) D F^* F^T D − D a(θ) a^H(θ) D F^* F^T ] a(θ) ).
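The 3 × 3 FIM and the AEB can also be evaluated numerically, which is useful for checking the closed form; a self-contained NumPy sketch (the derivative of a(θ) follows directly from its definition, and the toy parameters are our own):

import numpy as np

N_tx, M, d_over_lam = 16, 20, 0.5
theta, beta, sigma_n = np.deg2rad(20.0), 1.0 + 0.5j, 0.1
rng = np.random.default_rng(1)
F = rng.standard_normal((N_tx, M)) + 1j * rng.standard_normal((N_tx, M))  # any F = F_RF F_BB
S = np.eye(M)

a = np.exp(-2j * np.pi * d_over_lam * np.arange(N_tx) * np.sin(theta))
da = -2j * np.pi * d_over_lam * np.cos(theta) * np.arange(N_tx) * a   # da(theta)/dtheta
derivs = np.column_stack([beta * S @ F.T @ da,   # w.r.t. theta
                          S @ F.T @ a,           # w.r.t. beta_R
                          1j * S @ F.T @ a])     # w.r.t. beta_I
J = 2.0 / sigma_n**2 * np.real(derivs.conj().T @ derivs)   # Slepian-Bangs formula
aeb = np.sqrt(np.linalg.inv(J)[0, 0])                       # AEB in radians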
§.§ Problem Formulation for Optimal Precoder Design The AEB depends on the unknown parameters in x. We assume that x belongs to an uncertainty set 𝒳 that can be determined via, e.g., some tracking algorithms <cit.>. For any x ∈ 𝒳, the AEB is only a function of F_RF and F_BB. The optimal precoder design problem can be formulated as
min_{F_RF, F_BB}  AEB( F_RF, F_BB; x )
s.t.  ‖ F_RF F_BB ‖_F^2 = P,
      [ F_RF ]_ij ∈ ℱ,  1 ≤ i ≤ N_Tx and 1 ≤ j ≤ N_RF,
where P stands for the total transmit power of the BS antennas, and ℱ denotes the set for the limited resolution of the phase shifters, which is defined as
ℱ ≜ { 1/√(N_Tx) e^{j 2π b/2^B} | b = 0, 1, ⋯, 2^B − 1 },
with B representing the total number of quantization bits of the phase shifters. §.§ Two-Step Strategy for Solving Problem (<ref>) It is difficult to directly solve Problem (<ref>) w.r.t. F_RF and F_BB, due to the complicated structure[The denominator of AEB( F_RF, F_BB; x ) contains quartic terms w.r.t. F = F_RF F_BB.] of AEB( F_RF, F_BB; x ) and the discrete-phase nature of the entries of F_RF. We provide a strategy for solving Problem (<ref>) via the following two steps: * Step 1: Finding the optimal fully digital precoder F_opt as a solution to Problem (<ref>). * Step 2: Finding a decomposition of F_opt to obtain the best approximation F_opt ≈ F_RF F_BB in the least-squares (LS) sense. We now elaborate on these two steps. Step 1: Based on the fact that the unknown variables F_RF and F_BB appear as a product (i.e., F_RF F_BB) in both the objective function (<ref>) and the constraint (<ref>), for any x ∈ 𝒳, we consider the following optimization problem:
min_F  AEB( F; x )  s.t.  ‖ F ‖_F^2 = P,
where F = F_RF F_BB ∈ ℂ^{N_Tx × M} and we drop the constraint (<ref>) temporarily. This corresponds to a fully digital precoder optimization <cit.>. We define Z ≜ F F^H, and relax Problem (<ref>) by removing the constraint rank( Z ) = M, as
min_{Z, u}  u   s.t.  [ [ J( Z; x ), e_1; e_1^T, u ] ] ≽ 0,  tr( Z ) = P,  Z ≽ 0,
where e_1 = [1, 0, 0]^T, tr(·) is the trace of a matrix, and Z ≽ 0 means that Z is positive semidefinite. Taking into account the uncertainty of x, i.e., x ∈ 𝒳, and by discretizing 𝒳 into a uniform grid of G points { x_g }_{g = 1}^G, a robust design for the above problem can be given as
min_{Z, {u_g}}  max_{x ∈ 𝒳}  u_g
s.t.  [ [ J( Z; x ), e_1; e_1^T, u_g ] ] ≽ 0,  g = 1, 2, ⋯, G,
      tr( Z ) = P,  Z ≽ 0.
Problem (<ref>) can be further formulated as
min_{Z, {u_g}, t}  t
s.t.  [ [ J( Z; x ), e_1; e_1^T, u_g ] ] ≽ 0,  g = 1, 2, ⋯, G,
      u_g ≤ t,  g = 1, 2, ⋯, G,
      tr( Z ) = P,  Z ≽ 0.
It is shown in <cit.> that a codebook-based approach can be applied to decrease the complexity while achieving an optimal design. Specifically, a predefined codebook consists of directional and derivative beams <cit.>, that is, F^(pre) = [ F^(direc), F^(deriv) ], where F^(direc) = [ a(θ_1), a(θ_2), ⋯, a(θ_G) ] and F^(deriv) = [ ȧ(θ_1), ȧ(θ_2), ⋯, ȧ(θ_G) ], with ȧ(θ) = ∂a(θ)/∂θ and G = M/2. With the predefined codebook, we consider the optimal beam power allocation problem in q = [q_1, q_2, ⋯, q_M]^T <cit.>:
min_{q, {u_g}, t}  t
s.t.  [ [ J( Z; x ), e_1; e_1^T, u_g ] ] ≽ 0,  g = 1, 2, ⋯, G,
      u_g ≤ t,  g = 1, 2, ⋯, G,
      Z = F^(pre) diag( q ) ( F^(pre) )^H,
      tr( Z ) = P,  Z ≽ 0.
The above problem is a convex semidefinite program, since its constraints are either linear matrix inequalities or linear equalities; hence, it can be efficiently solved by standard convex optimization toolboxes, such as CVX <cit.>. It yields the optimal fully digital precoder as
F_opt = F^(pre) diag([√(q_1), √(q_2), ⋯, √(q_M)]).
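Since the FIM is linear in Z = F F^H, the power allocation can be written as J(q) = Σ_m q_m J_m, with J_m the FIM of the unit-power beam f_m. A minimal CVXPY sketch of the single-grid-point (non-robust) version follows; the robust design simply adds one such LMI per grid point x_g, and all numerical values here are toy choices:

import numpy as np
import cvxpy as cp

N_tx, d_over_lam, P, sigma_n, beta = 16, 0.5, 1.0, 0.1, 1.0 + 0.0j
theta0 = np.deg2rad(10.0)                           # assumed (tracked) AoD

def steer(t):
    return np.exp(-2j * np.pi * d_over_lam * np.arange(N_tx) * np.sin(t))

def dsteer(t):
    return -2j * np.pi * d_over_lam * np.cos(t) * np.arange(N_tx) * steer(t)

grid = np.deg2rad(np.linspace(5.0, 15.0, 5))        # G = 5 directional + 5 derivative beams
F_pre = np.column_stack([steer(t) for t in grid] + [dsteer(t) for t in grid])
F_pre = F_pre / np.linalg.norm(F_pre, axis=0)       # unit-power columns
Mb = F_pre.shape[1]

def beam_fim(f):                                    # 3 x 3 FIM of one unit-power beam
    g = np.array([beta * f @ dsteer(theta0), f @ steer(theta0), 1j * f @ steer(theta0)])
    return 2.0 / sigma_n**2 * np.real(np.outer(g.conj(), g))

J_m = [beam_fim(F_pre[:, m]) for m in range(Mb)]
q, u = cp.Variable(Mb, nonneg=True), cp.Variable()
J = sum(q[m] * J_m[m] for m in range(Mb))           # FIM is linear in the beam powers
e1 = np.array([[1.0], [0.0], [0.0]])
lmi = cp.bmat([[J, e1], [e1.T, cp.reshape(u, (1, 1))]])  # Schur complement: u >= [J^{-1}]_11
lmi = (lmi + lmi.T) / 2                              # symmetrize for the PSD constraint
prob = cp.Problem(cp.Minimize(u), [lmi >> 0, cp.sum(q) == P])
prob.solve()
F_opt = F_pre @ np.diag(np.sqrt(q.value))           # cf. the codebook precoder above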
Step 2: We decompose F_opt into two matrices, i.e., F_RF and F_BB, taking into account the constraints (<ref>) and (<ref>):
min_{F_RF, F_BB}  1/2 ‖ F_opt − F_RF F_BB ‖_F^2   s.t.  (<ref>) and (<ref>).
In what follows, we propose an alternating optimization approach for solving Problem (<ref>). To be specific, we first solve for F_BB with a fixed F_RF, as
min_{F_BB}  1/2 ‖ F_opt − F_RF F_BB ‖_F^2   s.t.  (<ref>),
which has an LS closed-form solution:
F_BB = √(P)/‖ F_RF F_RF^† F_opt ‖_F · F_RF^† F_opt,
where F_RF^† = ( F_RF^H F_RF )^{-1} F_RF^H. Then, we solve for F_RF with the obtained F_BB in (<ref>), as
min_{F_RF}  1/2 ‖ F_opt − F_RF F_BB ‖_F^2   s.t.  (<ref>).
We develop an algorithm based on the ADMM <cit.> to solve the above problem. To this end, we introduce an auxiliary variable F̃_RF ∈ ℂ^{N_Tx × N_RF}, and Problem (<ref>) can be equivalently expressed as
min_{F_RF, F̃_RF}  1/2 ‖ F_opt − F̃_RF F_BB ‖_F^2   s.t.  (<ref>) and F̃_RF = F_RF.
The corresponding scaled-form augmented Lagrangian function is given as <cit.>
ℒ( F̃_RF, F_RF, U ) = 1/2 ‖ F_opt − F̃_RF F_BB ‖_F^2 + ρ/2 ( ‖ F̃_RF − F_RF + U ‖_F^2 − ‖ U ‖_F^2 ),
where U ∈ ℂ^{N_Tx × N_RF} is the scaled dual variable and ρ > 0 is the augmented Lagrangian parameter. The parameter ρ can be set based on the proposed convergence analyses in Section <ref>. The primal, auxiliary, and dual variables are updated as:
F_RF^(k+1) = argmin_{[ F_RF ]_{i,j} ∈ ℱ} ℒ( F̃_RF^(k), F_RF, U^(k) ) = 1/√(N_Tx) e^{j 𝒬( ∠( F̃_RF^(k) + U^(k) ) )},
F̃_RF^(k+1) = argmin_{F̃_RF} ℒ( F̃_RF, F_RF^(k+1), U^(k) ) = [ F_opt F_BB^H + ρ ( F_RF^(k+1) − U^(k) ) ] ( F_BB F_BB^H + ρ I )^{-1},
U^(k+1) = U^(k) + F̃_RF^(k+1) − F_RF^(k+1).
In (<ref>), ∠· denotes the angle of its argument in an element-wise manner, and 𝒬(·) stands for the quantization function rounding its argument to the available phases of the phase shifters (i.e., 2π/2^B × { 0, 1, ⋯, 2^B − 1 }). The proposed algorithm for solving Problem (<ref>) is referred to as AltOpt-LS-ADMM and is summarized in Algorithm <ref>, where the superscript ·^(i) denotes the corresponding variable at the i-th outer iteration, the superscript ·^(k) denotes the corresponding variable at the k-th inner (i.e., ADMM) iteration, and I_max and k_max are the maximal numbers of outer and inner loops, respectively. Besides, F_RF^(init) and F̃_RF^(init) are obtained by randomly selecting from the feasible set (<ref>), the update of ρ in Line 3 comes from (<ref>) in Section <ref>, and O in Line 4 is an all-zeros matrix. It is worth mentioning that the proposed two-step strategy finds approximate (but not exact) solutions to the original optimal precoder design problem, i.e., Problem (<ref>).
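Putting the pieces together, the AltOpt-LS-ADMM loop, i.e., the closed-form update (<ref>) alternated with the ADMM steps (<ref>)-(<ref>), can be sketched as follows (a minimal NumPy rendering; the iteration counts and the random initialization are illustrative, and ρ is set from the convergence condition derived in the next section):

import numpy as np

def altopt_ls_admm(F_opt, N_rf, B, P, n_outer=20, n_admm=50, seed=0):
    rng = np.random.default_rng(seed)
    N_tx, _ = F_opt.shape
    step = 2 * np.pi / 2**B                          # phase grid of the set F
    F_rf = np.exp(1j * step * rng.integers(0, 2**B, (N_tx, N_rf))) / np.sqrt(N_tx)
    for _ in range(n_outer):
        F_bb = np.linalg.pinv(F_rf) @ F_opt          # LS solution ...
        F_bb *= np.sqrt(P) / np.linalg.norm(F_rf @ F_bb, "fro")  # ... with power scaling
        rho = max(np.sqrt(2) * np.linalg.norm(F_bb @ F_bb.conj().T, "fro"),
                  np.linalg.norm(F_bb, "fro")**2)    # convergence condition on rho
        G = F_opt @ F_bb.conj().T                    # precomputed outside the inner loop
        Binv = np.linalg.inv(F_bb @ F_bb.conj().T + rho * np.eye(N_rf))
        Ft, U = F_rf.copy(), np.zeros_like(F_rf)
        for _ in range(n_admm):
            ang = step * np.round(np.angle(Ft + U) / step)   # quantizer Q(.)
            F_rf = np.exp(1j * ang) / np.sqrt(N_tx)          # F_RF update
            Ft = (G + rho * (F_rf - U)) @ Binv               # auxiliary update
            U = U + Ft - F_rf                                # dual update
    return F_rf, F_bb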
§.§ Computational Complexity Analysis The computational cost of the proposed AltOpt-LS-ADMM algorithm mainly comes from the pseudo-inverse operation and the multiplication operation in Line 2, and the inverse operation and the multiplication operation in Line 7, which incur the complexities 𝒪(N_RF^2 N_Tx), 𝒪(N_RF N_Tx M), 𝒪(N_RF^3), and 𝒪(N_RF N_Tx M), respectively. Since we can compute the inverse operation and the multiplication operation in Line 7 outside the ADMM iteration and then reuse their results for all inner iterations, the total computational cost of the proposed AltOpt-LS-ADMM algorithm is 𝒪(k_max(N_RF^2 N_Tx + 2 N_RF N_Tx M)). § ANALYSIS OF ERROR BOUNDS AND CONVERGENCE §.§ Analysis of Quantization Error Bound In this subsection, we analyse the quantization error bound in the proposed ADMM algorithm (i.e., the inner iteration of Algorithm <ref>) resulting from the quantization operation in Line 6 of Algorithm <ref>. We first denote F_RF and F_RF^⋆ as the RF precoder with (i.e., B < ∞) and without (i.e., B = ∞) quantization, respectively. Then, the relation between these two matrices is given as
F_RF = Φ ⊙ F_RF^⋆,
where ⊙ denotes the element-wise product and Φ ∈ ℂ^{N_Tx × N_RF} is the quantization error matrix. Moreover, the elements of Φ can be formulated as [Φ]_ij = e^{jϕ_ij}, where 0 ≤ |ϕ_ij| ≤ π/2^B for all 1 ≤ i ≤ N_Tx and 1 ≤ j ≤ N_RF. Therefore, the quantization error can be calculated as
‖ F_opt − F_RF F_BB ‖_F − ‖ F_opt − F_RF^⋆ F_BB ‖_F
  = ‖ F_opt − (Φ ⊙ F_RF^⋆) F_BB ‖_F − ‖ F_opt − F_RF^⋆ F_BB ‖_F
  ≤ ‖ ( F_RF^⋆ − (Φ ⊙ F_RF^⋆) ) F_BB ‖_F = ‖ [ ( 1 − Φ ) ⊙ F_RF^⋆ ] F_BB ‖_F
  ≤ ‖ ( 1 − Φ ) ⊙ F_RF^⋆ ‖_F ‖ F_BB ‖_F
  ≤ ‖ 1 − Φ ‖_F ‖ F_RF^⋆ ‖_F ‖ F_BB ‖_F
  ≤ |1 − e^{jπ/2^B}| √(N_Tx N_RF) ‖ F_RF^⋆ ‖_F ‖ F_BB ‖_F,
where 1 is the all-ones matrix of appropriate size; in (<ref>) we used the triangle inequality; in (<ref>) we employed the fact that ‖ M N ‖_F ≤ ‖ M ‖_F ‖ N ‖_F holds for any matrices M and N of appropriate sizes; in (<ref>) we utilized the Cauchy-Schwarz inequality; and in (<ref>) we used the following inequality:
‖ 1 − Φ ‖_F = √( ∑_{i = 1}^{N_Tx} ∑_{j = 1}^{N_RF} |1 − e^{jϕ_ij}|^2 ) ≤ √( ∑_{i = 1}^{N_Tx} ∑_{j = 1}^{N_RF} |1 − e^{jπ/2^B}|^2 ) = |1 − e^{jπ/2^B}| √(N_Tx N_RF).
Note that √(N_Tx N_RF) ‖ F_RF^⋆ ‖_F ‖ F_BB ‖_F in (<ref>) is a constant w.r.t. the number of quantization bits. For notational simplicity, we define C ≜ √(N_Tx N_RF) ‖ F_RF^⋆ ‖_F ‖ F_BB ‖_F, and rewrite the quantization error bound as
‖ F_opt − F_RF F_BB ‖_F − ‖ F_opt − F_RF^⋆ F_BB ‖_F ≤ C |1 − e^{jπ/2^B}|.
The values of |1 − e^{jπ/2^B}| for different numbers of quantization bits are presented in Table <ref>. It is seen from Table <ref> that when B ≥ 5, the quantization upper bound decreases by more than 10 times compared to B = 1, suggesting that B = 5 can be sufficient to approach the performance of infinite-resolution phase shifters. In order to illustrate the impact of the quantization bits B on the decomposition error upper bound (DecpUB), we define
DecpUB ≜ ‖ F_opt − F_RF^⋆ F_BB ‖_F + C |1 − e^{jπ/2^B}|,
and then plot it w.r.t. the number of quantization bits in Fig. <ref>, where the simulation parameters are N_Tx = 16, M = 20, and P = 10 dBm. It can be observed from Fig. <ref> that when B ≤ 3 the DecpUB decreases sharply, and when B ≥ 5 the slope of the DecpUB is approximately equal to 0. This leads to the same conclusion as the one drawn from Table <ref>, that is, B = 5 is sufficient to approach the performance of infinite-resolution phase shifters. This will be further verified through simulations in Section <ref>.
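The factor |1 − e^{jπ/2^B}| is trivial to tabulate, which is how the entries of Table <ref> can be reproduced:

import numpy as np
for B in range(1, 9):
    print(B, abs(1 - np.exp(1j * np.pi / 2**B)))
# B = 1 gives sqrt(2) ~ 1.414, while B = 5 gives ~ 0.098: a more-than-tenfold drop.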
§.§ Bound on the Decomposition Error

The upper bound on the decomposition error (due to quantization) is stated in the following lemma.

Denote F_RF and F_BB as the RF and BB precoders with B < ∞ quantization bits of the PSs, and denote F_RF^⋆ and F_BB^⋆ as the RF and BB precoders with B = ∞ quantization bits of the PSs. Then, we have

‖F_opt - F_RF F_BB‖_F - ‖F_opt - F_RF^⋆ F_BB^⋆‖_F ≤ 2√(P),

where P denotes the total transmit power of the BS antennas, as mentioned in (<ref>).

See Appendix <ref>.

§.§ Convergence Analysis

In this subsection, we analyse the convergence behaviors of the proposed ADMM algorithm (i.e., the inner iteration of Algorithm <ref>) and the AltOpt-LS-ADMM algorithm (i.e., the outer iteration of Algorithm <ref>), which are stated in the following two theorems, respectively.

The augmented Lagrangian function value sequence {ℒ(F̃_RF^(k), F_RF^(k), U^(k)) | k = 0, 1, 2, ⋯} produced by the proposed ADMM algorithm converges if

ρ ≥ max{√2 ‖F_BB F_BB^H‖_F, ‖F_BB‖_F^2}.

Furthermore, as k → ∞, we have F_RF^(k+1) = F_RF^(k), F̃_RF^(k+1) = F̃_RF^(k), U^(k+1) = U^(k), and F_RF^(k) = F̃_RF^(k); and the point sequence {(F̃_RF^(k), F_RF^(k), U^(k))} is a Cauchy sequence that converges to a fixed point after a finite number of iterations.

See Appendix <ref>.

If (<ref>) holds, the sequence {‖F_opt - F_RF^(i) F_BB^(i)‖_F} generated by the proposed AltOpt-LS-ADMM algorithm converges.

See Appendix <ref>.

Theorem <ref> asserts that as long as the augmented Lagrangian parameter ρ is large enough (see (<ref>)), the proposed ADMM algorithm is convergent. Additionally, Theorem <ref> establishes that the proposed AltOpt-LS-ADMM generates a convergent sequence of cost function values, as long as (<ref>) holds.
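To connect the theorems above with the updates of the previous section, the following helpers — a minimal sketch assuming NumPy, with illustrative function names — evaluate the scaled-form augmented Lagrangian from its definition and the smallest ρ satisfying the condition of the first theorem. Tracking the Lagrangian value across inner iterations with such a ρ should yield a monotonically non-increasing sequence bounded below by 0, per the lemmas proved in the appendices.

```python
import numpy as np

def augmented_lagrangian(F_opt, F_tilde, F_RF, U, F_BB, rho):
    # Scaled-form augmented Lagrangian, as defined in the ADMM derivation above.
    fit = 0.5 * np.linalg.norm(F_opt - F_tilde @ F_BB, 'fro') ** 2
    penalty = 0.5 * rho * (np.linalg.norm(F_tilde - F_RF + U, 'fro') ** 2
                           - np.linalg.norm(U, 'fro') ** 2)
    return fit + penalty

def rho_lower_bound(F_BB):
    # Smallest rho guaranteeing a convergent, monotonically decreasing
    # Lagrangian value sequence, per the convergence condition above.
    G = F_BB @ F_BB.conj().T
    return max(np.sqrt(2.0) * np.linalg.norm(G, 'fro'),
               np.linalg.norm(F_BB, 'fro') ** 2)
```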
§ NUMERICAL RESULTS

§.§ Scenario, Performance Metric, and Benchmark

In this section, we conduct simulations to verify the performance of the proposed AltOpt-LS-ADMM algorithm. Two scenarios are considered:

* Scenario I: We randomly generate a digital precoder F_opt, and the performance metric is the decomposition error (DecpErr), defined as ‖F_opt - F_RF F_BB‖_F / ‖F_opt‖_F. The simulation parameters are summarized in Table <ref>.
* Scenario II: We obtain an optimal digital precoder F_opt as introduced in Step 1 in Section <ref>, and the performance metric is the AEB in (<ref>). The simulation parameters are summarized in Table <ref>.

We compare the proposed method with the following methods:

* Alt-Babai <cit.>: alternating optimization + the Babai algorithm;
* Alt-CDM <cit.>: alternating optimization + the coordinate descent method;
* Spa-OMP <cit.>: spatially sparse representation + orthogonal matching pursuit;
* ManiOpt <cit.>: manifold optimization (implemented using the toolbox of <cit.>).

Note that ManiOpt in <cit.> utilizes infinite-resolution (i.e., B = ∞) phase shifters, which is adopted as a benchmark in this work. Also note that ManiOpt is initialized either with random values or with the output of the proposed AltOpt-LS-ADMM method, labelled as "ManiOpt (random initialization)" and "ManiOpt (proposed initialization)", respectively.

§.§ Results and Discussion of Scenario I

§.§.§ DecpErr as a Function of N_RF

We randomly generate an F_opt and decompose it into F_RF and F_BB by using the proposed AltOpt-LS-ADMM algorithm. The DecpErrs are averaged over 500 Monte-Carlo trials, and the results w.r.t. the number of RF chains, N_RF, are plotted in Fig. <ref>. It can be seen that: (i) when N_RF or B increases, the DecpErr decreases; (ii) when B = 5, the performance approaches that with B = ∞ (i.e., infinite-resolution phase shifters); (iii) when N_RF = N_Tx = 16, the DecpErrs are always 0. This is because when N_RF = N_Tx, F_RF is a square and invertible matrix, and thus there always exists a matrix F_BB = F_RF^{-1} F_opt such that F_opt = F_RF F_BB.

Next, the DecpErrs of the different methods are displayed in Fig. <ref>, with B = 2. We see that ManiOpt with random initialization has the worst performance, while ManiOpt with the proposed method as initialization achieves the best performance. Besides, when N_RF ≥ 7, the decomposition error of the proposed method is larger than that of ManiOpt with proposed initialization, and smaller than those of Alt-Babai, Alt-CDM, Spa-OMP, and ManiOpt with random initialization. Note that ManiOpt with proposed initialization attains the best decomposition performance at the cost of higher computational complexity, which will be verified in Fig. <ref>.

§.§.§ DecpErr as a Function of B

The DecpErr of the proposed algorithm w.r.t. the number of quantization bits, B, is shown in Fig. <ref>. We observe that when N_RF increases, the decomposition error decreases, as expected. For N_RF < 16, the decomposition error decreases as B increases from 1 to 5, and remains nearly unchanged for B ≥ 5. Therefore, taking into account the outcomes presented in Fig. <ref>, we can infer that B = 5 bits are sufficient to achieve near-optimal hybrid precoding performance (i.e., performance very close to that obtained by digital precoding). Besides, it is seen from Fig. <ref> that the decomposition error is 0 when N_RF = N_Tx = 16, as explained in the first example. Next, the DecpErrs of the different algorithms are depicted in Fig. <ref>, with N_RF = 8, which verifies the better performance of the proposed method against Alt-Babai, Alt-CDM, Spa-OMP, and ManiOpt with random initialization, especially when the number of bits B ≥ 2. Note that ManiOpt with random initialization appears as a horizontal line because it uses B = ∞ quantization bits, while ManiOpt with proposed initialization does not, because it is sensitive to its initialization (its initialization, i.e., the proposed method, performs better as B increases from 1 to 5).

§.§ Results and Discussion of Scenario II

§.§.§ AEB as a Function of N_Tx

We now evaluate the AEB performance of the proposed algorithm and the benchmark methods. The optimal digital precoder F_opt can be obtained by the method introduced in Section <ref>; since our main focus is the decomposition step, i.e., Step 2 in Section <ref>, in the following simulations F_opt is obtained heuristically by assigning 0.1 of the power to derivative beams and the rest to directional beams. Then we decompose this F_opt into F_RF and F_BB by using the different algorithms. The AEBs w.r.t. the number of Tx antennas, N_Tx, achieved by the different methods are shown in Fig. <ref>, where the curve labelled "Optimal (fully digital)" is the result of using F_opt directly without decomposition. We can see that the proposed algorithm attains a lower AEB than Alt-Babai, Alt-CDM, Spa-OMP, and ManiOpt with random initialization. ManiOpt with proposed initialization outperforms the others and is much closer to the "Optimal (fully digital)" curve.

§.§.§ AEB as a Function of N_RF

The results of AEB w.r.t. the number of RF chains, N_RF, are presented in Fig. <ref>.
It can be observed from Fig. <ref> that the AEB of the proposed algorithm is smaller than those of Alt-Babai, Alt-CDM, Spa-OMP, and ManiOpt with random initialization, while ManiOpt with proposed initialization has the best performance.

§.§.§ AEB as a Function of B

We compare the AEB for different numbers of quantization bits of the phase shifters, and the results are displayed in Fig. <ref>. We have similar findings as in Fig. <ref>.

§.§.§ AEB as a Function of AoD

The predefined codebook for F_opt is set around 0^∘, while the true AoD varies from -80^∘ to 80^∘. The results of AEB versus AoD are plotted in Fig. <ref>. We see that when the AoD is 0^∘ (matching our predefined codebook), all the curves reach their lowest AEB. Besides, ManiOpt with proposed initialization outperforms the others, followed by the proposed AltOpt-LS-ADMM algorithm. In addition, the proposed algorithm for hybrid precoder design, both when used independently and as an initialization for ManiOpt, exhibits AoD estimation performance very close to that achieved via a fully digital array, which underscores its effectiveness over a broad range of AoD values.

§.§.§ CPU Runtime as a Function of N_Tx or N_RF

We compare the computational complexity of the different algorithms. The central processing unit (CPU) runtime versus the number of transmit antennas is drawn in Fig. <ref> (left), while the CPU runtime versus the number of RF chains is drawn in Fig. <ref> (right). From Fig. <ref>, it can be seen that ManiOpt with random initialization and ManiOpt with proposed initialization have almost the same CPU runtime. The Optimal method has the least CPU runtime since it does not need to perform the decomposition operation on F_opt. Besides, the proposed algorithm consumes less CPU runtime than Alt-Babai, Alt-CDM, Spa-OMP, and ManiOpt.

§ CONCLUSION

In this paper, we have investigated the hybrid precoder design problem for angle-of-departure (AoD) estimation, taking into account the practical limitation of finite-resolution phase shifters. Our aim was to devise a radio-frequency (RF) precoder and a base-band (BB) precoder that simultaneously adhere to this practical constraint and achieve highly precise AoD estimation. To accomplish this goal, we developed a two-step approach. First, we derived a fully digital precoder that minimizes the angle error bound by using a predefined codebook. Then, we decomposed this digital precoder into an RF precoder and a BB precoder, employing the alternating optimization framework and the alternating direction method of multipliers. We also analysed the quantization error bound and provided convergence analyses of the proposed algorithm. Numerical results demonstrated the excellent performance of the proposed method at low complexity, leading to the following key conclusions:

* Number of Bits Sufficient for AoD Estimation: 5 bits are sufficient to achieve almost the same decomposition and AoD estimation performance as the case with infinite-resolution phase shifters.
* Number of RF Chains Sufficient for AoD Estimation: For a 20-element transmit array, 4 RF chains are sufficient to attain the same AoD estimation performance as the fully digital architecture.
* High-Quality Initialization: The proposed algorithm provides a high-quality initialization that boosts the performance of manifold optimization compared to random initialization.
* Covering a Wide Range of AoDs: The proposed algorithm attains near-optimal (in the sense of achieving the fully digital performance) AoD estimation over a broad range of AoD values, from -80 to 80 degrees.

§ CALCULATION OF THE FIM

The derivatives of ỹ = β S (F_RF F_BB)^T a(θ) w.r.t. [x]_i, i = 1, 2, 3, are given as follows:

∂ỹ/∂[x]_1 = ∂ỹ/∂sinθ = -j (2βπd/λ) S (F_RF F_BB)^T D a(θ),
∂ỹ/∂[x]_2 = ∂ỹ/∂β_R = S (F_RF F_BB)^T a(θ),
∂ỹ/∂[x]_3 = ∂ỹ/∂β_I = j S (F_RF F_BB)^T a(θ),

where D ≜ diag{0, 1, ⋯, N_Tx - 1}. According to (<ref>), we have

[J]_11 = (8|β|^2 π^2 d^2 / (σ_n^2 λ^2)) a^H(θ) D F^* S^H S F^T D a(θ),
[J]_12 = [J]_21 = (4πd / (σ_n^2 λ)) ℑ{β^* a^H(θ) D F^* S^H S F^T a(θ)},
[J]_13 = [J]_31 = -(4πd / (σ_n^2 λ)) ℜ{β^* a^H(θ) D F^* S^H S F^T a(θ)},
[J]_22 = [J]_33 = (2/σ_n^2) a^H(θ) F^* S^H S F^T a(θ),
[J]_23 = [J]_32 = 0.

Next, augmenting the parameter vector with the noise variance, we calculate ∂ℓ(x|y)/∂[x]_i, i = 1, 2, 3, 4, as

∂ℓ(x|y)/∂θ = cosθ × (2πd/(λσ_n^2)) ℜ(β^* a^H(θ) D (F_RF F_BB)^* S^H [y - βS(F_RF F_BB)^T a(θ)]),
∂ℓ(x|y)/∂β_R = (1/σ_n^2) ℜ(a^H(θ) (F_RF F_BB)^* S^H [y - βS(F_RF F_BB)^T a(θ)]),
∂ℓ(x|y)/∂β_I = -(1/σ_n^2) ℑ(a^H(θ) (F_RF F_BB)^* S^H [y - βS(F_RF F_BB)^T a(θ)]),
∂ℓ(x|y)/∂σ_n^2 = -M/(2σ_n^2) + (1/(2σ_n^4)) ‖y - βS(F_RF F_BB)^T a(θ)‖_2^2.

Therefore, we have:

[J]_11 = cos^2θ × (4π^2d^2/(λ^2σ_n^4)) ℜ(β^* a^H(θ) D (F_RF F_BB)^* S^H [y - βS(F_RF F_BB)^T a(θ)])^2,
[J]_12 = [J]_21 = cosθ × (2πd/(λσ_n^4)) ℜ(β^* a^H(θ) D (F_RF F_BB)^* S^H [y - βS(F_RF F_BB)^T a(θ)]) × ℜ(a^H(θ) (F_RF F_BB)^* S^H [y - βS(F_RF F_BB)^T a(θ)]),
[J]_13 = [J]_31 = -cosθ × (2πd/(λσ_n^4)) ℜ(β^* a^H(θ) D (F_RF F_BB)^* S^H [y - βS(F_RF F_BB)^T a(θ)]) × ℑ(a^H(θ) (F_RF F_BB)^* S^H [y - βS(F_RF F_BB)^T a(θ)]),
[J]_14 = [J]_41 = (-M/(2σ_n^2) + (1/(2σ_n^4)) ‖y - βS(F_RF F_BB)^T a(θ)‖_2^2) × cosθ × (2πd/(λσ_n^2)) ℜ(β^* a^H(θ) D (F_RF F_BB)^* S^H [y - βS(F_RF F_BB)^T a(θ)]),
[J]_22 = (1/σ_n^4) ℜ(a^H(θ)(F_RF F_BB)^* S^H [y - βS(F_RF F_BB)^T a(θ)])^2,
[J]_23 = [J]_32 = -(1/σ_n^4) ℜ(a^H(θ)(F_RF F_BB)^* S^H [y - βS(F_RF F_BB)^T a(θ)]) ℑ(a^H(θ)(F_RF F_BB)^* S^H [y - βS(F_RF F_BB)^T a(θ)]),
[J]_24 = [J]_42 = (-M/(2σ_n^2) + (1/(2σ_n^4)) ‖y - βS(F_RF F_BB)^T a(θ)‖_2^2) × (1/σ_n^2) ℜ(a^H(θ) (F_RF F_BB)^* S^H [y - βS(F_RF F_BB)^T a(θ)]),
[J]_33 = (1/σ_n^4) ℑ(a^H(θ) (F_RF F_BB)^* S^H [y - βS(F_RF F_BB)^T a(θ)])^2,
[J]_34 = [J]_43 = (M/(2σ_n^2) - (1/(2σ_n^4)) ‖y - βS(F_RF F_BB)^T a(θ)‖_2^2) × (1/σ_n^2) ℑ(a^H(θ) (F_RF F_BB)^* S^H [y - βS(F_RF F_BB)^T a(θ)]),
[J]_44 = (-M/(2σ_n^2) + (1/(2σ_n^4)) ‖y - βS(F_RF F_BB)^T a(θ)‖_2^2)^2.
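The entry-wise expressions above are equivalent to forming J = (2σ_s^2/σ_n^2) ℜ{Jac^H Jac}, where Jac stacks the derivatives of the noise-free observation w.r.t. [x] = [sinθ, β_R, β_I] and S^H S = σ_s^2 I_M has been substituted. The following is a minimal numerical sketch of this computation under two stated assumptions: half-wavelength element spacing (so that 2πd/λ = π) and the steering-vector phase convention a_n(θ) = e^{jπ n sinθ}; the function name and argument list are illustrative. Because only ([J]_12)^2 + ([J]_13)^2 enters the bound, the ℜ/ℑ labeling of the cross terms does not affect the result, and inverting J directly reproduces the Schur-complement form of the AEB used in the next appendix.

```python
import numpy as np

def aeb_ula(F, sin_theta, beta, sigma_n2, sigma_s2):
    """AEB for x = [sin(theta), beta_R, beta_I], with F = F_RF @ F_BB (N_Tx x M).

    Assumes d = lambda/2 and a_n(theta) = exp(j*pi*n*sin(theta)); the training
    matrix S is absorbed via S^H S = sigma_s2 * I.
    """
    N_Tx = F.shape[0]
    n = np.arange(N_Tx)
    a = np.exp(1j * np.pi * n * sin_theta)        # steering vector (assumed convention)
    Da = n * a                                    # D a(theta), D = diag{0, ..., N_Tx - 1}
    v_theta = -1j * np.pi * beta * (F.T @ Da)     # derivative w.r.t. sin(theta)
    v_br = F.T @ a                                # derivative w.r.t. beta_R
    v_bi = 1j * (F.T @ a)                         # derivative w.r.t. beta_I
    Jac = np.column_stack([v_theta, v_br, v_bi])
    # FIM with S^H S = sigma_s2 * I absorbed.
    J = (2.0 * sigma_s2 / sigma_n2) * np.real(Jac.conj().T @ Jac)
    # [J^{-1}]_{11} equals (J11 - (J12^2 + J13^2)/J22)^{-1} since J23 = 0.
    return np.sqrt(np.linalg.inv(J)[0, 0])
```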
§ CALCULATION OF THE AEB

Based on the block matrix inversion lemma <cit.>, we have

AEB(F_RF, F_BB; x) = ([J]_11 - [[J]_12, [J]_13] diag{[J]_22, [J]_33}^{-1} [[J]_21; [J]_31])^{-1/2} = ([J]_11 - (([J]_12)^2 + ([J]_13)^2)/[J]_22)^{-1/2}.

Substituting the results in Appendix <ref> and S^H S = σ_s^2 I_M into the above equation yields (<ref>).

§ PROOF OF LEMMA <REF>

The decomposition error can be bounded as

‖F_opt - F_RF F_BB‖_F - ‖F_opt - F_RF^⋆ F_BB^⋆‖_F
= ‖F_opt - √(P) F_RF F_RF^† F_opt / ‖F_RF F_RF^† F_opt‖_F‖_F - ‖F_opt - √(P) F_RF^⋆ (F_RF^⋆)^† F_opt / ‖F_RF^⋆ (F_RF^⋆)^† F_opt‖_F‖_F
≤ ‖(√(P) F_RF^⋆ (F_RF^⋆)^† / ‖F_RF^⋆ (F_RF^⋆)^† F_opt‖_F - √(P) F_RF F_RF^† / ‖F_RF F_RF^† F_opt‖_F) F_opt‖_F
≤ ‖√(P) F_RF^⋆ (F_RF^⋆)^† F_opt / ‖F_RF^⋆ (F_RF^⋆)^† F_opt‖_F‖_F + ‖√(P) F_RF F_RF^† F_opt / ‖F_RF F_RF^† F_opt‖_F‖_F
= 2√(P),

where in (<ref>) we used (<ref>), and in (<ref>) and (<ref>) we used the triangle inequality. This completes the proof of Lemma <ref>.

§ PROOF OF THEOREM <REF>

To show the convergence of {ℒ(F̃_RF^(k), F_RF^(k), U^(k))}, we first provide the following two lemmas:

The proposed ADMM algorithm (i.e., the inner iteration of Algorithm <ref>) produces a monotonically decreasing sequence {ℒ^(k) | k = 0, 1, 2, ⋯}, where ℒ^(k) ≜ ℒ(F̃_RF^(k), F_RF^(k), U^(k)), provided that the augmented Lagrangian parameter ρ satisfies

ρ ≥ √2 ‖F_BB F_BB^H‖_F.

The function ℒ(F̃_RF, F_RF, U) defined in (<ref>) is bounded from below by 0 during the iteration process (<ref>), provided that the augmented Lagrangian parameter ρ satisfies

ρ ≥ ‖F_BB‖_F^2.

The proofs of Lemmas <ref> and <ref> are relegated to Appendix <ref> and Appendix <ref>, respectively. These two lemmas straightforwardly imply that the sequence {ℒ(F̃_RF^(k), F_RF^(k), U^(k))} is convergent. Therefore, when the augmented Lagrangian parameter ρ satisfies (<ref>), we have ℒ^(k+1) - ℒ^(k) = 0 as k → ∞. On the other hand, it is shown in Appendix <ref> that if ρ ≥ √2 ‖F_BB F_BB^H‖_F, then

ℒ^(k+1) - ℒ^(k) ≤ (i) × ‖F̃_RF^(k+1) - F̃_RF^(k)‖_F^2 ≤ 0,

where the term (i) is defined in Appendix <ref>. Combining (<ref>) and (<ref>) leads to F̃_RF^(k+1) = F̃_RF^(k). The above equation, together with U = (1/ρ)(F_opt - F̃_RF F_BB) F_BB^H (which is the result of combining (<ref>) and (<ref>)), yields U^(k+1) = U^(k). Since F_RF is calculated based on F̃_RF and U (see Line 6 in Algorithm <ref>), (<ref>) and (<ref>) yield F_RF^(k+1) = F_RF^(k). Further, according to Line 8 in Algorithm <ref>, we have F_RF^(k) = F̃_RF^(k).

On the other hand, from (<ref>) we have ‖F̃_RF^(k+1) - F̃_RF^(k)‖_F → 0 as k → ∞. This leads to the following fact: for any positive ϵ, there always exists an integer T (large enough) such that

‖F̃_RF^(k_1) - F̃_RF^(k_2)‖_F = ‖F̃_RF^(k_1) - F̃_RF^(k_1+1) + F̃_RF^(k_1+1) - F̃_RF^(k_1+2) + ⋯ + F̃_RF^(k_2-1) - F̃_RF^(k_2)‖_F ≤ ‖F̃_RF^(k_1) - F̃_RF^(k_1+1)‖_F + ‖F̃_RF^(k_1+1) - F̃_RF^(k_1+2)‖_F + ⋯ + ‖F̃_RF^(k_2-1) - F̃_RF^(k_2)‖_F ≤ ϵ

holds for all k_1, k_2 ≥ T (without loss of generality, we assume k_2 > k_1 in the above inequalities). This indicates that the sequence {F̃_RF^(k)} is a Cauchy sequence, and thus it converges to a fixed point after a finite number (i.e., T) of iterations <cit.>. Similarly, both sequences {U^(k)} and {F_RF^(k)} are Cauchy sequences and converge to fixed points after T iterations, thanks to (<ref>) and Line 6 in Algorithm <ref>.
This completes the proof of Theorem <ref>.

§ PROOF OF THEOREM <REF>

Since the proposed AltOpt-LS-ADMM algorithm, i.e., Algorithm <ref>, has unique optimal solutions for both F_BB (see (<ref>)) and F_RF (see Theorem <ref>) at each iteration, we have

‖F_opt - F_RF^(i+1) F_BB^(i+1)‖_F ≤ ‖F_opt - F_RF^(i) F_BB^(i+1)‖_F ≤ ‖F_opt - F_RF^(i) F_BB^(i)‖_F,

which shows that the sequence {‖F_opt - F_RF^(i) F_BB^(i)‖_F} is monotonically decreasing. On the other hand, it is straightforward to see that ‖F_opt - F_RF^(i) F_BB^(i)‖_F is bounded from below by 0. This indicates that the sequence {‖F_opt - F_RF^(i) F_BB^(i)‖_F} generated by the proposed algorithm converges. This completes the proof of Theorem <ref>.

§ PROOF OF LEMMA <REF>

The difference between the augmented Lagrangian function values at two successive iterations is calculated as

ℒ(F̃_RF^(k+1), F_RF^(k+1), U^(k+1)) - ℒ(F̃_RF^(k), F_RF^(k), U^(k))
= [ℒ(F̃_RF^(k+1), F_RF^(k+1), U^(k+1)) - ℒ(F̃_RF^(k+1), F_RF^(k+1), U^(k))]
+ [ℒ(F̃_RF^(k+1), F_RF^(k+1), U^(k)) - ℒ(F̃_RF^(k), F_RF^(k+1), U^(k))]
+ [ℒ(F̃_RF^(k), F_RF^(k+1), U^(k)) - ℒ(F̃_RF^(k), F_RF^(k), U^(k))].

The three terms in the above square brackets are bounded as follows. The first term satisfies

ℒ(F̃_RF^(k+1), F_RF^(k+1), U^(k+1)) - ℒ(F̃_RF^(k+1), F_RF^(k+1), U^(k))
= (ρ/2)(‖F̃_RF^(k+1) - F_RF^(k+1) + U^(k+1)‖_F^2 - ‖U^(k+1)‖_F^2) - (ρ/2)(‖F̃_RF^(k+1) - F_RF^(k+1) + U^(k)‖_F^2 - ‖U^(k)‖_F^2)
= (ρ/2)(‖2U^(k+1) - U^(k)‖_F^2 - 2‖U^(k+1)‖_F^2 + ‖U^(k)‖_F^2)
= ρ ‖U^(k+1) - U^(k)‖_F^2
= (1/ρ) ‖(F_opt - F̃_RF^(k+1) F_BB) F_BB^H - (F_opt - F̃_RF^(k) F_BB) F_BB^H‖_F^2
= (1/ρ) ‖(F̃_RF^(k) - F̃_RF^(k+1)) F_BB F_BB^H‖_F^2
≤ (1/ρ) ‖F_BB F_BB^H‖_F^2 ‖F̃_RF^(k+1) - F̃_RF^(k)‖_F^2,

where in (<ref>) we used the definition of ℒ(F̃_RF, F_RF, U); in (<ref>) we employed F̃_RF^(k+1) - F_RF^(k+1) = U^(k+1) - U^(k) (due to (<ref>)); in (<ref>) we utilized (<ref>); and in (<ref>) we used the fact that ‖MN‖_F ≤ ‖M‖_F ‖N‖_F holds for any matrices M and N of appropriate sizes. The second term is bounded as

ℒ(F̃_RF^(k+1), F_RF^(k+1), U^(k)) - ℒ(F̃_RF^(k), F_RF^(k+1), U^(k)) ≤ ℜ{⟨∇_{F̃_RF} ℒ(F̃_RF^(k+1), F_RF^(k+1), U^(k)), F̃_RF^(k+1) - F̃_RF^(k)⟩} - (γ/2) ‖F̃_RF^(k+1) - F̃_RF^(k)‖_F^2 = -((λ_min(F_BB F_BB^H) + ρ)/2) ‖F̃_RF^(k+1) - F̃_RF^(k)‖_F^2,

where in (<ref>) we utilized the strong convexity of the Lagrangian function ℒ(F̃_RF, F_RF, U) w.r.t. F̃_RF with parameter γ > 0 <cit.>, and in (<ref>) we adopted the optimality condition of (<ref>) and γ = λ_min(F_BB F_BB^H) + ρ, with λ_min(·) being the minimal eigenvalue of its argument (which is due to the facts that ℒ(F̃_RF, F_RF, U) is twice continuously differentiable w.r.t. F̃_RF, and its strong convexity parameter γ satisfies ∇^2_{F̃_RF} ℒ = F_BB F_BB^H + ρI ≽ γI for all F̃_RF <cit.>). Finally, the third term is bounded as

ℒ(F̃_RF^(k), F_RF^(k+1), U^(k)) - ℒ(F̃_RF^(k), F_RF^(k), U^(k)) ≤ 0,

where we employed the fact that F_RF^(k+1) is the minimizer of ℒ(F̃_RF^(k), F_RF, U^(k)) according to (<ref>).
Substituting the results of (<ref>), (<ref>), and (<ref>) into (<ref>) yields

ℒ(F̃_RF^(k+1), F_RF^(k+1), U^(k+1)) - ℒ(F̃_RF^(k), F_RF^(k), U^(k)) ≤ ((1/ρ) ‖F_BB F_BB^H‖_F^2 - (λ_min(F_BB F_BB^H) + ρ)/2)_(i) ‖F̃_RF^(k+1) - F̃_RF^(k)‖_F^2.

If ρ ≥ √2 ‖F_BB F_BB^H‖_F, the term (i) satisfies (i) ≤ 0, and thus

ℒ(F̃_RF^(k+1), F_RF^(k+1), U^(k+1)) - ℒ(F̃_RF^(k), F_RF^(k), U^(k)) ≤ 0.

This completes the proof of Lemma <ref>.

§ PROOF OF LEMMA <REF>

By using (<ref>), we have

ℒ(F̃_RF, F_RF, U) = (1/2) ‖F_opt - F̃_RF F_BB‖_F^2 + (ρ/2) ‖F̃_RF - F_RF + U‖_F^2 - (ρ/2) ‖(1/ρ)(F_opt - F̃_RF F_BB) F_BB^H‖_F^2
≥ (1/2) ‖F_opt - F̃_RF F_BB‖_F^2 + (ρ/2) ‖F̃_RF - F_RF + U‖_F^2 - (1/(2ρ)) ‖F_opt - F̃_RF F_BB‖_F^2 ‖F_BB‖_F^2
= (1/2)(1 - (1/ρ) ‖F_BB‖_F^2) ‖F_opt - F̃_RF F_BB‖_F^2 + (ρ/2) ‖F̃_RF - F_RF + U‖_F^2.

If ρ ≥ ‖F_BB‖_F^2, then ℒ(F̃_RF, F_RF, U) ≥ 0, which completes the proof of Lemma <ref>.
http://arxiv.org/abs/2312.15921v1
{ "authors": [ "Huiping Huang", "Musa Furkan Keskin", "Henk Wymeersch", "Xuesong Cai", "Linlong Wu", "Johan Thunberg", "Fredrik Tufvesson" ], "categories": [ "cs.IT", "eess.SP", "math.IT" ], "primary_category": "cs.IT", "published": "20231226072953", "title": "Hybrid Precoder Design for Angle-of-Departure Estimation with Limited-Resolution Phase Shifters" }
Observation of χ_cJ→ 3(K^+K^-)M. Ablikim^1, M. N. Achasov^4,c, P. Adlarson^75, O. Afedulidis^3, X. C. Ai^80, R. Aliberti^35, A. Amoroso^74A,74C, Q. An^71,58,a, Y. Bai^57, O. Bakina^36, I. Balossino^29A, Y. Ban^46,h, H.-R. Bao^63, V. Batozskaya^1,44, K. Begzsuren^32, N. Berger^35, M. Berlowski^44, M. Bertani^28A, D. Bettoni^29A, F. Bianchi^74A,74C, E. Bianco^74A,74C, A. Bortone^74A,74C, I. Boyko^36, R. A. Briere^5, A. Brueggemann^68, H. Cai^76, X. Cai^1,58, A. Calcaterra^28A, G. F. Cao^1,63, N. Cao^1,63, S. A. Cetin^62A, J. F. Chang^1,58, G. R. Che^43, G. Chelkov^36,b, C. Chen^43, C. H. Chen^9, Chao Chen^55, G. Chen^1, H. S. Chen^1,63, H. Y. Chen^20, M. L. Chen^1,58,63, S. J. Chen^42, S. L. Chen^45, S. M. Chen^61, T. Chen^1,63, X. R. Chen^31,63, X. T. Chen^1,63, Y. B. Chen^1,58, Y. Q. Chen^34, Z. J. Chen^25,i, Z. Y. Chen^1,63, S. K. Choi^10A, G. Cibinetto^29A, F. Cossio^74C, J. J. Cui^50, H. L. Dai^1,58, J. P. Dai^78, A. Dbeyssi^18, R.  E. de Boer^3, D. Dedovich^36, C. Q. Deng^72, Z. Y. Deng^1, A. Denig^35, I. Denysenko^36, M. Destefanis^74A,74C, F. De Mori^74A,74C, B. Ding^66,1, X. X. Ding^46,h, Y. Ding^34, Y. Ding^40, J. Dong^1,58, L. Y. Dong^1,63, M. Y. Dong^1,58,63, X. Dong^76, M. C. Du^1, S. X. Du^80, Z. H. Duan^42, P. Egorov^36,b, Y. H. Fan^45, J. Fang^59, J. Fang^1,58, S. S. Fang^1,63, W. X. Fang^1, Y. Fang^1, Y. Q. Fang^1,58, R. Farinelli^29A, L. Fava^74B,74C, F. Feldbauer^3, G. Felici^28A, C. Q. Feng^71,58, J. H. Feng^59, Y. T. Feng^71,58, M. Fritsch^3, C. D. Fu^1, J. L. Fu^63, Y. W. Fu^1,63, H. Gao^63, X. B. Gao^41, Y. N. Gao^46,h, Yang Gao^71,58, S. Garbolino^74C, I. Garzia^29A,29B, L. Ge^80, P. T. Ge^76, Z. W. Ge^42, C. Geng^59, E. M. Gersabeck^67, A. Gilman^69, K. Goetzen^13, L. Gong^40, W. X. Gong^1,58, W. Gradl^35, S. Gramigna^29A,29B, M. Greco^74A,74C, M. H. Gu^1,58, Y. T. Gu^15, C. Y. Guan^1,63, Z. L. Guan^22, A. Q. Guo^31,63, L. B. Guo^41, M. J. Guo^50, R. P. Guo^49, Y. P. Guo^12,g, A. Guskov^36,b, J. Gutierrez^27, K. L. Han^63, T. T. Han^1, X. Q. Hao^19, F. A. Harris^65, K. K. He^55, K. L. He^1,63, F. H. Heinsius^3, C. H. Heinz^35, Y. K. Heng^1,58,63, C. Herold^60, T. Holtmann^3, P. C. Hong^34, G. Y. Hou^1,63, X. T. Hou^1,63, Y. R. Hou^63, Z. L. Hou^1, B. Y. Hu^59, H. M. Hu^1,63, J. F. Hu^56,j, S. L. Hu^12,g, T. Hu^1,58,63, Y. Hu^1, G. S. Huang^71,58, K. X. Huang^59, L. Q. Huang^31,63, X. T. Huang^50, Y. P. Huang^1, T. Hussain^73, F. Hölzken^3, N Hüsken^27,35, N. in der Wiesche^68, J. Jackson^27, S. Janchiv^32, J. H. Jeong^10A, Q. Ji^1, Q. P. Ji^19, W. Ji^1,63, X. B. Ji^1,63, X. L. Ji^1,58, Y. Y. Ji^50, X. Q. Jia^50, Z. K. Jia^71,58, D. Jiang^1,63, H. B. Jiang^76, P. C. Jiang^46,h, S. S. Jiang^39, T. J. Jiang^16, X. S. Jiang^1,58,63, Y. Jiang^63, J. B. Jiao^50, J. K. Jiao^34, Z. Jiao^23, S. Jin^42, Y. Jin^66, M. Q. Jing^1,63, X. M. Jing^63, T. Johansson^75, S. Kabana^33, N. Kalantar-Nayestanaki^64, X. L. Kang^9, X. S. Kang^40, M. Kavatsyuk^64, B. C. Ke^80, V. Khachatryan^27, A. Khoukaz^68, R. Kiuchi^1, O. B. Kolcu^62A, B. Kopf^3, M. Kuessner^3, X. Kui^1,63, N.  Kumar^26, A. Kupsc^44,75, W. Kühn^37, J. J. Lane^67, P.  Larin^18, L. Lavezzi^74A,74C, T. T. Lei^71,58, Z. H. Lei^71,58, M. Lellmann^35, T. Lenz^35, C. Li^43, C. Li^47, C. H. Li^39, Cheng Li^71,58, D. M. Li^80, F. Li^1,58, G. Li^1, H. B. Li^1,63, H. J. Li^19, H. N. Li^56,j, Hui Li^43, J. R. Li^61, J. S. Li^59, Ke Li^1, L. J Li^1,63, L. K. Li^1, Lei Li^48, M. H. Li^43, P. R. Li^38,l, Q. M. Li^1,63, Q. X. Li^50, R. Li^17,31, S. X. Li^12, T.  Li^50, W. D. Li^1,63, W. G. Li^1,a, X. Li^1,63, X. H. Li^71,58, X. L. 
Li^50, X. Z. Li^59, Xiaoyu Li^1,63, Y. G. Li^46,h, Z. J. Li^59, Z. X. Li^15, C. Liang^42, H. Liang^71,58, H. Liang^1,63, Y. F. Liang^54, Y. T. Liang^31,63, G. R. Liao^14, L. Z. Liao^50, J. Libby^26, A.  Limphirat^60, C. C. Lin^55, D. X. Lin^31,63, T. Lin^1, B. J. Liu^1, B. X. Liu^76, C. Liu^34, C. X. Liu^1, F. H. Liu^53, Fang Liu^1, Feng Liu^6, G. M. Liu^56,j, H. Liu^38,k,l, H. B. Liu^15, H. M. Liu^1,63, Huanhuan Liu^1, Huihui Liu^21, J. B. Liu^71,58, J. Y. Liu^1,63, K. Liu^38,k,l, K. Y. Liu^40, Ke Liu^22, L. Liu^71,58, L. C. Liu^43, Lu Liu^43, M. H. Liu^12,g, P. L. Liu^1, Q. Liu^63, S. B. Liu^71,58, T. Liu^12,g, W. K. Liu^43, W. M. Liu^71,58, X. Liu^38,k,l, X. Liu^39, Y. Liu^80, Y. Liu^38,k,l, Y. B. Liu^43, Z. A. Liu^1,58,63, Z. D. Liu^9, Z. Q. Liu^50, X. C. Lou^1,58,63, F. X. Lu^59, H. J. Lu^23, J. G. Lu^1,58, X. L. Lu^1, Y. Lu^7, Y. P. Lu^1,58, Z. H. Lu^1,63, C. L. Luo^41, M. X. Luo^79, T. Luo^12,g, X. L. Luo^1,58, X. R. Lyu^63, Y. F. Lyu^43, F. C. Ma^40, H. Ma^78, H. L. Ma^1, J. L. Ma^1,63, L. L. Ma^50, M. M. Ma^1,63, Q. M. Ma^1, R. Q. Ma^1,63, X. T. Ma^1,63, X. Y. Ma^1,58, Y. Ma^46,h, Y. M. Ma^31, F. E. Maas^18, M. Maggiora^74A,74C, S. Malde^69, Y. J. Mao^46,h, Z. P. Mao^1, S. Marcello^74A,74C, Z. X. Meng^66, J. G. Messchendorp^13,64, G. Mezzadri^29A, H. Miao^1,63, T. J. Min^42, R. E. Mitchell^27, X. H. Mo^1,58,63, B. Moses^27, N. Yu. Muchnoi^4,c, J. Muskalla^35, Y. Nefedov^36, F. Nerling^18,e, L. S. Nie^20, I. B. Nikolaev^4,c, Z. Ning^1,58, S. Nisar^11,m, Q. L. Niu^38,k,l, W. D. Niu^55, Y. Niu ^50, S. L. Olsen^63, Q. Ouyang^1,58,63, S. Pacetti^28B,28C, X. Pan^55, Y. Pan^57, A.  Pathak^34, P. Patteri^28A, Y. P. Pei^71,58, M. Pelizaeus^3, H. P. Peng^71,58, Y. Y. Peng^38,k,l, K. Peters^13,e, J. L. Ping^41, R. G. Ping^1,63, S. Plura^35, V. Prasad^33, F. Z. Qi^1, H. Qi^71,58, H. R. Qi^61, M. Qi^42, T. Y. Qi^12,g, S. Qian^1,58, W. B. Qian^63, C. F. Qiao^63, X. K. Qiao^80, J. J. Qin^72, L. Q. Qin^14, L. Y. Qin^71,58, X. S. Qin^50, Z. H. Qin^1,58, J. F. Qiu^1, Z. H. Qu^72, C. F. Redmer^35, K. J. Ren^39, A. Rivetti^74C, M. Rolo^74C, G. Rong^1,63, Ch. Rosner^18, S. N. Ruan^43, N. Salone^44, A. Sarantsev^36,d, Y. Schelhaas^35, K. Schoenning^75, M. Scodeggio^29A, K. Y. Shan^12,g, W. Shan^24, X. Y. Shan^71,58, Z. J Shang^38,k,l, J. F. Shangguan^55, L. G. Shao^1,63, M. Shao^71,58, C. P. Shen^12,g, H. F. Shen^1,8, W. H. Shen^63, X. Y. Shen^1,63, B. A. Shi^63, H. Shi^71,58, H. C. Shi^71,58, J. L. Shi^12,g, J. Y. Shi^1, Q. Q. Shi^55, S. Y. Shi^72, X. Shi^1,58, J. J. Song^19, T. Z. Song^59, W. M. Song^34,1, Y.  J. Song^12,g, Y. X. Song^46,h,n, S. Sosio^74A,74C, S. Spataro^74A,74C, F. Stieler^35, Y. J. Su^63, G. B. Sun^76, G. X. Sun^1, H. Sun^63, H. K. Sun^1, J. F. Sun^19, K. Sun^61, L. Sun^76, S. S. Sun^1,63, T. Sun^51,f, W. Y. Sun^34, Y. Sun^9, Y. J. Sun^71,58, Y. Z. Sun^1, Z. Q. Sun^1,63, Z. T. Sun^50, C. J. Tang^54, G. Y. Tang^1, J. Tang^59, Y. A. Tang^76, L. Y. Tao^72, Q. T. Tao^25,i, M. Tat^69, J. X. Teng^71,58, V. Thoren^75, W. H. Tian^59, Y. Tian^31,63, Z. F. Tian^76, I. Uman^62B, Y. Wan^55,S. J. Wang ^50, B. Wang^1, B. L. Wang^63, Bo Wang^71,58, D. Y. Wang^46,h, F. Wang^72, H. J. Wang^38,k,l, J. J. Wang^76, J. P. Wang ^50, K. Wang^1,58, L. L. Wang^1, M. Wang^50, Meng Wang^1,63, N. Y. Wang^63, S. Wang^12,g, S. Wang^38,k,l, T.  Wang^12,g, T. J. Wang^43, W.  Wang^72, W. Wang^59, W. P. Wang^35,71,o, X. Wang^46,h, X. F. Wang^38,k,l, X. J. Wang^39, X. L. Wang^12,g, X. N. Wang^1, Y. Wang^61, Y. D. Wang^45, Y. F. Wang^1,58,63, Y. L. Wang^19, Y. N. Wang^45, Y. Q. Wang^1, Yaqian Wang^17, Yi Wang^61, Z. 
Wang^1,58, Z. L.  Wang^72, Z. Y. Wang^1,63, Ziyi Wang^63, D. H. Wei^14, F. Weidner^68, S. P. Wen^1, Y. R. Wen^39, U. Wiedner^3, G. Wilkinson^69, M. Wolke^75, L. Wollenberg^3, C. Wu^39, J. F. Wu^1,8, L. H. Wu^1, L. J. Wu^1,63, X. Wu^12,g, X. H. Wu^34, Y. Wu^71,58, Y. H. Wu^55, Y. J. Wu^31, Z. Wu^1,58, L. Xia^71,58, X. M. Xian^39, B. H. Xiang^1,63, T. Xiang^46,h, D. Xiao^38,k,l, G. Y. Xiao^42, S. Y. Xiao^1, Y.  L. Xiao^12,g, Z. J. Xiao^41, C. Xie^42, X. H. Xie^46,h, Y. Xie^50, Y. G. Xie^1,58, Y. H. Xie^6, Z. P. Xie^71,58, T. Y. Xing^1,63, C. F. Xu^1,63, C. J. Xu^59, G. F. Xu^1, H. Y. Xu^66, M. Xu^71,58, Q. J. Xu^16, Q. N. Xu^30, W. Xu^1, W. L. Xu^66, X. P. Xu^55, Y. C. Xu^77, Z. P. Xu^42, Z. S. Xu^63, F. Yan^12,g, L. Yan^12,g, W. B. Yan^71,58, W. C. Yan^80, X. Q. Yan^1, H. J. Yang^51,f, H. L. Yang^34, H. X. Yang^1, Tao Yang^1, Y. Yang^12,g, Y. F. Yang^43, Y. X. Yang^1,63, Yifan Yang^1,63, Z. W. Yang^38,k,l, Z. P. Yao^50, M. Ye^1,58, M. H. Ye^8, J. H. Yin^1, Z. Y. You^59, B. X. Yu^1,58,63, C. X. Yu^43, G. Yu^1,63, J. S. Yu^25,i, T. Yu^72, X. D. Yu^46,h, Y. C. Yu^80, C. Z. Yuan^1,63, J. Yuan^34, L. Yuan^2, S. C. Yuan^1, Y. Yuan^1,63, Y. J. Yuan^45, Z. Y. Yuan^59, C. X. Yue^39, A. A. Zafar^73, F. R. Zeng^50, S. H.  Zeng^72, X. Zeng^12,g, Y. Zeng^25,i, Y. J. Zeng^59, X. Y. Zhai^34, Y. C. Zhai^50, Y. H. Zhan^59, A. Q. Zhang^1,63, B. L. Zhang^1,63, B. X. Zhang^1, D. H. Zhang^43, G. Y. Zhang^19, H. Zhang^80, H. Zhang^71,58, H. C. Zhang^1,58,63, H. H. Zhang^34, H. H. Zhang^59, H. Q. Zhang^1,58,63, H. R. Zhang^71,58, H. Y. Zhang^1,58, J. Zhang^80, J. Zhang^59, J. J. Zhang^52, J. L. Zhang^20, J. Q. Zhang^41, J. S. Zhang^12,g, J. W. Zhang^1,58,63, J. X. Zhang^38,k,l, J. Y. Zhang^1, J. Z. Zhang^1,63, Jianyu Zhang^63, L. M. Zhang^61, Lei Zhang^42, P. Zhang^1,63, Q. Y. Zhang^34, R. Y Zhang^38,k,l, Shuihan Zhang^1,63, Shulei Zhang^25,i, X. D. Zhang^45, X. M. Zhang^1, X. Y. Zhang^50, Y.  Zhang^72, Y.  T. Zhang^80, Y. H. Zhang^1,58, Y. M. Zhang^39, Yan Zhang^71,58, Yao Zhang^1, Z. D. Zhang^1, Z. H. Zhang^1, Z. L. Zhang^34, Z. Y. Zhang^76, Z. Y. Zhang^43, Z. Z.  Zhang^45, G. Zhao^1, J. Y. Zhao^1,63, J. Z. Zhao^1,58, Lei Zhao^71,58, Ling Zhao^1, M. G. Zhao^43, N. Zhao^78, R. P. Zhao^63, S. J. Zhao^80, Y. B. Zhao^1,58, Y. X. Zhao^31,63, Z. G. Zhao^71,58, A. Zhemchugov^36,b, B. Zheng^72, B. M. Zheng^34, J. P. Zheng^1,58, W. J. Zheng^1,63, Y. H. Zheng^63, B. Zhong^41, X. Zhong^59, H.  Zhou^50, J. Y. Zhou^34, L. P. Zhou^1,63, S.  Zhou^6, X. Zhou^76, X. K. Zhou^6, X. R. Zhou^71,58, X. Y. Zhou^39, Y. Z. Zhou^12,g, J. Zhu^43, K. Zhu^1, K. J. Zhu^1,58,63, K. S. Zhu^12,g, L. Zhu^34, L. X. Zhu^63, S. H. Zhu^70, S. Q. Zhu^42, T. J. Zhu^12,g, W. D. Zhu^41, Y. C. Zhu^71,58, Z. A. Zhu^1,63, J. H. Zou^1, J. 
Zu^71,58 (BESIII Collaboration)^1 Institute of High Energy Physics, Beijing 100049, People's Republic of China^2 Beihang University, Beijing 100191, People's Republic of China^3 BochumRuhr-University, D-44780 Bochum, Germany^4 Budker Institute of Nuclear Physics SB RAS (BINP), Novosibirsk 630090, Russia^5 Carnegie Mellon University, Pittsburgh, Pennsylvania 15213, USA^6 Central China Normal University, Wuhan 430079, People's Republic of China^7 Central South University, Changsha 410083, People's Republic of China^8 China Center of Advanced Science and Technology, Beijing 100190, People's Republic of China^9 China University of Geosciences, Wuhan 430074, People's Republic of China^10 Chung-Ang University, Seoul, 06974, Republic of Korea^11 COMSATS University Islamabad, Lahore Campus, Defence Road, Off Raiwind Road, 54000 Lahore, Pakistan^12 Fudan University, Shanghai 200433, People's Republic of China^13 GSI Helmholtzcentre for Heavy Ion Research GmbH, D-64291 Darmstadt, Germany^14 Guangxi Normal University, Guilin 541004, People's Republic of China^15 Guangxi University, Nanning 530004, People's Republic of China^16 Hangzhou Normal University, Hangzhou 310036, People's Republic of China^17 Hebei University, Baoding 071002, People's Republic of China^18 Helmholtz Institute Mainz, Staudinger Weg 18, D-55099 Mainz, Germany^19 Henan Normal University, Xinxiang 453007, People's Republic of China^20 Henan University, Kaifeng 475004, People's Republic of China^21 Henan University of Science and Technology, Luoyang 471003, People's Republic of China^22 Henan University of Technology, Zhengzhou 450001, People's Republic of China^23 Huangshan College, Huangshan245000, People's Republic of China^24 Hunan Normal University, Changsha 410081, People's Republic of China^25 Hunan University, Changsha 410082, People's Republic of China^26 Indian Institute of Technology Madras, Chennai 600036, India^27 Indiana University, Bloomington, Indiana 47405, USA^28 INFN Laboratori Nazionali di Frascati , (A)INFN Laboratori Nazionali di Frascati, I-00044, Frascati, Italy; (B)INFN Sezione diPerugia, I-06100, Perugia, Italy; (C)University of Perugia, I-06100, Perugia, Italy^29 INFN Sezione di Ferrara, (A)INFN Sezione di Ferrara, I-44122, Ferrara, Italy; (B)University of Ferrara,I-44122, Ferrara, Italy^30 Inner Mongolia University, Hohhot 010021, People's Republic of China^31 Institute of Modern Physics, Lanzhou 730000, People's Republic of China^32 Institute of Physics and Technology, Peace Avenue 54B, Ulaanbaatar 13330, Mongolia^33 Instituto de Alta Investigación, Universidad de Tarapacá, Casilla 7D, Arica 1000000, Chile^34 Jilin University, Changchun 130012, People's Republic of China^35 Johannes Gutenberg University of Mainz, Johann-Joachim-Becher-Weg 45, D-55099 Mainz, Germany^36 Joint Institute for Nuclear Research, 141980 Dubna, Moscow region, Russia^37 Justus-Liebig-Universitaet Giessen, II. 
Physikalisches Institut, Heinrich-Buff-Ring 16, D-35392 Giessen, Germany^38 Lanzhou University, Lanzhou 730000, People's Republic of China^39 Liaoning Normal University, Dalian 116029, People's Republic of China^40 Liaoning University, Shenyang 110036, People's Republic of China^41 Nanjing Normal University, Nanjing 210023, People's Republic of China^42 Nanjing University, Nanjing 210093, People's Republic of China^43 Nankai University, Tianjin 300071, People's Republic of China^44 National Centre for Nuclear Research, Warsaw 02-093, Poland^45 North China Electric Power University, Beijing 102206, People's Republic of China^46 Peking University, Beijing 100871, People's Republic of China^47 Qufu Normal University, Qufu 273165, People's Republic of China^48 Renmin University of China, Beijing 100872, People's Republic of China^49 Shandong Normal University, Jinan 250014, People's Republic of China^50 Shandong University, Jinan 250100, People's Republic of China^51 Shanghai Jiao Tong University, Shanghai 200240,People's Republic of China^52 Shanxi Normal University, Linfen 041004, People's Republic of China^53 Shanxi University, Taiyuan 030006, People's Republic of China^54 Sichuan University, Chengdu 610064, People's Republic of China^55 Soochow University, Suzhou 215006, People's Republic of China^56 South China Normal University, Guangzhou 510006, People's Republic of China^57 Southeast University, Nanjing 211100, People's Republic of China^58 State Key Laboratory of Particle Detection and Electronics, Beijing 100049, Hefei 230026, People's Republic of China^59 Sun Yat-Sen University, Guangzhou 510275, People's Republic of China^60 Suranaree University of Technology, University Avenue 111, Nakhon Ratchasima 30000, Thailand^61 Tsinghua University, Beijing 100084, People's Republic of China^62 Turkish Accelerator Center Particle Factory Group, (A)Istinye University, 34010, Istanbul, Turkey; (B)Near East University, Nicosia, North Cyprus, 99138, Mersin 10, Turkey^63 University of Chinese Academy of Sciences, Beijing 100049, People's Republic of China^64 University of Groningen, NL-9747 AA Groningen, The Netherlands^65 University of Hawaii, Honolulu, Hawaii 96822, USA^66 University of Jinan, Jinan 250022, People's Republic of China^67 University of Manchester, Oxford Road, Manchester, M13 9PL, United Kingdom^68 University of Muenster, Wilhelm-Klemm-Strasse 9, 48149 Muenster, Germany^69 University of Oxford, Keble Road, Oxford OX13RH, United Kingdom^70 University of Science and Technology Liaoning, Anshan 114051, People's Republic of China^71 University of Science and Technology of China, Hefei 230026, People's Republic of China^72 University of South China, Hengyang 421001, People's Republic of China^73 University of the Punjab, Lahore-54590, Pakistan^74 University of Turin and INFN, (A)University of Turin, I-10125, Turin, Italy; (B)University of Eastern Piedmont, I-15121, Alessandria, Italy; (C)INFN, I-10125, Turin, Italy^75 Uppsala University, Box 516, SE-75120 Uppsala, Sweden^76 Wuhan University, Wuhan 430072, People's Republic of China^77 Yantai University, Yantai 264005, People's Republic of China^78 Yunnan University, Kunming 650500, People's Republic of China^79 Zhejiang University, Hangzhou 310027, People's Republic of China^80 Zhengzhou University, Zhengzhou 450001, People's Republic of China^a Deceased^b Also at the Moscow Institute of Physics and Technology, Moscow 141700, Russia^c Also at the Novosibirsk State University, Novosibirsk, 630090, Russia^d Also at the NRC "Kurchatov 
Institute", PNPI, 188300, Gatchina, Russia^e Also at Goethe University Frankfurt, 60323 Frankfurt am Main, Germany^f Also at Key Laboratory for Particle Physics, Astrophysics and Cosmology, Ministry of Education; Shanghai Key Laboratory for Particle Physics and Cosmology; Institute of Nuclear and Particle Physics, Shanghai 200240, People's Republic of China^g Also at Key Laboratory of Nuclear Physics and Ion-beam Application (MOE) and Institute of Modern Physics, Fudan University, Shanghai 200443, People's Republic of China^h Also at State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing 100871, People's Republic of China^i Also at School of Physics and Electronics, Hunan University, Changsha 410082, China^j Also at Guangdong Provincial Key Laboratory of Nuclear Science, Institute of Quantum Matter, South China Normal University, Guangzhou 510006, China^k Also at MOE Frontiers Science Center for Rare Isotopes, Lanzhou University, Lanzhou 730000, People's Republic of China^l Also at Lanzhou Center for Theoretical Physics, Lanzhou University, Lanzhou 730000, People's Republic of China^m Also at the Department of Mathematical Sciences, IBA, Karachi 75270, Pakistan^n Also at Ecole Polytechnique Federale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland^o Also at Helmholtz Institute Mainz, Staudinger Weg 18, D-55099 Mainz, GermanyJanuary 14, 2024 ===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Spatio-temporal forecasting of future values of spatially correlated time series is important across many cyber-physical systems (CPS). Recent studies offer evidence that the use of graph neural networks to capture latent correlations between time series holds potential for enhanced forecasting. However, most existing methods rely on pre-defined or self-learning graphs, which are either static or unintentionally dynamic, and thus cannot model the time-varying correlations that exhibit trends and periodicities caused by the regularity of the underlying processes in CPS. To tackle this limitation, we propose Time-aware Graph Structure Learning (TagSL), which extracts time-aware correlations among time series by measuring the interaction of node and time representations in high-dimensional spaces. Notably, we introduce time discrepancy learning that utilizes contrastive learning with distance-based regularization terms to constrain learned spatial correlations to a trend sequence. Additionally, we propose a periodic discriminant function to enable the capture of periodic changes from the state of nodes. Next, we present a Graph Convolution-based Gated Recurrent Unit (GCGRU) that jointly captures spatial and temporal dependencies while learning time-aware and node-specific patterns. Finally, we introduce a unified framework named Time-aware Graph Convolutional Recurrent Network (TGCRN), combining TagSL and GCGRU in an encoder-decoder architecture for multi-step spatio-temporal forecasting. We report on experiments with TGCRN and popular existing approaches on five real-world datasets, thus providing evidence that TGCRN is capable of advancing the state-of-the-art. We also cover a detailed ablation study and visualization analysis, offering insight into the effectiveness of time-aware structure learning.
Time series forecasting, spatio-temporal graph neural networks, time-aware graph structure learning

§ INTRODUCTION

Cyber-physical systems (CPS) that are capable of responding dynamically to real-time changes in the physical world based on input from sensors hold many benefits. In this setting, the forecasting of the time series produced by spatially distributed sensors plays an essential role, as it allows CPS to make informed decisions and dynamically adjust to the ever-changing physical world. This ultimately enables improved overall efficiency, reliability, and responsiveness in a wide range of domains, such as air quality forecasting <cit.>, weather forecasting <cit.>, transportation planning <cit.>, and vessel collision risk warning <cit.>.

Existing spatio-temporal forecasting (STF) methods <cit.> demonstrate that forecasting accuracy improves significantly when considering both temporal and spatial correlations. For instance, in a metro system, predicting the future outbound passenger flow at one station (e.g., station 1 in Fig. <ref>) requires considering its historical flow (temporal correlation) and the passenger flow at connected stations, e.g., stations 2 and 3 (spatial correlation). The STF problem can be approached as spatio-temporal graph learning, where sensors are treated as graph nodes and the spatial correlations between sensors are seen as edges. In this framework, time series serve as node features, and a temporal module, such as recurrent neural networks or convolutional neural networks (CNNs), captures temporal correlations for each node. Additionally, graph neural networks (GNNs) are employed to capture hidden spatial dependencies. Beyond designing sophisticated GNN models, many studies focus on learning optimal graph structures for specific downstream tasks, since the success of GNNs can be attributed to their ability to exploit the potential correlations in graph structures <cit.>. In general, graph structures encompass pre-defined graphs <cit.> and self-learning graphs <cit.>. The former constructs a task-related graph structure based on domain knowledge, such as geospatial distances between nodes, route topology, or node feature similarity. The latter allows a network to learn a graph structure from data.

Spatial correlations exhibit regular time-varying dynamics, specifically manifested as trends and periodicities. We illustrate these two patterns using a public transportation scenario as an example. As shown in Fig. <ref>, the number of Origin-Destination pairs indicates the strength of the spatial correlation between the stations. Specifically, stations 1, 2, and 3 are metro stations located in residential, shopping, and business areas, respectively. Spatial Trend denotes an increasing or decreasing correlation over time. The red curve in Fig. <ref> increases gradually during the morning rush hour, as people commute to work, and then decreases as they reach their destinations. Subsequently, around 18:00, there is an increase in passenger transfers between stations 1 and 3, as well as between stations 2 and 3, as people return home or engage in non-work activities. Spatial Periodicity denotes a temporal recurrence of correlations. Passenger transfers between stations show different cyclic daily patterns on weekdays and weekends, as indicated by the dashed red line in Fig. <ref> that separates the two. On weekends, the correlations display significant differences due to the impact of leisure activities and reduced work-related commuting.
These dynamics are also observed in air quality, water quality, and other CPS applications, which are affected by the regularity of the underlying processes. However, existing forecasting solutions are ill-equipped to capture spatial trends and periodicities. Solutions employing pre-defined graphs require excessive computational and storage resources to pre-compute spatial correlations and may also introduce inevitable biases due to incomplete prior assumptions <cit.>. Solutions employing self-learning graphs either exhibit difficulties in representing dynamics or fail to explicitly consider the regularity of dynamics <cit.>. Overall, three challenges need to be addressed:

1. Learning dynamic graph structures with trends and periodicities has rarely been explored in spatio-temporal forecasting.

2. Constructing graphs that are capable of representing dynamics, which are time-varying, inevitably introduces more model parameters, making convergence harder.

3. Dynamic spatio-temporal correlations are difficult to learn. On the one hand, temporal and spatial correlations affect each other dynamically. On the other hand, the current spatio-temporal state is affected by the past state, and this influence propagates and accumulates over time.

In this paper, we propose a novel framework, the Time-aware Graph Convolutional Recurrent Network (TGCRN), to tackle the aforementioned challenges. First, we construct graphs with trends and periodicities to represent spatial correlations. Instead of employing sophisticated neural networks, we decompose the graph learning into node and time representations and then blend them to build time-aware graphs. Next, we utilize graph convolution-based gated recurrent units to effectively capture both spatial and temporal dependencies, combining graph convolution on the time-aware graph structures with a gating mechanism for integrating the current input and the previous state. Finally, we present TGCRN, which employs an encoder-decoder architecture integrating time-aware graph structure learning and the graph convolution-based gated recurrent unit. This recursive integration allows the model to effectively capture the trends and periodicities of spatio-temporal correlations. Our contributions are four-fold:

* Our solution is the first to capture dynamic spatial correlations with trends and periodicities in spatio-temporal forecasting by learning the regular dynamics of graph structures. It opens a new avenue for spatio-temporal analysis research.
* We propose a novel time-aware graph learning method, incorporating time discrepancy learning and a periodic discriminant function to construct a series of time-aware graphs. Our method adopts a factorized learning perspective on graph structure learning, allowing adaptive learning of the dynamics of spatial correlations from data.
* We develop a holistic model that automatically learns node and time representations and graph structures. Further, it recursively captures regular spatio-temporal dependencies in an end-to-end fashion. This is done by employing an encoder-decoder architecture for multi-step time series forecasting.
* Experimental results on five real-world datasets show that the proposed method is capable of outperforming the state-of-the-art graph-based approaches. We visualize the learned graph structures, thereby offering insight into the distinct trends and periodicities of spatial correlations over time.

§ PRELIMINARIES

In this section, we provide related preliminaries on spatio-temporal forecasting.
Table <ref> summarizes frequently used notation.

§.§ Definitions

[Spatially Correlated Time Series] We use 𝒳 = (𝒳_t_1, 𝒳_t_2, ⋯, 𝒳_t_P) ∈ℝ^N× P × d to denote N spatially correlated multivariate time series, where each time series covers P timestamps with d-dimensional features.

[Graph] We use a graph 𝒢=(𝒱, ℰ) to represent the spatial correlations between time series, where 𝒱 is a set of nodes (representing time series) and ℰ is a set of weighted edges. An adjacency matrix 𝒜∈ℝ^N × N, where N=|𝒱|, is used to represent the graph. Thus, 𝒜_i,j denotes the weight of the edge between nodes v_i and v_j; further, 𝒜_i,j = 0 means that there is no edge between nodes v_i and v_j.

[Time-aware Graph] A time-aware graph 𝒢^t, which we represent by an adjacency matrix 𝒜^t, captures the spatial correlations between correlated time series at time t.

§.§ Spatial Periodicity and Trend

To verify the existence of spatial patterns with trends and periodicities, we calculate the Origin-Destination (OD) transfers in the Hangzhou metro system. The OD transfers represent the spatial correlations between stations and can be denoted as an adjacency matrix 𝒜, where 𝒜_i,j denotes the number of passengers traveling from station i to station j (a minimal computational sketch follows this subsection). As shown in Fig. <ref>, four stations are located in different areas of Hangzhou. First, we observe that the passenger flows of each station from 08:15 to 09:00 on weekdays are significantly higher than on weekends, and we see that these flows decrease as the morning peak ends. Then we visualize the OD transfers in the time interval 08:00 – 08:15 across the week, {𝒜_SAT^t_1, ⋯, 𝒜_FRI^t_1}, via heat maps, where timestamp 08:00 is denoted as t_1 and where we omit the ending timestamp for brevity. We see that 𝒜_SAT^t_1 is similar to 𝒜_SUN^t_1 and that {𝒜_MON^t_1, ⋯, 𝒜_FRI^t_1} are similar to each other with minor fluctuations, showing distinct weekend and weekday periodicities driven by the demand for work. Moreover, we randomly choose one workday and visualize the spatial correlations over consecutive 15-minute time spans from 08:00 to 17:30, finding a continuous dynamic pattern. For example, the number of passenger transfers from station 5 to station 4 decreases gradually from timestamp t_1 to t_4, i.e., 𝒜_5,4^t_1 > 𝒜_5,4^t_2 > 𝒜_5,4^t_3 > 𝒜_5,4^t_4.
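To make the analysis above concrete, the following sketch counts OD transfers per 15-minute interval from raw trip records; the column names and the pandas-based implementation are assumptions for illustration, not the paper's actual preprocessing code.

```python
# Bucket trip records into 15-minute intervals and count passengers per
# origin-destination pair, yielding one OD adjacency matrix A^t per interval.
import numpy as np
import pandas as pd

def od_matrices(trips: pd.DataFrame, n_stations: int, freq: str = "15min"):
    """trips: columns ['entry_station', 'exit_station', 'entry_time']."""
    mats = {}
    for t, group in trips.groupby(pd.Grouper(key="entry_time", freq=freq)):
        a = np.zeros((n_stations, n_stations), dtype=np.int64)
        # A[i, j] = number of passengers travelling from station i to j
        np.add.at(a, (group["entry_station"].to_numpy(),
                      group["exit_station"].to_numpy()), 1)
        mats[t] = a
    return mats
```

Heat maps such as those in Fig. <ref> can then be drawn directly from the returned matrices.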
§.§ Problem Statement

In the spatio-temporal forecasting task, given a system of spatially correlated time series, our goal is to learn a function ℱ that maps the historical observations 𝒳 to predictions of the following Q future time steps Ŷ = (ŷ_t_P+1, ⋯, ŷ_t_P+Q). We formulate ℱ as follows.

(x_t_1, x_t_2, ⋯, x_t_P) ℱ⟶ (ŷ_t_P+1, ŷ_t_P+2, ⋯, ŷ_t_P+Q)

§ METHODOLOGY

We proceed to detail the proposed TGCRN method. First, we elaborate on how to capture spatial trends and periodicities between time series by learning an optimized time-aware graph structure in Section <ref>. Then we introduce a Graph Convolution-based Gated Recurrent Unit to extract hidden spatio-temporal dependencies in Section <ref>. Finally, we present the Time-aware Graph Convolutional Recurrent Network framework that integrates Time-aware Graph Structure Learning and the Graph Convolution-based Gated Recurrent Unit with an encoder-decoder architecture for multi-step forecasting in Section <ref>.

§.§ Time-aware Graph Structure Learning

§.§.§ Overview of TagSL

Fig. <ref> shows that the correlations between time series are dynamic over consecutive time steps and further exhibit periodicities and trends caused by the underlying processes. To capture such dynamics in graphs, one idea is to pre-construct the graph structure. However, this inevitably yields two problems: 1) high space and time complexity; and 2) human bias introduced by the prior knowledge-guided metric used to measure correlations between time series, e.g., geographical distance.

To address these problems, we propose Time-aware Graph Structure Learning (TagSL), a generic data-driven method. As illustrated in Fig. <ref>, TagSL learns time-aware graph structures by blending the node state with the representations of nodes and time. Formally, we define ϕ(E_ν, Φ(t), 𝒳_t) := ℱ_𝒢(t), where E_ν∈ℝ^N × d_N denotes the node embedding with d_N-dimensional vectors for N nodes, Φ(t): t ↦ℝ^d_T is a time encoding function that maps times to d_T-dimensional vectors, and 𝒳_t ∈ℝ^N× d is the node state at time step t. ϕ(·) is a composition function that we study to generate the graph adjacency matrix at a specific time.

The inspiration for TagSL stems from the self-learning graph <cit.>. As shown in Fig. <ref>, it uses the inner product of the representations of node pairs, i.e., ⟨ E_ν^i, E_ν^j ⟩ := ℱ_𝒢, to measure edge weights, which can not only learn hidden inter-dependencies between nodes but can also reduce the number of parameters compared to directly learning an adjacency matrix. To build a time-aware graph structure, suppose that the concatenation of the node and time representations, e_i,t_1 = [E_ν^i; Φ(t_1)], enables the vector representation of node i to combine the time t_1 with static node information. Thus, the time-aware correlation between nodes can be defined as follows:

⟨ e_i,t_1, e_j,t_1⟩ = ⟨ E_ν^i, E_ν^j ⟩ + ⟨Φ(t_1), Φ(t_1) ⟩,

where ⟨·,·⟩ is the inner product operator. Generally, the interactions between time series occur over time, meaning that calculations involving adjacent time steps can reveal more temporal behavior than calculations at a single time step can. Thus, ⟨Φ(t_1), Φ(t_2) ⟩ expresses more meaningful temporal information than ⟨Φ(t_1), Φ(t_1) ⟩. Specifically, ⟨ E_ν^i, E_ν^j ⟩ represents the static spatial correlation between nodes i and j, and ⟨Φ(t_1), Φ(t_2) ⟩ is intended to represent the temporal evolution of the graph structure. This way, the learning of spatial trends and spatial periodicity is transformed into time representation learning. We further include a time discrepancy learning module to preserve the spatial trend and a periodic discriminant learning module to distinguish periods.
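The factorized score above can be written compactly; the following PyTorch sketch (assumed shapes, illustrative only) shows how the concatenated inner product splits into a static node term plus a purely temporal scalar that shifts all edge weights at a given time.

```python
import torch

def time_aware_scores(E_nu: torch.Tensor,    # (N, d_N) node embeddings
                      phi_t1: torch.Tensor,  # (d_T,) time representation
                      phi_t2: torch.Tensor   # (d_T,) adjacent time step
                      ) -> torch.Tensor:
    static = E_nu @ E_nu.T             # <E_i, E_j> for all node pairs
    trend = torch.dot(phi_t1, phi_t2)  # <Phi(t1), Phi(t2)>, a scalar
    return static + trend              # broadcast over the N x N score grid
```

This mirrors the η_τ^t term in the TagSL formulas given later: a scalar temporal factor added to the static self-learning matrix.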
§.§.§ Time Discrepancy Learning

The time encoding function Φ(·) should satisfy two criteria to learn spatial trends. First, it should conform to translation variance, i.e., the metrics of the time representations vary over adjacent time steps. Assuming a function ⟨Φ(t), Φ(t+c) ⟩ := 𝒦(t, t+c), translation variance can be formulated as 𝒦(t, t+c) ≠𝒦(t+i· c, t+(i+1)· c), where i ≠ 0 and c denotes the time interval. Second, the encoding should preserve the discrepancies between time steps. For example, 08:00 and 09:00 should be more similar than 08:00 and 10:00. While several studies, like Time2vec <cit.> and TGAT <cit.>, explore model-agnostic and heuristic-driven time representation functions, they primarily focus on intrinsic properties, such as invariance to time rescaling and differences in the ranges of time steps. We design a time encoding function based on embedding technology and self-supervised learning. Considering a minimum periodicity such as a day, we first discretize continuous time into a finite sequence of timestamps covering one day, denoted as T=[t_0, t_1, ⋯, t_max]. Then we randomly initialize learnable time vectors E_τ∈ℝ^|T| × d_T in a finite-dimensional space for all elements, which are optimized using gradient descent.

To learn the discrepancies between the vector representations of time steps and make them proportional to the distances in the time domain, we propose a distance-based proportion regularization term to constrain the time embedding. As shown in Fig. <ref>, there are three different sets of time steps: adjacent, mid-distance, and distant time steps (|t_γ_1-t_γ_2| ≫ (P+Q)). We aim for the time representations to be more similar if their time steps are closer (e.g., the anchor and an adjacent time step), and the opposite if the two are farther apart (e.g., the mid-distance or distant time steps). This can be achieved by using the following objective loss:

ℒ_time = ∑_i,j||ζ_i/d_i - ζ_j/d_j||_1 + ∑_i,k||ζ_i/d_i - ζ_k/d_k||_1 + ∑_j,k||ζ_j/d_j - ζ_k/d_k||_1,

with ζ_i = ℱ_sim(E_τ^t_i, E_τ^t_𝒪) and d_i = ℱ_dist(t_i, t_𝒪). Here, we omit the expressions for ζ_j and ζ_k, which are computed in the same way as ζ_i, except for the time step. ℱ_sim denotes the similarity of time representations, and ℱ_dist denotes the distance between time steps. Considering the similarity measurement in vector space, we utilize the Euclidean distance as ℱ_sim. To keep the distance between time steps symmetric, we simplify ℱ_dist to the L1 distance. t_i, t_j, and t_k denote adjacent, mid-distance, and distant time steps, all sampled relative to an anchor time step t_𝒪. The detailed sampling strategy is shown in Algorithm <ref>. We randomly select a time step for each sample in a batch as an anchor; one of the previous or next γ_▵ time steps of the anchor is considered adjacent, a time step outside the adjacent range in the same sample is taken as mid-distant, and a time step from another sample is considered distant. Empirically, we set γ_▵ to half the length of the input time steps. By involving more general sampled cases, we aim to regularize the model toward a smoothly varying, translation-variant time representation.
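A compact sketch of this regularizer is given below; the index-based sampling interface is an assumption for illustration (in the paper, the three sets are drawn by Algorithm <ref>), and the clamp merely guards against division by zero.

```python
import torch

def time_discrepancy_loss(E_tau: torch.Tensor,   # (|T|, d_T) time embeddings
                          anchor: torch.Tensor,  # (B,) anchor indices t_O
                          near: torch.Tensor,    # (B,) adjacent indices t_i
                          mid: torch.Tensor,     # (B,) mid-distance t_j
                          far: torch.Tensor      # (B,) distant indices t_k
                          ) -> torch.Tensor:
    def ratio(idx):
        # F_sim: Euclidean distance between time representations
        zeta = torch.norm(E_tau[idx] - E_tau[anchor], dim=-1)
        # F_dist: L1 distance between the time steps themselves
        d = (idx - anchor).abs().clamp(min=1).float()
        return zeta / d
    r_i, r_j, r_k = ratio(near), ratio(mid), ratio(far)
    return ((r_i - r_j).abs() + (r_i - r_k).abs() + (r_j - r_k).abs()).mean()
```

Minimizing the pairwise differences of the ratios ζ/d pushes the embedding distances to grow in proportion to the time-domain distances.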
§.§.§ Periodic Discriminant Function

To distinguish periods and generate a graph structure that captures the corresponding spatial correlations, we design a periodic discriminant function. We observe that the node state is quite distinct at the same daily time in different periods. Taking traffic flow as an example, the flow differs between weekdays and weekends at the same time of day due to different travel demands, and this pattern can be extracted from the observations to distinguish the two. Hence, we propose a discriminant function that identifies the corresponding period based on the current node state. Specifically, node states can be mapped to different ranges through piecewise nonlinear functions, and the inner product further expands the boundaries.

Formally, combining self-learning graph construction, time representation, and periodic discrimination, we form TagSL. Given the node embeddings E_ν∈ℝ^N × d_N, the time representations E_τ∈ℝ^|T| × d_T, and the node features 𝒳∈ℝ^N × |T| × d_F, the adjacency matrix 𝒜^t of the learned time-aware graph can be formulated as follows.

𝒜_ν = ⟨ E_ν, E_ν^T ⟩
η_τ^t = ⟨ E_τ^t, (E_τ^t-1)^T ⟩
𝒜_ρ = tanh(⟨𝒳, 𝒳^T ⟩)
𝒜^t = (1+ασ(𝒜_ρ)) ⊙ (𝒜_ν + η_τ^t),

where 𝒜_ν, 𝒜_ρ∈ℝ^N× N denote the self-learning matrix and the periodic discriminant matrix, η_τ^t is a scalar denoting the trend factor, σ(·) is the sigmoid function, ⊙ denotes the Hadamard product, and α is the saturation factor, a hyperparameter for adjusting the weight of the periodic effect on the current spatial correlations.

§.§.§ Comparison with existing approaches

First, we visualize how existing methods construct and utilize graph structures in Fig. <ref> and give the corresponding formulas in Table <ref>. Generally, a pre-defined graph structure (𝒜) is constructed from domain knowledge and remains fixed during both phases. The self-learning methods derive an optimized graph structure (𝒜_sl) using a metric function on node embeddings, such as the inner product. Pre-defined and self-learning graphs are static for all samples during the testing phase and thus cannot handle dynamic spatial correlations. The dynamic method employs a neural network-based module that uses the nodes' hidden states to generate a series of evolving graph structures (𝒜_dyn) but lacks an in-depth consideration of regular spatial correlations.

§.§ Graph Convolution-based Gated Recurrent Unit

Most recent proposals employ graph convolutional networks to capture spatial dependencies between time series, with the main objective of learning node representations through message passing. The prominent graph convolutional operation <cit.> adopts a first-order approximation of Chebyshev polynomial expansions in the spectral domain. Given the multivariate time series X ∈ℝ^N× C_in of C_in-dimensional feature vectors, the convolution can be expressed as follows.

Z = L^sym X 𝒲 + b,

where L^sym is a symmetrically normalized Laplacian matrix, 𝒲∈ℝ^N × C_in× C_out and b ∈ℝ^C_out are trainable parameters, and Z ∈ℝ^N × C_out represents the convolved features. In addition to capturing the inter-variable correlations, the gated recurrent unit, a variant of recurrent neural networks with a gating mechanism, is used for capturing intra-variable temporal patterns. Considering both spatial and temporal dependencies, we propose a graph convolution-based gated recurrent unit that is defined as follows.

𝒜̂^t = Norm(𝒜^t)
E^t = [E_ν; E_τ,t]
z_t = σ(𝒜̂^t [𝒳_:t; h_t-1] E^t W_z + E^t b_z)
r_t = σ(𝒜̂^t [𝒳_:t; h_t-1] E^t W_r + E^t b_r)
ĥ_t = tanh(𝒜̂^t [𝒳_:t; r_t ⊙ h_t-1] E^t W_ĥ + E^t b_ĥ)
h_t = (1-z_t) ⊙ h_t-1 + z_t ⊙ĥ_t

Here, 𝒜̂^t is the normalized adjacency matrix of the time-aware graph at time step t, Norm denotes a normalization function, e.g., the softmax function, tanh denotes the hyperbolic tangent function, and z_t, r_t, and ĥ_t are the update, reset, and candidate activation vectors, respectively. Each gate considers the previous hidden state and the current input with learned parameters, including the weight matrix W ∈ℝ^C_in× C_out and the bias b ∈ℝ^C_out. Meanwhile, to reduce the parameter scale and control the overfitting caused by the weight 𝒲∈ℝ^N × C_in× C_out, we employ the matrix factorization 𝒲 = E W with E ∈ℝ^N × d and d ≪ N, where E denotes the node representations.
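The following PyTorch sketch of a single GCGRU step is illustrative: shapes are simplified to one graph-convolution hop, and the weight factorization through the concatenated node/time representation E^t follows the equations above.

```python
import torch

def gcgru_step(A_t, x_t, h_prev, E_t, Wz, Wr, Wh, bz, br, bh):
    # A_t: (N, N) normalized time-aware adjacency; x_t: (N, d_in)
    # h_prev: (N, d_h); E_t = [E_nu ; E_tau,t]: (N, d_e)
    # W*: (d_e, d_in + d_h, d_h); b*: (d_e, d_h)
    def gconv(inp, W, b):
        support = A_t @ inp                            # neighborhood aggregation
        weights = torch.einsum("ne,eio->nio", E_t, W)  # node-specific weights
        return torch.einsum("ni,nio->no", support, weights) + E_t @ b
    xh = torch.cat([x_t, h_prev], dim=-1)
    z = torch.sigmoid(gconv(xh, Wz, bz))               # update gate
    r = torch.sigmoid(gconv(xh, Wr, br))               # reset gate
    h_cand = torch.tanh(gconv(torch.cat([x_t, r * h_prev], dim=-1), Wh, bh))
    return (1 - z) * h_prev + z * h_cand               # new hidden state
```

Stacking such steps over time, with A_t regenerated by TagSL at every step, yields the recurrent encoder and decoder described next.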
§.§ Time-aware Graph Convolutional Recurrent Network

Here, we present the overall TGCRN framework, shown in Fig. <ref>, which adopts an encoder-decoder architecture to output multi-step predictions. To enhance the capacity of the feature representation, the encoder and decoder employ multi-layer networks and recursively extract time-aware spatio-temporal correlations. Specifically, given the input of the i-th layer in the encoder or decoder, X_t_j^i = h_t_j^i-1, the previous hidden state h^i_t_j-1, the node embedding E_ν, and the time vector E_τ,t_j at time t_j, these are first fed to TagSL to obtain a time-aware graph structure 𝒜^t_j. Then the GCGRU is utilized to aggregate the spatial correlations between nodes and their neighborhoods derived from 𝒜^t_j, and to capture the intra-variable temporal correlations. The output hidden state h^i_t_j serves as the input of the next unit. Note that X_t_j,enc^1 = 𝒳_t_j and X_t_j,dec^1 = h_t_j,enc^l for i=1. Further, the decoder and the encoder have an identical structure, except for an additional output layer that transforms the hidden states [h^l_t_P+1, ⋯, h^l_t_P+Q] of the last layer of the decoder into an output with the desired dimensionality.

Finally, we present the overall learning objective of TGCRN, which combines an auxiliary time discrepancy learning loss with an error loss. Formally,

ℒ = ℒ_error + λℒ_time with ℒ_error = 1/|Y|∑_i^|Y||Y_i-Ŷ_i|,

where ℒ_error measures the mean absolute error between the ground truth Y and the prediction Ŷ, ℒ_time is the time discrepancy regularization term introduced above, and λ is an adjustable hyperparameter. To sum up, the goal of our task is to optimize all trainable parameters by minimizing the joint loss objective.
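A minimal sketch of the joint objective, reusing the time_discrepancy_loss sketch above for ℒ_time; the sensitivity study later suggests λ ≈ 0.1.

```python
import torch

def joint_loss(y_pred: torch.Tensor, y_true: torch.Tensor,
               l_time: torch.Tensor, lam: float = 0.1) -> torch.Tensor:
    l_error = (y_pred - y_true).abs().mean()  # MAE over all forecast points
    return l_error + lam * l_time             # weighted auxiliary regularizer
```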
§ EXPERIMENTS

We proceed to report on comprehensive experiments on five large real-world datasets to answer four questions:

Q1. How does TGCRN perform at spatio-temporal forecasting compared to competing approaches, especially graph-based methods? (Section <ref>)

Q2. What are the impacts of the different components of TGCRN? (Section <ref>)

Q3. Do the learned time-aware graphs align with spatial trends and periodicities? (Section <ref>)

Q4. Does the learned time representation satisfy the desired sequence constraint? (Section <ref>)

§.§ Experimental Setup

§.§.§ Datasets

The experiments are conducted on five real-world datasets: HZMetro and SHMetro <cit.>, NYC-Bike and NYC-Taxi <cit.>, and Electricity <cit.>. The former two are collected from the metro systems of Hangzhou and Shanghai, China. HZMetro contains 58.75 million transaction records from 80 stations from Jan./01/2019 to Jan./25/2019. SHMetro contains 811.44 million transaction records from 288 stations from Jul./01/2016 to Sept./30/2016. Each record contains the passenger ID, the entry or exit station, and the corresponding timestamp. For each station, the inflow and outflow every 15 minutes are measured by counting the number of passengers who enter or exit the station. The historical flow length P is set to 4 time steps (1 hour), and we predict the inflow and outflow of all stations for the next 4 time steps (1 hour).

The NYC-Bike dataset[https://github.com/Essaim/CGCDemandPrediction] contains bike-sharing records of people's daily usage in New York City, and each record contains a pick-up dock, a drop-off dock, and the corresponding timestamps. Each dock is considered a station, yielding 250 stations in total. The NYC-Taxi dataset[https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.page] is collected from NYC OpenData and consists of 35 million taxicab trip records, where each record contains the pick-up longitude and latitude, the drop-off longitude and latitude, and the corresponding timestamps. NYC-Taxi is dockless, and 266 virtual stations are formed by clustering the records. The pick-up and drop-off demands for both datasets are measured every 30 minutes. Both datasets range from Apr./01/2016 to Jun./30/2016. The historical length P is 12 time steps (6 hours), and the prediction length Q is 12 time steps (6 hours).

The Electricity dataset[https://archive.ics.uci.edu/dataset/321/electricityloaddiagrams20112014] records electricity consumption in kWh every hour from 2012 to 2014. P and Q are both set to 12 time steps (12 hours). Brief statistics of the five datasets are given in Table <ref>.

We process the datasets as in previous studies <cit.>, except that we re-split HZMetro into a training set (Jan. 1–Jan. 19), a validation set (Jan. 20–Jan. 21), and a testing set (Jan. 22–Jan. 25). Traffic patterns differ between the original validation and testing periods: the original validation set (Jan. 19–Jan. 20, i.e., Saturday and Sunday) and testing set (Jan. 21–Jan. 25, i.e., workdays) fail to verify the general effectiveness of a forecasting method.

§.§.§ Methods

We compare TGCRN with thirteen existing time series forecasting methods, including the latest spatio-temporal forecasting methods and transformer-based methods:

* Historical Average (HA), a statistical approach calculating the average of the corresponding historical periods as the forecast values.
* GBDT <cit.>, a weighted ensemble model consisting of a set of weak learners employing the gradient descent boosting paradigm.
* XGBoost <cit.>, a scalable tree-boosting system for both regression and classification tasks.
* LSTM <cit.> (FC-LSTM), a recurrent neural network variant with gating mechanisms.
* Informer <cit.> and Crossformer <cit.>, which employ the transformer architecture for long-term and multivariate time series forecasting, respectively.
* DCRNN <cit.>, an encoder-decoder architecture with gated recurrent units and diffusion convolution on a pre-defined distance-based graph structure for learning spatio-temporal dependencies.
* Graph WaveNet <cit.>, which employs graph convolution on a self-learning adjacency matrix and stacked dilated 1D convolutions to capture spatial and temporal correlations, respectively.
* AGCRN <cit.>, an adaptive graph convolutional recurrent network that performs graph convolution on a self-learning graph and employs gated recurrent units to model inter-dependencies among nodes and intra-node temporal correlations.
* PVCGN <cit.>, a physical-virtual collaboration graph network that integrates multiple pre-defined graphs into graph convolution gated recurrent units for learning spatio-temporal representations.
* CCRNN <cit.>, which adopts different self-learning graphs in different GCN layers and provides a layer-wise coupling mechanism to bridge the adjacency matrices of the upper and lower levels.
* GTS <cit.>, which combines a discrete graph structure learner and a recurrent neural network for spatial and temporal forecasting.
* ESG <cit.>, which learns a multi-scale dynamic graph through gated recurrent units and combines graph convolution and dilated convolution to capture evolving spatio-temporal representations.

§.§.§ Evaluation Metrics

For consistency of performance evaluation with previous studies <cit.>, we use Mean Squared Error (MSE), Mean Absolute Error (MAE), and Root Mean Squared Error (RMSE) as the common evaluation metrics.
Pearson Correlation Coefficient (PCC) is used to measure the linear correlation for traffic demand forecasting, and Mean Absolute Percentage Error (MAPE) is used to evaluate the relative error for traffic flow forecasting. For PCC, a higher value indicates better performance; for the other metrics, the opposite holds.

§.§.§ Implementation Details

We ran all experiments on an Intel(R) Xeon(R) Gold 5215 CPU @ 2.50GHz and two Nvidia Quadro RTX 8000 GPUs. Following <cit.>, we adopt the Adam <cit.> optimizer to update the model weights. The L2 penalty is 10^-4. The initial learning rate is 10^-3 and decays by a factor of 0.3 at epochs [5, 20, 40, 70, 90]. For all datasets, the batch size is 16. The saturation factor of the periodic discriminant function is set to 0.3. The numbers of layers of the encoder and decoder and of the hidden units of the GCGRU are set to 2 and 64, respectively. For HZMetro, we use a node embedding dimensionality of 64 and a time embedding dimensionality of 32. For the other datasets, both embeddings have a dimensionality of 64. We use an early stopping strategy with a patience of 15 epochs to select the best model weights.

§.§ Main Results

Tables <ref>, <ref>, and <ref> present the forecasting performance on the five datasets. The results of the baselines on SHMetro, NYC-Bike, and NYC-Taxi are taken from previous studies <cit.>. For HZMetro, due to the reorganization of the dataset, we produce the results using the official source code from the corresponding papers. Overall, we find that 1) our method consistently achieves the best performance on all metrics, for all datasets and forecasting horizons (Q1); 2) the methods that capture both spatial and temporal relationships show significant improvements over those capturing only temporal correlations (such as HA, GBDT, and FC-LSTM); 3) from the perspective of spatial correlation modeling, TGCRN outperforms the existing GCN-based approaches that adopt fixed, self-learning, and dynamic graph structures, including DCRNN, AGCRN, and ESG, demonstrating the effectiveness of capturing time-aware spatial correlations. More specifically, compared with the existing state-of-the-art methods, TGCRN achieves 10.95% and 14.16% improvements on HZMetro, 8.44% and 7.44% improvements on SHMetro, and 6.15% and 6.33% improvements on NYC-Taxi in terms of MAE and RMSE averaged over the horizons. TGCRN outperforms ESG due to its enhanced capacity to capture the regular dynamics of spatial correlations; 4) from the perspective of multi-step forecasting, TGCRN consistently maintains its superiority. As shown in Fig. <ref>, taking FC-LSTM as a benchmark, we find that as the time step increases, TGCRN shows increasingly prominent predictive performance compared to the other methods. We further observe that ESG and Graph WaveNet struggle to extract meaningful temporal dependencies with their CNN-based temporal modules, limited by the short-term setting (P=4, Q=4) on the metro datasets, and the gap to TGCRN widens on NYC-Bike and NYC-Taxi. We also note that although PVCGN utilizes pre-defined graph structures, it achieves top-tier performance, benefiting from its ability to learn from multiple graphs. Yet, it requires more hand-crafted engineering and incurs higher computing costs (more discussion in Section <ref>).

§.§ Model Analysis

§.§.§ Ablation Study

We conduct ablation studies to understand the impact of the time-aware graph structure learning and the encoder-decoder architecture (Q2). First, we design four variants of our graph learning mechanism.
* w/o tagsl replaces the time-aware graph structure learning with the self-learning mechanism of AGCRN.
* w/ TE only utilizes the time embedding in the time-aware graph structure learning.
* w/o TDL removes the time discrepancy learning to assess the effect of the learned time representation.
* w/o PDF removes the periodic discriminant function to assess its contribution by capturing the effects of different periodicities.

Second, we utilize the most recent prominent time representation methods, Time2vec <cit.> and continuous-time representation <cit.>, for encoding time, to assess the effect of our simple but effective time representation. The variants Time2vec and CTR replace our time embedding and time discrepancy learning. Finally, w/o enc-dec replaces the recursively obtained decoding output with a direct output based on a fully connected neural network.

Table <ref> reports the results for the variants of TGCRN on HZMetro and SHMetro. First, we observe that w/o tagsl suffers a significant drop in performance, which indicates that time-aware graph structures capture the dynamics of spatial correlations more accurately. Next, the results for w/ TE indicate that the discretized time embedding used for constructing the time-aware graph is important for spatial patterns. However, leveraging the time embedding alone cannot guarantee the learning of a meaningful time representation, since the model trivially optimizes the representation based on the forecast loss of the downstream task, as mentioned in Section <ref>. The results for w/o TDL and w/o PDF show that time discrepancy learning and the periodic discriminant function are both crucial for spatio-temporal forecasting. Further validating our time representation, the results for Time2vec and CTR show that the combination of time embedding and time discrepancy learning is more suitable for our model. The result for w/o enc-dec suggests that iteratively predicting future values over multiple time steps helps the model to better capture spatio-temporal dependencies.

§.§.§ Parameter Sensitivity

Figs. <ref> and <ref> assess the influence of our learned time-aware graph and of the joint loss optimization on the final forecasting performance. To this end, we conduct a parameter study analyzing the impacts of three key parameters: the node embedding dimensionality d_ν, the time embedding dimensionality d_τ, and the loss weight factor λ. From Fig. <ref>, we find that the performance continues to improve as the dimensionality increases, except for slight fluctuations at dimensionality 64 (red line). With larger node and time embedding dimensionalities, TGCRN can encode more information about the graph topology and its dynamics, but it uses more parameters, which can lead to over-fitting and higher computational costs. Thus, a good practice for finding suitable parameters is to consider the trade-off between performance and computation. We return to this in Section <ref>.

Fig. <ref> shows a clear turning point of the curve around λ = 0.1, indicating that time discrepancy learning jointly promotes the interpretability of the learned time-aware graphs and the forecasting performance; however, being an auxiliary task, its weight should not be set too large.

§.§.§ Computational Cost

Table <ref> reports the parameter scales and training times per epoch of the graph-based models.
The methods with dynamic graph structure modeling (e.g., TGCRN, ESG) incur higher computational costs to capture the dynamic spatial correlations than those using static graph structures. PVCGN imposes a significant computational burden because it combines multiple graphs in its graph convolution. Specifically, TGCRN (d_ν=64, d_τ=32) has four times more parameters than ESG, but it achieves significant improvements, as presented in Table <ref>. As discussed in Section <ref>, the prediction performance can be further improved when the model capacity increases. Moreover, TGCRN (d_ν=16, d_τ=16), with only a moderate increase in size, achieves MAE 24.35, RMSE 42.03, and MAPE 15.31% averaged over the horizons and still outperforms all baselines. The computational overhead and large model size of TGCRN are due to the modeling of spatial correlations at each time step. However, the changes in the correlations between time steps are often small, making it unnecessary to recompute them so frequently. In future work, we will consider how to infer spatial correlations only when crucial changes occur.

§.§ Visualization

§.§.§ Spatial Correlation

To observe visually whether the learned time-aware graphs conform to the desired periodicities and trends of spatial correlations (Q3), as discussed in Section <ref>, we visualize the learned spatial correlations and the time representation. First, we select four stations and their data in the 08:00 – 08:15 range from January 19th to 25th, 2019 from the testing dataset of HZMetro. Then we obtain the adjacency matrices at the corresponding timestamps and visualize them as heat maps, magnifying the matrix weights tenfold to highlight the continuous variations. For the learned spatial correlations, the darker the color, the stronger the correlation. For the OD transfer-based correlations, the warmer the color, the stronger the association, and the cooler the color, the weaker the association. Fig. <ref> shows that the learned graphs follow distinct weekday and weekend patterns and are consistent with the OD transfer-based correlations, where there is more demand for metro travel on weekday mornings. Moreover, we derive the learned adjacency matrices and OD transfer flows from 08:00 to 09:00 on 24th January 2019 (Thursday). Fig. <ref> shows slight dynamics of the learned spatial correlations over consecutive time spans, with a trend similar to that of the passenger transfers.

§.§.§ Time Representation

To observe the effect of the proposed Time Discrepancy Learning (Q4), we visualize the time representations with and without it. To do so, we reduce the dimensionality of the time embedding weights of the trained TGCRN from 64 to 2 using t-SNE <cit.>. Fig. <ref> shows that the representations of time nodes 0 to 72 exhibit a positional ordering in 2D space with a clear proportional discrepancy, which indicates the effectiveness of the Time Discrepancy Learning module. In contrast, the representations of time nodes trained without any constraints yield a confusing pattern, as shown in Fig. <ref>.

§ RELATED WORK

Spatio-temporal forecasting extends time series forecasting to encompass also a spatial aspect and has attracted substantial attention. Early studies treat this problem as a task of forecasting multiple univariate time series, relying on statistical methods, e.g., the Autoregressive Integrated Moving Average (ARIMA) <cit.>, Vector Autoregression (VAR), and Hidden Markov Models (HMMs).
These are linear methods that smooth historical information to predict future states, but they disregard the correlations between time series. Researchers have also built handcrafted features and then utilized traditional machine learning models, e.g., Linear Regression <cit.> or Support Vector Regression <cit.>, to capture spatio-temporal dependencies between time series. However, such approaches rely heavily on complex feature engineering to obtain good forecasting performance and are thus constrained by the available domain knowledge and the linear feature representation.

More recently, deep learning, with its powerful non-linear capabilities, has become widely used in spatio-temporal forecasting. ConvLSTM <cit.> combines LSTMs and CNNs to extract long-term temporal dependencies and spatial relationships among local regions and achieves strong performance on the precipitation nowcasting problem. To capture multi-scale temporal dependencies, ST-ResNet <cit.> utilizes CNN-based networks to jointly extract spatial and temporal correlations. Other hybrid network-based methods <cit.> also employ CNNs to capture spatial interactions by representing them as a global hidden state, but these methods struggle to explicitly model the spatial correlations among series.

Recently, GCNs have been leveraged to enable the modeling of hidden dependencies among nodes in graph-structured data. GCNs perform convolution on graph-structured data and can be categorized into two main directions. Spectral-based GCNs <cit.> have a solid mathematical foundation: they apply convolution to the node state and a normalized Laplacian matrix in the spectral domain after a Graph Fourier Transform, and then reconstruct the node state after filtering via an Inverse Graph Fourier Transform. This direction faces the limitation of domain dependence and has cubic computational complexity. Spatial-based GCNs <cit.>, in turn, recursively aggregate the representations of the neighbors of a node to update the node's representation in a message-passing manner. Motivated by the flexibility and efficiency of spatial-based GCNs, many spatio-temporal forecasting studies utilize them to capture spatial dependencies between time series, where the graph structure plays an important role in providing topological information.

There are two prevailing graph structure types, according to how the graphs are constructed: pre-defined graphs and self-learning graphs. Generally, pre-defined graph structures are constructed from domain knowledge and maintain fixed weights during model training and testing <cit.>. To capture implicit spatial correlations and to contend with scenarios without pre-defined graphs, the self-learning-based methods <cit.> learn optimized adjacency matrices derived from node embeddings for downstream predictive tasks using a metric function such as inner product similarity. Both pre-defined and self-learning graphs are static during testing and cannot capture dynamic spatial correlations. We also note that <cit.> employ a neural network-based module that takes the hidden states of the time series and builds a series of evolving graph structures. However, this proposal lacks explicit consideration of the periodicities and trends of spatial correlations.
In contrast, TGCRN not only considers the representations of nodes and time; it also identifies periodicities based on the hidden states of the series, and it can thus model dynamic correlations over time that exhibit spatial trends and periodicities.

§ CONCLUSION

We presented TGCRN, a novel framework for the forecasting of spatially correlated time series. To the best of our knowledge, this is the first study that takes into account dynamics with periodicities and trends of spatially correlated time series for the purpose of time series forecasting. We proposed an effective method, called time-aware graph structure learning, to exploit time-related regular inter-variable correlations that are represented as a graph structure. We further proposed the GCGRU to jointly capture dynamic spatial and temporal dependencies. Finally, we developed a unified framework with an encoder-decoder architecture that integrates the proposed graph structure learning and the GCGRU to output multi-step forecasts. Experiments conducted on several real-world datasets demonstrate that TGCRN is capable of outperforming thirteen existing proposals in terms of forecasting performance.

§ ACKNOWLEDGMENT

This research was supported by the National Natural Science Foundation of China (Nos. 62176221, 62276215).
http://arxiv.org/abs/2312.16403v1
{ "authors": [ "Minbo Ma", "Jilin Hu", "Christian S. Jensen", "Fei Teng", "Peng Han", "Zhiqiang Xu", "Tianrui Li" ], "categories": [ "cs.LG", "cs.AI" ], "primary_category": "cs.LG", "published": "20231227042343", "title": "Learning Time-aware Graph Structures for Spatially Correlated Time Series Forecasting" }
Astronomical Institute, Faculty of Mathematics and Physics, Charles University, V Holešovičkách 2, 180 00 Prague, Czechia
Instituto de Astrofísica de Canarias, E-38200 La Laguna, Tenerife, Spain
Departamento de Astrofísica, Universidad de La Laguna, E-38206 La Laguna, Tenerife, Spain
Université Paris-Saclay, Université Paris Cité, CEA, CNRS, AIM, 91191, Gif-sur-Yvette, France

Symbiotic binaries exhibit a wide range of photometric variability spanning different timescales. These changes can be attributed to factors such as orbital motion, intrinsic variability of the individual components, or the interaction between the two stars. On timescales of minutes to hours, variability induced by accretion processes, likely originating in accretion disks and denoted as flickering, is detected. This variability could mimic the solar-like oscillations exhibited by luminous red giants. We aim to investigate whether it is possible to utilize the precise observations of the NASA TESS mission to detect flickering in symbiotic stars, despite such studies usually being performed at shorter wavelengths than those at which TESS observes. Additionally, our goal is to develop a quantitative method for the detection of accretion-induced flickering that does not rely solely on a subjective assessment of the light curves. We obtain the light curves of known symbiotic stars and a comprehensive control sample of assumed single red giants from the TESS full-frame images. To ensure consistency, all the data are processed using the same methodology, which involves filtering out the background, systematics, and long-term trends. From the processed light curves and their power spectral densities, we measure the amplitudes of the variability and other relevant parameters. We introduce a method that enables the differentiation between flickering sources and stars that do not exhibit this type of variability. We detect flickering-like variability in 20 symbiotic stars utilizing TESS data, 13 of which were not previously identified as flickering sources. Moreover, the TESS observations facilitate the detection of related variations occurring over timescales of a few days, as well as changes in the flickering behavior across multiple sectors. The flickering has now been likely detected in a total of 35 known symbiotic stars. Although this represents only a small subset of all symbiotic binaries, when focusing solely on accreting-only symbiotic stars, where the detection of flickering is presumably more straightforward, the fraction could reach as high as ∼ 80%. This suggests that accretion disks may be rather prevalent in these binaries.

Accretion-induced flickering variability among symbiotic stars from space photometry with NASA TESS

J. Merc<ref> (jaroslav.merc@mff.cuni.cz), P. G. Beck<ref>,<ref>, S. Mathur<ref>,<ref>, R. A. García<ref>
Received 29 September 2023 / Accepted 18 December 2023
======================================================================================================================

§ INTRODUCTION

Symbiotic stars are strongly interacting binaries with long orbital periods, containing an evolved red giant and a white dwarf (or occasionally a neutron star) embedded in a circumbinary nebula. They constitute unique astrophysical laboratories important for understanding binary evolution and the various processes that also occur in many other types of astrophysical objects <cit.>.
These binaries exhibit significant photometric and spectroscopic variability on timescales from minutes to years, stemming from orbital motion effects (such as reflection, ellipsoidal effects, and eclipses), intrinsic variability of the individual components (such as pulsations of the giant, rotation, or oscillations of the hot component), and interactions between the two stars (including outbursts and accretion-induced variability). Symbiotic systems typically have orbital periods ranging from a few hundred days to tens of years <cit.>. It was pointed out by <cit.> that this orbital period range is significantly lower than the distribution maximum of non-interacting red giant binaries, which have their orbital distribution peak between 1 000 and 2 000 days. While most of these stars are red-clump stars, red-giant branch stars are found on orbits with periods compatible with symbiotic stars <cit.>. The pulsations of the cool components occur on timescales of 50-200 days for semiregular pulsators <cit.> or 300-600 days for symbiotic Miras <cit.>. The most prominent changes in the light curves arise from the interaction between the components, leading to outbursts lasting weeks (symbiotic recurrent novae), months to years (active stages of classical symbiotic stars), or even decades ('slow' symbiotic novae); see, e.g., <cit.>.

In addition to these long-term variations, accretion-induced, stochastic photometric fluctuations known as 'flickering' in the symbiotic community[Not to be confused with the granulation-driven light-curve 'flicker' known in the asteroseismic community <cit.>.] have been observed in the light curves of a few symbiotic stars, characterized by amplitudes of several hundredths or tenths of a magnitude and timescales ranging from minutes to hours <cit.>. This variability exhibits an increasing amplitude toward the blue end of the spectrum and is most pronounced in the near-UV, while it is relatively negligible in the red part of the optical region, mainly due to the dominant contribution of the cool giant <cit.>.

The phenomenon of flickering is not limited to symbiotic stars but is prevalent in various other accreting systems, including young stellar objects, accreting white dwarfs in cataclysmic variables, neutron stars and stellar-mass black holes in X-ray binaries, as well as supermassive black holes in active galactic nuclei <cit.>. The underlying mechanism causing the flickering signal in photometry is believed to be the accretion disk, which produces similar red-noise aperiodic variability, characterized by a broken power-law power spectral density (PSD) shape and a strong correlation between the flickering amplitude and the average flux <cit.>. The exact physical processes that occur in the disks are not yet fully understood, and their detailed discussion is beyond the scope of this work. Therefore, we only mention one of the promising models as an example, the fluctuating accretion disk model, in which variations in viscosity at different radii on the local viscous timescale lead to modulations in the accretion flow, causing the observed variability <cit.>.

In cataclysmic variables, accreting white dwarf systems in which the donor is a red dwarf filling its Roche lobe <cit.>, the flickering is easily detectable <cit.>, as the emission of the accretion disk dominates the optical region (the luminosity of the red dwarf is rather small).
Moreover, the accretion disks of cataclysmic systems can also be studied thanks to the emission lines in their optical spectra (in particular of hydrogen) that arise from the disk and have a typical double-peaked structure. In contrast, symbiotic stars exhibit strong emission lines arising from the circumbinary nebula, which is not present in cataclysmic variables. These nebular emission lines would obscure any emission lines arising from the disks of symbiotic systems, rendering flickering, although difficult to detect, the main observable evidence for and a direct probe of accretion disks in them. Consequently, analyzing flickering becomes crucial not only for understanding accretion in symbiotic stars and their disks but also for comprehending their activity, as certain models of classical symbiotic outbursts require the presence of accretion disks <cit.>.

To search for flickering in symbiotic stars from the ground, observations in the B band are commonly used due to the limited sensitivity of modern CCD cameras at bluer wavelengths, where the flickering amplitudes are higher <cit.>. Flickering detection typically relies on visual inspection of the light curves or on comparing the target's variability amplitude with those of nearby stars of similar brightness and colors. No consistent method for confirming flickering detections has yet been established. So far, only 22 confirmed symbiotic stars that show (or at least highly likely show) flickering have been firmly identified, all within the Milky Way, with no extragalactic detections reported (see their list with references in Table <ref>). It should be noted that this constitutes only a small fraction of the total number of known galactic symbiotic stars, currently at 283, listed in the New Online Database of Symbiotic Variables[<https://sirrah.troja.mff.cuni.cz/~merc/nodsv/>] <cit.>. The challenge lies in the small amplitude of optical flickering and the limitations of the relatively small ground-based telescopes used for such searches. Furthermore, even in cases where flickering is undoubtedly detected, it may not be present at all epochs <cit.>. Therefore, it is necessary to obtain observations over an extended period of time.

In this study, we analyze the short-term variability of a large sample of known symbiotic stars using data from the NASA Transiting Exoplanet Survey Satellite <cit.> for the first time. Previous studies utilizing this precise space-based photometry, which overcomes some limitations of ground-based datasets while introducing others (discussed below), have only focused on a few individual symbiotic stars, namely CN Cha <cit.>, T CrB <cit.>, RT Cru <cit.>, and IGR J16194-2810 <cit.>. It is worth noting that, subsequent to the completion of this study, a preliminary analysis of TESS data for NQ Gem was reported by <cit.>, with a full analysis, including X-ray data, in preparation. At the same time, the authors mention possible hints of flickering in the light curves of four other symbiotic stars, BD Cam, V1261 Ori, V1044 Cen, and V420 Hya; however, they do not present the data and their analysis in detail.

This paper is organized as follows. Section <ref> discusses the expected amplitudes of the flickering variability in the TESS passband. Section <ref> presents the observational data used in this study, their limitations, processing, and analysis methods.
Section <ref> discusses the results of the analysis of TESS light curves for symbiotic systems with previously reported flickering, including a method to distinguish flickering variability from other contributions in the TESS data. Section <ref> presents the results of applying the method to a sample of confirmed symbiotic stars without reported flickering and introduces newly detected flickering sources. Section <ref> briefly discusses the periodic signals present in the TESS data of some symbiotic stars. Finally, Section <ref> presents our conclusions.

§ EXPECTED PHOTOMETRIC AMPLITUDE OF FLICKERING VARIABILITY IN THE TESS PASSBAND

The TESS mission captures observations in its specific TESS T band, covering a wavelength range of approximately 600 - 1 000 nm <cit.>. This part of the spectral energy distribution of a symbiotic system is dominated by the luminous cool giant companion <cit.>, potentially obscuring the accretion-induced variability associated with the disk.

To assess the theoretical feasibility of detecting flickering in a symbiotic system with TESS, we estimated its amplitude in the TESS T band, assuming that the observed amplitude in the Johnson B filter falls within the range of 10-100 mmag. This range is typical for symbiotic stars, as reported in previous studies based on ground-based photometry <cit.>. It is worth noting that, in some cases, even higher amplitudes have been observed. For instance, RS Oph exhibited amplitudes from 0.16 to 0.59 mag in Johnson B <cit.>. Similarly, CH Cyg displayed variability of up to 0.4 mag in the observations of <cit.>. V694 Mon, as reported by <cit.>, exhibited amplitudes ranging from 0.13 to 0.39 mag, while <cit.> detected variability ranging from 0.23 to 0.37 mag in the case of RT Cru.
Moreover, the amplitude of the flickering variability slightly rises with decreasing disk temperature, as the peak of its radiation shifts toward redder wavelengths. In all cases, the findings affirm that the anticipated flickering variability in TESS T falls within the detectable range (approximately 10^2-10^4 ppm).In the second scenario, we maintained the disk temperature at 10 000 K and examined the impact of varied flux ratios between the disk and the giant in the Johnson B filter on the outcomes. We considered three cases where the disk contributed 10%, 30%, and 50% of the total observed flux in Johnson B. The results, illustrated in Fig. <ref>, indicate that although the amplitude tends to decrease with lower disk contributions, it generally remains within the detectable range in most instances.To assess the validity of the predicted amplitudes, we can compare the Johnson B and TESS T flickering amplitudes observed nearly simultaneously in the case of RT Cru by <cit.>. The comparison between our model and their observations reveals a commendable agreement, within the same order of magnitude. § OBSERVATIONS, THEIR LIMITATIONS AND PROCESSING The TESS mission conducts an all-sky photometric survey by observing various portions of the sky in individual 'sectors', each covering a combined field of view of 24 by 96 degrees imaged by the 16 CCDs for approximately 27 days <cit.>.Due to the overlap between the sectors, some sky parts are observed continuously for an even longer time (up to almost one year in the so-called 'TESS continuous viewing zones' near the ecliptic poles). Individual 2-second exposures are combined into 2-minute images (with a 20-second mode introduced later during the first TESS extended mission). However, postage stamps of these data are available only for pre-selected targets, while full-frame images (FFIs) are downloaded to the ground with cadences of 30 minutes (Cycles 1 and 2; Sectors 1 – 26), 10 minutes (first extended mission; Cycles 3 and 4; Sectors 27 – 55), and 200 seconds in Cycle 5 and beyond (starting with Sector 56). The FFIs provided by the mission are calibrated, but the sky background (e.g., due to stray light from the Earth and Moon) is not removed and needs to be tackled during the analysis. We should consider certain limitations of the TESS data that are relevant to our study. Probably most importantly, the pixel scale of the CCDs is 21 arcseconds per pixel, which poses challenges in crowded regions, such as the vicinity of the galactic plane where numerous symbiotic stars are located, due to contamination issues (see Fig. <ref>). As previously noted in Section <ref>, the TESS wavelength range is predominantly influenced by the presence of the giant star, thereby reducing the amplitude of any potential accretion-induced variability. Additionally, the high brightness of symbiotic giants can lead to saturation in some cases, especially for stars brighter than a TESS T magnitude of around 6. The excess charge is distributed along the CCD columns <cit.>, which can lead to issues for very bright targets when the bleed columns are too long. Although data for some of these objects may still have some utility, they are not ideal for detecting low-amplitude aperiodic changes on short timescales like flickering. Furthermore, the nature of the variability we are investigating imposes constraints on the target brightness, as faint targets may suffer from significant noise domination in their light curves. 
The reasonable upper limit for our purposes is about TESS T ∼ 12 – 13 mag. For fainter stars, the expected variability usually has a smaller amplitude than the noise level (see the discussion in Section <ref> and Sections <ref> and <ref>). The distribution of the TESS T magnitudes of symbiotic targets (confirmed symbiotic stars and candidates) from NODSV is shown in Fig. <ref>. The distribution of symbiotic stars and candidates in the individual TESS sectors is shown in Fig. <ref>. Some symbiotic targets have been observed multiple times, resulting in data coverage spanning several sectors (Fig. <ref>), although not necessarily continuously. It should be noted that these figures do not account for the contamination issue mentioned earlier and are not magnitude-limited, meaning that a considerable number of the depicted symbiotic targets do not possess usable TESS light curves in reality. In this study, our primary focus was on the FFI data, as short cadence data were only available for a small subset of symbiotic stars (obtained within the TESS Guest Investigator program G03206, PI: J. Merc, and as a by-product of other programs). However, if short cadence data, either 2-minute or 20-second cadence light curves processed by the TESS Science Processing Operations Center (SPOC) pipeline, were available, we also examined them alongside our FFI light curves. To extract the light curves from the FFIs, we employed the Lightkurve package <cit.>. We downloaded the target pixel files, which consisted of a 30 × 30 pixel region centered on the target star, and performed aperture photometry to measure the flux of the object. A simple background subtraction technique was applied, where the median flux of the background pixels was subtracted from the flux measured within the aperture. To assess the extent of contamination affecting our target stars in the TESS data and identify sources with potentially unreliable light curves for flickering analysis, we conducted a thorough examination of their surroundings using the tpfplotter tool[<https://github.com/jlillo/tpfplotter>] developed by J. Lillo-Box <cit.>. In this step, we considered all stars in the given field with magnitudes up to 6 mag fainter than the analyzed object. From the further analysis, we excluded symbiotic stars located near equally bright or brighter stars, as well as sources situated in crowded regions with multiple fainter neighboring sources. If there were only a few faint sources near the studied object, we examined its light curve and flagged it as 'possible contamination' in the tables and figures. Although it is highly unlikely that any periodic signal from nearby stars could be mistaken for flickering-like variability given our employed methods (such as the analysis of the PSD, see below), in cases of potential contamination, we conducted a pixel-by-pixel analysis of the TESS data to ensure that the most probable source of the observed variability is the object of interest. In such a scenario, it is important to note that the observed amplitudes of the variability might be affected by the presence of neighboring objects within the photometric aperture. To investigate variability on timescales ranging from minutes to hours, we applied various filtering techniques to the resulting light curves (a minimal sketch of this detrending step is given below). Specifically, we utilized 1-, 3-, and 5-day triangular filters as well as the Savitzky-Golay (SG) filter <cit.>. It is worth noting that the different filters used for smoothing yielded similar qualitative results.
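As an illustration of this processing step, the following hedged sketch detrends one sector with a Savitzky-Golay filter and applies the 4σ clipping. It assumes plain time (in days) and normalized-flux arrays as produced by the extraction step; the window length and polynomial order are illustrative choices, and the clipping here is global rather than per filter window.

```python
import numpy as np
from scipy.signal import savgol_filter

def detrend(time_d, flux, window_d=1.0, polyorder=2, clip=4.0):
    """Remove slow trends while preserving minute-to-hour variability."""
    cadence_d = np.median(np.diff(time_d))
    # odd window length covering roughly window_d days of data
    window = int(window_d / cadence_d) | 1
    trend = savgol_filter(flux, window_length=window, polyorder=polyorder)
    resid = flux / trend - 1.0              # fractional residuals
    # 4-sigma clipping around the median to remove apparent outliers
    keep = np.abs(resid - np.median(resid)) < clip * np.std(resid)
    return time_d[keep], resid[keep]
```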
However, the application of wider triangular filters is not optimal due to the limited time base of the individual TESS sectors. Therefore, to avoid unnecessary repetition, only the results based on the SG filter are presented in this study. The filtering effectively eliminated long-term trends, whether real or systematic, from the light curves while preserving short-term variability (on the timescale of minutes and hours). At the same time, employing this approach helps to minimize the impact of potential inaccuracies in the background subtraction on the results[Some systematic effects might still be preserved in the final data, such as the signal with a period of ∼3 days connected with the 'momentum dumps' of TESS.]. Additionally, we implemented sigma clipping (set to a 4σ difference from the median in the particular filter window) to eliminate apparent outliers from the individual light curves. In instances where the artifacts in the light curves remained prominent even after the background removal process, we opted to utilize only the unaffected segments of the light curves. The resulting light curves were visually examined, and we measured the amplitudes of any possible variability using several methods commonly employed in the literature: a) peak-to-peak amplitude of variability <cit.>; b) difference between the medians of the top 5% highest and the 5% lowest values in a light curve; c) absolute root-mean-square (rms) amplitude of variability <cit.>; d) full width at half maximum (FWHM) of a Gaussian fit to the distribution of magnitude points in a light curve <cit.>. It is worth noting that if the light curves are filtered and the expected value for the rms calculation is the same as the median, the latter two methods differ only by a constant. Additionally, our analysis of the light curves for symbiotic targets and single red giants demonstrated that all the employed methods yielded qualitatively very similar results (the absolute values differ, but they are strongly correlated; Fig. <ref>). Therefore, we present only the results obtained using the rms method; a sketch of these amplitude measures follows below. As described in the next section, we propose the Power Spectral Density (PSD) as an additional useful method to quantify accretion-induced flickering. We also compare the above-mentioned methods to the results of our new approach.
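The amplitude measures a)–d) are then straightforward to compute on the filtered residuals. The sketch below (with resid being the detrended fractional flux from the previous snippet) is a minimal rendition of the four measures; the bin count and initial guesses for the Gaussian fit are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def amplitude_measures(resid):
    """Common flickering-amplitude measures used in the literature."""
    out = {"peak_to_peak": resid.max() - resid.min()}
    hi, lo = np.quantile(resid, [0.95, 0.05])
    out["top_bottom_5pct"] = (np.median(resid[resid >= hi])
                              - np.median(resid[resid <= lo]))
    out["rms"] = np.sqrt(np.mean((resid - np.median(resid)) ** 2))
    # FWHM of a Gaussian fit to the distribution of the data points
    counts, edges = np.histogram(resid, bins=50)
    centers = 0.5 * (edges[1:] + edges[:-1])
    gauss = lambda x, a, mu, sig: a * np.exp(-0.5 * ((x - mu) / sig) ** 2)
    (_, _, sig), _ = curve_fit(gauss, centers, counts,
                               p0=[counts.max(), 0.0, resid.std()])
    out["fwhm"] = 2.3548 * abs(sig)   # FWHM = 2*sqrt(2 ln 2) * sigma
    return out
```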
§ KNOWN FLICKERING SOURCES For the initial part of our analysis, we focus on confirmed symbiotic stars that have previously been reported to show flickering, as listed in Table <ref>. However, not all of the 22 symbiotic stars in the list could be readily analyzed using TESS data. Two stars, namely ASAS J190559-2109.4 and V2116 Oph, were neither observed in any of the available TESS sectors up to the present nor will they be in the planned Cycle 6 sectors. RS Oph, on the other hand, would only be observed in Sector 80 (June - July 2024), while data up to Sector 67 were available at the time of writing. Additionally, the observations of omicron Cet (= Mira) and CH Cyg were found to be inapplicable for flickering analysis due to their excessive brightness (T < 3 mag) causing detector saturation. It is important to note that, additionally, several symbiotic stars from the list may have contaminated light curves. V1044 Cen, for instance, is located very close to a bright eclipsing binary system <cit.>, which dominates the signal in the light curve. As a result, V1044 Cen was excluded from further analysis. This leaves us with a total of 16 targets that were included in our study. However, it should be mentioned that Gaia DR2 5917238398632196736, Gaia DR2 6043925532812301184, CM Aql, and EF Aql are among the fainter stars in the list and are located in crowded regions. Additionally, there are some sources in close proximity to V407 Cyg and RT Cru, although these stars themselves are brighter than those mentioned previously. For this reason, special caution must be exercised when interpreting the results for these stars. We acquired the TESS light curves for these targets, analyzing them on an individual sector basis to investigate temporal changes in variability (refer to Sect. <ref>). Our analysis process involved an initial visual examination of the light curves, followed by calculations of the variability amplitude, the PSD, and its fitting (see Sect. <ref> for detailed information). To ensure our conclusions were not solely reliant on subjective assessments of the light curves, we aimed to establish a control sample of single red giants, as discussed in the following section. §.§ Distinction between flickering and stellar oscillations Figure <ref> illustrates the light curves of three distinct objects: the TESS light curve of the symbiotic star RT Cru exhibiting accretion-induced flickering variability, and the Kepler light curves of the solar-like oscillating giant KIC 1572175 <cit.> and the rotationally-modulated main-sequence star KIC 1164109 <cit.> (Kepler light curves from <https://archive.stsci.edu/prepds/kepseismic/>). Despite their different underlying mechanisms of variability, the light curves exhibit a remarkable resemblance. The apparently larger scatter in the RT Cru light curve is caused by the presence of variability occurring on the shortest timescales of minutes. Filtering out main-sequence stars from the potential sample of symbiotic stars, even when their variability timescales and light curve shapes are similar, is relatively straightforward, for instance, based on their positions in the Gaia HR diagram <cit.>. However, distinguishing solar-like oscillations in luminous red giants <cit.> from accretion-induced variability requires more careful consideration. The morphological similarity of the light curves poses a significant challenge in confirming flickering in symbiotic sources through visual evaluation of their light curves alone. As emphasized in Sect. <ref>, the flux contribution of the cool component in the TESS band and its possible variability must be taken into account. In particular, cool giants might oscillate, and related processes, such as granulation, may contribute significantly to the variability in their light curves as well. The oscillating giant shown in Fig. <ref> has a temperature of ∼4 900 K <cit.>. This temperature is notably higher than the temperature range observed in the majority of symbiotic systems, where the peak of the distribution lies between 3 200 and 3 800 K <cit.>.
We chose this example on purpose to demonstrate that flickering could masquerade as an asteroseismic signal. The timescales and amplitudes of oscillations and convection both rise as the star becomes more luminous <cit.>, and the potential oscillation and granulation signals of cool giants, with temperatures typical of symbiotic cool components, are expected to manifest at longer timescales than those investigated in this study. Figure 1.3 in <cit.> provides a depiction of typical oscillation PSDs for stars across various temperature ranges. The typical timescales of intrinsic variability in cool evolved stars motivated our selection of single giants as a reference sample for comparison with symbiotic sources. A comparison with randomly selected field red giants, even ones located near the target symbiotic stars in the sky, could introduce biases in the results. Therefore, we specifically chose Kepler and TESS red giants from the catalogs of <cit.> and <cit.>, with temperatures ranging from 3 000 to 4 000 K, similar to the temperature range of the symbiotic star sample. The choice of the catalogs of oscillating sources not only ensured the inclusion of evolved red giants in our analysis, but the oscillation frequencies also provided independent confirmation of the correct temperature range. However, since the initial samples did not adequately cover the magnitude range below the TESS magnitude T = 8 mag, we supplemented them with additional red giants selected from the TESS Input Catalog <cit.>. The selection from the TIC was based on various parameters, in particular, the magnitude, temperature, and luminosity class. In total, we included 335 supposedly single red giants in our control sample. The TESS light curves for all these stars were obtained and subjected to the same analysis as the symbiotic sources (see Sect. <ref>). Upon visual examination of the resulting light curves, we confirmed that none of the analyzed red giants exhibited short-term flickering-like variability. To allow a quantitative comparison, we computed the PSD of the light curves. A piecewise linear fit in log-log space was applied to the smoothed PSD (the 'logmedian' method in Lightkurve, which smooths the PSD using a moving median whose step size is determined by logarithmically increasing intervals in frequency space) of all the targets. From this fitting process, we obtained the low- and high-frequency slopes (in cases where higher frequencies were noise-dominated) or a single slope (when the noise level was not apparent from the PSD). Additionally, we determined the power at 25 and 215 μHz as part of the analysis. Qualitatively, the PSDs of all single red giants from our control sample look like the one shown in Fig. <ref>. The higher frequencies are dominated by photon noise, leading to a frequency-independent background (as indicated by the 'high-frequency' slope close to 0 ppm^2 μHz^-2), whose amplitude depends on the brightness of the star. At the lower frequencies, some signal is detectable, as demonstrated by the non-zero slope of that part of the PSD; this is likely to be of instrumental origin. The typical frequency at which the slope changes is around 80 μHz. We note that the 'low-frequency' slope is not very well constrained in our fitting procedure. This is caused by the fact that the length of the analyzed sectors is very limited (∼27 days). As a result, the scatter in the PSD is rather large.
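A hedged sketch of this PSD characterization is given below; it uses astropy's Lomb-Scargle periodogram with the 'psd' normalization as a stand-in for the actual pipeline, and the frequency grid, break frequency, and fitting details are illustrative.

```python
import numpy as np
from astropy.timeseries import LombScargle

def psd_diagnostics(time_d, resid, f_break_uhz=80.0):
    """Low/high-frequency log-log PSD slopes and the power near 215 uHz."""
    t_s = time_d * 86400.0
    freq_uhz = np.linspace(5.0, 270.0, 2000)   # below the 30-min Nyquist
    power = LombScargle(t_s, resid * 1e6,      # residuals in ppm
                        normalization='psd').power(freq_uhz * 1e-6)
    logf, logp = np.log10(freq_uhz), np.log10(power)
    low, high = freq_uhz < f_break_uhz, freq_uhz >= f_break_uhz
    slope_low = np.polyfit(logf[low], logp[low], 1)[0]
    slope_high = np.polyfit(logf[high], logp[high], 1)[0]
    # median power in a 10-uHz-wide box centered at 215 uHz
    box = np.abs(freq_uhz - 215.0) < 5.0
    return slope_low, slope_high, np.median(power[box])
```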
It is worth noting that the sample of red giants chosen for comparison with symbiotic sources exhibits some differences compared to the giants in symbiotic systems. For example, single giants are not subjected to irradiation effects or tidal deformation. Nevertheless, these distinctions do not impact the results, since the light curves of all giants in the control sample are noise-dominated at higher frequencies. Consequently, this sample effectively showcases the inherent noise properties of actual TESS observations, making it a more straightforward choice for comparison than relying solely on the noise performance presented in the TESS documentation, especially considering the multitude of steps involved in our processing of the TESS light curves. Furthermore, <cit.> have demonstrated that tidal interactions in binaries tend to suppress much of the intrinsic giant variability, except for activity-induced spot modulation of the light curves. However, this phenomenon occurs with rotation periods that are usually synchronized with the orbital periods in symbiotic stars <cit.> and are significantly longer than the timescales studied in this work. §.§ Detection of flickering with TESS As described in the previous sections, we extracted a consistent set of parameters (specifically the variability amplitudes, and the slopes and power in the PSD) to characterize the variability observed in both the TESS light curves of symbiotic stars and the control set of single red giants. For each target, we analyzed each sector individually, but when there were no significant changes in the behavior, we show the average values over all sectors in the resulting figures for clarity. By comparing these samples, we were able to explore effective quantitative methods for detecting flickering in symbiotic stars. In Figs. <ref>, <ref>, and <ref> we compare the parameters of the PSD (slopes and power) for the group of confirmed flickering sources and our sample of single red giant stars. Figure <ref> reveals that the slope at low frequencies (typically ν ≤ 80 μHz) is similar for both symbiotic stars and single red giants. Therefore, this portion of the PSD is not useful for detecting and characterizing flickering. Conversely, in Fig. <ref>, when comparing the slope at higher frequencies (ν ≥ 80 μHz), all single red giants and some symbiotic sources exhibit white-noise-dominated behavior, reflected by a slope around zero. However, the figure also demonstrates that certain symbiotic stars exhibit some variability at these frequencies, as evidenced by their non-zero slope in this region of the PSD. An even more straightforward test of the flickering can be conducted by comparing the power in the high-frequency part of the PSD (Fig. <ref>). We specifically measured the power at 215 μHz, either from the fit to the PSD or as the median in a 10 μHz wide box centered at that frequency (the latter is shown in Fig. <ref>, as the results are virtually the same). This frequency lies in the region where all giants from the control sample within the studied magnitude range show only white noise (typically above ∼80 μHz), yet it remains below the Nyquist frequency of the 30-minute cadence of the TESS observations (∼278 μHz). By selecting this frequency, we ensured that the parameter could be obtained consistently across all TESS sectors in which the targets were observed.
Symbiotic stars exhibiting flickering-like variability in their light curves are clearly distinguishable in the diagram, appearing above the region where the single red giants are located. A similar conclusion can be drawn by analyzing the variability amplitudes measured as rms (Fig. <ref>). Unlike the power measured from the PSD, the rms method provides information about the overall variability in the light curve without distinguishing between different timescales. However, as demonstrated here, it can be effectively applied even to whole month-long light curves (corresponding to one sector), as long as any long-term variability is properly filtered out (with the SG filter in our case). Based on the combined analysis shown in the diagrams in Figs. <ref>, <ref>, and <ref>, and the light curves (Fig. <ref>), we have determined that out of the 16 symbiotic stars studied, flickering-like variability can be reliably detected in seven of them: V648 Car, CN Cha, T CrB, V694 Mon, RT Cru, BF Cyg, and ASAS J152058-4519.7. We have also verified that nearby stars do not show similar variability, using the same analysis as for the target stars. Among these targets, RT Cru is particularly intriguing due to its significant variability changes between sectors, which are discussed in more detail in Sect. <ref>. Intriguingly, our analysis of the TESS observations of V694 Mon obtained in Sector 7 (Jan 07, 2019 - Feb 02, 2019) has revealed signatures indicative of low-amplitude flickering-like variability. The star is presently undergoing a phase of unprecedented brightness, and since 2018, there has been a significant reduction in the amplitude of optical flickering of at least an order of magnitude when compared to earlier observations. Observations by <cit.>, <cit.>, and <cit.> did not identify any discernible flickering variability, establishing a limit of approximately 0.05 mag in the B filter. A subsequent study by <cit.> reported a limit to the flickering amplitude of 0.005 mag during observations in November 2021. Remarkably, the star exhibited a nearly one-magnitude increase in brightness in V from October 2018 to November 2021, and this upward trend persists in the latest observations. In stark contrast, previous observations had recorded amplitudes ranging from 0.13 to 0.39 mag <cit.>. The identification of low-amplitude variability through the TESS observations does not invalidate the findings from ground-based studies, in particular because the observations were not taken simultaneously and the variability changes with time. Moreover, our model, discussed in Section <ref>, suggests that the amplitude in the B band during the TESS observations likely falls below the reported limits. Similarly to the case of V694 Mon, we have identified flickering-like variability in T CrB, which was observed by TESS in Sectors 24-25 (Apr 16, 2020 - Jun 8, 2020) and Sector 51 (Apr 22, 2022 - May 18, 2022). In 2014, T CrB entered a super-active state that reached its peak in mid-2016 and recently concluded <cit.>. The super-active state is characterized by a notably reduced flickering amplitude in comparison to the quiescent variability. Ground-based observations in the B filter by <cit.> revealed a flickering amplitude of 0.08 mag in February 2016. Subsequent observations from January to August 2023 detected variability in the range of 0.11 to 0.26 mag <cit.>. This observed variability aligns broadly with the anticipated B amplitudes inferred from the TESS observations.
Our analysis, in conjunction with these data, confirms the presence of flickering in the super-active state, albeit at significantly lower levels compared to quiescence. On the other hand, no evidence of flickering-like variability is observed in SU Lyn, EG And, ZZ CMi, BX Mon, V407 Cyg, EF Aql, CM Aql, and Gaia DR2 5917238398632196736. Lastly, Gaia DR2 6043925532812301184 stands out slightly in the diagrams; however, due to its faintness and its location in a crowded region, the detection of flickering with TESS in this target remains uncertain. It is important to emphasize that the non-detection of flickering in these sources in our study simply indicates that flickering was not detected during the TESS observations, given the available precision, within the specific wavelength range observed by TESS. This does not invalidate the general detection of flickering reported in the literature for these sources, in particular given that the expected amplitude of flickering in the TESS band is significantly lower, by up to several orders of magnitude, compared to observations in the optical blue or near-UV (see Section <ref>). In addition to the study of variability on the timescales of minutes and hours, the relatively long duration of TESS sectors (approximately 27 days) allows for the investigation of brightness changes over extended periods with high cadence. Previous studies of brightness variations from one observing night to another have been limited in ground-based observations. Hence, we conducted a review of the unfiltered light curves of the sources exhibiting flickering in TESS. Interestingly, all stars exhibited brightness changes over the timescale of a few days, with larger amplitudes compared to shorter timescales (refer to their light curves in Fig. <ref>). Among these, BF Cyg and V694 Mon showed the most dramatic changes. The majority of this variability is likely associated with accretion processes, while the long-term trends may have different origins. For example, in the case of CN Cha, the trend reflects the recovery of the system from a 'slow' symbiotic nova outburst <cit.>, whereas in T CrB, the long-term trend spanning the sector is linked to the orbitally-related ellipsoidal effect <cit.>. §.§ Changes in the flickering in RT Cru In most of the sources where flickering was detected with TESS, the parameters describing the variability remained relatively consistent across multiple sectors of observation. However, RT Cru is an exception to this pattern. Currently, data from five sectors are available for this star: Sector 11 (Apr 23, 2019 - May 10, 2019), Sectors 37 and 38 (Apr 02, 2021 - May 26, 2021), and Sectors 64 and 65 (Apr 06, 2023 - June 02, 2023). During the observations in Sector 11, no significant variability was observed in RT Cru, and the star exhibited characteristics similar to the single red giants in our control sample, as shown in Figs. <ref>, <ref>, and <ref>. However, in the four sectors observed subsequently (2 and 4 years later), strong flickering variability became evident (Fig. <ref>). This change in behavior is not only noticeable when comparing with the control sample in the diagnostic diagrams but is also apparent in the light curves themselves (Fig. <ref>A) and in the comparison of the PSD across individual sectors (Fig. <ref>B).
The substantial variation in flickering detected here with TESS was further confirmed by comprehensive ground-based follow-up observations, including photometric and spectroscopic measurements, as well as data from UV and X-ray observations <cit.>. The authors of that study attributed the disappearance of flickering in 2019 to a decrease in the accretion rate, followed by a later recovery of the accretion flow through the disk in the subsequent years. Our analysis of RT Cru, depicted in Fig. <ref>, serves as a compelling example, demonstrating the effectiveness of the methods developed in this work in distinguishing between sources with flickering and those without. It also highlights that the non-detection of flickering in some of the TESS observations does not definitively rule out the presence of this type of variability in a particular star, at least during certain epochs, and underscores the importance of repeated observations for analyzing the temporal variability of flickering. § SEARCH FOR NEW FLICKERING SOURCES IN CONFIRMED SYMBIOTIC SYSTEMS Our analysis, as described in the preceding section, has verified that despite the limitations of TESS data for studying flickering in symbiotic stars, there are cases where the amplitude is sufficient for detection in the relatively red band of TESS. Consequently, we have extended our analysis to include other confirmed symbiotic stars listed in NODSV in which the short-term variability either has not yet been studied or the flickering was not detected from the ground. Specifically, we focused on stars within the TESS T magnitude range of 5 to 13 mag, which were observed in at least one TESS sector. Out of the 261 known symbiotic stars without flickering detection, 174 were located within the observed sectors up to Sector 67. Among these, 123 stars fell within the studied magnitude range. We excluded sources that were contaminated by nearby equally bright or brighter stars, as well as those situated in densely populated regions with numerous neighboring sources (within the photometric aperture). In total, we kept 72 targets for the analysis. Nonetheless, it is important to exercise caution when considering the possible detection of flickering in the remaining sources, as some may still be subject to contamination. When discussing the results in the subsequent analysis, we differentiate between sources that are well isolated in the TESS images and those that have other sources in close proximity, although these contaminants are fainter than the studied targets. We inferred the same parameters for the 72 symbiotic stars as we did for the known flickering sources and the control sample of single red giants. The rms diagram for the studied targets is presented in Fig. <ref>. Most of the targets are located in the region of non-flickering single red giants (see their list in Table <ref>). However, 15 objects exhibited a higher amplitude of variability than expected from white noise. The PSD power revealed a similar pattern, with some additional targets lying above the relation for the control sample. However, a careful study of the PSDs of these stars revealed that the apparently higher power was caused by peaks or spikes in the PSD and not by the presence of flickering-like variability. We thoroughly reviewed the individual light curves of all the stars, along with the light curves of comparison stars located nearby.
In two cases, V589 CrA and ASASSN-V J163807.84-284207.6, the light curves appeared to be affected by periodic signals (with periods of 2.57 h and 7.12 h, respectively; see Section <ref>) and did not resemble flickering. Therefore, we do not classify these objects as flickering sources. The remaining 13 targets (listed in Table <ref>) exhibited short-term variability reminiscent of flickering. Their light curves are shown in Fig. <ref>. Some of these sources also displayed variability on longer timescales (∼ days), as shown in Fig. <ref>, similar to the previously known flickering sources discussed earlier. Three of the symbiotic stars, namely Z And, Hen 3-461, and CL Sco, exhibit flickering-like variability only in certain observed sectors, similar to the case of RT Cru (Sect. <ref>). For Z And, the variability is detected in Sector 17 (Oct 08, 2019 - Nov 02, 2019), while three years later in Sector 57 (Sep 30, 2022 - Oct 29, 2022), this variability appears to be absent. Hen 3-461 shows relatively constant behavior in Sectors 9 and 10 (Feb 28, 2019 - Apr 22, 2019), followed by prominent variability in Sector 37 (Apr 2, 2021 - Apr 28, 2021), and then returns to a low state in Sector 63 (Mar 10, 2023 - Apr 6, 2023). We note that the same pattern is also visible in the 120- and 20-sec SPOC-processed data that are available for Hen 3-461. Similarly, CL Sco exhibits a white-noise-dominated light curve in Sector 12 (May 21, 2019 - Jun 18, 2019), while showing flickering in Sectors 39 (May 27, 2021 - Jun 24, 2021) and 66 (Jun 2, 2023 - Jul 1, 2023). The appearance and disappearance of possible flickering in these sources may be connected to the activity of the systems. Z And experienced a very long active stage that began in 2000 <cit.>, and the TESS observations were obtained during its decline phase, when the system may be transitioning to quiescence. The initial dataset for CL Sco was obtained during its low state before the outburst activity started in October 2019, as indicated by its ASAS-SN light curve <cit.>, and the system has since remained in a high state. Regarding Hen 3-461, the detection of flickering occurred when the system was at its brightest over the past 7 years, as observed by the ASAS-SN survey. However, a more detailed analysis is necessary to determine whether this brightness enhancement is associated with an outburst, as the light curves of Hen 3-461 are complex due to the pulsations of the cool star and the orbitally-related variability. Several of the newly detected flickering symbiotic stars had been previously studied from ground-based observations, but flickering was not detected at that time. Specifically, these stars include AR Pav <cit.>, AX Per <cit.>, and NQ Gem <cit.>. In addition, RX Pup has been proposed as a possible recurrent nova with a Mira companion <cit.>. The authors suggested similarities between RX Pup and other symbiotic recurrent novae, such as RS Oph and T CrB. However, the detection of flickering was not available at that time to support their model. Finally, Z And has sometimes been included in previous lists of flickering sources, but its inclusion in those lists was based on the detection of a periodic short-term signal rather than aperiodic flickering variability <cit.>. §.§ Occurrence rate of flickering in symbiotic stars We detected flickering-like variability in the TESS light curves of 20 confirmed symbiotic systems. For 13 sources, this type of variability is reported for the first time.
This addition brings the total number of known symbiotic binaries with a likely detection of flickering to 35, still accounting for only approximately 12% of the known galactic symbiotic stars in the NODSV. However, we should highlight that this is only a lower limit to the real number of symbiotic stars with flickering, as not all symbiotic binaries were observed by TESS, some sources are too faint to be subjected to thorough analysis using the available data, and densely populated regions introduce contamination issues that hinder drawing conclusive insights regarding any short-term variability. Furthermore, even when examining the subset of sources that had previously shown flickering, our analysis successfully detected some variability in only 7 out of the 16 stars with available, relatively uncontaminated TESS data. This observation implies that the absence of flickering in TESS light curves does not necessarily indicate its absence altogether. It is plausible that repeated observations at different epochs or observations conducted at shorter wavelengths may unveil this variability in the stars where it was not apparent in the TESS data. On top of that, previous studies on flickering in symbiotic binaries have suggested that in symbiotic systems where hydrogen-rich material undergoes shell burning on the surface of the white dwarf <cit.>, the luminosity generated by this process, in particular when reprocessed to the optical by the symbiotic nebula, may be sufficiently high to overshadow the contribution from the accretion disk and potentially mask any flickering signal, or at least significantly reduce its amplitude <cit.>. However, as demonstrated by shell-burning systems distinct from symbiotic stars, the shell-burning process itself does not switch off the flickering variability, and these types of objects can still exhibit high-amplitude flickering if it is not diminished, e.g., due to the presence of a nebula. This phenomenon is evident in cases such as the recurrent nova T Pyx <cit.> or the super-soft X-ray binary MR Vel <cit.>. It is important to emphasize that the flux from shell burning alone remains relatively stable over short timescales, and the nebula itself is unlikely to introduce rapid variability <cit.>. Consequently, while the data analyzed in the current study may not be entirely conclusive in establishing a unique link between the observed variability and accretion processes, in particular for shell-burning systems, such a link appears likely. More than two-thirds of the 35 likely flickering sources among symbiotic stars belong to the second, less prevalent group of accreting-only symbiotic stars that lack shell burning (less prevalent because such systems are more challenging to detect). If we consider all accreting-only symbiotic stars from NODSV where flickering was previously detected through ground-based observations or those with usable TESS light curves (while excluding faint sources or those with contaminated light curves), the fraction of stars exhibiting detectable flickering rises to over 80% (22 out of 27). Only five accreting-only symbiotic stars with usable TESS light curves did not display any flickering-like variability (V934 Her, UV Aur, ER Del, GSC 06806-00016, and GH Gem; Table <ref>).
These results, made possible by a substantial increase in the number of detected flickering sources in this study, strongly suggest that accretion disks are a common phenomenon in symbiotic stars. § PERIODIC SIGNALS IN Z AND AND IN OTHER SYMBIOTIC SYSTEMS WITH TESS Z And has been known to show short-term periodic variability, distinct from the typical flickering, thanks to the observations of <cit.>. The authors inferred a period of 1682.6 ± 0.6 s (approximately 28 minutes) from repeated B band observations and attributed it to the rotation of an accreting magnetic white dwarf. The presence of a similar period was later confirmed by <cit.> through U and B band observations. In the current study, the TESS Sector 17 data lacked sufficient cadence (30 min) to study this periodicity. However, a single significant period was detected in the Sector 57 data (200 sec cadence) using the Lomb-Scargle method <cit.>, and its value was determined to be 1601.9 ± 0.6 s (26.7 min; Fig. <ref>). This suggests that the period has changed by approximately 80 seconds over a time span of 25 years. This change may be related to the ongoing prolonged activity of Z And; however, a detailed investigation of this periodic signal is beyond the scope of the current work and will be presented elsewhere. It is worth mentioning that, in addition to Z And, we have also detected similar periodicities in the TESS light curves of two other objects: AE Cir with a period of 27.3 minutes and CI Cyg with a period of 66.6 minutes (Fig. <ref>). To increase the confidence that the identified variability is not attributed to neighboring sources, we utilized a dedicated localization package <cit.>. This tool allows the localization of the variability source within the target pixel file, pinpointing the most probable star responsible for the observed variability. In the case of AE Cir, the association with the target star is highly probable, with the variability source located within 0.02 arcsec of the AE Cir position. However, the case of CI Cyg is somewhat less definitive. While the signal is clearly discernible in the processed light curve, the localization within the target pixel file is less constrained. Notably, even though the package identified the most probable source of the detected variability as coinciding with CI Cyg, the positional constraints remain less definitive than desired. As expected, upon repeating the same analysis for Z And, the association of the variability source with the target star became unambiguous. If subsequent follow-up observations validate the inferred periodicities in AE Cir and CI Cyg as real and not related to any background sources, we could speculate that these periods may have a similar origin as the one observed in Z And. This discovery would significantly increase the sample of symbiotic stars thought to harbor magnetic white dwarfs, following Z And <cit.> and FN Sgr, for which <cit.> inferred a rotation period of 11.3 minutes using Kepler data. As mentioned earlier, periodic variability was also detected in the TESS light curves of V589 CrA and ASASSN-V J163807.84-284207.6. However, the amplitudes and periods of these variations appear to be different from those observed in the TESS data of AE Cir, CI Cyg, and Z And. Furthermore, based on the localization analysis, the signal detected in the TESS light curve of V589 CrA is undoubtedly linked to the pulsating variable star V756 CrA, located approximately 1 arcmin away. In the case of ASASSN-V J163807.84-284207.6, the variable source seems to coincide with the studied star, and the origin of the 7.12-h variability remains unclear.
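For completeness, a period search of this kind can be sketched in a few lines. The snippet below, assuming the detrended arrays from the processing step above, returns the most significant period together with its false-alarm probability; the period range is an illustrative choice.

```python
import numpy as np
from astropy.timeseries import LombScargle

def dominant_period(time_d, resid, p_min_s=600.0, p_max_s=14400.0):
    """Most significant period (seconds) and its false-alarm probability."""
    t_s = time_d * 86400.0
    freq_hz = np.linspace(1.0 / p_max_s, 1.0 / p_min_s, 100_000)
    ls = LombScargle(t_s, resid)
    power = ls.power(freq_hz)
    best = freq_hz[np.argmax(power)]
    fap = ls.false_alarm_probability(power.max())
    return 1.0 / best, fap
```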
§ CONCLUSIONS In this study, we aimed to explore the short-term variability of symbiotic binaries using the precise photometric observations provided by the TESS space mission. Specifically, our focus was on the flickering phenomenon associated with the accretion disks around the hot components of these systems (white dwarfs or neutron stars). Flickering variability has previously been detected in only a small fraction of symbiotic systems, partly due to limitations imposed by ground-based observations, which are typically conducted in the optical region, where the amplitude of flickering is significantly reduced compared to the near-UV range. Although the rather red TESS passband is not ideally suited for studying this type of variability for the same reason, our findings support previous research indicating that flickering signatures can still be detected at the wavelengths observed by TESS, where the dominant radiation in symbiotic systems originates from the cool evolved giants. Since the flux contribution from the red giants is prominent, we conducted a detailed analysis of potential signals originating from these giant stars, including oscillations and granulation. We established a control sample consisting of presumed single red giants to facilitate a comparison with the symbiotic sources. By examining various parameters of the light curves, we aimed to identify the most effective indicators for distinguishing between sources that exhibit flickering and those that do not, thereby providing a quantitative method to complement the visual assessment of the light curves, which may sometimes be subjective. Keeping in mind the limitations imposed by the large pixel scale of 21 arcseconds per pixel in crowded regions, we analyzed the TESS observations of both the already known flickering sources among the symbiotic stars and the confirmed symbiotic stars from NODSV for which the short-term variability either had not been studied or the flickering had not been detected. Through our analysis, we were able to identify minutes-to-hours flickering-like variability in a total of 20 symbiotic stars, with 13 of them being previously unrecognized sources of flickering. Currently, the number of known symbiotic stars showing likely accretion-induced variability on short timescales is 35. Though this constitutes only a small fraction of all known galactic symbiotic stars, our study demonstrated that if we consider only accreting-only symbiotic systems – in which flickering is presumably more easily detected – the fraction could be as high as around 80%, possibly even higher. This finding strongly suggests that accretion disks are common in symbiotic systems. In addition to detecting variability on timescales of minutes and hours, our analysis of the nearly uninterrupted 27-day-long time series has revealed that the short-term flickering variability often correlates with changes occurring over a few days. By leveraging repeated observations across multiple sectors, we have observed fluctuations in the presence and amplitude of flickering over time. The phenomenon of flickering disappearing and reappearing has been previously documented in several sources, including the symbiotic star RT Cru, as observed by TESS.
The inclusion of three additional symbiotic stars exhibiting this behavior in our sample of 20 TESS flickering sources suggests that such variability patterns are not uncommon in these systems. Furthermore, it is possible that comparable behavior may emerge in other systems as more TESS sectors become accessible in the future. Finally, the findings presented in this study hold significant potential for future research, e.g., in the near-UV using one of the planned UV facilities, such as the Czech Quick Ultra-Violet Kilonova surveyor, which will, among other targets, also focus on symbiotic binaries as a secondary science objective <cit.>. At the same time, our results serve as a precursor for the upcoming European space mission PLATO, scheduled for launch in 2026 <cit.>. The PLATO mission is equipped with telescopes that provide a cadence of 25 seconds for stars fainter than 8 magnitudes. Additionally, it incorporates two multi-color fast cameras operating at a cadence of 2.5 seconds within the magnitude range of 4 to 8, and features a smaller pixel scale of 15 arcseconds per pixel. These improvements offer an excellent opportunity to further advance the study of accretion processes in symbiotic stars. We are thankful to the referee, Ulisse Munari, for the comments and suggestions greatly improving the manuscript. JM acknowledges support from the Instituto de Astrofísica de Canarias (IAC) received through the IAC early-career visitor program and the support from the Erasmus+ programme of the European Union under grant number 2020-1-CZ01-KA203-078200. PGB acknowledges the support of the Spanish Ministry of Science and Innovation with the Ramón y Cajal fellowship number RYC-2021-033137-I and the number MRR4032204, and the financial support by NAWI Graz. SM acknowledges the support of the Spanish Ministry of Science and Innovation with the Ramón y Cajal fellowship number RYC-2015-17697, with the grants no. PID2019-107187GB-I00 and PID2019-107061GB-C66, and through AEI under the Severo Ochoa Centres of Excellence Programme 2020–2023 (CEX2019-000920-S). R.A.G. acknowledges the support from the PLATO and GOLF/SoHO Centre National d'Études Spatiales grant. This research made use of the Spanish Virtual Observatory (https://svo.cab.inta-csic.es) project funded by MCIN/AEI/10.13039/501100011033/ through grant PID2020-112949GB-I00. This research made use of Lightkurve, a Python package for Kepler and TESS data analysis <cit.>, and tpfplotter by J. Lillo-Box (publicly available at www.github.com/jlillo/tpfplotter). Additional software used in this study includes <cit.>, <cit.>, <cit.>, and <cit.>.
{ "authors": [ "J. Merc", "P. G. Beck", "S. Mathur", "R. A. García" ], "categories": [ "astro-ph.SR" ], "primary_category": "astro-ph.SR", "published": "20231226171841", "title": "Accretion-induced flickering variability among symbiotic stars from space photometry with NASA TESS" }
Photoemission of spin-polarized electrons from aligned grains and chiral symmetry breaking Thiem Hoang Received ...; accepted... ========================================================================================== Training state-of-the-art (SOTA) deep models often requires extensive data, resulting in substantial training and storage costs. To address these challenges, dataset condensation has been developed to learn a small synthetic set that preserves essential information from the original large-scale dataset. Nowadays, optimization-oriented methods have been the primary approach in the field of dataset condensation for achieving SOTA results. However, the bi-level optimization process hinders the practical application of such methods to realistic and larger datasets. To enhance condensation efficiency, previous works proposed Distribution-Matching (DM) as an alternative, which significantly reduces the condensation cost. Nonetheless, current DM-based methods have yielded less competitive results than optimization-oriented methods due to their focus on aligning only the first moment of the distributions. In this paper, we present a novel DM-based method named M3D for dataset condensation by Minimizing the Maximum Mean Discrepancy between feature representations of the synthetic and real images. By embedding their distributions in a reproducing kernel Hilbert space, we align all orders of moments of the distributions of real and synthetic images, resulting in a more generalized condensed set. Notably, our method even surpasses the SOTA optimization-oriented method IDC on the high-resolution ImageNet dataset. Extensive analysis is conducted to verify the effectiveness of the proposed method. § INTRODUCTION In the era of deep learning, the utilization of large-scale datasets comprising millions of samples has become an indispensable prerequisite for achieving state-of-the-art (SOTA) models <cit.>. However, the associated storage expenses and computational costs involved in training these models present formidable challenges, often rendering them beyond the reach of startups and non-profit organizations <cit.>. To alleviate the challenges associated with larger datasets, Dataset Condensation (DC) <cit.> has emerged to reduce the training cost by synthesizing a compact set of informative images. Since its proposal, DC has attracted significant attention for addressing the challenges posed by the data burden <cit.>. Typically, DC condenses the dataset by minimizing the distance between real and synthetic images via a pre-defined metric. Based on whether a costly bi-level optimization <cit.> is performed, these methods can be generally categorized into two groups: (1) Optimization-Oriented methods <cit.>, which usually generate condensed examples by conducting performance matching or parameter matching via a bi-level optimization <cit.>; (2) Distribution-Matching (DM)-based methods <cit.>, which focus on aligning the feature distributions between real and synthetic data. Optimization-oriented methods have faced criticism for their inefficiency, primarily due to the involvement of bi-level optimization modules and time-consuming network updating processes <cit.>. In contrast, DM-based methods do not involve such nested optimization of models, which significantly reduces the computational cost associated with dataset condensation.
Nevertheless, the informativeness of the condensed examples generated by current DM-based methods may not be comparable to those produced by optimization-oriented methods. In this paper, we address a crucial oversight in existing DM-based methods, which is their neglect of higher-order moments of the distribution. As illustrated in Fig. <ref>, despite sharing the same first moment, the representation distributions of original and synthetic examples with misaligned second-order moments (Fig. <ref>) or third-order moments (Fig. <ref>) can exhibit very distinct characteristics. Motivated by this issue, we propose a novel DM-based method involving Minimizing the Maximum Mean Discrepancy (M3D) between the representation distributions of the real and synthetic images. Unlike previous DM-based methods that solely embed images in a feature representation space and align the first moment, our method further embeds the distribution of feature representations into a reproducing kernel Hilbert space. This transformation allows us to represent an infinite order of moments in a kernel-function form. By leveraging empirical estimation, we can readily align both first- and higher-order moments of the real and synthetic data with theoretical guarantees. Our method not only maintains the efficiency of the DM-based method but also exhibits significant improvements. Remarkably, the efficiency of our method makes it easily applicable to realistic and larger datasets like ImageNet <cit.>. Before delving into technical details, we clearly emphasize our contributions as:
* We reveal the importance of the alignment of higher-order moments for distribution matching, which is overlooked by previous DM-based methods.
* We propose a theoretically-guaranteed method for dataset condensation named M3D, which applies the classical kernel method <cit.> to represent an infinite number of moments in a kernel-function form, enabling the improved alignment of the higher-order moments of the representation distributions.
* We conduct extensive experiments to demonstrate the effectiveness and efficiency of our proposed method, where M3D yields SOTA performance with strong generalization across various scenarios.
§ BACKGROUND Problem Formulation. Dataset Condensation (DC) <cit.>, also called dataset distillation, aims to condense a large-scale dataset 𝒯={(x_i, y_i)}_i=1^|𝒯| into a tiny dataset 𝒮={(s_j, y_j)}_j=1^|𝒮|, so that an arbitrary model trained on 𝒮 achieves comparable performance to the one trained on 𝒯. Typically, the condensed 𝒮 is obtained by minimizing the information loss between the synthesized and the original examples, which can be formulated as: 𝒮^⋆ = argmin_𝒮 D(ϕ(𝒯),ϕ(𝒮)), where D represents a distance metric such as the Mean Square Error (MSE), and ϕ denotes the matching objective. As mentioned before, various objectives can lead to different optimization processes <cit.>, and based on whether a costly bi-level optimization is performed, existing methods can be mainly divided into optimization-oriented methods and Distribution-Matching (DM)-based methods. [Note that an introduction to further dataset condensation works can be found in the Appendix.] Distribution Matching. Although optimization-oriented methods can achieve SOTA performance, their inefficiency poses a significant obstacle to their application to realistic and larger datasets <cit.>. In response, DM-based methods have been developed as an alternative.
In their pioneering work, DM <cit.> introduces a surrogate matching objective that focuses on aligning the representation distributions of 𝒮 and 𝒯. This objective can be formulated as: 𝒮^⋆ = argmin_𝒮 E_θ∼ P_θ[D(g_θ(𝒮),g_θ(𝒯))], where g_θ is the deep encoder network parameterized by θ, which is instantiated as the model f_θ without the output layer. With MSE as the distance metric, the training objective of DM can be reformulated as: 𝒮^⋆ = argmin_𝒮 E_θ∼ P_θ‖1/|𝒯|∑_i=1^|𝒯|g_θ(x_i)-1/|𝒮|∑_j=1^|𝒮|g_θ(s_j)‖^2, which amounts to minimizing the gap between the empirical first moments of the representation distributions of 𝒮 and 𝒯. Compared to previous optimization-oriented methods, DM <cit.> eliminates the need for network updating, relying instead on randomly initialized encoders. Furthermore, the costly bi-level optimization is avoided in DM, leading to significantly improved training efficiency. Remark. Given the lower effectiveness of DM compared to optimization-oriented SOTA methods, efforts have been made to enhance DM and generate more informative examples in previous works <cit.>. For instance, IDM <cit.> enhances DM through techniques such as partitioning, enriched model sampling, and class-aware regularization. Similarly, DataDAM <cit.> improves DM by incorporating attention matching. In contrast to these methods, where only the first-order moment is matched, our focus is on enhancing DM through distribution embedding and higher-order moments, which are also noticed but not addressed explicitly by IDM <cit.>. Reproducing Kernel Hilbert Space. We provide a brief recap of the Reproducing Kernel Hilbert Space (RKHS) <cit.> here, which serves as the foundation of our method. Given a kernel 𝒦, ℋ is a Hilbert space of functions 𝒳→ℝ with dot product ⟨·,·⟩ if every ϕ∈ℋ satisfies the reproducing property: ⟨ϕ(·),𝒦(x,·)⟩=ϕ(x). That is to say, with the RKHS, we can evaluate a function ϕ on 𝒳 at x as an inner product. In addition to the reproducing property mentioned above, the kernel function 𝒦 must also satisfy the following two properties: Symmetry: 𝒦(x,x')=𝒦(x',x); Positivity: 𝒦(·,·) ≥ 0. Commonly used kernel functions include the polynomial kernel 𝒦(x,x')= (x^⊺ x'+c)^d, the Gaussian RBF kernel 𝒦(x,x')=exp(-λ‖ x-x'‖^2), and the linear kernel 𝒦(x,x')=x^⊺ x'. § METHODOLOGY In this section, we begin by analyzing the importance of the alignment of higher-order moments for distribution matching. Subsequently, we propose our method M3D, exploiting the classical kernel method <cit.> to align the higher-order moments of the representation distributions between real and synthesized data with theoretical guarantees. §.§ Importance of the Higher-Order Alignment As shown in Eq. (<ref>), it is evident that DM <cit.> only considers aligning the first moment (mean) of the representation distributions, while neglecting higher-order moments. At a high level, this may lead to a higher-order misalignment between the representation distributions of its condensed data and the original data. To investigate this misalignment issue and highlight the importance of higher-order alignment, we assessed the moment distances between the condensed set and the original training set on CIFAR-10 with 10 images per class. This was done by incorporating higher-order moment regularization terms into the original loss of DM <cit.> (a minimal sketch of such a regularized objective is given below). The results, presented in Table <ref>, reveal that adding second-order regularization notably decreases the distance between the higher-order moments of the condensed and original data, underscoring the inadequacy of aligning only the first moment.
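To make this motivating experiment concrete, the following hedged sketch augments the first-moment (DM) loss of Eq. (<ref>) with second- and third-order central-moment matching terms. Here feats_real and feats_syn stand for encoder features g_θ(·) of a real and a synthetic batch of one class, and the weights lam2 and lam3 are hypothetical choices.

```python
import torch

def moment_matching_loss(feats_real, feats_syn, lam2=1.0, lam3=1.0):
    """DM first-moment loss plus higher-order central-moment regularizers."""
    mu_r, mu_s = feats_real.mean(0), feats_syn.mean(0)
    loss = ((mu_r - mu_s) ** 2).sum()                  # first moment (DM)
    for order, lam in ((2, lam2), (3, lam3)):
        m_r = ((feats_real - mu_r) ** order).mean(0)   # k-th central moment
        m_s = ((feats_syn - mu_s) ** order).mean(0)
        loss = loss + lam * ((m_r - m_s) ** 2).sum()
    return loss
```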
Furthermore, performing more regularization enhances the condensed dataset's performance through improved higher-order alignment. These results underscore the critical role of higher-order moment alignment in distribution matching, which is neglected in previous works. §.§ Minimizing Maximum Mean Discrepancy From the preceding analysis, it becomes evident that perfecting distribution matching necessitates the consideration of higher-order moments. While incorporating higher-order regularizations directly aids in aligning these moments, it is limited to finite moments. Moreover, tuning the regularization coefficients becomes increasingly challenging with a growing number of regularization terms. In this subsection, we present a new DM-based method that aligns an infinite order of moments in a kernel-function form. We depict the framework of the proposed M3D in Fig. <ref>. Embedding Distribution in RKHS. Denote the distributions of representations for real and synthetic examples as g_θ(𝒯)∼P_𝒯 and g_θ(𝒮)∼P_𝒮, respectively, where g_θ is the representation extractor parameterized by θ. As the order of moments extends infinitely, it is impractical to explicitly align an infinite number of moments. To address this, we need to first embed the distribution in an RKHS ℋ: μ[P_𝒯/𝒮]:=E_𝒯/𝒮[𝒦(g_θ(x/s),·)], which has been proven to be a valid embedding for distance based on the following theorem: <cit.> If the kernel function 𝒦 is universal, then the mean map μ: P ↦μ[P] is injective. Maximum Mean Discrepancy. Via the reproducing property of ℋ, ∀ϕ, we have ⟨ϕ,μ[P_𝒯/𝒮]⟩ = E_𝒯/𝒮[ϕ(g_θ(x/s))], which indicates that we can compute expectations w.r.t. P_𝒯/𝒮 by taking the inner product with the distribution kernel embedding μ[P_𝒯/𝒮]. This property is favorable because it helps us to calculate the Maximum Mean Discrepancy (MMD) between P_𝒯 and P_𝒮: MMD(P_𝒯, P_𝒮) := sup(E_𝒯[ϕ(g_θ(x))] - E_𝒮[ϕ(g_θ(s))]) = sup⟨ϕ,μ[P_𝒯]-μ[P_𝒮]⟩, where ϕ∈ℋ and ‖ϕ‖_ℋ≤ 1. In addition, based on the Cauchy-Schwarz inequality, we have ⟨ϕ,μ[P_𝒯]-μ[P_𝒮]⟩≤‖ϕ‖_ℋ‖μ[P_𝒯]-μ[P_𝒮]‖_ℋ≤‖μ[P_𝒯]-μ[P_𝒮]‖_ℋ, hence the MMD can be further simplified as: MMD(P_𝒯, P_𝒮)=‖μ[P_𝒯]-μ[P_𝒮]‖. It should be noted that μ[P_𝒯] and μ[P_𝒮] live in an infinite-dimensional space, which renders direct computation unattainable. However, we can leverage the reproducing property of the RKHS to transform them into a more tractable form using the kernel function 𝒦. This transformation can be formally expressed as: MMD^2(P_𝒯, P_𝒮)=𝒦_𝒯,𝒯+𝒦_𝒮,𝒮-2𝒦_𝒯,𝒮, where 𝒦_X,Y=E_X,Y[𝒦(g_θ(x), g_θ(y))] with x∼ X, y∼ Y. Due to limited space, we provide the derivation of Eq. (<ref>) in the Appendix. Lastly, note that we only have access to the datasets 𝒯 and 𝒮 rather than their underlying distributions. To tackle this issue, denoting the empirical approximations of μ[P_𝒯] and μ[P_𝒮] as μ[𝒯]=1/|𝒯|∑_i=1^|𝒯|𝒦(g_θ(x_i),·), μ[𝒮]=1/|𝒮|∑_j=1^|𝒮|𝒦(g_θ(s_j),·) respectively, we introduce the following theorem: <cit.> Assume that ‖ϕ‖_∞≤ R for all ϕ∈ℋ with ‖ϕ‖_ℋ≤ 1. Then with probability at least 1-δ, ‖μ[P_𝒯/𝒮]-μ[𝒯/𝒮]‖≤ 2R̅(ℋ, P_𝒯/𝒮)+R√(-|𝒯/𝒮|^-1log(δ)), where R̅(ℋ, P_𝒯/𝒮) is the Rademacher average, which is ensured to yield an error of 𝒪(√(|𝒯/𝒮|^-1)). Theorem <ref> guarantees that the empirical approximations μ[𝒯/𝒮] are good proxies for μ[P_𝒯/𝒮]. Therefore, we can modify Eq. (<ref>) to the following empirical form as the M3D loss: ℒ_M3D=M̂M̂D̂^2(P_𝒯, P_𝒮)=𝒦̂_𝒯,𝒯+𝒦̂_𝒮,𝒮-2𝒦̂_𝒯,𝒮, where 𝒦̂_X,Y=1/|X|·|Y|∑_i=1^|X|∑_j=1^|Y|𝒦(g_θ(x_i), g_θ(y_j)) with {x_i}_i=1^|X|∼ X, {y_j}_j=1^|Y|∼ Y. Based on the analysis above, we have successfully transformed an infinite number of moments into a finite form using the RKHS. As shown in Table <ref>, this transformation allows us to effectively align the distributions between 𝒯 and 𝒮 during the condensing process.
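In code, this empirical loss reduces to a few lines. The sketch below is a minimal PyTorch rendition assuming a Gaussian kernel with a hypothetical bandwidth sigma; it implements the biased estimator with the double sums of the empirical form above.

```python
import torch

def gaussian_kernel(a, b, sigma=1.0):
    """K(x, y) = exp(-||x - y||^2 / (2 sigma^2)) for all pairs."""
    return torch.exp(-torch.cdist(a, b) ** 2 / (2.0 * sigma ** 2))

def m3d_loss(feats_real, feats_syn, sigma=1.0):
    """Empirical squared MMD between real and synthetic features."""
    k_tt = gaussian_kernel(feats_real, feats_real, sigma).mean()
    k_ss = gaussian_kernel(feats_syn, feats_syn, sigma).mean()
    k_ts = gaussian_kernel(feats_real, feats_syn, sigma).mean()
    return k_tt + k_ss - 2.0 * k_ts
```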
(<ref>) to the following empirical form as the M3D loss:

ℒ_M3D = MMD^2(P_𝒯, P_𝒮) = 𝒦̂_𝒯,𝒯 + 𝒦̂_𝒮,𝒮 − 2𝒦̂_𝒯,𝒮,

where 𝒦̂_X,Y = 1/(|X|·|Y|) ∑_i=1^|X| ∑_j=1^|Y| 𝒦(g_θ(x_i), g_θ(y_j)) with {x_i}_i=1^|X| ∼ X, {y_j}_j=1^|Y| ∼ Y. Based on the analysis above, we have successfully achieved the transformation of an infinite number of moments into a finite form using the RKHS. As shown in Table <ref>, this transformation allows us to effectively align the distributions between 𝒯 and 𝒮 during the condensing process.

§.§ Training Algorithm of M3D

The pseudo-code of M3D is provided in the Appendix. In addition to the kernel method, we exploit the following two techniques to enhance the distribution matching.

Factor & Up-sampling. The factor technique <cit.>, also termed partitioning and expansion augmentation in IDM <cit.>, aims to increase the number of representations extracted from 𝒮 without additional storage cost. Specifically, with the factor parameter being l, each image s_i ∈ 𝒮 is factorized into l × l mini-examples and then up-sampled to its original size in training:

s_i ↦ [ s_i^1,1 … s_i^1,l; ⋮ ⋱ ⋮; s_i^l,1 … s_i^l,l ] ↦ {s_i'^1, s_i'^2, …, s_i'^l×l}.

In this way, the storage space of 𝒮 can be further leveraged. Following previous works, the same factor technique is incorporated into our framework, where we further exploit its benefits in aligning distributions in higher-order moments.

Iteration per Random Model. Following DM <cit.>, we employ multiple randomly initialized models to extract representation embeddings from both 𝒯 and 𝒮. In contrast to DM, where only a single-step iteration is performed for each model, we posit that relying solely on the representation distributions of one batch of real and synthetic examples may introduce matching biases. To address this, without incurring additional memory usage, we empirically observe that conducting multiple iterations per model (IPM) enhances the performance of the condensed set.

§ EXPERIMENTS

In this section, we begin by comparing our proposed M3D with SOTA baselines on multiple benchmark datasets. Subsequently, we conduct an in-depth examination of M3D through ablation analysis.

§.§ Experimental Setups

Datasets. We evaluate the classification performance of networks trained on synthetic images that have been condensed using various baselines as well as our proposed method M3D. Our evaluation encompasses five low-resolution datasets: MNIST <cit.>, Fashion-MNIST (F-MNIST) <cit.>, SVHN <cit.>, CIFAR-10 <cit.>, and CIFAR-100 <cit.>. In addition, we conduct experiments on the high-resolution ImageNet subsets <cit.>. Detailed descriptions of the datasets can be found in the Appendix.

Network Architectures. We use a depth-3 ConvNet <cit.> for the low-resolution datasets, and a ResNetAP-10 <cit.> (ResNet-10 with the strided convolution replaced by average pooling) for the high-resolution ImageNet subsets.

Baselines. We employ an extensive range of methods as baselines for comparison. Regarding coreset selection methods, we consider the following: (1) Random, (2) Herding <cit.>, and (3) K-Center <cit.>. For optimization-oriented DC methods, we evaluate (4) DC <cit.>, (5) DSA <cit.>, (6) IDC <cit.>. On the other hand, for DM-based DC methods, we include (7) CAFE <cit.>, (8) its variant CAFE+DSA <cit.>, (9) DM <cit.> and (10) IDM <cit.>. We provide detailed descriptions of the baselines in the Appendix.

Metric. Following previous works <cit.>, we employ the test accuracy of networks trained on condensed examples as the evaluation metric.
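Recalling the M3D loss above, the following is a minimal PyTorch sketch of the empirical squared MMD between two batches of representations under a Gaussian kernel; the bandwidth parameterization and batch shapes are illustrative assumptions, not the paper's exact implementation.

import torch

def gaussian_gram(a, b, bandwidth=1.0):
    # a: (n, d), b: (m, d); entry (i, j) is K(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 h^2)).
    return torch.exp(-torch.cdist(a, b) ** 2 / (2 * bandwidth ** 2))

def m3d_loss(feat_real, feat_syn, bandwidth=1.0):
    # Empirical MMD^2 = mean K(T, T) + mean K(S, S) - 2 mean K(T, S).
    k_tt = gaussian_gram(feat_real, feat_real, bandwidth).mean()
    k_ss = gaussian_gram(feat_syn, feat_syn, bandwidth).mean()
    k_ts = gaussian_gram(feat_real, feat_syn, bandwidth).mean()
    return k_tt + k_ss - 2 * k_ts

Minimizing this quantity over the synthetic images, with the encoder g_θ re-drawn at random as described above, simultaneously aligns all moments that the chosen kernel can distinguish.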
All the networks are trained from scratch multiple times — 10 times for the low-resolution datasets and 3 times for the ImageNet subsets. We report the average performance and the standard deviation.

Implementation Details. We employ the Gaussian kernel for the RKHS by default. The number of iterations is set to 10K for all low-resolution datasets, while for the ImageNet subsets we use 1K iterations. Additionally, the number of iterations per model is consistently set to 5 across all datasets. Regarding the learning rates for the condensed data, we assign a value of 1 for the low-resolution datasets, including F-MNIST, SVHN and CIFAR-10/100. For the ImageNet subsets, we adopt a learning rate of 1e-1. Following IDC <cit.>, the factor parameter l is set to 2 for the low-resolution datasets and 3 for the ImageNet subsets.

§.§ Comparison to the SOTA Methods

Table <ref> and Table <ref> present the comparison of our method with coreset selection and dataset condensation methods. The results show that synthetic examples are more informative than the selected ones, especially when the number of images per class is small. This is attributed to the fact that synthetic examples are not confined to the set of real examples. Furthermore, our method consistently outperforms other baselines across a diverse set of scenarios. Remarkably, M3D achieves over 5% higher accuracy than the best baseline on SVHN, CIFAR-10 (IPC=10), and CIFAR-100 (IPC=1). Notably, for the high-resolution ImageNet subsets <cit.>, our method surpasses all baselines in test accuracy, including the current SOTA optimization-oriented IDC <cit.>. It is worth noting that IDC <cit.> demands an exceptionally long time to condense the ImageNet subsets, e.g., approximately 4 days on ImageNet-10 with IPC=20 <cit.>. In contrast, M3D achieves superior performance in a matter of hours. Additionally, our method eliminates the need for network updates, thereby circumventing the tuning of various hyper-parameters. Consequently, our method can be readily applied to realistic and larger datasets, maintaining efficiency and effectiveness simultaneously. To further demonstrate the advantages of our method, we provide the test accuracy across varying training steps in Fig. <ref>. As observed, our method consistently outperforms DM at different training steps. Even without the factor technique, our method still achieves considerable improvement, highlighting the effectiveness of M3D in aligning distributions compared to previous DM-based methods.

Cross-Architecture Evaluation. We further assess the performance of our condensed examples on different architectures. In Table <ref>, we present the performance of our condensed examples from the CIFAR-10 dataset on ConvNet-3, ResNet-10 <cit.>, and DenseNet-121 <cit.>. Combining the results from Table <ref>, we find that M3D outperforms the compared methods not only on the architecture used for condensation but also on unseen ones.

Visualizations. We visualize the condensed images of SVHN and ImageNet in Fig. <ref> and Fig. <ref>, respectively. For SVHN, we initialize the synthetic set 𝒮 using random images from the training set 𝒯 and then apply the condensation process using DM and M3D. As shown, the condensed images produced by DM and M3D appear as if the original images have been augmented with a distinct texture. Notably, the condensed images produced by our method exhibit a more pronounced and visually appealing texture compared to DM.
While the overall appearance remains similar, our condensed images demonstrate better alignment with the higher-order moments of the original training set. In the case of ImageNet, the condensed images exhibit a texture reminiscent of a sunspot. In contrast to optimization-oriented methods, the images condensed by M3D retain more natural features and are more visually recognizable to humans. More visualization results are provided in the Appendix.

§.§ Ablation Study

Impact of the Iteration per Model (IPM). We conduct experiments using various numbers of iterations per model, and the corresponding performance is depicted in Fig. <ref>. We adopt CIFAR-10 with 10 images per class to showcase the impact of IPM. In addition to the test accuracy of the condensed examples, we also provide the training time required to achieve the reported accuracy. As shown, increasing the number of IPM may lead to improved performance of the condensed data, but it also increases the training time. Conversely, an excessively large IPM can compromise the generalization ability of the condensed examples.

Impact of the Kernel Function. Different kernel functions construct distinct Reproducing Kernel Hilbert Spaces (RKHS). To investigate their influence, we adopt two additional kernel functions besides the Gaussian kernel: the linear kernel and the polynomial kernel. Fig. <ref> illustrates the test accuracy under the different kernel functions. As observed, the choice of kernel function 𝒦 has minimal impact on the test accuracy of the condensed dataset. This indicates that as long as the selected kernel function is valid, our M3D can effectively embed the distributions in the constructed RKHS, resulting in a robust method.

§ CONCLUSION

In conclusion, this paper introduces a novel Distribution-Matching (DM)-based method called M3D for dataset condensation. With a theoretical guarantee, our method embeds the representation distributions of real and synthetic examples in a reproducing kernel Hilbert space, minimizing the maximum mean discrepancy between them to align their distributions in both first- and higher-order moments. Extensive experiments show the effectiveness and efficiency of our method. Notably, the efficiency of our method enables its application to more realistic and larger datasets. M3D is the first to study the alignment of higher-order moments of the representation distributions between real and synthetic examples, and it establishes a strong baseline among DM-based methods for dataset condensation, which we believe will be valuable to the research community.

§ ACKNOWLEDGMENTS

This work was partially supported by grants from the National Key Research and Development Plan (2020AAA0140001), the Beijing Natural Science Foundation (19L2040), and the Open Research Project of the National Key Laboratory of Science and Technology on Space-Born Intelligent Information Processing (TJ-02-22-01).
http://arxiv.org/abs/2312.15927v2
{ "authors": [ "Hansong Zhang", "Shikun Li", "Pengju Wang", "Dan Zeng", "Shiming Ge" ], "categories": [ "cs.CV", "cs.LG" ], "primary_category": "cs.CV", "published": "20231226074532", "title": "M3D: Dataset Condensation by Minimizing Maximum Mean Discrepancy" }
Principled Instructions Are All You Need for Questioning LLaMA-1/2, GPT-3.5/4

Sondos Mahmoud Bsharat^*, Aidar Myrzakhan^*, Zhiqiang Shen^*
^*joint first author & equal contribution
1. VILA Lab, Mohamed bin Zayed University of AI

This paper introduces 26 guiding principles designed to streamline the process of querying and prompting large language models. Our goal is to simplify the underlying concepts of formulating questions for various scales of large language models, examining their abilities, and enhancing user comprehension of the behaviors of different scales of large language models when fed different prompts. Extensive experiments are conducted on LLaMA-1/2 (7B, 13B and 70B) and GPT-3.5/4 to verify the effectiveness of the proposed principles for instruction and prompt design. We hope that this work provides a better guide for researchers working on the prompting of large language models. The project page is available at <https://github.com/VILA-Lab/ATLAS>.

§ INTRODUCTION

“Prompt engineering is the art of communicating with a generative large language model.” — ChatGPT, 2023

Large language models (LLMs) like ChatGPT <cit.> have shown impressive abilities in various domains and tasks, such as answering questions <cit.>, mathematical reasoning <cit.>, code generation <cit.>, etc. However, their application and usage, especially the design of optimal instructions or prompts, can sometimes be unclear to common users. In this work, we aim to reveal these mysteries for developers and general users who inquire of and interact with LLMs, and to further enhance the quality of the responses of pretrained LLMs by simply curating better prompts. Given that directly fine-tuning LLMs for particular tasks tends to be impractical or unattainable for the majority of users and developers due to inefficiency, the research community has turned its attention to the optimization of prompts. The technique of prompt engineering, which entails the crafting of precise, task-specific instructions in natural language, either manually or through automated means, and the careful selection of representative examples for inclusion in the prompt, has become a central area of investigation for LLMs. Despite these dedicated efforts, the task of reliably guiding LLMs to produce specific responses and making full use of the capability of pretrained LLMs continues to pose a considerable challenge. In this work, we present comprehensive principled instructions to improve the quality of prompts for LLMs. Specifically, we investigate a wide range of behaviors when feeding in different types and formulations of prompts, such as integrating the intended audience in the prompt, e.g., adding “the audience is an expert in the field” or “the audience is a 5-year-old child”, as well as multiple other aspects of the characteristics of LLMs. Our findings indicate that larger models possess a considerable capacity for simulation. The more precise the task or directive provided, the more effectively the model performs, aligning its responses more closely with our expectations.
This suggests that LLMs do not merely memorize training data but are capable of adapting this information to suit varying prompts, even when the core inquiries remain constant. Therefore, it proves beneficial to assign a specific role to LLMs as a means to elicit outputs that better match our intended results. We elaborate the principled instructions for LLM prompting, provide further motivation, and detail several specific design principles in Section <ref>. In Section <ref> we show experimentally that the proposed principles can produce higher-quality, more concise, factual, and less convoluted responses than standard prompts for LLMs. Specifically, on the manually designed ATLAS benchmark, which includes multiple questions for each principle, the specialized prompts we introduced have enhanced the quality and accuracy of the LLM responses by an average of 57.7% and 67.3%, respectively, when applied to GPT-4. Furthermore, the improvements are more pronounced as model size increases; for example, the performance gains when moving from LLaMA-2-7B to GPT-4 exceed 40%.

§ RELATED WORK

Large Language Models. The evolution of large language models (LLMs) has been pivotal in advancing natural language processing (NLP). This section reviews key developments in LLMs, providing a foundation for the current study. Google's BERT <cit.> revolutionized context understanding through its bidirectional training approach, while T5 <cit.> further advanced the field by unifying various NLP tasks into a single framework. Concurrently, GPT-1 <cit.> introduced a pioneering model leveraging transformer architectures for unsupervised learning. This was followed by its successor, GPT-2 <cit.>, which significantly expanded its parameter count to 1.5 billion, demonstrating remarkable capabilities in text generation. Then, GPT-3 <cit.> marked a substantial leap in scale and capability, boasting 175 billion parameters and showcasing proficiency across a wide range of language tasks. Among other recently proposed LLMs, Gopher <cit.> not only advanced language processing capabilities with its 280-billion-parameter model but also brought ethical considerations to the forefront. Meta's LLaMA series <cit.> highlighted the importance of efficiency, suggesting powerful performance with fewer resources, a concept also advocated by Chinchilla <cit.>, which proposed that smaller, optimally trained models could achieve exceptional results. A more recent entry in this line of work is Mistral <cit.>, which excels in efficiency and performance, outperforming larger models. The most recent milestones in this trajectory are OpenAI's GPT-4 <cit.> and Google's Gemini family <cit.>. They represent another significant advancement in the field with their enhanced understanding and generative capabilities, setting new benchmarks for the application of LLMs in various domains.

Prompting. Prompting, as a distinct way of interacting with language models that requires no fine-tuning of the model, has evolved into a nuanced field of study, highlighting the intricate relationship between user inputs and LLM responses. Early explorations, such as those by <cit.>, delved into how varying prompt designs could dramatically influence the performance and outputs of language models, marking the birth of prompt engineering.
This area rapidly expanded, uncovering the critical role of prompts in few-shot and zero-shot learning scenarios, exemplified by the work of <cit.> with GPT-3, where strategically crafted prompts enabled the model to perform tasks with minimal prior examples. Beyond mere task instruction, recent studies have shifted towards understanding the semantic and contextual nuances in prompts, examining how subtle changes can lead to significantly different responses from the LLM. Ask-Me-Anything prompting <cit.> introduced the idea of using multiple imperfect prompts and aggregating their outputs to improve model performance, particularly in question-answering formats. Another approach, the Chain-of-Thought method <cit.>, has the model generate a series of intermediate reasoning steps to improve performance on complex tasks. Also, least-to-most prompting <cit.> introduced a novel strategy of breaking down complex problems into simpler subproblems, significantly enhancing the model's capability to tackle problems more challenging than those presented in the prompts. The effectiveness of explanations was explored in <cit.>, which found that explanations can enhance LLMs' learning capabilities on complex tasks. Furthermore, a catalog of prompt engineering techniques was examined with ChatGPT <cit.>, emphasizing the importance of prompt engineering in enhancing LLM applications in software development and education. It also highlighted that effective prompt design is crucial for improving LLM performance, particularly in coding practices and learning experiences. Lastly, Directional Stimulus Prompting <cit.> presents a novel framework that uses a tunable policy model to generate auxiliary prompts, guiding LLMs towards specific desired outcomes. This diversity in prompting strategies underscores the rapidly evolving landscape of LLMs, offering multiple directions to harness their capabilities more effectively.

§ PRINCIPLES

§.§ Motivation

Since the quality of the responses generated by a pretrained and aligned LLM is directly relevant to the quality of the prompts or instructions provided by the users, it is essential to craft prompts that the LLM can comprehend and respond to effectively. The prompts delivered to an LLM serve as a way to program the interaction between a user and the LLM, enhancing its ability to address a diverse range of tasks. The primary focus of this work is on the methodology of crafting and customizing prompts to enhance output quality. This necessitates a comprehensive grasp of the functioning and behaviors of LLMs, their underlying mechanisms, and the principles governing their responses. In this work, we achieve this goal by elaborating 26 principles for comprehensive prompts in different scenarios and circumstances; examples are shown in Fig. <ref>.

§.§ Overview

An overview of the principles is presented in Table <ref>. According to their unique nature, we group them into five categories as in Table <ref>: (1) Prompt Structure and Clarity, e.g., integrate the intended audience in the prompt, such as “the audience is an expert in the field”; (2) Specificity and Information, e.g., add to your prompt the phrase “Ensure that your answer is unbiased and does not rely on stereotypes.”; (3) User Interaction and Engagement, e.g., allow the model to elicit precise details and requirements from you by asking you questions until it has enough information to provide the needed output: “From now on, I would like you to ask me questions to...”;
(4) Content and Language Style, e.g., there is no need to be polite with the LLM, so phrases like “please”, “if you don't mind”, “thank you”, “I would like to”, etc. can be omitted; get straight to the point; (5) Complex Tasks and Coding Prompts, e.g., break down complex tasks into a sequence of simpler prompts in an interactive conversation.

§.§ Design Principles

In this study, a number of guiding principles are established for formulating prompts and instructions to elicit high-quality responses from pre-trained large language models:

Conciseness and Clarity: Generally, overly verbose or ambiguous prompts can confuse the model or lead to irrelevant responses. Thus, the prompt should be concise, avoiding unnecessary information that does not contribute to the task, while being specific enough to guide the model. This is the basic guiding principle for prompt engineering.

Contextual Relevance: The prompt must provide relevant context that helps the model understand the background and domain of the task. Including keywords, domain-specific terminology, or situational descriptions can anchor the model's responses in the correct context. We highlight this design philosophy in our presented principles.

Task Alignment: The prompt should be closely aligned with the task at hand, using language and structure that clearly indicate the nature of the task to the model. This may involve phrasing the prompt as a question, a command, or a fill-in-the-blank statement that fits the task's expected input and output format.

Example Demonstrations: For more complex tasks, including examples within the prompt can demonstrate the desired format or type of response. This often involves showing input-output pairs, especially in “few-shot” or “zero-shot” learning scenarios.

Avoiding Bias: Prompts should be designed to minimize the activation of biases inherent in the model due to its training data. Use neutral language and be mindful of potential ethical implications, especially for sensitive topics.

Incremental Prompting: For tasks that require a sequence of steps, prompts can be structured to guide the model through the process incrementally. Break down the task into a series of prompts that build upon each other, guiding the model step-by-step. Prompts should also be adjustable based on the performance and responses of the model and on iterative human feedback and preferences, i.e., one should be prepared to refine the prompt based on initial outputs and model behaviors.

Finally, more advanced prompts may incorporate programming-like logic to achieve complex tasks, for instance, the use of conditional statements, logical operators, or even pseudo-code within the prompt to guide the model's reasoning process. The design of prompts is an evolving field, especially as LLMs become more sophisticated. As researchers continue to explore the limits of what can be achieved through prompt engineering, these principles will likely be refined and expanded.

§ EXPERIMENTS

§.§ Setup and Implementation Details

All our evaluation is performed on ATLAS <cit.>, a manually crafted benchmark for principled prompt evaluation. For each principle, it contains 20 human-selected questions with and without the principled prompts. Following <cit.>, we evaluate the outputs of the various scales of LLMs by human evaluation.
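For concreteness, the following is a hypothetical Python sketch of how such with/without pairs can be formed by prepending a principled instruction to a base question; the helper names and the generic query_model call mentioned in the comments are illustrative assumptions and not part of the benchmark itself.

# Example instruction from the "Prompt Structure and Clarity" category.
AUDIENCE_PRINCIPLE = "The audience is an expert in the field."

def make_principled(question: str, principle: str) -> str:
    # The principled variant simply places the instruction before the question.
    return f"{principle}\n{question}"

base = "Explain why the sky appears blue."
pair = (base, make_principled(base, AUDIENCE_PRINCIPLE))
# Human raters would then compare query_model(pair[0]) against query_model(pair[1])
# to score boosting (response quality) and correctness (response accuracy).

In the actual benchmark, the ratings are assigned by human evaluators rather than computed automatically.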
§.§ Models and Metrics

We use instruction-finetuned LLaMA-1-{7, 13}B and LLaMA-2-{7, 13}B, the off-the-shelf LLaMA-2-70B-chat, GPT-3.5 (ChatGPT) and GPT-4 as our base models. We group these models into different scales: small-scale (7B models), medium-scale (13B) and large-scale (70B, GPT-3.5/4). We evaluate these models in two settings: Boosting and Correctness. They are employed together to provide a comprehensive understanding of a model's performance.

* Boosting. We assess the enhancement in the quality of responses from different LLMs via human evaluation after applying the outlined prompt principles. The original, unmodified prompts act as a benchmark for measuring this enhancement. Demonstrating boosting confirms that a model's performance has improved due to the use of structured, principled instructions, as shown in Fig. <ref>.

* Correctness. The concept of correctness refers to the precision of the model's outputs or responses, ensuring they are accurate, relevant, and devoid of errors. Human evaluators are utilized to gauge this aspect, which is crucial for verifying the model's accuracy. Correctness is a testament to the model's ability to generate outputs that align with the expected standards of accuracy, as shown in Fig. <ref>.

§.§ Results

§.§.§ Results on small, medium and large-scale LLMs

Boosting. The improvements obtained by employing the introduced principles are shown in Fig. <ref>. Generally, all principles bring a significant improvement on all three scales of LLMs. In the cases of principles 2, 5, 15, 16, 25 and 26, the large-scale models benefit most from the principled prompts.

Correctness. As shown in Fig. <ref>, the employment of all principles typically results in improvements exceeding 20% when averaged over the various models. In particular, for small- and medium-scale models the improvement generally lies between 20% and 30%, while for large models the improvement can exceed 50%.

§.§.§ Results on individual LLMs

Boosting. Fig. <ref> illustrates the improvement in response quality for each individual model and principle after using the revised prompts. On average, there is a stable 50% improvement across the different LLMs. Fig. <ref> further provides the detailed improvement for each principle with the different LLMs.

Correctness. Fig. <ref> illustrates the enhancements in accuracy across different sizes of LLMs. From LLaMA-2-13B and LLaMA-2-70B-chat to GPT-3.5 and GPT-4, there is a noticeable trend: the larger the model, the greater the increase in correctness. Fig. <ref> further presents the correctness enhancements for each principle.

§.§.§ More examples on various scales of LLMs

We present additional examples for both small- and medium-scale LLMs, as illustrated in Fig. <ref> and <ref> for the small-scale LLaMA-2-7B, and Fig. <ref> and <ref> for the medium-scale LLaMA-2-13B. Empirically, the use of the proposed principles in prompts has demonstrably enhanced the accuracy of the responses generated by these models.

§ CONCLUSION

We presented 26 principles, derived through an exhaustive analysis, that enhance the LLM's ability to focus on the crucial elements of the input context, leading to the generation of quality responses. By guiding the LLM with these meticulously crafted principles before the input is processed, we can encourage the model towards producing better responses.
Our empirical results demonstrate that this strategy can effectively reformulate contexts that might otherwise compromise the quality of the output, thereby enhancing the relevance, brevity, and objectivity of the responses. There are numerous directions for future exploration. In our experiments, we utilized a constrained shot prompting approach to apply these principles. There is potential to refine our base models to align further with our principled instructions using alternative strategies, such as fine-tuning, reinforcement learning, direct preference optimization, or different prompting methods using our generated dataset. Moreover, the strategies that prove successful could be integrated into standard LLM operations, for instance, by fine-tuning with the original/principled prompts as inputs and the polished, principled responses as targets for training.

§ LIMITATIONS AND DISCUSSION

While the proposed 26 principles are designed to improve the quality of LLM responses across a diverse array of queries, their effectiveness may diminish when dealing with questions that are very complex or highly specialized. This limitation mainly depends on the reasoning capabilities and training of each model. To address these variations, we have tested the principles across different scales to measure their effectiveness comprehensively. Despite our efforts in evaluating these principles on seven distinct language models, it is crucial to acknowledge that models with architectures different from those tested might respond differently to these principles. Additionally, our assessment of improvement and correctness percentages was based on a limited selection of questions. Expanding the question set in future research could yield more generalized findings and offer deeper insights into the applicability of each principle.
http://arxiv.org/abs/2312.16171v1
{ "authors": [ "Sondos Mahmoud Bsharat", "Aidar Myrzakhan", "Zhiqiang Shen" ], "categories": [ "cs.CL", "cs.AI" ], "primary_category": "cs.CL", "published": "20231226185933", "title": "Principled Instructions Are All You Need for Questioning LLaMA-1/2, GPT-3.5/4" }
Random variables X^i, i=1,2 are `probabilistically equivalent' if they have the same law. Moreover, in any class of equivalent random variables it is easy to select canonical representatives. The corresponding questions are more involved for processes X^i on filtered stochastic bases (Ω^i, ℱ^i, ℙ^i, (ℱ^i_t)_t∈[0,1]). Here equivalence in law does not capture relevant properties of processes such as the solutions to stochastic control or multistage decision problems. This motivates Aldous to introduce the stronger notion of synonymity based on prediction processes. Stronger still, Hoover–Keisler formalize what it means that X^i, i=1,2 have the same probabilistic properties. We establish that canonical representatives of the Hoover–Keisler equivalence classes are given precisely by the set of all Markov-martingale laws on a specific nested path space 𝖬_∞. As a consequence we obtain that, modulo Hoover–Keisler equivalence, the class of stochastic processes forms a Polish space. On this space, processes are topologically close iff they model similar probabilistic phenomena. In particular this means that their laws as well as the information encoded in the respective filtrations are similar. Importantly, compact sets of processes admit a Prohorov-type characterization. We also obtain that for every stochastic process, defined on some abstract basis, there exists a process with identical probabilistic properties which is defined on a standard Borel space.

§ INTRODUCTION

§.§ Outline

In this article we seek to identify a natural topological structure on the collection of all stochastic processes. To this end, we want to identify stochastic processes which model the same probabilistic phenomenon and we would like to consider processes as close if they have similar probabilistic properties. For random variables, this usually means that they are regarded as equivalent if their laws are equal, and proximity is expressed in the weak topology. For stochastic processes, this approach is not fine enough, because we lose the information that the filtration carries; e.g., whether a process is a martingale, or the value of a stochastic control problem for a given process, are not properties of the law of the process alone. We refer to the work of Aldous <cit.> for a broader discussion of this point. Building on the work of Hoover–Keisler <cit.> we define the adapted distribution of a stochastic process (see Subsection <ref>), which captures the law of a stochastic process together with the information structure inherent in its filtration. The adapted distributions which arise in this way are the martingale laws on a specific iterated path space 𝖬_∞ (see (<ref>)) and provide canonical representatives of stochastic processes. Stochastic processes then have similar properties if and only if their adapted distributions are close w.r.t. the weak topology on 𝒫(𝖬_∞). That is, weak convergence of the adapted distributions induces an adapted weak topology for stochastic processes.
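The following discrete-time toy computation (an illustration added here for concreteness, not part of the formal development) shows why the plain law is too coarse: two payoff processes with identical laws but different filtrations yield different optimal stopping values.

import numpy as np

# Payoff process over two periods: Z_1 = 0.5 always; Z_2 = 1 on heads, 0 on tails.
# Its law is the same under both information structures below.
coins = np.array([0.0, 1.0])  # fair coin outcomes, equally likely

# (i) Natural filtration: at time 1 the coin is still unknown, so a stopping
#     rule is a single unconditional choice between stopping and continuing.
value_natural = max(0.5, coins.mean())  # max(0.5, 0.5) = 0.5

# (ii) Enlarged filtration: the coin is already visible at time 1, so the rule
#      may depend on it (continue on heads, stop on tails).
value_enlarged = np.mean([max(0.5, c) for c in coins])  # 0.75

print(value_natural, value_enlarged)

Any reasonable topology on processes should therefore separate these two models, even though their laws coincide.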
Notably, this topology satisfies an extended version of Prohorov's theorem: A family of stochastic processes is precompact if and only if the corresponding family of laws is tight. In particular, the set of all processes with one fixed law is compact. We note that in discrete time there is recently substantial interest in adapted weak topologies and adapted versions of Wasserstein distances from different communities, see Section 1.8 below. A main aim of this article is to provide the foundations for analogous developments also in continuous time. In particular, the setup we build here is used in the accompanying article <cit.> to construct an adapted Wasserstein distance between stochastic processes. It is shown there that the space of all filtered processes is precisely the completion of the class of processes equipped with their natural filtration. The adapted Wasserstein distance yields stability of optimal stopping problems. In fact, on processes with natural filtration, the adapted weak topology is the weakest topology which makes optimal stopping continuous.

§.§ The iterated prediction process

As noted above, it is important for our purposes that we consider stochastic processes together with their filtrations: An adapted filtered process is a 5-tuple

X = (Ω, ℱ, ℙ, (ℱ_t)_t ∈ [0,1], X)

consisting of a probability space (Ω, ℱ, ℙ), a filtration (ℱ_t)_t ∈ [0,1] that satisfies the usual conditions, and a process X with càdlàg paths in D([0,1]; ℝ^d) =: 𝖬_0. The collection of all filtered processes is denoted by 𝖥𝖯.

The concept of the prediction process goes back to Knight <cit.> and is used by Aldous <cit.> to encode the information which a filtration provides on a stochastic process (beyond the law of the process). The prediction process ℘^1(X) of a filtered process X is defined via

℘^1(X) = (℘^1_t(X))_t ∈ [0,1] = (ℒ(X | ℱ_t))_t ∈ [0,1] ∈ D([0,1]; 𝒫(𝖬_0)) =: 𝖬_1.

Aldous called processes X, Y synonymous if ℒ(℘^1(X)) = ℒ(℘^1(Y)). This relation is capable of capturing important probabilistic properties such as being Markov or being a martingale, i.e. if X and Y are synonymous, then X is a martingale if and only if Y is a martingale, etc. Yet, this equivalence relation turns out to be not strong enough to preserve other properties of interest. For instance, there are synonymous processes X, Y that yield different values in optimal stopping problems (see <cit.>). In order to capture all probabilistic properties, Hoover–Keisler repeat the Aldous–Knight construction and introduce iterated prediction processes

℘^n+1(X) = (ℒ(℘^n(X) | ℱ_t))_t ∈ [0,1] ∈ D([0,1]; 𝒫(𝖬_n)) =: 𝖬_n+1.

The prediction process of order ∞ is the vector-valued process that contains all n-th order prediction processes, i.e.

℘^∞_t(X) := (℘^1_t(X), ℘^2_t(X), …), so that ℘^∞(X) ∈ D([0,1]; ∏_n=0^∞ 𝒫(𝖬_n)) =: 𝖬_∞.

§.§ Adapted distributions

We call ℒ(℘^∞(X)) the adapted distribution of the filtered process X. Not all probability measures on 𝖬_∞ arise as adapted distributions; in fact:

* The process ℘^∞(X) is a martingale in the sense that all of its components are (measure-valued) martingales.
* Writing e_t(f) := f(t) for evaluation at t ∈ [0,1] and # for the push-forward of measures, adaptedness of X implies for all n ∈ ℕ

e_t#℘^n_t(X) = e_t#ℒ(℘^n-1(X) | ℱ_t) = ℒ(℘^n-1_t(X) | ℱ_t) = δ_℘^n-1_t(X).

These two properties already characterize adapted distributions among all probabilities on 𝖬_∞. To formalize this, we write Z = (Z_t)_t ∈ [0,1] for the canonical process on 𝖬_∞. That is, for every t ∈ [0,1], Z_t is a (countable) vector of measures Z_t = (Z^n_t)_n ≥ 1, Z^n_t ∈ 𝒫(𝖬_n-1).
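Before formalizing these conditions, a small discrete-time computation (an illustrative aside, not part of the formal development) shows the martingale property of the first-order prediction process: averaging the time-1 conditional laws recovers the time-0 law.

from fractions import Fraction as F

# Two-period walk: X_1 = +/-1 fair, X_2 = X_1 +/- 1 fair; a path is (x1, x2).
paths = [(a, a + b) for a in (-1, 1) for b in (-1, 1)]
law = {p: F(1, 4) for p in paths}  # time-0 prediction: the law of X itself

# Time-1 prediction: conditional law of the whole path given X_1 = a.
pred1 = {a: {p: F(1, 2) for p in paths if p[0] == a} for a in (-1, 1)}

# Martingale property: mixing the time-1 predictions over P(X_1 = a) = 1/2
# returns the time-0 prediction.
mixture = {p: sum(F(1, 2) * q.get(p, F(0)) for q in pred1.values()) for p in paths}
assert mixture == law

# At the terminal time, given full information F_1 = sigma(X), the prediction
# is the Dirac measure at the realized path, the discrete counterpart of the
# consistent-termination condition (<ref>) above.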
We call μ ∈ 𝒫(𝖬_∞) a martingale measure if under μ, for every n ≥ 1, the component process Z^n := (Z^n_t)_t ∈ [0,1] is a 𝒫(𝖬_n-1)-valued martingale w.r.t. the filtration generated by the vector-valued process Z. We say that Z is consistently terminating if

e_t#Z^n_t = δ_Z^n-1_t for all t ∈ [0,1] and n ≥ 1.

Note that this reflects precisely condition (<ref>) above.

A probability μ ∈ 𝒫(𝖬_∞) is an adapted distribution if and only if it is a consistently terminating martingale measure.

The interesting implication in Theorem <ref> is to show that every consistently terminating martingale measure μ is the adapted distribution of a filtered process. To build this process we attach a further coordinate Z^0_t to the process (Z^1_t, Z^2_t, …). For this, fix a Borel map δ^-1 : 𝒫(D([0,1]; ℝ^d)) → D([0,1]; ℝ^d) which satisfies δ^-1(δ_x) = x for x ∈ D([0,1]; ℝ^d) and is arbitrary otherwise, cf. Remark <ref>. We then set

Z^0 := (Z^0_t)_t ∈ [0,1] := δ^-1(Z^1_1).

This leads to the following `canonical' construction of the desired filtered process: For a consistently terminating martingale measure μ ∈ 𝒫(𝖬_∞) consider the filtered process

X^μ = (𝖬_∞, ℱ, μ, (ℱ_t)_t ∈ [0,1], Z^0),

where (ℱ_t)_t ∈ [0,1] is the right-continuous augmentation w.r.t. μ of the filtration generated by the process Z on 𝖬_∞ and ℱ := ℱ_1. Then the adapted distribution of X^μ satisfies ℒ(℘^∞(X^μ)) = μ.

§.§ Stochastic processes as a Polish space

Hoover–Keisler define the notion of an adapted function of a stochastic process and argue that two stochastic processes have the same probabilistic properties if and only if all adapted functions take the same value on these processes. Furthermore, this happens if and only if these processes have the same adapted distribution; see Section <ref> for details. We thus identify processes based on having the same adapted distribution: Filtered processes X, Y are Hoover–Keisler-equivalent, in symbols X ≈_∞ Y, if and only if ℒ(℘^∞(X)) = ℒ(℘^∞(Y)). The space of filtered processes is the factor space FP := 𝖥𝖯/≈_∞. A sequence (X^n)_n in FP converges to X ∈ FP in the adapted weak topology if the corresponding sequence of adapted distributions converges weakly on 𝒫(𝖬_∞).

By Theorem <ref>, the adapted distribution μ of a filtered process X provides a canonical representative X^μ ≈ X. Note that as 𝖬_∞ is standard Borel, this implies that up to equivalence, every process can be assumed to be supported by a standard Borel space. Besides Theorem <ref>, a main result of this article is:

The space FP equipped with adapted weak convergence is Polish.

Our proof of Theorem <ref> (implicitly) yields a complete metric on FP. However, building on the present work, we provide in the parallel paper <cit.> an adapted Wasserstein distance which metrizes the adapted weak topology and which seems more convenient in view of applications. An important ingredient in the proof of Theorem <ref> is the following extension of Prohorov's theorem, which appears interesting in its own right.

A collection of filtered processes is relatively compact in the adapted weak topology if and only if the respective collection of laws is relatively compact.

We emphasize that Theorem <ref> was in essence already established by Hoover <cit.> based on repeated applications of the compactness result of Meyer–Zheng <cit.> to the iterated prediction processes. The difference to Theorem <ref> is that Hoover <cit.> does not consider stochastic processes in the classical sense; in particular, no adaptedness constraint is present in his setting. For simplicity, the results in this section are stated for processes with values in ℝ^d.
We want to emphasize that most of our constructions are independent of the specific choice of the path space.

§.§ Related literature

This article is most closely related to the works of Aldous <cit.>, which uses the prediction process to define synonymity and closeness of stochastic processes, Hoover–Keisler <cit.>, where the iterated prediction process is used to define a refined notion of equivalence of stochastic processes, and Hoover <cit.>, where this iterated prediction process is also used to define a mode of convergence of stochastic processes (or more precisely, random variables with filtration). We note, however, that the idea to extend the weak topology or to define metrics that account for the flow of information has arisen independently in different communities. Specifically, different groups of authors have introduced `adapted' variants of the Wasserstein distance in discrete time; this includes the works of Vershik <cit.>, Rüschendorf <cit.>, Gigli <cit.>, Pflug–Pichler <cit.> and Nielsen–Sun <cit.>. Another distance which accounts for the order of time is the Knothe–Rosenblatt distance <cit.>. Further adapted extensions of the weak topology were introduced by Hellwig in economics <cit.> and, using higher rank signatures, by Bonnier, Liu, and Oberhauser <cit.>. Remarkably, when considering processes in finite discrete time with their natural filtration, all of these approaches define the same adapted weak topology, see <cit.>. Very much in the spirit of these results, we will establish in the parallel paper <cit.> that the adapted weak topology in continuous time defined in this paper likewise admits a number of different characterizations when restricting to processes with their natural filtration. In particular, we construct in <cit.> a Wasserstein-type metric for the adapted weak topology. We note, however, that in continuous time several different adapted Wasserstein distances have been considered (typically for continuous semimartingales), see <cit.>. In these articles the focus is on specific applications; the respective Wasserstein distances are considered on smaller subclasses of processes (continuous (semi-)martingales or diffusions) and induce significantly finer modes of convergence, see <cit.> for more details. Adapted topologies and adapted transport theory have been utilized in domains such as geometric inequalities (e.g. <cit.>), stochastic optimization and multistage programming (e.g. <cit.>), mathematical finance (e.g. <cit.>), and machine learning (e.g. <cit.>). We refer to <cit.> for the estimation of 𝒜𝒲_p from statistical data, and to <cit.> for efficient numerical methods for adapted transport problems.

§.§ Structure of the paper

Section <ref> provides background on probability measures on Lusin spaces and paths whose values are probability measures on Lusin spaces. These objects arise naturally in the construction of the nested spaces 𝖬_n, n ∈ ℕ ∪ {∞}. The reader who is not interested in these technical details may skip Sections <ref> and <ref>. Section <ref> concerns measure-valued martingales on Lusin spaces. We generalize the compactness result for real-valued martingales from Meyer–Zheng <cit.> to measure-valued martingales and strengthen their results on convergence of finite dimensional distributions.
The results which are needed further on in this article are summarized in Section <ref>. On a first read the reader may skip Sections <ref> and <ref>, which contain the proofs of these results. In Section <ref> we introduce the concept of filtered random variables, a generalization of filtered processes, which was first considered by Hoover–Keisler <cit.> and is technically more convenient for this article. Moreover, we discuss the construction of the iterated prediction process in more detail. Section <ref> is about the space of filtered processes (filtered random variables) modulo Hoover–Keisler equivalence. We introduce the adapted weak topology on this space, prove that it is Polish and that it admits a Prohorov-type compactness criterion. In Section <ref> we introduce an operator that discretizes filtered random variables in time. This operator is useful to connect the continuous-time framework to the discrete-time framework that was studied in <cit.>. Section <ref> concerns the concept of adapted functions. This concept makes precise that Hoover–Keisler equivalence can be interpreted as `having the same probabilistic properties'. In Section <ref> we translate results on filtered random variables to the framework of filtered processes; in particular we prove the theorems stated in the introduction.

§ PRELIMINARIES

§.§ Notations and Conventions

Throughout this paper S, S_1, S_2, … are Lusin spaces.[A Lusin space is a topological space that is homeomorphic to a Borel subset of a Polish space. We use Lusin spaces because the space of càdlàg functions with convergence in measure is not a Polish space, but merely Lusin. Further details are explained in Section <ref>.] If we need stronger assumptions on the spaces (e.g. Polish, compact metrizable) we will always state it explicitly. We denote compatible metrics by d, d_1, d_2, … and always assume that they are complete if the respective space is Polish. Following the ideas of Meyer–Zheng <cit.> we equip [0,1] with the measure λ = 1/2(Leb + δ_1), where Leb is the Lebesgue measure on [0,1]. We call the topology of convergence in measure w.r.t. λ the Meyer–Zheng topology. C_b(S) denotes the set of continuous bounded functions S → ℝ. Let S, S_1, S_2 be Lusin spaces.

* Subspaces are always endowed with the subspace topology.
* Product spaces S_1 × S_2 are always equipped with the product topology.
* The space of probability measures on S is denoted by 𝒫(S) and is always equipped with the weak topology (testing against continuous bounded functions).
* The space of Borel functions [0,1] → S modulo λ-a.s. equality is denoted by L_0(S) and equipped with the Meyer–Zheng topology.
* The space of càdlàg functions [0,1] → S is denoted by D(S). By minor abuse of notation, we consider it as a subspace of L_0(S) and call f ∈ L_0(S) càdlàg if it admits a representative in D(S). In particular, D(S) is equipped with the subspace topology inherited from the Meyer–Zheng topology.

Let f : S → ℝ be bounded and Borel. Then we define the mapping

f^∗ : 𝒫(S) → ℝ : μ ↦ ∫ f dμ.

Note that by definition the weak topology is the initial topology w.r.t. the family {f^∗ : f ∈ C_b(S)}; in particular, f^∗ is continuous if f is continuous. If f : S_1 → S_2 is a Borel measurable mapping and μ ∈ 𝒫(S_1), we denote the pushforward of μ under f by f_#μ.
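As a toy numerical illustration (not part of the paper's formal development), the pushforward of an empirical measure and the change-of-variables identity ∫ g d(f_#μ) = ∫ (g∘f) dμ can be checked directly:

import numpy as np

def pushforward(f, atoms, weights):
    # For mu = sum_i weights[i] * delta_{atoms[i]}, the pushforward f_# mu
    # places the same weights on the image points f(atoms[i]).
    return np.array([f(x) for x in atoms]), weights

atoms, weights = np.array([0.0, 1.0, 2.0]), np.full(3, 1 / 3)  # uniform on {0, 1, 2}
img_atoms, img_weights = pushforward(lambda x: x ** 2, atoms, weights)

g = lambda y: y + 1.0
lhs = np.sum(img_weights * g(img_atoms))  # integral of g against f_# mu
rhs = np.sum(weights * g(atoms ** 2))     # integral of g(f(.)) against mu
assert np.isclose(lhs, rhs)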
We introduce a notation for the pushforward map

𝒫(f) : 𝒫(S_1) → 𝒫(S_2) : μ ↦ f_#μ.

At first glance, this might seem like a notational excess, but it will help to keep track when dealing with nested spaces (e.g. probability measures on probability measures) later in this paper. It is convenient to note that 𝒫(f ∘ g) = 𝒫(f) ∘ 𝒫(g) and 𝒫(𝕀_S) = 𝕀_𝒫(S) (i.e. the operation 𝒫 is a covariant functor in the category of Polish spaces). The operation 𝒫 preserves many properties of spaces and functions:

* S ↦ 𝒫(S) preserves the properties Lusin, Polish and compact metrizable.
* f ↦ 𝒫(f) preserves the properties Borel measurable, continuous, injective, surjective, bijective, topological embedding and homeomorphism.

We denote the mapping which maps s to the Dirac measure at s by δ, i.e.

δ : S → 𝒫(S) : s ↦ δ_s.

Note that δ is a topological embedding and its range δ(S) := {δ_s : s ∈ S} is closed in 𝒫(S). We will often use its inverse

δ^-1 : δ(S) → S : δ_s ↦ s.

For t ∈ [0,1] we introduce the map

e_t : D(S) → S : f ↦ f(t).

First we need to argue that e_t is well-defined, because we consider D(S) as a subset of L_0(S), which formally consists of equivalence classes of functions w.r.t. λ-a.s. equality. For t < 1 this is guaranteed by the right continuity of the functions. This argument fails for the endpoint t = 1, but this causes no problem because we adopted the convention that λ puts positive mass at the endpoint t = 1. It is important to keep in mind that e_t is not continuous for t < 1, but it is continuous for t = 1. If T is a finite set, i.e. T = {t_1, …, t_N} with t_1 < … < t_N, we denote e_T : D(S) → S^N : f ↦ (f(t_1), …, f(t_N)). The σ-algebra generated by the random variable X augmented with ℙ-null sets is denoted by σ^ℙ(X). If X^n → X in law, we either write ℒ(X^n) → ℒ(X) or X^n ⇒ X. Throughout this paper ℕ denotes the set of natural numbers without 0 and ℕ_0 the set of natural numbers including 0.

§.§ Probability measures on Polish and Lusin spaces

Recall that a Polish space is a separable completely metrizable space. Polish spaces provide a convenient framework for measure theory; however, they are not sufficiently general for the purposes of this paper: The space of càdlàg functions equipped with the Meyer–Zheng topology is not Polish, but only a Borel subset of a Polish space, see <cit.>. Clearly, this also applies to the spaces 𝖬_n, 1 ≤ n ≤ ∞, that have been introduced in (<ref>) and are crucial throughout this article. Therefore, we have to deal with the class of topological spaces that are homeomorphic to a Borel subset of a Polish space: A topological space S is called a Lusin space if there is a Polish space S' and a topological embedding ι : S → S' such that ι(S) is a Borel subset of S'. As Polish spaces are separable metrizable and this property carries over to subspaces, Lusin spaces are separable metrizable as well. The proof of the following result is given in Appendix <ref>:

* A metrizable space (S, τ) is Lusin if and only if there is a stronger topology τ' ⊃ τ such that (S, τ') is a Polish space.
* Let (S, τ) be a Lusin space and A ⊂ S. Then A is Borel if and only if there is a Polish topology τ' on A that is stronger than the subspace topology τ|_A.

There are two non-equivalent definitions in the literature. The definition that we use follows Bourbaki <cit.> and Dellacherie–Meyer <cit.>. Many authors (e.g. <cit.>) use the following definition: A Hausdorff space S is called Lusin if it is the continuous bijective image of a Polish space (or equivalently, if there is a stronger Polish topology on S itself). Proposition <ref> implies that this is weaker than the Definition <ref> that we use. Indeed, it is strictly weaker: A separable infinite-dimensional Banach space X equipped with the weak-∗ topology is Hausdorff and the norm topology is a stronger Polish topology on X; however, X with the weak-∗ topology is not Lusin according to Definition <ref> because the weak-∗ topology is not metrizable.
Proposition <ref> implies that this is weaker than the Definition <ref> that we use. Indeed, it is strictly weaker: A separable infinite dimensional Banach space X equipped with the weak-∗-topology is Hausdorff and the norm topology is a stronger Polish topology on X, however, X with theweak-∗-topology is not Lusin according to Definition <ref> because theweak-∗-topology is not metrizable. We work with Definition <ref> because it is sufficiently general for our purposes (the space offunctions is Lusin according to that definition, see Proposition <ref> below) and we want to restrict to separable metric spaces in this paper because we want to work with a countable family of convergence determining functions, see Definition <ref> below.It is well known that separable metric spaces can be embedded into the Hilbert cube [0,1]^, see e.g. <cit.>. Polish spaces are (up to homeomorphism) the G_δ-subsets of [0,1]^ and Lusin spaces the Borel subsets of [0,1]^.Next, we introduce the notion of convergence determining families: A family (ϕ_j)_j ∈ J of [0,1]-valued functions on S is called * point separating, if for x,y ∈ S we have x=y ∀ j ∈ J : ϕ_j(x) = ϕ_j(y); * convergence determining, if for every sequence (x_n)_n ∈ in S and x ∈ S we have x_n → x ∀ j ∈ J ϕ_j(x_n) →ϕ_j(x). Note that (f_j)_j ∈ J is convergence determining if and only if the product map∏_j ∈ J f_j : S → [0,1]^J : s ↦ (f_j(s))_j ∈ J is a topological embedding. Conversely, if S can be embedded into a cube [0,1]^J, the projections are a convergence determining family.So, working with convergence determining functions is equivalent to using an embedding into the cube [0,1]^J, but notionally more convenient for our purposes. In particular, we have Let S be a separable metric space. Then there is a countable convergence determining family of functions on S that is closed under multiplication. Next, we discuss how point separating[convergence determining] families on S relate to point separating[convergence determining] families on (S). The following useful results can be found in <cit.>. Let S be a Lusin space and {ϕ_j : j ∈ J} be a family of [0,1]-valued functions on S that are closed under multiplication. Then we have: * If {ϕ_j : j ∈ J} is point separating and consists of continuous functions, then {ϕ^∗_j : j ∈ J} is point separating on (S). * If {ϕ_j : j ∈ J} is point separating, consists of Borel functions and J is countable, then {ϕ^∗_j : j ∈ J} is point separating on (S). * If {ϕ_j : j ∈ J} is convergence determining, then {ϕ^∗_j : j ∈ J} is convergence determining on (S). Next, we discuss compactness in (S). Recall that ⊂(S) is called tight, if for allϵ >0 there is a compact set K_ϵ⊂ S such that μ(K_ϵ) ≥ 1-ϵ for all μ∈. If S is a Polish space, Prohorov's theorem characterizes the compact subsets of (S) via tightness: Let S be a Polish space and ⊂(S). Thenis relatively compact if and only if it is tight.In the case that S is merely Lusin, the situation gets a bit more complicated because compact subsets of (S) are in general not tight (see <cit.> for an example in the case S =). We still have the following results (see <cit.>) Let S be a Lusin space. Then we have: * Tight subsets of (S) are relatively compact. * Convergent sequences in (S) are tight. 
Note that assertion (b) does not imply that all countable relatively compact sets in 𝒫(S) are tight. The intensity operator will be crucial throughout this paper: The intensity operator I : 𝒫(𝒫(S)) → 𝒫(S) is defined as the unique mapping that satisfies, for all Borel measurable f : S → [0,1],

∫ f(s) I(P)(ds) = ∬ f(s) q(ds) P(dq).

An equivalent way to write (<ref>) is

f^∗(I(P)) = ∫ f^∗(q) P(dq).

We see that I is continuous by applying (<ref>) to all f ∈ C_b(S). Consider the function δ : 𝒫(S) → δ(𝒫(S)) : μ ↦ δ_μ. Then we have I(δ_μ) = μ, so the intensity operator I : 𝒫(𝒫(S)) → 𝒫(S) is a continuous extension of δ^-1 from δ(𝒫(S)) to the whole space 𝒫(𝒫(S)). Of course, I is not the inverse of δ; it is merely a left inverse. If S is (a convex subset of) a separable Banach space, then 𝒫(S) → S : μ ↦ ∫ x μ(dx) is a continuous left inverse of δ. In general there is not always a continuous left inverse of the function δ : S → 𝒫(S). Indeed, a left inverse of δ : S → 𝒫(S) is a continuous surjective mapping 𝒫(S) → S. As 𝒫(S) is connected, this can only exist if S is connected as well. In particular, if S is discrete, δ : S → 𝒫(S) has no continuous left inverse.

For 𝒜 ⊂ 𝒫(𝒫(S)) we have the following implications:

(i) 𝒜 is tight                  ⇐  (ii) I(𝒜) is tight
        ⇓                                   ⇓
(iii) 𝒜 is relatively compact  ⇔  (iv) I(𝒜) is relatively compact

If S is Polish, all of these statements are equivalent.

We start with the case that S is Lusin. (i)⇒(iii) and (ii)⇒(iv) are immediate from Theorem <ref>. (ii)⇒(i) can be found in <cit.>. The claim there is for Polish spaces, but this implication did not use Polishness. (iii)⇒(iv) is immediate, as I is continuous and continuous images of relatively compact sets are relatively compact. (iv)⇒(iii): Let (μ^n)_n be a sequence in 𝒜. We need to show that it has a convergent subsequence. As (I(μ^n))_n is a sequence in the relatively compact set I(𝒜), there is a subsequence (I(μ^n_k))_k converging to some ν ∈ 𝒫(S). By Theorem <ref>(b), the sequence (I(μ^n_k))_k is tight. By the already proven implication (ii)⇒(i), the sequence (μ^n_k)_k is tight as well and therefore relatively compact by Theorem <ref>(a). So (μ^n_k)_k, and therefore (μ^n)_n, has a convergent subsequence. In the case that S is Polish, we additionally get (iii)⇒(i) and (iv)⇒(ii) from Prohorov's theorem. Then we have proven enough implications to conclude that all statements are equivalent.

Let S_n, n ∈ ℕ, be Lusin spaces and S := ∏_n ∈ ℕ S_n. Then we have:

* A sequence (μ^m)_m in 𝒫(S) converges to μ in 𝒫(S) if and only if (pr_1, …, pr_n)_#μ^m → (pr_1, …, pr_n)_#μ for all n ∈ ℕ.
* 𝒜 ⊂ 𝒫(S) is relatively compact if and only if 𝒫(pr_n)(𝒜) is relatively compact in 𝒫(S_n) for all n ∈ ℕ.

For the sake of completeness, a proof is given in the appendix. While (a) is trivial, (b) is not, because it relies on Prohorov's theorem and we need to be careful there in non-Polish spaces. Let 𝒫_Γ(S_1 × S_2) denote the set of μ ∈ 𝒫(S_1 × S_2) that are concentrated on the graph of a Borel function from S_1 to S_2. In <cit.>, it was shown that 𝒫_Γ(S_1 × S_2) is a G_δ-subset of 𝒫(S_1 × S_2) if S_1, S_2 are Polish. As Lusin spaces are Borel subsets of Polish spaces and Borel functions defined on a Borel subset can always be extended to the entire space, this assertion readily extends to Lusin spaces S_1, S_2.

§.§ Pseudo-paths with values in Lusin spaces

The construction of Hoover–Keisler is based on processes with càdlàg paths in nested probability spaces, whose path spaces are equipped with the pseudo-path topology of Meyer–Zheng <cit.>.
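As a concrete warm-up (an illustrative sketch under simplifying assumptions, not part of the formal development), the identification of a path with a measure on [0,1] × S can be pictured by Monte Carlo sampling from λ = 1/2(Leb + δ_1):

import numpy as np

rng = np.random.default_rng(0)

def sample_lambda(n):
    # lambda = (Leb + delta_1) / 2: a fair coin picks either a uniform
    # time in [0, 1] or the endpoint t = 1.
    u = rng.uniform(size=n)
    return np.where(rng.uniform(size=n) < 0.5, u, 1.0)

def f(t):
    # A cadlag step path: value 0 before time 1/2, value 1 from 1/2 on.
    return np.where(t < 0.5, 0.0, 1.0)

# Samples from the pseudo-path (id, f)_# lambda, a measure on [0,1] x R
# concentrated on the graph of f with first marginal lambda:
ts = sample_lambda(100_000)
graph = np.stack([ts, f(ts)], axis=1)
# Mass carried by the post-jump piece: lambda([1/2, 1]) = 1/4 + 1/2 = 0.75.
print(graph[:, 1].mean())  # approximately 0.75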
An S-valued pseudo-path is a probability measure on [0,1] × S that is concentrated on the graph of a function and has first marginal λ. We denote the set of all S-valued pseudo-paths by Ψ(S). A probability measure μ ∈ 𝒫([0,1] × S) with first marginal λ is a pseudo-path if and only if there is a measurable function f : [0,1] → S such that μ_t = δ_f(t) λ-a.s., where (μ_t)_t is a disintegration of μ. The mapping

ι_S : L_0(S) → 𝒫([0,1] × S) : f ↦ (𝕀, f)_#λ

is injective by the a.s. uniqueness of the disintegration and its range is Ψ(S). A pseudo-path μ is called a càdlàg pseudo-path if there is a càdlàg function f such that μ = ι_S(f). We denote the set of S-valued càdlàg pseudo-paths by Ψ_D(S), i.e. Ψ_D(S) = ι_S(D(S)). The main goal of this section is to prove the following proposition:

* The mapping ι_S defined in (<ref>) is a topological embedding with range Ψ(S). In particular, Ψ(S) is homeomorphic to L_0(S).
* Ψ(S) is a G_δ-subset of 𝒫([0,1] × S). In particular, if S is Polish, then Ψ(S) is Polish; if S is Lusin, then Ψ(S) is Lusin.
* D(S) is a Borel subset of L_0(S). In particular, if S is Lusin, then D(S) is Lusin.

Proposition <ref> implies that we will no longer have to distinguish between L_0-functions and pseudo-paths. We will choose the point of view depending on what is technically more convenient in the respective context. For the rest of this section, which is dedicated to the proof of this equivalence, we will of course carefully distinguish between L_0-functions and pseudo-paths. The results of Proposition <ref> are well known from <cit.> in the case S = ℝ. However, the proof of <cit.> made use of the fact that the paths are real-valued (it used that weak L_2-convergence plus convergence of the norms implies strong L_2-convergence). We will generalize these results using convergence determining real-valued functions (see Definition <ref> below). This generalization is straightforward, but we do not skip it because it gives us the opportunity to introduce notations that we need later and to derive lemmas about convergence determining families that will be useful several times throughout this paper.

Given two Lusin spaces S_1, S_2 and a Borel function f : S_1 → S_2, one can consider the operation “compose with f” defined by

L_0(S_1) → L_0(S_2) : g ↦ f ∘ g.

If f is continuous, this operation maps càdlàg paths to càdlàg paths. Next, we define this operation in the framework of pseudo-paths. For technical reasons, we define this operation not only on pseudo-paths, but as an operation on the whole of 𝒫([0,1] × S_1) that acts on pseudo-paths as the composition. Given a Borel f : S_1 → S_2, we define

Ψ(f) := 𝒫((t,s) ↦ (t, f(s))) : 𝒫([0,1] × S_1) → 𝒫([0,1] × S_2).

Then Ψ(f) has the following properties:

* On a probability measure μ ∈ 𝒫([0,1] × S_1) with disintegration (μ_t)_t ∈ [0,1], the mapping Ψ(f) acts as Ψ(f)(μ)(dt, ds) = f_#μ_t(ds) λ(dt).
* Ψ(f) maps Ψ(S_1) to Ψ(S_2) and acts there as composition with f, i.e. Ψ(f)(ι_S_1(g)) = ι_S_2(f ∘ g).
* If f is continuous, Ψ(f) is continuous from 𝒫([0,1] × S_1) to 𝒫([0,1] × S_2) and it maps Ψ_D(S_1) to Ψ_D(S_2).
* If f is a topological embedding, Ψ(f) is a topological embedding of Ψ(S_1) into Ψ(S_2) and of Ψ_D(S_1) into Ψ_D(S_2).

Straightforward.

Let μ ∈ 𝒫([0,1] × S) with first marginal λ, let {ϕ_n : n ∈ ℕ} be a point separating family of Borel functions on S, and let {ψ_n : n ∈ ℕ} be a convergence determining family. Then we have:

* μ is a pseudo-path if and only if Ψ(ϕ_n)(μ) is a pseudo-path for all n ∈ ℕ.
* μ is a càdlàg pseudo-path if and only if Ψ(ψ_n)(μ) is a càdlàg pseudo-path for all n ∈ ℕ.
<ref> The forward implication is due to Proposition <ref> <ref>. For the backward implication, note that for all n ∈ there is a λ-full set A_n such that, for t ∈ A_n, ϕ_n_#μ_t = δ_x_n,t for some x_n,t∈ [0,1]. Then the set A := ⋂_n A_n has λ-full measure and we have ϕ_n_#μ_t({x_n,t}) = 1 for all t ∈ A. The set S_t := { x ∈ S : ϕ_n(x) = x_n,t for all n ∈} contains at most one point, as (ϕ_n)_n∈ is point separating. Moreover, we have μ_t(S_t) = 1 for all t ∈ A, so μ_t is a Dirac for all t ∈ A. <ref> Again, the direct implication is an easy consequence of Proposition <ref> <ref>. For the reverse implication, note that μ is a pseudo-path by point (a) of this lemma, so there is some f ∈ L_0(S) such that μ = ι_S(f). The assumption and Proposition <ref> <ref> imply that ψ_n ∘ f is càdlàg for all n ∈. As (ψ_n)_n is convergence determining, we can conclude that f is càdlàg. Before we prove the main result of this section, we recall some standard results about convergence in measure: Let, for n ∈, f_n, f ∈ L_0(S_1) and let ϕ : S_1 → S_2 be continuous. * (f_n)_n ∈ converges to f in probability if and only if every subsequence of (f_n)_n ∈ admits an a.s.-convergent subsequence with limit f. * If (f_n)_n ∈ converges to f in probability, then (ϕ∘ f_n)_n ∈ converges to ϕ∘ f in probability. See e.g. <cit.>. Let {ϕ_n : n ∈} be a convergence determining family on S and let f_k, f ∈ L_0(S), k ∈. Then (f_k)_k ∈ converges to f in probability if and only if, for all n ∈, (ϕ_n ∘ f_k)_k ∈ converges to ϕ_n ∘ f in probability. The forward implication follows from Lemma <ref> <ref>. The reverse implication follows from a standard diagonalization argument and Lemma <ref> <ref>. Finally, we are in the position to prove the main result of this section. <ref>: Since ι_S is a bijection onto Ψ(S), it remains to prove continuity of ι_S and of ι_S^-1 restricted to Ψ(S). In order to prove continuity of ι_S, let (f_n)_n ∈ be a convergent sequence in L_0(S) with limit f ∈ L_0(S) and let g ∈ C_b([0,1] × S). By Lemma <ref> <ref> we have that (g ∘ (𝕀,f_n))_n ∈ converges to g ∘ (𝕀,f) in probability. Using dominated convergence we obtain lim sup_n →∞ | ∫ g dι_S(f_n) - ∫ g dι_S(f) | ≤ lim_n →∞ ∫ |g(t,f_n(t)) - g(t,f(t))| λ(dt) = 0. As g ∈ C_b([0,1] × S) is arbitrary, we obtain that (ι_S(f_n))_n ∈ converges to ι_S(f) in ([0,1] × S). In order to prove the continuity of ι^-1_S|_Ψ(S), assume that (ι_S(f_n))_n ∈ converges weakly to ι_S(f). Pick a convergence determining family {ϕ_k : k ∈}. The map Ψ(ϕ_k) : ([0,1] × S) →([0,1] ×) is continuous (cf. Proposition <ref> <ref>), so using (<ref>) we get ι_(ϕ_k ∘ f_n) = Ψ(ϕ_k)(ι_S(f_n)) →Ψ(ϕ_k)(ι_S(f)) = ι_(ϕ_k ∘ f). By the corresponding real-valued result <cit.>, we have ϕ_k ∘ f_n →ϕ_k ∘ f in measure for all k ∈. Using Lemma <ref> we conclude that f_n → f in measure. <ref>: _Γ([0,1] × S) is G_δ, see Remark <ref>, and the set of μ∈([0,1] × S) with first marginal λ is closed and therefore G_δ. So, Ψ(S) is G_δ as the intersection of two G_δ-sets. <ref>: Let (ϕ_n)_n ∈ be a convergence determining family on S. Lemma <ref> implies Ψ_D(S) = ⋂_n ∈ {μ∈([0,1] × S) : Ψ(ϕ_n)(μ) ∈Ψ_D() }. By the corresponding real-valued result <cit.>, Ψ_D() is a Borel subset of ([0,1] ×). As Ψ(ϕ_n) is continuous for every n ∈, (<ref>) implies that Ψ_D(S) is Borel as a countable intersection of Borel sets. We conclude by <ref> and <ref> that D(S) is a Borel subset of L_0(S). Let J be a finite or countable set and let S_j, j ∈ J, be Lusin spaces. Then the mapping π : ∏_j ∈ J L_0(S_j) → L_0(∏_j ∈ J S_j) : (f_j)_j ∈ J ↦ ( t ↦ (f_j(t))_j ∈ J ) is a homeomorphism that maps ∏_j ∈ J D(S_j) onto D(∏_j ∈ J S_j).
This is a straightforward application of Lemma <ref> and Proposition <ref>.

§ MEASURE-VALUED MARTINGALES

Measure-valued martingales are central tools in the theories of Knight, Aldous and Hoover–Keisler to encode the information carried by the filtration w.r.t. a stochastic process. In this section we develop the theory of measure-valued martingales to the extent that we need for this article. The main results are Theorem <ref>, which implies that compactness properties are preserved through the process of iterating prediction processes, and Theorem <ref> on convergence of finite dimensional distributions of measure-valued martingales. We summarize all results on measure-valued martingales that we will need later in the article in Section <ref>, postponing all longer proofs to Sections <ref> and <ref>.

§.§ Definitions and main results

First, we fix some notation:[Throughout we use the following convention: Measure-valued random variables are denoted with Z, random variables with values in an arbitrary Polish space are denoted with X. Generally, identities involving X are true for all random variables (also for Z's), whereas identities formulated for Z's are only true (or the appearing expressions are only well-defined) for measure-valued random variables.] If Z is a random variable with values in (S), the expression I((Z)) plays the role of the expectation of Z and the expression I((Z|)) plays the role of the conditional expectation of Z given a sub-σ-algebra . In order to keep notation short and to get identities which look similar to the case of real-valued random variables, we introduce the following notations: [Z] := I((Z)), [Z|] := I((Z|)). Indeed, recalling (<ref>), we have for all bounded Borel f : S →: f^∗([Z]) = [f^∗(Z)], f^∗([Z|]) = [f^∗(Z)|]. Using these identities, the properties of the expectation and conditional expectation carry over to [·] and [·|]. Indeed, we have for all σ-algebras _1 ⊂_2 ⊂ and all bounded Borel f : S →: f^∗([[Z|_2]|_1]) = [f^∗([Z|_2])|_1] = [[f^∗(Z)|_2]|_1] = [f^∗(Z)|_1] = f^∗([Z|_1]), which implies [[Z|_2]|_1] = [Z|_1] if _1 ⊂_2, and [[Z|]] = [Z]. In view of these considerations, the following is the natural definition of a measure-valued martingale: A measure-valued martingale on the stochastic base (Ω, , , (_t)_t ∈ [0,1]) is a process Z = (Z_t)_t ∈ [0,1] with values in (S) that satisfies 𝖤[Z_t|_s] = Z_s for all s ≤ t. We denote by ℳ(S) ⊂(D((S))) the set of probability measures on D((S)) such that the canonical process on D((S)) is a measure-valued martingale in its own filtration. This space carries the topology as a subspace of (D((S))) according to Convention <ref>. We want to emphasize that in this paper the term “measure-valued martingale” always refers to martingales whose values are probability measures; we will never deal with the more general case of martingales whose values are positive measures of different mass or signed measures. We will always denote the canonical process on D((S)) by Z = (Z_t)_t ∈ [0,1]. Given a Borel function f : S →, we can assign to every measure-valued process Z the real-valued process Z[f] given by Z[f]_t := ∫ f(s) Z_t(ds) = f^∗(Z_t). Throughout this section we will use the processes Z[f] to investigate Z. For example, our previous considerations imply that 𝖤[Z_t|_s] = Z_s if and only if [f^∗(Z_t)|_s] = f^∗(Z_s) for all bounded Borel f. So, we can characterize measure-valued martingales via these associated real-valued processes: A measure-valued process Z is a measure-valued martingale w.r.t.
(_t)_t ∈ [0,1] if and only if Z[f] is an (_t)_t ∈ [0,1]-martingale for all bounded Borel f : S →.In fact it is sufficient to test against continuous bounded functions or any other class of point separating functions. One has to read Lemma <ref> carefully: Let Z be a measure-valued process and (_t)_t ∈ [0,1] the filtration generated by Z.Applying Lemma <ref> to Z and (_t)_t ∈ [0,1] yields that Z is a measure-valued martingale in its own filtration if and only if Z[f] is a martingale w.r.t. the filtration generated by Z for all f : S → bounded Borel. It is not true that Z is a measure-valued martingale in its own filtration if and only if all processes Z[f] are martingales in their own filtrations.[In fact, the respective claim is already false in ^2: One can construct a vector-valued process (X_t)_t ∈ [0,1], which is not a martingale, but has the property that for all v ∈^2 the process (v · X_t)_t ∈ [0,1] is a martingale in its own filtration. Note that martingales with values in the unit-simplex { x ∈^n: x^1,…, x^n ≥ 0,x^1+… +x^n ≤ 1 } correspond to measure-valued martingales, where S has cardinality n+1. ]The proof of the following result is given in Section <ref> A measure-valued martingale has amodification if and only if it is right continuous in probability. In particular, every measure-valued martingale w.r.t. a filtration satisfying the usual conditions has amodification. If amodification exists, it is unique up to indistinguishability. In particular, any measure-valued martingale Z that is right continuous in probability can be seen as a random variable with values in D((S)). Its law (Z) is an element of ℳ(S) ⊂(D((S))).Recalling the definition of the evaluation map e_1(f):=f(1) from Remark <ref>, we can define the mappingΦ: ℳ(S) →(S) : μ↦ I(e_1_#μ) .If μ = (Z) then Φ(μ) = 𝖤[Z_1]. Via Φ we can characterize compactness in ℳ(S): ⊂ℳ(S) is relatively compact in (D((S))) if and only if Φ() is relatively compact in (S).Put differently, a collectionof measure-valued martingales { Z^j : j ∈ J } isrelatively compact in law (i.e. {(Z^j) : j ∈ J } isrelatively compact in ℳ(S)) if and only if {𝖤[Z^j _1] : j ∈ J } is relatively compact subset of (S). The proof of this theorem is given at the end of Section <ref>. The operation S ↦ℳ(S) preserves the properties Lusin, Polish and compact metrizable. The claim for Lusin spaces is already known from Proposition <ref><ref>. For compact metrizable spaces it is a direct consequence of Theorem  <ref>. If S is Polish, the mapping Φ: ℳ(S) →(S) : μ↦ I(e_1_#μ) is a continuous mapping into a Polish space. Theorem <ref>states that Φ-preimages of compact sets are compact. Hence, we are precisely in the setting of Lemma <ref> from the appendix and conclude that ℳ(S) is Polish. Continuity points. Continuity points of (measure-valued) martingales will play a central role below because they allow us to overcome the issue that point evaluation is not continuous w.r.t. the Meyer–Zheng topology. First we give the definition in a slightly more general framework:Let X be an S-valued process. The set of continuity points of X, denoted by (X), is defined as the set of t ∈ [0,1] such that the mapping [0,1] →(S) : s ↦(X_s)is continuous at t. We emphasize that (X) only depends on (X) and not on X itself. Let X be an S-valued process that is right continuous in probability. Then (X) is co-countable (i.e. [0,1] ∖(X) is countable). In particular, it is dense in [0,1] and λ((X)∪{1} )=1. 
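Before the proof, a simple example (ours, purely for illustration) shows that discontinuities genuinely occur: let ξ be uniformly distributed on {-1,1} and set X_t := ξ 1_[1/2,1](t). Then (X_t) = δ_0 for t < 1/2 and (X_t) = 1/2(δ_-1+δ_1) for t ≥ 1/2, so (X) = [0,1] ∖{1/2}: the set of continuity points is co-countable and dense, but a single deterministic jump time already makes it a proper subset of [0,1].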
If X is right continuous in probability, then s ↦(X_s) is right continuous as well, so it has at most countable many discontinuities, cf. Lemma <ref> in the appendix. In the case of (measure-valued) martingales continuity points are exactly those points, where the trajectories are a.s. continuous:Let Z be a measure-valued martingale. Then (Z) = { t ∈ [0,1] : Z_t =Z_t-}.The corresponding claim is true for [0,1]-valued martingales X because |X_t - X_t -ϵ|^2 = |X_t|^2 - |X_t-ϵ|^2 → 0 for ϵ→ 0 if and only if (X_t - ϵ→(X_t) for ϵ↘ 0. It then extends to measure-valued martingales by testing against a countable convergence determining family. After having introduced the notion of continuity points, we can finally state our main result about convergence of finite dimensional distributions:Let (Z^n)_n be a sequence of (S)-valuedmartingales, and let Z be a (S)-valuedprocess. Then the following are equivalent: * Z^nZ as D((S))-valued random variables. * Z^n_TZ_Tfor all finite T ⊂(Z). * There is a dense set T' ⊂ [0,1] that contains 1 such that Z^n_TZ_Tfor all finite T ⊂ T'.If one (and therefore all) of the above are satisfied, Z is a measure-valued martingale.The proof of this theorem is given at the end of Section <ref>. Terminating measure-valued martingales. Let X be an _1-measurableS-valued random variable on a filtered probability space (Ω,,, (_t)_t ∈ [0,1])satisfying the usual conditions.We have for all f : S → [0,1] Borel and all s ≤ tf^∗([(X|_t)|_s]) = [f^∗((X|_t))|_s] = [[f^∗(X)|_t]|_s] = [f^∗(X)|_s] = f^∗((X|_s))and therefore[(X|_t)|_s] = (X|_s).So, ((X|_t))_t is a measure-valued martingale. Moreover, we have (X|_1)=δ_X. This motivates the following definition: We call a measure-valued martingale Z=(Z_t)_t ∈ [0,1] terminating, if there exists a random variable X such that Z_1=δ_X. In this case we say that Z terminates in X.We denote the set of laws of terminatingmeasure-valued martingales by ℳ_0(S).Note that ℳ_0(S) is closed in ℳ(S) because it is the preimage of the closed set δ(S) under the continuous mapping (e_1).If X terminates Z, we have for all f : S → [0,1] Borel andt ∈ [0,1]f^∗(Z_t) = f^∗([Z_1|_t]) = f^∗([δ_X|_t]) = [f^∗(δ_X)|_t] = [f(X)|_t] = f^∗((X|_t))and therefore for all t ∈ [0,1]Z_t = (X|_t) a.s.This observation implies: If Z is ameasure-valued martingale w.r.t. (_t)_t ∈ [0,1] that terminates at X, then Z is the up to indistinguishability uniqueversion of ((X|_t))_t ∈ [0,1].The following result specializes Theorem <ref> to the case of terminating martingales.Let {Z^j : j ∈ J} be a family ofmeasure-valued martingales such that Z^j terminates atX^j for all j ∈ J. Then{(Z^j) : j ∈ J }⊂ℳ_0(S) is relatively compact in ℳ_0(S) if and only if {(X^j) : j ∈ J } is relatively compact in (S). We have Φ((Z^j)) = [Z^j_1] = (X^j) for all j ∈ J, so {(Z^j) : j ∈ J} is relatively compact in ℳ(S) by Theorem <ref>. As ℳ_0(S) is closed inℳ(S), we obtain relative compactness in ℳ_0(S) as well. §.§ Convergence of finite dimensional distributions Fubini's theorem implies that convergence in probability of stochastic processes whose path space is equipped with the Meyer–Zheng topology can be seen from different perspectives: Let (Ω ,,) be a probability space and equip [0,1] with the measure λ. Let X_n, X Ω× [0,1] → S, n ∈, be jointly measurable. Then the following are equivalent: * X_n → X in measure w.r.t. ⊗λ as S-valued functions. * X_n → X in measure w.r.t.as L_0(([0,1],λ);S)-valued functions. * X_n → X in measure w.r.t. λ as L_0((Ω,);S)-valued functions. 
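The following small Monte Carlo experiment — the concrete processes and all identifiers are chosen by us purely for illustration — exhibits viewpoint (i) for X_n(ω,t) = 1_{t ≤ω + 1/n} and X(ω,t) = 1_{t ≤ω} on Ω = [0,1] equipped with  = λ:

```python
import numpy as np

# X_n(omega, t) = 1{t <= omega + 1/n} and X(omega, t) = 1{t <= omega};
# we sample (omega, t) from P x lambda = Unif[0,1]^2 and estimate the
# (P x lambda)-measure of {|X_n - X| >= 1/2}, which equals 1/n up to
# a boundary term, so it vanishes as n -> infinity: viewpoint (i).
rng = np.random.default_rng(1)
omega = rng.random(1_000_000)
t = rng.random(1_000_000)

X = (t <= omega).astype(float)
for n in (10, 100, 1000):
    X_n = (t <= omega + 1.0 / n).astype(float)
    print(n, np.mean(np.abs(X_n - X) >= 0.5))  # roughly 0.1, 0.01, 0.001
```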
Straightforward.It is well known that f_n → f in measure if and only ifevery subsequence (f_n_k)_k has a further subsequence (f_n_k_j)_j that converges a.s. to f, cf. Lemma <ref>. Adopting the point of view (ii) in the preceding lemma, this readily implies Let X^n, X, n∈, be jointly measurable processes with values in S. Then the following are equivalent: * X^n → X in probability as L_0([0,1];S)-valued random variables. * For every subsequence (X^n_k)_k there exists a further subsequence (X^n_k_j)_j and a set T ⊂ [0,1] satisfying λ(T)=1 such thatX^n_k_j_t → X_t in probability for all t ∈ T. In particular, if X^n → X in probability as L_0([0,1];S)-valued random variables, then there is λ-full set T and asubsequence X^n_k such that X^n_k_t → X_t in probability for all t ∈ T. By the Skorohod representation theorem (see e.g. <cit.>), this translates to the following assertion on convergence in law: Let X^n, X, n∈, be jointly measurable processes with values in S such that X^n → X in law as L_0([0,1];S)-valued random variables. Then there is a λ-full set T ⊂ [0,1] and asubsequence (X^n_k)_k such that the finite dimensional distributions of (X^n_k_t)_t ∈ T converge weakly to thefinite dimensional distributions of (X_t)_t ∈ T.Meyer–Zheng used this observation in<cit.> to prove that the set of martingale measures is closed in (D()). The following Lemma generalizes this idea: Denote the canonical process on D(S) by X = (X_t)_t ∈ [0,1], letϕ : S → [0,1] be continuous and let μ^n ∈(D(S)) be such thatthe process (ϕ(X_t))_t ∈ [0,1] is a martingale w.r.t. the filtration generated by X under μ^n. If μ^n →μ∈(D(S)), then (ϕ(X_t))_t ∈ [0,1] is a martingale w.r.t. the filtration generated by X under μ. By Corollary <ref> there exist a λ-full set T ⊂ [0,1] and a subsequence (μ^n_k)_k such that for all t_1, …, t_m ∈ T we have _μ^n_k(X_t_1,…, X_t_m) →_μ(X_t_1,…, X_t_m). As (ϕ(X_t))_t is a martingale w.r.t. the filtration generated by X under μ^n_k we have for all s ≤ t _μ^n_k[ ϕ(X_t) | X_r : r ≤ s ] = ϕ(X_s). This is equivalent to _μ^n_k[ ϕ(X_t) g_1(X_s_1) ⋯ g_m(X_s_m) ] = _μ^n_k[ ϕ(X_s) g_1(X_s_1) ⋯ g_m(X_s_m) ] for all s_1 ≤…≤ s_m ≤ s ≤ t and all g_1, …, g_m ∈ C_b(S). If s_1, …, s_m, s,t ∈ T we can take the limit k →∞ in (<ref>) due to (<ref>) and obtain_μ[ ϕ(X_t) g_1(X_s_1) ⋯ g_m(X_s_m) ] = _μ[ ϕ(X_s) g_1(X_s_1) ⋯ g_m(X_s_m) ]. As X is , (<ref>) for all s_1, …, s_m, s,t ∈ T implies that (<ref>) holds for all s_1, …, s_m, s,t ∈ [0,1], so under μ,(ϕ(X_t))_t is a martingale w.r.t. the filtration generated by X. Lemma <ref> will be helpful several times in this paper. Indeed, it addresses exactly the issue discussed in Remark <ref>: A (S)-valued processes Z is a measure-valued martingale in its own filtration if and only if for all f : S → the processes Z[f] are martingales w.r.t. the filtration generated by Z. So, applying Lemma <ref> to all functions f^∗, where f ∈ C_b(S), readily implies ℳ(S) is closed in (D((S))).In the context of Corollary <ref> we can say even more, when restricting to the closed set of martingale measures: We do not need to pass to a subsequence and we can characterize the set T ⊂ [0,1] on which the finite dimensional distributions converge. A first result in this direction is the following elementary observation about increasing functions. For the sake of completeness, a proof of this lemma is given in the appendix. Let f, f_n : [0,1] → be increasing functions. Then the following are equivalent: * f_n → f in the Meyer–Zheng topology. 
* f_n → f pointwise in all continuity points of f and f_n(1) → f(1). Applying this result to the increasing function t ↦ X_t^2, where X is a martingale, yields Let (X^n)_n bemartingales in their own filtration with values in [0,1] and X be a measurable process which is right continuous in probability. Denote by T the set of continuity points of the mapping t ↦ |X_t|^2. Suppose that X^n → X in probability w.r.t. the Meyer–Zheng topology on the path space. Then X is a martingale in its own filtration, [0,1] ∖ T is countable and we haveX^n_t → X_t in probability for all t ∈ T.For f ∈ L_0([0,1]) and δ >0 we defineη_δ(f)_t := 1/λ([t,t+δ])∫_[t,t+δ] f dλ.Note that the mapping L_0([0,1]) → : f ↦η_δ(f)_tis well-defined and continuous for all t ∈ [0,1] and δ>0. T is co-countable by Lemma <ref> and X is a martingale by Lemma <ref> applied with ϕ=𝕀. Let t ∈ T. Our aim is to show that |X^n_t - X_t|^2 → 0 as n →∞. Since λ({1})>0, the claim is trivial for t=1, so we may assume t<1. Fix ϵ >0 and denote f_s := |X_s|^2 and f^n_s := |X^n_s|^2. As f is right continuous in t and T is co-countable, there is δ>0 such that t+δ∈ T and |f_t - f_t+δ|<ϵ. By applying Lemma <ref> to the increasing functions f_n, f at the points t, t+δ∈ T, we find some n_0 such that |f^n_t-f_t| < ϵ and |f^n_t+δ -f_t +δ| < ϵ for all n ≥ n_0. Using Jensen's inequality we estimate |X_t - η_δ(X)_t|^2≤1/λ([t,t+δ])∫_t^t + δ|X_t-X_s|^2 ds= 1/λ([t,t+δ])∫_t^t + δ|X_s|^2 -|X_t|^2 ds ≤|X_t+δ|^2 - |X_t|^2. Notice that the analogous inequality holds true when X is replaced by X^n. Using this, we can further estimate for n ≥ n_0 1/3|X^n_t-X_t|^2 ≤|X^n_t - η_δ(X^n)_t|^2 + |η_δ(X^n)_t -η_δ(X)_t|^2 + |η_δ(X)_t -X_t|^2≤|X^n_t+δ|^2 - |X^n_t|^2 + |η_δ(X^n)_t -η_δ(X)_t|^2 + |X_t+δ|^2 - |X_t|^2 ≤ |f^n_t+δ - f_t+δ | + |f_t+δ -f_t| + |f^n_t -f_t| + |η_δ(X^n)_t -η_δ(X)_t|^2 + |f_t + δ -f_t|≤|η_δ(X^n)_t -η_δ(X)_t|^2 + 4 ϵ. As the mapping g ↦η_δ(g)_t is continuous w.r.t. the Meyer–Zheng topology and |X^n_t|,|X_t| ≤ 1, we have|η_δ(X^n)_t -η_δ(X)_t|^2→ 0 for n →∞ and we conclude |X^n_t-X_t|^2→ 0. Recall that the set of continuity points of a process X, denoted by (X), is defined as the set of continuity points of the mapping[0,1] →(S) : t ↦(X_t). Since the map ([0,1]) → [0,1] : μ↦∫ x^2 μ(dx) is continuous, every continuity point of t ↦(X_t) is also a continuity point of t ↦|X_t|^2. Therefore, we conclude X^n_t → X_t for all t ∈(X) in Proposition <ref>. Using this notion of continuity points, we can easily extend the result to measure-valued martingales.Let Z^n, Z, n∈, be (S)-valued martingales such that Z^n → Z in probability w.r.t. the Meyer–Zheng topology on the path space. Let T ⊂(Z) be finite. Then Z^n_T → Z_T in probability. Let { f_k : k ∈}⊂ C_b(S) such that { f_k^∗ : k ∈} is convergence determining on (S). By continuity of f_k we have T ⊂(Z) ⊂(Z[f_k]) for all k ∈. Fix t ∈ T. Proposition <ref> applied to Z^m[f_k], Z[f_k] yields f_k^∗(Z^m_t) = Z^m[f_k]_t → Z[f_k]_t =f_k^∗(Z_t)in probability, for all k ∈, so we can conclude Z^m_t → Z_t in probability. Since T is finite, this implies Z^m_T → Z_T in probability.Let Z^n, Z, n∈, be (S)-valuedmartingales such that Z^nZw.r.t. Meyer–Zheng on the path space, and let T ⊂(Z). Then (Z^n ,Z^n_T)(Z,Z_T). The Skorohod representation theoremapplied to Z^n,Z regarded as random variables with values in the Lusin space D((S)) yields the existence of a probability space (Ω,,) and random variablesZ^n, Z on that space such that Z^n ∼ Z^n,Z∼ Z and Z^n→Z -a.s. and hence in probability. 
Denote the representatives constructed above by W^n and W, so that W^n ∼ Z^n, W ∼ Z and W^n → W a.s. and hence in probability. Let T ⊂(Z) and recall that (Z) merely depends on the law (Z) and not on Z itself, so we have (W) = (Z). Corollary <ref> implies that W^n_T → W_T in probability. Hence, (W^n, W^n_T) → (W, W_T) in probability. We conclude (Z^n,Z^n_T) = (W^n, W^n_T) →(W, W_T) = (Z,Z_T).

§.§ Tightness, compactness and existence of càdlàg modifications

In this section we finally prove Theorem <ref> and Theorem <ref>. The following lemma can be seen as a Doob maximal inequality for measure-valued martingales. Let 𝒵 be a family of measure-valued martingales such that { I((Z_1)) : Z ∈𝒵} is tight and let T ⊂ [0,1] be countable. Then, for every ϵ > 0, there exists a compact set 𝒦_ϵ⊂(S) such that for all Z ∈𝒵: [ Z_t ∈𝒦_ϵ for all t ∈ T] ≥ 1 - ϵ. If additionally the measure-valued martingales have càdlàg paths, we have [ Z_t ∈𝒦_ϵ for all t ∈ [0,1]] ≥ 1 - ϵ. For all n ∈, let K_n ⊂ S be a compact set such that [Z_1(K_n^c)] = I((Z_1))(K_n^c) ≤ 2^-2nϵ for all Z ∈𝒵. As (Z_t(K_n^c))_t is a [0,1]-valued martingale by Lemma <ref>, Doob's martingale inequality implies [sup_t ∈ T Z_t(K^c_n) ≥ 2^-n] ≤ 2^n [Z_1(K_n^c)] ≤ 2^-nϵ. The set 𝒦_ϵ := { p ∈(S) : p(K_n^c) ≤ 2^-n for all n ∈} is tight by construction and hence compact by Prohorov's theorem. We have [ Z_t ∈𝒦_ϵ for all t ∈ T ] ≥ 1 - ∑_n ∈ [∃ t ∈ T : Z_t(K^c_n) ≥ 2^-n ] ≥ 1 - ∑_n ∈ 2^-nϵ = 1-ϵ, for all Z ∈𝒵. If Z has càdlàg paths, (<ref>) applied to T = [0,1] ∩ℚ readily implies (<ref>). Using this, we can derive the existence of càdlàg modifications for measure-valued martingales: Let Z be a measure-valued martingale on (Ω,,) that is right continuous in probability and let {ϕ_n : n ∈} be a convergence determining family on S that is closed under multiplication. For all n ∈, Z[ϕ_n] is a real-valued martingale that is right-continuous in probability. Thus it admits a càdlàg version Y^n. By Lemma <ref>, there is for every m ∈ a set Ω^m ∈ s.t. [Ω^m] ≥ 1 - 1/m and a compact set 𝒦_m ⊆(S) such that for all ω∈Ω^m, n ∈ and t ∈ [0,1] ∩ℚ: Z[ϕ_n]_t(ω) = Y^n_t(ω) and Z_t(ω) ∈𝒦_m. The set Ω' := ⋃_m ∈Ω^m is -full. On Ω' we define Y_t := lim_s ↘ t, s ∈ℚ Z_s. First, we argue that this limit exists for all ω∈Ω'. If ω∈Ω', then there is an m ∈ such that ω∈Ω^m. Let (s_k)_k be a sequence in ℚ∩ (t,1] that converges to t. As Z_s_k(ω) ∈𝒦_m for all k ∈, this sequence has at least one limit point. By the right-continuity of Y^n and (<ref>), any limit point μ of this sequence has to satisfy ϕ_n^∗(μ) = Y^n_t(ω), so the limit point is unique, i.e. the sequence converges. By the same reasoning, this limit point is independent of the choice of (s_k)_k, so the limit in (<ref>) exists. As Z is right continuous in probability, we have Z_t = Y_t a.s. for all t ∈ [0,1]. Any further càdlàg modification Y' satisfies Y'_t = Z_t = Y_t a.s. for all t ∈ [0,1] and, as Y and Y' are both càdlàg, Y and Y' are indistinguishable. Let _M(D([0,1])) denote the set of all μ∈(D([0,1])) such that the coordinate process on D([0,1]) is a martingale under μ. Meyer–Zheng proved in <cit.> that _M(D([0,1])) is compact. As discussed in Section <ref>, we can identify a path f ∈ D([0,1]) with the corresponding pseudopath ι_[0,1](f) ∈([0,1] × [0,1]); more rigorously, ι_[0,1] : D([0,1]) →([0,1] × [0,1]) is a topological embedding. Hence, we can consider measures on D([0,1]) as measures on ([0,1] × [0,1]); rigorously, (ι_[0,1]) : (D([0,1])) →(( [0,1] × [0,1] )) is again an embedding, cf. Remark <ref>. As _M(D([0,1])) is compact, (ι_[0,1])(_M(D([0,1]))) is a closed subset of (( [0,1] × [0,1] )).
As already discussed in Remark <ref>, we can suppress the embedding ι_[0,1] in order to avoid notational excess, so we can just say that _M(D([0,1])) is a closed subset of (( [0,1] × [0,1] )). The following proposition generalizes this fact from [0,1]-valued martingales to measure-valued martingales. As usual, we prove results for a measure-valued process Z by considering the associated real-valued processes Z[f], where f ∈ C_b(S), which were introduced in (<ref>). However, keeping Remark <ref> in mind, we have to be careful here. ℳ(S) is a closed subset of (([0,1] ×(S) )). Consider the set 𝒩 := {μ∈(([0,1] ×(S) )) : Ψ(f^∗)_#μ∈_M(D([0,1])) for all f ∈ C_b(S) }. We first show the following chain of inclusions: ℳ(S) ⊂𝒩⊂(D((S))) ⊂(([0,1] ×(S) )). The last inclusion is trivial. In order to show the first inclusion, let μ∈ℳ(S) and denote by Z the canonical process on D((S)). For all f ∈ C_b(S), Z[f] = Ψ(f^∗)(Z) is a càdlàg martingale under μ, so Ψ(f^∗)_#μ∈_M(D([0,1])). Hence, μ∈𝒩. To show the second inclusion, let { f_n : n ∈} be convergence determining on S and closed under multiplication and recall that Lemma <ref> states that { f^∗_n : n ∈} is convergence determining in (S) as well. Recall that Proposition <ref>(c) states that p ∈( [0,1] ×(S) ) is a càdlàg pseudo-path if and only if Ψ(f_n^∗)(p) is a càdlàg pseudo-path for all n ∈. Let μ∈𝒩; then Ψ(f_n^∗)_#μ is concentrated on [0,1]-valued càdlàg paths for all n ∈, so μ is concentrated on (S)-valued càdlàg paths, i.e. μ∈(D((S))), and we have shown (<ref>). Corollary <ref> states that ℳ(S) is closed in (D((S))), so it is closed in 𝒩 as well. But it is easy to see that 𝒩 itself is closed in (([0,1] ×(S) )). Indeed, (Ψ(f^∗)) is continuous for all f ∈ C_b(S) and (ι_[0,1])(_M(D([0,1]))) is closed in ( ( [0,1] × [0,1] )). Hence, we conclude that ℳ(S) is closed in (([0,1] ×(S) )). If S is compact, then (([0,1] ×(S) )) is compact as well, so Proposition <ref> implies that ℳ(S) is compact. So the only thing we need to do in the upcoming proof of Theorem <ref> is to relax the compactness assumption on S. This can be done using Lemma <ref>: Let 𝒜⊂ℳ(S) be such that { I((Z_1)) : Z ∈𝒜} is relatively compact in (S). We need to show that every sequence (Z^n)_n in 𝒜 has a convergent subsequence with limit Z ∈ℳ(S). After possibly passing to a subsequence, we may assume that (I((Z_1^n)))_n is a convergent sequence in (S). By Theorem <ref>(b), the set { I((Z^n_1)) : n ∈} is tight in (S). Denote μ^n := (Z^n) ∈ℳ(S) ⊂(([0,1] ×(S) )). Our aim is to show that (μ^n)_n is tight in (([0,1] ×(S) )). Let ϵ > 0. By Lemma <ref> there is a compact set 𝒦_ϵ such that for all n ∈: [ Z^n_t ∈𝒦_ϵ for all t∈[0,1] ] ≥ 1 -ϵ. Put differently, this asserts that μ^n( ([0,1] ×𝒦_ϵ) ) ≥ 1-ϵ. As ([0,1] ×𝒦_ϵ) is compact, we can conclude tightness of the sequence (μ^n)_n. So, Prohorov's theorem implies that there is some μ∈(([0,1] ×(S) )) such that, after passing to a subsequence, we have μ^n →μ. By Proposition <ref> we have μ∈ℳ(S). (i)⇒(ii) was already shown in Corollary <ref>. (ii)⇒(iii) is true since (Z) ∪{1} is dense and contains 1 according to Lemma <ref>. (iii)⇒(i): Let D ⊂ [0,1] be dense and contain 1. Let (Z^n)_n be a sequence of (S)-valued càdlàg martingales and let Z be a (S)-valued càdlàg process such that Z^n_T → Z_T in distribution for all finite T ⊂ D. First note that we have Z^n_1 → Z_1 in distribution and hence, due to the continuity of the intensity operator, [Z^n] →[Z]. So, Theorem <ref> implies that ((Z^n))_n is relatively compact in ℳ(S). Hence, it suffices to show that any limit point of ((Z^n))_n is (Z). So, let Z^n_j → Y in distribution for some (S)-valued càdlàg process Y.
The already shown implication (i)⇒(ii) yields that Z^n_j_T → Y_T in distribution for all finite T ⊂(Y). This implies (Y_T) = (Z_T) for all finite T ⊂ D ∩(Y). Since Y and Z are càdlàg and D ∩(Y) is dense and contains 1, we conclude (Y) = (Z). Hence, Z^n → Z in distribution. If either (i), (ii) or (iii) is satisfied, then (i) is satisfied, as we have already shown that they are equivalent. So, we have Z^n → Z in distribution and conclude that Z is a measure-valued martingale because ℳ(S) is closed in (D((S))), cf. Corollary <ref>.

§ FILTERED RANDOM VARIABLES AND THEIR PREDICTION PROCESS

§.§ Filtered processes and filtered random variables

Recall from the introduction that a continuous-time filtered process is a 5-tuple X = (Ω,,, (_t)_t ∈ [0,1], (X_t)_t ∈ [0,1]), where (Ω,,) is a probability space, (_t)_t ∈ [0,1] is a filtration satisfying the usual conditions, and (X_t)_t ∈ [0,1] is a càdlàg process that is adapted to (_t)_t ∈ [0,1]. The collection of all filtered processes with S-valued càdlàg paths (i.e. X_t ∈ S for all t ∈ [0,1]) is denoted by (S). Similarly, a discrete time filtered process is a 5-tuple consisting of a probability space (Ω,,), a complete filtration (_t)_t=1^N on (Ω,,) and an (_t)_t=1^N-adapted S-valued process X = (X_t)_t=1^N. The collection of all S-valued filtered processes with N time steps is denoted by _N(S). Next, we introduce the more abstract setting of filtered random variables: Here, the process (X_t)_t ∈ [0,1] is replaced by a random variable X. This notion was introduced by Hoover–Keisler <cit.>. As a stochastic process (X_t)_t ∈ [0,1] can be viewed as a random variable which takes values in a path space, the Hoover–Keisler viewpoint is more general. However, the actual reason to take this stance is that it is technically more convenient for the purposes of the next sections. Let S be a Lusin space and N ∈. An S-valued filtered random variable with N time steps is a 5-tuple X = ( Ω, , , (_t)_t=1^N, X ) consisting of a probability space (Ω,,), a complete filtration (_t)_t=1^N on (Ω,,) and an _N-measurable random variable X : Ω→ S. _N(S) denotes the class of all S-valued filtered random variables with N time steps; if S is clear from the context, we write _N instead of _N(S). If X is a filtered random variable, we always refer to the elements of this tuple by Ω^ X, ^ X, ^ X, (_t^ X)_t =1^N, X, i.e. Ω^ X refers to the base set of the probability space of X etc. Let S be a Lusin space. An S-valued filtered random variable in continuous time is a 5-tuple X = ( Ω, , , (_t)_t ∈ [0,1], X ) consisting of a probability space (Ω,,), a right-continuous complete filtration (_t)_t ∈ [0,1] and an _1-measurable random variable X : Ω→ S. The class of all S-valued filtered random variables is denoted by (S); if S is clear from the context, we suppress it from the notation. Again, we refer to the elements of the tuple X by Ω^ X, ^ X, ^ X, (_t^ X)_t ∈ [0,1] and X. Note that every filtered process is in particular a filtered random variable, but the converse is not true, because in the definition of a filtered random variable no adaptedness condition is present. The relation between filtered processes and filtered random variables is further examined in Section <ref>.
In particular, we show that these concepts are equivalent in a certain sense and that results on filtered random variables can be translated into results on filtered processes and vice versa.§.§ The prediction processWe start with defining the first order prediction processes for discrete time processes:The prediction process of a filtered random variable X ∈_N(S) is the (S)-valued filtered process ( X) defined by( X) = (Ω^ X, ^ X, ^ X, (_t^ X)_t=1^N, ( X)),where_t( X) = (X|_t^ X) for all t = 1,…, N.Note that ( X) is a discrete time measure-valued martingale w.r.t. the filtration (_t^ X)_t=1^Nthat terminates at X. The definition of the prediction processes in continuous time is analogous, but technically a bit more involved. The prediction process of a filtered random variable X ∈(S) is the D((S))-valued filtered process ( X) defined by( X) = (Ω^ X, ^ X, ^ X, (_t^ X)_t=1^N, ( X)),where( X) is the up to indistinguishablity uniqueversion of the measure-valued martingale ((X|_t^ X))_t ∈ [0,1].An elementary, but crucial observation is Let X be an S-valued random variable on (Ω,,) andbe a sub-σ-algebra of . Then we have (X|) = (X|(X|)). More generally, if ℋ is a σ-algebra satisfying σ((X|)) ⊂ℋ⊂, we have (X|) = (X|ℋ). Clearly, it suffices to prove the second claim. Let ℋ be a σ-algebra satisfying σ((X|)) ⊂ℋ⊂ and f: S → [0,1] be Borel. Since [f(X|)]= f^∗((X|)), [f(X)|] is σ((X|))-measurable. Therefore, it is ℋ-measurable as well, which yields [f(X)|] = [ [ f(X)| ] | ℋ ] = [f(X)|ℋ], where the second equality is due to the tower property. Equation (<ref>) expresses precisely a conditional independence property (see e.g. <cit.>), namely: Given the random measure (X|), the random variable X is independent of the σ-algebra . Loosely speaking, (<ref>) says that (X|) already contains all information thathas about X.In particular,_t^1( X) = (X|^ X_t) contains all information that _t^ X has about X.On a first glance, one might think that this implies that ^1( X) contains all information that the filtration (^ X_t)_t has about X. This is not true: ^1( X) does not contain “information of higher order”, e.g. for s<t, information of ^ X_s on whether ^ X_t knows something about X. Such information is relevant in sequential decision problems. For instance <cit.> provides an example of two filtered processes that have the same prediction process but yield different values in optimal stopping problems, see also Remark <ref> below.Indeed, ((X|^ X_t)|_s^ X) contains the information that _s^ X has about (X|^ X_t), which is precisely the information that ^ X_s has on what ^ X_t knows about X. This motivates to consider iterated prediction processes: ( X) is by definition again a filtered process, hence we can simply consider its prediction process (( X)). This second order prediction process of X takes values in the space ((S)^N)^N resp. D((D((S)))). As the path spaces of the iterated prediction processes are increasingly complicated, we need to introduce a notation for these spaces:In the case of N discrete time steps we define inductively M_0^(N)(S):=S,M_n+1^(N)(S) := (M_n^(N)(S))^N. In the continuous time case we define inductivelyM_0(S):=S,M_n+1(S) := { f ∈ D((M_n(S))) : f(1) ∈δ(M_n) }.For n= ∞, we set M_∞(S) := { f ∈D(∏_n=0^∞(M_n(S)) ) : f(1) ∈∏_n=0^∞δ(M_n(S))}. The iterated prediction process of a discrete- or continuous-time filtered random variable X is inductively defined as^0( X) :=X,^0( X) :=X,^n+1( X):= (^n( X)),^n+1( X):= (^n( X)).The n-th order prediction process ^n( X) is an M_n- resp. 
M_n^(N)-valued filtered process. The prediction process of order ∞ of a continuous time filtered process is given by_t^∞( X) :=(_t^1( X),_t^2( X),…),^∞( X) := ( Ω^ X, ^ X, ^ X,(_t^ X)_t, ^∞( X) ).It is an M_∞-valued filtered process. We collect a few remarks on the definition of the iterated prediction process: * Both in discrete and continuous time, ^n( X) is a measure-valued martingale that terminates at ^n-1( X) for all n ∈.The process ^∞( X) is a martingale in the sense that it is a (countable) vector, all of whose entries are measure-valued martingales. * In the definition of M_n+1, the condition f(1) ∈δ(M_n) is purely for technical convenience: It helps to avoid the problem of extending δ^-1 : δ(M_n) → M_n to a map that is defined on the whole space (M_n), (cf. Remark <ref>) in Lemma <ref> and applications of it. For all n ∈∪{∞}, the space M_n can be seen as a closed subspace of the space 𝖬_n introduced in the introduction.[M_1 is a closed subset of 𝖬_1. For n >1, one can easily construct canonical topological embeddings with closed range. To that end, consider the inclusion map ι_1 : M_1 →𝖬_1 and define inductively ι_n+1 :=Ψ((ι_n)) ∘ j_n+1, where j_n+1 : D((M_n)) →𝖬_n+1 is the inclusion. For n = ∞,set ι_∞ := π∘ (ι_1, ι_2, … ) ∘π^-1 : M_∞→𝖬_∞, where π is the map introduced in Lemma <ref>.] By definition, ^n( X) has paths inM_n, so (^n( X)) ∈(M_n), but we can also regard it as element of (𝖬_n) that is supported on M_n. * Lemma <ref> is also true in discrete time. We left it out for simplicity as we will not need it later on, but the reader may consult<cit.>. *In the case of N discrete time steps, only the predictions processes of order up to N-1 are relevant. More precisely, <cit.> imply that for every k > N-1 there exists a continuous map F_k : M^(N)_N-1→ M^(N)_k such that ^k( X) = F(^N-1( X)) for all X ∈_N. What matters is that ^k( X) can be computed using only the random variable ^N-1( X), without relying on the filtration (_t^ X)_t = 1^N. This means that all the necessary information is already present in ^N-1( X) and higher-order prediction processes do not offer any extra insights. In particular, in discrete time ^∞( X) is redundant, hence we omitted the definition. A first important observation is that the prediction processes of higher order contain more information: That ^n( X) terminates at ^n-1( X) means precisely that^n-1( X) = (δ^-1∘ e_1)(^n( X)),recalling that e_1 denotes the map that evaluates paths at time 1 and δ^-1(δ_x)=x. Iterating this procedure yieldsFor all 0 ≤ k ≤ n ≤∞, there is a continuous function R^n,k : M_n → M_k such that we have for all X ∈R^n,k(^n( X)) = ^k( X).For all ℓ≤ k ≤ n < ∞, we have R^n,ℓ = R^k,ℓ∘ R^n,k. For n ∈, let R^n,n-1 : M_n → M_n-1 := δ^-1∘ e_1. For k<n we set R^n,k : M_n → M_k := R^k+1,k∘ R^k+2,k+1∘…∘ R^n-1,n-2∘ R^n,n-1.For k ∈ let _k := ∏_n=0^∞(M_n(S)) →(M_k(S)) denote the projection. We set R^∞,k := Ψ(_k-1) : M_∞(S) → M_k(S).Moreover, we set R^∞,0 := R^1,0∘ R^∞,1; and for all n ∈∪{∞} we adopt the convention R^n,n=𝕀_M_n. It is straight forward to check that R^n,k(^n( X)) = ^k( X).Lemma <ref> implies that one can obtain the whole path of ^n( X) from the whole path of ^n+1( X). There is also an adapted variant of this, i.e. from knowing ^n+1( X) up to time t, one can obtain ^n( X) up to time t. This implies that the filtrations generated by the prediction processes are ordered: Let X ∈ and for n ∈∪{∞}, set ^n_t := ⋂_ϵ>0σ^^ X( ^n_s( X) : s ≤ t + ϵ). 
Then we have for every t ∈ [0,1],_t^1 ⊂_t^2 ⊆…⊂_t^∞⊂_t^ X.Moreover, we have ^∞_t = σ( ^n_t : n ∈). If in addition, X ∈, we set ^0_t := ⋂_ϵ>0σ^^ X(X_s : s ≤ t + ϵ) and have ^0_t ⊂^1_t. For t ∈ [0,1], let r_[0,t] be the map which restricts apath defined on [0,1] to [0,t]. Let n ∈ (or n ∈_0 if X ∈). As ^n( X) is adapted to (_t^ X)_t ∈ [0,1], we find for t ∈ [0,1],r_[0,t]_#_t^n+1( X) = ( ^n_[0,t]( X) | ^ X_t) = δ_^n_[0,t]( X).Hence, ^n_t( X) = (δ^-1∘(r_[0,t])) (^n+1_t( X)). Therefore, the filtration generated by ^n( X) is smaller than the filtration generated by ^n+1( X) and this inclusion carries over to the right-continuous augmentation.For the second claim observe that, as ^∞_t( X) = (^n_t( X))_n ∈, sets of type {^n_1_t_1( X) ∈ A_1 ,…,^n_k_t_k( X) ∈ A_k },where k ∈, n_1,… n_k ∈, t_1,… t_k ∈ [0,1] and A_i ⊂(M_n_i-1) generate σ(_t^∞( X)).As the set given in (<ref>) is σ(^n( X))-measurable for n = max{n_1,…,n_k},⋃_n ∈σ(^n_t( X)) generates σ(^∞_t( X)).This relation carries over to the right-contiguous augmentation.The following property of the prediction process will play animportant role throughout the paper. Let X ∈, t ∈ [0,1] and n ∈_0. Then we have ^n+1_t( X)=( ^n( X) |)for any σ-algebrathat satisfies σ(^n+1_t( X)) ⊂⊂_t^ X. Moreover, we have ( ^∞( X) | _t^ X) =( ^∞( X) |)for any σ-algebrathat satisfies σ(^∞_t( X)) ⊂⊂_t^ X.The first claim is an immediate consequence of Lemma <ref> applied to ^n( X) and .In order to prove the second claim, letbe a σ-algebra satisfying σ(^∞_t( X)) ⊂⊂_t^ X. Due to the already proved first claim we have that^n( X) is independent of _t^ X given, for all n ∈. Equation (<ref>) follows because⋃_n ∈σ(_t^n( X)) is a generator of σ(_t^∞( X)).Let X ∈ℱℛ and set X :=( Ω^ X, ^ X, ^ X, (_t)_t∈[0,1], X), where (_t)_t∈[0,1] is the right-continuous augmentation[We will see in Proposition <ref> below that the filtration of ^∞( X) is already right-continuous.] of the filtration generated by ^∞( X). Then X ≈_∞ X. We show by induction on n ∈ that ^n( X) = ^n( X) a.s. For n=0, we have ^0( X) = X =^0( X). Assuming^n( X) = ^n( X) a.s., (<ref>) implies that for all t ∈ [0,1]a.s.^n+1_t( X) = (^n( X) | _t) =(^n( X) | _t) = (^n(X)| _t) = ^n+1_t( X).As the paths of prediction processes are , we conclude ^n+1_t( X) = ^n+1_t( X) a.s.Next, we want to make precise that the procedure of iterating prediction processes naturally ends with ^∞( X), i.e. that ^1(^∞( X)) does not contain more information about X than ^∞( X). To that end, denote C : = {(μ^n)_ n ∈_0 ∈∏_n=0^∞(M_n(S)): R^n,k_#μ^n = μ^kfor allk ≤ n }. There is a homeomorphismF : C →(M_∞(S)) such that for all X ∈(S) and all t ∈ [0,1]^1_t(^∞( X)) = F(^∞_t( X)).The map π : ∏_n=1^∞M_n(S) → M_∞(S) : (z_n)_n ∈↦ (t ↦ (z_n(t))_n ∈) is a homeomorphism, cf.Lemma <ref>. We have for all t ∈ [0,1] (π^-1)(^1_t(^∞( X))) = (π^-1)( (^∞( X) |_t^ X )) = ( (^n( X))_n ∈ | _t^ X ). On the other hand, we have by definition ^∞_t( X) = (^n_t( X))_ n ∈ = ( (^n( X)|_t^ X))_n ∈_0. On the first glance it may appear that (<ref>) contains less information than (<ref>) because the sequence ( (^n( X)|_t^ X)(ω))_n ∈ is the collection of marginal distributions of( (^n( X))_n ∈ | _t^ X )(ω). However, this is not true because ^k( X) is a function of ^n( X) for k<n, so we can derive ( ^1( X),…,^n( X)|_t^ X ) from (^n( X)|_t^ X ). To formalize this, write _n : ∏_k=1^∞ (M_k) →(M_n) for the projection and set ψ_n := (R^n,1,…, R^n,n) ∘_n :∏_k=0^∞(M_k) →( ∏_k=1^n M_k ). Indeed, ψ_n is continuous and satisfies ψ_n(^∞_t( X)) = ( ^1( X),…, ^n( X) | _t^ X ). 
The next step is to “glue” them together to get( (^n( X))_n ∈ | _t^ X ). To that end, denote by G : { (μ^n)_n ∈∈Π_n = 1^∞𝒫(Π_k = 1^n M_k) : _M_1 ×⋯× M_n_#μ_n + 1 = μ_nfor all n ∈}→( ∏_k=1^∞ M_k ) the map which maps a consistent family (μ^k)_k to the unique measure μ on the infinite product space such that (_1,…,_k)_#μ=μ^k for all k ∈. Then the mapF := G ∘ (ψ_n)_n ∈:C →( ∏_k=1^∞ M_k )is continuous since (_1,…,_k) ∘F = ψ_k is continuous for all k ∈ and satisfiesF(( (^n( X)|_t^ X))_n ∈_0)= ( (^n( X))_n ∈ | _t^ X ).It is straightforward that its inverse (F)^-1(μ) = (R^∞, n_#μ)_n∈_0 is also continuous. We conclude bysetting F := π∘F. Using the map F from the previous lemma, it is easy to see that ^∞( X) is not only Markov, but even strong Markov. Let X ∈ and τ be an (_t^ X)_t ∈ [0,1]-stopping time. Then we have ^1_τ(^∞( X)) = F(^∞_τ( X)),where F is the map defined in (<ref>). In particular, ^∞( X) is strong Markov.If τ is a stopping time that takes finitely many values, equation(<ref>) immediately implies(^∞( X)|_τ^ X) = F(^∞_τ( X)). Given an arbitrary stopping time τ, let τ_n := d_n(τ), where d_n(t) = 2^-nmin{ k ∈_0 : k2^-n≥ t}. By backward martingale convergence, the continuity of F and right-continuity of the paths ^∞( X), we find(^∞( X)|_τ^ X) = lim_n (^∞( X)|_τ_n^ X) = lim_n F(^∞_τ_n( X)) = F(_τ( X)).Corollary <ref> underlines that the prediction processes “copies” all relevant information that the filtration (_t^ X)_t ∈ [0,1] encodes about X into more complicated processes Z, i.e. we do not want to lose relevant information, when just considering (X,Z) and forgetting about the filtration (_t^ X)_t ∈ [0,1].§.§ Continuity points of filtered random variablesRecall from Definition <ref> that the set of continuity points of a process X is defined as the set of continuity points of the map t ↦(X_t). We extend this definition to filtered random variables in the following way: The set of continuity points of X ∈ is defined as ( X) := (^∞( X)). A direct consequence of Lemma <ref> is Let X ∈. Then ( X) is co-countable. Hence, it is dense and λ( ( X)∪{ 1 } )=1. Let X be an S-valued random variable on (Ω,,) and (_t)_t ∈ [0,1] be a filtration[Here, we do not necessarily assume right continuity of the filtration.] on (Ω,,). * If (t_n)_n is strictly increasing to t, we have ( X |_t_n ) →(X|_t-) a.s. * If (t_n)_n is strictly decreasing to t, we have ( X |_t_n ) →(X|_t+) a.s.We only prove the first statement, since the proof of the second statement is a straight forward modification, replacing martingale convergence with backward martingale convergence. Fix a sequence (t_n)_n strictly increasing to t. Since I( (X|_s) ) = (X) for all s ∈ [0,1],Lemma <ref> applied with T={ t_n : n ∈} implies that for all m ∈, there is a compact set 𝒦_m ⊂(S) and Ω_m ⊂Ω with (Ω_m) ≥ 1-1/m such that for all ω∈Ω_m and n ∈( X | _t_n ) (ω) ∈𝒦_m.Let {f_k : k ∈} be a convergence determining sequence for S that is closed under multiplication. Then {f_k^∗ : k ∈} is convergence determining in (S). Fix a version of (X|_t-).As f_k^∗((X|_t_n)) is a version of [f_k(X)|_t_n] for all n∈ and f_k^∗((X|_t-)) is a version of [f_k(X)|_t-], the martingale convergence theorem implies that there there is a -full set Ω' such that for all ω∈Ω' and all k ∈ we have lim_n →∞ f_k^∗((X|_t_n)(ω)) = f_k^∗( (X|_t-)(ω) ).The set Ω” := Ω' ∩ ( ⋃_m Ω_m) is again -full. Let ω∈Ω”. Then there is m ∈ such that ( X | _t_n ) (ω) ∈𝒦_m for all n∈. Hence, the sequence (( X | _t_n ) (ω))_n has a limit point μ∈(S). 
By (<ref>) any limit point μ satisfies f_k^∗(μ) = f_k^∗( (X|_t-)(ω) ) for all k ∈, hence μ= (X|_t-)(ω). This shows that( X |_t_n ) →(X|_t-) on Ω”. Let X be an S-valuedprocess and let _s:=σ^(X_r: r ≤ s), s∈[0,1]. Then for t ∈ [0,1] the following are equivalent: *(_s)_s ∈ [0,1] is left-continuous (right-continuous) at t; *( X | _t_n) →(X|_t) a.s. whenever t_n → t and (t_n)_n is increasing (decreasing); *s ↦( (X|_s) ) is left-continuous (right-continuous) at t. As the proof of the corresponding claims of left- resp. right-continuity are simple modifications of each other, we only show the assertion for left-continuity. <ref><ref>: Let (t_n)_n be a sequence that increases to t. By Lemma <ref> we have(X|_t_n) →(X|_t-) = (X|_t) a.s. <ref><ref>: Is trivial because almost sure convergence implies convergence in law. <ref><ref>: The pair ((X|_t-), (X|_t)) is a two step martingale which has the same initial and terminal distribution, according to <ref>. Therefore, we find (X |𝒢_t-)= (X | _t) In order to check <ref>, we need to show that _t ⊂_t-. Since cylindrical sets of type { X_t_1∈ A_1, …, X_t_n∈ A_n}, where n ∈,t_1,…, t_n ≤ t and A_1, …, A_n ⊂ S Borel are generator of _t, so it suffices to show that all sets of type(<ref>) are contained in _t-. Indeed, (<ref>) implies that we have a.s. 1_{ X_t_1∈ A_1, …, X_t_n∈ A_n} = [X_t_1∈ A_1, …, X_t_n∈ A_n |_t] = [X_t_1∈ A_1, …, X_t_n∈ A_n |_t-]. As _t- is complete, we conclude { X_t_1∈ A_1, …, X_t_n∈ A_n}∈_t-. Let X ∈ and _s := σ^^ X(^∞_r( X) : r ≤ s), s ∈ [0,1]. The filtration (_s)_s is right-continuous and for t ∈ [0,1] the following are equivalent: * The filtration (_s)_s is continuous at t, that is _t=_t-. * The paths of ^∞( X) are a.s. continuous at t. * t ∈( X), that is s ↦(^∞_s( X)) is continuous at t. As the filtration (_t^ X)_t is assumed to be right-continuous, the implication <ref><ref> in Lemma <ref> yields that t ↦((^∞( X) |_t^ X )) is right-continuous. By (<ref>) we have ((^∞( X) |_t^ X )) = ((^∞( X) |_t )), thus the implication <ref><ref> in Lemma <ref> yields that (_t)_t is right-continuous. In order to show the claimed equivalence, we apply Lemma <ref> to ^∞( X). Again by imposing (<ref>), we can conclude that for t ∈ [0,1] that the following are equivalent: (i) _t=_t-. (ii) We have ^1_t_n(^∞( X)) →^1_t(^∞( X)) a.s. for every increasing sequence t_n → t. (iii) s ↦(^1_s(^∞( X))) is continuous at t. Note that (ii) is equivalent to a.s. continuity of the paths at t because ^1(^∞( X)) haspaths. Lemma <ref> yields that (ii) is equivalent to (ii) and that (iii) is equivalent to (iii).§ THE SPACE OF FILTERED RANDOM VARIABLES §.§ Equivalence classes of filtered random variablesFor 0 ≤ n ≤∞ we introduce the equivalence relation ≈_n on the class of filtered random variables (S) byX ≈_nY(^n( X)) =(^n( Y)).We call the factor space(S) := (S) /_≈_∞the space of S-valued filtered random variables.For 0 ≤ n ≤∞ we define the Hoover–Keisler topology of rank n, denoted by _n, as the initial topology w.r.t. the mapping (S) →(M_n) :X ↦(^n( X)).We call the Hoover–Keisler topology of rank ∞ just the adapted weak topology and we writeinstead of _∞. In particular, the law of ^∞( X) is called the adapted distribution of X. * We have X≈_0Y if and only if (X) = (Y). The equivalence relation ≈_1 was already considered by Aldous <cit.>. Two processes are called synonymous if their first order prediction processes have the same law. * Lemma <ref> implies that for 0 ≤ k ≤ n ≤∞ the equivalence relation ≈_n is finer than the relation ≈_k. 
It is straightforward to translate the example given in <cit.> to the continuous setup.[The construction in <cit.> also implies that for every k there exist filtered processes X,Y such that X ≈_kY but sup{ [X_τ]: τ is an ^ X-stopping time}≠sup{ [Y_τ]: τ is an ^ Y-stopping time}.] This yields that ≈_n is strictly finer than≈_k, if k <n. In view of Definition <ref> of the prediction process of order ∞, it is clear that X ≈_∞ Y if and only if X ≈_nY for all n ∈. *Note that the map →(M_∞) :X ↦(^∞( X)) is an embedding. This follows because the map is injective (its kernel is preciselythe equivalence relation ≈_∞, which is factored out in the definition of ) and the topologyis defined as the initial topology w.r.t. this map.As (M_∞) is separable metrizable and these properties are inherited by subspaces, (,) is separable metrizable as well. We will later see in Corollary <ref> that (,) is a Lusin space[This statement is not trivial because despite being “separable metrizable”,the property “Lusin” is in general not inherited by subspaces.] and that (,) is Polish, if S is Polish. * The considerations in <ref> imply that (,_n) is not Hausdorff for n < ∞. However, it is easy to see that is still a separable pseudometrizable space. For the rest of the paper n will always denote the rank of the Hoover–Keisler topology _n. If we state a result for _n it is always meant for all n∈_0∪{∞}, unless we explicitly specify something else. If a result is only true for n=∞ we state it as result for(without any index). Indeed,is the weakest topology that is stronger than all topologies _n, n ∈:Let ( X^m)_m be a sequence inand X ∈. Then X^m → X inif and only if X^m → X in _n for all n ∈. We deduce from Lemma <ref> that (^∞( X^m)) →(^∞( X)) is equivalent to the convergence of ((^n( X^m))_n∈) to((^n( X))_n∈). Lemma <ref> implies that this is again equivalent to (^1( X^m),…,^n( X^m)) →(^1( X),…,^n( X) ) for all n ∈. Lemma <ref> provides for k<n the existence of a continuous function R^n,k that satisfies R^n,k(^n( Y)) = ^k( Y), hence convergence of(^1( X^m),…,^n( X^m)) to (^1( X),…,^n( X) ) is equivalent to (^n( X^m)) →(^n( X) ). All in all, we have shown that (^∞( X^m)) →(^∞( X)) if and only if (^n( X^m)) →(^n( X) ) for all n ∈. We introduce a notion of applying a function f : S_1 → S_2 to an S_1-valued filtered random variable that will be useful later on.Let X ∈(S_1) and f : S_1 → S_2 be Borel. Then we define f ♢ X ∈(S') asf ♢ X := (Ω^ X, ^ X, ^ X, (_t^ X)_t, f ∘ X). Let X,Y ∈(S) and f : S → S' be Borel. * If X ≈_nY, then f ♢ X ≈_n f ♢ Y. * The operationX ↦ f ♢ X is well defined from(S_1) to(S_2). * The operation X ↦ f ♢ X inherits the following properties from the function f : S_1 → S_2 to: injective, surjective, bijective, Borel, continuous, being a topological embedding. * If f is continuous, we have (f ♢ X) ⊇( X). We assume that f is continuous and discuss at the end of the proof how to relax this assumption. First we prove by induction on n that we have ^n( f ♢ X) =(Ψ∘)^n(f) ( ^n( X ) ) a.s.This is trivial for n=0 because ^0(f ♢ X ) = f(X) = f(^0( X)). Assume that (<ref>) is true for n, then we have for all t ∈ [0,1] a.s.^n+1_t(f ♢ X )= ( ^n( f ♢ X) |_t^ X) =( (Ψ∘)^n(f) ( ^n( X ) ) |_t^ X)) = ( (Ψ∘)^n(f)) ( (^n( X) | _t^ X )) = ( (Ψ∘)^n(f)) (^n+1_t( X) ).As f is continuous, (Ψ∘)^n+1(f) mapspaths topaths (cf. Proposition <ref>), so (Ψ∘)^n+1(f) ( ^n+1( X ) ) is aprocess. Using Remark <ref> weconclude^n+1( f ♢ X) =(Ψ∘)^n+1(f) ( ^n+1( X ) ) from (<ref>).This immediately implies (a), (b) and (d). 
Also (c) follows then from the respective properties of the functors Ψ and , cf. Remark <ref> and Proposition <ref>. If f : S_1→ S_2 is not continuous but merely Borel, we may replace the topology on S_1 by a stronger Polish topology that generates the same Borel sets and renders the map f : S_1 → S_2 continuous, cf. <cit.>. Note that (a), (b), and (c) except for the items“continuous” and “topological embedding”, are statements that do not depend on the topology of S_1. Hence it is legitimate to replace the topology for the proof. §.§ Canonical representativesFormally, X ∈ is an equivalence class of filtered random variables whose prediction processes of order ∞ have the same law. In this section we construct a canonical representative of these equivalence classes.This canonical representative will allow us to characterize the probability measures on M_n that are the distribution of a prediction process of order n. As we will see below, thisis also a crucial ingredient for the proof of the compactness result in the next section.Let X ∈ be given. Whenever k ≤ n (or k<n if n=∞), the process ^k( X) is a measure-valued martingale w.r.t. (_t^ X )_t ∈ [0,1], so in particular,^k( X) is also a measure-valued martingale w.r.t. the filtration generated by ^n( X). The latter is just a property of the joint law ( ^n( X), ^k( X) ). As ^k( X) is a function of ^n( X) (cf.Lemma <ref>), it is in fact just a property of the law of ^n( X). It turns out that this property already characterizes the laws of predictions processes among all probabilities on M_n. To make this precise, we need to introduce some notation: Fix n ∈∪{∞}. We denote the canonical process onM_n ⊂ D((M_n-1)) n∈ D(∏_k=0^∞(M_k)) n = ∞ by Z^n = (Z^n_t)_t ∈ [0,1], i.e. Z^n is aprocess with values in (M_n-1) if n < ∞, and in ∏_k=0^∞(M_k) if n = ∞.For k < n we define Z^k := R^n,k : M_n → M_k ⊂ D((M_k-1)),so Z^k = (Z^k_t)_t ∈ [0,1] is aprocess with values in (M_k-1). Finally, we define the S-valued random variableX := Z^0 := δ^-1(Z^1_1). A probability μ∈(M_n) is called consistently terminating martingale law, if for all k <n it holds under μ that* Z^k+1 is a measure-valued martingale w.r.t. the filtration generated by Z^n and * Z^k+1 terminates at Z^k. Let μ∈(M_n). Then the following are equivalent: * The probabilityμ is a consistently terminating martingale law. * There exists some X ∈ such that μ = (^n( X)). We have already established <ref><ref> in the consideration at the beginning of this section. The other direction <ref><ref> is a consequence of the following proposition: Let μ∈(M_n) be a consistently terminating martingale law. Define_t := σ^μ(Z^n_s : s≤ t),_t := _t+, t∈ [0,1] and X^μ = (M_n, ℬ_M_n, μ, (_t)_t ∈ [0,1], X),where ℬ_M_n denotes the (completed) Borel σ-algebra on M_n and X is defined as in (<ref>).Then we have Z^k=^k( X^μ) a.s. for all k ≤ n.From now on X^μ will always denote the filtered random variable defined in (<ref>). Before we give the proof of Proposition <ref>, we stress the following immediate consequence of this proposition: Let X ∈. Then we have X ≈_∞ X^(^∞( X)). In particular, every -equivalence class contains a representative that is defined on a standard Borel probability space. We show Z^k=^k( X) a.s. by induction on k ≤ n. Indeed, the claim is trivial for k=0 because ^0( X^μ) = X =Z^0. Assume that the claim is true for k ≤ n-1. As Z^k+1 is a measure-valued martingale w.r.t. 
the filtration generated by Z^n and terminates at Z^k, we have for all t ∈ [0,1]: (^k( X^μ)|_t) = (Z^k | Z^n_s : s ≤ t) = Z^k+1_t a.s. Lemma <ref> yields that for t < 1 we have ^k+1_t( X^μ) = (^k( X^μ)|_t+) = lim_n →∞(^k( X^μ)|_t+1/n) = lim_n →∞ Z^k+1_t+1/n = Z^k+1_t a.s., where the last equality holds because Z^k+1 is càdlàg. As the filtration and its right-continuous augmentation agree at time 1, we have ^k+1_1( X^μ) = Z^k+1_1 as well. Hence, we conclude Z^k+1 = ^k+1( X^μ) by Remark <ref>. In the case n = ∞, it remains to check the claim for k = ∞. Indeed, we have for all t ∈ [0,1]: Z^∞_t = (Z^j_t)_j ∈ = (^j_t( X^μ))_j ∈ = ^∞_t( X^μ) a.s. As both sides are càdlàg, we conclude ^∞( X^μ) = Z^∞ a.s.

§.§ Compactness

The main goal of this section is to prove the following compactness result for filtered random variables: A set 𝒜⊂(S) is relatively compact w.r.t. _n if and only if {(X) : X ∈𝒜} is relatively compact in (S). Using Prohorov's theorem (see Section <ref>), Theorem <ref> implies: If {( X ) : X ∈𝒜} is tight, then 𝒜 is relatively compact. This is an equivalence when the space S is Polish. The proof of Theorem <ref> consists of two steps: * Preservation of compactness: We will show that {(^n( X)) : X ∈𝒜} is relatively compact in (M_n(S)) if and only if {(X) : X ∈𝒜} is relatively compact in (S). * Closedness of laws of prediction processes: Using Theorem <ref> we show that these form a closed subset of (M_n). The set of consistently terminating martingale laws is closed in (M_n). We stick to Notation <ref>. Let (μ^m)_m be a sequence of consistently terminating martingale laws that converges to μ∈(M_n). Fix k < n, let g ∈ C_b(M_k) and write ϕ := g^∗∘ R^n,k+1 : M_n →. Then (ϕ(Z^n_t))_t ∈ [0,1] = Z^k+1[g] is a martingale under μ^m w.r.t. the filtration generated by Z^n. Hence, we are precisely in the regime of Lemma <ref> and conclude that the same is true under μ. This means that under μ, Z^k+1[g] is a martingale w.r.t. the filtration generated by Z^n for all g ∈ C_b(M_k). Hence, Z^k+1 is a measure-valued martingale w.r.t. the filtration generated by Z^n. The property that Z^k+1 terminates at Z^k is exactly Z^k+1_1 = δ(Z^k) a.s. This is preserved by the limit μ^m →μ because evaluation at time 1 and the mapping δ are both continuous. Let 𝒜⊂(S). If {(X) : X ∈𝒜} is relatively compact in (S), then {(^n( X)) : X ∈𝒜} is relatively compact in (M_n(S)). Let 𝒜⊂(S) be such that {(X) : X ∈𝒜} is relatively compact in (S). We first prove the result for n ∈ by induction. The claim is trivial for n = 0. Assume it is true for n, i.e. {(^n( X)) : X ∈𝒜} is relatively compact in (M_n(S)). As ^n+1( X) terminates at ^n( X), Corollary <ref> implies that {(^n+1( X)) : X ∈𝒜} is relatively compact in ℳ_0(M_n). Recall that ℳ_0(M_n) is the set of laws of terminating (M_n)-valued càdlàg martingales. We have ℳ_0(M_n) ⊂(M_n+1) and relative compactness in ℳ_0(M_n) implies relative compactness in (M_n+1). Let n = ∞. Due to our previous considerations, we already know that {(^n( X)) : X ∈𝒜} is relatively compact in (M_n(S)) for all n ∈. By Lemma <ref>(b) we can conclude relative compactness of {( (^n( X))_n∈ ) : X ∈𝒜} in (∏_n ∈ M_n), and by Lemma <ref> this is equivalent to relative compactness of {(^∞( X)) : X ∈𝒜} in (M_∞). Let 𝒜⊂(S) be such that {( X) : X ∈𝒜} is relatively compact. Lemma <ref> yields the relative compactness of {(^n( X)) : X ∈𝒜} in (M_n). Let ( X^m)_m be a sequence in 𝒜 and denote μ^m := (^n( X^m)). By relative compactness, there are a subsequence (μ^m_k)_k and μ∈(M_n) such that μ^m_k→μ. By Lemma <ref>, μ is a consistently terminating martingale law, hence by Theorem <ref>, there is some X ∈(S) such that μ = (^n( X)). Therefore, X^m_k→ X w.r.t. _n.
We conclude that is relatively compact in (,_n). Conversely, assume that is relatively compact in (,_n) for some n ∈∪{∞}. As the mapping (,) ∋ X ↦(X) ∈(S) is continuous (see Lemma <ref>), {(X) : X ∈} is relatively compact as well.

Theorem <ref> was already established by Hoover in <cit.>. Depending on the reader's taste, our proof might be considered more rigorous or just more pedantic than the one given by Hoover. Specifically, we aim to rigorously define the prediction processes as well as the spaces where they take their values, and we take some care to work out not only the case n=1, but also the inductive step n-1 ↦ n, as well as the limit step n=∞. We close this section with two useful corollaries of the compactness result.

If S is Polish (Lusin), then ((S),) is Polish (Lusin) as well. By Remark <ref><ref>, (S) is homeomorphic to a subset of the Lusin space (M_∞(S)). By Theorem <ref> and Lemma <ref>, this subset is closed. As closed subspaces of Lusin spaces are Lusin, we conclude that (S) is Lusin. If S is assumed to be Polish, (S) →(S) : X ↦(X) is a continuous map (cf. Lemma <ref>) into a Polish space. Theorem <ref> states precisely that preimages of compact sets are compact, so Lemma <ref> implies that ((S),) is Polish.

§ DISCRETIZATION

A natural way to discretize (w.r.t. time) filtered random variables is to just restrict the filtration (_t)_t ∈ [0,1] to a finite set T ⊂ [0,1]. Throughout this section we fix a finite set of times T = {t_1,…,t_N} with 0 ≤ t_1 < … < t_N = 1. We consider the discretization operator D_T : (S) →_N(S) : X ↦ X_T := (Ω^ X,^ X, ^ X, (_t_i^ X)_i=1^N , X). The main result of this section is that this operator can be well defined on the factor spaces and _N and that we can give conditions for continuity of D_T at a point X: The discretization operator D_T : (,_n) → (_N,_n) : X ↦ X_T is well-defined and Borel measurable. Moreover, it is continuous at all X ∈ satisfying T ⊂( X), i.e. if X^m → X in (,_n) and if X satisfies T ⊂( X), then X^m_T → X_T in (_N,_n). Recall that Corollary <ref> ensures that ( X) is co-countable for all X ∈, so Theorem <ref> states that D_T is continuous at X for “typical” sets T.

The following example shows that we cannot expect continuity of the discretization operator without assumptions on T: Let (Ω,,) = ( {0,1}, 2^{0,1}, 1/2(δ_0+δ_1) ) and let X=𝕀. For s ∈ [0,1) define the filtration (^s_t)_t such that _t^s is trivial for t<s and _t^s := for t ≥ s. Then for each s ∈ [0,1] we have a filtered random variable X^s := (Ω, ,, (^s_t)_t ∈ [0,1], X). It is easy to see that ( X^s)=[0,1]∖{s} and X^s+1/n→ X^s in (,). However, we have ^1_s( X^s+1/n)(ω) = 1/2(δ_0+δ_1) for all n ∈ and ω∈Ω, whereas ^1_s( X^s)(ω)=δ_ω for ω∈Ω. Hence, when setting T = {s,1} we do not have X_T^s+1/n→ X_T^s in (_2,).

The rest of this section is dedicated to the proof of Theorem <ref>. The idea is to recursively construct mappings F^n_T : M_n → M_n^(N) that map the n-th order prediction process of a continuous-time filtered random variable X to the n-th order prediction process of its discretization X_T, and to investigate their continuity properties. We first set F_T^0 := 𝕀 : S → S. Assuming that F^n_T : M_n → M_n^(N) is already defined, we define F^n+1_T by F_T^n+1 : M_n+1→ M_n+1^(N) : z ↦ ( F^n_T_# z(t_1),…, F^n_T_# z(t_N) ). This is indeed well defined: First we evaluate the path z ∈ M_n+1⊂ D(P(M_n)) at the times T = { t_1,…, t_N }, which gives a map from M_n+1⊂ D(P(M_n)) to (M_n)^N. Then at every time t_i we apply the pushforward w.r.t. F_T^n, that is the map (F_T^n)_# : (M_n) →(M_n^(N)).
Hence, the target space of F_T^n+1 is (M_n^(N))^N = M_n+1^(N). We first show that F^n_T indeed maps the n-th order prediction process of X ∈ℱ𝒫 to the n-th order prediction process of its discretization: Let n ∈. Then we have for all X ∈ℱℛ F_T^n(^n( X)) = ^n( X_T). We show this by induction on n. For n=0 it is trivial. Assume the result is true for n. Then we find for every X ∈ F^n+1_T(^n+1( X))= ( F_T^n_#(^n( X)| _t_1^ X )), …, F_T^n_#(^n( X)| _t_N^ X ))=( (F_T^n(^n( X))| _t_1^ X )), …,(F_T^n(^n( X))| _t_N^ X )) =((^n( X_T)|^ X_t_1),… , (^n( X_T)|^ X_t_N))=^n+1( X_T). We give some calculation rules for convergence in law which are useful to prove the main theorem of this section: Let Y^m_i, Y_i be S_i-valued random variables, i ∈{1,2,3},and let f : S_2 → S_3 be Borel. Then we have (Y_1^m,Y_2^m)(Y_1,Y_2) (Y_2^m,Y^m_3)(Y_2,Y_3) Y_3 =f(Y_2)a.s. (Y_1^m,Y_2^m,Y_3^m)(Y_1,Y_2,Y_3) . Let Y^m,Y be S-valued random variables and for i ∈{ 1,…, k } letf_i : S → S_i be Borel. Then we have(Y^m,f_1(Y^m))(Y,f_1(Y)) ⋮ (Y^m,f_k(Y^m))(Y,f_k(Y)) (Y^m,f_1(Y^m),…, f_k(Y^m))(Y,f_1(Y),…, f_k(Y)).This follows from Lemma <ref> by induction on k.Let Z, Z' be (S_1 × S_2)-valued random variables and f : S_1 → S_2 be Borel. If I((Z))((f))=I((Z'))((f))=1 and(_1_# Z ) = ( _1_# Z' ), then (Z)=(Z'). Observe that𝔼[Z((f))] = ∫ p((f)) (Z)(dp) = I((Z))((f)) = 1.Since Z((f)) ∈ [0,1], we deduce Z((f)) = 1 a.s., which entails Z = (𝕀,f)(_1#Z) a.s. We obtain(Z) = ((𝕀,f)(_1_# Z)) = (𝕀,f)_#(_1_#Z).Analogously, we find that (<ref>) holds when Z is replaced by Z'. Consequently, using that (_1_# Z) = (_1_#Z'), it follows from (<ref>) that (Z) = (Z').Let X^m, X be S-valued random variables on probability spaces (Ω^m,^m,^m) and (Ω,,) resp. and let ^m ⊂^m and ⊂ resp. be sub-σ-algebras. Then we have(Y^m,f(Y^m))(Y,f(Y))( Y^m|^m) (Y|) ( Y^m, f(Y^m)|^m)) (Y,f(Y)|)).Since I((( Y^m, f(Y^m)|^m) ) ) = ( Y^m,f(Y^m)) converges for m →∞, Proposition <ref> implies that((( Y^m, f(Y^m)|^m) ) )_m ∈ is relatively compact. Hence, it suffices to show that ( (Y,f(Y)|) ) is the only possible limit point of this sequence.To that end, write Z := (Y,f(Y)|) and observe that Z has the propertiesI((Z))((f)) = (Y,f(Y))((f))=1,_1_# Z = _1_#(Y,f(Y)|) = (Y|).Consider a subsequence (( Y^m_j, f(Y^m_j)|^m_j))_j such that( Y^m_j, f(Y^m_j)|^m_j)Z'.Our aim is to check that Z' also has the properties listed in (<ref>) and to invoke Lemma <ref> to conclude that (Z) = (Z'). Indeed, we have I((Z')) = lim_j →∞ I(( ( Y^m_j, f(Y^m_j)|^m_j) )) =lim_j →∞(Y^m_j,f(Y^m_j)) = (Y,f(Y))and hence I((Z'))((f))= 1.Moreover, we havein distribution_1_# Z' = lim_j →∞_1_#( Y^m_j, f(Y^m_j)|^m_j) = lim_j →∞_1_#( Y^m_j|^m_j) = (Y|).Hence, we can use Lemma <ref> to conclude that (Z)=(Z'). Let Y_1^m,Y_1 be S_1-valued random variables and Y_2^m, Y_2 be S_2-valued random variables on probability spaces (Ω^m,^m,^m) and (Ω,,) resp. and let ^m ⊂^m and ⊂ resp. be sub-σ-algebras. Then we have ( Y_1^m,Y_2^m |^m) (Y_1,Y_2|)( (Y^m_1|^m),(Y^m_2|^m) )( (Y_1|),(Y_2|) ). The map ϕ : (S_1 × S_2 ) →(S_1) ×(S_2) : μ↦ (_1_#μ,_2_#μ) is continuous and we have for all m ∈ (and resp. for Y_1,Y_2) ϕ_#( Y_1^m,Y_2^m |^m)= ( (Y^m_1|^m),(Y^m_2|^m) )a.s.Let X^m → X in _n and T ⊂( X). We prove by induction on k ≤ n(^k( X^m), F_T^k(^k( X^m)) ) (^k( X), F_T^k (^k( X))) . Indeed, this is trivial for k=0. Assume it is true for some k<n. Since k+1 ≤ n we have ^k+1( X^m) ^k+1( X). Since T ⊂( X) ⊂(^k+1( X)).Corollary <ref> yields (^k+1( X^m), ^k+1_T( X^m))(^k+1(X), ^k+1_T( X)).Fix t ∈ T. 
Since ^k+1_t(X^m ) = (^k ( X^m)|_t^ X^m ), equation (<ref>) implies( ^k( X^m) |_t^ X^m ) ( ^k ( X) | _t^ X ).So, (<ref>) and (<ref>) put us precisely in the setting of Proposition <ref> and we conclude that( ^k( X^m), F_T^k( ^k( X^m) )| _t^ X^m ) ( ^k( X), F_T^k( ^k( X) )| _t^ X ).Using Lemma <ref>,the fact that (^k( X^m |_t^ X^m )) = _t^k+1( X^m) and that ( F_T^k( ^k( X^m)| _t^ X ) ) = F_T^k_#_t^k+1( X^m),we obtain (^k+1_t( X^m), F_T^k_#_t^k+1( X^m) ) (^k+1_t( X),F_T^k_#_t^k+1( X)).We apply Lemma <ref> with Y_1^m = ^k+1( X^m), Y_2^m = Y_1^m = ^k+1_t( X^m), Y_3^m = F_T^k_#_t^k+1( X^m) and f = (F_T^k)to (<ref>) and (<ref>)andconclude(^k+1( X^m), F_T^k_#_t^k+1( X^m) ) (^k+1( X),F_T^k_#_t^k+1( X)).Noting that F_T^k+1( ^k+1( X^m) ) = (F_T^k_#_t^k+1( X^m))_t ∈ T, we derive from (<ref>) for all t ∈ T using Corollary <ref> with f_t = (F_T^k) ∘ e_t, t∈ T (^k+1( X^m), F_T^k+1(^k+1( X^m)) ) (^k+1( X), F_T^k+1 (^k+1( X))) . This is exactly (<ref>) for k+1, hence the induction is complete.From (<ref>) with k=n and F_T^n(^n( Y)) = ^n( Y_T) (cf. Lemma <ref>) we obtain ^n( X^m) ^n( X). This proves Theorem <ref> for n ∈. Note that there is no extra work to be done in the case n=∞ since on _Nall topologies _n, wheren>N-1, coincide with _N-1, cf. Remark <ref><ref>.§ ADAPTED FUNCTIONSWe recall the concept of adapted functions from <cit.>. Loosely speaking, adapted functions are operations that take a filtered random variables as an argument and return a random variable defined on the underlying probability space of that filtered random variable. The rank of an adapted function is a measure for its complexity. More precisely,it is the maximum number of nested conditional expectations appearing when evaluating this adapted function, as we will see below. An adapted function is a function[Strictly speaking, an adapted function is an element of a term algebra: (AF1) defines for every f ∈ C_b(S) an operation symbol of arity 0. (AF2) defines for every g ∈ C_b(^m) an operation symbol of arity m. (AF3) defines for every t ∈ [0,1] an operation symbol of arity 1. Definition <ref> below states how to interpret these terms. Note that two different terms can lead to the same map X ↦ f( X). The rank is a property of the term, not of the function X ↦ f( X), as a given function X ↦ f( X) can also be represented by `unnecessarily complicated' terms. ] that can be built using the following rules: (AF1) Every f ∈ C_b(S) is an adapted function and (f)=0. (AF2) If f_1, …, f_n are adapted functions and g ∈ C_b(^m), then g(f_1,…,f_m) is an adapted function with (g(f_1,…,f_m)) = max{(f_i)i = 1,…,m}. (AF3) If f is an adapted function and t ∈ [0,1], then (f|t) is an adapted function and ((f|t)) = (f)+1. We writefor the set of adapted functions and [n] for the set of adapted functions of rank at most n.Note that [0]=C_b(S). The value of an adapted function at a filtered random variable isagainrandom variable (on the same probability space). Formally it is defined through the following induction: Let X∈. (AF1) If f ∈[0], then its value at X is f( X) := f(X). (AF2) The value of g(f_1,…,f_m) at X is g(f_1,…,f_m)( X) := g(f_1( X), …, f_m( X)). (AF3) The value of (f|t) at X is (f|t)( X) = [f( X)|_t^ X]. One can define equivalence relations onusing adapted functions. Let X,Y∈. We say that X and Y have the same adapted distribution of rank n, denoted by X∼_nY, if [f( X)] = [f( Y)] for all f ∈[n]. We say that X and Y have the same adapted distribution, denoted by X∼_∞ Y, if [f( X)] = [f( Y)] for all f ∈. 
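To illustrate the rank bookkeeping, consider the following small example (ours, not part of the original text; conditional expectations are written out explicitly). Take f ∈ C_b(S), g ∈ C_b(ℝ^2) and times s ≤ t, and set

h := g( (f|s), ((f|t)|s) ).

By (AF1), f has rank 0; by (AF3), (f|s) and (f|t) have rank 1 and ((f|t)|s) has rank 2; by (AF2), (h) = max{1,2} = 2. Its value at a filtered random variable X is

h( X) = g( 𝔼[f(X) | ℱ_s^X], 𝔼[ 𝔼[f(X) | ℱ_t^X] | ℱ_s^X ] ),

so the rank indeed counts the deepest nesting of conditional expectations.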
Our aim is to show that the equivalence relation ∼_n defined in terms of adapted functions coincides with the equivalence relation ≈_n defined in terms of prediction processes, cf. Definition <ref>. The next lemma implies that the relation ≈_n is stronger than the relation ∼_n, that is, X≈_nY implies X∼_nY. Let f ∈ and F : M_(f)→ be bounded and Borel. We say that F represents f, if we have for all X ∈ℱℛ f( X) = F(^(f)( X)). All adapted functions are representable. All adapted functions of rank 0 are represented by themselves. Hence, it suffices to show that the set of representable adapted functions is closed under the operations (AF2) and (AF3). Let g ∈ C_b(^m) and let f_1, …, f_m be adapted functions with (f_i)=n_i that are represented by F_i : M_n_i→. Set n := max_i n_i. Recalling the mappings R^n, k, n≥ k from Lemma <ref> we define the bounded Borel function GM_n → z ↦ g( F_1( R^n,n_1(z) ), …,F_m( R^n,n_m(z) )). Then we have for all X∈ℱℛ G(^n( X))= g( F_1( R^n,n_1(^n( X)) ), …,F_m( R^n,n_m(^n( X)) )) = g(F_1(^n_1)( X), …, F_n(^n_m)( X))= g(f_1( X),…,f_m( X)) = g(f_1,…,f_m)( X). Next, let f ∈ be represented by F and t ∈ [0,1]. Then GM_n+1→ z ↦∫ F(w)e_t(z)(dw) represents (f|t) because we have G(^n+1( X)) = ∫ F(w) (^n( X)|_t^ X)(dw) = [F(^n( X)) |_t^ X ] = [f( X)|_t^ X] = (f|t)( X), for all X ∈. We introduce a notion to keep trackwhich time evaluations were used in a given adapted function: For every f ∈ we define T(f) ⊂ [0,1] inductively via (AF1) T(f) = ∅ for all f ∈[0]; (AF2) T(g(f_1,…,f_m)) =⋃_i=1^m T(f_i); (AF3) T((f|t)) = T(f) ∪{t}. Note that T(f) is finite for all f ∈.Let n ∈ and f ∈[n] and denote N:= |T(f)|. By minor modifications in the proof of Lemma <ref> one can show that for every f ∈ there exists a continuous bounded function F: M_n^(N)→ such that f( X) = F(^n( X_T(f))).Indeed, the construction is the same as in the proof above and the continuity of F is due to the continuity of evaluations at time points in the discrete time setting.Next, weprove that X∼_nY implies X≈_nY. To that end, we inductively construct sufficiently big families 𝒜_n consisting of bounded Borel functions on M_n which represent adapted functions. Let T ⊂ [0,1] be a dense set that contains 1. For all n ∈, there is a family 𝒜_n consisting of bounded measurable functions M_n → such that *_n is countable, closed under multiplication, and separates points in (M_n); *Every F ∈_n represents some f ∈[n] satisfying T(f) ⊂ T. For each m ∈, there exists a point separating family 𝒞_m ⊂ C_b(^m; [0,1]) which is countable and closed under multiplication by Lemma <ref>. W.l.o.g. we can require that 𝒞_1 contains the functionψ_0: → [0,1]: x ↦ (x ∨ 0) ∧ 1. Let us prove the result by induction. For n = 0, we can choose _0 = 𝒞_1. Assume that we have defined an algebra _n with the desired properties. We define _n+1 as the collection of all functions of type z ↦ψ( ∫ F_1(w)e_t_1(z)(dw), …, ∫ F_m(w)e_t_m(z)(dw)), where m ∈, ψ∈𝒞_m, F_1,…, F_m ∈_n and t_1,…, t_m ∈ T. Clearly, _n+1 is countable and closed under multiplication. We show that _n+1 separates points on M_n+1. Let z ≠ z' ∈ M_n+1⊆ D((M_n)). Then there is some t ∈ T such that e_t(z) ≠ e_t(z'). Since _n separates points in (M_n), there is some F ∈_n such that ψ_0( ∫ F(w) e_t(z)(dw))= ∫ F(w) e_t(z)(dw) ≠∫ F(w) e_t(z')(dw)=ψ_0( ∫ F(w) e_t(z')(dw)), and we have ψ_0( ∫ F(w) e_t(· )(dw)) ∈_n+1. By Lemma <ref>(b),_n+1 separates points on (M_n+1). 
Let G ∈_n+1 be composed of ψ∈ C_b(^m), F_1,…, F_m ∈_n and t_1,…, t_m ∈ T as in (<ref>), and let f_1, …, f_m ∈[n] be such that F_i represents f_i. Then G represents g := ψ( (f_1|t_1),…, (f_m|t_m)) ∈[n+1].

For X,Y ∈ the following are equivalent: * X≈_nY, that is (^n( X)) = ( ^n ( Y)); * X∼_nY, that is [f( X)] = [f(Y )] for all f ∈[n]; * There exists a dense set T ⊂ [0,1] that contains 1 such that [f( X)] = [f(Y )] for all f ∈[n] satisfying T(f) ⊂ T. It suffices to show the claim for n ∈, cf. Remark <ref><ref>. Fix n ∈. (i)⇒(ii) follows from Lemma <ref>. (ii)⇒(iii) is trivial. (iii)⇒(i): Assume that X≉_nY, that is (^n( X)) ≠(^n( Y)). By Lemma <ref> there is a point separating family _n of bounded Borel functions M_n → such that every F ∈_n represents some f ∈[n] satisfying T(f) ⊂ T. As _n is point separating, there is some F ∈_n such that [F(^n( X))] ≠ [F(^n( Y))]. By the definition of _n, there is some f∈[n] satisfying T(f) ⊂ T that is represented by F. For this f we have [f( X)] ≠[f( Y)].

The techniques developed in this section allow us to show that the Hoover–Keisler topology defined via the prediction processes is equal to the topology defined via adapted functions. This result can be seen as an extension of Theorem <ref>; in fact, in the case n=1 it is just a slight reformulation of that theorem. Let ( X^m)_m be a sequence in and X ∈. Then the following are equivalent: * X^m → X in _n. * For all f ∈[n] satisfying T(f) ⊂( X) ∪{ 1 } we have [ f( X^m) ] →[ f( X)]. * There exists a dense set T ⊂ [0,1] that contains 1 such that for all f ∈[n] satisfying T(f) ⊂ T it holds that [ f( X^m) ] →[ f( X)]. (i)⇒(ii): Let X^m → X in . As T(f) ⊂( X), Theorem <ref> yields the convergence of the discrete-time filtered random variables X^m_T(f) to X_T(f) in , i.e. (^n(X^m_T(f))) →(^n(X_T(f))). By Remark <ref> there is a continuous bounded function F such that [f( Y)] = [ F(^n( Y_T(f)))] for all Y ∈. Hence, we can conclude [f( X^m)] → [f( X)]. (ii)⇒(iii) is trivial. (iii)⇒(i): We assume that there is a dense T ⊂ [0,1], containing 1, such that [ f( X^m) ] →[ f( X)] for all f ∈[n] satisfying T(f) ⊂ T. Recall that for f ∈[0] we have T(f) = ∅ and [f( Y)] = [f(Y)] for all Y ∈. So, we have (X^m) →(X); in particular, the sequence ((X^m))_m is relatively compact in (S). Theorem <ref> yields that the sequence ( X^m)_m is relatively compact in (,). So, it suffices to show that every limit point of this sequence is equal to X. Let Y be a limit point and let ( X^m_k)_k be a subsequence such that X^m_k→ Y in (,_n). Denote T' := (T ∩( Y)) ∪{1}. For every f ∈ satisfying T(f) ⊂ T' we have [f( X)] = lim_k →∞ [f( X^m_k)] = [f( Y)], where the second equality is due to X^m_k→ Y and the already proven implication (i)⇒(ii). By Proposition <ref> we conclude X = Y.
Let (S) denote the set of S-valued filtered processes with continuous paths. Note that (S) is _n-closed in (S) for all n ∈_0 ∪{∞} if D(S) is equipped with the J_1-topology. (S) is _1-closed in (D(S)) if D([0,1],S) is equipped with either the Meyer–Zheng or the J_1-topology. Moreover, (S) is closed in (C(S)) if C(S) is equipped with the supremum distance. As _1 is the weakest topology among _n,n ∈∪{∞}, _1-closedness readily impliesclosedness in _n for all n ∈∪{∞}. In order to prove Proposition <ref>, we providetwo auxiliary lemmas: Let X,Y be S-valued random variables on a probability space (Ω,,) anda sub-σ-algebra of . If (X|) = δ_Y a.s. then X=Y a.s. Let {ϕ_nn ∈} be a point separating family on S. Then we have for all n ∈[ϕ_n(X) | ] = ϕ_n(Y) [ϕ_n(X)^2|] = ϕ_n(Y)^2.Using these equalities and the tower property we get [|ϕ_n(X)-ϕ_n(Y)|^2] = 0, soϕ_n(X) = ϕ_n(Y) a.s. for all n ∈. As the family {ϕ_nn ∈} is countable and point separating, we conclude that X = Y a.s.Let ϕ : S → S' be continuous and t ∈ [0,1]. Then{ X ∈(S) : ϕ(X) _t^ X}isclosed in (S) w.r.t. _1.Note that ϕ(X) is _t^ X-measurable if and only if ϕ(X) is _s^ X-measurable for all s ≥ t. By Lemma <ref>, the latter is equivalent to ^1_s( ϕ♢ X) taking values in δ(S') for all s ∈ [t,1].As the operation X ↦ϕ♢ X is continuous (cf. Proposition <ref>), the restriction to [t,1] is continuous, and D([t,1]; δ(S')) is closed in D([t,1]; (S')), the set (<ref>) is _1-closed.First observe that it suffices to prove the claim for the case of D(S) with the Meyer–Zheng topology. This readily implies the claim when D(S) is equipped with the J_1-topology because the J_1-topology is stronger than the Meyer–Zheng-topology[and hence for every n, the topology _n w.r.t. Meyer–Zheng on the path space D(S) is weaker than _n w.r.t. J_1 on D(S). To see this, apply Proposition <ref> the identity map from D(S) with Meyer–Zheng to D(S) with J_1.]. Moreover, the result for (D(S),J_1) implies the result for C(S) because the C(S) is J_1-closed subset of D(S) and the subspace topology of J_1 on C(S) is precisely the uniform topology. In order toovercome the issue that point evaluation is not continuous w.r.t. the Meyer–Zheng topology, we need to construct a family of continuous functions which allows us to characterize adaptedness. First note that there is a metric d that induces the topology of S and a set F ⊂ C(S) which has the following properties: F is convergence determining, countable, closed under multiplication, every f ∈ F is Lipschitz w.r.t. d and satisfies 0 ≤ f≤ 1.To see this, embed S into [0,1]^, set d((x_n)_n,(y_n)_n) := ∑_n ∈ 2^-n|x_n-y_n| and let F be the collection of all finite products of projections [0,1]^→ [0,1].For t ∈ [0,1] and n ∈, let g_t,n(s) = (1- n(t-s)_+)_+, i.e. we set g_t,n(s)=1 for s ≤ t, g_t,n(s) = 0 for s ≥ t+1/n and interpolate linearly between t and t+1/n. Forf ∈ F, t ∈ [0,1] and n ∈ we define the mapϕ_f,t,n : D(S) → D([0,1]) : h ↦ (f ∘ h) · g_t,n.Note thatd_1(h_1,h_2) := ∫ d(h_1(s),h_2(s)) λ(ds) is a metric for the Meyer–Zheng topology on D(S) and d_2(h_1,h_2) := ∫ |h_1(s)-h_2(s)| λ(ds) is a metric for the Meyer–Zheng topology on D([0,1]). A straightforward calculation shows that ϕ_f,t,n : (D(S),d_1) → (D(S),d_2) is Lipschitz and hence continuous.It remains to show that a filtered random variable X ∈(D(S)) is adapted if and only if ϕ_f,t,n(X) is _t+1/n^ X-measurable for all t ∈ [0,1], f∈ F and n ∈. Then Lemma <ref> yields the claim.Assume that X is adapted. 
As g_t,n(s)=0 for s >t+1/n, ϕ_f,t,n(X) only depends on X|_[0,t+1/n], so ϕ_f,t,n(X) is _t+1/n^ X-measurable. Conversely, assume that ϕ_f,s,n(X) is _t+1/n^ X-measurable for all s ∈ [0,1], f∈ F and n ∈.Fix t ∈ [0,1]. By the right-continuity of the paths of X and as g_t,n(s)>0 for s<t+1/n, the assumption implies that f(X_t) is _t+1/n^ X-measurable for all n ∈ and all f ∈ F.By the right-continuity of the filtration, f(X_t) is _t^ X-measurable for all f ∈ F. As F is convergence determining, the Borel-σ-algebra on S is the initial σ-algebra w.r.t. F. Hence, we conclude that X_tis _t^ X-measurable. The most important application of Proposition <ref> isto extend the compactness result Theorem <ref> from filtered random variables to filtered processes. Let n ∈_0 ∪{∞}. * Let C(S) be equipped with the supremum distance. Then ⊂(S) is relatively compact w.r.t. _n if and only if {(X) :X ∈} is relatively compact in (C(S)).* Let D(S) be either equipped with the J_1-topology or with the Meyer–Zheng topology. Then 𝒜⊂(S) is relatively compact w.r.t. _n if and only if {(X) :X ∈} is relatively compact in (D(S)).This is an immediate consequence of Theorem <ref> and Proposition <ref>. The strict inclusion (S) ⊂(D(S)) suggests that the concept of filtered random variables is more general than the concept of filtered processes. However, it turns out that these concept are equally general in a specific sense. One can “simulate” a filtered random variable by afiltered process that is constant on [0,1) and equal to the given random variable at t=1. To make this precise, fix s_0 ∈ S and define ι : S → D(S) by ι(s)(t) = s_0 if t<1 and ι(s)(1)=s. Note that ι : S → D(S) is a topological embedding with closed range if D(S) is equipped withMeyer–Zheng (or J_1) topology.For every n ∈_0 ∪{∞}, the mapι : ((S),_n) → ((S),_n) : (Ω,,, (_t)_t ∈ [0,1], X) ↦ (Ω,,, (_t)_t ∈ [0,1], ι(X))is a topological embedding with closed range. Note that ι( X) is adapted because ι(X)_t is constant and hence _t^ X-measurable for t<1 and ι(X)_1=X is _1^ X-measurable. The remaining claims follow by applying Proposition <ref> to ι. In Section <ref> we proved the characterisation of adapted distributions in the framework of filtered random variables. We translate this result into the framework of filtered processes as introduced in Sections <ref> and <ref>. Recall in particular the definition of 𝖬_∞ given in (<ref>). Before we prove Theorem <ref> stated in the introduction, we need the following lemma, which states that the definitions of “consistently terminating martingale measure” given in the introduction and in Section <ref> are consistent with each other:Let μ∈(𝖬_∞) be a martingale measure. Then the following are equivalent: * μ is consistently terminating in the sense of Definition <ref>(i.e. Z^n_1 = δ_Z^n-1 for alln ∈) and e_t_# Z^1_t = δ_Z^0_t for all t ∈ [0,1].* μ is consistently terminating in the sense of the introduction, i.e. e_t_# Z^n_t = δ_Z^n-1_t for all n ∈ and all t ∈ [0,1].Note that the extra condition appearing in (i) is precisely the adaptedness condition, cf. (<ref>). It suffices to prove that (i) implies (ii) as the reverse implication is trivial. To that end, let n>1. As Z is a martingale under μ, condition (i) yields Z^n_t = 𝖤[Z^n_1 | Z_t] = 𝖤[δ_Z^n-1 | Z_t]. Hence,e_t_# Z^n_t = e_t_#𝖤[δ_Z^n-1|Z_t] =𝖤[e_t_#δ_Z^n-1|Z_t] =𝖤[ δ_Z^n-1_t|Z_t] =δ_ Z_t^n-1,where the last equality follows because Z^n-1_t is Z_t=(Z^1_t, …, Z^n-1_t, … )-measurable.Let μ∈(𝖬_∞) and assumethat there exists a filtered process X ∈ such that μ = (^∞( X)). 
By Theorem <ref>, μ is a martingale measure and Z^n_1 = δ_Z^n-1 under μ. As X is adapted, we have e_t_# Z^1_t = δ_Z^0_t for all t ∈ [0,1] by (<ref>). Hence, Lemma <ref> yields e_t_# Z^n_t = δ_Z^n-1_t μ-a.s. for all n ∈ and t ∈ [0,1]. Conversely, assume that μ is a martingale measure and e_t_# Z^n_t = δ_Z^n-1_t for all n ∈ and all t ∈ [0,1]. By Proposition <ref> there is an X ∈(D(S)) such that μ = (^∞( X)) and ^∞( X) = Z a.s. As we have e_t_# Z^1_t = δ_Z^0_t under μ for all t ∈ [0,1], we have (X_t |_t^ X) = δ_X_t, so X is adapted, cf. (<ref>). Hence, X ∈(S).

§.§ Discrete time

The aim of this section is to reconcile the notions of filtered random variable and filtered process in discrete time. All results in this section can be proven analogously to the continuous-time case (or obtained as a corollary by considering piecewise constant processes), so we omit the proofs. Throughout this section N ∈ always denotes the number of time steps. Recall that an S-valued filtered process is a 5-tuple X = (Ω,,,(_t)_t=1^N, (X_t)_t=1^N), where (X_t)_t=1^N is an S-valued process (i.e. X_t takes values in S for all t) that is adapted to (_t)_t=1^N; whereas an S-valued filtered random variable is a 5-tuple X = (Ω,,,(_t)_t=1^N, X), where X is an S-valued _N-measurable random variable. Clearly, every S-valued filtered process is also an S^N-valued filtered random variable, i.e. _N(S) ⊂_N(S^N). If N>1, this inclusion is strict: for X ∈(S^N) we only require that X is _N-measurable, but impose no further adaptedness conditions. Conversely, every S-valued random variable with a filtration can be seen as an S-valued filtered process. To that end, fix some s_0 ∈ S and set ι(s) := (s_0, …, s_0,s). Consider the mapping ι : _N(S) →_N(S) : (Ω,,,(_t)_t=1^N,X) ↦(Ω,,,(_t)_t=1^N,ι(X)). Note that ι(X)_t is constant for t<N and hence _t-measurable, so ι( X) is indeed adapted to (_t)_t=1^N. In <cit.> the space of filtered processes _N(S) was defined as the factor space _N(S) modulo Hoover–Keisler equivalence and equipped with the Hoover–Keisler topologies _n. One can perform the very same constructions for _N(S) to define the factor space _N(S) := _N(S)/_≈_∞ and Hoover–Keisler topologies _n on it.

The following statements hold true: * (S) is a closed subset of _N(S^N) w.r.t. the topology _n for all n ∈∪{∞}. * For every n ∈_0 ∪{∞}, the map ι : (_N(S), _n) → (_N(S), _n) : (Ω,,,(_t)_t=1^N,X) ↦(Ω,,,(_t)_t=1^N,ι(X)) is a topological embedding with closed range. Using this result, one can translate topological statements about _N into statements about _N and vice versa. In particular, we have: A set ⊂_N(S) is relatively compact w.r.t. _n if and only if {(X) : X ∈} is relatively compact in (S). This is an immediate consequence of Proposition <ref> and <cit.>. Let X^m,X ∈(S). Let T ⊂( X) and denote the cardinality of T by N. If X^m → X in ((S),_n), then ι(D^T( X^m)) →ι(D^T( X)) in (_N(D(S)),_n). This is an immediate consequence of Proposition <ref> and Theorem <ref>.

§ OMITTED PROOFS

(a) For the direct implication let (S',) be a Polish space and ι : S → S' a topological embedding such that ι(S) ⊂ S' is Borel. By <cit.> there is a stronger Polish topology ' on S' such that ι(S) is clopen in (S','). Then ι^-1 : (ι(S),'|_ι(S)) → (S,|_S) is a continuous bijection. Conversely, let (S,) be a metrizable space such that there is a stronger Polish topology ' on S. As (S,') is separable, (S,) is separable as well, so there is an embedding ι : (S,') → [0,1]^ (see <cit.>).
Then ι : (S,') → [0,1]^ is a continuous injection from a Polish space to a Polish space, so it maps Borel sets to Borel sets (cf. <cit.>). In particular, ι(S) is a Borel subset of [0,1]^. (b) is an easy modification of (a).

(a) The collection of f ∈ C_b(S) that depend only on finitely many coordinates is convergence determining for the product topology on S and closed under multiplication. Hence, Lemma <ref> yields that testing against these functions is convergence determining on (S). (b) Note that we have to be a bit careful because compactness of a set of measures does not imply tightness in Lusin spaces (see Section <ref>). Let (μ^m)_m be a sequence in . As _1_#() is relatively compact, there are a subsequence (μ^m_ℓ)_ℓ and some ν^1 ∈(S_1) such that _1_#(μ^m_ℓ) →ν^1. By choosing further subsequences inductively in n and then passing to a diagonal sequence, we find a subsequence (μ^m_k)_k and measures ν^n ∈(S^n) such that for all n ∈ we have _n_#(μ^m_k) →ν^n. As convergent sequences are tight (see Theorem <ref>(b)), we find for all n ∈ a compact set K_n ⊂ S_n such that (_n)_#(μ^m_k)(K_n^c) ≤ϵ 2^-n for all k ∈. The set K := ∏_n ∈ K_n is compact by Tychonoff's theorem and we have for all k ∈ μ^m_k(K^c) ≤∑_n ∈_n_#μ^m_k(K_n^c) ≤∑_n ∈ϵ 2^-n = ϵ, so the sequence (μ^m_k)_k is tight. By Prohorov's Theorem <ref>(a), a further subsequence converges to some μ∈(S). Hence, is relatively compact. (b) This is an easy consequence of (a).

As f is increasing, (f) is co-countable and (f) ∪{1 } is λ-full. Therefore, it remains to show that convergence in probability implies pointwise convergence for all t ∈(f) ∪{ 1 }. Since λ({1})>0, we have f_n(1) → f(1); thus f_n(t) ≤ f_n(1) ≤ sup_k f_k(1) < ∞ for all n ∈ and t ∈ [0,1]. Assume that there is some t_0 ∈ (0,1) such that (f_n(t_0))_n ∈ is not bounded from below. Then there is a subsequence f_n_k such that f_n_k(t_0) ≤ -k and hence f_n_k(t) ≤ -k for all t ∈ [0,t_0], which is a contradiction to f_n → f in measure. Fix some t_0 ∈(f). By the above considerations, (f_n(t_0))_n ∈ is a bounded real sequence. Therefore, it suffices to show that any convergent subsequence converges to f(t_0). Assume for the sake of contradiction that there is a subsequence f_n_k(t_0) → a > f(t_0). The case f_n_k(t_0) → a < f(t_0) can be treated similarly. Let ϵ := (a-f(t_0))/3. Then f_n_k(t_0) ≥ f(t_0) +2ϵ for sufficiently large k. Since f is continuous at t_0, there is some δ>0 such that t_0+δ≤ 1 and f(t) ≤ f(t_0) + ϵ for all t ∈ [t_0,t_0+δ]. By the monotonicity of f_n_k this implies f_n_k(t) - f(t) ≥ f_n_k(t_0) - f(t_0) - ϵ≥ϵ for t ∈ [t_0,t_0+δ]. We find λ( |f_n_k - f| ≥ϵ ) ≥λ([t_0,t_0+δ] ) >0 for sufficiently large k, contradicting f_n → f in measure.

Let S be a separable metric space and f : [0,1] → S such that for all t ∈ [0,1] the right limit lim_s ↘ t f(s) exists. Then f is continuous except for at most countably many points. Assume w.l.o.g. that S ⊂ [0,1]^, cf. Section <ref>. As a function f : [0,1] → [0,1]^ is continuous if and only if every component is continuous, it suffices to show the claim for f : [0,1] → [0,1]. We define the oscillation of f at t by _f(t) := lim sup_δ→ 0 sup_t_1,t_2 ∈ (t-δ,t+δ) ∩ [0,1] |f(t_1)-f(t_2)|. Observe that f is continuous at t if and only if _f(t) = 0. In order to show that f is continuous except for countably many points, it suffices to show that A_n := { t ∈ [0,1] : _f(t) ≥ 1/n} is countable for all n ∈.
We observe that ∀ t ∈ [0,1] ∃ϵ >0 : A_n ∩ (t,t+ϵ) = ∅. Otherwise, there would be a sequence (t_k)_k strictly decreasing to t such that |f(t_k)-f(t_k+1)| ≥ 1/(2n) for all k ∈, which contradicts the existence of the right limit at t. To see that this already implies that A_n is countable, consider the set B_n = { t ∈ [0,1] : A_n ∩ [0,t] is countable}. Clearly, 0 ∈ B_n. If s ≤ s' and s' ∈ B_n, then s ∈ B_n. So B_n is of the form [0,s) or [0,s] for some s ∈ [0,1]. Indeed, if A_n ∩ [0,s) is countable, then A_n ∩ [0,s] is countable as well, so B_n = [0,s] for some s ∈ [0,1]. Assume for the sake of contradiction that s<1. By (<ref>) there is ϵ >0 such that A_n∩ (s,s+ϵ) = ∅, so A_n ∩ [0,s+ϵ] is countable as well, i.e. s +ϵ∈ B_n. This contradicts B_n = [0,s], so s = 1 and A_n is countable.

Let S_1 be separable metrizable and S_2 be Polish. If there exists a continuous map f : S_1 → S_2 such that the preimages of compact sets are compact, then S_1 is Polish. Let d_1 and d_2 denote metrics compatible with the topologies on S_1 and S_2, respectively, and let d_2 be complete. We define a metric on S_1 by d(s_1,s_1') := d_1(s_1,s_1') + d_2(f(s_1),f(s_1')). As f is continuous from (S_1,d_1) to (S_2,d_2), the metrics d and d_1 induce the same topology on S_1. It remains to show that d is a complete metric. To that end, let (s_1^k)_k ∈ be a d-Cauchy sequence. The definition of d implies that (f(s_1^k))_k ∈ is d_2-Cauchy as well. As d_2 is complete, (f(s_1^k))_k ∈ is convergent in S_2 and hence relatively compact. Since f-preimages of compact sets are compact, (s_1^k)_k ∈ is relatively compact in S_1, i.e. it has a limit point in S_1. As (s_1^k)_k ∈ is d-Cauchy, it can have at most one limit point, so it converges.
RecRanker: Instruction Tuning Large Language Model as Ranker for Top-k Recommendation

Sichun Luo^1, Bowei He^1, Haohan Zhao^1, Yinya Huang^1, Aojun Zhou^2, Zongpeng Li^3, Yuanzhang Xiao^4, Mingjie Zhan^2, Linqi Song^1 (corresponding author)
^1City University of Hong Kong  ^2The Chinese University of Hong Kong  ^3Hangdian University  ^4University of Hawaii
{sichun.luo,boweihe2-c,haohazhao2-c}@my.cityu.edu.hk, {yinya.el.huang,aojunzhou,zmjdll}@gmail.com, zongpeng@hdu.edu.cn, yxiao8@hawaii.edu, linqi.song@cityu.edu.hk

January 14, 2024

Large language models (LLMs) have demonstrated remarkable capabilities and have been extensively deployed across various domains, including recommender systems. Numerous studies have employed specialized prompts to harness the in-context learning capabilities intrinsic to LLMs. For example, LLMs are prompted to act as zero-shot rankers for listwise ranking, evaluating candidate items generated by a retrieval model for recommendation. Recent research further uses the instruction tuning technique to align LLMs with human preferences for more promising recommendations. Despite its potential, current research overlooks the integration of multiple ranking tasks to enhance model performance. Moreover, the signal from conventional recommendation models is not integrated into the LLM, limiting current system performance. In this paper, we introduce RecRanker, tailored for instruction tuning LLMs to serve as the Ranker for top-k Recommendations. Specifically, we introduce importance-aware sampling, clustering-based sampling, and a penalty for repetitive sampling to obtain high-quality, representative, and diverse users as training data. To enhance the prompt, we introduce a position shifting strategy to mitigate position bias and augment the prompt with auxiliary information from conventional recommendation models, thereby enriching the contextual understanding of the LLM. Subsequently, we utilize the sampled data to assemble an instruction-tuning dataset with the augmented prompts comprising three distinct ranking tasks: pointwise, pairwise, and listwise ranking. We further propose a hybrid ranking method to enhance model performance by ensembling these ranking tasks. Our empirical evaluations demonstrate the effectiveness of the proposed RecRanker in both direct and sequential recommendation scenarios.

Recommender system, Large language model, Instruction tuning

§ INTRODUCTION

Recommender systems serve as information filtering techniques designed to mitigate the problem of information overload <cit.>. Among the various scenarios within recommender systems, the top-k recommendation paradigm is particularly noteworthy, as it provides users with a list of the top k items most relevant to their preferences <cit.>. Top-k recommendation encompasses diverse tasks, including, but not limited to, collaborative filtering-based direct recommendation and sequential recommendation. On the one hand, direct recommendation is studied by prominent methodologies including NCF <cit.>, NGCF <cit.>, and LightGCN <cit.>.
These techniques harness collaborative information via neural networks. On the other hand, for sequential recommendation, representative methods like SASRec <cit.> and BERT4Rec <cit.> utilize the attention mechanism <cit.> to model user sequences.

In recent years, large language models (LLMs) <cit.> have exhibited significant prowess in natural language understanding <cit.>, generation <cit.>, and complex reasoning <cit.>. Consequently, they have been increasingly integrated into a multitude of domains, including recommender systems <cit.>. A typical example of LLMs in this context is to function as a ranker for a pre-filtered set of recommendations. This preference for LLMs as rankers arises primarily from the inherent limitations of LLMs, including their constrained context size and the potentially high computational cost of processing vast pools of candidate items. Therefore, a retrieval model is often employed to narrow down the candidate set, upon which the LLM utilizes its contextual understanding and reasoning capabilities to generate a ranked list of recommendations. For example, Hou et al. <cit.> operate an LLM as a zero-shot ranker for sequential recommendation by formalizing recommendation as a conditional ranking task based on sequential interaction histories. By employing carefully designed prompting templates and conducting experiments on standard datasets, they show that LLMs exhibit promising zero-shot ranking capabilities that can outperform traditional models. Similar endeavors are undertaken by <cit.>, which likewise leverage the in-context learning abilities of LLMs. However, these methods possess certain limitations: the standard, general-purpose LLM does not inherently align with recommendation objectives. To remedy this, Zhang et al. <cit.> suggest employing instruction tuning to better align the LLM with specific recommendation tasks. They express user preferences as natural language instructions, tuning the LLM to deliver more precise and user-centric recommendations. This approach outperforms traditional models and even GPT-3.5 in evaluations.

Nonetheless, current research has not provided a thorough study of the ranking task; that is, most studies deploy LLMs for a single ranking task, neglecting to explore the potential benefits of combining multiple ranking tasks for improved results. Furthermore, prevailing approaches rely exclusively on the textual information of users and items for LLM processing and reasoning. This omission of signals from conventional recommendation models may limit the effectiveness of existing methodologies.

To address this shortfall, we introduce instruction tuning large language model as Ranker for top-k Recommendation, referred to as RecRanker. Specifically, we propose an adaptive user sampling method to garner high-quality users, giving priority to users with a substantial history of interactions or who are representative of the broader user base, recognizing their heightened significance in the dataset. To enhance the prompt, we propose a position shifting strategy to mitigate position bias. In accordance with the concept of self-consistency in LLMs <cit.>, we posit that the answer that receives consensus among most replies is more likely to be accurate.
We also incorporate signals from conventional recommendation models into prompts to augment LLM reasoning, as these signals can harness information from broader perspectives. The signals are seamlessly incorporated into the prompt using natural language descriptions in a uniform format. Subsequently, we curate an instruction-tuning dataset with enhanced prompts comprising three distinct ranking tasks, namely pointwise, pairwise, and listwise ranking. The instruction-tuning dataset is adopted to fine-tune an open-source LLM, resulting in a refined model that is well aligned with the objectives of recommendation. Furthermore, we introduce a hybrid ranking approach that amalgamates all three ranking methods to bolster model performance. Experiments conducted on three real-world datasets validate the effectiveness of the proposed RecRanker. In a nutshell, our contribution is fourfold.

* We introduce RecRanker, a compact framework that applies instruction-tuned LLMs for diverse ranking tasks in top-k recommendations. In addition, we propose a hybrid ranking method that ensembles various ranking tasks, aiming to further improve model performance.

* RecRanker employs adaptive user sampling to select high-quality users, thereby facilitating the construction of the instruction-tuning dataset. Furthermore, we propose a position shifting strategy within the prompt to mitigate position bias in LLMs.

* Our approach incorporates information from conventional recommender systems into the instructions, enabling the LLM to synergistically leverage signals from both the conventional recommender system and textual information for better contextual understanding and user preference reasoning.

* We conducted extensive experiments on three real-world datasets to validate the effectiveness of the proposed RecRanker. Impressively, RecRanker outperforms backbone models in most cases by a large margin, demonstrating its significant superiority.

§ RELATED WORK

§.§ Top-k Recommendation

Top-k recommendation <cit.> has emerged as a burgeoning research field, aiming to suggest a list of k items that are most likely to align with a user's preferences. Two predominant categories of algorithms for top-k recommendation are collaborative filtering-based direct recommendation and sequential recommendation. For direct recommendation, memory-based approaches such as user-based and item-based collaborative filtering are employed <cit.>. These algorithms leverage the historical interactions between users and items to compute similarity scores and then generate recommendations. Advanced methods, including Neural Collaborative Filtering (NCF) <cit.> and Neural Graph Collaborative Filtering (NGCF) <cit.>, have been developed to better model collaborative user behavior and infer user preferences with more complex model structures. In contrast, sequential recommendation focuses on capturing the dynamic behavior of users. Techniques like the Gated Recurrent Unit for Recommendation (GRU4Rec) <cit.>, Self-Attention-based Sequential Recommendation (SASRec) <cit.>, and the more recent transformer-based BERT4Rec <cit.> utilize the sequential nature of user interactions to predict the forthcoming items of interest to users. Though conventional algorithms achieve promising results in top-k recommendation, they still lack the ability to understand the content of the items. To address this issue, this paper proposes to facilitate recommender systems by leveraging the contextual understanding and reasoning capabilities of LLMs.
§.§ LLMs for Recommendation

Recently, LLMs have demonstrated remarkable capabilities and have found extensive applications across various domains, including recommender systems <cit.>. Some recent works utilize LLMs for data augmentation <cit.> or representation learning <cit.> in recommendation. Notably, one strand of research leverages LLMs as rankers for recommender systems <cit.>. This approach is necessitated by the limitation of LLMs' fixed window size, which prevents the direct input of an exhaustive set of candidate items. Consequently, a retrieval model is commonly employed to refine and reduce the candidate item set. Specifically, Wang et al. <cit.> investigated the in-context learning ability of LLMs with designed task-specific prompts to facilitate ranking tasks in sequential recommendation. However, the misalignment between general-purpose LLMs and specialized recommendation tasks constrains model performance. To address this limitation, InstructRec <cit.> instruction-tunes LLMs using a specially constructed dataset of natural language instructions. However, existing research has yet to fully exploit the ranking capabilities of LLMs; it has primarily focused on single ranking tasks, thereby leaving the ensemble of ranking tasks for improved performance largely unexplored. To bridge this gap, we conduct a systematic investigation into the application of instruction-tuned LLMs for a variety of ranking tasks, including pointwise, pairwise, listwise, and their hybrid approaches, with the objective of fully elucidating the potential of LLMs in top-k recommendation scenarios.

§ PRELIMINARIES

We consider a recommender system with a set of users, denoted 𝒰 = { u_1, u_2, …, u_n }, and a set of items, denoted ℐ = { i_1, i_2, …, i_m }. Top-k recommendation focuses on identifying a subset of items 𝒮_u ⊂ℐ for each user u ∈𝒰. The subset is chosen to maximize a user-specific utility U(u,𝒮) subject to the constraint |𝒮| = k, which is formally expressed as

𝒮_u = arg max_𝒮⊂ℐ, |𝒮| = k U(u, 𝒮).

In the context of LLM-based recommendation methods, let ℒ represent the original LLM. These kinds of methods first utilize prompts to translate the recommendation task for user u into natural language. Given a prompt 𝒫_u, the LLM-based recommendation for user u with in-context learning is denoted by ℛ = ℒ(𝒫_u). To fine-tune our LLM using instruction-based approaches, we utilize a dedicated dataset, 𝒟_ins. The resulting instruction-tuned LLM is represented as ℒ'. Therefore, the recommendation process in the fine-tuned model can be succinctly represented as ℛ = ℒ'(𝒫_u).

§ METHODOLOGY

§.§ Overview

The overall training and inference pipelines are depicted in Fig. <ref> and Fig. <ref>, respectively. The training phase consists of four main stages: adaptive user sampling, candidate item selection via negative sampling, prompt construction, and instruction tuning. The adaptive user sampling stage aims to procure high-quality, representative, and diverse users. It incorporates three sampling strategies: importance-aware sampling, clustering-based sampling, and a penalty for repetitive sampling. For each user sampled, the candidate items consist of items liked and disliked by the user, as well as some un-interacted items selected via a commonly used negative sampling method <cit.>. Given the users sampled and items selected, we construct prompts for each ranking task, augmenting them with signals from conventional recommender models. This strategy synergizes the strengths of both conventional recommendation systems and textual data, thereby enhancing the system's overall performance. Finally, we use the constructed data to fine-tune LLMs via instruction tuning.

During the inference phase, for a user in the test data, we first select candidate items through a retrieval model. This item selection process differs from the training phase, where negative sampling is used. Subsequently, the prompt is constructed, following the approach in the training phase. After that, the instruction-tuned LLM performs a variety of ranking tasks. Notably, a hybrid ranking method, which is achieved through the ensemble of multiple ranking tasks, is employed in this stage to enhance model performance. A minimal end-to-end sketch of this flow is given below.
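The following is a self-contained sketch of the inference flow, with toy stand-ins for the retrieval model and the tuned LLM; all class and function names are illustrative placeholders of ours, not the released implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class ToyRetrievalModel:
    """Stand-in for a conventional retriever (e.g., LightGCN/SASRec)."""
    def top_items(self, user, k_prime, n_items=50):
        scores = rng.random(n_items)              # utility score per item
        return np.argsort(-scores)[:k_prime].tolist()

class ToyTunedLLM:
    """Stand-in for the instruction-tuned LLM ranker."""
    def rank(self, prompt, candidates):
        return list(candidates)                   # a real LLM would re-rank

def recommend(user, retrieval, llm, k=10, k_prime=20):
    # 1) Retrieval narrows the item pool to k' > k candidates.
    candidates = retrieval.top_items(user, k_prime)
    # 2) One prompt per ranking task (pointwise/pairwise/listwise).
    rankings = {}
    for task in ("pointwise", "pairwise", "listwise"):
        prompt = f"[{task} ranking prompt for user {user}: {candidates}]"
        rankings[task] = llm.rank(prompt, candidates)
    # 3) Hybrid ranking ensembles the three outputs (see the section
    #    "Hybrid Ranking"); here we simply return the listwise result.
    return rankings["listwise"][:k]

print(recommend(user=7, retrieval=ToyRetrievalModel(), llm=ToyTunedLLM()))
```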
§.§ Adaptive User Sampling

We first describe how we sample the raw recommendation dataset to create a list of users to be included in the fine-tuning dataset 𝒟_ins. We do not use the original user set 𝒰 directly, because we prefer to generate a list of users with improved distribution and diversity. We denote such a list of users by a multiset 𝒰_ins. A multiset is a modified set that allows for multiple instances of the same element <cit.>. Formally, a multiset is defined by a tuple 𝒰_ins = (U_ins, M_ins), where U_ins is the underlying set of the multiset, consisting of its distinct elements, and M_ins : U_ins →ℤ^+ is the multiplicity function, giving the number of occurrences of element u ∈ U_ins as M_ins(u). Therefore, the multiplicity M_ins(u) of user u will be the number of prompts regarding user u in the instruction-tuning dataset 𝒟_ins.

Some works sample users with equal probabilities from the user set 𝒰 <cit.>, while other works sample the nearest interactions <cit.>. However, these methods could be sub-optimal, since recommendation datasets often follow a long-tail distribution. To compile a high-quality, representative, and diverse dataset, we introduce three strategies: importance-aware sampling, clustering-based sampling, and a penalty for repetitive sampling. Specifically, we utilize importance-aware sampling and clustering-based sampling to create two multisets of candidate users, denoted by 𝒰_1 and 𝒰_2. Then, from the combined multiset 𝒰_3 = 𝒰_1 + 𝒰_2, whose multiplicity function is M_3 = M_1 + M_2, we apply a penalty for repetitive sampling to select the final multiset 𝒰_ins.

§.§.§ Importance-aware Sampling

Data in recommendation scenarios often exhibit a long-tail distribution, where a large number of items or users have minimal interactions, and a few have a large number of interactions <cit.>. To optimize the quality of the data for building effective recommendation models, we propose an importance-aware sampling strategy. This strategy prioritizes sampling from users with more interactions, based on the premise that users with a higher number of interactions provide more reliable and consistent data, which is crucial for modeling user preferences accurately. We define the importance of a user by the natural logarithm of their interaction count: the importance w_u of user u is defined as w_u = ln(q_u), where q_u denotes the number of interactions for user u. The logarithmic scale is deliberately chosen to moderate the influence of users with extremely high interaction counts, ensuring that while they are given priority, they do not dominate the entire dataset. The probability of selecting user u is proportional to the importance w_u.
This ensures that users with more interactions have a higher chance of being sampled, while still allowing for representation across the entire user base. In importance-aware sampling, the probability of sampling user u is

p_u,importance = w_u/∑_v ∈𝒰 w_v,

where the denominator is the sum of the importance across all users, serving as a normalizing factor so that the probabilities sum up to 1.

Importance-aware sampling, as a superior alternative to uniform sampling, offers several advantages. First, it improves data quality by prioritizing users who exhibit a higher volume of interactions, thereby generating a dataset with richer and more consistent patterns. Second, this strategy equitably balances highly active and less active users by incorporating logarithmic scaling, thereby ensuring that less active users are not underrepresented.

§.§.§ Clustering-based Sampling

To obtain representative users, we also employ a clustering-based sampling strategy. This strategy is grounded in the understanding that users in recommendation systems exhibit diverse interests. By clustering users in the latent space, we can categorize them into distinct groups, each representing a unique set of interests. Such clustering enables us to capture the multifaceted nature of user preferences, ensuring that our sampling is not only representative but also encompasses the broad spectrum of user behaviors and tendencies. Our framework allows for any clustering method, such as K-means <cit.> or Mean Shift <cit.>. In this paper, we choose K-means due to its effectiveness and simplicity in grouping data into cohesive clusters. We first represent each user as an embedding vector derived from the retrieval model, and then cluster the users into K groups based on the embedding vectors. We denote user u's cluster by k_u ∈{1,…,K}. Once the users are clustered, we select samples from each cluster. This selection is not uniform but proportional to the size of each cluster. Mathematically, the sampling probability of user u in clustering-based sampling satisfies

p_u,clustering ∝ |{v ∈𝒰 : k_v = k_u}|,

where |{v ∈𝒰 : k_v = k_u}| is the number of users in the same cluster as user u. This strategy not only preserves the diversity within each cluster but also ensures that larger clusters, which potentially represent more prevalent interests, have a proportionally larger representation in the final sample.

§.§.§ Penalty for Repetitive Sampling

Given the two multisets 𝒰_1 and 𝒰_2 resulting from importance-aware and clustering-based sampling, we need to construct the final user list 𝒰_ins from their sum 𝒰_3 = 𝒰_1 + 𝒰_2, where the multiplicity function is M_3 = M_1 + M_2. To enhance diversity in the final multiset 𝒰_ins, we implement a penalty for repetitive selections. The rationale behind this strategy is to mitigate the overrepresentation of certain “advantage groups”: users or items that might dominate the dataset due to their high frequency or popularity <cit.>. To achieve this, we assign a penalty weight for each repeated selection within our sampling process. The penalty weight for a user u ∈𝒰_3 is quantitatively expressed as ψ_u = C^M_3(u), where 0 < C < 1 is a predefined constant. Thus, the penalty weight is decreasing in the number of occurrences M_3(u). This penalty weight directly influences the probability of a user being selected for the final dataset. To be specific, the probability of selecting user u is

p_u,penalty = ψ_u/∑_v ∈𝒰_3 ψ_v,

which ensures that users with higher occurrence counts are less likely to be chosen repeatedly.

This penalty for repetitiveness serves a dual purpose. Firstly, it significantly enhances the diversity of the sample by reducing the likelihood of repeatedly selecting the same users. Secondly, it ensures a more equitable representation of less frequent users, providing a more holistic view of user interests and preferences. In this way, by integrating this penalty mechanism into our sampling process, we achieve diversity and balanced representation in the final user list 𝒰_ins. The three strategies are illustrated together in the sketch below.
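The three strategies can be combined as in the following self-contained sketch (numpy); the interaction counts and cluster labels are toy data of ours, and in practice the labels k_u would come from K-means on user embeddings produced by the retrieval model.

```python
import numpy as np

rng = np.random.default_rng(0)
q = np.array([120, 45, 8, 300, 15, 60])    # toy interaction counts q_u

# Importance-aware sampling: p_u proportional to w_u = ln(q_u).
w = np.log(q)
U1 = rng.choice(len(q), size=8, p=w / w.sum())

# Clustering-based sampling: p_u proportional to the size of u's cluster
# (labels hard-coded here; normally from K-means on user embeddings).
k_u = np.array([0, 0, 1, 0, 1, 2])
cluster_size = np.bincount(k_u)[k_u]
U2 = rng.choice(len(q), size=8, p=cluster_size / cluster_size.sum())

# Penalty for repetitive sampling over U3 = U1 + U2:
# psi_u = C**M3(u) with 0 < C < 1 down-weights frequently drawn users.
C = 0.5
M3 = np.bincount(np.concatenate([U1, U2]), minlength=len(q))
psi = np.where(M3 > 0, C ** M3, 0.0)       # only users present in U3
U_ins = rng.choice(len(q), size=10, p=psi / psi.sum())
print(sorted(U_ins.tolist()))
```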
§.§ Candidate Items Selection

The selection of candidate items differs between the training and inference phases. During training, negative sampling is utilized to select a mixture of items with which users have not interacted, together with a random assortment of items that users have liked or disliked, forming the set of candidate items. In the inference phase, by contrast, a retrieval model is employed to generate the entire set of candidate items.

§.§.§ Selection via Negative Sampling in the Training Phase

In the training phase, the candidate item set includes randomly chosen items that users have liked and disliked. In addition, we employ the widely used negative sampling technique <cit.>, which involves randomly incorporating items with which users have not interacted into the candidate item set. These un-interacted items are considered negative samples. It is presumed that un-interacted items are more likely to be preferred over items that users have explicitly disliked. Based on these selections, we establish the relative ranking comparisons for the construction of the instruction-tuning dataset.

§.§.§ Selection via Retrieval Model in the Inference Phase

In the realm of industrial recommender systems, platforms like YouTube[https://www.youtube.com/] often adopt a two-step process, initially utilizing a retrieval model to select a preliminary set of candidate items, which are subsequently re-ranked for the final recommendations <cit.>. Specifically, within LLM-based recommendation systems, the retrieval model plays a crucial role as a primary filter, effectively narrowing the scope of potential recommendations. This is particularly important due to the intrinsic limitation in the window size of LLMs. The architecture of the retrieval model is tailored to suit the nature of the recommendation task at hand. For direct recommendation, models such as NCF <cit.>, NGCF <cit.>, and LightGCN <cit.> are often employed. For sequential recommendation tasks, where the order of interactions is significant, models like SASRec <cit.> and BERT4Rec <cit.> are typically favored. In the procedure of candidate item selection in the inference phase, we employ the retrieval model to compute a utility score for each item. Subsequently, we rank all the items based on their utility scores and select the top k' items with the highest scores as the candidate items. For top-k recommendations, this process samples k' items with k' > k. Both selection procedures are sketched below.
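Both selection procedures admit a short sketch; the interaction lists are toy data, and the inference-phase scores stand in for a trained retrieval model's utilities.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items = 100
liked, disliked = [3, 17, 42], [8, 55]           # toy interaction data
interacted = set(liked) | set(disliked)

# Training phase: liked/disliked items plus negative samples, i.e.
# randomly drawn un-interacted items (assumed preference order:
# liked > un-interacted > disliked).
pool = [i for i in range(n_items) if i not in interacted]
negatives = rng.choice(pool, size=5, replace=False).tolist()
train_candidates = liked + disliked + negatives

# Inference phase: the retrieval model scores every item and the top-k'
# items (k' > k) are kept as candidates for the LLM ranker.
utility = rng.random(n_items)                    # stand-in for the model
k_prime = 20
inference_candidates = np.argsort(-utility)[:k_prime]
print(train_candidates, inference_candidates[:5])
```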
§.§ Prompt Construction
In this section, we describe the construction of prompts. We begin by introducing a variety of ranking tasks, followed by a discussion of our proposed prompt enhancement method, which augments prompts with signals from a conventional recommendation model.

§.§.§ Pointwise, Pairwise, and Listwise Ranking
Our recommendation system incorporates a multifaceted approach to ranking tasks, encompassing pointwise, pairwise, and listwise rankings. Each of these methods plays a distinct role in evaluating and ordering candidate items based on their relevance to user preferences. As demonstrated in Table <ref>, in the pointwise ranking approach, each candidate item is assigned an individual relevance score. The entire list of candidates is then sorted based on these scores, providing a straightforward, score-based ranking. The pairwise ranking method involves a direct comparison between two candidate items, determining which of the two is more relevant or preferable in a given context. Differing from the above two, listwise ranking evaluates and sorts an entire list of candidate items. It considers the collective relevance of items, offering a comprehensive ranking based on overall suitability.

§.§.§ Position Shifting in Prompt
Position bias in LLMs arises when these models disproportionately favor items due to their locations in a list, rather than their inherent relevance or quality <cit.>. This bias can significantly undermine the consistency and reliability of the model's output. To mitigate position bias, we adopt a position shifting strategy. During the training phase, we randomize the order of candidates and user preference items. This strategy is designed to prevent the model from prioritizing the item position over its actual significance. Similarly, in the inference phase, we continue this strategy by randomly altering the positions of the items. The primary objective of this strategy is to preserve those responses from LLMs that demonstrate consistency irrespective of item position. Consequently, the items identified are reflective of the model's true preferences, less influenced by position bias. By employing this method, we ensure that the LLMs' responses are founded on genuine relevance, thereby enhancing the overall trustworthiness of the inference process.

§.§.§ Prompt Enhancement
Existing LLM-based approaches often rely solely on LLMs for processing and ranking textual information. This reliance, however, neglects the rich and valuable signals that conventional recommendation models, like collaborative filtering, can offer. Models such as LightGCN <cit.> excel in extracting high-order collaborative signals, which play a pivotal role in understanding user preferences through the influences of user networks. The absence of this collaborative information could lead to less effective outcomes in LLM-based recommendations. To bridge this gap, we propose a prompt enhancement method that integrates signals from conventional recommendation models into the prompts used for ranking tasks. This integration allows us to leverage the strengths of both LLMs and traditional recommendation models, creating a more informed and context-rich basis for decision-making. Specifically, for pointwise ranking, we could utilize a rating prediction model like MF <cit.> to forecast individual scores. These predictions are then transformed into natural language descriptions and seamlessly integrated into the prompt, providing a more nuanced basis for item evaluation. For pairwise and listwise rankings, task-specific models such as LightGCN <cit.> and SASRec <cit.> are employed to predict rankings. In this paper, we adopt MF <cit.> and the LightGCN <cit.> model for prompt enhancement. The insights from these predictions are then incorporated into the prompts, enhancing the context and depth of the ranking process. By augmenting prompts with data from conventional recommendation models, our method significantly enriches the ranking tasks in recommendation systems. This innovative approach not only capitalizes on the advanced capabilities of LLMs but also harnesses the collaborative or sequential information offered by conventional recommendation models.
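A minimal sketch of how such predictions might be verbalized into the prompt is shown below; the template wording and the rating scale are illustrative assumptions, not the exact prompts used in the paper.

```python
def enhance_pointwise_prompt(base_prompt, item_title, mf_score, scale=5):
    """Verbalize an MF rating prediction and append it to a pointwise prompt."""
    hint = (f"A matrix factorization model predicts a rating of "
            f"{mf_score:.1f} out of {scale} for \"{item_title}\".")
    return f"{base_prompt}\n{hint}"

def enhance_listwise_prompt(base_prompt, ranked_titles):
    """Append a conventional model's predicted ordering to a listwise prompt."""
    order = ", ".join(f"\"{t}\"" for t in ranked_titles)
    hint = f"A conventional recommendation model ranks the candidates as: {order}."
    return f"{base_prompt}\n{hint}"
```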
§.§ Optimization via Instruction Tuning
After constructing the dataset, we focus on fine-tuning the LLM in a supervised manner, specifically through instruction tuning. This process involves optimizing the LLM using a dataset generated from instructional data, aligning the model responses more closely with user intents and preferences. The approach we adopt for supervised fine-tuning is grounded in the standard cross-entropy loss, following the principles outlined in Alpaca <cit.>. The core of this process lies in the training set 𝒟_ins, which is comprised of natural language instruction input-output pairs (x, y). This dataset is instrumental in guiding the fine-tuning process, ensuring that the model outputs are aligned with the structured instructional data. The primary objective in this phase is to fine-tune the pre-trained LLM ℒ by minimizing the cross-entropy loss. This is mathematically formalized as:
min_Θ ∑_(x, y) ∈𝒟_ins ∑_t=1^|y| -log P_Θ(y_t | x, y_[1:t-1]),
where Θ represents the model parameters, P_Θ denotes the conditional probability of generating the t-th token y_t in the target output y, given the input x and the preceding tokens y_[1:t-1], and |y| is the length of the target sequence y. By minimizing this loss function, the model parameters Θ are refined to better accommodate the nuances of the instruction tuning dataset 𝒟_ins. This fine-tuning leverages the LLM's pre-existing capabilities in general language understanding and reasoning, as acquired during its initial training phase. The result is a more sophisticated and nuanced model that can accurately capture and interpret user preferences expressed in natural language. Such an enhancement is crucial for the subsequent recommendation tasks, as it allows the LLM to provide recommendations that are more aligned with the user's expressed needs and preferences. This approach, therefore, significantly boosts the efficacy and relevance of the recommendation system, ensuring that it serves users with high accuracy and personalization.
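The objective above is the standard next-token cross-entropy restricted to the target tokens. A self-contained PyTorch sketch is given below; it assumes a Hugging Face-style causal language model that returns logits, and is a simplification of the actual training setup rather than the authors' code.

```python
import torch
import torch.nn.functional as F

def instruction_tuning_loss(model, input_ids, target_mask):
    """Cross-entropy on target tokens only: -sum_t log P(y_t | x, y_<t).

    input_ids:   (B, T) instruction tokens followed by target tokens.
    target_mask: (B, T) boolean, True where the token belongs to the target y.
    """
    logits = model(input_ids).logits           # (B, T, V), HF-style causal LM
    shift_logits = logits[:, :-1, :]           # position t predicts token t+1
    shift_labels = input_ids[:, 1:]
    shift_mask = target_mask[:, 1:].reshape(-1).float()
    loss = F.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        shift_labels.reshape(-1),
        reduction="none",
    )
    return (loss * shift_mask).sum() / shift_mask.sum()
```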
§.§ Hybrid Ranking
Inspired by self-consistency in LLMs <cit.>, the result agreed upon by most LLM responses has a higher probability of being correct. Recognizing that each ranking task (i.e., pointwise, pairwise, and listwise ranking) captures different facets of the recommendation problem, we propose a hybrid ranking method. This method aims to amalgamate the strengths of each individual task to achieve a more holistic and effective recommendation process. The hybrid ranking method operates by ensembling the outputs of the three distinct ranking tasks. Mathematically, this process can be expressed as:
𝒰 = α_1 𝒰_pointwise + α_2 𝒰_pairwise + α_3 𝒰_listwise,
where α_1, α_2, and α_3 are weighting coefficients that sum up to 1. Depending on the values of these coefficients, the hybrid ranking can effectively mimic any of the individual ranking methods, thus providing flexibility in the recommendation approach.
For the pointwise ranking task, the utility score 𝒰_pointwise is initially determined by the relevance score from the LLM prediction. To refine this score and differentiate between items with identical ratings, an additional utility score from the retrieval model is incorporated, denoted as 𝒰_retrieval = -m ·𝒞_1. Here, 𝒞_1 is a constant and m, representing the item's position as determined by the retrieval model, varies from 1 to k^' (the total number of candidate items). Therefore, the comprehensive utility score for the pointwise ranking task is 𝒰_pointwise = 𝒰_retrieval + ℒ(𝒫). In the pairwise ranking scenario, items preferred by the LLM are attributed a utility score 𝒰_pairwise = 𝒞_2, where 𝒞_2 is a constant. For listwise ranking, the formula 𝒰_listwise = -m' ·𝒞_3 is employed to score each item, where m' is the position predicted by the LLM, varying from 1 to k^', and 𝒞_3 is a constant. This formula assigns scores across the list of items, integrating the listwise perspective into the hybrid approach.
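Putting the three utility definitions together with equal weights, the per-item hybrid score can be computed as in the sketch below (our own illustration; the default constants anticipate the values 𝒞_1 = 0.05, 𝒞_2 = 0.5, and 𝒞_3 = 0.025 reported later in the implementation details).

```python
def hybrid_utility(retrieval_rank, llm_rating, pairwise_preferred, listwise_rank,
                   C1=0.05, C2=0.5, C3=0.025, alphas=(1/3, 1/3, 1/3)):
    """Combine pointwise, pairwise, and listwise utilities for one candidate item.

    retrieval_rank, listwise_rank: 1-based positions m and m'.
    llm_rating: pointwise relevance score L(P) from the LLM.
    pairwise_preferred: True if the LLM preferred this item in pairwise comparisons.
    """
    u_point = -retrieval_rank * C1 + llm_rating   # U_retrieval + L(P)
    u_pair = C2 if pairwise_preferred else 0.0    # U_pairwise
    u_list = -listwise_rank * C3                  # U_listwise
    a1, a2, a3 = alphas
    return a1 * u_point + a2 * u_pair + a3 * u_list
```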
§ EXPERIMENT
The primary goal is to investigate the extent to which integrating the introduced model can improve the performance of current recommendation systems. Therefore, we conduct comprehensive experiments to answer the following research questions:
* RQ1: Does our proposed RecRanker framework enhance the performance of existing recommendation models?
* RQ2: What impact do importance-aware sampling and prompt enhancement have on the quality of recommendation, respectively?
* RQ3: How do various hyper-parameters influence the overall performance of the framework?
* RQ4: How does the instruction-tuned model compare to other LLMs, such as GPT?

§.§ Experimental Setup
§.§.§ Dataset
Following <cit.>, we rigorously evaluate the performance of our proposed framework by employing three heterogeneous, real-world datasets. The MovieLens[<https://grouplens.org/datasets/movielens/>] <cit.> dataset is utilized as a standard benchmark in movie recommendation systems. We explore two subsets of this dataset: MovieLens-100K, containing 100,000 user-item ratings, and MovieLens-1M, which expands to approximately 1 million ratings. The BookCrossing[In the absence of timestamp data within the BookCrossing dataset, we have reconstructed historical interactions via random sampling.] <cit.> dataset comprises user-submitted book ratings on a 1 to 10 scale and includes metadata such as 'Book-Author' and 'Book-Title'. The key statistics of these datasets are detailed in Table <ref>.

§.§.§ Evaluation Metrics
In line with the methodologies adopted in prior works <cit.>, we employ two well-established metrics for evaluating the top-k recommendation task: Hit Ratio (HR) and Normalized Discounted Cumulative Gain (NDCG), denoted as H and N, respectively. Our experimental setup involves setting k to either 3 or 5, similar to the evaluation approach detailed in <cit.>, allowing for a comprehensive assessment.
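Since the evaluation protocol (described below) holds out a single relevant item per user, HR@k and NDCG@k reduce to simple closed forms; a minimal sketch:

```python
import numpy as np

def hr_at_k(ranked_items, true_item, k):
    """Hit Ratio: 1 if the held-out item appears in the top-k, else 0."""
    return float(true_item in list(ranked_items[:k]))

def ndcg_at_k(ranked_items, true_item, k):
    """NDCG with a single relevant item: 1/log2(rank + 1) if hit, else 0."""
    topk = list(ranked_items[:k])
    if true_item not in topk:
        return 0.0
    rank = topk.index(true_item) + 1           # 1-based position in the list
    return 1.0 / np.log2(rank + 1)
```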
§.§.§ Data Preprocessing
To assure data quality in our study, we implement the 10-core setting, which involves excluding users and items that have fewer than ten interactions from the BookCrossing dataset. The processed BookCrossing dataset, configured with this 10-core setting, comprises 1,820 users, 2,030 items, and 41,456 interactions, resulting in a density of 0.011220. We adopt the leave-one-out evaluation strategy, aligning with the methodologies employed in prior research <cit.>. Under this strategy, the most recent interaction of each user is assigned as the test instance, the penultimate interaction is used for validation, and all preceding interactions constitute the training set. Regarding the construction of the instruction-tuning dataset, we sampled 10,000 instructions for each ranking task for the ML-1M dataset. In the case of the ML-100K and BookCrossing datasets, we formulated 5,000 instructions for each task. We eliminated instructions that were repetitive or of low quality (identified by users with fewer than three interactions in their interaction history), leaving approximately 56,000 high-quality instructions. These instructions are then combined to create a comprehensive instruction-tuning dataset, which is utilized to fine-tune the LLM.
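A minimal sketch of this split, assuming each user's interactions are already sorted chronologically (our own helper, not the paper's code):

```python
def leave_one_out_split(user_histories):
    """Last interaction -> test, second-to-last -> validation, rest -> training."""
    train, valid, test = {}, {}, {}
    for user, items in user_histories.items():
        if len(items) < 3:        # need at least one item in each split
            continue
        train[user] = items[:-2]
        valid[user] = items[-2]
        test[user] = items[-1]
    return train, valid, test
```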
§.§.§ Model Selection
We incorporate RecRanker with the following direct recommendation models as the backbone models:
* Matrix Factorization (MF) <cit.>: A foundational approach that decomposes user-item interaction matrices to uncover latent features. We use the Bayesian Personalized Ranking (BPR) loss <cit.> to optimize the model.
* LightGCN <cit.>: Simplifies the graph convolutional network for efficient recommendation by focusing on user-item graph embeddings.
* MixGCF <cit.>: A hybrid method combining graph convolution with collaborative filtering, enhancing recommendation diversity and accuracy.
* SGL <cit.>: Utilizes self-supervised learning within graph neural networks to improve recommendation quality through auxiliary tasks.
We also employ several widely used sequential recommendation models as backbones:
* SASRec <cit.>: Employs a self-attention mechanism in sequential models to better capture user preferences over time.
* BERT4Rec <cit.>: Adapts the BERT architecture to sequential recommendation, capturing complex item interaction patterns.
* CL4SRec <cit.>: Leverages contrastive learning for sequential recommendation, enhancing model robustness and understanding of user-item sequences.
The backbone models serve as the retrieval models in RecRanker. For each backbone model, we choose the top ten items as candidate items, setting k^' = 10. We leave out the comparison with other instruction-tuned LLM-based recommendation methods such as TALLRec <cit.> and InstructRec <cit.>. This exclusion is justified as these methods are not primarily designed for diverse ranking tasks. Specifically, TALLRec is tailored for a binary classification task, determining whether a user likes an item or not. InstructRec, on the other hand, relies on the powerful yet closed-source GPT model to generate information, rendering it impractical in our context. Nevertheless, it is important to note that these methods adhere to the standard approach for instruction tuning in LLMs. As detailed in Section <ref>, we include an ablation study that evaluates our method's enhancements over standard instruction tuning of LLMs, thereby underscoring the superiority of our approach.

§.§.§ Implementation Details
We chose LLaMA-2 (7B) <cit.> as the backbone LLM in our experiment due to its strong capability among open-source LLMs. In the training phase of LLaMA-2 (7B), we adopted a uniform learning rate of 2 × 10^-5, coupled with a context length of 1024. The batch size was fixed at 4, complemented by gradient accumulation steps of 2. Additionally, a cosine scheduler was implemented, integrating a preliminary warm-up phase of 50 steps. The training comprised a total of 6000 steps. We employed DeepSpeed's ZeRO-3 stage optimization <cit.> alongside the flash attention technique <cit.> for efficient training of these models. This training process was executed on 16 NVIDIA A800 80GB GPUs. During the inference process, the vLLM framework <cit.> was employed, setting the temperature parameter at 0.1, with top-k and top-p values at 10 and 0.1, respectively. Inference was conducted using a single NVIDIA A800 80GB GPU. For the top-k recommendation task, we utilize the SELFRec[<https://github.com/Coder-Yu/SELFRec>] library <cit.> for implementation. As for the hyper-parameter settings, we set α_1=α_2=α_3=1/3 for all experiments. 𝒞 is set to 0.92 in this paper. 𝒞_1, 𝒞_2, and 𝒞_3 are set to 0.05, 0.5, and 0.025, respectively. We repeat each experiment five times and report the average.

§.§ Main Results (RQ1)
The experiment results for direct recommendation and sequential recommendation are shown in Table <ref> and Table <ref>, respectively. We have the following key observations:
* In the context of MF and LightGCN, pairwise and listwise ranking methods surpass the baseline model. However, these methods encounter difficulties in yielding favorable outcomes when applied to more advanced models like MixGCF or SGL. In contrast, pointwise ranking consistently outperforms the base models, achieving a marked improvement. This enhancement might be attributed to the LLM's proficiency in making objective judgments about single items, rather than comparing multiple items. Additionally, the relative simplicity of pointwise tasks suggests that LLMs are more adept at handling simpler tasks.
* Furthermore, hybrid ranking methods generally outperform pointwise ranking. Despite the significantly lower performance of pairwise and listwise ranking compared to pointwise ranking, integrating them into a hybrid ranking approach can still result in improvements. This is in line with the concept of self-consistency in LLMs; that is, when a model consistently agrees on a particular answer, there is a higher likelihood of its accuracy.
* RecRanker demonstrates a more significant improvement on the BookCrossing dataset than on the MovieLens datasets. This enhancement may be due to the fine-grained ratings in the BookCrossing dataset, which range from 1 to 10, thereby enabling the tuned LLM to make more precise predictions. In addition, the general recommendation models are capable of mining collaborative information effectively, which makes them excel at ranking items; as a result, the need for reranking is comparatively lower for these models.

§.§ Ablation Study (RQ2)
In this section, we study the benefits of each individual component of RecRanker. The results are reported in Table <ref> and demonstrate that the complete model outperforms all three model variants. This outcome underscores the significant contribution of each main component to the enhancement of overall performance. A detailed analysis of each component's specific impact yielded the following insights:
* w/o Adaptive User Sampling: This variant substitutes the proposed adaptive user sampling with a uniform sampling approach. The experimental results reveal a notable decline in model performance. This decline underscores the importance of adaptive user sampling in selecting critical, representative, and diverse user samples for training, thereby enhancing model performance.
* w/o Position Shifting: Position shifting is excluded in this variant, keeping the other components the same. The observed performance reduction in this variant highlights the significance of position shifting. It mitigates position bias, leading to more consistent and reliable results.
* w/o Prompt Enhancement: In this variant, prompt enhancement is removed while retaining the other modules. A marked decrease in performance is observed, suggesting that conventional recommender models provide valuable information for the LLM to generate more accurate predictions.

§.§ Hyper-parameter Study (RQ3)
§.§.§ Analysis of hyper-parameters 𝒞_1, 𝒞_2 and 𝒞_3
We analyze the influence of hyper-parameters 𝒞_1, 𝒞_2, and 𝒞_3 on the ML-1M dataset, employing MF as the underlying model, as depicted in Figure <ref>. We noted that increases in 𝒞_1 and 𝒞_3 led to fluctuations and a general decline in performance. This indicates that judicious selection of 𝒞_1 and 𝒞_3 is crucial for optimizing model performance, particularly since both pairwise and listwise ranking methods underperform compared to pointwise ranking, rendering high values of 𝒞_1 and 𝒞_3 suboptimal. On the other hand, a gradual improvement in performance was observed with the increment of 𝒞_2. These findings underscore the significance of appropriate hyper-parameter selection in achieving optimal model performance.

§.§.§ Analysis of model scaling
We further instruction-tuned the LLaMA-2 (13B) model.[Training the LLaMA-2 (70B) model with the same experimental settings was impractical due to resource constraints, consistently resulting in Out-Of-Memory (OOM) errors.] We conducted a comparative analysis between the 7B and 13B versions of the instruction-tuned models. The performance differences between LLaMA-2 7B and LLaMA-2 13B were specifically assessed across various ranking tasks within the BookCrossing dataset, as illustrated in Figure <ref>. Our observations revealed that the LLaMA-2 (13B) model generally outperformed the 7B model. This superiority can be attributed to the enhanced capabilities of the larger model, which result in better language comprehension and reasoning ability, ultimately leading to improved ranking outcomes. In addition, it is noteworthy that the improvements in pointwise ranking and listwise ranking were more pronounced compared to pairwise ranking. This suggests that LLMs still face challenges in certain ranking tasks. Furthermore, the hybrid ranking approach demonstrated significant progress across all evaluation metrics. This underscores the effectiveness of integrating multiple ranking tasks, highlighting the strengths of the proposed hybrid ranking method.

§.§.§ Analysis of data scaling
The training of the LLM was conducted with varying quantities of instructions in the instruction-tuning dataset to evaluate the effect of data size. Specifically, the version with 5.6K instructions was trained over 600 steps, while the version with 28K instructions underwent 3000 steps of training, proportional to our original configuration. The experiment results are detailed in Table <ref>. An observable trend is that an increase in the number of instructions correlates with enhanced model performance.
This underscores the significance of incorporating a larger and more diverse dataset for instruction tuning LLMs to achieve improved performance.

§.§ Comparison with the GPT Model (RQ4)
We compare our instruction-tuned LLM with the GPT model, specifically the GPT-3.5-turbo[<https://platform.openai.com/docs/models/gpt-3-5>] model. We employed a sample of 100 listwise ranking task instances from the BookCrossing dataset, using the CL4SRec model as the backbone for evaluating the GPT model. This experiment setting aligns with the findings of <cit.>, which highlight the optimal cost-performance equilibrium achieved when GPT-3.5 is applied to the listwise ranking task. As demonstrated in Figure <ref>, our instruction-tuned RecRanker with hybrid ranking notably outperforms the GPT-3.5 model. This impressive result emphasizes the crucial role of instruction tuning in aligning general-purpose LLMs specifically for recommendation tasks.

§.§ Further Discussion
In our experiment, we observed that training the LLaMA-2 7B model with around 56K instructions on 16 A800 GPUs took approximately 4.6 hours. Training the LLaMA-2 13B model under the same conditions required around 5.3 hours. Inference throughput averaged about 17 instructions per second, translating to around 0.059 seconds per instruction for computation by a single A800 GPU. These training and inference durations significantly exceed those of conventional recommendation models, highlighting the limitations of current LLM-based recommender systems. The substantial demand for computational resources also represents a significant challenge. Consequently, employing instruction-tuned LLMs for large-scale industrial recommender systems, such as those with millions of users, is presently impractical. However, future advancements in accelerated and parallel computing algorithms for language model inference could potentially reduce inference times and computational resources. This improvement might make the integration of LLMs into large-scale recommender systems feasible, especially by leveraging many GPUs for parallel computation.

§ CONCLUSION
In this paper, we introduce RecRanker, a novel framework for employing an instruction-tuned LLM as the Ranker in top-k Recommendations. Initially, we propose adaptive user sampling for obtaining high-quality, representative, and diverse data. In the following step, we construct an instruction-tuning dataset that encompasses three distinct ranking tasks: pointwise, pairwise, and listwise rankings. We further improve the prompt by adopting a position shifting strategy to mitigate position bias, as well as by integrating auxiliary information from conventional recommendation models for prompt enhancement. Moreover, we introduce a hybrid ranking method that combines these diverse ranking tasks to improve overall model performance. Extensive empirical studies on three real-world datasets across diverse ranking tasks validate the effectiveness of our proposed framework.
http://arxiv.org/abs/2312.16018v1
{ "authors": [ "Sichun Luo", "Bowei He", "Haohan Zhao", "Yinya Huang", "Aojun Zhou", "Zongpeng Li", "Yuanzhang Xiao", "Mingjie Zhan", "Linqi Song" ], "categories": [ "cs.IR" ], "primary_category": "cs.IR", "published": "20231226121258", "title": "RecRanker: Instruction Tuning Large Language Model as Ranker for Top-k Recommendation" }
We investigate the relation between turbulence and magnetic field switchbacks in the inner heliosphere below 0.5 AU in a distance and scale dependent manner. The analysis is performed by studying the evolution of the magnetic field vector increments and the corresponding rotation distributions, which contain the switchbacks. We find that the rotation distributions evolve in a scale dependent fashion, having the same shape at small scales independent of the radial distance, in contrast to larger scales, where the shape evolves with distance. The increments are shown to evolve towards a log-normal shape with increasing radial distance, even though the log-normal fit works quite well at all distances, especially at small scales. The rotation distributions are shown to evolve towards the <cit.> rotation model moving away from the Sun. The magnetic switchbacks do not appear at any distance as a clear separate population. Our results suggest a scenario in which the evolution of the rotation distributions, including switchbacks, is primarily the result of the expansion driven growth of the fluctuations, which are reshaped into a log-normal distribution by the solar wind turbulence.

Sun: corona – Sun: heliosphere – solar wind

§ INTRODUCTION
The solar wind is a turbulent medium whose properties evolve with radial distance from the Sun <cit.>. Past works provided key findings such as the decrease of the power levels in the magnetic field magnitude and components <cit.>, the motion of the 1/f break <cit.> and the ion spectral break <cit.> to lower frequencies with increasing radial distance, the transition from a more imbalanced spectrum of fluctuations to a more balanced one moving away from the Sun <cit.> (even at high latitudes <cit.>), and the steepening of the velocity spectrum from the Earth to 5 AU <cit.>. In more recent years the Parker Solar Probe (PSP) <cit.> mission has improved our knowledge of solar wind turbulence to distances below 0.3 AU. In PSP data the inertial range trace magnetic field spectrum was shown to evolve from a -3/2 slope to a -5/3 one with increasing radial distance <cit.>. The controlling parameter of this evolution seems to be the cross-helicity <cit.>, therefore the -3/2 spectrum is associated with more imbalanced turbulence, consistent with previous observations at 1 AU <cit.>. <cit.> showed that the velocity field spectrum does not steepen with increasing radial distance up to 85 solar radii, also consistent with the 1 AU results, where -3/2 velocity spectra are seen at all levels of imbalance <cit.>. Regarding the 1/f range, <cit.> and <cit.> have shown that below 0.3 AU the spectra can be shallower than 1/f and that they evolve towards 1/f with increasing advection time. The origin of this behaviour is still debated <cit.>.

In this evolving near-Sun turbulent medium PSP has revealed the presence of large amplitude, highly Alfvénic magnetic deflections known as switchbacks (SBs) <cit.>. For most of these structures the magnitude of the field is constant <cit.>, therefore they can be thought of as magnetic rotations to a good approximation. Several models have been proposed to explain their formation. As suggested in <cit.>, they can be grouped according to the invoked physical mechanism as: reconnection, shear flow, and Alfvén-wave/turbulence based models.
In the reconnection driven models, an SB is formed either by a kink impressed on the newly opened magnetic field line <cit.> or by the formation of a flux rope after an interchange reconnection process <cit.>. The propagation of the kink from the reconnection event location to PSP is shown by <cit.> to be possible when the non-linear and dissipative terms are discarded in the MHD equations. Such a model is shown to be reliable when fitted to many SBs <cit.>. The reconnection driven models imply an ex-situ formation of the structures close to the Sun's surface. Reconnection based models struggle to recover the Alfvénicity of the SBs and to explain the increasing occurrence rate with increasing radial distance <cit.>, and numerical results show that the kinks are unfolded before they can reach PSP <cit.>.

The shear driven models are based on the interaction of wind streams with different velocities. <cit.> propose that past the Alfvén surface the gradient in speed between adjacent flux tubes can exceed the Alfvén speed, triggering the non-linear Kelvin-Helmholtz instability. The magnetic roll-ups formed by this instability would appear as magnetic field reversals once crossed by PSP. In the model proposed by <cit.>, the shear that forms a switchback is due to the magnetic field footpoint motion from a region of slow solar wind to a region of fast wind. Due to this, the fast wind overtakes the previously released slow wind, creating a compression region and a folded magnetic field configuration. The shear driven models are consistent, at a qualitative level, with the increasing occurrence rate of SBs with radial distance, but it is unclear whether they can reproduce the symmetric (leading vs trailing edge) shape of the velocity profile observed within SBs, whether the compression they produce is as mild as observed for SBs <cit.>, and whether they can match the observed occurrence rate.

The Alfvén wave/turbulence models are based on the fact that large amplitude Alfvén waves are an exact non-linear solution of the MHD equations if density, pressure and the magnitude of the magnetic field are constant <cit.>. In these models SBs are then seen as spherically polarized Alfvén waves that have reached large amplitudes, since δ B/B grows with the expansion. This scenario has been explored in both numerical simulations <cit.> and theoretical works <cit.>. Expansion not only steepens the Alfvén waves, but it also acts, due to non-WKB effects, as a reflection term for large-scale waves that enhances the turbulent development of the medium <cit.>, hence the name Alfvén wave/turbulence models. These models, which are supported by many observations <cit.>, can reproduce most of the properties of SBs, but struggle in simulations to reproduce the filling factor observed in the data <cit.>. It is not clear, however, whether this is due to limited numerical resolution <cit.>.

The different SB models are not mutually exclusive and a combination of them is possible. One could imagine, for example, that reconnection processes provide some of the seed Alfvén waves that subsequently grow in amplitude in the expanding solar wind. The interplay between SBs and turbulence is a matter of debate.
From the comparison of the turbulence properties inside and outside the SB structures we know that SBs present: about one order of magnitude increase in power <cit.>, which is more isotropically distributed between the directions parallel and perpendicular to the magnetic field <cit.>; higher intermittency levels <cit.>; higher residual energy <cit.>; a more developed inertial range <cit.>; a larger occurrence of small-scale current sheets <cit.>; and enhanced kinetic Alfvén wave activity <cit.>. Despite these differences, both SB and non-SB intervals present the same critical balance-like scaling in the inertial range <cit.> and the ion spectral break at the same scale <cit.>.

An alternative way to investigate SBs, their link with turbulence and, more in general, the solar wind rotations, is to study the full distribution of the magnetic field vector increments. This method has the advantage of being unbiased with respect to the choice of the arbitrary deflection thresholds commonly used in the literature to define SBs <cit.>, and of being more general, since any different deflection threshold would correspond to looking to the right of a different vertical cut in the rotation distributions. Furthermore, magnetic field increments have been extensively used to study the evolution of the magnetic field rotations in the solar wind <cit.>.

The probability density functions (PDFs) of the vector increment magnitudes at 1 AU computed at different time lags possess a log-normal shape <cit.> (even at kinetic scales <cit.>). For each lag the parameters of the log-normal distributions are different, but the PDFs can be rescaled to a universal log-normal in the inertial range. From the universal log-normal it is possible to recover a rotation model for the magnetic rotations that fits the data well. Interestingly, the distributions of the magnetic rotations at 1 AU are to a large degree reproduced by MHD turbulence simulations (if the root mean square fluctuations are of the order of or greater than the background magnetic field), suggesting that turbulence might be the leading cause of the generation of both large and small magnetic deflections in the solar wind. The results above are described in <cit.>. Log-normal distributions are observed in the solar wind not only for magnetic increments <cit.> but also for the magnetic field magnitude <cit.>, for the scale dependent energy dissipation rate <cit.> and as probes of the energy cascade rate distributions <cit.> in the context of multiplicative random cascade models <cit.>. In these models the non-conservative (intermittent) behaviour of the local energy dissipation rate is modeled through the multiplication of random variables drawn from the same distribution <cit.>. The log-normal is one of the possible distribution choices, but it seems to be the most common one in the solar wind and is probably a consequence of dealing with intermittent turbulent signals.

In order to understand the relation between turbulence and SBs, we study the evolution of the magnetic increments and rotation distributions in the solar wind at different radial distances using PSP data. The questions we address are the following:
* are the magnetic vector increments still well fitted by a log-normal function at different heliocentric distances?
* is there a universal log-normal as suggested by <cit.>?
* is the rotation model obtained at 1 AU still valid at different radial distances?
* do SBs arise as a separate population in these distributions?
* is the radial evolution of the PDFs consistent with a turbulent picture for SBs?
In Section <ref> we describe the data set used in this study, in Section <ref> we report our results and in Section <ref> we discuss our conclusions.

§ DATA AND METHODS
We use data from the fluxgate magnetometer MAG <cit.> at 4 samples per cycle cadence and the electron pitch angle distributions (ePAD) from the SPAN-e instrument <cit.>. The data in this study cover the first eleven orbits of PSP at distances below 0.5 AU. In the dataset, transients like coronal mass ejections (CMEs) are removed by eye and the heliospheric current sheet (HCS) crossings are removed with the aid of the ePAD. CMEs are excluded because they are not part of the steady-state solar wind; the HCS crossings are removed because they are large angle rotations related to the change in polarity rather than to switchbacks or to turbulence.

We compute the distributions of the magnetic field increments, Δ B / B = |B(t+τ) - B(t)| / |B(t)|, and the corresponding angular rotations, Δθ = arccos( B(t+τ) ·B(t) / (|B(t)| |B(t+τ)|) ). Under the assumption of pure rotations between t+τ and t with no field magnitude change, the angle and the increments are related by Δ B / B = 2 sin(Δθ/2) <cit.>. Each data point in the time series provides an increment value, unless B(t+τ) or B(t) are data gaps, in which case no increment value is obtained. Once we have a data series of increments at a given τ and distance, we compute the corresponding distribution. In the distributions we consider values of Δ B / B and corresponding rotations only for increments up to 2. This upper limit is set by the fact that a 180 degree rotation can give a maximum increment value of 2; therefore any value larger than this cannot be the result of a pure rotation. Applying this threshold has the effect of removing part of the tail of the distributions (the part due to highly compressive increments), but less than 0.6% of the pre-processed points (with HCS crossings and CMEs removed) are lost as a consequence.

§ RESULTS
§.§ Evolution of the increments and rotation distributions with heliocentric distance
The evolution of the magnetic field increments with distance and scale is plotted in Figure <ref>. The lags (τ) are chosen to be within the range of residence times observed for SBs <cit.>. The different curves change position with distance with respect to one another. For small τ the distribution closest to the Sun (blue line) presents the highest occurrence of large increments, whereas it presents the lowest occurrence at large τ. In Figure <ref> the rotation distributions are shown. Not surprisingly, the curves behave similarly to those of Figure <ref>, since the magnetic field mostly undergoes rotations in the solar wind, especially at PSP distances. The dominance of rotations is highlighted in Figure <ref>. The parameter χ is a measure of the deviations from pure rotations. The distributions of χ are peaked around zero independent of distance and scale, with a drop of more than 2 orders of magnitude between the peak value and the value at χ=0.1. This confirms the predominance of rotations in the solar wind also in the inner heliosphere.
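For illustration, the increments and rotation angles defined in the previous section can be computed from a magnetic field time series as in the sketch below (a minimal numpy version; the lag is in samples, data-gap handling is omitted, and the Δ B/B ≤ 2 threshold discussed above is applied).

```python
import numpy as np

def increments_and_rotations(B, lag):
    """Delta B / B and rotation angle for a field time series B of shape (N, 3)."""
    dB = B[lag:] - B[:-lag]                      # B(t + tau) - B(t)
    Bmag = np.linalg.norm(B, axis=1)
    dB_over_B = np.linalg.norm(dB, axis=1) / Bmag[:-lag]

    # Rotation angle between B(t + tau) and B(t), in degrees.
    cos_theta = np.sum(B[lag:] * B[:-lag], axis=1) / (Bmag[lag:] * Bmag[:-lag])
    theta = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

    # Keep only increments compatible with a pure rotation (Delta B / B <= 2).
    keep = dB_over_B <= 2.0
    return dB_over_B[keep], theta[keep]
```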
The behavior of the PDFs in Figures <ref> and <ref> is in part due to the fact that by using the same τ at different distances we compare distributions with a different underlying average level of Δ B/B, and we neglect the evolution of the 1/f break with distance <cit.>. In order to see clearly the changes in the shape of the distributions we need to account for these effects. We do this with the following procedure: for each distance bin of Figure <ref> we compute ⟨Δ B/ B ⟩ for the different τ; then, through a linear interpolation, we obtain for each distance bin a curve of ⟨Δ B/ B ⟩ against τ. This allows us to determine a value of τ, for each distance bin, that corresponds to any value of ⟨Δ B/ B ⟩ we choose. In this manner we obtain a different τ for each radial distance that produces the same ⟨Δ B/ B ⟩. The increments and the corresponding rotations are then recomputed with the new set of τ. The results are shown in Figure <ref>. It can be seen that the curves at small ⟨Δ B/ B ⟩ values share the same shape at all distances but differ at large angles for the larger ⟨Δ B/ B ⟩ values. This behavior suggests that the small scale distribution is already fully evolved at these distances, while the larger scales (⟨Δ B/ B ⟩ > 0.1) are still in the process of evolving to their final state. Such a scale dependence is highly suggestive of a turbulence dominated evolution for the PDFs, since in a turbulent cascade the non-linear time is scale dependent, with the smaller scales evolving faster.

§.§ Log-normality and Zhdankin's rotation model with PSP
In Figure <ref> we test whether Δ B/ B follows a log-normal distribution throughout the full range of distances and scales considered here. The log-normal formula (Equation <ref>) is f(x) = 1/(x σ √(2π)) exp( -(ln x - μ)^2 / (2σ^2) ), where the parameters μ and σ represent respectively the mean and the standard deviation of the logarithm of x. The results in Figure <ref> clearly show that as we move further out in the heliosphere the distributions are better fitted by a log-normal, even though the fit is reasonably good even at the closest distances. In order to make this statement more quantitative we compute the coefficient of determination, defined as R = 1 - Σ_i=1^n (y_i - f_i)^2 / Σ_i=1^n (y_i - ⟨ y ⟩)^2, where y_i and ⟨ y ⟩ are respectively the measured values and their mean, f_i represents the values of the model and n the number of data points. The coefficient of determination is close to one for all the curves in Figure <ref>. For ⟨Δ B/ B ⟩=0.5, the value of R varies from 0.90 at distances below 0.1 AU to 0.97 at distances in the range 0.4-0.5 AU. For the smallest scales, ⟨Δ B/ B ⟩=0.1, the value is already 0.98 at the closest distances and reaches 0.998 at the furthest distances. Figure <ref> also illustrates the radial and scale dependent evolution of the σ parameter, which at 1 AU in Wind data is found to be σ≈ 1 <cit.>. We observe that σ increases with increasing radial distance for all values of ⟨Δ B/ B ⟩. At the closest distances the distributions with ⟨Δ B/ B ⟩ = 0.1 possess the value closest to 1.
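A minimal sketch of the log-normal fit and of the coefficient of determination as used here, fitting the binned PDF in log space (our own illustrative implementation; the initial guess is arbitrary):

```python
import numpy as np
from scipy.optimize import curve_fit

def lognormal_pdf(x, mu, sigma):
    """f(x) = 1 / (x sigma sqrt(2 pi)) * exp(-(ln x - mu)^2 / (2 sigma^2))."""
    return (np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma ** 2))
            / (x * sigma * np.sqrt(2 * np.pi)))

def fit_lognormal(bin_centers, pdf_values):
    """Fit in log space; return (mu, sigma) and the coefficient of determination R.

    Assumes pdf_values > 0 (empty bins must be excluded beforehand).
    """
    log_model = lambda x, mu, sigma: np.log(lognormal_pdf(x, mu, sigma))
    y = np.log(pdf_values)
    (mu, sigma), _ = curve_fit(log_model, bin_centers, y, p0=(-1.0, 1.0))
    resid = y - log_model(bin_centers, mu, sigma)
    R = 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
    return mu, sigma, R
```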
We also test whether other functions commonly used in the literature to describe the magnetic increments fit them as well as the log-normal function. A double exponential was used by <cit.> to fit Δθ, but it can be tested also for Δ B/ B. This function, which has two more fitting parameters than the log-normal, gives a coefficient of determination very close to one only for ⟨Δ B/ B ⟩ up to 0.3; at larger values the fits fail to converge. This is due to the impossibility of reproducing with a double exponential the downward section of the curve at low Δ B/ B for ⟨Δ B/ B ⟩ > 0.3; the same problem is found when fitting Δθ. We also test the log-Poisson distribution, which is observed for other measures of turbulence <cit.>, but the fits in this case give a poor agreement (not shown). The log-normal seems to be the strongest candidate distribution for the solar wind fluctuations. The log-normality of the magnetic vector increments can be linked to turbulence in the context of random cascade models and might be ultimately linked to the log-normality of the scale dependent dissipation rate <cit.>.

In Figure <ref> we compare the PSP rotation distributions with the rotation model (Equation <ref>) developed by <cit.>,
g(Δθ) = 1 / ( K √(8π) tan(Δθ/2) ) × exp( -1/2 log^2( 2 sin(Δθ/2) (τ/Δ t_0)^-α ) ),
where Δ t_0 and α are fitting parameters and K = 1/2 [ erf( log( 2 (τ/Δ t_0)^-α ) / √(2) ) + 1 ] is a normalization constant (independent of Δθ). In Figure <ref> a different τ is used for each ⟨Δ B/ B ⟩ and for each distance, as discussed in Section <ref>. The key ingredients of the model are the possibility to rescale the increment PDFs for different τ into a single log-normal, given that σ≈ 1 (which is the case at 1 AU in the inertial range), and the fact that the increments are assumed to be due to pure rotations. The agreement between the PSP distributions and the Zhdankin rotation model (Equation <ref>) improves with increasing radial distance, as shown in Figure <ref>, but at ⟨Δ B/ B ⟩=0.1 the agreement is quite good even close to the Sun. This is consistent with the evolution of the increment distributions, since for ⟨Δ B/ B ⟩=0.1 the log-normal fit gives a coefficient of determination closer to one than in the other cases. The reason why, for r < 0.1 AU, the fit for larger ⟨Δ B/ B ⟩ is not as good need not be attributed to the transition to the 1/f range. In fact, even though for ⟨Δ B/ B ⟩ = 0.5 the lag τ≃ 7 × 10^2 s is in the 1/f range, the fits for ⟨Δ B/ B ⟩ = 0.3, whose τ≃ 50 s is in the inertial range, show the same behaviour. We attribute, then, the discrepancy between the model and the observations in Figure <ref> to the fact that the distributions of the magnetic increments below 0.1 AU are not yet fully evolved to the log-normal with σ=1, i.e. the universal log-normal proposed by <cit.>. Consistent with the evolving state of the distributions, the fitting parameters Δ t_0 and α get closer, with increasing radial distance, to the values Δ t_0 = 6600 and α = 0.46 observed at 1 AU <cit.>.
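For reference, the rotation model can be evaluated directly as in the sketch below, writing the normalization constant with the error function as in the equation above (our reconstruction of K); the default Δ t_0 and α are the 1 AU values quoted in the text, with τ and Δ t_0 in the same units.

```python
import numpy as np
from scipy.special import erf

def zhdankin_rotation_pdf(theta, tau, dt0=6600.0, alpha=0.46):
    """g(theta) for rotation angles theta (radians, theta > 0) at lag tau."""
    scale = (tau / dt0) ** (-alpha)
    s = 2.0 * np.sin(theta / 2.0) * scale       # rescaled increment 2 sin(theta/2)
    K = 0.5 * (erf(np.log(2.0 * scale) / np.sqrt(2.0)) + 1.0)  # normalization
    return np.exp(-0.5 * np.log(s) ** 2) / (K * np.sqrt(8.0 * np.pi)
                                            * np.tan(theta / 2.0))
```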
§ DISCUSSION AND CONCLUSIONS
We presented the first report on the radial evolution of the scale-dependent increment and rotation angle distributions for distances below 0.3 AU. Our results show that the rotation angle distributions (Figure <ref> and Figure <ref>) evolve with radial distance in a scale dependent fashion. In agreement with this, the increment distributions are still evolving towards log-normality, but this evolution is different for the PDFs at small and large values of ⟨Δ B/ B ⟩. At ⟨Δ B/ B ⟩=0.1, i.e. small scales, the coefficient of determination is close to one at all distances, while for larger values, i.e. large scales, it is still evolving with distance towards one (Figure <ref>). This suggests a scale dependent evolution towards a log-normal shape, with the small scales being approximately log-normal independent of the distance. The log-normal, though, is not the universal one proposed in <cit.>, because σ is not equal to one at the distances investigated here, but σ does evolve towards one with increasing radial distance. The evolution of σ for the small scales seems somewhat contradictory with the distributions having the same shape in Figure <ref>. The behaviour of σ, though, is dominated by the tails of the distributions, since we are fitting in log-space. The reason for the similar behavior between small and large scales is that even at the smallest scales there is some evolution in the far tail of the distributions; it is possible to see this evolution in the ⟨Δ B/ B ⟩=0.1 curves in Figure <ref>.

In the rotation distributions switchbacks do not arise as a distinct population, in the sense that they do not appear as an extra bump at large angles. Furthermore, as illustrated in Figure <ref>, a single function (<ref>) based on the log-normality of the increments is capable of capturing most of the rotations, other than those at close distances and large scales, where there are fewer large angle rotations. This suggests that switchbacks, considered as large-angle rotations, are part of a single distribution of solar wind fluctuations, as might arise, for example, from a turbulent cascade. The results shown here support the in-situ (during propagation in the heliosphere, not right at the spacecraft) formation of switchbacks. In fact, the large-scale PDFs at large angles, where most of the large angle deflections are present (see Figure <ref>), are increasingly filled with increasing radial distance, indicating the presence of more switchbacks, in agreement with the results of <cit.>. This behaviour is not expected from the ex-situ models unless combined with a shear or turbulence/Alfvén wave based mechanism.

The scale dependent evolution towards a log-normal shape and the change in shape of the PDFs even at fixed ⟨Δ B/ B ⟩ is a key property to consider to investigate the origin of the distributions. The change in shape has two possible interpretations. The first is that the turbulent interactions in the solar wind are reshaping the distribution into a log-normal. Indeed, turbulence simulations are able to approximately produce log-normal distributions for the magnetic field vector increments, and can reproduce the rotation distributions at 1 AU <cit.>. Furthermore, log-normal distributions are observed in turbulence simulations for the scale dependent energy dissipation rate and in solar wind data for a proxy of the same quantity <cit.>. The scale dependent evolution of the distributions is also consistent with a turbulence scenario: turbulent interactions are faster at smaller scales, so one would expect the larger scales to evolve more slowly, in agreement with our results. The second interpretation is that the change in shape could be attributed to the growth of the fluctuations with the expansion, under the constraint of having a constant magnetic field magnitude.
This constraint has to be invoked because expansion alone can grow the amplitudes of Δ B/ B <cit.>, i.e., shift the unnormalised PDFs to larger Δ B/ B values, resulting in a growth of large angular deflections, but is not expected to change the shape of the PDFs (see Figure <ref>); therefore an additional process is required to explain the full distribution of rotations, including the switchbacks. At large scales this constraint implies that there is a cutoff to the distribution at Δ B/ B = 2; as a consequence, the PDF perhaps changes its shape once this cutoff is reached due to the expansion driven growth of the fluctuations. However, it is not clear why such a cutoff would cause the PDFs to become log-normal, and it would not explain why the PDFs are log-normal at small scales. Furthermore, the physical origin of the constraint is also an open question <cit.>. Considering the results shown and the considerations made here, it seems most likely that expansion is causing the overall amplitudes to grow, and turbulence is reshaping the magnetic field rotations to create the fluctuation distributions that we measure.

§ ACKNOWLEDGEMENTS
AL is supported by STFC Consolidated Grant ST/T00018X/1. CHKC is supported by UKRI Future Leaders Fellowship MR/W007657/1 and STFC Consolidated Grants ST/T00018X/1 and ST/X000974/1. JRM is supported by STFC studentship grant ST/V506989/1. VKJ acknowledges support from the Parker Solar Probe mission as part of NASA's Living with a Star (LWS) program under contract NNN06AA01C. JRM and AL acknowledge support from the Perren Exchange Programme. We thank the members of the FIELDS/SWEAP teams and PSP community for helpful discussions.

§ DATA AVAILABILITY
PSP data are available at the SPDF (https://spdf.gsfc.nasa.gov).

§ REFERENCES
Agapitov O. V., et al., 2022, ApJ, 925, 213
Bale S. D., et al., 2016, Space Sci. Rev., 204, 49
Bale S. D., et al., 2019, Nature, 576, 237
Bale S. D., et al., 2023, Nature, 618, 252
Barnes A., Hollweg J. V., 1974, J. Geophys. Res., 79, 2302
Bavassano B., Dobrowolny M., Mariani F., Ness N. F., 1982, J. Geophys. Res., 87, 3617
Belcher J. W., 1971, ApJ, 168, 509
Borovsky J. E., 2010, Phys. Rev. Lett., 105, 111102
Bourouaine S., Perez J. C., Klein K. G., Chen C. H. K., Martinović M., Bale S. D., Kasper J. C., Raouafi N. E., 2020, ApJ, 904, L30
Breech B., Matthaeus W. H., Minnie J., Oughton S., Parhi S., Bieber J. W., Bavassano B., 2005, Geophys. Res. Lett., 32, L06103
Bruno R., Carbone V., 2013, Living Rev. Sol. Phys., 10, 2
Bruno R., Trenchi L., 2014, ApJ, 787, L24
Burlaga L. F., 2001, J. Geophys. Res., 106, 15917
Castaing B., Gagne Y., Hopfinger E. J., 1990, Physica D, 46, 177
Chandran B. D. G., 2018, J. Plasma Phys., 84, 905840106
Chandran B. D. G., Perez J. C., 2019, J. Plasma Phys., 85, 905850409
Chen C. H. K., 2016, J. Plasma Phys., 82, 535820602
Chen C. H. K., Bale S. D., Salem C. S., Maruca B. A., 2013, ApJ, 770, 125
Chen C. H. K., Matteini L., Burgess D., Horbury T. S., 2015, MNRAS, 453, L64
Chen C. H. K., et al., 2020, ApJS, 246, 53
Cranmer S. R., van Ballegooijen A. A., 2005, ApJS, 156, 265
Davis N., et al., 2023, arXiv:2303.01663
Drake J. F., et al., 2021, A&A, 650, A2
Dudok de Wit T., et al., 2020, ApJS, 246, 39
Fisk L. A., Kasper J. C., 2020, ApJ, 894, L4
Fox N. J., et al., 2016, Space Sci. Rev., 204, 7
Frisch U., 1995, Turbulence: The Legacy of A. N. Kolmogorov. Cambridge Univ. Press
Grappin R., Velli M., 1996, J. Geophys. Res., 101, 425
Horbury T. S., Balogh A., 2001, J. Geophys. Res., 106, 15929
Huang J., et al., 2023a, arXiv:2301.10374
Huang Z., et al., 2023b, arXiv:2303.00843
Jagarlamudi V. K., Raouafi N. E., Bourouaine S., Mostafavi P., Larosa A., Perez J. C., 2023, ApJL, 950, L7
Johnston Z., Squire J., Mallet A., Meyrand R., 2022, Phys. Plasmas, 29, 072902
Kasper J. C., et al., 2016, Space Sci. Rev., 204, 131
Kasper J. C., et al., 2019, Nature, 576, 228
Krasnoselskikh V., et al., 2020, ApJ, 893, 93
Larosa A., et al., 2021, A&A, 650, A3
Liang H., Zank G. P., Nakanotani M., Zhao L. L., 2021, ApJ, 917, 110
Liu Y. D., Ran H., Hu H., Bale S. D., 2023, ApJ, 944, 116
Malaspina D. M., et al., 2022, ApJ, 936, 128
Mallet A., Squire J., Chandran B. D. G., Bowen T., Bale S. D., 2021, ApJ, 918, 62
Martinović M. M., et al., 2021, ApJ, 912, 28
Matteini L., Horbury T. S., Pantellini F., Velli M., Schwartz S. J., 2015, ApJ, 802, 11
Matteini L., Stansby D., Horbury T. S., Chen C. H. K., 2018, ApJ, 869, L32
Matthaeus W. H., Goldstein M. L., 1986, Phys. Rev. Lett., 57, 495
McIntyre J. R., Chen C. H. K., Larosa A., 2023, arXiv:2307.04682
Parker E. N., 1965, Space Sci. Rev., 4, 666
Pecora F., Matthaeus W. H., Primavera L., Greco A., Chhiber R., Bandyopadhyay R., Servidio S., 2022, ApJ, 929, L10
Perez J. C., Chandran B. D. G., 2013, ApJ, 776, 124
Perrone D., D'Amicis R., De Marco R., Matteini L., Stansby D., Bruno R., Horbury T. S., 2020, A&A, 633, A166
Podesta J. J., Borovsky J. E., 2010, Phys. Plasmas, 17, 112905
Raouafi N. E., et al., 2023, Space Sci. Rev., 219, 8
Roberts D. A., 2010, J. Geophys. Res. (Space Physics), 115, A12101
Roberts D. A., 2012, Phys. Rev. Lett., 109, 231102
Roberts D. A., Klein L. W., Goldstein M. L., Matthaeus W. H., 1987, J. Geophys. Res., 92, 11021
Ruffolo D., et al., 2020, ApJ, 902, 94
Sakshee S., Bandyopadhyay R., Banerjee S., 2022, MNRAS, 514, 1282
Schwadron N. A., McComas D. J., 2021, ApJ, 909, 95
Shi C., et al., 2021, A&A, 650, A21
Shoda M., Chandran B. D. G., Cranmer S. R., 2021, ApJ, 915, 52
Sorriso-Valvo L., Carbone V., Veltri P., Consolini G., Bruno R., 1999, Geophys. Res. Lett., 26, 1801
Squire J., Mallet A., 2022, arXiv:2206.07447
Squire J., Schekochihin A. A., Quataert E., Kunz M. W., 2019, J. Plasma Phys., 85, 905850114
Squire J., Chandran B. D. G., Meyrand R., 2020, ApJ, 891, L2
Squire J., Johnston Z., Mallet A., Meyrand R., 2022, Phys. Plasmas, 29, 112903
Tenerani A., Velli M., 2018, ApJ, 867, L26
Tenerani A., Sioulas N., Matteini L., Panasenco O., Shi C., Velli M., 2021, ApJ, 919, L31
Tu C. Y., Marsch E., 1995, Space Sci. Rev., 73, 1
Vasquez B. J., Hollweg J. V., 1998, J. Geophys. Res., 103, 349
Velli M., Grappin R., Mangeney A., 1989, Phys. Rev. Lett., 63, 1807
Velli M., Grappin R., Mangeney A., 1990, Comput. Phys. Commun., 59, 153
Verdini A., Grappin R., Pinto R., Velli M., 2012, ApJ, 750, L33
Whittlesey P. L., et al., 2020, ApJS, 246, 74
Wu H., Huang S., Wang X., Yuan Z., He J., Yang L., 2023, ApJ, 947, L22
Wyper P. F., DeVore C. R., Antiochos S. K., Pontin D. I., Higginson A. K., Scott R., Masson S., Pelegrin-Frachon T., 2022, ApJ, 941, L29
Zank G. P., Nakanotani M., Zhao L.
L.,Adhikari L., Kasper J.,2020, @doi [] 10.3847/1538-4357/abb828, https://ui.adsabs.harvard.edu/abs/2020ApJ...903....1Z 903, 1[Zhdankin, Boldyrev& MasonZhdankin et al.2012]Zhdankin2012 Zhdankin V.,Boldyrev S., Mason J.,2012, @doi [] 10.1088/2041-8205/760/2/L22, https://ui.adsabs.harvard.edu/abs/2012ApJ...760L..22Z 760, L22[Zhdankin, Boldyrev& ChenZhdankin et al.2016]zhdankin2016dissipationrate Zhdankin V.,Boldyrev S., Chen C. H. K.,2016, @doi [] 10.1093/mnrasl/slv208, https://ui.adsabs.harvard.edu/abs/2016MNRAS.457L..69Z 457, L69
http://arxiv.org/abs/2312.16521v1
{ "authors": [ "A. Larosa", "C. H. K Chen", "J. R. McIntyre", "V. K. Jagarlamudi" ], "categories": [ "astro-ph.SR", "physics.space-ph" ], "primary_category": "astro-ph.SR", "published": "20231227105923", "title": "The relation between magnetic switchbacks and turbulence in the inner heliosphere" }
Computing Gerber-Shiu function in the classical risk model with interest using collocation method Zan Yu,   Lianzeng Zhang (Corresponding author) School of Finance, Nankai University, Tianjin 300350, China =================================================================================================================== [E-mail addresses: zhlz@nankai.edu.cn (L. Zhang), yz3006@163.com (Z. Yu)]

The Gerber-Shiu function is a classical research topic in actuarial science. However, exact solutions are only available in the literature for very specific cases where the claim amounts follow distributions such as the exponential distribution. This presents a longstanding challenge, particularly from a computational perspective. For the classical risk process in continuous time, the Gerber-Shiu discounted penalty function satisfies a class of Volterra integral equations. In this paper, we use the collocation method to compute the Gerber-Shiu function for the risk model with interest. Our methodology demonstrates that the function can be expressed as a linear algebraic system, which is straightforward to implement. One major advantage of our approach is that it does not require any specific distributional assumptions on the claim amounts, except for mild differentiability and continuity conditions that can be easily verified. We also examine the convergence orders of the collocation method. Finally, we present several numerical examples to illustrate the desirable performance of our proposed method.

Keywords: Gerber-Shiu function, Volterra integral equations, Collocation method, Convergence orders

§ INTRODUCTION

Consider the classical risk (surplus) process in continuous time {U(t)}_t ≥ 0 given by

U(t)=u+c t-∑_i=1^N(t) X_i, t ≥ 0,

where u ≥ 0 is the initial reserve and N(t) is the number of claims up to time t, which follows a homogeneous Poisson process with parameter λ>0. The aggregate claim amount up to time t is Z(t)=∑_i=1^N(t) X_i, where the claim sizes {X_i, i=1,2,…} are positive, independent and identically distributed random variables with distribution function F(x) and finite mean μ. The premium rate c satisfies c=λμ(1+θ), where θ > 0 is the premium loading factor. As usual, we assume that the X_i and {N(t)}_t > 0 are independent.

The classical surplus process defined in (<ref>) does not take into account any interest earnings on investment. Assume that the insurer receives interest on its surplus at a constant force δ per unit time. Then the modified surplus process, namely {U_δ(t)}_t ≥ 0, can be described by

U_δ(t)=u e^δ t+c s̄_t|(δ)-∫_0^t e^δ(t-x) d Z(x),

where s̄_t|(δ)=(e^δ t-1)/δ denotes the accumulated value at time t of an annuity payable continuously at unit rate. The time of ruin is the first time that the risk process takes a negative value and is denoted by τ=inf{t ≥ 0 | U_δ(t)<0}, where τ=∞ if U_δ(t) ≥ 0 for all t ≥ 0. To study the time to ruin τ, the surplus immediately before ruin U(τ-), and the deficit at ruin |U(τ)| in the classical surplus process, <cit.> proposed an expected discounted penalty function.
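Although this paper follows the integral-equation route, the process (<ref>) is also easy to simulate directly, which later provides an independent sanity check on the numerical solutions. Below is a minimal Monte Carlo sketch (an editorial illustration, not the authors' code), assuming exponential(1) claims and the parameter values used later in the numerical section; the finite horizon truncates the event {τ<∞}.

```python
import numpy as np

rng = np.random.default_rng(0)

def ruin_prob_mc(u, c=1.2, lam=1.0, delta=0.01, horizon=200.0, n_paths=20000):
    """Monte Carlo estimate of P(tau < horizon | U_delta(0) = u) for the
    surplus process with interest and Exp(1) claims. Since c, delta > 0,
    the surplus grows between claims, so ruin can only occur at claim times."""
    ruined = 0
    for _ in range(n_paths):
        t, surplus = 0.0, u
        while True:
            wait = rng.exponential(1.0 / lam)          # time to next claim
            if t + wait > horizon:
                break
            growth = np.exp(delta * wait)
            # accumulate interest on the surplus and on the premium income
            surplus = surplus * growth + c * (growth - 1.0) / delta
            surplus -= rng.exponential(1.0)            # claim size X ~ Exp(1)
            t += wait
            if surplus < 0.0:
                ruined += 1
                break
    return ruined / n_paths

print(ruin_prob_mc(5.0))   # rough estimate of the ruin probability at u = 5
```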
In this paper, we study the modified Gerber-Shiu discounted penalty function defined by

Φ_δ,α(u)=𝔼[e^-ατ w(U_δ(τ-),|U_δ(τ)|) 1(τ<∞) | U_δ(0)=u], u ≥ 0, α≥ 0,

where 1(A) is the indicator function of event A, and w:[0,∞)×[0, ∞) ⟼[0, ∞) is a measurable penalty function. Here, we can interpret e^-ατ as the `discounting factor'.

The Gerber-Shiu function has been a popular tool for actuarial researchers since its introduction, due to its broad applicability in representing a range of ruin-related quantities. In the past two decades, various stochastic processes have been employed to model the temporal evolution of the surplus process using the Gerber-Shiu function. For instance, <cit.> studied the expected value of a discounted penalty function under the classical risk model, also known as the Cramér-Lundberg model. Over time, the Cramér-Lundberg model has been extended in multiple directions in order to describe more accurately the stylized features of the surplus process in the real world. For example, the Sparre-Andersen model <cit.>, the Lévy risk model <cit.>, and Markov additive processes <cit.> have all been studied for this purpose. A comprehensive review of the Gerber-Shiu function and its variants, risk surplus models, and additional structural features, along with a wide range of analytical, semi-analytical, and asymptotic methods, has been provided by <cit.>. While most of the existing literature has focused on exploring explicit solutions for Gerber-Shiu functions, this approach has limitations, as it heavily depends on assumptions about the underlying claim size distribution. Only for a few types of distributions does the Gerber-Shiu function have explicit expressions, such as exponentials, combinations of exponentials, Erlangs, and their mixtures. Therefore, developing numerical methods for computing the Gerber-Shiu function is of great importance. <cit.> derived the Laplace transform of the survival probability, from which the survival probability can subsequently be obtained through numerical inversion methods. This method was also applied by <cit.> to compute the Gerber-Shiu discounted penalty function in the Lévy risk model and the perturbed compound Poisson risk model, respectively. The Gerber-Shiu function can also be evaluated approximately by truncating an infinite Fourier series (see e.g. <cit.>). In addition, <cit.> develop the Gerber-Shiu function on the Laguerre basis, and then compute the unknown coefficients based on sample information on claim numbers and individual claim sizes. <cit.> apply the frame duality projection method to compute the Gerber-Shiu function.

In this paper, we shall compute the Gerber-Shiu function in a risk model under interest force from a new perspective that is easy to implement. We remark that some results have been obtained by e.g. <cit.>, <cit.> under interest force, but the results therein are mostly concerned with structural properties and general integral representations; no specific computing methods are given. Here we start directly from the solution of the integral equation to compute the Gerber-Shiu function. The rest of this paper is organized as follows. We first review some basic conclusions about the Gerber-Shiu function in Section <ref>. Section <ref> introduces the collocation method. In Section <ref>, we discuss the convergence order of the collocation method. We present some numerical examples to illustrate the effectiveness of the collocation method in Section <ref>.
Some conclusions are given in Section <ref>.

§ PRELIMINARIES ON GERBER-SHIU FUNCTION

In this section, we present some necessary preliminaries on the Gerber-Shiu function. Some of the results are borrowed from <cit.>. Throughout this paper, we condition on the time t of the first claim and on its amount x. Thus,

Φ_δ,α(u) = ∫_0^∞ λ e^-λ t ∫_0^∞ 𝔼[e^-ατ w(U(τ-),|U(τ)|) 1(τ<∞) | U_δ(0)=u] d F(x) d t
= ∫_0^∞ λ e^-(λ+α) t ∫_0^u e^δ t+c s̄_t|(δ) Φ_δ, α(u e^δ t+c s̄_t|(δ)-x) d F(x) d t + ∫_0^∞ λ e^-(λ+α) t ∫_u e^δ t+c s̄_t|(δ)^∞ w(u e^δ t+c s̄_t|(δ), x-u e^δ t-c s̄_t|(δ)) d F(x) d t
= λ(δ u+c)^(λ+α)/δ ∫_u^∞ (δ y+c)^-((λ+α)/δ)-1 (∫_0^y Φ_δ, α(y-x) d F(x)+A(y)) d y,

where s̄_t|(δ)=(e^δ t-1)/δ as before, and

A(t)=∫_t^∞ w(t, s-t) d F(s).

Differentiating (<ref>) with respect to u, we get

d/d u Φ_δ, α(u)=(λ+α)/(c+δ u) Φ_δ, α(u)-λ/(c+δ u)(∫_0^u Φ_δ, α(u-x) d F(x)+A(u)).

Thus, integrating (<ref>) and then performing integration by parts, we get

Φ_δ, α(u)=c Φ_δ, α(0)/(c+δ u)-λ/(c+δ u) ∫_0^u A(t) d t+∫_0^u K_δ, α(u, t) Φ_δ, α(t) d t,

where

K_δ, α(u, t)=[δ+α+λ(1-F(u-t))]/(c+δ u).

In particular, let α=0 and denote Φ_δ(u)=Φ_δ, 0(u); then Φ_δ satisfies a Volterra integral equation of the second kind:

Φ_δ(u)=g(u)+∫_0^u K_δ(u, t) Φ_δ(t) d t,

where g(u)=c Φ_δ(0)/(c+δ u)-λ/(c+δ u) ∫_0^u A(t) d t. Therefore, for (<ref>), if Φ_δ(0) is given, we can use a numerical method to approximate Φ_δ(u). Fortunately, <cit.> obtained Φ_δ(0) by using Laplace transforms. Let m_A=∫_0^∞ A(t) d t and ϕ_1(s)=(1/μ)∫_0^∞ e^-sx F̅(x) d x. Then

Φ_δ(0) =λ m_A/κ_δ ∫_0^∞ β(δ z) exp(-c z+λμ∫_0^z ϕ_1(δ s) d s) d z,

where κ_δ=c ∫_0^∞ exp(-c z+λμ∫_0^z ϕ_1(δ s) d s) d z and β(s)=(1/m_A)∫_0^∞ e^-sx A(x) d x.

As a special example, let w(x_1,x_2)=1; then Φ_δ(u) is the ruin probability for the surplus process (<ref>). In this case A(t)=1-F(t), m_A=μ, β(s)=ϕ_1(s), and (<ref>) simplifies to Φ_δ(0)=(κ_δ-1)/κ_δ, which is equivalent to Eq. (14) of <cit.>.

It is well known (see, for example, <cit.>) that if g and the kernel K_δ are both continuous, then the second-kind Volterra integral equation (<ref>) has a unique solution.

§ THE COMPUTATION PROCESS OF COLLOCATION METHOD

In this section, we first introduce the collocation method for calculating a general Volterra integral equation. Then we study how to use the collocation method to compute the Gerber-Shiu function.

Consider the general linear Volterra integral equation (VIE) of the second kind,

y(t)=g(t)+∫_0^t K(t, s) y(s) d s, t ∈ I:=[0, T].

On the RHS of (<ref>), the linear Volterra integral operator 𝒱: C(I) → C(I) is defined by

(𝒱 y)(t):=∫_0^t K(t, s) y(s) d s, t ∈ I,

where K ∈ C(D) is some given function defined on D:={(t, s): 0≤ s ≤ t ≤ T}, and g ∈ C(I) is a given function. Using the notation of (<ref>), the Volterra integral equation can be written as

y(t)=g(t)+(𝒱 y)(t), t ∈ I.

Assume the solution of (<ref>) can be approximated by collocation in the (continuous) piecewise polynomial space

S_m-1(I_h):={v: v|_σ_n ∈ π_m-1, 0 ≤ n ≤ N-1},

where I_h:={t_i=t_i^(N): i=0,1,…, N} denotes a grid on the given interval I:=[0, T], with t_0=0 and t_N=T. Here, π_m-1 is the set of (real) polynomials of degree at most m-1 (with m>1), and we set σ_0:=[t_0, t_1] and σ_n:=(t_n, t_n+1] for n=1, …, N-1. The quantity h_n:=t_n+1-t_n is called the diameter of the grid I_h.
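Before turning to the discretization, we note that the starting value Φ_δ(0) appearing in g(u) is itself easy to evaluate numerically. For the ruin-probability special example above with exponential(1) claims, F̅(x)=e^-x, μ=1 and ϕ_1(s)=1/(1+s), so the inner integral is ∫_0^z ϕ_1(δ s) d s = ln(1+δ z)/δ. The following minimal sketch (an editorial illustration using SciPy quadrature, with the parameter values c=1.2, λ=1, δ=0.01 adopted in the numerical section) evaluates κ_δ and Φ_δ(0):

```python
import numpy as np
from scipy.integrate import quad

c, lam, delta = 1.2, 1.0, 0.01   # premium rate, claim intensity, interest force

# kappa_delta = c * int_0^inf exp(-c z + lam*mu*int_0^z phi_1(delta s) ds) dz;
# for Exp(1) claims the inner integral equals log(1 + delta*z)/delta.
integrand = lambda z: np.exp(-c * z) * (1.0 + delta * z) ** (lam / delta)
kappa, _ = quad(integrand, 0.0, np.inf)
kappa *= c

phi0 = (kappa - 1.0) / kappa     # Phi_delta(0) = (kappa_delta - 1)/kappa_delta
print(phi0)
```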
In particular, for the convenience of calculation, we use the uniform grid I_h, which means h_n ≡ h = T/N for all 0 ≤ n ≤ N-1. Define the set of collocation points

X_h:={t_n, i=t_n+c_i h: n=0,1, …, N-1, i = 1,2,…, m},

which is determined by the given grid I_h and the given collocation parameters {c_i}⊂ [0,1] with 0≤ c_1 ≤⋯≤ c_m≤ 1. The collocation solution u_h∈ S_m-1(I_h) is defined by the collocation equation corresponding to (<ref>),

u_h(t)=g(t)+(𝒱 u_h)(t), t ∈ X_h.

Using the local Lagrange basis functions associated with {c_i},

L_i(θ):=∏_k ≠ i^m (θ-c_k)/(c_i-c_k), θ∈ [0,1],

and writing U_n, i:=u_h(t_n, i)=u_h(t_n+c_i h), i=1, …, m, for any t=t_n+θ h on the subinterval σ_n:=(t_n, t_n+1] the local representation of u_h can be written as an interpolating polynomial:

u_h(t)=u_h(t_n+θ h)=∑_i=1^m L_i(θ) U_n, i, θ∈(0,1].

Thus, the collocation equation (<ref>) assumes the form

U_n, i =g(t_n, i)+∫_0^t_n,i K(t_n, i, s) u_h(s) d s =g(t_n, i)+∫_0^t_n K(t_n, i, s) u_h(s) d s+∫_t_n^t_n,i K(t_n, i, s) u_h(s) d s =g(t_n, i)+∫_0^t_n K(t_n, i, s) u_h(s) d s+h ∫_0^c_i K(t_n, i, t_n+s h) u_h(t_n+s h) d s.

Consider

F_n(t):=∫_0^t_n K(t, s) u_h(s) d s=∑_l=0^n-1 h ∫_0^1 K(t, t_l+s h) u_h(t_l+s h) d s.

If we set t=t_n, i in (<ref>) and employ the local representation (<ref>), we may write

F_n(t_n, i)=∑_l=0^n-1 h ∫_0^1 K(t_n, i, t_l+s h) u_h(t_l+s h) d s =∑_l=0^n-1 h ∑_j=1^m(∫_0^1 K(t_n, i, t_l+s h) L_j(s) d s) U_l, j.

So on the subinterval (t_n, t_n+1], the collocation equation (<ref>) can be rewritten as

U_n, i= g(t_n, i)+∑_l=0^n-1 h ∑_j=1^m(∫_0^1 K(t_n, i, t_l+s h) L_j(s) d s) U_l, j+h ∑_j=1^m(∫_0^c_i K(t_n, i, t_n+s h) L_j(s) d s) U_n, j.

Let 𝐔_n:=(U_n, 1, …, U_n, m)^T, 𝐠_n:=(g(t_n, 1), …, g(t_n, m))^T, and define the matrices in L(ℝ^m)

B_n^(l):=(∫_0^1 K(t_n, i, t_l+s h) L_j(s) d s)_m× m, 0 ≤ l ≤ n-1, i, j=1, …, m,

and

B_n:=(∫_0^c_i K(t_n, i, t_n+s h) L_j(s) d s)_m× m, i, j=1, …, m.

The linear algebraic system for 𝐔_n∈ℝ^m can be written compactly as

[ℐ_m-h B_n] 𝐔_n=𝐠_n+𝐆_n, n=0,1, …, N-1,

with

𝐆_n:=(F_n(t_n, 1), …, F_n(t_n, m))^T=∑_l=0^n-1 h B_n^(l) 𝐔_l.

Here, ℐ_m denotes the identity matrix in L(ℝ^m). Since the kernel K of the Volterra operator 𝒱 is continuous on D, the elements of the matrices B_n are all bounded. By the Neumann Lemma <cit.>, the inverse of the matrix ℐ_m-h B_n exists whenever ‖h B_n‖<1 for some matrix norm; this clearly holds whenever h is sufficiently small. So for any grid I_h with grid diameter h small enough, each of the linear algebraic systems (<ref>) has a unique solution 𝐔_n. Hence the collocation equation (<ref>) defines a unique collocation solution u_h∈ S_m-1(I_h) for (<ref>), with local representation on σ_n given by (<ref>). In short, the collocation solution u_h to the Volterra integral equation on an interval I is an element of a finite-dimensional function space (the collocation space) that satisfies the Volterra integral equation on an appropriate finite subset of points in I (the set of collocation points).

We suppose that the distribution function of the claim size, F(x), is continuous. Then K_δ and g satisfy the assumption of Remark <ref>.
So, substituting K_δ into the linear algebraic systems, we can write B_n^(l) and B_n as follows:

B_n^(l)=( ∫_0^1 [δ+λF̅(t_n+c_1 h-t_l-s h)]/[c+δ(t_n+c_1 h)] L_1(s) d s ⋯ ∫_0^1 [δ+λF̅(t_n+c_1 h-t_l-s h)]/[c+δ(t_n+c_1 h)] L_m(s) d s; ⋮ ⋮; ∫_0^1 [δ+λF̅(t_n+c_m h-t_l-s h)]/[c+δ(t_n+c_m h)] L_1(s) d s ⋯ ∫_0^1 [δ+λF̅(t_n+c_m h-t_l-s h)]/[c+δ(t_n+c_m h)] L_m(s) d s )_m× m,

and

B_n=( ∫_0^c_1 [δ+λF̅((c_1-s) h)]/[c+δ(t_n+c_1 h)] L_1(s) d s ⋯ ∫_0^c_1 [δ+λF̅((c_1-s) h)]/[c+δ(t_n+c_1 h)] L_m(s) d s; ⋮ ⋮; ∫_0^c_m [δ+λF̅((c_m-s) h)]/[c+δ(t_n+c_m h)] L_1(s) d s ⋯ ∫_0^c_m [δ+λF̅((c_m-s) h)]/[c+δ(t_n+c_m h)] L_m(s) d s )_m× m,

where F̅:=1-F; note that in B_n the second kernel argument lies in the current subinterval, so the entries involve F̅((c_i-s)h).

We can solve the corresponding linear algebraic system to approximate the Gerber-Shiu function. It follows from formulas (<ref>) and (<ref>) that we have to compute integrals. However, for most interesting distribution functions these integrals cannot be computed explicitly. This may cause some trouble in our calculations, but the numerical examples in Section <ref> show that even when numerical integration is used, the collocation method still has quite good accuracy.

§ CONVERGENCE ORDERS OF COLLOCATION METHOD

In this section, starting from the error of the interpolation function, we focus on the convergence of Equation (<ref>) when the kernel function is smooth. The results show that the convergence is closely related to the number of collocation parameters. For any continuous function y, the error between y and the Lagrange interpolation polynomial with the given points {x_j} is defined by

e_m(y; t):=y(t)-∑_j=1^m L_j(t) y(x_j), t∈[a, b],

where {x_j}⊂ [a,b] such that a≤ x_1 ≤⋯≤ x_m ≤ b. Here, we briefly review an important special case of the celebrated Peano Kernel Theorem.

Assume that y ∈ C^m[a, b]. Then for given knots a≤ x_1 < ⋯ < x_m ≤ b, e_m(y; t) satisfies the integral representation

e_m(y; t)=∫_a^b K_m(t, s) y^(m)(s) d s, t ∈[a, b],

where the Peano kernel K_m is given by

K_m(t, s):=1/(m-1)! {(t-s)_+^m-1-∑_k=1^m L_k(t)(x_k-s)_+^m-1},

and y^(m) denotes the m-th derivative. Proofs of this important result may be found, for example, in <cit.> or <cit.>.

On the subinterval (t_n, t_n+1], let x_j=t_n,j=t_n+c_j h; the interpolation error can be written as

e_m(y; θ):=y(t_n+θ h)-∑_j=1^m L_j(θ) y(t_n,j), θ∈[0, 1].

Thus, by Theorem <ref>, we can get the following corollary:

Under the assumptions of Theorem <ref> and with [a,b]=[t_n, t_n+1], t=t_n+θ h (θ∈ [0,1], h:=t_n+1-t_n), x_j=t_n+c_j h (j=1, …, m), the interpolation error

e_m(y; t):=y(t_n+θ h)-∑_j=1^m L_j(θ) y(t_n+c_j h), θ∈[0,1],

can be expressed in the form

e_m(y; t)=h^m ∫_0^1 K_m(θ, z) y^(m)(t_n+z h) d z, θ∈[0,1],

where

K_m(θ, z):=1/(m-1)! {(θ-z)_+^m-1-∑_k=1^m L_k(θ)(c_k-z)_+^m-1}.

Define R_m, n(θ):=∫_0^1 K_m(θ, z) y^(m)(t_n+z h) d z; we may then resort to Peano's Theorem (see Corollary <ref>) to write

y(t_n+θ h)=∑_j=1^m L_j(θ) Y_n, j+h^m R_m, n(θ), θ∈[0,1],

where Y_n,j = y(t_n,j). Considering that u_h(t_n+θ h)=∑_j=1^m L_j(θ) U_n, j, for any θ the error between y and the collocation solution u_h is represented by

e_h(t_n+θ h)=∑_j=1^m L_j(θ) ℰ_n, j+h^m R_m, n(θ), θ∈(0,1],

with the collocation error ℰ_n, j:=Y_n, j-U_n, j. Rewriting the error e_h with the Volterra integral equation (<ref>):

e_h(t)=y(t)-u_h(t)=∫_0^t K(t, s)(y(s)-u_h(s)) d s=∫_0^t K(t, s) e_h(s) d s.
Then, for the error at the collocation points t_n,i, we have the following equation:

e_h(t_n, i)=(𝒱 e_h)(t_n, i), 0 ≤ n ≤ N-1, i=1, …, m.

Furthermore, for the right side of equation (<ref>),

(𝒱 e_h)(t_n, i) = ∫_0^t_n K(t_n, i, s) e_h(s) d s+h ∫_0^c_i K(t_n, i, t_n+s h) e_h(t_n+s h) d s = ∑_l=0^n-1 h ∫_0^1 K(t_n, i, t_l+s h)(∑_j=1^m L_j(s) ℰ_l, j+h^m R_m, l(s)) d s + h ∫_0^c_i K(t_n, i, t_n+s h)(∑_j=1^m L_j(s) ℰ_n, j+h^m R_m, n(s)) d s.

Hence, we have

ℰ_n, i = h ∑_j=1^m(∫_0^c_i K(t_n, i, t_n+s h) L_j(s) d s) ℰ_n, j + ∑_l=0^n-1 h ∑_j=1^m(∫_0^1 K(t_n, i, t_l+s h) L_j(s) d s) ℰ_l, j+∑_l=0^n-1 h^m+1 ∫_0^1 K(t_n, i, t_l+s h) R_m, l(s) d s + h^m+1 ∫_0^c_i K(t_n, i, t_n+s h) R_m, n(s) d s, i=1, …, m.

Let ℰ_n:=(ℰ_n, 1, …, ℰ_n, m)^T∈ℝ^m; we obtain a system of linear equations for the collocation error,

[ℐ_m-h B_n] ℰ_n=∑_l=0^n-1 h B_n^(l) ℰ_l+∑_l=0^n-1 h^m+1 ρ_n^(l)+h^m+1 ρ_n, 0 ≤ n ≤ N-1.

The vectors ρ_n^(l) and ρ_n in ℝ^m are

ρ_n^(l):=(∫_0^1 K(t_n, i, t_l+s h) R_m, l(s) d s)^T, l< n,

and

ρ_n:=(∫_0^c_i K(t_n, i, t_n+s h) R_m, n(s) d s)^T,

with i=1, …, m. We are now ready to give the convergence theorem.

Assume the Volterra integral equation satisfies (a) K ∈ C^m(D) and g ∈ C^m(I); (b) u_h∈ S_m-1(I_h) is the collocation solution to (<ref>) defined by (<ref>). Then

‖y-u_h‖_∞:=sup_t ∈ I |y(t)-u_h(t)| ≤ C ‖y^(m)‖_∞ h^m

holds for any set X_h of collocation points with 0 ≤ c_1≤⋯≤ c_m≤ 1. The constant C depends on the c_i but not on h.

As in the previous analysis, we conclude that the collocation error ℰ_n is determined by (<ref>). According to the Neumann Lemma <cit.>, the inverse of the matrix ℐ_m-h B_n exists whenever ‖h B_n‖<1 for some matrix norm. In other words, for any grid I_h, each matrix ℐ_m-h B_n has a uniformly bounded inverse:

‖(ℐ_m-h B_n)^-1‖_1 ≤ D_0, n=0,1, …, N-1.

Assume that ‖B_n^(l)‖_1 ≤ D_1 for 0 ≤ l<n ≤ N-1, and ‖ρ_n^(l)‖_1 ≤ m K̅ k_m M_m for l<n, ‖ρ_n‖_1 ≤ m K̅ k_m M_m, where we define

M_m:=‖y^(m)‖_∞, k_m:=max_θ∈[0,1] ∫_0^1 |K_m(θ, z)| d z, and K̅:=max_t ∈ I ∫_0^t |K(t, s)| d s=‖𝒱‖_∞.

Then, from (<ref>),

‖ℰ_n‖_1 ≤ D_0 D_1 ∑_l=0^n-1 h ‖ℰ_l‖_1+D_0[m K̅ k_m M_m ∑_l=0^n-1 h^m+1+h^m+1 m K̅ k_m M_m],

and hence

‖ℰ_n‖_1 ≤ γ_0 ∑_l=0^n-1 h ‖ℰ_l‖_1+γ_1 M_m h^m, n=0,1, …, N-1,

where γ_0:=D_0 D_1 and γ_1:=m D_0 K̅ k_m(T+h). Using Gronwall's inequality <cit.>, equation (<ref>) is bounded by

‖ℰ_n‖_1 ≤ γ_1 M_m h^m exp(γ_0 ∑_l=0^n-1 h) ≤ γ_1 M_m h^m exp(γ_0 T), n=0,1, …, N-1.

In other words, there exists a constant B<∞ such that, uniformly for h∈ (0, h̅),

‖ℰ_n‖_1 ≤ B M_m h^m, n=0,1, …, N-1.

Setting Λ_m:=max_j ‖L_j‖_∞, the local error representation (<ref>) gives

|e_h(t_n+θ h)| ≤ Λ_m ‖ℰ_n‖_1+h^m k_m M_m ≤ (Λ_m B+k_m) M_m h^m.

So for θ∈[0,1] and n=0,1, …, N-1, we have ‖e_h‖_∞ ≤ C ‖y^(m)‖_∞ h^m, where the constant C depends on the c_i but not on h.

For collocation methods for Volterra integral equations, there are many papers discussing convergence under different conditions; see for example <cit.> and <cit.>. This paper only gives a rough proof for smooth kernel functions; for a more detailed proof, refer to <cit.>.

§ NUMERICAL RESULTS

In this section, we present some numerical examples to show that the collocation method is very efficient for computing the Gerber-Shiu function. All results are obtained in MATLAB on Windows, with an Intel(R) Core(TM) i7 CPU at 2.60 GHz and 16 GB of RAM. The collocation solution is determined by the resulting system (<ref>) and the local Lagrange representation (<ref>). In all examples, we set c=1.2, λ=1 and δ=0.01. Here we consider the following claim size densities: (1) Exponential density: f(x)=e^-x, x>0. (2) Erlang(2) density: f(x)=4xe^-2x, x>0.
(3) Combination-of-exponentials density: f(x)=3 e^-1.5 x-3 e^-3 x. Note that these distributions have a common mean of 1. We will compute the following three special Gerber-Shiu functions: (1) Ruin probability with interest: Φ_δ(u)=ℙ(τ<∞ | U_δ(0)=u), where w(x, y) ≡ 1. (2) Expected claim size causing ruin: Φ_δ(u)=𝔼[(U_τ-+|U_τ|) 1_(τ<∞) | U(0)=u], where w(x, y)=x+y. (3) Expected deficit at ruin: Φ_δ(u)=𝔼[|U_τ| 1_(τ<∞) | U(0)=u], where w(x, y)=y.

Throughout this section, we take c_1=1/3, c_2=2/3 when m=2 and c_1=1/3, c_2=2/3, c_3=1 when m=3, and we set the grid size h=30/N. In Figure <ref>, we plot the numerical result curves for the exponential claim size density when N=4096. We compared our approach with other methods reported in the literature, and we find that the ruin probabilities we obtained are highly similar to those presented in the studies by <cit.>. In Figures <ref>–<ref>, we plot the relative errors for different N in the same picture to show variability bands and illustrate the stability of the procedures. We observe that the errors are very close to each other when N is large enough.

Next, in order to verify the results of Section <ref>, we compute the errors and convergence orders. For equations with analytical solutions, the error E_N^1 is given by E_N^1=sup_t ∈ I |u_h(t)-y(t)|. When the analytical solution cannot be written out explicitly, the error is defined as E_N^2=|u_h^N(T)-u_h^2N(T)|. The corresponding convergence order can then be expressed as p_i=log_2(E_N^i/E_2N^i), i=1,2. The errors and convergence orders with u=5 are given in Tables <ref> and <ref>. We calculate E_N^1 and p_1 for the ruin probability with interest; the exact ruin probability in the compound Poisson model with exponential claims is calculated using the results in <cit.>. We find that when N is large enough (the grid is small enough), the numerical solution no longer changes at the given number of decimal places. For the expected claim size causing ruin and the expected deficit at ruin, E_N^2 and p_2 are computed. The results show that the global convergence order of the 2-point collocation method is 2, and the global convergence order of the 3-point collocation method is 3, which is consistent with the conclusion of Section <ref>.

§ CONCLUDING REMARKS

In this paper, we apply the collocation method to compute the Gerber-Shiu function in the classical risk model. Through several numerical illustrations, we find that this method is very efficient for the computations. The collocation method can be used not only for Volterra integral equations but also for some integro-differential equations, so it can be applied to other ruin-related problems in risk theory. For example, in <cit.> and <cit.>, the Gerber-Shiu function can be expressed through a specific class of integro-differential equations in the classical surplus process with a constant dividend barrier.

For the mathematical aspects, we focused on the convergence results of collocation methods. When the kernel K is sufficiently regular, the convergence is closely related to the number of collocation parameters. However, if the distribution function F(x) does not satisfy the required assumptions, the desired result cannot be obtained. Finding a suitable method to improve the calculation accuracy when the kernel function is not smooth enough is an open problem. We leave that aside for future investigation.

§ ACKNOWLEDGEMENT

The authors would like to thank the anonymous reviewers for their valuable suggestions.
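For readers who wish to reproduce results of the kind reported in the tables above, the following minimal sketch implements the block systems (<ref>) for the ruin probability with interest under exponential(1) claims, with m=2 and the parameter values of this section. It is an illustrative sketch rather than the MATLAB code used for the reported results; the constant `phi0` is a placeholder standing in for Φ_δ(0), which should be computed via the quadrature described in Section <ref>.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

# Ruin probability with interest, Exp(1) claims: kernel and forcing term.
c, lam, delta = 1.2, 1.0, 0.01
phi0 = 0.82   # placeholder for Phi_delta(0); compute it via quadrature as in Sec. 2

K = lambda u, t: (delta + lam * np.exp(-(u - t))) / (c + delta * u)
g = lambda u: (c * phi0 - lam * (1.0 - np.exp(-u))) / (c + delta * u)

T, N, m = 30.0, 256, 2
h = T / N
cpar = np.array([1.0 / 3.0, 2.0 / 3.0])   # collocation parameters c_1, c_2
xg, wg = leggauss(8)                      # Gauss rule, mapped from [-1,1] to [0,1]
sg, wg = 0.5 * (xg + 1.0), 0.5 * wg

def lagrange(j, s):
    """Local Lagrange basis polynomial L_j built on the collocation parameters."""
    out = np.ones_like(s)
    for k in range(m):
        if k != j:
            out = out * (s - cpar[k]) / (cpar[j] - cpar[k])
    return out

U = np.zeros((N, m))                      # U[n, i] approximates u_h(t_n + c_i h)
for n in range(N):
    tn = n * h
    tni = tn + cpar * h                   # collocation points on sigma_n
    rhs = g(tni)
    # lag (memory) term G_n: sum of B_n^(l) U_l over the previous subintervals
    for l in range(n):
        tl = l * h
        for i in range(m):
            for j in range(m):
                rhs[i] += h * U[l, j] * np.sum(
                    wg * K(tni[i], tl + sg * h) * lagrange(j, sg))
    # current-interval matrix B_n: integrals over [0, c_i], rescaled Gauss rule
    B = np.zeros((m, m))
    for i in range(m):
        si, wi = cpar[i] * sg, cpar[i] * wg
        for j in range(m):
            B[i, j] = np.sum(wi * K(tni[i], tn + si * h) * lagrange(j, si))
    U[n] = np.linalg.solve(np.eye(m) - h * B, rhs)

n5 = int(5.0 / h)                         # t_{n5} + c_2 h = 5 exactly on this grid
print("Phi_delta(5) ~", U[n5, 1])
```

Note that the naive accumulation of the lag term costs O(N^2 m^2) kernel evaluations, which is still negligible at the grid sizes considered here.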
http://arxiv.org/abs/2312.16004v1
{ "authors": [ "Zan Yu", "Lianzeng Zhang" ], "categories": [ "stat.AP", "cs.NA", "math.NA" ], "primary_category": "stat.AP", "published": "20231226112254", "title": "Computing Gerber-Shiu function in the classical risk model with interest using collocation method" }
http://arxiv.org/abs/2312.16605v1
{ "authors": [ "Wen-Di Guo", "Qin Tan", "Yu-Xiao Liu" ], "categories": [ "gr-qc" ], "primary_category": "gr-qc", "published": "20231227150708", "title": "Quasinormal modes and greybody factor of a Lorentz-violating black hole" }
Periodically driven four-dimensional topological insulator with tunable second Chern number Bin Zhou January 14, 2024 ===========================================================================================

Department of Physics, Hubei University, Wuhan 430062, China chenr@hubu.edu.cn Department of Physics, Hubei University, Wuhan 430062, China binzhou@hubu.edu.cn Department of Physics, Hubei University, Wuhan 430062, China Key Laboratory of Intelligent Sensing System and Security of Ministry of Education, Hubei University, Wuhan 430062, China

In recent years, Floquet engineering has attracted considerable attention as a promising approach for tuning topological phase transitions. In this work, we investigate the effects of high-frequency time-periodic driving in a four-dimensional (4D) topological insulator, focusing on topological phase transitions at the off-resonant quasienergy gap. The 4D topological insulator hosts gapless three-dimensional boundary states characterized by the second Chern number C_2. We demonstrate that the second Chern number of 4D topological insulators can be modulated by tuning the amplitude of time-periodic driving. This includes transitions from a topological phase with C_2=±3 to another topological phase with C_2=±1, or to a topological phase with an even second Chern number C_2=±2 which is absent in the 4D static system. Finally, the approximation theory in the high-frequency limit further confirms the numerical conclusions.

§ INTRODUCTION

The discovery of the quantum Hall effect (QHE) opens the door to the field of topological matter <cit.>. In recent years, tremendous effort has been devoted to realizing the QHE in higher dimensions <cit.>. In 2001, Zhang and Hu proposed a four-dimensional (4D) generalization of the QHE <cit.>. The 4D QHE is characterized by the second Chern number, a topological invariant that describes the quantized nonlinear response of current to electric and magnetic fields <cit.>. The 4D topological insulator (TI) exhibiting the 4D QHE supports fully gapped bulk and gapless three-dimensional (3D) boundary states <cit.>. The 4D TI is impossible to realize in real materials due to the limitation of spatial dimensions. In engineered systems, the recent proposals for realizing the 4D TIs include: introducing synthetic dimensions in two-dimensional (2D) or 3D systems <cit.>, mapping high-dimensional models onto lower-dimensional systems <cit.>, and implementing 4D lattices by constructing appropriate capacitive and inductive connections in electric circuits <cit.>. Experimentally, 4D topological states have been realized in acoustic lattices <cit.>, photonic crystals <cit.>, an angled optical superlattice with ultracold bosonic atoms <cit.>, and electric circuits <cit.>.

Floquet engineering, which applies time-periodic external fields to manipulate quantum systems, has emerged as a promising route to modulate topological phases <cit.>. It was found that time-periodic perturbations can drive trivial systems out of equilibrium to produce topological boundary states, and such topologically nontrivial phases driven by periodic external fields are called Floquet topological insulators (FTIs) <cit.>. Recently, the research on periodically driven Floquet topological matter in solid-state materials <cit.>, photonic crystals <cit.>, acoustic lattices <cit.>, electric circuits <cit.>, and cold atom systems <cit.> has attracted considerable attention.
In 2018, Peng and Refael presented a one-dimensional (1D) model with three quasiperiodic drives, realizing a 4D QHE that allows a bulk energy conversion by treating the three drives as synthetic dimensions <cit.>. Motivated by the above mentioned researches, a question naturally arises whether time-periodic driving can modulate topological phase transitions in 4D TIs.In this work, we present a systematic study on the time-periodically driven 4D TI with a high-frequency time-periodic driving. Here, we only focus on the high-frequency case where the frequency of the time-periodic driving is larger than the bandwidth of the static system, and there is no overlap between the undriven quasienergy bands and the driven quasienergy bands. We consider a 4D time-periodic vector potential V(V_x, V_y, V_z, V_w) as the time-periodic driving. When V_x≠0 or V=V_n≠0 (n=x,y,z,w), the second Chern number of the 4D system can be changed by tuning the amplitude of the time-periodic driving. However, there is no newly formed topological phase with distinct second Chern number. Furthermore, when V_x=V_y≠0 or V_x=V_y=V_z≠0, one can find that the time-periodic driving transforms a topological phase with C_2=±3 to another topological phase with C_2=±1, or to a topological phase with an even second Chern number C_2=±2 which is absent in the 4D static system. By solving for the effective Hamiltonian in the high-frequency limit, we present analytical expressions representing the closure of the bulk gap. We find that the transition points in the phase diagram fit well with the bulk gap closure points.The rest of the paper is organized as follows. In Sec. <ref>, we introduce the 4D Dirac model describing the 4D TI, and we present the method for calculating the second Chern number in Sec. <ref>. In Sec. <ref>, we investigate the 4D TI driven by time-periodic driving. In Sec. <ref>, we introduce time-periodic driving in the 4D Dirac model and transform the time-dependent Hamiltonian into the time-independent Floquet Hamiltonian based on the Floquet theory. In Sec. <ref>, we explore the effect of the time-periodic driving V(V_x, 0, 0, 0) in the 4D TI. Then, we investigate the effect of the time-periodic driving V(V_x, V_y, 0, 0) in the 4D TI in Sec. <ref>. Finally, we summarize our conclusions in Sec. <ref>.§ STATIC SYSTEM §.§ Static model The Dirac model describing the 4D TI is given by the following equation <cit.>:H(k)= sin(k_x)Γ_2+sin(k_y)Γ_3+sin(k_z)Γ_4+sin(k_w)Γ_5+m(k)Γ_1,where the Dirac matrices Γ_j=(s_x⊗ s_0, s_y⊗ s_0, s_z⊗ s_x, s_z⊗ s_y, s_z⊗ s_z), j=1, 2, 3, 4, 5, satisfying the anticommutation relations {Γ_i,Γ_j}=2δ_ij. m(k)=m+c[cos(k_x)+cos(k_y)+cos(k_z)+cos(k_w)], where m is the Dirac mass and c denotes the nearest-neighbour hopping amplitude. In subsequent calculations, c=1. The doubly degenerate eigenvalues of H(k) areE(k)=±√(4+2 m(k)^2-∑_n=x,y,z,wcos(2 k_n))/√(2),and the bandwidth of the system is E_W=2|m|+8. In Fig. <ref>(a), we plot the bulk gap E_g with respect to the Dirac mass m. Remarkably, the bulk gap E_g is closed at m=±4,m=±2,m=0.§.§ Second Chern numberThe second Chern number is used to describe the nonlinear response of the current to an electric field and a magnetic field in 4D system <cit.>. 
The second Chern number is given by the following formula <cit.>:

C_2=1/4π^2 ∫_FBZ dk Tr[F_xy F_zw+F_wx F_zy+F_zx F_yw],

with the non-Abelian Berry curvature

F_mn^αβ=∂_m A_n^αβ-∂_n A_m^αβ+i[A_m, A_n]^αβ,

where m, n=x, y, z, w, the Berry connection of the occupied bands is A_m^αβ=-i⟨ u^α(k)|∂/∂ k_m| u^β(k)⟩, and | u^α(k)⟩ denotes the occupied eigenstates with α=1, …, N_occ. Figure <ref>(b) shows the second Chern number C_2 as a function of the Dirac mass m. It is obvious that the phase transitions of the system are accompanied by the closure of the bulk gap. Qi et al. demonstrated that the number of 3D gapless boundary states with linear dispersion in the 4D system is equal to the value of the second Chern number <cit.>. In the region -4<m<-2, the second Chern number is C_2=1. In the first Brillouin zone (FBZ) of the quasi-3D system [open boundary conditions (OBC) along the x direction], there is a single gapless Dirac cone crossing at the point G=(k_y=0, k_z=0, k_w=0), as shown in Fig. <ref>(a). In the region -2<m<0, the second Chern number is C_2=-3. There are three gapless Dirac cones crossing at the points Y=(π, 0, 0), Z=(0, π, 0), and W=(0, 0, π) [Fig. <ref>(b)]. The results for m>0 are distributed symmetrically to those for m<0. In the region 0<m<2, the second Chern number is C_2=3. There are three gapless Dirac cones crossing at the points M_yz=(π, π, 0), M_yw=(π, 0, π), and M_zw=(0, π, π) [Fig. <ref>(c)]. Finally, in the region 2<m<4, the second Chern number is C_2=-1. There is one gapless Dirac cone crossing at the vertex of the FBZ, R=(π, π, π) [Fig. <ref>(d)].

§ FLOQUET SYSTEM

§.§ Time-periodically driven model and Floquet Hamiltonian

The Floquet theory is often used to study time-periodic systems <cit.>. For a time-periodic Hamiltonian H(τ)=H(τ+T) with period T=2π/ω (ω is the frequency), the wave functions Ψ_α(τ)=e^-iε_ατ ψ_α(τ) of the Schrödinger equation i∂_τ Ψ(τ)=H(τ)Ψ(τ) can be obtained by employing the Floquet theory, where ε_α is the α-th quasienergy and ψ_α(τ)=ψ_α(τ+T) is periodic. Using the Fourier transformation, the time-dependent Schrödinger equation can be converted into a time-independent Schrödinger equation:

H_F ψ_α=ε_α ψ_α,

where

H_F,qp =qωδ_qp I+H_p-q, H_0 =1/T∫_0^T dτ H(τ), H_p-q =1/T∫_0^T dτ H(τ)e^i(p-q)ωτ,

and I is an identity matrix of the same size as H(τ). The indices p, q run over 0, ±1, ±2, …, so the Floquet Hamiltonian H_F is an infinite-dimensional matrix; in the numerical calculations, p and q are truncated at finite integers large enough that the results converge. When ω > E_W, there is no overlap between quasienergy bands in different intervals ε∈[(n-1/2)ω, (n+1/2)ω], where n is an integer. In what follows, we restrict our attention to the quasienergy interval ε∈[-ω/2, ω/2].

We apply the time-periodic vector potential V(V_x, V_y, V_z, V_w) to the 4D Dirac model H(k); the time-dependent Hamiltonian H(k, τ) is then given by

H(k, τ) =∑_j=1^5 h_j(k, τ)Γ_j,

with

h_1(k, τ) =m+c∑_n=x, y, z, w cos[k_n+V_n cos(ωτ)], h_2(k, τ) =sin[k_x+V_x cos(ωτ)], h_3(k, τ) =sin[k_y+V_y cos(ωτ)], h_4(k, τ) =sin[k_z+V_z cos(ωτ)], h_5(k, τ) =sin[k_w+V_w cos(ωτ)],

where V_n (n=x, y, z, w) is the amplitude of the time-periodic vector potential.
After the Fourier transformation, the diagonal block matrices in the Floquet Hamiltonian H_F are shown below:H_0 =∑_j=1^5h_0, jΓ_j,withh_0, 1 =m+c∑_n=x, y, z, wcos(k_n)𝒥_0(V_n),h_0, 2 =sin(k_x)𝒥_0(V_x), h_0, 3=sin(k_y)𝒥_0(V_y),h_0, 4 =sin(k_z)𝒥_0(V_z), h_0, 5=sin(k_w)𝒥_0(V_w),and the off-diagonal block matricesH_-l =∑_j=1^5h_-l, jΓ_j, H_+l=H_-l^†,withh_-l, 1 =c/2∑_n=x, y, z, w[e^i k_n+(-1)^le^-i k_n]𝒥_l(V_n),h_-l, 2 =-i/2[e^i k_x-(-1)^le^-i k_x]𝒥_l(V_x),h_-l, 3 =-i/2[e^i k_y-(-1)^le^-i k_y]𝒥_l(V_y),h_-l, 4 =-i/2[e^i k_z-(-1)^le^-i k_z]𝒥_l(V_z),h_-l, 5 =-i/2[e^i k_w-(-1)^le^-i k_w]𝒥_l(V_w),where 𝒥_l(V_n) is the l-th Bessel function of the first kind, l=|p-q| (p≠ q). Then, the Floquet Hamiltonian H_F is given by the following matrix:H_F=[ ⋱ ⋮ ⋮ ⋮ ⋱; ⋯ H_0-ωH_+1H_+2 ⋯; ⋯H_-1 H_0H_+1 ⋯; ⋯H_-2H_-1 H_0+ω ⋯; ⋱ ⋮ ⋮ ⋮ ⋱ ].§.§ Time-periodic driving V(V_x, 0, 0, 0) In this subsection, we first consider the time-periodic driving V(V_x, 0, 0, 0). Then, the diagonal block matrices in the Floquet Hamiltonian H_F are given by:H_0= ∑_j=1^5h_0, jΓ_j,withh_0, 1= m+c[ cos(k_x)𝒥_0(V_x)+cos(k_y)+cos(k_z)+cos(k_w) ],h_0, 2= sin(k_x)𝒥_0(V_x), h_0, 3=sin(k_y),h_0, 4= sin(k_z), h_0, 5=sin(k_w),and the off-diagonal block matricesH_-l= ∑_j=1^5h_-l, jΓ_j, H_+l=H_-l^†,withh_-l, 1= c/2[e^i k_x+(-1)^le^-i k_x]𝒥_l(V_x),h_-l, 2= -i/2[e^i k_x-(-1)^le^-i k_x]𝒥_l(V_x),h_-l, 3= h_-l, 4=h_-l, 5=0.By numerically diagonalizing the Floquet Hamiltonian H_F, we show the logarithm of the bulk gap as a function of m and V_x in Fig. <ref>(a). Here, the bulk gap refers to the quasienergy gap near ε=0. The color bar converging to white represents the bulk gap close to 0. In the high-frequency limit (i.e., ω≫ E_W), we can obtain the effective Floquet Hamiltonian H_eff. The black dashed lines in Fig. <ref>(a) can be given by solving for the effective Hamiltonian, m=±3±𝒥_0(V_x), m=±1±𝒥_0(V_x) (see Appendix <ref> for details). Obviously, the results obtained by solving numerically for the Floquet Hamiltonian H_F are consistent with those obtained by solving analytically for the effective Hamiltonian H_eff.Accordingly, we plot the phase diagram of the system as a function of m and V_x in Fig. <ref>(b). It is found that as the amplitude of the periodic driving V_x increases, the topologically nontrivial regions with nonzero second Chern numbers diminish until they completely vanish at 𝒥_0(V_x,c)=0. However, when V_x>V_x,c≈2.405, the topologically nontrivial phase of the system reappears. Figures <ref>(c) and <ref>(d) show the second Chern number as a function of the Dirac mass m when V_x<V_x,c and V_x>V_x,c, respectively. It is found that the reappeared topologically nontrivial phase exhibits the same second Chern number as before the phase transition. Figure <ref> illustrates the quasienergy spectra of different topologically nontrivial phases at V_x=3.5 when the OBC along the x direction. The right panels show the quasienergy spectra in the interval [-ω/2,ω/2]. The results of the system with OBC along the y, z, or w direction are the same as the result in Fig. <ref>, i.e., the number of gapless 3D topological boundary states is equal to the value of the second Chern number. Here, we only discuss the topological phase transitions for V_x≠0. It should be noted that topological phase transitions for V_n≠0 (n=y, z, or w) is similar to those presented in this subsection. §.§ Topological phases with even second Chern number In this subsection, we consider the periodic driving V(V_x, V_y, 0, 0), where V_x=V_y=V_xy≠0. 
Then, the diagonal block matrices in the Floquet Hamiltonian H_F are given by:H_0= sin(k_x)𝒥_0(V_xy)Γ_2+sin(k_y)𝒥_0(V_xy)Γ_3+sin(k_z)Γ_4+sin(k_w)Γ_5+m(k)Γ_1,and the off-diagonal block matricesH_-l= ∑_j=1^5h_-l, jΓ_j, H_+l=H_-l^†,withh_-l, 1= c/2[e^i k_x+(-1)^le^-i k_x+e^i k_y+(-1)^le^-i k_y]𝒥_l(V_xy),h_-l, 2= -i/2[e^i k_x-(-1)^le^-i k_x]𝒥_l(V_xy),h_-l, 3= -i/2[e^i k_y-(-1)^le^-i k_y]𝒥_l(V_xy),h_-l, 4= h_-l, 5=0,where m(k)=m+c[cos(k_x)𝒥_0(V_xy)+cos(k_y)𝒥_0(V_xy)+cos(k_z)+cos(k_w) ]. By numerically diagonalizing the Floquet Hamiltonian H_F, we show the bulk gap as a function of m and V_xy in Fig. <ref>(a), where the white region signifies the bulk gap close to zero. Similar to Sec. <ref>, we derive the effective Hamiltonian H_eff for this system based on the Floquet theory in the high-frequency limit (see Appendix <ref> for details). By analytically solving for the effective Hamiltonian H_eff, we can obtain the black dashed lines in Fig. <ref>(a). The black dashed lines correspond to points where the bulk gap is zero, given by m=0, m=±2, m=± 2𝒥_0(V_xy), and m=±2± 2𝒥_0(V_xy). These black dashed lines fit well with the numerical calculations for the Floquet Hamiltonian.In Fig. <ref>(b), we show phase diagram in the (m, V_xy) plane. The color map corresponds to value of the second Chern number. Notably, the position of the phase transition point coincides with that of the closure point in the bulk gap. In the range of 1<|m|<2, it can be observed that as the amplitude of the time-periodic driving V_xy increases, the system undergoes a phase transition from a topological phase with C_2=-3 to another topological phase with C_2=-1 as shown in Fig. <ref>(c). The quasienergy spectra of the quasi-3D system are shown in Figs. <ref>(a) and <ref>(b). It is evident that the time-periodic driving V_xy induces the emergence of the topological phase with C_2=-1, distinguished by a single gapless Dirac point. When V_xy=1.5 (V_xy=3.5), this Dirac point is located at the point Y (k_y=π, k_z=0, k_w=0) [G (0, 0, 0)].Moreover, it is intriguing that for |m|<1, the periodic driving can induce the emergence of a topologically nontrivial system with an even second Chern number C_2=±2. Figure <ref>(d) illustrates the second Chern number C_2 as a function of the amplitude V_xy when m=-0.4. When the amplitude V_xy exceeds the critical value V_xy,c=0.9184, the system transitions from the topological phase with C_2=-3 to another topological phase with C_2=-2. Within the interval 0.9184<V_xy<2.0415, C_2 keeps a quantized plateau with C_2=-2. As shown in Fig. <ref>(c), there are two Dirac points in the FBZ of the quasi-3D system at points Z (k_y=0, k_z=π, k_w=0) and W (k_y=0, k_z=0, k_w=π). When V_xy>2.0415, the topological properties of the system are destroyed until V_xy>2.8371. In the interval V_xy∈(2.8371, 4.9307), the topologically nontrivial phase with the second Chern number C_2=-2 reemerges. In this case, there are two Dirac points in the FBZ as shown in Fig. <ref>(d). It is noteworthy that in these topologically nontrivial systems induced by V_xy, when the OBC along the z or w direction, the value of the second Chern number does not coincide with the number of Dirac points (see Appendix <ref> for details).§ CONCLUSION In this paper, we investigate the effects of high-frequency time-periodic driving in a 4D TI. First, we find that the time-periodic driving V(V_x, 0, 0, 0) can modulate the topological phase of the 4D TI when V_x≠0. However, there is no newly formed topological phase with distinct second Chern number. 
The Floquet system exhibits similar results when V=V_n≠0 (n=x,y,z,w) (see Appendix <ref> for more details). In addition, we show that the time-periodic driving V(V_x, V_y, 0, 0) can force the system to convert from a topological phase with C_2=±3 to another topological phase with C_2=±1. When the Dirac mass |m|<1, the periodic driving V_xy can additionally induce a topological phase characterized by an even second Chern number C_2=±2. Note that there is no such topologically nontrivial phase with an even second Chern number in the 4D static system. Similarly, the time-periodic driving V(V_x, V_y, V_z, 0) can also induce a topologically nontrivial phase with C_2=±2 (see Appendix <ref>). Furthermore, the Floquet phase diagram can be explained by the approximation theory in the high-frequency limit.Experimentally, electronic circuits can be used to realize the 4D system <cit.>. With tunable complex-phase elements, momentum components of the 4D Dirac model can be periodically driven to realize the time-dependent Hamiltonian <cit.>. Therefore, we expect that the time-periodically driven 4D TIs can be experimentally realized through topolectrical circuit networks.It is worth mentioning that in our another work, we find that the time-periodic driving can induce a topological phase transition from a 4D normal insulator to a 4D FTI with nonzero second Chern number. In that work, the frequency of the time-periodic driving is smaller than the bandwidth of the static system, so the driven bands and the undriven bands overlap in the resonant quasienergy region. We consider two types of time-periodic driving, including a time-periodic onsite potential and a time-periodic vector potential. It is found that both types of the time-periodic driving can open the resonant quasienergy gap, and induce gapless topological boundary states.§ ACKNOWLEDGMENTSB.Z. was supported by the NSFC (Grant No. 12074107), the program of outstanding young and middle-aged scientific and technological innovation team of colleges and universities in Hubei Province (Grant No. T2020001) and the innovation group project of the Natural Science Foundation of Hubei Province of China (Grant No. 2022CFA012). R.C. acknowledges the support of NSFC (under Grant No. 12304195) and the Chutian Scholars Program in Hubei Province. Z.-R.L. was supported by the National Funded Postdoctoral Researcher Program (under Grant No. GZC20230751) and the Postdoctoral Innovation Research Program in Hubei Province (under Grant No. 351342).§ EFFECTIVE HAMILTONIAN FOR V_X≠0 By employing the Floquet theory in the high-frequency limit (i.e., ω≫ E_W), the time-dependent system can be described by a time-independent effective Hamiltonian as <cit.>H_eff=H_0+∑_l≠0[H_-l,H_+l]/lω+𝒪(ω^-2),where ω is the frequency of the time-periodic driving. When V_x≠0, [H_-l,H_+l]=0, thenH_eff=H_0,where the specific form of H_0 is shown in Eq. (<ref>). In the static system, the bulk gap is closed at m=±4, m=±2, m=0. In these cases, the high symmetry points of the bulk gap closure in the FBZ are as follows:m=-4:  Γ (k_x=0, k_y=0, k_z=0, k_w=0),m=-2:  X (π,0,0,0), Y (0,π,0,0), Z (0,0,π,0),W (0,0,0,π),m=0:  M_xy (π,π,0,0), M_xz (π,0,π,0), M_xw (π,0,0,π),M_yz (0,π,π,0), M_yw (0,π,0,π), M_zw (0,0,π,π),m=2:  R_xyz (π,π,π,0), R_xyw (π,π,0,π),R_xzw (π,0,π,π), R_yzw (0,π,π,π),m=4:  Q (π,π,π,π).When V_x≠0, by solving for the effective Hamiltonian H_eff [Eq. 
(<ref>)] at different high symmetry k points, we obtain the equations for the Dirac mass m versus V_x when the bulk gap vanishes, m=±3±𝒥_0(V_x), m=±1±𝒥_0(V_x).§ EFFECTIVE HAMILTONIAN FOR V_X=V_Y=V_XY≠0 Similarly, by employing the Floquet theory in the high-frequency limit, the effective Hamiltonian for V_x=V_y=V_xy≠0 is given by the following formula:H_eff=H_0+∑_l≠0[H_-l,H_+l]/lω+𝒪(ω^-2),where [H_-l,H_+l]=0. Therefore,H_eff=H_0= sin(k_x)𝒥_0(V_xy)Γ_2+sin(k_y)𝒥_0(V_xy)Γ_3+sin(k_z)Γ_4+sin(k_w)Γ_5+m(k)Γ_1,where m(k)=m+c[cos(k_x)𝒥_0(V_xy)+cos(k_y)𝒥_0(V_xy)+cos(k_z)+cos(k_w) ]. By solving for the effective Hamiltonian H_eff [Eq. (<ref>)], we conclude that the bulk gap vanishes when m=0, m=±2, m=± 2𝒥_0(V_xy), or m=±2± 2𝒥_0(V_xy).§ QUASIENERGY GAP DISTRIBUTIONS FOR QUASI-3D SYSTEMS In the main text, we note that in the static system or the Floquet system with V_x≠0, the number of gapless 3D topological boundary states is equal to the value of the second Chern number. When V_x=V_y=V_xy≠0, the periodic driving can induce the emergence of topological nontrivial system with C_2=-2.In this appendix, we fix m=-0.4 and V_xy=1.5. The system is characterized by C_2=-2. In Figs. <ref>(a) and <ref>(b), we show the quasienergy gap and spectra in the (k_y, k_z, k_w) space with OBC along the x direction. In Fig. <ref>(a), the color approaching purple indicate that the quasienergy gap is close to zero. It can be found that there are two gapless Dirac points at Z (k_y=0, k_z=π, k_w=0) and W (0, 0, π). This matches the value of the second Chern number. The number of gapless Dirac points remains the same for the system with OBC along the y direction. However, there are four gapless Dirac points in the (k_x, k_y, k_z) space with OBC along the w direction as shown in Figs. <ref>(c) and <ref>(d). The results with OBC along the z direction are the same as those along the w direction. It is obvious that the number of gapless Dirac points does not match the value of the second Chern number for the system with OBC along the z direction or the w direction. When m = -1.7 and V_xy=1.5, the topologically nontrivial system with C_2=-1 exhibits the same phenomenon.§ TIME-PERIODIC DRIVING V(V_X, V_Y, V_Z, 0) In this appendix, we consider the periodic driving V(V_x, V_y, V_z, 0), where V_x=V_y=V_z=V_xyz≠0. Then, the diagonal block matrices in the Floquet Hamiltonian H_F are given by:H_0= sin(k_x)𝒥_0(V_xyz)Γ_2+sin(k_y)𝒥_0(V_xyz)Γ_3+sin(k_z)𝒥_0(V_xyz)Γ_4+sin(k_w)Γ_5+m(k)Γ_1,and the off-diagonal block matricesH_-l= ∑_j=1^5h_-l, jΓ_j, H_+l=H_-l^†,withh_-l, 1= c/2∑_n=x,y,z[e^i k_n+(-1)^le^-i k_n]𝒥_l(V_xyz),h_-l, 2= -i/2[e^i k_x-(-1)^le^-i k_x]𝒥_l(V_xyz),h_-l, 3= -i/2[e^i k_y-(-1)^le^-i k_y]𝒥_l(V_xyz),h_-l, 4= -i/2[e^i k_z-(-1)^le^-i k_z]𝒥_l(V_xyz),h_-l, 5= 0,where m(k)=m+c[cos(k_x)𝒥_0(V_xyz)+cos(k_y)𝒥_0(V_xyz)+cos(k_z)𝒥_0(V_xyz)+cos(k_w) ]. By solving the Floquet Hamiltonian, we show the bulk gap as a function of m and V_xyz in Fig. <ref>(a), where the white region indicates the bulk gap approaching zero. Furthermore, in the high-frequency limit, we can find the effective Hamiltonian for V_x=V_y=V_z=V_xyz≠ 0:H_eff=H_0+∑_l≠0[H_-l,H_+l]/lω+𝒪(ω^-2),where [H_-l,H_+l]=0. Therefore, we haveH_eff= sin(k_x)𝒥_0(V_xyz)Γ_2+sin(k_y)𝒥_0(V_xyz)Γ_3+sin(k_z)𝒥_0(V_xyz)Γ_4+sin(k_w)Γ_5+m(k)Γ_1.By solving for the effective Hamiltonian H_eff [Eq. (<ref>)], we conclude that the bulk gap vanishes when m=±1± 3𝒥_0(V_xyz) or m=±1±𝒥_0(V_xyz). In Fig. 
<ref>, the black dashed lines represent the points where the bulk gap vanishes. Correspondingly, we plot the phase diagram with respect to m and V_xyz in Fig. <ref>(b). The color map represents the values of the second Chern number. When |m|>2, the periodic driving can destroy the topologically nontrivial properties of the system, leading to a reduction of the second Chern number of the system from C_2=±1 to C_2=0. Furthermore, within the interval |m|<2, the periodic driving can induce topological phase transitions, causing a transformation from a topological phase with C_2=±3 to other topological phases with C_2=±1 or C_2=±2. Next, we fix m=-1 and V_xyz=1.5. The system is characterized by C_2=-2. Figures <ref>(c) and <ref>(d) respectively show the distribution of the quasienergy gap for the systems with OBC along the x and w directions. There are two gapless Dirac points in (k_y, k_z, k_w) space with OBC along the x direction. Similarly, there are two gapless Dirac points in quasi-3D systems with OBC along the y or z direction. However, there are four gapless Dirac points in the system with OBC along the w direction, which does not match the value of the second Chern number C_2=-2.

§ TIME-PERIODIC DRIVING V(V_X, V_Y, V_Z, V_W)

In this appendix, we consider the periodic driving V(V_x, V_y, V_z, V_w), where V_x=V_y=V_z=V_w=V≠0. Then, the diagonal block matrices in the Floquet Hamiltonian H_F are given by:

H_0= [sin(k_x)Γ_2+sin(k_y)Γ_3+sin(k_z)Γ_4+sin(k_w)Γ_5]𝒥_0(V)+m(k)Γ_1,

and the off-diagonal block matrices

H_-l= ∑_j=1^5 h_-l, j Γ_j, H_+l=H_-l^†,

with

h_-l, 1= c/2∑_n=x,y,z,w[e^i k_n+(-1)^l e^-i k_n]𝒥_l(V), h_-l, 2= -i/2[e^i k_x-(-1)^l e^-i k_x]𝒥_l(V), h_-l, 3= -i/2[e^i k_y-(-1)^l e^-i k_y]𝒥_l(V), h_-l, 4= -i/2[e^i k_z-(-1)^l e^-i k_z]𝒥_l(V), h_-l, 5= -i/2[e^i k_w-(-1)^l e^-i k_w]𝒥_l(V),

where m(k)=m+c[cos(k_x)+cos(k_y)+cos(k_z)+cos(k_w)]𝒥_0(V). Meanwhile, by employing the Floquet theory in the high-frequency limit, we can derive the effective Hamiltonian H_eff as follows:

H_eff=H_0+∑_l≠0 [H_-l, H_+l]/lω+𝒪(ω^-2).

Since [H_-l, H_+l]=0, we therefore have

H_eff=H_0= [sin(k_x)Γ_2+sin(k_y)Γ_3+sin(k_z)Γ_4+sin(k_w)Γ_5]𝒥_0(V)+m(k)Γ_1.

Figures <ref>(a) and <ref>(b) show the bulk gap and the second Chern number for the Floquet Hamiltonian in the (m, V) plane. In Figs. <ref>(a) and <ref>(b), the black dashed lines [m=0, m=±2𝒥_0(V), m=±4𝒥_0(V)] represent the points where the bulk gap vanishes, which are obtained by solving for the effective Hamiltonian H_eff. As shown in Fig. <ref>(b), the topologically nontrivial regions gradually diminish with increasing amplitude of the periodic driving, until they completely vanish at V_c≈2.405. Then, as the periodic driving further intensifies, the topologically nontrivial phases reemerge. When m=-0.4 and V=3.5, the second Chern number of the Floquet system is C_2=-3. In Figs. <ref>(c) and <ref>(d), we show the quasienergy spectra for the system with OBC along the x and w directions, respectively. One can observe that the number of gapless Dirac points is the same in the quasi-3D system with OBC along the x or w direction. In addition, there are also three gapless Dirac points in quasi-3D systems with OBC along the y or z direction.
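As a quick numerical check of the boundaries quoted in this appendix (an editorial sketch using SciPy, not part of the original calculations): the critical amplitude at which all the gap-closing lines m=0, m=±2𝒥_0(V), m=±4𝒥_0(V) merge at m=0 is the first zero of the Bessel function 𝒥_0, consistent with V_c≈2.405 above.

```python
import numpy as np
from scipy.special import j0, jn_zeros

Vc = jn_zeros(0, 1)[0]           # first zero of the Bessel function J_0
print(f"V_c = {Vc:.4f}")         # 2.4048..., consistent with V_c ~ 2.405

def gap_closing_masses(V):
    """Dirac masses at which the bulk gap of H_eff closes for driving V
    along all four directions (this appendix): m = 0, +-2 J_0(V), +-4 J_0(V)."""
    b = j0(V)
    return sorted({0.0, 2 * b, -2 * b, 4 * b, -4 * b})

print(gap_closing_masses(3.5))   # reentrant phase boundaries for V > V_c
```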
http://arxiv.org/abs/2401.02973v1
{ "authors": [ "Zheng-Rong Liu", "Rui Chen", "Bin Zhou" ], "categories": [ "cond-mat.mes-hall" ], "primary_category": "cond-mat.mes-hall", "published": "20231226135757", "title": "Periodically driven four-dimensional topological insulator with tunable second Chern number" }
Considerations about temporal rescaling, discretization, and linearization of RNNs

Mariano Caruso (mcaruso@fidesol.org), Fundación I+D del Software Libre and Facultad de Ciencias, Universidad Internacional de la Rioja, España.
Cecilia Jarne (cecilia.jarne@unq.edu.ar), Universidad Nacional de Quilmes, Departamento de Ciencia y Tecnología, CONICET, and Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark.

December 2023

We explore the mathematical foundations of Recurrent Neural Networks (RNNs) and three fundamental procedures: temporal rescaling, discretization, and linearization. These techniques provide essential tools for characterizing RNN behaviour, enabling insights into temporal dynamics, practical computational implementation, and linear approximations for analysis. We discuss the flexible order of application of these procedures, emphasizing their significance in modelling and analyzing RNNs for computational neuroscience and machine learning applications. We explicitly describe under what conditions these procedures can be interchanged.

§ INTRODUCTION

Recurrent Neural Networks (RNNs) are universal approximators of dynamical systems <cit.>. They have become invaluable tools in computational neuroscience and machine learning. In computational neuroscience, RNNs excel at modelling the temporal dynamics of neural processes, allowing us to simulate and analyze brain functions and interactions underlying complex processes such as motor control and decision-making <cit.>. Their ability to capture sequential dependencies and recurrent patterns makes them well-suited for tasks such as understanding information processing in the brain and decoding neural signals. RNNs have also found several applications in machine learning, powering sequential data analysis and time-series prediction <cit.>.

There are three main procedures we can consider applying to differential equations, and in particular to Recurrent Neural Networks, in order to characterize the behaviour of the system that they model: discretization, temporal rescaling, and linearization <cit.>. We will show that, under certain constraints, the result of applying these procedures is independent of the order, meaning that the order does NOT alter the outcome or, in other words, that they commute when considering RNNs.

Transitioning from a continuous-time Recurrent Neural Network (RNN) to one suitable for computer implementation involves discretization. In continuous-time RNNs, the dynamics is modelled using differential equations, which describe how neuron activations change continuously over time. However, computers operate in discrete time, meaning that they process information in discrete time steps. This is well known, and we routinely employ numerical methods to approximate continuous-time behaviour in a discrete-time framework.

First, we choose a small time step, often denoted as Δ, to divide time into discrete intervals. Then, we use methods such as the Euler method, Runge-Kutta, or more advanced techniques to iteratively update the neuron activations at each time step <cit.>. The key idea is to approximate continuous differential equations, such as those governing the RNN's dynamics, using finite differences.
This process effectively transforms continuous-time RNN equations into a discrete-time form. This transition allows us to use computers for training and inference while still capturing essential aspects of the continuous-time model's behaviour, enabling the application of RNNs to real-world tasks such as time series prediction and natural language processing in machine learning, and allowing us to develop computational models of the brain and cognitive tasks.

It is important to note that, despite this discretization, the behaviour of the dynamical system can still be characterized. The discrete-time RNN's stability, for instance, can be assessed by examining the eigenvalues of its weight matrix, shedding light on the presence and stability of fixed points in the network dynamics <cit.>.

Temporal rescaling can improve the numerical stability of solving differential equations. By rescaling the time variable, one can potentially reduce the condition number of the underlying linear system, which can result in more accurate and stable numerical solutions, especially when using numerical integration methods. In certain scenarios, such as simulating long-term dynamics or processes occurring over a wide range of timescales, temporal rescaling allows us to efficiently capture the behaviour of a system. By adjusting the time units, we can focus computational resources where they are most needed, avoiding unnecessary computations at very short or very long times. It can also help identify dominant modes, equilibrium points, or oscillations in the system, and provide a better understanding of the underlying dynamics.

Linearization of dynamical systems simplifies complex systems into linear approximations, making it (sometimes) easier to analyze and understand their behaviour. Linear systems are well studied and have well-established mathematical tools for analysis. Linear systems theory also provides tools for control design. By linearizing a system around a desired operating point, for example, engineers can design control strategies that work effectively in the vicinity of that point, ensuring stability and desired performance.

The rest of the paper is organized as follows. Section <ref> introduces the mathematical foundation of RNNs by considering a set of N artificial neurons, each characterized by a dynamic activity function. In Section <ref>, we explore explicitly the effects of temporal rescaling on this equation. Section <ref> focuses on the essential process of discretization required to compute the differential equation (<ref>) when simulating neural networks, and Section <ref> delves into the concept of linearization in dynamical systems and explores the role of linearization in the context of activation functions within neural networks. Finally, Section <ref> presents some remarks and a discussion of the three procedures.

§ RNN DEFINITION AND PROCEDURES

Let us consider a set of N artificial neurons and, for each of these, a dynamic quantity called activity, represented by a function h_i: [a, b] ⟶ ℋ ⊆ ℝ with i=1, ⋯, N. We can arrange these N functions into a column vector h = (h_1, ⋯, h_i, ⋯, h_N)^𝔱, where 𝔱 denotes matrix transposition. The vector h represents the state of the network's activity (formed by the N neurons) at time t. On the other hand, there is a series of M input functions, x_k: [a, b] ⟶ 𝐗 ⊆ ℝ with k=1, ⋯, M, which can be arranged into a column vector x = (x_1, ⋯, x_k, ⋯, x_M)^𝔱.
For recurrent neural networks, the activity vector h satisfies: h'(t) = -(1/τ) h(t) + σ(w h(t) + w^in x(t)), where h'(t) represents the derivative with respect to time in the usual sense. The matrices w and w^in are of size N×N and N×M, respectively. The elements w_ij of the matrix w contain the recurrent synaptic connections, and, similarly, the elements w^in_ik of w^in contain the input connections. σ is an activation vector field, defined from ℝ^N to itself, satisfying σ(0)=0 (since neuronal activity cannot be revived instantly, i.e., the result of activating a neuron with null activity is null). Additionally, each of its components is defined as the application of a single non-linear function σ: ℝ ⟶ ℝ. That is, for a vector φ ∈ ℝ^N, expressed in components as φ=(φ_1,⋯,φ_N), we have σ(φ)=(σ(φ_1),⋯,σ(φ_N)). On the other hand, τ is the half-life time of each signal h_i in the case where the network is completely disconnected, both internally (w=0) and externally (w^in=0).

We can write (<ref>) in terms of its components: h_i'(t) = -(1/τ) h_i(t) + σ(∑_j=1^N w_ij h_j(t) + ∑_k=1^M w^in_ik x_k(t)). The network's activity state is determined by (<ref>): it is updated as a result of the interaction between the neurons via w, with external signals x influencing the neurons' activity according to w^in, together with some initial condition.

Instead of starting from (<ref>), it may be useful to start from the "biased" version of the equation, as follows: h'(t) = -(1/τ) h(t) + σ(w h(t) + b + w^in x(t)), where b=(b_1,⋯,b_i,⋯,b_N)^𝔱 is another column vector, and each component is the bias for the corresponding activity h_i.

To better understand, let us work with a more compact notation: h'(t) = F(h(t), x(t)). Although we do not see any "recurrence" relationship in the discrete-mathematics sense, we can see the seed of such a concept in (<ref>), indicating that changes in h over time depend on its current state.

Three main procedures can be performed on this differential equation in order to characterize the behaviour of the system it models: temporal rescaling, discretization, and linearization. We will see that the result of applying these procedures is independent of the order; in other words, these procedures commute under the hypotheses considered here. Intuitively, we can anticipate that discretization and linearization are procedures applied to different elements of (<ref>): discretization enters from the left (on the differential operator), while linearization does so from the right (on F). We will see that, when temporal discretization is performed, a recurrence relationship, or a difference equation of the same order as the differential equation (<ref>), becomes apparent.

§.§ Temporal Rescaling

Let us consider h(t), the vector containing the activity signals in an RNN, i.e., the hidden layer. Then h'(t) = -λ h(t) + σ(ω·h(t) + ω^in·x(t) + b), where λ := τ^-1 and τ is the characteristic relaxation time of the network in the absence of interaction. By performing a temporal rescaling t ⟼ s := t/τ, with τ fixed, we have h(t) = h(τs) =: 𝔥(s), and similarly x(t) = x(τs) =: ξ(s). We obtain: 𝔥'(s) = -𝔥(s) + τ σ(ω·𝔥(s) + ω^in·ξ(s) + b). Now τ reappears in front of σ. It is possible to modify the weight matrices ω and ω^in to ω^* and ω^in*, and the biases b to b^*, in such a way that τ is absorbed, resulting in: 𝔥'(s) = -𝔥(s) + σ(ω^*·𝔥(s) + ω^in*·ξ(s) + b^*).

§.§ Discretization

To compute (<ref>), in the sense of using a computer to perform simulations, it is always necessary to perform a discretization process.
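As a numerical sanity check of this rescaling, one can integrate the equation in t and the rescaled equation in s = t/τ and confirm that 𝔥(s) = h(τs) before any absorption of τ into the weights. The sketch below does this with σ = tanh; the network size, weights, and drive x(t) are illustrative assumptions, not values from the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
N, M, tau = 4, 2, 0.25
w, w_in = 0.5 * rng.standard_normal((N, N)), rng.standard_normal((N, M))
b = 0.1 * rng.standard_normal(N)
x = lambda t: np.array([np.sin(t), np.cos(2.0 * t)])  # external input x(t)

# Original dynamics in t, and rescaled dynamics in s = t / tau (xi(s) = x(tau * s)).
f = lambda t, h: -h / tau + np.tanh(w @ h + w_in @ x(t) + b)
g = lambda s, H: -H + tau * np.tanh(w @ H + w_in @ x(tau * s) + b)

h0, T = rng.standard_normal(N), 2.0
sol_t = solve_ivp(f, (0.0, T), h0, dense_output=True, rtol=1e-10, atol=1e-12)
sol_s = solve_ivp(g, (0.0, T / tau), h0, dense_output=True, rtol=1e-10, atol=1e-12)

s = np.linspace(0.0, T / tau, 9)
print(np.max(np.abs(sol_t.sol(tau * s) - sol_s.sol(s))))  # ~1e-8: same trajectory
```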
Let us consider that the time over which we are interested in studying such a system is contained within the interval [a, b] ⊂ ℝ. We can then bring in a well-known procedure, which involves slicing [a, b] into n equal-sized slices of size Δ=(b-a)/n. From this, we obtain the sequence of times: t_k = a + kΔ, k=0,⋯,n. Of course, t_0=a and t_n=b. Using one-sided (forward) differences over these slices, the approximation holds: h'(t_k) ≃ [h(t_k+Δ) - h(t_k)]/Δ. From the sequence of times (<ref>), we have t_k+1 = t_k + Δ, valid for k=0,⋯,n-1. Then, we will have: h'(t_k) ≃ [h(t_k+1) - h(t_k)]/Δ. From the slicing {t_k}_k, the sequence of snapshots of the corresponding neural activities {h(t_k)}_k is defined. So, under (<ref>), now considered as an exact equality, the differential equation (<ref>) can be rewritten as: h(t_k+1) = h(t_k) + F(h(t_k), x(t_k))Δ.

This functions as a recurrence relation: given the activity and excitation signals in a given slice, say at time t_k, this "friendly" relation gives the value of the activity signal for the next slice, at time t_k+1. This meal is suitable for computers, but it can be indigestible for humans due to the tediously repetitive nature of these, as their name suggests, recurrent tasks, especially if they contain that ingredient, non-linearity, within the recipe given by F.

In computational neuroscience, fixed points of the RNN models defined by Equations <ref> and <ref> are commonly used to model neural responses to static or slowly changing stimuli. Such equations are used more explicitly in computational neuroscience, while Equation <ref> is more common in machine learning, but they are closely related. Equation <ref> has the same fixed points as Equation <ref>, but hyperbolic stability is obtained when the eigenvalues have a magnitude less than 1. Hence, if a fixed point is stable for Equation <ref>, it is also stable for Equation <ref>, but the converse is not true <cit.>. In machine learning, RNNs are typically used to learn mappings from input time series, x(t), to output time series, y(t), and they are often trained using backpropagation through time. In computational neuroscience, RNNs of the form of Equation <ref> are studied using such mappings and also for their fixed-point properties <cit.>.

§.§ Linearization

Linearization of a dynamical system can be useful in some cases. It simplifies complex nonlinear systems into linear approximations, making it easier to analyze and understand their behaviour. Linear systems are well studied and have well-established mathematical tools for analysis. Linear systems theory provides tools for control design: by linearizing a system around a desired operating point, for example, engineers can design control strategies that work effectively in the vicinity of that point, ensuring stability and desired performance.

Also, linearization is particularly valuable when dealing with systems that exhibit nonlinear behaviour around specific operating points. By focusing on local approximations, one can gain insights into how the system behaves in the vicinity of these points. One must be cautious, since linearization is effective when nonlinearities are small and can be approximated as deviations from a stable operating point. In systems with strong, pervasive nonlinearities, linear approximations may be inaccurate and provide little insight. Linearization relies on smooth functions and continuous derivatives.
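In code, the recurrence is a single loop. The sketch below implements h(t_k+1) = h(t_k) + F(h(t_k), x(t_k))Δ for the biased equation; the function names and the choice σ = tanh are illustrative assumptions.

```python
import numpy as np

def euler_simulate(h0, x, w, w_in, b, tau, a, b_end, n):
    """Iterate h(t_{k+1}) = h(t_k) + F(h(t_k), x(t_k)) * Delta on t_k = a + k * Delta,
    with F(h, x) = -h / tau + sigma(w h + w_in x + b) and sigma = tanh."""
    delta = (b_end - a) / n
    ts = a + delta * np.arange(n + 1)          # t_0 = a, ..., t_n = b_end
    H = np.empty((n + 1, len(h0)))
    H[0] = h0
    for k in range(n):                         # the recurrence relation
        F = -H[k] / tau + np.tanh(w @ H[k] + w_in @ x(ts[k]) + b)
        H[k + 1] = H[k] + delta * F
    return ts, H

rng = np.random.default_rng(0)
N, M = 4, 2
w, w_in = 0.5 * rng.standard_normal((N, N)), rng.standard_normal((N, M))
x = lambda t: np.array([np.sin(t), np.cos(2.0 * t)])
h0 = rng.standard_normal(N)

# Halving Delta (doubling n) should only change the result at O(Delta):
_, H1 = euler_simulate(h0, x, w, w_in, np.zeros(N), 1.0, 0.0, 5.0, 500)
_, H2 = euler_simulate(h0, x, w, w_in, np.zeros(N), 1.0, 0.0, 5.0, 1000)
print(np.max(np.abs(H1[-1] - H2[-1])))         # small: first-order accuracy
```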
Systems with non-smooth or discontinuous dynamics are inherently nonlinear and not amenable to linearization. Also, in cases where the state space of a system is very high-dimensional, linearization may lead to overly complex models. Linear approximations may require extensive computation and data collection, making them impractical for systems with large state spaces.

Since the non-linearity in F resides exclusively in σ, linearization is a procedure related to the activation field and to the type of signals (neuronal activity) we are interested in considering. Within this activation field σ, let us now examine each component function σ: ℝ ⟶ ℝ and assume that σ(φ) is k-times differentiable at φ=0. By Taylor's theorem, there exists a remainder function R_k(φ) that allows us to write it as:

σ(φ) = σ(0) + σ'(0)φ + (σ''(0)/2!)φ^2 + ⋯ + (σ^(k)(0)/k!)φ^k + R_k(φ)φ^k.

Activation functions σ are often chosen such that their tangent line has a slope of 1 at φ=0. In other words, the activation function near the origin resembles the identity function, recalling that σ(0)=0. Using all of this, and for sufficiently small φ, we can approximate σ(φ) ≃ φ.

This procedure is valid for regimes of constrained neuronal activity. From now on, we will refer to this as the "regular regime." We are not saying that the linear approximation is valid only in the "regular regime"; we simply assert that in this regime the intensity of neuronal activity is so weak that there is a formal procedure justifying the linear approximation. In fact, this approximation is also used in the case of long times. The reason for this can be justified by the differential equation, and it can be correct to assume a certain neuronal tranquillity in the long term. By neuronal tranquillity in the long term, we mean that, over time, either the matrix A (which is diagonalizable) is such that all of its eigenvalues have a real part less than 0 (this is called asymptotic stability), or the neuronal activation function, which takes the weighted sum of the activity signals of each neuron, gets things in order and flattens out the whole situation. In this way, under linearization, we have:

F(h(t), x(t)) ≃ -(1/τ) h(t) + w h(t) + b + w^in x(t).

In this case, the differential equation (<ref>) takes the form:

h'(t) = (w - (1/τ) I) h(t) + b + w^in x(t),

where I is the identity matrix, in this case of size N×N.

§.§ An additional comment

We have noticed that both (<ref>) and (<ref>) are discretized as follows:

h_i(t+Δ) - h_i(t) = -(Δ/τ) h_i(t) + σ(∑_j=1^N w_ij h_j(t) + b_i + ∑_k=1^M w^in_ik x_k(t)) Δ,

valid for a sufficiently small Δ. If we choose τ = 1 = Δ in some time scale (in this case, 10^-3 seconds), we have:

h_i(t+1) = σ(∑_j=1^N w_ij h_j(t) + b_i + ∑_k=1^M w^in_ik x_k(t)).

In vector form, this is written more compactly as:

h(t+1) = σ(w h(t) + b + w^in x(t)).

Both equations are mathematically correct under these assumptions. However, it is uncertain whether significant information about the network's behaviour is lost by taking the slices of the temporal discretization equal to the mean lifetime.
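The τ = Δ = 1 reduction is immediate to verify: the decay term -(Δ/τ)h(t) cancels h(t) exactly, leaving the familiar discrete-time update used in machine learning. A minimal sketch (σ = tanh and the random weights are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 4, 2
w, w_in = 0.5 * rng.standard_normal((N, N)), rng.standard_normal((N, M))
b, h, x_t = 0.1 * rng.standard_normal(N), rng.standard_normal(N), rng.standard_normal(M)

def euler_step(h, x_t, tau, delta):
    # h(t + Delta) = h(t) - (Delta / tau) * h(t) + sigma(w h + b + w_in x) * Delta
    return h - (delta / tau) * h + np.tanh(w @ h + b + w_in @ x_t) * delta

# With tau = Delta = 1, h(t) cancels and h(t + 1) = sigma(w h(t) + b + w_in x(t)).
print(np.allclose(euler_step(h, x_t, tau=1.0, delta=1.0),
                  np.tanh(w @ h + b + w_in @ x_t)))  # True
```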
In particular, it will depend on whether our interest lies in the time scale associated with the decay of individual neurons, or whether we are interested in analyzing the collective emergent behaviour and assume that the activity does not decay before engaging in the excitation of the connected output neurons.

§ DISCUSSION

We have delved into the fundamental mathematical framework of Recurrent Neural Networks (RNNs) and introduced procedures such as temporal rescaling, discretization, and linearization for characterizing the system's behaviour. Each of these procedures is integral to understanding and modelling RNNs effectively. Using the compact notation F(h(t), x(t)) = σ(w h(t) + b + w^in x(t)), A = w - (1/τ) I, and B = w^in, we construct the diagram in Fig. <ref>, which summarizes the procedures of discretization, linearization, and temporal rescaling.

Temporal rescaling, as described in the text, is a crucial tool for studying RNNs over different time scales. By introducing a scaling factor τ and rescaling time, we can observe how the network's dynamics change, potentially uncovering insights about its behaviour at different temporal resolutions. This procedure is particularly useful when dealing with networks that exhibit distinct behaviours or patterns over varying time intervals.

Discretization, on the other hand, plays a pivotal role in implementing RNN simulations on computers. It transforms the continuous-time differential equation into a discrete-time recurrence relation, making it computationally tractable. This is essential for simulating and analyzing RNNs in practice, allowing researchers to explore their behaviour and capabilities systematically.

Linearization is another valuable tool that simplifies complex nonlinear RNNs into linear approximations. This simplification aids in analyzing the network's behaviour around specific operating points and in designing control strategies. However, it is important to note that linearization is most effective when nonlinearities are relatively small, and it may not be suitable for highly nonlinear systems.

These procedures are standard practices in the field of RNNs, and their order of application is often flexible, as they commute without altering the final outcome. While they are powerful tools for modelling and analyzing RNNs, it is essential to choose the appropriate procedure based on the specific characteristics and goals of the neural network being studied. Additionally, it is worth considering that these procedures may not capture the full complexity of certain networks with strong, pervasive nonlinearities or large state spaces, highlighting the need for a thoughtful approach to their application.

§ DATA AVAILABILITY STATEMENT

No new data were created or analysed in this study.

§ ACKNOWLEDGMENTS

CONICET and UNQ supported the present work. The authors acknowledge support from PICT 2020-01413.
http://arxiv.org/abs/2312.15974v1
{ "authors": [ "Mariano Caruso", "Cecilia Jarne" ], "categories": [ "cs.NE" ], "primary_category": "cs.NE", "published": "20231226100033", "title": "Considerations about temporal rescaling, discretization, and linearization of RNNs" }
Systematic adaptation of network depths at runtime can be an effective way to control inference latency and meet the resource conditions of various devices. However, previous depth-adaptive networks do not provide general principles or a formal explanation of why and which layers can be skipped, and, hence, their approaches are hard to generalize and require long and complex training steps. In this paper, we present an architectural pattern and training method for adaptive depth networks that can provide flexible accuracy-efficiency trade-offs in a single network. In our approach, every residual stage is divided into two consecutive sub-paths with different properties. While the first sub-path is mandatory for hierarchical feature learning, the other is optimized to incur minimal performance degradation even if it is skipped. Unlike previous adaptive networks, our approach does not iteratively self-distill a fixed set of sub-networks, resulting in significantly shorter training time. However, once deployed on devices, it can instantly construct sub-networks of varying depths to provide various accuracy-efficiency trade-offs in a single model. We provide a formal rationale for why the proposed architectural pattern and training method can reduce overall prediction errors while minimizing the impact of skipping selected sub-paths. We also demonstrate the generality and effectiveness of our approach with various residual networks, both convolutional neural networks and vision transformers.

§ INTRODUCTION

Modern deep neural networks such as convolutional neural networks (CNNs) and transformers <cit.> provide state-of-the-art performance at high computational costs, and, hence, many efforts have been made to leverage their inference capabilities on various resource-constrained devices. Those efforts include compact architectures <cit.>, network pruning <cit.>, weight/activation quantization <cit.>, and knowledge distillation <cit.>, to name a few. However, those approaches provide static accuracy-efficiency trade-offs, and, hence, it is infeasible to deploy a single model that meets the resource constraints of all kinds of devices.

There have been some attempts to provide predictable adaptability to neural networks by exploiting the redundancy in either network depths <cit.>, widths <cit.>, or both <cit.>. However, one major difficulty with prior adaptive networks is that they are hard to train and require significantly longer training time than non-adaptive networks. For example, most adaptive networks select a fixed number of sub-networks of varying depths or widths and train them iteratively, mostly by self-distilling knowledge from the largest sub-network (also referred to as the super-net) <cit.>. However, this iterative self-distillation takes a long time and can generate conflicting training objectives for different parameter-sharing sub-networks, potentially resulting in worse performance <cit.>. Further, unlike width-adaptive networks, no general principle has been proposed for how to select sub-networks for depth adaptation, since the effect of skipping individual layers has not been formally specified.
In this work, we introduce an architectural pattern and training method for adaptive depth networks that is generally applicable to residual networks, e.g., CNNs and transformers. In the proposed adaptive depth networks, every residual stage is divided into two consecutive sub-paths, and the two sub-paths are trained to have different properties. While the first sub-paths are mandatory for hierarchical feature learning, the second sub-paths are optimized to incur minimal performance degradation even if they are skipped. More specifically, the second sub-path of every residual stage is optimized to preserve the feature distribution of its preceding mandatory sub-path in order to minimize the performance degradation when it is skipped. During training, this property of the second sub-paths is enforced through skip-aware self-distillation, in which only one smallest sub-network, also referred to as the base-net, is jointly trained in order to self-distill intermediate feature distributions at every residual stage, as shown in Figure <ref>-(a). This skip-aware self-distillation requires no explicit self-distillation or additional fine-tuning of individual sub-networks except the base-net, resulting in significantly shorter training time than previous adaptive networks. However, once trained, sub-networks with various depths can be selected instantly from a single network to meet the resource condition of devices, as shown in Figure <ref>-(b). Further, these sub-networks of various depths outperform individually trained non-adaptive networks due to the regularization effect, as shown in Figure <ref>-(c).

In Section <ref>, we discuss the details of our architectural pattern and training algorithm, and show formally that the sub-paths trained with our skip-aware self-distillation are optimized to reduce prediction errors while minimally changing the level of input features. In Section <ref>, we empirically demonstrate that our adaptive depth networks with skippable sub-paths outperform counterpart individual networks, both in CNNs and vision transformers, and achieve actual inference acceleration and energy savings. To the authors' best knowledge, this work is the first general approach to adaptive depth networks, providing a general principle for depth adaptation and a formal explanation of why layers can be skipped with minimal performance degradation.

§ RELATED WORK

Adaptive Networks: In most adaptive networks, parameter-sharing sub-networks are selected by adjusting either widths, depths, or resolutions <cit.>. For example, slimmable neural networks adjust the channel widths of CNN models on the fly for accuracy-efficiency trade-offs, and they exploit switchable batch normalization to handle multiple sub-networks <cit.>. Transformer-based adaptive depth networks have been proposed for language models to dynamically skip some of the layers during inference <cit.>. However, in these adaptive networks, every target sub-network with varying widths or depths needs to be trained explicitly, incurring significant training overheads and potential conflicts between sub-networks.

Dynamic networks <cit.> are another class of adaptive networks that exploit additional control networks or decision gates for input-dependent adaptation of CNN models <cit.> and transformers <cit.>. In particular, most dynamic networks for depth adaptation have some kind of decision gate at every layer (or block) that determines whether the layer can be skipped <cit.>. These approaches are based on the thought that some layers can be skipped on `easy' inputs.
However, the learned policy for skipping layers is opaque to users and does not provide a formal description of when and which layers can be skipped for a given input. Therefore, the network depth cannot be adapted in a predictable manner to meet the resource condition of target devices.

Residual Blocks with Shortcuts: Since the introduction of ResNets <cit.>, residual blocks with shortcuts have received extensive attention because of their ability to train very deep networks, and have been adopted by many CNNs <cit.> and transformers <cit.>. In <cit.>, Veit et al. argue that identity shortcuts create exponentially many paths and result in an ensemble of shallower sub-networks. This view is supported by the fact that removing individual residual blocks at test time does not significantly affect performance, and it has been further exploited to train deep networks <cit.>. Other works argue that identity shortcuts enable residual blocks to perform iterative feature refinement, where each block improves slightly but keeps the semantics of the representation of the previous layer <cit.>. Our work builds upon those views on residual blocks with shortcuts and further extends them for adaptive depth networks by introducing an architectural pattern and training method that exploits the properties of residual blocks more explicitly for selected sub-paths.

§ ADAPTIVE DEPTH NETWORKS

We first present an architectural pattern and training details to build adaptive depth networks. We then discuss the theoretical rationale for how depth adaptation can be achieved with minimal performance degradation.

§.§ Architectural Pattern for Depth Adaptation

In typical hierarchical residual networks such as ResNets <cit.> and Swin transformers <cit.>, the s-th residual stage consists of L identical residual blocks, which transform input features 𝐡^s_1 additively to produce the output features 𝐡^s:

𝐡^s_base = 𝐡^s_1 + F^s_1(𝐡^s_1) + ... + F^s_L/2(𝐡^s_L/2)    (the mandatory sub-path 𝐅^s_base),
𝐡^s = 𝐡^s_super = 𝐡^s_base + F^s_L/2+1(𝐡^s_L/2+1) + ... + F^s_L(𝐡^s_L)    (the skippable sub-path 𝐅^s_skippable).

While a block with a residual function F^s_ℓ (ℓ=1,...,L) learns hierarchical features as traditional compositional networks do <cit.>, previous literature <cit.> demonstrates that a residual function also tends to learn a function that refines already learned features at the same feature level. If a residual block mostly performs feature refinement while not changing the level of input features, the performance of the residual network is not significantly affected by dropping the block at test time <cit.>. However, in typical residual networks, most residual blocks tend to refine features while learning new-level features as well, and, hence, randomly dropping residual blocks at test time degrades performance significantly. Therefore, we hypothesize that if some designated residual blocks can be encouraged during training to focus more on feature refinement, then these blocks can be skipped to save computation at a marginal loss of prediction accuracy at test time.

To this end, we propose an architectural pattern for adaptive depth networks in which every residual stage is divided into two consecutive sub-paths, 𝐅^s_base and 𝐅^s_skippable, as in Equation <ref> and Figure <ref>. While 𝐅^s_base learns the feature representation 𝐡^s_base (= 𝐡^s_L/2+1) with no constraint, the second sub-path 𝐅^s_skippable is constrained to preserve the feature level of 𝐡^s_base and only refine it to produce 𝐡^s_super. Since layers in 𝐅^s_base perform essential transformations for hierarchical feature learning, they cannot be bypassed during inference.
However, since layers in 𝐅^s_skippable only refine 𝐡^s_base, they can be skipped to save computation. If 𝐅^s_skippable is skipped, then the intermediate features 𝐡^s_base become the input to the next residual stage. Therefore, the overall network depth can be adjusted by choosing whether or not to skip 𝐅^s_skippable (s=1, ..., N_s). In Section <ref>, we show that this architectural pattern for adaptive depth networks is generally applicable to a wide range of residual networks.

§.§ Skip-Aware Self-Distillation

Preserving the feature level of 𝐡^s_base in 𝐅^s_skippable implies, more specifically, that the two feature representations 𝐡^s_base and 𝐡^s_super have similar distributions over training input 𝐗. If 𝐡^s_super and 𝐡^s_base have similar distributions, skipping the sub-path 𝐅^s_skippable during inference results in minimal internal covariate shift <cit.> for the following network layers. Kullback-Leibler (KL) divergence measures how different two distributions are over the same random variable, and, hence, we use Equation <ref> to measure the similarity of the two distributions 𝐡^s_base and 𝐡^s_super over input 𝐗:

∑_x∈𝐗 D_KL(𝐡^s_super || 𝐡^s_base)

Algorithm <ref> shows our training method, called skip-aware self-distillation, in which Equation <ref> is included in the loss function while the largest and the smallest sub-networks of 𝐌, called the super-net and the base-net, respectively, are jointly trained [Some frameworks, such as Pytorch's distributed data parallel (DDP), do not support two consecutive forward passes followed by a single backward pass: super.forward → base.forward → loss.backward. For such frameworks, we adapt the algorithm to super.forward → loss_super.backward → base.forward → loss_sub_path.backward. The latter method has similar results and is more memory-efficient.]. In Algorithm <ref>, the forward function of the proposed adaptive depth network 𝐌 accepts an extra argument, `skip', that controls which residual stages skip their skippable sub-paths. For example, if 𝐌 has 4 residual stages, its base-net is selected by passing `skip=[True, True, True, True]'. In steps 5 and 7, the forward passes of the super-net and the base-net are executed for the same input 𝐱. The intermediate features 𝐡_𝐬𝐮𝐩𝐞𝐫 and 𝐡_𝐛𝐚𝐬𝐞 are also obtained during the forward passes of 𝐌. In step 8, D_KL(𝐡^s_super || 𝐡^s_base) from Equation <ref> is included in the loss function loss_base. By minimizing loss_base, the two feature representations 𝐡^s_super and 𝐡^s_base are explicitly enforced to have similar distributions for the same input 𝐱. Further, this step has the effect of transferring knowledge from 𝐡^s_super to 𝐡^s_base at every residual stage. Therefore, 𝐡^s_base is expected to learn a more compact representation from 𝐡^s_super. The hyperparameter α controls the strength of this skip-aware self-distillation. In step 8, D_KL(ŷ_super || ŷ_base) is also included in the loss function for a further distillation effect from the super-net to the sub-networks. Due to the architectural pattern of interleaving the mandatory and the skippable sub-paths, minimizing D_KL(ŷ_super || ŷ_base) also minimizes D_KL(𝐡_super || 𝐡_base) implicitly. Experimental results show that similar results can be achieved when only D_KL(ŷ_super || ŷ_base) is used in loss_base (Section <ref>). This implicit approach is useful when the extraction of intermediate features is tricky. In Algorithm <ref>, only two sub-networks are involved in the training of 𝐌, and, hence, the total training time is no greater than that of training two sub-networks individually.
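To make Algorithm <ref> concrete, the following is a minimal PyTorch-style sketch of one training step of skip-aware self-distillation. It assumes a hypothetical model wrapper whose forward pass takes the `skip' list and returns both the logits and the per-stage features; treating features as distributions by applying a softmax over their flattened activations, and stopping gradients on the super-net targets, are our illustrative choices rather than details fixed by the text.

    import torch
    import torch.nn.functional as F

    def kl(p_logits, q_logits):
        # KL(p || q) between the softmax distributions of two tensors,
        # averaged over the batch.
        p = F.softmax(p_logits, dim=-1)
        return (p * (F.log_softmax(p_logits, dim=-1)
                     - F.log_softmax(q_logits, dim=-1))).sum(-1).mean()

    def train_step(model, x, y, opt, alpha=0.5, num_stages=4):
        # Super-net forward pass: no skippable sub-path is skipped.
        y_super, h_super = model(x, skip=[False] * num_stages)
        loss_super = F.cross_entropy(y_super, y)

        # Base-net forward pass: every skippable sub-path is skipped.
        y_base, h_base = model(x, skip=[True] * num_stages)

        # Skip-aware self-distillation: match the base-net's logits and
        # per-stage features to the (detached) super-net's.
        distill = kl(y_super.detach(), y_base)
        for hs, hb in zip(h_super, h_base):
            distill = distill + kl(hs.detach().flatten(1), hb.flatten(1))
        loss_base = F.cross_entropy(y_base, y) + alpha * distill

        loss = loss_super + loss_base
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

For DDP-style frameworks, the same step can be split into two backward passes as described in the footnote above.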
However, at test time, sub-networks with various depths can be selected on the fly by systematically skipping sub-paths in residual stages. For example, in a network with 4 residual stages, 16 (=2^4) parameter-sharing sub-networks can be selected by varying the skip argument. In contrast, prior adaptive networks supporting 16 parameter-sharing sub-networks need to perform 16 explicit self-distillations from the super-net to the sub-networks <cit.>. We demonstrate this result in Section <ref>.

§.§ Formal Analysis of Skippable Sub-Paths

D_KL(𝐡^s_super || 𝐡^s_base) in Equation <ref> can be trivially minimized if the residual blocks in 𝐅^s_skippable learn identity functions, or 𝐡^s_base + 𝐅^s_skippable(𝐡^s_base) = 𝐡^s_base. However, since the super-net is jointly trained with the loss function loss_super, the residual functions in 𝐅^s_skippable cannot simply be identity functions. What, then, do the residual functions in 𝐅^s_skippable learn during training? This can be investigated further through Taylor expansion <cit.>. For our adaptive depth networks, a loss function ℒ used for training the super-net can be approximated with a Taylor expansion as follows:

ℒ(𝐡^s_super) = ℒ{𝐡^s_base + 𝐅^s_skippable(𝐡^s_base)}
             = ℒ{𝐡^s_base + ... + F_L-1(𝐡^s_L-1) + F_L(𝐡^s_L)}
             ≈ ℒ{𝐡^s_base + ... + F_L-1(𝐡^s_L-1)} + F_L(𝐡^s_L)·∂ℒ(𝐡^s_L)/∂𝐡^s_L + 𝒪(F_L(𝐡_L^s))

In Equation <ref>, the loss function is expanded around 𝐡^s_L, or 𝐡^s_base + ... + F_L-1(𝐡^s_L-1). Only the first-order term is kept, and all higher-order terms, such as F_L(𝐡_L^s)^2 · ∂^2ℒ(𝐡^s_L)/2∂(𝐡^s_L)^2, are absorbed in 𝒪(F_L(𝐡_L^s)). The terms in 𝒪(F_L(𝐡_L^s)) can be ignored if F_L(𝐡_L^s) has a small magnitude. In typical residual networks, however, every layer is trained to learn new features with no constraint, and, hence, there is no guarantee that F_L(𝐡^s_L) has a small magnitude. In contrast, in our adaptive depth networks, the residuals in 𝐅^s_skippable are explicitly enforced to have small magnitude through the skip-aware self-distillation, and, hence, the terms in 𝒪(F_L(𝐡_L^s)) can be ignored in the approximation. If we similarly keep expanding the loss function around 𝐡_j (j = L/2+1, ..., L) while ignoring higher-order terms, we obtain the following approximation:

ℒ(𝐡^s_super) ≈ ℒ(𝐡^s_base) + ∑_j=L/2+1^L F_j(𝐡^s_j) · ∂ℒ(𝐡^s_j)/∂𝐡^s_j

In Equation <ref>, minimizing the loss ℒ(𝐡^s_super) during training drives F_j(𝐡^s_j) (j=L/2+1,...,L) into the negative half-space of ∂ℒ(𝐡^s_j)/∂𝐡^s_j, in order to minimize the dot product between F_j(𝐡^s_j) and ∂ℒ(𝐡^s_j)/∂𝐡^s_j. This implies that every residual function in 𝐅^s_skippable is optimized to learn a function that has a similar effect to gradient descent:

F_j(𝐡^s_j) ≃ -∂ℒ(𝐡^s_j)/∂𝐡^s_j    (j=L/2+1,...,L)

In other words, the residual functions in the skippable sub-paths reduce the loss ℒ(𝐡^s_base) iteratively during inference while preserving the feature distribution of 𝐡^s_base. Considering this result, we can conjecture that, with our architectural pattern and training method, the layers in 𝐅^s_skippable learn functions that refine the input features 𝐡^s_base iteratively for better inference accuracy while minimally changing the distribution of 𝐡^s_base. Therefore, skipping 𝐅^s_skippable only slightly reduces prediction accuracy.

§.§ Skip-aware Batch Normalization

Originally, batch normalization (BN) <cit.> was proposed to handle internal covariate shift during the training of non-adaptive networks by normalizing features. In our adaptive depth networks, however, internal covariate shifts can occur during inference in mandatory sub-paths if different sub-networks are selected.
To handle potential internal covariate shifts, switchable BN operators, called skip-aware BNs, are used in mandatory sub-paths. For example, at each residual stage, two sets of BNs are available for the mandatory sub-path, and they are switched depending on whether its skippable sub-path is skipped or not. The effectiveness of switchable BNs has been demonstrated in networks with adaptive widths <cit.> and adaptive resolutions <cit.>. However, in previous adaptive networks, N sets of switchable BNs are required in every layer to support N parameter-sharing sub-networks. Such a large number of switchable BNs not only requires more parameters, but also complicates the training process, since the N sets of switchable BNs need to be trained iteratively during training. In contrast, in our adaptive depth networks, every mandatory sub-path needs only two sets of switchable BNs, regardless of the number of supported sub-networks. This reduced number of switchable BNs significantly simplifies the training process, as shown in Algorithm <ref>. Furthermore, the number of parameters for skip-aware BNs is negligible. For instance, in ResNet50, skip-aware BNs increase the parameters by 0.07%. Transformers <cit.> exploit layer normalization (LN) instead of BN, and naive replacement of LNs with BNs incurs instability during training <cit.>. Therefore, for our adaptive depth transformers, we apply switchable LN operators in mandatory sub-paths instead of switchable BNs.

§ EXPERIMENTS

To demonstrate the generality and effectiveness of our approach for adaptive depth networks, we conduct experiments on various networks and vision tasks.

§.§ Networks

We use four representative residual networks as base models to apply the proposed architectural pattern in Section <ref>: MobileNet V2 <cit.> is a lightweight CNN model, ResNet <cit.> is a larger CNN model, and ViT <cit.> and Swin-T <cit.> are representative vision transformers. All base models except ViT have 4 residual stages, each with 2 ∼ 6 (residual or encoder) blocks. So, according to the proposed architectural pattern, every residual stage is evenly divided into 2 sub-paths for depth adaptation. Since ViT does not define residual stages, we divide its 12 encoder blocks into 4 groups, resembling other residual networks, and designate the last encoder block of each group as a skippable sub-path. We use the suffix `-ADN' to denote our adaptive depth networks. Since our adaptive depth networks have many parameter-sharing sub-networks in a single network, we indicate which sub-network is used for evaluation in parentheses, e.g., ResNet50-ADN (super-net). For our sub-networks, boolean values are also used to indicate in which residual stages the sub-paths are skipped. For example, ResNet50-ADN (base-net) is equivalent to ResNet50-ADN (TTTT).

§.§ ImageNet Classification

We evaluate our method on the ILSVRC2012 dataset <cit.>, which has 1000 classes. The dataset consists of 1.28M training and 50K validation images. For CNN models, we follow most training settings in the original papers <cit.>, except that ResNet models are trained for 150 epochs. ViT and Swin-T are trained for 300 epochs, following DeiT's training recipe <cit.>. However, in Swin-T-ADN, we disable stochastic depth <cit.> for the base-net, since the strategy of randomly dropping residual blocks conflicts with our approach of skipping sub-paths. For fair comparison, our adaptive depth networks and the corresponding individual networks are trained in the same training settings.
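As an implementation note before presenting the results, the skip-aware BN just described can be realized as a small module that keeps two BN instances per normalization site in a mandatory sub-path and selects between them according to the stage's skip flag. The sketch below is ours; the class name and interface are illustrative, not from the original code.

    import torch.nn as nn

    class SkipAwareBN2d(nn.Module):
        """BatchNorm for a mandatory sub-path: one set of statistics for
        when the stage's skippable sub-path is executed, another for when
        it is skipped."""
        def __init__(self, num_features):
            super().__init__()
            self.bns = nn.ModuleList([nn.BatchNorm2d(num_features),
                                      nn.BatchNorm2d(num_features)])

        def forward(self, x, skip: bool):
            # Index 0: skippable sub-path executed; index 1: it is skipped.
            return self.bns[int(skip)](x)

For transformers, a pair of nn.LayerNorm instances would replace the BatchNorm2d pair in the same way.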
The hyperparameter α in Algorithm <ref> is set to 0.5 for all networks. The results in Table <ref> show that our adaptive depth networks outperform their individual counterpart networks even though our sub-networks share parameters in a single network. Further, our results with vision transformers demonstrate that our approach is generally applicable to residual networks and compatible with their state-of-the-art training techniques, such as DeiT's training recipe <cit.>. We conjecture that this performance improvement results from the effective distillation of knowledge from h^s_super to h^s_base at each residual stage and the iterative feature refinement at skippable sub-paths, shown in Equation <ref>.

In Table <ref> and Figure <ref>, several state-of-the-art efficient inference methods and dynamic networks are compared with the sub-networks of our adaptive depth networks. The results demonstrate that our adaptive depth networks match or outperform many state-of-the-art static and dynamic networks across varying depth ranges. In Figure <ref>, sub-networks of ResNet50-ADN are compared to counterpart individual ResNets of equivalent depths. In particular, it should be noted that ResNets trained with knowledge distillation in the same training settings have worse performance than individual ResNets trained without knowledge distillation. As reported in previous works, successful knowledge distillation requires patient and long training <cit.>, and straightforward knowledge distillation using ImageNet does not improve the performance of student models <cit.>. In contrast, our ResNet50-ADN trained with skip-aware self-distillation achieves better performance than the counterpart ResNets. This demonstrates that the high performance of adaptive depth networks does not simply come from the distillation effect, but from the effective combination of the proposed architectural pattern and the skip-aware self-distillation strategy.

§.§ Training Time

One important advantage of our approach is that our adaptive depth networks require significantly shorter training time than other adaptive networks. Table <ref> shows that training our ResNet50-ADN takes a similar amount of time to training two individual networks combined. In contrast, the compared adaptive networks require much longer training time than ours. MSDNet <cit.> represents adaptive depth networks with multiple early-exit branches and classifiers. SkipNet <cit.> represents input-dependent dynamic networks that drop layers for `easy' inputs. In S-ResNet50 <cit.>, 4 sub-networks with varying widths are trained through iterative self-distillation from its super-net. Despite requiring significantly shorter training time, our adaptive depth networks can support various sub-networks at test time. Figure <ref>-(left) shows the performance of ResNet50-ADN when its depth is varied at test time. Among these sub-networks, only the super-net and the base-net are explicitly trained in Algorithm <ref>. The other sub-networks are selected on the fly at test time by skipping sub-paths in a varying number of residual stages. Although these sub-networks are not trained explicitly, they show graceful degradation of performance as the depth of the sub-networks becomes gradually shallower. However, although ResNet50-ADN supports 2^4 sub-networks in theory, Figure <ref>-(right) shows that some sub-networks are more useful than others.
For instance, even though ResNet50-ADN(TFFF) and ResNet50-ADN(FFFT) have the same network depth, ResNet50-ADN(TFFF) has about 0.58% higher top-1 accuracy. This result shows that skipping sub-paths in later stages is generally more detrimental to performance.

§.§ Performance on Devices

Figure <ref> shows the performance of adaptive networks on an actual device. The inference latency and energy consumption of ResNet50-ADN are compared to those of S-ResNet50 <cit.>, a representative width-adaptation network. The results show that the depth adaptation of ResNet50-ADN is highly effective in accelerating inference and reducing energy consumption. For example, in ResNet50-ADN, reducing FLOPs by 38% through depth adaptation reduces both inference latency and energy consumption by 35%. In contrast, even though S-ResNet50 can reduce FLOPs by up to 93% by adjusting its width, it only achieves up to 9% acceleration in practice.

§.§ Ablation Study

We conduct an ablation study on ImageNet classification to investigate the influence of two key components of the proposed adaptive depth networks: (1) skip-aware self-distillation and (2) skip-aware BN/LNs. When our skip-aware self-distillation is not applied, the loss in Algorithm <ref> is modified to loss = 1/2{criterion(𝐲, ŷ_super) + criterion(𝐲, ŷ_base)} for joint training of the super-net and the base-net. Table <ref> shows the results. For ResNet50, when neither component is applied, the inference accuracy of the super-net and the base-net is significantly lower than that of the individual networks, by 1.5% and 2.8%, respectively. This result shows the difficulty of jointly training sub-networks for adaptive networks. When one of the two components is applied individually, the performance is still slightly worse than the individual networks'. In the third row, when both skip-aware self-distillation and skip-aware BNs are applied together, ResNet50-ADN achieves significantly better performance than the individual networks, both in the super-net and the base-net. Finally, the last row shows the result when only D_KL(ŷ_super || ŷ_base) is used for skip-aware self-distillation. Without explicit distillation of intermediate features, slightly lower performance is observed in the base-net. The results with ViT-b/32 show that switchable layer normalization has a similar effect in vision transformers.

§.§ Visual Analysis of Sub-Paths

To investigate how our training method affects feature representations in the mandatory and the skippable sub-paths, we visualize the activations of the 3rd residual stage of ResNet50-ADN using Grad-CAM <cit.>. The 3rd residual stage of ResNet50-ADN has 6 residual blocks, and the last three blocks are skippable. In Figure <ref>-(a), the activation regions of the original ResNet50 change gradually across all consecutive blocks. In contrast, in Figure <ref>-(b), ResNet50-ADN (super-net) manifests very different activation regions in the two sub-paths. In the first three residual blocks, we can observe many hot activation regions over wide areas, suggesting active learning of new-level features. In contrast, significantly fewer activation regions are found in the skippable last three blocks, and they are gradually concentrated around the target object, demonstrating the refinement of learned features. Further, in Figure <ref>-(c), we can observe that the final activation map of ResNet50-ADN (base-net) is very similar to the super-net's final activation map in Figure <ref>-(b).
This implies that they have similar distributions for the same inputs, as suggested in Section <ref>.

§.§ Object Detection and Instance Segmentation

In order to investigate the generalization ability of our approach, we use the MS COCO 2017 dataset for object detection and instance segmentation tasks with representative detectors. We compare individual ResNet50 and our adaptive depth ResNet50-ADN as backbone networks of the detectors. For training the detectors, we use Algorithm <ref> with slight adaptation. For object detection, the intermediate features h_base^s and h_super^s (s=1..N_r) can be obtained directly from the backbone network's feature pyramid network (FPN) <cit.>, and, hence, a wrapper function is not required to extract intermediate features. All networks are trained on COCO train2017 for 12 epochs from ImageNet-pretrained weights, following the training settings suggested in <cit.>. Table <ref> shows the results on COCO val2017, containing 5000 images. Our adaptive depth backbone networks still outperform individual static backbone networks in terms of COCO's standard metric, AP.

§ CONCLUSIONS

We propose an architectural pattern for adaptive depth networks and its training method, which renders selected sub-paths of the network able to incur minimal performance degradation even if they are skipped during inference. Our approach does not train a fixed set of sub-networks iteratively, and, hence, significantly shorter training time is required than for previous adaptive networks. However, once deployed on devices, it can instantly construct sub-networks of various depths to provide various accuracy-efficiency trade-offs in a single network. We show formally that the proposed architectural pattern and training method reduce overall prediction errors while minimizing the impact of skipping sub-paths. We also empirically demonstrate the generality and effectiveness of our approach using representative convolutional neural networks and vision transformers.
http://arxiv.org/abs/2312.16392v1
{ "authors": [ "Woochul Kang" ], "categories": [ "cs.CV", "cs.AI" ], "primary_category": "cs.CV", "published": "20231227034338", "title": "Adaptive Depth Networks with Skippable Sub-Paths" }
Computing Balanced Solutions for Large International Kidney Exchange Schemes When Cycle Length Is Unbounded Márton Benedek1 Péter Biró1 Gergely Csáji1 Matthew Johnson2 Daniel Paulusma2 Xin Ye2 January 14, 2024
===========================================================================================================

This paper describes a formal general-purpose automated program repair (APR) framework based on the concept of program invariants. In the presented repair framework, the execution traces of a defective program are dynamically analyzed to infer specifications φ_correct and φ_violated, where φ_correct represents the set of likely invariants (good patterns) required for a run to be successful and φ_violated represents the set of likely suspicious invariants (bad patterns) that result in the bug in the defective program. These specifications are then refined using rigorous program analysis techniques, which are also used to drive the repair process towards feasible patches and to assess the correctness of generated patches. We demonstrate the usefulness of leveraging invariants in APR by developing an invariant-based repair system for performance bugs. The initial analysis shows the effectiveness of invariant-based APR in handling performance bugs by producing patches that increase a program's efficiency without adversely impacting its functionality.

Automated program repair · Invariant learning and refinement · Patch overfitting · Program verifier · CPAChecker · Performance bugs

§ INTRODUCTION

Automated program repair (APR) has recently gained great attention because it helps to significantly decrease manual debugging effort by automatically generating patches for defective programs. Modern program repair tools have been shown to be effective at fixing bugs in many real-world programs. The poor quality of automatically generated patches <cit.>, however, continues to be a major obstacle to the adoption of automated program repair by software practitioners.

Problem. The primary reason for the low quality of the patches automatically generated by current APR tools is the lack of specifications of the intended behavior. Most program repair systems rely on tests as the correctness criteria, because a formal specification is not explicitly provided by software developers. Therefore, current APR approaches produce plausible patches, which must be (manually) inspected before being deployed, and there is no guarantee that the generated patches are generally correct and do not introduce new bugs.

Solution. Program verification technology enables developers to prove the correctness of a program before deploying it. One of the key activities underlying this technology involves inferring a program invariant—a logical formula that serves as an abstract specification of a program. Developers can benefit significantly from program invariants to identify program properties that must be preserved when modifying code. Unfortunately, these invariants are typically absent from code, leading to the dominance of less rigorous APR approaches (e.g., dynamic APR) and the well-known patch overfitting challenge <cit.>. We argue that by using test cases and reachability-based analysis techniques, an accurate set of invariants may be obtained and utilized to produce high-quality patches. In other words, program verification tools such as CPAChecker <cit.> and PathFinder <cit.> can be used to refine the dynamically generated invariant candidates.
This can be done by first using the test cases to analyze the execution traces of the program to infer a set of invariant candidates. These candidates are then refined using a program verifier to obtain more accurate invariants. The goal is to infer two specifications: (i) φ_correct, which represents the set of good patterns required for a run to succeed, and (ii) φ_violated, which represents the set of bad patterns that lead to the target bug. Invariant-based APR offers two key benefits. First, it directs APR towards potentially feasible patches. Second, it enables the formal validation of plausible patches using program verifiers.

It has been argued in earlier APR literature that, because the desired behaviour of the program is not explicitly provided, formal APR may not be feasible and applicable, leading to the dominance of dynamic APR methods. However, we argue that even when the expected behaviour is not explicitly specified, formal APR is still feasible and possible, for the following reasons. First, by analyzing successful runs of the buggy program, it is possible to infer the program's (likely) expected behaviour, and accurate specifications may be obtained by analysing high-quality successful tests. Second, there are several mature automated invariant inference tools available, such as the tool Daikon, that can be used to infer a likely formal specification of the buggy program being fixed. Third, there is a variety of mature program verification tools readily available, including theorem provers like Z3 and PVS, software model checkers like BLAST, and program verifiers like CPAChecker. Such tools can be used to automatically check the satisfaction of invariants in generated patches.

Because it is intended to deliver trustworthy patches, formal APR is more expensive than dynamic APR. To infer a likely specification of the expected behaviour, for instance, one must use an invariant inference tool. The cost of this step varies according to how effectively the tool is used and how many tests are involved. Second, the patch validation procedure entails the use of a variety of program verification tools, whose analysis cost depends mostly on the complexity of the program being examined and the property being verified.

Viability of invariant-based APR. Program invariants have shown effectiveness in many applications, such as program understanding, fault localization, and formal verification. Invariants are effective because functional correctness relates to the final result of a program rather than any specific implementation. They can therefore assist in abstracting many concrete execution steps and thus greatly reduce the effort needed to reason about a patch's correctness. In fact, developers who aim to repair a defective undocumented program (a program written without thought for formal specifications) can find invariant-based APR helpful in their repair tasks. The availability of mature automated invariant detection tools like Daikon <cit.> and practical software verification tools like CPAChecker and PathFinder makes the invariant-based program repair technique viable.
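The pipeline just described can be summarized in a few lines of Python-style pseudocode. The sketch below is ours: `infer' stands in for a dynamic invariant detector such as Daikon, and `refine' for a reachability-based verifier such as CPAChecker; both are left abstract because the real tool interfaces are command-line-specific. It anticipates the set-difference definition of φ_violated given formally in Section <ref>.

    from typing import Callable, Set

    Invariant = str  # e.g. "x >= 0" at a program location; a simplification

    def infer_specs(program: str, passing_tests: list, failing_tests: list,
                    infer: Callable[[str, list], Set[Invariant]],
                    refine: Callable[[str, Set[Invariant]], Set[Invariant]]):
        """Compute (phi_correct, phi_violated) for a buggy program."""
        # Likely invariants of fault-free runs, refined by the verifier.
        i_good = refine(program, infer(program, passing_tests))
        # Invariants of faulty runs may mix good and bad patterns.
        i_mix = refine(program, infer(program, failing_tests))
        # Bad patterns are those seen only in faulty runs.
        phi_violated = i_mix - i_good
        return i_good, phi_violated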
At first glance, refining invariants using program verification tools seems too expensive. However, due to tremendous advances in software verification <cit.>, invariant-based verification can in practice be made quite efficient. In particular, the software analysis framework CPAChecker, which supports many different reachability analyses, has been used effectively to validate a wide variety of reachability queries against C programs with up to 50K lines of code. This makes reachability analysis a promising technique that can be used to significantly reduce the patch overfitting problem and produce high-quality patches.

We hypothesise that the next generation of APR tools will be developed employing the concept of program invariants, or a combination of techniques that does so. This is mainly due to the following observations. First, using program invariants makes it possible for the repair process to use formal verification techniques like theorem provers and software model checkers. Second, program invariants help to learn useful information about both the functional and non-functional attributes of the program, which is crucial when handling performance defects. This is a key advantage of invariant-based APR compared to other repair approaches.

§ INVARIANT-BASED PROGRAM REPAIR FRAMEWORK

In this section we reformulate the APR problem using the concept of program invariants. We then describe how one can analyze the execution traces of fault-free runs to infer likely specifications of the program's intended behaviour, and the execution traces of faulty runs to infer likely suspicious invariants that lead to the faulty behaviour. Before proceeding further, let us introduce some definitions.

Definition (fault-free vs. faulty runs). Let P be a buggy program, ℛ be the set of runs of P, and φ_beh be a property of program P's intended behavior. We say that a run r ∈ ℛ is a successful run (i.e., a fault-free run) if P(r) ⊨ φ_beh. On the other hand, we say that a run r' ∈ ℛ is a faulty run if P(r') ⊭ φ_beh.

From Definition <ref> we note that, by analyzing information extracted from fault-free runs, one might be able to infer a specification of the program's intended behavior. Similarly, by analyzing the execution information of faulty runs, one might be able to deduce the violated invariants that cause the bug. This is because fault-free runs represent runs in which program invariants are maintained, while faulty runs represent runs in which some program invariants are violated.

Definition (Invariant-based APR problem). Let P be a program containing bug b and T = (T_P ∪ T_F) be a test suite, where T_P represents the set of passing tests and T_F represents the set of failing tests. Let D be a dynamic invariant inference tool like Daikon, and V be a program verification tool like CPAChecker. The invariant-based APR process consists of the following steps:

* [Invariant extraction]. Generate an initial set of invariants ℐ for P using D.
* [Invariant refinement]. Refine the set ℐ using V to produce specifications φ_correct and φ_violated. This can be done by asserting invariants at a program's locations of interest and using any generated counterexample to refine them.
* [Fault localization]. Compute a list of suspicious statements whose mutation may lead to a valid patch by analyzing specifications φ_correct and φ_violated.
* [Patch generation].
Construct code that corrects the violated invariants while maintaining the other program invariants. This can be performed by employing a patch generation procedure, e.g., search- or semantic-based.
* [Patch validation]. Validate the correctness of the generated patches using V.

Depending on the type of bug being fixed and the structure of the analyzed program, different program locations may be of relevance for the properties φ_correct and φ_violated. Examples include pre- and post-conditions of different functions, or loop invariants for some program loops. Note that the first two steps of the invariant-based APR process described in Definition <ref> are necessary for increasing confidence in the precision of the generated patches. The actual repair steps of the process, steps 3-5, can be formally stated as follows:

pt = FV(PGV(FL(φ_correct, φ_violated, P), T), φ_correct, φ_violated)

where FL is an invariant-based fault localization process, PGV is a patch generation and validation process using the test suite, and FV is a formal patch validation process using the verification tool V. If no plausible patch is found, or a plausible patch is found but is incorrect, the repair process returns 𝖿𝖺𝗂𝗅. However, if the plausible patch passes the verification step carried out by the tool V, the process returns a patch.

We now turn to discuss how one can generate the specifications φ_correct and φ_violated by analyzing the execution information obtained by running program P on passing and failing tests. The analysis of fault-free and faulty runs leads to the identification of the following formal patterns.

* φ_correct = ℐ_good = V(D(P, T_P)), the invariants deduced using only successful runs. This set of invariants represents the likely intended behavior of P.
* φ_faulty = ℐ_mix = V(D(P, T_F)), the invariants deduced using the set of faulty runs. Note that the set ℐ_mix may contain both good and bad patterns, depending on how the target bug affects different functionalities of P.
* φ_violated = (ℐ_mix ∖ ℐ_good), the set of violated invariants related to the bug.

It is important to categorize and distinguish the inferred patterns (invariants) into good and bad patterns, especially when dealing with programs that have several functional requirements. This helps to identify the set of desired invariants to be maintained and the violated invariants to be repaired when modifying the code. It also helps to identify the set of relevant invariants for the particular bug being analyzed. The soundness of the inferred φ_correct and φ_violated depends heavily on the soundness of the employed invariant inference tool as well as on the invariant refinement process. Increasing the amount of program behavior exercised using reachability analysis increases the likelihood that φ_correct and φ_violated are true.

Definition (Patch validation in invariant-based APR). Let P be a program containing bug b and T be a test suite containing at least one failing test and one passing test. Let also pt be a plausible patch that makes P pass all test cases in T. The validity of patch pt can be formally checked as follows:

validity(pt) = V(pt, φ_correct) ∧ V(pt, φ_violated)

where V(pt, φ) ∈ {true, false} and the tool's response depends on whether the specification is fulfilled or violated in the program being examined. To boost confidence in the validity of the resulting patch, we opt to check patches against both φ_correct and φ_violated.
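Written as code, the validity check above is just two verifier calls per candidate patch. The sketch and the `verify' stub are ours; as in the definition, the polarity convention for φ_violated is left inside the stub.

    def is_valid_patch(patch: str, phi_correct, phi_violated, verify) -> bool:
        """validity(pt) = V(pt, phi_correct) AND V(pt, phi_violated).

        `verify(program, spec) -> bool` abstracts the verification tool V.
        Note that the (expensive) verifier runs twice per candidate patch.
        """
        return verify(patch, phi_correct) and verify(patch, phi_violated)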
However, to lower the cost of calling the verifier V on each candidate patch, we aim to implement a three-step patch validation method that uses the test suite first and the program verifier afterwards. In the first step, plausible patches are generated using test cases. The second step formally checks plausible patches against the set of bad patterns (property φ_violated). Patches that pass the first two steps are checked against the set of good patterns (property φ_correct) in the third step.

§ FIXING PERFORMANCE BUGS USING INVARIANT-BASED APR

Performance bugs are programming errors that cause significant performance degradation, i.e., low system throughput. Experience has shown that much widely used commercial software suffers from performance problems <cit.>. Therefore, there is a need to develop a rigorous repair framework for performance bugs that ensures efficiency gains without compromising functionality. One unique characteristic of performance bugs compared to functional bugs is that performance bugs do not affect the functionality of the program (i.e., the program is semantically correct but inefficient), and thus the intended behavior of the program can be automatically deduced using an invariant inference tool. This section describes an invariant-based APR system for performance bugs and demonstrates how it may be applied to handle performance bugs by producing patches that ensure efficiency improvement without sacrificing functionality.

There are two possible ways to generate the efficiency property that can be used in the formal analysis of plausible generated patches:

* By inferring an efficiency predicate on the loop's control variables (the upper bound on the number of times the loop is iterated w.r.t. the size of the input). Inferring such a property requires analyzing the predicates generated by the inference tool and the syntactic structure of the analyzed program.
* By using a timed property, which is typically provided by the user. The property specifies the expected time for termination in efficient implementations. This can also be inferred by analyzing the execution traces of successful runs.

§.§ Invariant-based Repair Framework for Performance Bugs

In this section we describe an invariant-based repair framework for handling performance bugs. The framework consists mainly of the following components:

* a set of passing tests (tests that lead to fast runs),
* a set of failing tests (tests that lead to slow runs),
* a runtime monitor to keep track of the program's execution time and differentiate between fast and slow runs, and
* an automated invariant inference tool (Daikon or CPAChecker) and an automated invariant verification tool (PVS, the Z3 solver, or CPAChecker).

We now turn to discuss how we define the notions of passing and failing tests, and the process of generating and validating patches for performance bugs.

Passing and failing tests for performance bugs. Performance bugs do not produce debugging information at runtime: they do not produce crashes, exceptions, or incorrect results. We therefore use a runtime monitor with a predefined timer to redefine the concepts of passing and failing tests. We consider test cases that lead to fast runs as passing tests, while test cases that lead to slow runs count as failing tests.
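A minimal sketch of such a runtime monitor, assuming a per-test command builder and a user-chosen time limit (both hypothetical names, not from the paper):

    import subprocess
    import time

    def classify_tests(run_cmd, tests, time_limit):
        """Split tests into passing (fast) and failing (slow) runs using a
        wall-clock timer, as the runtime monitor described above does."""
        passing, failing = [], []
        for t in tests:
            start = time.monotonic()
            try:
                subprocess.run(run_cmd(t), timeout=time_limit, check=False)
                fast = (time.monotonic() - start) <= time_limit
            except subprocess.TimeoutExpired:
                fast = False  # the run exceeded the predefined timer
            (passing if fast else failing).append(t)
        return passing, failing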
A repair that transforms slow runs into fast runs while preserving the desired behavior of the original program is considered a valid repair.

Patch generation strategy for performance bugs. Since we deal with a semantically correct but inefficient program, an efficient version of the program can often be created by restructuring the original program's basic components. Our preliminary analysis demonstrates the effectiveness of genetic repair tools, such as GenProg, in dealing with performance bugs. This suggests that programs with performance bugs can be fixed by relatively simple changes. For instance, various performance bugs can be fixed using mutation operators like move, swap, delete, and insert, as employed by genetic repair programs. Consequently, we aim to combine our repair framework with genetic-based patch generation tools.

Patch validation for performance bugs. It should be noted that invariant inference tools can also be used to derive predicates related to the non-functional attributes of the program. This can be achieved by adding extra non-functional variables to the program being repaired. Suppose we have a program P with a set of variables V, and that P contains a performance bug. We need to check whether the generated plausible patch for program P fixes the performance bug without introducing a new functional bug. To do so, we first generate and validate predicates related to the efficiency attributes of the program, as described below.

* Add a fresh variable cnt whose value has no impact on the behavior of P. The type of performance bug being handled determines how cnt is used to model the efficiency of the program. For the loop programs we consider, cnt acts like a counter that is incremented once per iteration. In other words, the number of loop iterations serves as a model of efficiency.
* Use the invariant detection tool D to infer the numerical invariants ℐ(P, cnt) and ℐ(pt, cnt) for the original program and the plausible patched version, where ℐ(P, cnt) represents the collection of invariants in program P involving the variable cnt.
* Compare the numerical predicates in ℐ(P, cnt) and ℐ(pt, cnt) to determine whether the patched version pt is more efficient than the original program P.

For simplicity, we assume we deal with a program with a single loop; the number of loops in the analyzed program determines how many additional variables are needed. The invariant inference tool D is thus used to infer invariants on (V ∪ {cnt}). We then distinguish the following types of predicates:

* ℐ(P, V): predicates related to the program's functionality, and
* ℐ(P, cnt): predicates related to the program's efficiency.

Using the generated predicates, one can check the validity of patch pt as follows:

validity(pt) = SemaEq(ℐ(P, V), ℐ(pt, V)) ∧ PredSm(ℐ(pt, cnt), ℐ(P, cnt))

where SemaEq is a Boolean operation that checks whether the given sets of invariants are semantically equivalent, and PredSm is a Boolean operation that checks whether the upper bound in the cnt predicate of the patched version is smaller than the upper bound in the one of the original program. We now describe two formal procedures to verify the validity of plausible patches (specification (<ref>)) using the available program verification tools.
* Daikon-PVS: In this patch validation procedure, Daikon is used to generate predicates related to the functional and efficiency attributes of programs P and pt. In the event that ℐ(P, V) and ℐ(pt, V) (i.e., the predicates related to functional attributes) are not identical, it may be necessary to examine both equivalence and implication relations between the predicates in those sets in order to determine whether P and pt are semantically equivalent. This task can be accomplished by querying the theorem prover PVS.
* CPAChecker-PVS: One interesting feature of CPAChecker is that it produces correctness witnesses in GraphML format, and in those witnesses one can find the invariants of the analyzed program. This feature can be utilized to generate the set of invariants in both the original program and the corresponding plausible patch. In case the invariants generated for the two programs are not identical, it may be necessary to examine both equivalence and implication relations between the predicates in the two sets by invoking the prover PVS.

§.§ Fixing real-world performance bugs using invariant-based APR

In this section, we show how invariant-based APR can be used to handle real-world performance bugs. For space reasons, we only consider one interesting example of a performance bug (see Listing <ref>). The bug is based on a real-world flaw that occurred in Apache and has also been analyzed by other researchers <cit.>.

 1  int found = -1;
 2  while (found < 0) {
 3      // Check if string source[] contains target[]
 4      char first = target[0];
 5      int max = sourceLen - targetLen;
 6      for (int i = 0; i <= max; i++) {
 7          // Look for first character.
 8          if (source[i] != first) {
 9              while (++i <= max && source[i] != first);
10          }
11          // Found first character
12          if (i <= max) {
13              int j = i + 1;
14              int end = j + targetLen - 1;
15              for (int k = 1; j < end && source[j] == target[k]; j++, k++);
16              if (j == end) {
17                  /* Found whole string target. */
18                  found = i;
19                  break;
20              }
21          }
22      }
23      // append another character; try again
24      source[sourceLen++] = getchar();
25  }

Listing: A challenging performance bug found in Apache.

Analysis of the program in Listing <ref>. The program aims to determine whether a given (target) string is contained within another (source) string. If the target string is found in the source string, the program sets the variable found to the index of the target string's first character. But there is a significant performance flaw in the program: when the target string is at the start of the source string, the run is fast, and the program stops almost instantaneously. On the other hand, the run is slower and takes longer to finish when the target string is closer to the end of the source string. This is mostly because there is a significant increase in the number of redundant computations. The fault is that the initialization statement of the control variable i of the for loop at line 6 should be placed outside the scope of the main while loop, just after the initialization of the variable found.
The longest run that we reported occurs when the source string has a length of 10^7 characters and the target is a single character that is present at the end of the source string. In this instance, the program runs for 30 hours before terminating and producing the correct result.

§.§ Results and analysis

To handle the performance bug in Listing <ref>, we select two APR tools: the search-based repair tool GenProg <cit.> and the semantic-based repair tool FAngelix <cit.>. These are general-purpose repair tools for C code that can be used to fix a range of program bugs, including loop program bugs. While GenProg successfully generated a plausible patch, FAngelix was unable to produce one. To avoid repetitive calculations in the original program, GenProg moved the initialization statement of the variable i outside of the for loop at line 6. In other words, the patched version starts with the initialization statement of the variable i. In this case, the generated patch passes the test cases, since i is no longer being reset to 0 every time the loop receives a new character.

To check the validity of the plausible patch generated by GenProg, we run the tool Daikon and compare the functional and efficiency predicates obtained for the original program and the plausible patch. Daikon generates the same set of invariants w.r.t. the functional variables (i.e., both the original and the patched versions have the same invariants w.r.t. the program variables). This demonstrates that the patch maintains the functional behavior of the original program.

Listing <ref> contains four loops: the while loop at line 2, the for loop at line 6, the while loop at line 9, and the for loop at line 15. To evaluate the efficiency of the original and patched programs, it is sufficient to calculate the upper bound on the number of iterations, as the patch does not modify the logic of any of the loops by adding or removing an operation. That is, each iteration of the four loops in both programs involves the same number of operations. We therefore add four iteration counters (cnt_2, cnt_6, cnt_9, cnt_15) to model the efficiency of each loop, where the index of each counter corresponds to the line number of the loop being analyzed. For instance, the counter cnt_2 is initially set to zero and advanced by one whenever the loop at line 2 is run. We make the following observations when analyzing the efficiency predicates for the buggy and patched versions:

* The invariants generated for the counter variables cnt_2 and cnt_15 in the buggy and patched versions are the same. This indicates that the patch does not affect the number of times the loops at lines 2 and 15 are iterated.
* The counter variable cnt_9 only advances in the buggy version and results in the invariant cnt_9 ≤ 500499. The fact that the patched version no longer exercises the while loop at line 9 is a sign of a major improvement.
* Daikon generated the invariant cnt_6 ≤ 1001 in the buggy version and the invariant cnt_6 ≤ 501 in the patched version. This shows that the loop at line 6 is iterated about 50% fewer times in the patched version than in the original code.

The aforementioned findings, along with the fact that the derived functional predicates of both the original and patched versions are identical, boost our confidence in the validity of the patch generated by GenProg.
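For illustration, the counter-based comparison can be reproduced with the following Python transliteration of Listing <ref> (ours, not the authors' artifact). It collapses the inner matching loops and counts only iterations of the line-6 loop; the `hoist_i' flag toggles the GenProg-style patch. On inputs like those above, the buggy variant's cnt_6 grows quadratically with the position of the match, while the patched variant's grows only linearly, qualitatively mirroring the Daikon invariants reported in the text.

    def search(stream, target, hoist_i):
        """Return (found, cnt_6) for the string search of Listing 1."""
        cnt_6 = 0
        source, found = [], -1
        i = 0  # patched version: initialized once, before the main loop
        while found < 0:
            source.append(next(stream))       # line 24: read one character
            max_i = len(source) - len(target)
            if not hoist_i:
                i = 0                         # the fault: rescan from the start
            while i <= max_i:                 # the for loop at line 6
                cnt_6 += 1
                if source[i:i + len(target)] == list(target):
                    found = i
                    break
                i += 1
        return found, cnt_6

    stream = iter("a" * 1000 + "b")
    print(search(stream, "b", hoist_i=False))  # cnt_6 grows quadratically
    stream = iter("a" * 1000 + "b")
    print(search(stream, "b", hoist_i=True))   # cnt_6 grows only linearly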
§ RELATED WORK

Patch overfitting in APR. Several solutions have been developed to alleviate the overfitting problem in APR, such as symbolic specification inference <cit.>, machine learning-based prioritization of patches <cit.>, fuzzing-based test-suite augmentation <cit.>, and concolic path exploration <cit.>. These solutions rely on limited, incomplete test cases and do not guarantee the general correctness of the patches. Compared to those approaches that generate test inputs, invariant-based APR automatically generates and refines the desired invariants that need to be maintained and the violated invariants that need to be repaired when modifying the code, which makes the approach more reliable than existing repair approaches. Modern general-purpose APR tools still rely on symbolic execution or concolic execution <cit.> to discover counterexamples and generate repairs. However, these repair approaches rely on manual inspection to determine whether the generated patches are correct or identical to developer patches, which can be error-prone. Invariant-based APR makes it possible to apply automated verification techniques to alleviate the overfitting problem and to formally and systematically check the accuracy of generated patches by comparing them to the developers' patches.

Handling performance bugs. Several attempts have been made to detect and repair performance bugs in programs using dynamic, static, and hybrid analysis approaches <cit.>. <cit.> carried out an empirical investigation into performance bugs and presented several efficiency rules for identifying them. Using dynamic-static analysis techniques, several fix strategies have been developed in <cit.> to identify and fix performance problems. However, our method differs from previous studies in that it is a more general and rigorous technique that makes use of program invariants to address loop program performance issues and yields reliable patches.

§ CONCLUSION AND FUTURE WORK

We described a novel general-purpose APR system based on the concept of program invariants. Invariant-based APR holds the promise of handling a wider range of bugs and producing more reliable patches than other APR approaches. This is because invariant-based repair systems depend on stronger correctness criteria than test suites. We demonstrated the usefulness of leveraging invariants in APR by developing an invariant-based repair system for performance defects. The preliminary results showed that invariant-based APR can assist in generating valid patches that ensure efficiency improvement without compromising functionality.

Future work. To complete the line of research initiated here regarding invariant-based APR, we identify the following key directions for future work.

* First and foremost, we aim to conduct a thorough empirical analysis to determine how well invariant-based APR handles functional and non-functional defects in programs. This also entails assessing the invariant inference and invariant verification tools that are currently available.
* Accurate invariant generation is required to ensure the validity of patches produced by invariant-based APR. We conjecture that reachability analyses can aid with this complex computational task, and we aim to combine invariant-based APR with program verification tools that support both invariant generation and refinement, such as CPAChecker and PathFinder.
http://arxiv.org/abs/2312.16652v1
{ "authors": [ "Omar I. Al-Bataineh" ], "categories": [ "cs.SE" ], "primary_category": "cs.SE", "published": "20231227174619", "title": "Invariant-based Program Repair" }
runhan_xie@berkeley.edu University of California, Berkeley, Department of Industrial Engineering and Operations Research, Berkeley, CA, USA
igrosof@cs.cmu.edu igrosof3@gatech.edu Carnegie Mellon University, Computer Science Department, Pittsburgh, PA, USA; Georgia Institute of Technology, School of Industrial and Systems Engineering, Atlanta, GA, USA
zivscully@cornell.edu Cornell University, School of Operations Research and Information Engineering, Ithaca, NY, USA

Dispatching systems, where arriving jobs are immediately assigned to one of multiple queues, are ubiquitous in computer systems and service systems. A natural and practically relevant model is one in which each queue serves jobs in FCFS (First-Come First-Served) order. We consider the case where the dispatcher is size-aware, meaning it learns the size (i.e. service time) of each job as it arrives; and state-aware, meaning it always knows the amount of work (i.e. total remaining service time) at each queue. While size- and state-aware dispatching to FCFS queues has been extensively studied, little is known about optimal dispatching for the objective of minimizing mean delay. A major obstacle is that no nontrivial lower bound on mean delay is known, even in heavy traffic (i.e. the limit as load approaches capacity). This makes it difficult to prove that any given policy is optimal, or even heavy-traffic optimal. In this work, we propose the first size- and state-aware dispatching policy that provably minimizes mean delay in heavy traffic. Our policy, called CARD (Controlled Asymmetry Reduces Delay), keeps all but one of the queues short, then routes as few jobs as possible to the one long queue. We prove an upper bound on CARD's mean delay, and we prove the first nontrivial lower bound on the mean delay of any size- and state-aware dispatching policy. Both results apply to any number of servers. Our bounds match in heavy traffic, implying CARD's heavy-traffic optimality. In particular, CARD's heavy-traffic performance improves upon that of LWL (Least Work Left), SITA (Size Interval Task Assignment), and other policies from the literature whose heavy-traffic performance is known.

Heavy-Traffic Optimal Size- and State-Aware Dispatching Ziv Scully
==================================================================

§ INTRODUCTION

Dispatching, or load balancing, is at the heart of many computer systems, service systems, transportation systems, and systems in other domains. In such systems, jobs arrive over time, and each job must be irrevocably sent to one of multiple queues as soon as it arrives. It is common for each queue to be served in First-Come First-Served (FCFS) order.

Motivated by the ubiquity of dispatching, we study a classical problem in dispatching theory: How should one dispatch to FCFS queues to minimize the jobs' mean response time? (A job's response time, a.k.a. sojourn time, latency, or delay, is the amount of time between its arrival and its completion.) We specifically consider size- and state-aware dispatching. This means that the dispatcher learns a job's size, or service time, when the job arrives, and that the dispatcher always knows how much work, or total remaining service time, there is at each queue. We make typical stochastic assumptions about the job arrival process, working with M/G arrivals (see <ref>).

Despite the extensive literature on dispatching in queueing theory (see <ref>), optimal size- and state-aware dispatching is an open problem, as highlighted by <cit.>.
The problem is a Markov decision process, so it can in principle be approximately solved numerically <cit.>. But the numerical approach has two drawbacks. First, the curse of dimensionality makes computation impractical for large numbers of queues. Second, being able to numerically solve any specific instance (meaning a given number of queues, job size distribution, and load) does not readily provide insight that applies to all instances.

§.§ Our contributions

In this work, we take the first steps towards developing a theoretical understanding of optimal size- and state-aware dispatching, making two main contributions.
* We give the first lower bound on the minimum mean response time achievable under any dispatching policy (<ref>).
* We propose a new dispatching policy, called CARD (Controlled Asymmetry Reduces Delay), and prove an asymptotically tight upper bound on its mean response time (<ref>). We illustrate CARD in <ref>.

Our upper and lower bounds match in the heavy-traffic limit as load ρ approaches 1, the maximum load capacity. Specifically, we find an explicit constant K such that the dominant term of both bounds is K/(1 - ρ). This makes CARD the first policy to be proven heavy-traffic optimal, aside from the implicitly specified optimal policy. Characterizing the optimal constant K, which was previously unknown, is another contribution of our work.

§.§.§ How CARD outperforms previous policies

Below, we describe the intuition behind CARD's design in a two-server system. See <ref> for an illustration.

To minimize mean response time, one generally wants to avoid situations where small jobs need to wait behind large jobs. One way to do this is to dedicate one server to small jobs and the other server to large jobs, where the size cutoff between "small" and "large" is defined such that half the load is due to each size class. This is the approach taken by the SITA (Size Interval Task Assignment) policy <cit.>. Under SITA, due to Poisson splitting, the dispatching system reduces to two independent M/G/1 systems. As shown by <cit.>, SITA can sometimes perform very well, but it can sometimes be much worse than simple LWL (Least Work Left) dispatching, under which the system behaves like a central-queue M/G/2.

Our key observation is that the main reason SITA performs poorly is that its "short server", namely the queue to which it sends small jobs, can accumulate lots of work. CARD avoids this issue by actively regulating the amount of work at the short server. To do so, CARD creates a third class of "medium" jobs, which are on the border between small and large, and sets a threshold which serves as a target amount of work at the short server. Whenever a medium job arrives, CARD dispatches it to the short server if and only if the short server's work is at most the threshold. This prevents too much work from accumulating at the short server while also preventing the short server from unduly idling.

§.§.§ CARD's performance beyond heavy traffic

Of course, practical systems rarely operate at loads very near capacity, but our theoretical bounds on CARD's performance are admittedly not tight outside the heavy-traffic regime. As such, we also study CARD in simulation across a wider range of loads. We find empirically that CARD has good performance outside of heavy traffic, but slightly modifying CARD can significantly improve performance. Both the original and modified versions of CARD improve upon traditional heuristics like LWL and SITA, sometimes by an order of magnitude. (The two baselines are sketched in code just below.)
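For concreteness, here are the two baseline dispatch rules in code. This is our own rendering of the standard definitions, not code from any particular system; works holds the current work at each queue.

def lwl_dispatch(size, works):
    # LWL: join the queue with the least work; the job's size is not used.
    return min(range(len(works)), key=lambda i: works[i])

def sita_dispatch(size, cutoffs):
    # SITA: static size cutoffs partition jobs among queues; queue state is not
    # used. With two queues and one cutoff m, sizes below m go to queue 0.
    for i, cutoff in enumerate(cutoffs):
        if size < cutoff:
            return i
    return len(cutoffs)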
The modified version of CARD is competitive with the Dice policy of <cit.>, the best known heuristic for the size- and state-aware setting. See <ref> for an example where, at high load, CARD achieves reductions of over 75% relative to LWL and over 50% compared to SITA.

§.§.§ Outline

The remainder of the paper is organized as follows.
* <ref> reviews related work.
* <ref> presents our model and defines the CARD policy.
* <ref> states our main results and gives some intuition for why they hold.
* <ref> prove our results: a lower bound on the performance, namely mean response time, of any policy (<ref>); stability of CARD (<ref>); and an upper bound on CARD's performance, which implies its heavy-traffic optimality (n = 2 servers in <ref>, general case in <ref>).
* <ref> studies CARD outside of heavy traffic via simulation.

We note that a preliminary version of this work appeared as a three-page workshop abstract <cit.>, but it was extremely limited compared to the current version: it treated only the case of two servers and two job sizes, it did not provide any lower bound, and it omitted all proofs.

§.§ Related Work

§.§.§ FCFS dispatching with incomplete information

Whether a dispatching policy is optimal depends critically on the information available to the dispatcher. When the size of the arriving job is unknown but server states (e.g. number of jobs at each server, work at each server, etc.) are known (state-aware), then, depending on the server-state information available, Round-Robin (RR) <cit.>, Join-Shortest-Queue (JSQ) <cit.>, and LWL <cit.> have been shown to be optimal. The common key idea of these policies is to join the queue with the least (or least expected) amount of work. When only the sizes and the distribution of the arriving jobs are known, SITA is known to be optimal <cit.>. Recently, multi-level size-aware dispatching policies that combine SITA and RR have been proposed and studied <cit.>.

§.§.§ FCFS size- and state-aware dispatching

For size- and state-aware FCFS dispatching, various heuristics have been proposed and studied in simulations. Many of them are based on approximate dynamic programming, e.g. <cit.>. Another class of policies, called sequential dispatching policies, is introduced in <cit.>. Among the sequential dispatching policies, Dice <cit.> shows superior performance in simulations and is among the best heuristics that have been developed. In our simulations (<ref>), Dice often slightly outperforms CARD. However, there is so far no theoretical analysis of the performance of Dice, even in heavy traffic.

§.§.§ Heavy-traffic Optimality Results

The aforementioned optimality results are strong in the sense that they either show stochastic ordering optimality on sample paths, or show optimality at any load. For more complicated policies and systems, characterizing the mean response time at an arbitrary load is a difficult task. Therefore, a large number of works focus on the heavy-traffic regime and establish optimality therein. One approach is to prove optimality via process limits, e.g. <cit.>. Such an approach focuses on the transient regime, and an interchange of limits is usually not established for analysis in steady state. Another approach is to work directly in the stationary regime and establish heavy-traffic optimality results on mean response times in steady state, e.g. <cit.>.
However, these optimality results focus on settings where job sizes are unknown, so they do not address our goal of optimal size-aware dispatching.

§.§.§ Tools and Methodology

Recently, <cit.> introduced and popularized a Lyapunov drift-based approach for studying the steady-state performance of queueing systems in heavy traffic. The approach has been adopted in studying various switches (e.g. <cit.>), load-balancing algorithms (e.g. <cit.>), and other stochastic models (e.g. wireless scheduling, Stein's method, mean-field models). In some sense, our paper applies the drift method to continuous-time, continuous-state Markov processes. Our use of the Rate Conservation Law <cit.> parallels the use of the "zero drift" condition in drift analysis. An important step in drift analysis is establishing state-space collapse. We prove a result of this type in <ref>.

§.§.§ Other Relevant Work

When scheduling is allowed at the servers, optimal dispatching policies can be very different. When there are multiple parallel SRPT servers, <cit.> study a multi-layer dispatching policy and show optimality using a diffusion limit argument. <cit.> develop a dispatching policy, called guardrails, that achieves optimal mean response time in heavy traffic. In recent years, learning-based dispatching policies have also been studied in the literature <cit.>.

In the context of scheduling jobs on a single server, where SRPT (Shortest Remaining Processing Time) is known to be optimal <cit.>, <cit.> show that having two priority classes is sufficient for good performance in heavy traffic. The heavy-traffic performance of CARD ends up roughly equivalent to the performance of a single-server system with two priority classes. However, we cannot match the performance demonstrated by <cit.>: they decrease the fraction of load in the lower-priority class to zero in heavy traffic, whereas CARD's "lower-priority jobs", namely those sent to the long server, must constitute a roughly 1/n fraction of the load.

§ SYSTEM MODEL AND THE CARD POLICY

§.§ Model Description

We consider a system of n ≥ 2 identical FCFS (First-Come, First-Served) servers, each of which has its own queue. The system has one central dispatcher, which immediately dispatches jobs to a server when they arrive. We consider M/G job arrivals with (Poisson) arrival rate λ and job size distribution S. We assume E[S^2] < ∞. The system load, namely the average rate at which work arrives, is ρ = λ E[S]. We assume a server never idles unless there are no jobs present in its queue.

We use the convention that each server completes work at rate 1/n, so a job of size s requires ns time in service. This convention means the largest possible stability region is ρ ∈ [0, 1), regardless of the number of servers n. The convention is also convenient when comparing our system's performance to that of a "resource-pooled" M/G/1 with the same arrival process and server speed 1. We write E[W_M/G/1] for the mean amount of work in such a resource-pooled M/G/1.

We consider size- and state-aware dispatching policies. That is, when a job arrives, the dispatcher may use both the job's size and the system state to decide where to dispatch it to. For our purposes, the most important aspect of the system state is the amount of work remaining at each server. We write W_i for the amount of work at server i (but see also <ref>), W = (W_1, …, W_n) for the vector of work amounts, and W_tot = ∑_{i=1}^n W_i for the total work.
We write W_i(t) or W(t) when discussing work at a specific time t.

The main metric we consider is mean response time. A job's response time is the amount of time between its arrival and completion. Due to our 1/n service rate convention, if a job of size s is dispatched to a server with w work, the job's response time is n(w + s). We write E[T_π] for the mean response time over all jobs (in the usual limiting long-run average sense) under policy π.

Purely for simplicity of notation, we assume the job size distribution S has no atoms. This is to ensure that expressions like E[S 1{S < m}] are continuous functions of m. One can generalize all of our definitions and results to distributions with atoms using a lexicographic ordering trick.[ Have the system assign each job an i.i.d. uniform U ∈ [0, 1] independent of its size S, and replace comparisons S < m with comparisons (S, U) ≺ (m, v) for some v ∈ [0, 1], where ≺ is the lexicographic order. If E[S 1{(S, U) ≺ (m, v)}] has a jump discontinuity at m, varying v interpolates continuously between the left and right limits.]

§.§ Defining the CARD Policy

We now introduce our policy, CARD, which stands for Controlled Asymmetry Reduces Delay. We first present it in the context of n=2 servers, then generalize to n ≥ 2 servers.

§.§.§ CARD for two servers

In the n=2 case, CARD designates server 1 as the short server and server 2 as the long server. To emphasize this, when discussing CARD, we write W_s = W_1 and W_ℓ = W_2 for the work at the short and long servers, respectively.

CARD has three threshold parameters to set:
* The two size thresholds 0 ≤ m_- ≤ m_+ divide jobs into small, medium, and large (see below).
* The work threshold c ≥ m_+ is, roughly speaking, a target work level for the short server.

Based on these parameters, CARD dispatches jobs as follows (see also <ref>):
* A small job, namely one with size in [0, m_-), is always dispatched to the short server.
* A medium job, namely one with size in [m_-, m_+), is dispatched depending on W_s at time of arrival. If W_s ≤ c, it is sent to the short server, and if W_s > c, it is sent to the long server.
* A large job, namely one with size in [m_+, ∞), is always dispatched to the long server.

§.§.§ Setting CARD's parameters

There are a range of ways to set m_-, m_+, and c that yield stability and heavy-traffic optimality. We specify these formally in the statements of <ref>, but we highlight the key points here (see also <ref>).

The size thresholds m_- and m_+ should be chosen such that small jobs and large jobs each contribute less than half the load. Formally, we require
E[S 1{S < m_-}] < (1/2) E[S] < E[S 1{S < m_+}].
In particular, we have m_- < m < m_+, where m is the solution to E[S 1{S < m}] = (1/2) E[S]. As we show in our lower bound (<ref>), this value m is in some sense the ideal cutoff between small and large jobs. As such, it is important that in heavy traffic, either m_- → m or m_+ → m (or both). We do the former in our upper bound (<ref>).

The work threshold c must balance a tradeoff between two concerns. On one hand, we want there to be little work at the short server so that small jobs have low response times. On the other hand, we do not want the short server to run out of work, as excessive idling could increase response times or even cause instability. Roughly speaking, this means setting c = Θ((1/(1 - ρ))^p) for a suitable choice of p ∈ (0, 1). It is convenient in our proofs to ensure c ≥ m_+, so we assume this throughout.
It also makes intuitive sense that a single medium job should not bring the short server from empty to above the work threshold.

§.§.§ Generalizing CARD to any number of servers

We now generalize the above policy to n ≥ 2 servers. Here we focus on an extension that prioritizes simplicity of analysis while still achieving optimal heavy-traffic performance. In our simulation study (<ref>), we consider a more complex variant which has better performance at practical loads.

The basic idea of n-server CARD is to reduce to the two-server case. We use the same three parameters m_-, m_+, and c, and we define small, medium, and large jobs in the same way. The only difference is that instead of one short and one long server, we use n - 1 short servers 1, …, n - 1 and a single long server n. We thus write W_s_i = W_i and W_ℓ = W_n when discussing n-server CARD. Abusing notation slightly, we write simply W_s when discussing a generic short server whose index is not important.

Jobs are dispatched as follows (see also the sketch at the end of this subsection):
* A small job is always dispatched to a uniformly random short server.
* A medium job is dispatched as follows. The dispatcher selects a uniformly random short server N ∈ {1, …, n - 1} and inspects its amount of work W_s_N. If W_s_N ≤ c, the job is dispatched to the chosen short server N, and if W_s_N > c, it is dispatched to the long server.
* A large job is always dispatched to the long server.

Another way to view n-server CARD is in the following distributed manner. Suppose that instead of one dispatcher, we have n - 1 independent "subdispatchers", each associated with a short server, and suppose that all jobs arrive at a uniformly random dispatcher. Then n-server CARD is the result of each of the subdispatchers using two-server CARD, except they all share the same long server.

The way we set the parameters of n-server CARD is essentially the same as how we set the parameters of two-server CARD. The only difference is that instead of wanting small and large jobs to both have less than half the load, we want small jobs to be less than a 1 - 1/n fraction of the load, and we want large jobs to be less than a 1/n fraction of the load. We therefore set
E[S 1{S < m_-}] < (1 - 1/n) E[S] < E[S 1{S < m_+}].
This means m_- < m < m_+, where now m is the solution to E[S 1{S < m}] = (1 - 1/n) E[S].

§.§ Key Definitions for Main Results and Analysis

We state our main results and perform our analysis in terms of the following quantities.

§.§.§ Drift-related quantities

The following quantities are related to characterizing drifts, which are the average rates at which work increases or decreases in various situations.
* Let ϵ = 1 - ρ. If all servers are busy, then W_tot has drift -ϵ.
* Let ρ_s, ρ_m, and ρ_ℓ be the loads due to small, medium, and large jobs, respectively:
ρ_s = λ E[S 1{S < m_-}], ρ_m = λ E[S 1{m_- ≤ S < m_+}], ρ_ℓ = λ E[S 1{S ≥ m_+}].
* Let α and β be the following quantities related to the drift of W_s:
α = 1/n - ρ_s/(n-1), β = (ρ_s + ρ_m)/(n-1) - 1/n.
If W_s > c, then W_s has drift -α, and if 0 < W_s ≤ c, then W_s has drift +β.
* Let δ ∈ (0, ϵ] be a bound on the probability the short server is idle, i.e. P[W_s = 0] ≤ δ. We discuss how to set CARD's parameters to achieve this bound below.

To specify CARD's m_- and m_+ parameters, it suffices to specify α and β: these determine ρ_s and ρ_m, which in turn determine m_- and m_+. Moreover, for any given β, we show in <ref> how to set CARD's c parameter to achieve P[W_s = 0] ≤ δ.
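The following sketch summarizes the full n-server dispatch rule in code form. It is our own rendering (the function and variable names are not from the paper), a minimal illustration rather than a reference implementation.

import random

def card_dispatch(size, works, m_minus, m_plus, c):
    """n-server CARD: return the (0-indexed) queue an arriving job joins.

    works[0], ..., works[n-2] are the short servers; works[n-1] is the long server.
    """
    n = len(works)
    if size < m_minus:                    # small: uniformly random short server
        return random.randrange(n - 1)
    if size >= m_plus:                    # large: always the long server
        return n - 1
    probe = random.randrange(n - 1)       # medium: probe one random short server
    return probe if works[probe] <= c else n - 1

The thresholds m_- and m_+ consumed by this sketch are exactly the quantities determined by the drift parameters α and β above.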
Accordingly, instead of specifying m_-, m_+, and c directly, we specify α, β, and δ. In particular, our upper bounds <ref> specify how α, β, and δ should scale as functions of ϵ.

§.§.§ Heavy traffic

Our main results consider the ϵ ↓ 0 limit, which we call the heavy-traffic regime. This is equivalent to λ ↑ 1/E[S]. In particular, we leave the number of servers fixed.

Underlying our results are explicit bounds that hold even outside the limiting regime (see e.g. <ref>). Because of our focus on heavy traffic, we assume for convenience that ϵ < 1/n. In particular, this ensures we can set β > 0, which ensures that W_s always drifts towards c. The case where ϵ > 1/n and β < 0 is less interesting, as then both W_s and W_ℓ always have negative drift.

§.§.§ Performance-related quantities

The following quantities are used in our response time bounds (<ref>). Define K and m such that
K = E[S]/E[S | S ≥ m] = n P[S ≥ m].
This characterization of m is equivalent to the aforementioned E[S 1{S < m}] = (1 - 1/n) E[S]. In <ref>, we show that, roughly speaking, E[T_CARD] ≈ K E[W_M/G/1], where
E[W_M/G/1] = λ E[S^2]/(2ϵ)
is the mean work in a resource-pooled M/G/1 (<ref>).

§ MAIN RESULTS AND KEY IDEAS

We now present our main results, followed by some intuition for why they hold. We begin with a lower bound on the mean response time for any dispatching policy. We then state our results about CARD: stability for all ϵ > 0, and heavy-traffic optimality as ϵ ↓ 0. See <ref> for the proofs, with some details deferred to <ref>.

\beginrestatable:theorem
Under any dispatching policy π and for any ϵ ∈ (0,1),
E[T_π] ≥ K E[W_M/G/1] - (n - 1) E[S^2]/(2m) + n E[S].
\endrestatable:theorem
See <ref>.

\beginrestatable:theorem
Let δ > 0, and consider CARD with threshold
c = (n(n-1) m_+/β) log((n+1)/(n β δ)).
(a) Each short server satisfies P[W_s = 0] ≤ δ.
(b) If δ < n ϵ/(n - 1), then the system is stable. Specifically, the set {(0, …, 0)} is positive recurrent for the process W(t) = (W_1(t), …, W_n(t)).
\endrestatable:theorem
See <ref>.

\beginrestatable:theorem
For any fixed number of servers n ≥ 2, if CARD's parameters are set such that
α = Θ(1), β = Θ(ϵ^{1/3} (log(1/ϵ))^{2/3}), and P[W_s = 0] ≤ δ = Θ(ϵ^3)
in the ϵ ↓ 0 limit, then CARD achieves mean response time bounded by
E[T_CARD] ≤ K E[W_M/G/1] + O(((1/ϵ) log(1/ϵ))^{1/3}).
In particular, CARD is heavy-traffic optimal: lim sup_{ϵ↓0} E[T_CARD]/E[T_π] ≤ 1 for any dispatching policy π.
\endrestatable:theorem
See <ref> for the case of n = 2 servers and <ref> for the general case.

§.§ Intuition for Lower Bound on All Policies

We now give some intuition for <ref>. We focus on the heavy-traffic regime, where our aim is to show that the best possible mean response time is roughly E[T] ≈ K E[W_M/G/1].

To begin, recall from Little's law that E[T] = E[N]/λ, where E[N] is the mean number of jobs in the system. The key idea is to relate E[N] to the mean amount of work E[W_tot]. This is helpful because one can easily show E[W_tot] ≥ E[W_M/G/1] (see e.g. <ref>). How can we relate E[N] to E[W_tot]? In heavy traffic, most jobs in the system are waiting in a queue and have yet to enter service. We thus approximate E[N] ≈ E[W_tot]/E[S_queue], where E[S_queue] is the mean size of jobs waiting in a queue. This means minimizing E[T] amounts to maximizing the mean size of jobs waiting in the queue. This makes sense in light of the fact that when studying scheduling policies beyond FCFS, serving small jobs ahead of large jobs reduces mean response time <cit.>.

What is the largest that E[S_queue] can be? Because we are restricted to FCFS service, the only mechanism by which we can affect the sizes of jobs in the system is dispatching. In particular, we can dispatch jobs of different sizes to different servers. (The snippet below makes the cutoff m and the constant K concrete for a specific distribution.)
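Here is a small Monte Carlo sketch (our own illustration) that estimates m and K from samples of S, using the definitions E[S 1{S ≥ m}] = E[S]/n and K = n P[S ≥ m]:

import numpy as np

def cutoff_and_K(samples, n):
    """Estimate m solving E[S 1{S >= m}] = E[S]/n, and K = n P[S >= m]."""
    ES = samples.mean()
    lo, hi = 0.0, float(samples.max())
    for _ in range(60):                    # bisect: the tail load is decreasing in m
        mid = 0.5 * (lo + hi)
        tail = samples[samples >= mid].sum() / len(samples)   # E[S 1{S >= m}]
        lo, hi = (mid, hi) if tail > ES / n else (lo, mid)
    m = 0.5 * (lo + hi)
    return m, n * (samples >= m).mean()

rng = np.random.default_rng(0)
print(cutoff_and_K(rng.exponential(1.0, 1_000_000), 2))
# For Exp(1) and n = 2: m solves (1+m)e^{-m} = 1/2, so m is about 1.68 and K about 0.37.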
Suppose, for example, that servers 1, …, n - 1 have a negligible amount of work, meaning nearly all of the work is at server n. Then S_𝗊𝗎𝖾𝗎𝖾 would be the average size of jobs dispatched to server n, which could be much greater than S. The best we could hope to do is S_𝗊𝗎𝖾𝗎𝖾 = SS ≥ m for as high a threshold m as possible. But in heavy traffic, we need server n to handle a 1/n fraction of the load, so the largest value of m possible solves S (S ≥ m) = 1/nS. This is equivalent to the characterization of m from <ref>, so it leads to T / W_≈S / SS ≥ m = K_. Observing W_≥W_ completes the bound.To make this reasoning rigorous, it turns out that reasoning directly in terms of S_𝗊𝗎𝖾𝗎𝖾 is difficult. We instead prove <ref> using a potential-function approach. However, the potential function and manipulations we perform on it were directly inspired by the intuition:The best-case scenario is to dedicate one server to the jobs of size at least m, and to ensure that all other servers have a negligible amount of work.§.§ Intuition for Upper Bound on CARD We now give some intuition for <ref>. By the lower bound intuition above, CARD is already well on its way to achieving the best-case scenario: it attempts to keep the amount of work at the n - 1 short servers near c, and the long server only serves medium and large jobs. To show CARD matches the lower bound in heavy traffic, it would suffice to show the following.CARD does not have much more work than a resource-pooled M/G/1: W_≈W_. * Roughly speaking, this amounts to showing that we avoid situations where one server is idle while another server has lots of work (see <ref>).CARD's short servers do not exceed c work by too much: W_s≈ c. * We also need to set c such that it is negligible in heavy traffic.CARD rarely dispatches medium jobs to the long server: W_s ≤ c≈ 1. /Our main tool for showing these and related properties is examining what we call below-above cycles. Consider a particular short server. It alternates between below periods, during which W_s ≤ c, and above periods, during which W_s > c. It turns out that much of our analysis rests on below-above cycles not being too long. One reason for this is that when enough short servers are in above periods, the long server is temporarily overloaded. Long periods of transient overload could cause W_ to be significantly greater than W_. Short below-above cycles prevent this possibility. See <ref> for more details about how we use below-above cycles. § UNIVERSAL LOWER BOUND*<ref>Before diving into the proof, we give the high-level idea for n = 2 servers.Suppose an arrival occurs while W_1 < W_2. For that individual arrival, its response time if it were sent to queue i would be 2 W_i, so the “benefit” of sending it to queue 1 instead of queue 2 is 2(W_2 - W_1). Reasoning symmetrically if W_1 < W_2, we conclude that the benefit of dispatching jobs to the shorter queue is proportional to |W_2 - W_1|.The main challenge is therefore to show that no dispatching policy can both frequently dispatch to the shorter queue, and also maintain large difference |W_2 - W_1| between the queues. The key observation is that if we dispatch the job to the shorter queue, then |W_2 - W_1| decreases, so the next arrival would see less benefit. That is, we can view |W_2 - W_1| as a type of resource: dispatching jobs to the shorter queue depletes it, while dispatching jobs to the longer queue replenishes it. 
It is thus best to dispatch shorter jobs to the shorter queue, which slowly depletes |W_2 - W_1|, and dispatch longer jobs to the longer queue, which quickly replenishes |W_2 - W_1|. To formalize the idea of viewing |W_2 - W_1| as a resource, we use the potential function 12(W_2 - W_1)^2.The proof below handles any number of servers n. The idea is essentially the same as the n = 2 case, except we look at the work differences |W_i - W_j| for every pair of servers i ≠ j. Consider an arbitrary stationary dispatching policy π. We first introduce notation for π's dispatching decisions. Suppose a job of random size S arrives and observes work vector W = (W_1, …, W_n). We denote by W_ the work at the queue the arrival is dispatched to. Note that while S is independent of W, it is not independent of W_. We also write W_ = ∑_i = 1^n W_i for the total work at all queues. Because each server does work at rate 1/n, we can write T_π asT_π = n W_ + S = W_ + n W_ - W_ + n S. The main task is to give a lower bound on n W_ - W_. To do so, we apply the rate conservation law <cit.> to V(W), whereV(w) = 1/2∑_i = 1^n ∑_j = 1^i - 1 (w_i - w_j)^2.The value of V(W) can change in two ways.Work is done continuously at each nonempty queue. We denote this average continuous change by D_t V(W).Arrivals add work to whichever queue the dispatcher chooses. By PASTA (Poisson Arrivals See Time Averages) <cit.>, this yields average change λV(W + S 𝐞_) - V(W), where 𝐞_ is the standard basis vector with a 1 indicating the queue the job is dispatched to. / The rate conservation law <cit.> states that the average rate of change of V(W) is zero, soD_t V(W) + λV(W + S 𝐞_) - V(W) = 0. We now investigate each of the two terms in <ref>. We first observe that D_t V(W)≤ 0, because in the absence of arrivals, for any two queues i and j, the absolute difference |W_i - W_j| either decreases (if exactly one server is idle) or stays constant (otherwise). Therefore,V(W + S 𝐞_) - V(W)≥ 0.Expanding the definition of V(w) and writing ∑_i ≠𝖼𝗁𝗈𝗂𝖼𝖾 for sums over all queues other than the one the job is dispatched to, we obtain0≤1/2*[r]∑_i ≠𝖼𝗁𝗈𝗂𝖼𝖾[](W_ + S - W_i)^2 - (W_ - W_i)^2= n - 1/2S^2 + *[r]∑_i ≠𝖼𝗁𝗈𝗂𝖼𝖾 S(W_ - W_i)= n - 1/2S^2 + S (n W_ - W_).Subtracting both sides from m n W_ - W_ and using the fact that-W_≤ n W_ - W_≤ (n - 1) W_,we obtainm n W_ - W_ ≥(m - S) (n W_ - W_) - n - 1/2S^2≥ -[](S - m)^+ (n - 1) W_ - (m - S)^+ W_ - n - 1/2S^2(a)= -[][](n - 1) (S - m)^+ + (m - S)^+ W_ + n - 1/2S^2= -[](m - S + n(S - m)^+) W_ + n - 1/2S^2,where (a) follows from the fact that an arriving job's size S is independent of the work vector W it observes upon arrival.We now substitute the bound from <ref> into <ref>, obtainingT_π = S - n (S - m)^+/mW_ - (n - 1) S^2/2 m + n S.The bound follows from W_≥W_ (see e.g. <ref>) and <ref>, which impliesS - n (S - m)^+= S - n S (S > m) + m n S > m= m n S > m= m K_.§ CARD STABILITY ANALYSIS Proving CARD's stability is more than a straightforward application of the Foster-Lyapunov theorem, which is widely used to establish stability of queueing systems. The main obstacle here is that the long server alternates between being underloaded and overloaded. It is thus difficult to find a Lyapunov function that is negative outside a compact set.To overcome this obstacle, we use a result of <cit.>. Roughly, it says that since W_s is a Markov process of its own, if it is stable, then it suffices to do a drift analysis of W_ℓ, averaged over the stationary distribution of W_s. 
Of course, we first need to show that W_s has a stationary distribution. Our proof for CARD's stability therefore proceeds in three steps.We show that the short server's work W_s(t), as a Markov process of its own, is Harris ergodic (<ref>).With the stability of W_s(t) in hand, we bound the idleness probability of the short server in steady state (<ref>).We apply the result of <cit.> (<ref>) to show stability whenever the long server is on average not overloaded. Our bound on the short server's idleness probability from the previous step thus gives a sufficient condition for stability. / Armed with these key ideas, the proofs themselves are relatively straightforward, with the bulk of the work being computation. As such, we defer most of these computation details to <ref>.\beginrestatable:lemmaW_s is Harris ergodic for any ϵ>0. \endrestatable:lemmaThe proof uses a Foster-Lyapunov theorem for continuous-time Markov processes <cit.>. The key step is to verify that the Lyapunov function V(w_s) = w_s has bounded drift when w_s ≤ c and negative drift when w_s > c. This is true because when w_s > c, we only send small jobs to the short server. We defer the details to <ref>. We establish our short server idleness bound by first proving a general bound on the probability that W_s is lower than c by a general amount x. The idleness bound follows by plugging in x = c.\beginrestatable:lemmaSuppose θ > 0 satisfies (S_s,m)_e(θ)>1/n(n-1)β+n-1, where (S_s,m)_e(·) is the Laplace transform of the equilibium distribution of the size of small and medium jobs. Then for all x ∈ [0, c],W_s < c-x≤*n(n-1)β+n-1(S_s,m)_e(θ)/*n(n-1)β+n-1(S_s,m)_e(θ)-1e^-θ x.\endrestatable:lemmaThis result is a Chernoff-type bound on (c - W_s)^+, so the main task is to bound expθ (c - W_s)^+. We do this by applying the rate conservation law <cit.> to expθ (c - W_s)^+. We defer the details to <ref>. \beginrestatable:lemmaWe have the following bound on the idleness of the short server,W_s = 0≤n+1/nβexp*-β c/n(n-1)m_+.\endrestatable:lemmaLet θ=β/n(n-1)1/m_+. Since β < 1n(n-1) and all small and medium jobs have length at most m_+, we have(S_s,m)_e(θ)≥1-θ(S_s,m)_e≥1-β/n(n-1). We can therefore apply <ref>, from which the bound follows by the computation below and setting x = c:W_s<c-x ≤*n(n-1)β+n-1*1-β/n(n-1)/*n(n-1)β+n-1*1-β/n(n-1)-1exp*-β x/n(n-1)m_+≤n+1/nβexp*-β x/n(n-1)m_+.We defer the proof of <ref> to <ref>.§ CARD MEAN RESPONSE TIME ANALYSISWith the lower bound from <ref> in mind, our next step is to establish an upper bound on the mean response time under CARD. We focus here on the two-server case. The general case uses the same ideas but has more complicated computations, so we defer its proof to <ref>.Let T_, s, T_, m, and T_, ℓ be the mean response times of small, medium, and large jobs under CARD, respectively. We haveT_ = p_s T_, s + p_mT_, m + p_ℓT_, ℓ≤ 2 S + 2 p_s W_s + 2p_m c W_s ≤ c + 2 p_m W_ℓ(W_s > c) + 2 p_ℓW_ℓ.where the inequality follows from how CARD dispatches jobs, the PASTA property <cit.>, and the fact that the servers complete work at rate 1/2. The main difficulty of analyzing (<ref>) lies in bounding W_ℓ and W_ℓ(W_s>c). We now give a high-level overview of the obstacles and our approach. §.§ Key Ingredients: Work Decomposition, Below-Above Cycles, and Palm InversionTo bound W_ℓ, it suffices to bound W_. The following theorem, called the work decomposition law <cit.>, provides a way to bound W_. 
We state it below in a way that is specialized to our system.\beginrestatable:theoremDenote by I the fraction of servers that are idle, namelyI = 1/n∑_i=1^n (W_i=0).If the system is stable, then the steady-state mean total work W_ satisfiesW_ = W_ + I W_/ϵ = λ/2S^2 + I W_/ϵ,where W_ is the work in an M/G/1 with arrival rate λ and job size distribution S. \endrestatable:theoremThe key component we need to bound from <ref> is I W_. We would like to studyI W_ =(W_s+W_ℓ)*12(W_s=0)+12(W_ℓ=0)=12W_ℓ(W_s=0)+12W_s(W_ℓ=0)The main difficulty here is to bound W_ℓ(W_s=0). Since CARD dispatches differently to the long server based on the state of the short server, W_ℓ depends on the state of W_s. Such a dependency also poses challenges in analyzing W_ℓ W_s>c, when even knowing W_ℓ is not sufficient.Under CARD, W_s alternates between being above and below the threshold c. Such a behavior naturally leads to renewal intervals consists of the “above” periods and “below” periods. We partition time into alternating intervals, called below periods and above periods, as follows:A time t is in a below period if W_s(t) ≤ c.A time t is in an above period if W_s(t) > c. / A below-above cycle is then a complete below period followed by a complete above period. Below-above cycles start at times t for which W_s(t) = c. We can partition time into below-above cycles.We introduce the following notation for working with below periods, above periods, and below-above cycles:We write _c^0· for the Palm expectation <cit.> taken at the start of a below-above cycle. Roughly speaking, _c^0· = “·a below period starts at time 0”, but the formal definition avoids conditioning on a measure-zero event.In the context of a below-above cycle starting at time 0, meaning W_s(0) = c, we denote the lengths of the below and above period by B and A, respectively:B= inft > 0 : W_s(t) > c, A= inft > B : W_s(t) = c.Abusing notation slightly, we also use B and A to denote the lengths of the below and above period in a generic below-above cycle, not necessarily one that starts at time 0. / Why are above and below periods helpful for analyzing CARD? Within an above or below period, CARD does not change how it dispatches jobs, making it easier to analyze W_ℓ within one below-above cycle. The Palm inversion formula <cit.>, which is a generalization of the celebrated renewal-reward theorem, allows us to connect the average behavior of W_ℓ within one below and above cycle to a steady-state average. For example, it impliesW_ℓ = 1/A+B_c^0*∫_0^A+BW_ℓ(t)ṭ, W_ℓ(W_s>c) = 1/A+B_c^0*∫_B^A+BW_ℓ(t)ṭ.Our high-level idea is to relate both of these quantities to _c^0W_ℓ(0), the mean work at the long server at the start of a below-above cycle. We show in <ref> that, roughly speaking,W_ℓ≈_c^0W_ℓ(0), W_ℓ(W_s>c)≈_c^0W_ℓ(0) W_s > c. The rest of this section is organized as follows.<ref> analyzes the behavior of the short server. In particular, we show that above and below cycles are not too long.<ref> analyzes the behavior of the long server. Using the fact that above and below cycles are not too long, we show <ref>. As part of this, we bound W_.<ref> assembles the pieces to prove <ref>. /§.§ Analyzing the Short Server and Below-Above Cycles In this section, we bound various quantities relating to work at the short server and the below-above cycles. 
Of particular importance are the mean excesses of the above and below periods A_e and B_e, as they are used to better understand the relations between W_ℓ and _c^0W_ℓ(0).The techniques we use to obtain bounds on A_e and B_e also immediately yield bounds on A and B. Despite not using these bounds, given that they help complete the picture of how the system behaves, we state them, too.As a reminder, the excess or equilibrium distribution of a random variable V is the distribution V_e whose probability density function is f(t) = V > t / V. The excess arises naturally in renewal theory <cit.>. Most important for our purposes is the fact thatV_e = V^2/2 V. \beginrestatable:lemmaB≤m_+/β, B_e≤c+m_+/β≤2 c/β, and(B_e)_e≤c+m_+/β≤2 c/β.\endrestatable:lemma Suppose that at time 0, the short server has W_s(0) = v ≤ c work, so time 0 is in a below period. Let τ(v) be the time until the end of the below period. We will showτ(v)≤c + m_+ - v/β≤c + m_+/β≤2 c/β,where the last step follows because c ≥ m_+ (<ref>). This implies all three of the bounds.A below period starts with c work at the short server, so B = τ(c)≤m_+/β.The excesses B_e and (B_e)_e can both be interpreted as the distribution of the amount of time until the below period ends, starting from some random amount of work at the short server, so their means can each be written as τ(V) for an appropriate variable V. /It remains only to show <ref>, which we do using a supermartingale argument. Suppose W_s(0) = v as above, and define X(t) = c - W_s(t) + β t. We now show that X(t) is a supermartingale with respect to the Markov process W_s(t). LetΔ_s(u, t) be the amount of work completed by the short server during (u, t] andΣ_s(u, t) be the amount of work that arrives to the short server during (u, t]. / For any 0 ≤ u ≤ t, we haveX(t) | W_s(u) - X(u)= W_s(u) - W_s(t) | W_s(u) + β (t - u) = Δ_s(u, t) - Σ_s(u, t) | W_s(u) + β (t - u) ≤*12(t - u) - Σ_s(u, t) | W_s(u) + β (t - u) = (t - u) *12 - ρ_s - ρ_ℓ + β = 0,so X(t) is indeed a supermartingale. Applying the optional stopping theorem to X(t) and τ(v), which we justify below, yieldsc - v = X(0)≥X(τ(v)) = c - W_s(τ(v)) + βτ(v) (a)≥ -m_+ + βτ(v),from which <ref> follows. Above, (a) uses the fact that all medium jobs have size at most m_+, so at the moment the below period ends, the short server's work can jump to at most c + m_+.All that remains is to verify that we can indeed apply the optional stopping theorem.We have τ(v) < ∞ by positive recurrence of W_s(t).We have uniform integrability, namely lim_t→∞X(t)(τ(v) > t) = 0, thanks to the following two observations. First, W(t)(τ(v) > t)→ 0 because c - W(t) ∈ [0, c] when t is in a below period. Second, β t(τ(v) > t)≤βτ(v)(τ(v) > t)→ 0 because τ(v) < ∞. / \beginrestatable:lemma W_s - cW_s > c≤m_+/4 αand(W_s - c)^2W_s > c≤m_+^2/8 α^2\endrestatable:lemma Each above period starts with W_s - c ∈ [0, m_+]. Until the end of the above period, W_s - c evolves like the amount of work in an M/G/1 queue with server speed 1/2, job size distribution S_s, and work arrival rate ρ_s < 1/2. This means (W_s - cW_s > c) has the same distribution as an M/G/1 with vacations, where the vacation length distribution is that of W_s - c at the start of an above period. The desired bounds follow from the work decomposition formula for the M/G/1 with vacations <cit.> and the observation that both job sizes and vacation lengths are bounded by m_+. We defer the details to <ref>. 
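As a numerical sanity check on the cycle-length bounds (E[B] ≤ m_+/β above, and E[A] ≤ m_+/α in the next lemma), one can simulate the short server of the two-server system in isolation. The sketch below is our own illustration: draw_size is a user-supplied job-size sampler, lam is the total arrival rate, and phase boundaries of the speed-1/2 server are tracked exactly.

import random

def cycle_lengths(lam, draw_size, m_minus, m_plus, c, horizon):
    """Estimate the mean below- and above-period lengths of W_s under
    two-server CARD. Run with a large horizon so both lists are nonempty.
    """
    t, w = 0.0, c                    # W_s = c: the start of a below period
    start, in_below = 0.0, True
    belows, aboves = [], []
    while t < horizon:
        dt = random.expovariate(lam)           # time until the next arrival
        if not in_below and w - dt / 2 <= c:   # W_s drains through c: above period ends
            cross = t + 2 * (w - c)            # exact crossing time at speed 1/2
            aboves.append(cross - start)
            start, in_below = cross, True
        t += dt
        w = max(0.0, w - dt / 2)
        s = draw_size()
        if s < m_minus or (m_minus <= s < m_plus and w <= c):
            w += s                             # small job, or medium job seeing W_s <= c
            if in_below and w > c:             # jump above c: below period ends
                belows.append(t - start)
                start, in_below = t, False
    return sum(belows) / len(belows), sum(aboves) / len(aboves)

Comparing the two returned averages against m_+/β and m_+/α for a few parameter settings is a quick way to confirm the direction of the inequalities.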
\beginrestatable:lemma A≤m_+/αandA_e≤m_+/4 α^2.\endrestatable:lemma As in the proof sketch of <ref>, we view the short server during an above period as an M/G/1 with server speed 1/2 and work arrival rate ρ_s, so the mean drift of W_s is -(1/2 - ρ_s) = -α. By standard results for M/G/1 busy periods <cit.>, starting from W_s - c = v, it takes v/α time in expectation for the above period to end.The A bound follows from the fact that at the start of an above period, W_s - c ≤ m_+, implying A≤ m_+/α.The A_e bound follows from the fact that the residual time of an above period is distributed as A_e. But the residual time of an above period is the same as the amount of time until an above period ends starting from the stationary distribution of W_s - c conditional on being in an above period. This means A_e = W_s - cW_s > c/α, so the result follows from <ref>. / §.§ Analyzing the Long Server In this section, we bound differences between _c^0W_ℓ(0) and W_ℓ, W_ℓ(W_s>c), and W_ℓ(W_s=c), separately. These bounds will help us achieving the ultimate goal of this section: to upper bound W_, thereby obtaining a bound on W_ℓ. Let q_A and q_B be the probabilities of being in an above or below period, respectively. That is,q_A = W_s > c = A/A + Band q_B = W_s ≤ c = B/A + B,where the expressions in terms of expectations of A and B follow from renewal-reward theorem.\beginrestatable:lemma []W_ℓ-_c^0W_ℓ(0)≤*√(q_A A_e) + √(q_B B_e)^2 ≤q_A m_+/2 α^2 + 4 q_B c/β.\endrestatable:lemmaThe long server workload process can be described asW_ℓ(t)=*W_ℓ(0)-Δ_ℓ(0,t)+Σ_ℓ^m(0,t)+Σ_ℓ^ℓ(0,t)^+,whereΔ_l(0,t) is the total work processed by the long server in (0,t],Σ_ℓ^m(0,t) is the total work added to the long server from medium job arrivals in (0,t], andΣ_ℓ^ℓ(0,t) is the total work added to the long server due to large job arrivals in (0,t]. / Applying the Palm inversion formula <cit.> to W_ℓ givesW_ℓ = 1/A+B_c^0*∫_0^A+B*W_ℓ(0)-Δ_ℓ(0,t)+Σ_ℓ^m(0,t)+Σ_ℓ^ℓ(0,t)^+ṭ(a)=_c^0W_ℓ(0) + 1/A+B_c^0*∫_0^A+Bmax*-Δ_ℓ(0,t)+Σ_ℓ^m(0,t)+Σ_ℓ^ℓ(0,t), -W_ℓ(0)ṭ,where (a) holds since W_ℓ(0), the amount of long server work at time 0, is independent of A + B, the length of the below-above cycle starting at time 0.We now bound W_ℓ-_c^0W_ℓ(0) separately from above and below. To obtain a lower bound, we bound the integrand below by -Δ(t), obtainingW_ℓ - _c^0W_ℓ(0)≥ -_c^0*∫_0^A + BΔ_ℓ(0,t)ṭ(b)≥ -_c^0*∫_0^A + Bt/2ṭ = -(A+B)^2/4 A+B,where (b) holds because the server completes work at rate 12 while it is busy. To obtain an upper bound, we bound the integrand above by Σ^m_ℓ(0, t) + Σ^ℓ_ℓ(0, t). We first bound its conditional expectation given A and B. Notice that Σ^m_ℓ(0, t) + Σ^ℓ_ℓ(0, t) consists of arrivals of large jobs during (0, t] and medium jobs during (B, t]. Neither of these types of arrivals impacts the lengths of the above and below periods, so_c^0Σ^m_ℓ(0, t) + Σ^ℓ_ℓ(0, t)A, B = ρ_m (t - B)^+ + ρ_ℓ t ≤ t.From <ref> and a computation similar to the lower bound, we obtainW_ℓ-_c^0W_ℓ(0)≤(A+B)^2/2 A+B.Combining this with the lower bound, the result follows from <ref>, <ref>, andCauchy-Schwarz:(A+B)^2/2 A+B≤A^2 + √(A^2B^2) + B^2/2 A+B = q_A A_e + 2 √(q_A A_eq_B B_e) + q_B B_e.Finally, we apply AM-GM inequality on √(q_A A_eq_B B_e) and our bounds on A_e and B_e from <ref> to complete the proof. \beginrestatable:lemma []W_ℓ(W_s > c) - q_A _c^0W_ℓ(0)≤ q_A A_e + 2 √(q_A A_eq_B B_e)≤q_A m_+/4 α^2 + √(2 q_A q_B m_+ c)/α√(β).\endrestatable:lemmaSimilar to that of <ref>. See <ref>. 
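For readers less familiar with the Palm calculus used throughout this subsection, the simplest instance of the Palm inversion formula is worth writing out: taking the integrand 1{W_s(t) > c} recovers the renewal-reward expression for q_A given above. (A one-line worked derivation; nothing here is new.)

q_A = P[W_s > c]
    = E_c^0[∫_0^{A+B} 1{W_s(t) > c} dt] / E[A+B]
    = E_c^0[∫_B^{A+B} dt] / E[A+B]
    = E[A] / (E[A] + E[B]),

since within a below-above cycle starting at time 0, the indicator equals 1 exactly on the above period [B, A+B).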
\beginrestatable:lemma W_ℓ(W_s=0)≤δ_c^0W_ℓ(0) + √(δB_e^2)≤δ_c^0W_ℓ(0) + 2 c √(2 δ)/β.\endrestatable:lemmaApplying Palm inversion formula <cit.> to W_ℓ(W_s=0) yieldsW_ℓ(W_s=0) = 1/A+B_c^0*∫_0^BW_ℓ(t) (W_s(t)=0)ṭ,where we can end the integral at B because we only have W_s(t) = 0 during below periods, which corresponds to t ∈ [0, B). We further expand the right-hand side using <ref>. No medium jobs are dispatched to the short server during below periods, soW_ℓ(W_s=0)≤1/A+B_c^0*∫_0^B(W_ℓ(0)+Σ_ℓ^ℓ(0,t)) (W_s(t)=0)ṭ(a)=_c^0W_ℓ(0)/A+B_c^0*∫_0^B(W_s(t)=0)ṭ + 1/A + B_c^0*∫_0^BΣ_ℓ^ℓ(0,t) (W_s(t)=0)ṭ.where (a) follows from the independence of W_ℓ(0) and ∫_0^B(W_s(t)=0)ṭ. To analyze the first term, we observe that by the Palm inversion formula <cit.> and <ref>,1/A+B_c^0*∫_0^B(W_s(t)=0)ṭ = (W_s = 0) = W_s=0≤δ.To analyze the second term, we apply <ref>, yielding_c^0*∫_0^BΣ_ℓ^ℓ(0,t) (W_s(t)=0)ṭ≤_c^0*∫_0^B t(W_s(t)=0)ṭ.The right-hand side is difficult to compute directly due to the dependency of B and W_s. To resolve this, we apply the Palm inversion formula <cit.> to B_a(W_s=0), where B_a(t) is the age process of the below-above cycle, namely the amount of time since the current cycle began. This yields1/A+B_c^0*∫_0^B t(W_s(t)=0)ṭ = B_a(W_s=0)Thus, to bound 𝒯_3, it suffices to bound B_a(W_s=0). By Cauchy-Schwarz,B_a(W_s=0)≤√(B_a^2W_s=0)(b)=√(B_e^2W_s=0)(c)≤√(δB_e^2),where (b) follows because B_a has distribution B_e, and (c) follows from <ref>. The result then follows from bounding B_e^2 using <ref>. \beginrestatable:lemma W_ℓ≤W_≤[]1 + δ/ϵW_ + 2 c + m_+ √(q_A)/2 α√(ϵ) + 4 c √(δ)/α^2 βϵ.\endrestatable:lemmaWe use <ref> to bound W_, which amounts to analyzing I W_. We haveI W_ = (W_s+W_ℓ) *12(W_s=0)+12(W_ℓ=0) = 12W_ℓ(W_s=0)+12W_s(W_ℓ=0).Combining <ref> and noting W_ℓ≤W_ yields a bound on W_ℓ(W_s=0):W_ℓ(W_s=0)≤δ*W_ + q_A m_+/4 α^2 + 4 q_B c/β + 2 c √(2 δ)/β.To bound W_s(W_ℓ=0), we computeW_s(W_ℓ=0) (a)≤(c+(W_s-c)^+) (W_ℓ=0)(b)≤ cW_ℓ=0+√(*((W_s-c)^+)^2 W_ℓ=0)= cW_ℓ=0+√(q_A*(W_s-c)^2W_s > c W_ℓ=0)(c)≤ 2ϵ c+m_+ √(q_A ϵ)/2α,where (a) follows from W_s≤ c+(W_s-c)^+, (b) follows from Cauchy-Schwarz, and (c) follows from <ref> and the fact that ϵ = 12W_s=0 + 12W_ℓ=0≥12W_ℓ=0. Combining the bounds on W_ℓ(W_s=0) and W_s(W_ℓ=0) with <ref>, we obtainW_ = W_ + 1/ϵ*12W_ℓ(W_s=0) + 12W_s(W_ℓ=0)≤W_ + c + m_+ √(q_A)/4 α√(ϵ) + δ/2 ϵW_ + q_A m_+ δ/8 α^2 ϵ + 2 q_B c δ/βϵ + c √(2 δ)/βϵ.The result follows after rearranging and simplifying. We use the fact that we have defined the parameters such that c ≥ m_+ and δ≤ϵ (<ref>), which means 1/[]1 - δ/2 ϵ≤ 1 + δ/ϵ≤ 2. And, using the fact that α, β≤12, we loosely bound the terms with a √(δ) factor byq_A m_+ δ/8 α^2 ϵ + 2 q_B c δ/βϵ + c √(2 δ)/βϵ≤ (q_A + q_B) c δ/2 α^2 βϵ + c √(2 δ)/βϵ≤2 c √(δ)/α^2 βϵ. §.§ Bounding Mean Response TimeWe now prove <ref>, our main upper bound result. It follows as a corollary of a more explicit bound, which we state in <ref>. To simplify the computations, we assume that β≥ 2δ, but we could remove this assumption at the cost of slightly complicating the expressions. \beginrestatable:lemmaIf β≥ 2 δ, then q_A ≤2 β/α + β and q_B ≤α/α + β.\endrestatable:lemmaThe short server is stable, so the load of jobs arriving to it equals the average rate it completes work. This means ρ_s + ρ_m W_s ≤ c = 12W_s > 0. <ref> implies W_s > 0∈ [1 - δ, 1], so the bound follows from the definitions of α and β and the β≥ 2 δ assumption. 
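Filling in the algebra behind this lemma for the two-server case (where α = 1/2 − ρ_s and β = ρ_s + ρ_m − 1/2, hence ρ_m = α + β): rate balance at the short server gives ρ_s + ρ_m P[W_s ≤ c] = (1/2) P[W_s > 0], and P[W_s > 0] ∈ [1 − δ, 1] by <ref>. Therefore

ρ_m q_A = ρ_m − ρ_m q_B ≤ (α + β) − ((1/2)(1 − δ) − ρ_s) = β + δ/2 ≤ (5/4)β,

using δ ≤ β/2 in the last step, which yields q_A ≤ (5β/4)/(α + β) ≤ 2β/(α + β). Similarly, ρ_m q_B ≤ 1/2 − ρ_s = α gives q_B ≤ α/(α + β).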
\beginrestatable:theoremIn a system with n = 2 servers, if δ≤ϵ < 1/2 and β≥ 2 δ, then by setting c according to <ref>, CARD achieves mean response time bounded byT_ ≤*K_ + 4 β/α + β*1 + δ/ϵW_ + 2 S + 44 m_+ max[]β/α^2 (α + β),√(β/α^2 ϵ (α + β)), log3/2 βδ/β (α + β), √(log3/2 βδ/β), √(δ)log3/2 βδ/α^2 β^2 ϵ.\endrestatable:theoremIt suffices to bound the work quantities on the right-hand side of <ref>.<Ref> implies W_s≤ c + q_A W_s - cW_s > c≤ c + q_A m_+/α.<Ref> imply, after some simplification,W_ℓ(W_s > c)≤ q_A W_ℓ + q_A m_+/α^2 + 4 q_A q_B c/β + √(2 q_A q_B m_+ c)/α√(β)./ We use these with <ref> to express the right-hand side in terms of α, β, δ, and m_+, then simplify. We defer the details to <ref>. The bound follows directly from plugging the parameter choices into <ref>, and comparing with the lower bound in <ref> implies heavy-traffic optimality. But the main question is why these are the right ways to set the parameters.If we set δ = Θ(ϵ^d) for fixed d, the only expression in <ref> that is increasing as a function of d is log3/2 βδ = d Θ*log1/ϵ. We thus ignore factors of √(δ) when determining α and β. One can check at the end that d ≥ 3 suffices.Observe that we want β/α↓ 0 to ensure the multiplier of W_ approaches K_. If we substitute β = κα into <ref>, then for any fixed κ, the resulting expression is a decreasing function of α, so we set α = Θ(1). With this choice, the largest terms from the maximum in <ref> are Θ[]√(β / ϵ) and Θ[]1/βlog1/ϵ, which are balanced by β = Θ[]ϵ^1/3[]log1/ϵ^2/3. § SIMULATIONS We have established the optimality of CARD as load approaches capacity. In this section, we investigate the performance of CARD in moderate traffic via simulations. We aim to provide insights into the following questions with our simulations.How good is CARD's performance compared with other dispatching policies in literature?Are there simple modifications of CARD that exhibit better performance in practice? CARD has three tunable parameters: c, α, and β. The recipe provided in <ref> is optimal in heavy traffic, but are there rules of thumb that work well beyond heavy traffic? How sensitive is CARD's performance to these parameters?/ In all of our simulations, we consider three benchmark policies: LWL, SITA-E <cit.>, and Dice <cit.>. Roughly, Dice lets the server with least work pick small jobs from the arrival stream and defers large jobs to servers with more work. We refer interested readers to <cit.> for details.[ The version of Dice we simulate differs slightly from the original version in <cit.>, where Dice have thresholds that do not vary with load. We notice that constant thresholds lead to suboptimal performance for either low or high loads. Therefore, we incorporate load-dependent thresholds for Dice that lead to good performance across all loads simulated. With two servers, we use a threshold of the form ηϵ^-1/3 for Dice, picking η = 1.8, 5.2, 20 for 𝖼𝗏 = 1, 10, 100, respectively. With ten servers, we use thresholds 2m_iϵ^-1/3.] Of course, there are many more dispatching policies. We pick LWL and SITA-E because they are extensively studied, and we pick Dice because among all heuristics for size- and state-aware dispatching, it has the best performance at high load <cit.>.Our simulations include job size distributions with exponential and heavier tails. Heavy-tail distributions are common in computer systems and networks (e.g. <cit.>) and the high mean response times they incur make a good dispatching policy essential. 
Throughout this section, we consider three Weibull distributions with mean 1 and coefficients of variation (cv) 1, 10, and 100. We simulate 40 trials for each data point, with 10^7 arrivals per trial for cv = 1 and cv = 10, and 3×10^7 arrivals per trial for cv = 100. We show 95% confidence intervals when wider than the marker size.

§.§ Performance of CARD with Two Servers

Although CARD as introduced in <ref> is heavy-traffic optimal, we can improve its performance under moderate traffic with one small modification: instead of statically deciding which server is short and which is long, dynamically treat whichever server has less work as the short server. We call this variant Flexible CARD, and call the original version Rigid CARD to disambiguate.

<ref> shows that both CARD versions significantly outperform LWL and SITA-E, especially at high loads and with large coefficients of variation. For instance, with cv = 100 and ρ = 0.98, CARD gives a 93% reduction compared to LWL, and a 61% reduction compared to SITA-E. Flexible CARD is also almost tied with Dice at all loads simulated.

§.§ Calibrating the Parameters of Two-Server CARD

We now discuss how to calibrate the parameters c, α, and β. In practice, α and β as prescribed in <ref> are difficult to calibrate, because the feasible ranges of α and β change as ρ increases. Therefore, we consider instead the parameters α' = 1/2 - ρ_s/ρ and β' = 1/2 - ρ_ℓ/ρ. Adjusting α' can thus be understood as adjusting the fraction of load from small jobs, and adjusting β' as adjusting the fraction of load from large jobs.

After trying a few strategies for scaling c as a function of ρ, we found that thresholds of the form c = γ (1/√ϵ) log(1/ϵ), where γ depends on the distribution, yield decent performance.

In general, for the three job size distributions we consider, mean response time under flexible CARD is not very sensitive to these parameters near the optima (see <ref>). Any choice of parameters not too far from the optima yields decent performance. We found that α' = β' = 0.15 for all three distributions, and γ = 0.3, 0.6, 2.5 for cv = 1, 10, and 100, respectively, lead to decent performance. These are also the parameters we used in <ref>.

§.§ Improving CARD's Performance for More than Two Servers

As the number of servers increases, flexible CARD with three parameters (γ, α', and β') no longer performs well for distributions with large coefficients of variation (<ref>). Therefore, we propose another variant of CARD for n servers called multi-band CARD. We first present the general dispatching rules, then explain how the multi-band rigid and flexible variants are defined.

We divide the job size range into n+1 intervals such that each interval amounts to a 1/n fraction of the total load, except for the first and last intervals, each of which amounts to a 1/(2n) fraction. Denote the endpoints of these intervals by 0, m_1, …, m_n, ∞. Each server i other than the last has a threshold c_i, which may differ across servers. When a job of size s arrives, it is dispatched according to the following general rules (see the sketch below):
* If s < m_1, it is dispatched to server 1.
* If s > m_n, it is dispatched to server n.
* If s ∈ [m_i, m_{i+1}) for i = 1, …, n-1, it is dispatched to server i if W_i ≤ c_i. Otherwise, it is dispatched to server i+1.

Multi-band rigid CARD numbers the servers 1 to n and dispatches according to the rules outlined above. Server numbers do not change under rigid CARD.
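A minimal sketch of these general rules in code (0-indexed; our own rendering, with fixed server labels, which makes this the rigid variant):

def multiband_dispatch(size, works, m, c):
    """Multi-band rigid CARD with n servers.

    m: band endpoints [m_1, ..., m_n]; c: work thresholds [c_1, ..., c_{n-1}].
    """
    n = len(works)
    if size < m[0]:
        return 0                          # sizes in [0, m_1) -> server 1
    if size >= m[n - 1]:
        return n - 1                      # sizes in [m_n, infinity) -> server n
    i = next(j for j in range(1, n) if m[j - 1] <= size < m[j])   # band i
    # sizes in [m_i, m_{i+1}): server i if its work is at most c_i, else server i+1
    return i - 1 if works[i - 1] <= c[i - 1] else i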
On the other hand, multi-band flexible CARD sorts the servers in increasing order of work when a job arrives, so that W_1 ≤ W_2 ≤ ⋯ ≤ W_n, then dispatches according to the general rules. Since all the m_i's are fixed for each distribution, the tunable parameters are the c_i's. Our experiments show that we achieve good performance by setting c_i = m_i/√ϵ.

As we can see in <ref>, the multi-band CARDs significantly outperform LWL and SITA-E at high loads and for job sizes with large coefficients of variation. When the job size distribution has cv = 10, at ρ = 0.98, mean response time under flexible CARD is ∼22% and ∼19% of the mean response times under LWL and SITA-E, respectively. When the job size distribution has cv = 100, at ρ = 0.98, mean response time under flexible CARD is ∼4% and ∼21% of the mean response times under LWL and SITA-E, respectively. Moreover, multi-band flexible CARD almost ties with Dice at all loads simulated.

§ DEFERRED PROOFS

§.§ Stability

As outlined in <ref>, we begin by showing that the short server is stable for any threshold c ≥ 0. Our main tool is a continuous-time Foster-Lyapunov theorem developed in <cit.>. A key component of the theorem is the infinitesimal generator of a Markov process. Let X(t) be a Markov process; its infinitesimal generator, 𝒜, is the operator defined by
𝒜V(x) = lim_{t↓0} (E[V(X(t)) | X(0) = x] - V(x))/t.
The domain of 𝒜 is all functions V for which the limit on the right exists for all x in the state space. Since the work at the short server, W_s(t), is a Markov process, for a function V with a left derivative we may explicitly derive the infinitesimal generator of W_s(t) under two-server CARD:
𝒜V(w_s) = -(1/2) V'(w_s) 1{w_s > 0} + λ p_s E_{S_s}[V(w_s + S_s) - V(w_s)] + 1{w_s ≤ c} λ p_m E_{S_m}[V(w_s + S_m) - V(w_s)],
where p_s = P[S < m_-], p_m = P[m_- ≤ S < m_+], E_{S_s}[·] is the expectation over the distribution of small jobs (i.e. (S | S < m_-)), E_{S_m}[·] is the expectation over the distribution of medium jobs (i.e. (S | m_- ≤ S < m_+)), 1{·} is the indicator function, and V' is the left derivative. (For general n, the service rate 1/2 becomes 1/n, and the small- and medium-job arrival streams to each short server are thinned by a factor of 1/(n-1).)

We now present the continuous-time Foster-Lyapunov theorem below for easy reference. Suppose that a Markov process Φ is a non-explosive right process. If there exist constants η, d > 0, a function f ≥ 1, a closed petite set C, and a function V ≥ 0 that is bounded on C such that for all x ∈ O_m and m ∈ ℤ,
𝒜_m V(x) ≤ -η f(x) + d 1_C(x),
then Φ is positive Harris recurrent. Here, O_m is a family of precompact sets that increases to the entire state space as m → ∞, and 𝒜_m is the generator for the truncated process restricted to O_m. This restriction is in place mainly to handle possibly explosive processes. Our process W(t) is not explosive. More importantly, the Lyapunov function V we consider in <ref> is increasing and differentiable. It follows that 𝒜_m V(x) ≤ 𝒜V(x) for all x. It therefore suffices for us to apply theorem <ref> with 𝒜V(x) instead.

*<ref> We first check that the preconditions of <ref> hold for W_s under the CARD policy for any ϵ > 0. W_s is obviously non-explosive. Let V(W_s) = W_s, C = {W_s : W_s ≤ c}, and f(w_s) ≡ 1. Then we have
𝒜V(W_s) = β 1{W_s ≤ c} - α 1{W_s > c}.
Since α > 0 for any ϵ > 0, positive Harris recurrence of W_s(t) follows from <ref> provided C is a closed petite set. We now check this. It follows from <cit.> that W_s(t) is non-evanescent. By <cit.>, the K_a-chain of W_s(t) is an irreducible T-process with an everywhere nontrivial continuous component. By <cit.>, C is a petite set. Given positive recurrence of W_s(t) and <cit.>, we conclude from <cit.> that W_s(t) is also ergodic.

*<ref> Define V(w_s) = (c - w_s)^+ and fix some θ > 0.
Since W_s has a stationary distribution, we can apply the rate conservation law <cit.> to e^θ V(W_s), which yieldsθ/n_πe^θ V(W_s)(V(W_s) < c)+λ_s.m_π,S_s,me^θ V(W_s+)-e^θ V(W_s)=0.Here, π is the stationary distribution of W_s and λ_s,m is the arrival rate into a short server from small and medium jobs, and V(W_s+) is the value of V(W_s) immediately after a job arrival of size S_s,m. Rearranging yieldsθ/n_πe^θ V(W_s) + λ_s,m_π,S_s,me^θ V(W_s+)-e^θ V(W_s)= θ/n_πe^θ V(W_s)(V(W_s) = c).We drop the RHS and work withθ/n_πe^θ V(W_s)+λ_s,m_π,S_s,me^θ V(W_s+)-e^θ V(W_s)≥0.We first analyze the second term on the LHS. Conditioning on a given state W_s, we have_S_s,m*e^θ V(W_s+)-e^θ V(W_s) W_s(a)=_S_s,m*e^θ (V(W_s)-S_s,m)^+-e^θ V(W_s) W_s= _S_s,m*e^θ (V(W_s)-S_s,m)^+-e^θ(V(W_s)-S_s,m)+e^θ(V(W_s)-S_s,m)-e^θ V(W_s) W_s(b)=_S_s,m*e^θ (V(W_s)-S_s,m)^+-e^θ(V(W_s)-S_s,m) W_s+e^θ V(W_s)_S_s,m*(e^-θ S_s,m-1)(c)=_S_s,m*(1-e^θ(V(W_s)-S_s,m)) (V(W_s)<S_s,m) W_s+e^θ V(W_s)(S_s,m(θ)-1),where [(a)] follows from V(W_s+)=(c-W_s-S_s,m)^+=(V(W_s)-S_s,m)^+,follows from the independence of the arriving job size and V(W_s) for any given W_s≤ c, andfollows from the definition of LST and the fact thate^θ (V(W_s)-S_s,m)^+-e^θ(V(W_s)-S_s,m)=0, if V(W_s)≥ S_s,m1-e^θ(V(W_s)-S_s,m),if V(W_s)<S_s,m./Taking expectation over π on both sides of (<ref>) and substituting into (<ref>) givesθ/n_πe^θ V(W_s) ≥λ_s,m_π*e^θ V(W_s)(1-S_s,m(θ))-λ_s,m_π,S_s,m*(1-e^θ(V(W_s)-S_s,m)) (V(W_s)<S_s,m)(a)=θ*n-1/n+(n-1)β_π*e^θ V(W_s)(S_s,m)_e(θ)- θ*n-1/n+(n-1)β_π,S_s,m*1-e^θ(V(W_s)-S_s,m)/θS_s,m(V(W_s)<S_s,m),24muwhere (a) follows from n-1/n+(n-1)β=λ_s,mS_s,m, which is the load of the short and medium jobs, and the fact that(S_s,m)_e(θ)=1-S_s,m(θ)/θS_s,m,which holds for a general job size distribution (with or without a density function). See e.g. . Since(1-e^-θ(S_s,m-V(W_s)))(V(W_s)<S_s,m)≤1-e^-θ S_s,m,we have_π,S_s,m*1-e^θ(V(W_s)-S_s,m)/θS_s,m(V(W_s)<S_s,m)≤(S_s,m)_e(θ).Since θ≥0 is chosen so that (S_s,m)_e(θ)>1/n(n-1)β+n-1, we have *n-1n+(n-1)β(S_s,m)_e(θ)-1n>0. Thus, we rearrange (<ref>) to obtain_πe^θ V(W_s)≤*n(n-1)β+n-1(S_s,m)_e(θ)/*n(n-1)β+n-1(S_s,m)_e(θ)-1.Markov's inequality then givesV(W_s) < x≤*n(n-1)β+n-1(S_s,m)_e(θ)/*n(n-1)β+n-1(S_s,m)_e(θ)-1e^-θ x,and the lemma follows. *<ref> Part (a) is a corollary of <ref>. For part (b), we first establish the result for n=2 servers, then show how it generalizes to n>2 servers. For n=2, we denote the state as W(t)=(W_s(t),W_ℓ(t)).To establish (b), we first applyto the pre-jump chain {W(T_n-)}, where T_n is the arrival time of the nth job. Conditions A1-A3 in Theorem 1 are fulfilled by <ref>. DefineL_2(w_ℓ)=λ w_ℓ, f(w_s)=-12+ρ_ℓ+ρ_m(w_s>c), h(x)=12e^-x.We now verify conditions B1 and B2 are met with above choices of L_2, f, and h. Condition B1:sup_w_s,w_ℓ|L_2(W_ℓ(T_1-))-L_2(w_ℓ)| W_ℓ(0-)=w_ℓ,W_s(0-)=w_s,Arrival at time 0≤1.Condition B2: Let π_s be the stationary distribution of W_s(t). By PASTA, π_s is also the stationary distribution of {W_s(T_n-)}. We have_π_sf(W_s) =-12+ρ_ℓ+ρ_mW_s>c(a)≤ -12+ρ_ℓ+ρ_m(ρ_m+ρ_s)-1/2+δ/2/ρ_m=-1+ρ+δ/2=-ϵ+δ/2<0,where (a) comes from <ref> and PASTA. We then compute[ The conditional probabilities below are a slight abuse of notation. 
They should be understood as referring to the probability measure induced by the pre-jump Markov chain starting from state (w_s, w_ℓ).]L_2(W_ℓ(T_1-))-L_2(w_ℓ)) W_ℓ(0-)=w_ℓ,W_s(0-)=w_s,Arrival at time 0= λ*∫_0^T_1--12(W_ℓ(t)>0)ṭ W_ℓ(0-)=w_ℓ,W_s(0-)=w_s,Arrival at time 0 +ρ_ℓ+ρ_m(w_s>c)= f(w_s)+λ/2**T_1-(w_ℓ+S)^+ W_ℓ(0-)=w_ℓ,W_s(0-)=w_s,Arrival at time 0≤ f(w_s)+λ/2**T_1-w_ℓ^+=f(w_s)+λ/21/λe^-λ w_ℓ=f(w_s)+h(L_2(w_ℓ)).It now follows from <cit.> that the embedded pre-jump chain {W(T_n-)} is positive Harris recurrent.Since {W(T_n-)} is positive Harris recurrent and easily seen to be {(0, 0)}-irreducible, the expected number of steps until returning to (0,0) is finite from any starting state. The time between steps is exponentially distributed with mean 1/λ, so we conclude from Wald's equation that the expected return time of the original process W(t) to state (0,0) is also finite. Positive Harris recurrence of W(t) immediately follows. We now generalize the above proof to n>2 servers. To begin with, we define a vector-valued process W_short servers(t)=(W_s_1(t),…,W_s_n-1(t)). Under multiserver CARD, W_short servers(t) has the following properties:W_short servers(t) is a Markov process of its own and is Harris ergodic.Since stationary distribution of the short servers are i.i.d., the stationary distribution of W_short servers is the product of stationary distributions of the short servers in isolation. / With these two properties in hand, the argument for n=2 servers as presented above works for n>2 servers with the same functions h, L_2, and the following f:f(w_short servers)=-1/n+ρ_ℓ+ρ_m∑_i=1^n-1(w_s_i>c). §.§ Response Time Analysis *<ref> As stated in the proof sketch, (W_s - cW_s > c) has the same distribution as an M/G/1 with vacations.The job size distribution is S_s = (SS < m_-). In particular, using the fact that S_s is stochastically dominated by m_+, one can show that (S_s)_e is stochastically dominated by a uniform distribution on [0, m_+].[ It is not in general true that S being dominated by R implies S_e is dominated by R_e. This is specific to the case that R is a deterministic constant.]The load is 1 - 2 α, and so the slackness is 2 α. * The reason we use 2 α instead of α is because the server operates at speed 1/2. By “doubling the clock speed”, the server speed becomes 1, and the distribution of (W_s - cW_s > c) is unaffected. This makes it easy to apply standard results about the M/G/1 with vacations.Let U denote the vacation length distribution. It is hard to characterize exactly, but because W_s - c ≤ m_+ at the start of an above period, U_e is stochastically dominated by a uniform distribution on [0, m_+]. / The desired bounds follow from the work decomposition formula for the M/G/1 with vacations <cit.>. Specifically, for an M/G/1 with vacations, we can write its steady-state work W_M/G/1/vac as an independent sum of random variables with distributions W_ and U_e. This meansW_s - cW_s > c = W_M/G/1/vac = W_ + U_e,(W_s - c)^2W_s > c = W_M/G/1/vac^2 = W_^2 + 2 W_U_e + U_e^2.Applying the PK formula with the relevant parameters, we obtainW_ = (1 - 2 α) (S_s)_e/2 α≤(1 - 2α) m_+/4 α,W_^2 ≤(3 - 4 α (2 - α)) m_+^2/24 α ^2.The result then follows from U_e≤m_+/2 and U_e^2≤m_+^3/3. 
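To make the closed-form pieces of this bound concrete, the following sketch evaluates them numerically. The function name and the sample values of α and m_+ are our own; the formulas are the displayed inequalities, and we read the final bound as 𝔼[U_e^2] ≤ m_+^2/3, the second moment of the dominating Uniform[0, m_+] distribution.

```python
# Numeric sanity check of the closed-form bounds above; the function name and
# sample parameter values are ours. We read the last bound as
# E[U_e^2] <= m_+^2 / 3, the second moment of the dominating Uniform[0, m_+].

def vacation_work_bounds(alpha, m_plus):
    """Bounds on E[W_s - c | W_s > c] and its second moment."""
    assert 0 < alpha < 0.5, "the slackness 2*alpha must lie in (0, 1)"
    ew_busy = (1 - 2 * alpha) * m_plus / (4 * alpha)                  # E[W_busy]
    ew2_busy = (3 - 4 * alpha * (2 - alpha)) * m_plus**2 / (24 * alpha**2)
    eu, eu2 = m_plus / 2, m_plus**2 / 3                               # E[U_e], E[U_e^2]
    return ew_busy + eu, ew2_busy + 2 * ew_busy * eu + eu2

for a in (0.05, 0.10, 0.25):
    print(a, vacation_work_bounds(a, m_plus=1.0))
```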
*<ref> The proof is very similar to that of <ref>, so we give only the key steps.Applying the Palm inversion formula <cit.> to W_ℓ(W_s>c) givesW_ℓ(W_s> c) = 1/A+B_c^0*∫_B^A+B W_ℓ(t) ṭ,where we can start the integral at B and remove the indicator because W_s(t) > c exactly during above periods, which corresponds to t ∈ [B, A+B). Expanding this using <ref> and noting the independence of W_ℓ(0) from the below-above cycle, we obtainW_ℓ(W_s> c) - A/A+B_c^0W_ℓ(0)= 1/A+B_c^0*∫_B^A+Bmax*-Δ_ℓ(0,t)+Σ_ℓ^m(0,t)+Σ_ℓ^ℓ(0,t), -W_ℓ(0)ṭ.Applying <ref> to the left-hand side, we see it suffices to give bounds on the right-hand side. The same reasoning as the proof <ref> yields[]W_ℓ(W_s> c) - q_A _c^0W_ℓ(0)≤1/A+B_c^0*∫_B^A+B t ṭ = (A + B)^2 - B^2/2 A + B.The result then follows from a computation similar to the end of the proof of <ref>. *<ref> Consider a tagged job arriving to the system. Recall from <ref> thatT_ - 2 S ≤ 2 (p_s + p_m) W_s + 2 p_m W_ℓ(W_s > c) + 2 p_ℓW_ℓ.We now bound the work expectations and probabilities in the last line.<ref> implies W_s≤ c + q_A W_s - cW_s > c≤ c + q_A m_+/α.<ref> imply, after some simplification,W_ℓ(W_s > c)≤ q_A W_ℓ + q_A m_+/α^2 + 4 q_A q_B c/β + √(2 q_A q_B m_+ c)/α√(β). <ref> bounds W_ℓ. / From these bounds and some simplification, using facts like p_s + p_m + p_ℓ = 1 and α≤ 1, we obtainT_ - 2 S ≤ 2 (p_ℓ + q_A) *1 + δ/ϵW_ + 4 q_A m_+/α^2 + m_+ √(q_A)/α√(ϵ) + 6 c + 8 q_A q_B c/β + 2 √(2 q_A q_B m_+ c)/α√(β) + 8 c √(δ)/α^2 βϵ.We now use <ref> to express as much as possible on the right-hand side in terms of α, β, δ, and ϵ. After some simplification, including using the preconditions of the theorem, we obtainT_ - 2 S ≤ 2 *p_ℓ + 2 β/α + β*1 + δ/ϵW_ + 8 m_+ β/α^2 (α + β) + m_+ √(2 β)/α√(ϵ (α + β)) + *12 m_+/β + 32 m_+ α/β (α + β)^2log3/2 βδ + 4 m_+/α + β√(2/αβlog3/2 βδ) + 16 m_+ √(δ)/α^2 β^2 ϵlog3/2 βδ.Finally, we observe that p_ℓ = S > m_+≤S > m and simplify further. §.§ Extension to Any Number of Servers*<ref>Fix a short server s_N. Notice that under multi-server CARD, W_s_N are i.i.d. Thus, the analysis applies to any short server. Let A and B be the above and below periods of W_s_N. Using a similar proof as that of <ref>, we obtain[]W_ℓ-_c^0W_ℓ(0)≤*√(q_A A_e) + √(q_B B_e)^2 ≤q_A m_+/2 α^2 + 4 q_B c/β.For any short server i, we haveq_A=W_s_N>c≤β+1/nδ/α + β≤2 β/α + βand q_B=W_s_N≤ c≤α/α + β.The proof is similar to that of <ref>. Note that q_A and q_B are the same for all short servers because W_s_N are i.i.d. in steady state. <ref> follow from the same arguments as the two-server case. We would like to obtain a counterpart of <ref>. To this end, we use a multi-server version of <ref>. Note that we haveI W_ =1/n∑_i=1^n-1(W_s_i=0)W_ℓ+1/n∑_i=1^n-1(W_ℓ=0)W_s_i+1/n∑_k≠ j(W_s_k=0)W_s_jWe bound these three terms separately.1/n∑_i=1^n-1(W_s_i=0)W_ℓ (a)=n-1/nW_ℓ(W_s_1=0)≤n-1/n*δ*W_ + q_A m_+/4 α^2 + 4 q_B c/β + 2 c √(2 δ)/β, 1/n∑_i=1^n-1(W_ℓ=0)W_s_i (b)=n-1/n(W_ℓ=0)W_s_1≤n-1/n*nϵ c+m_+ √(q_A nϵ)/2√(2)α, 1/n∑_k≠ j(W_s_k=0)W_s_j (c)≤(n-1)(n-2)/nW_s_k=0W_j≤(n-1)*c+m_+q_A/αδ,where (a), (b), and (c) all follow from the fact that W_s_1,…, W_s_n-1 are i.i.d. in steady state. Proof of the other bounds are similar to their counterparts in <ref>. <ref> givesW_ℓ≤W_ ≤[]1 + (n-1)δ/ϵW_ + n(n-1) c +√(n)(n-1) m_+ √(q_A)/2√(2)α√(ϵ) + 8/n c √(δ)/α^2 βϵ+n(n-1)/ϵ*c+m_+q_A/αδ.from <ref>. Since W_s_1,…, W_s_n-1 are i.i.d. 
in steady state, we have, by PASTA,A medium job joins a short server queue=W_s_1≤ c=q_BTherefore, using an argument similar to that for <ref>, we haveT_ - n S ≤ n (p_ℓ + q_A) *1 +(n-1) δ/ϵW_ + 2n q_A m_+/α^2 + n(n-1)√(n)m_+ √(q_A)/2√(2)α√(ϵ) + *n^2(n-1)+n c + 4n q_A q_B c/β + n √(2 q_A q_B m_+ c)/α√(β) + 8 c √(δ)/α^2 βϵ+n^2(n-1)*c+m_+q_A/αδ/ϵ.This can be further expanded using the bounds for q_A and q_B, as well as the expression of c.T_ - n S ≤ n *p_ℓ + 2β/β+α*1 +(n-1) δ/ϵW_+4nβ m_+/α^2(α+β)+n(n-1)m_+√(nβ)/2α√(ϵ(α+β)) +*n(n-1)(n^2(n-1)+n)m_+/β+8n^2(n-1)m_+α/β(α+β)^2logn+1/nβδ +2n√(n(n-1))m_+/α+β√(1/αβlogn+1/nβδ)+8n(n-1)m_+√(δ)/α^2β^2ϵlogn+1/nβδ +n^2(n-1)*n(n-1)m_+/βlogn+1/nβδ+2m_+β/α(α+β)δ/ϵ_𝒯At this point, we note that the upper bound for T_ - n S is, after letting n=2, the same as that in <ref>, except for 𝒯. Thus, settingα = Θ(1), β = Θ[]ϵ^1/3[]log1/ϵ^2/3, andW_s = 0≤δ = Θ(ϵ^3),and noting that 𝒯→0 as ϵ↓0, we conclude that the bound yields the same heavy-traffic scaling as that in <ref>.Finally, we note that K_ emerges becauselim_ϵ↓0np_ℓ=nS>m=K_. § ADDITIONAL SIMULATIONSOur additional simulations applies flexible CARD with three parameters to n=10 servers. We simulate 40 trials for each data point, with 10^7 arrivals per trial for 𝖼𝗏 = 1 and 𝖼𝗏 = 10 and 3×10^7 job arrivals per trial for 𝖼𝗏 = 100. We show 95% confidence intervals when wider than the marker size. <Ref> show that, for n=10 servers, flexible CARD has decent performance when the coefficient of variation is small. However, for large coefficients of variation, flexible CARD does not perform well, even if we use LWL to dispatch small and medium jobs among the short servers. Specifically, when cv=10, flexible CARD deviates from Dice, although still better than LWL and SITA-E. When cv=100, flexible CARD performs worse than SITA-E at high loads. The unsatisfactory performance of flexible CARD for n=10 servers motivates us to design multi-band CARD.
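For readers who want to reproduce experiments of this flavor, the sketch below is a minimal work-based simulation of a simplified CARD-style policy against LWL. It is not the simulator used above: the exponential job-size law, the thresholds, the unit server speeds, and least-work dispatch among short servers are all our own simplifications.

```python
import random

def simulate(dispatch, n=10, rho=0.98, n_jobs=200_000, seed=0):
    """Crude FCFS work simulation; returns the mean response time."""
    rng = random.Random(seed)
    lam = rho * n                      # E[S] = 1 and n unit-speed servers
    work = [0.0] * n
    total = 0.0
    for _ in range(n_jobs):
        dt = rng.expovariate(lam)      # Poisson arrivals
        work = [max(0.0, w - dt) for w in work]   # each server drains at speed 1
        s = rng.expovariate(1.0)       # stand-in job-size law (cv = 1)
        i = dispatch(work, s)
        total += work[i] + s           # FCFS: response = queued work + own size
        work[i] += s
    return total / n_jobs

def lwl(work, s):
    return min(range(len(work)), key=work.__getitem__)

def make_card(c, m_minus, m_plus):
    # Servers 0..n-2 are "short", server n-1 is "long". Small jobs go to the
    # least-loaded short server; medium jobs go there only if its work is at
    # most c, and otherwise to the long server; large jobs always go long.
    def card(work, s):
        short = work[:-1]
        i = min(range(len(short)), key=short.__getitem__)
        if s <= m_minus:
            return i
        if s < m_plus:
            return i if short[i] <= c else len(work) - 1
        return len(work) - 1
    return card

print("LWL :", simulate(lwl))
print("CARD:", simulate(make_card(c=2.0, m_minus=1.0, m_plus=3.0)))
```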
http://arxiv.org/abs/2312.16377v1
{ "authors": [ "Runhan Xie", "Isaac Grosof", "Ziv Scully" ], "categories": [ "cs.PF" ], "primary_category": "cs.PF", "published": "20231227021719", "title": "Heavy-Traffic Optimal Size- and State-Aware Dispatching" }
Compositional Zero-Shot Learning (CZSL) aims to transfer knowledge from seen state-object pairs to novel unseen pairs. In this process, visual bias caused by the diverse interrelationship of state-object combinations blurs their visual features, hindering the learning of distinguishable class prototypes. Prevailing methods concentrate on disentangling states and objects directly from visual features, disregarding potential enhancements that could arise from a data viewpoint. Experimentally, we unveil that the results caused by the above problem closely approximate a long-tailed distribution. As a solution, we transform CZSL into a proximate class imbalance problem. We mathematically deduce the role of the class prior within the long-tailed distribution in CZSL. Building upon this insight, we incorporate the visual bias caused by compositions into the classifier's training and inference by estimating it as a proximate class prior. This enhancement encourages the classifier to acquire more discernible class prototypes for each composition, thereby achieving more balanced predictions. Experimental results demonstrate that our approach elevates the model's performance to the state-of-the-art level, without introducing additional parameters. Our code is available at <https://github.com/LanchJL/ProLT-CZSL>.

§ INTRODUCTION

Objects in the world often exhibit diverse states of existence; an apple can be sliced or unripe, while a building can be ancient or huge. Humans have the ability to recognize the composition of the unseen based on their knowledge of seen elements. Even if people have never seen a green apple before, they can infer the characteristics of a green apple from a red apple and a green lemon. To empower machines with this capability, previous work <cit.> proposed Compositional Zero-Shot Learning (CZSL), a task that aims to identify unseen compositions from seen state-object compositions.

However, the combination of states and objects creates a visual bias for an attribute (state or object) within it, hindering the learning of distinguishable class prototypes. In the face of the above challenge, early approaches in the domain of CZSL can be categorized into two distinct methods. The first method utilized two independent classifiers to categorize states and objects <cit.>. The second method involved training a common embedding space into which semantic and visual features could be projected to reduce the distance between them <cit.>. Commonly, these studies concentrate on improving the structure of classifiers and investigating alternative architectures. However, minimal research has been conducted considering the problem in terms of data distribution.

We analyze the prior and posterior probabilities associated with attributes (states or objects) and compositions to determine a more suitable solution. Fig. <ref> illustrates that the class prior follows a distinct trend differing from the posterior probabilities. For instance, even though the model is trained on a comparable number of samples, it demonstrates a low probability of predicting the object labeled as O5. This issue also extends to making inferences about compositions, which reminds us of the long-tail distribution, or class imbalance <cit.>.
We consider that certain samples are infected by the intricate interplay between objects and states within compositions <cit.>, leading to significant bias from the ideal class prototype. Consequently, these samples with large visual bias make it difficult for the classifier to fit their intrinsic patterns, results in the inability to form effective classification boundaries. In contrast to class imbalance, we refer to this phenomenon as `attribute imbalance' below. The recent methods for CZSL <cit.> synchronize the prediction of visual features to states and objects with the prediction of compositions in the common embedding space, which works as a model ensemble approach. While this design addresses the capability to categorize some classes, the non-interaction among the independent classifiers may lead to incomplete mutual compensation due to potential information gaps. The identified shortcomings prompted a redesign of the model using the model ensemble approach. Building on the success of logit adjustment in addressing long-tail learning <cit.>, this study treats attribute imbalance information as special prior knowledge (In the following we denote by `attribute prior') that approximates the class prior. This attribute prior is derived from the estimation of available samples by two independent classifiers for states and objects. In other words, we construct this prior by modelling the visual bias of states and objects from samples. During the training phase, we incorporate it through logit adjustment into the common embedding space. This approach enables the production of balanced posterior probabilities regarding the poorly-classified classes in Fig. <ref>, thereby preventing each independent classifier from ineffectively reinforcing the ability to classify the well-classified classes. Specifically, we reconstructed the CZSL problem from the perspective of mutual information and adjusted the posterior values predicted by the model from the perspective of maximizing mutual information. In addition, we generalize the above attribute prior to the unseen class in order to optimize the lower bound of seen-unseen balanced accuracy <cit.> obtained by <cit.>. We refer to this method as the logit adjustment for Proximate Long-Tail Distribution (ProLT) in CZSL. Unlike previous methods, ProLT does not necessitate introducing additional parameters, yet it significantly enhances the overall CZSL model performance. Our contributions are summarized as follows:* In our study, we conduct an analysis of the data distribution in CZSL. We translate the visual bias in compositions into an attribute imbalance and thereby generalize CZSL to a proximate long-tail learning problem. * Our analysis involves a mathematical examination of both the training and inference phases of the model. This enables us to adapt the model's posterior probability based on the attribute prior.* Our model enhances the prediction of relationships in compositions without the need for introducing additional parameters. Experimental results on three benchmark datasets demonstrate the effectiveness of our approach.§ RELATED WORK Compositional Zero-Shot Learning (CZSL): Zero-Shot Learning (ZSL) transfers knowledge from seen classes to unseen ones by leveraging attributes <cit.>. 
CZSL <cit.> builds upon this foundation by incorporating the notion of composition learning <cit.>, with its extension primarily relying on the shared semantics of state and object within the composition of both seen and unseen classes.Initial CZSL methodologies directly classify states and objects, effectively converting the task into a conventional supervised assignment <cit.>. However, the fusion of state-object pairs led to visual bias in both elements, impeding the acquisition of discernible class prototypes. Numerous subsequent strategies utilize visual-semantic alignment within a common embedding space <cit.> to grasp the entwined nature of objects and states within compositions. However, this technique is susceptible to domain shift challenges. Recent methodologies typically amalgamate these two models, creating a framework of model ensembles. For instance, <cit.> enhances the model's adaptability to unseen classes by disentangling visual features and subsequently reconstituting them for novel classes. Meanwhile, <cit.> introduces conditional state generation to address visual alterations arising from object-state combination. ProLT aligns closely with this paradigm, although with a greater emphasis on direct inquiries into visual bias attributes. Long-Tailed Classification: Numerous studies address the issue of imbalanced class distributions, with one prominent approach being posterior modification methods <cit.>. Within ZSL, <cit.> regards it as an imbalanced challenge involving seen and unseen classes, and then applies regulatory techniques based on logit adjustment. However, this approach does not readily extend to the issue of attribute imbalance in our context. <cit.> considers the presence of visual bias in samples re-weighting within the optimization process, but its localization-based weighting strategy ignores the differences between classes. In this study, we introduce advanced logit adjustment strategies theoretically, aiming to enhance the equilibrium of predictions between various classes.§ METHODOLOGY§.§ Task Definition Considering the two disjoint sets 𝒴^S and 𝒴^U, , 𝒴^S ∩𝒴^U = Ø. CZSL aims to classify sample ∈𝒳 into a composition y=(s,o) ∈𝒴, where 𝒴 = 𝒴^S ∪𝒴^U, and samples from 𝒴^U are unseen during training. y is composed by state s∈𝒮 and object o ∈𝒪, 𝒮 and 𝒪 are sets of states and objects. Samples from 𝒴^S and 𝒴^U share the same objects o and states s, but their compositions (s,o) are different. Define the visual space 𝒳⊆ℝ^d_x and d_x is the dimension of the space, 𝒳 can be divided into 𝒳^S and 𝒳^U based on whether their samples belong to seen classes. We can define the train set as 𝒟_seen={(,y) | ∈𝒳^S, y ∈𝒴^S } and an unseen set for evaluation of methods which is 𝒟_unseen={(,y)| ∈𝒳^U,y∈𝒴^U}. We employ the Generalized ZSL setup defined in <cit.>, which requires both seen and unseen classes involves in testing. §.§ Empirical Analysis on Model Ensemble For the problem of approximate long-tailed distributions caused by visual bias in CZSL, ensemble-based methods have demonstrated exceptional performance in CZSL <cit.>. Typically, this approach combines the predictions of two models to produce the final prediction. The first model consists of two independent classifiers C_o and C_s for objects and states. The second model is a composition classifier C_y. 
The process of the model can be viewed as inputting the samples into three classifiers to estimate the posterior probabilities: p(s|) = softmax[C_s()], p(o|) = softmax[C_o()],p(y|) = softmax[C_y()],p̂(y|) =δ p(y|)+ (1-δ) [p(s|)+p(o|) ],where p(s|),p(o|) and p(y|) are posterior probability from classifiers, p̂(y|) is the final posterior probabilities. δ is a weight factor. 𝒞_y() denotes the logits for class y based on sample , and 𝒞_s(), 𝒞_o() are similarly defined.As demonstrated in Tab. <ref>, augmenting two additional posterior estimates p(o|) and p(s|) to p(y|) can significantly enhance CZSL results. However, only relying solely on p(o|) and p(s|) does not enable accurate estimation, this suggests the improvement in results is not due to the introduction of superior classifiers. Consequently, we can deduce the subsequent conjectures: The effectiveness of ensemble-based methods emanates from incorporating 𝒞_s and 𝒞_o, aiding in the classification of compositions that encounter a relative disadvantage within 𝒞_y. While attribute imbalances vary across states, objects, and compositions, all three elements might not simultaneously experience large visual bias for a particular class. Based on these preliminary studies, we can posit that effective classification of classes with large visual bias within common embedding spaces requires information compensation. In our study, we directly estimate visual bias as compensation described above. Considering that the visual bias generated by state-object combination is difficult to eliminate directly, we try to introduce it as an attribute prior into the training process from the classifier. In the following, we detail this process.§.§ From the Perspective of Mutual Information Let us first consider the problem from a simple CZSL approach based on common embedding spaces like <cit.>. The optimization objective of these methods can be viewed as the maximum likelihood: argmin_θ𝔼_(,y)∼𝒟_seen[-logp(y|)],where p(y|) is defined in Eq. <ref>, which denotes the distribution of compositions predicted by the model. θ denotes the model parameters.Given the characteristics of CZSL, where each sample is associated with two labels, s and o, there is a conditionality between the two in the setup of the dataset, , p(y)=p(s,o)=p(o)p(s|o),p(y) and p(o) denotes class prior of class y and object o, and p(s|o) is conditional class prior of s and o. Inspired by <cit.>,we look at the above issues through the perspective of mutual information <cit.>, we have: I(Y;X) ≈𝔼_yD_KL[p(y |) p(y)]= ∑_,yp(,y)logp(y|)/p(y)= ∑_,yp(,y)logp(y|)/p(o)p(s|o),where X and Y are discrete random variables corresponding to x and y, respectively, and S and O are similarly defined. D_KL represents the Kullback-Leibler divergence, while p(,y) represents the joint probability of the class y and the visual feature x. Due to the real posterior probability between y andis unknown, we use p(y|) as an approximation. We can interpret the optimization of maximum likelihood as follows, based on the posterior term inEq. <ref>, logp(y|)/p(o)p(s|o)∼𝒞_y(),which can be transfer to: logp(y|) ∼𝒞_y()+logp(o)p(s|o),here, 𝒞_y() represents the logits for class y, defined in Eq. <ref>, ∼ denotes approximately equal. The expression on the right-hand side is re-normalized using the softmax function, , 0.97!-logp(y|) ∼ log[1+∑_o_i≠ o∑_s_j ≠ sp(o_i)p(s_j|o_i)/p(o)p(s|o)e^𝒞_ŷ()- 𝒞_y() ]∼log[1+∑_o_i≠ o∑_s_j ≠ s (p(o_i)p(s_j|o_i)/p(o)p(s|o) )^ηe^𝒞_ŷ()- 𝒞_y() ],where ŷ=(s_i,o_i), and η is an adjustment factor. Eq. 
<ref> demonstrates that by incorporating the class prior p(s|o) and p(o) for state s and object o, we can optimize the model's mutual information. Consequently, we approach the CZSL problem from the perspective of mutual information. §.§ Estimating the Attribute PriorThe above idea comes from the logits adjustment <cit.> introduced to address class imbalance <cit.>, which demonstrate that the inclusion of a class prior enhances the maximization of mutual information, and we generalize it to CZSL task.As stated in Introduction, we undertake the transformation of CZSL into an approximate long-tailed distribution issue caused by visual bias from state-object combinations. Our argument centers on the proposition that attribute imbalance within CZSL contributes to an approximate form of class imbalance, since visual bias hinders reduces the distinguishability of some of the samples. Therefore, exclusive reliance on the class prior is inadequate. Building upon this rationale, we propose to use the attribute prior to assume the function of the class prior within the long-tailed distribution, serving as an approximation.We propose incorporating the model's conditional posterior probabilities as an approximation for this scenario. We continue to denote it as the `prior' due to its function as a prior probability during the training process, despite being computed using posterior probability. Since attribute imbalance cannot be directly quantified from the dataset, we simulate it by utilizing the posterior probability of the additional classifiers, forand its corresponding s,o, we have: p̂(s) = 𝔼_∼ p()[p(s|)], p̂(o) = 𝔼_∼ p()[p(o|)],where ∈𝒟_seen, p(s|) and p(o|) are defined in Eq. <ref>, which are posterior probabilities from 𝒞_s and 𝒞_o, we use their predicted expectations for all training samples as a special attribute prior. From this we can replace the class prior in Eq. <ref> with following item: k(s,o)= softmax[ σ(s,o)p̂(s)p̂(o) ],where σ(s,o) is a function used to model the conditional nature of the composition, , σ(s,o)={[ 1 (s,o) ∈𝒴^S∪𝒴^U,; 0 else.; ].From this we obtain the final objective function according to Eq. <ref>: ℒ_cls= log[1+∑_o_i≠ o∑_s_j ≠ s (k(s_j,o_i)/k(s,o) )^ηe^𝒞_ŷ()- 𝒞_y() ]. §.§ Logit Adjustment for Inference Due to the introduction of unseen classes in the inference phase we need to make additional adjustments. CZSL usually measures model performance in terms of 𝒜^H which denotes Harmonic Mean (HM) accuracy: 𝒜^H=2/(1/𝒜^S+1/𝒜^U), where 𝒜^S, 𝒜^U denote seen and unseen accuracy.<cit.> provides a lower bound of HM, below we briefly describe its conclusions. For HM's lower bound we have: 𝒜^H≥ 1/𝔼_𝐱∼ p(𝐱)|𝒴|p(𝒴)p(y|y∈𝒴)/q(𝒞_out=y|𝐱)p(y|𝐱),where q(𝒞_out=y|) represents the probability of predicting class y using our model. The set 𝒴 can be either 𝒴^S or 𝒴^U, p(y|y∈𝒴) represents the conditional class prior, and |𝒴|p(𝒴) can be seen as a hyper-parameter that quantifies the differences between seen and unseen classes. Considering that the gap between the domains of seen and unseen classes in CZSL is not significant, we can simply treat |𝒴|p(𝒴) as an ignorable constant in the following process.Finding the Bayesian optimum for 𝒜^H is difficult. However, it is possible to maximize its lower bound, which is equal to minimizing the upper bound of its inverse, , the denominator term of Eq. <ref> is minimized if:ỹ=argmax_y [𝒞_y()+η logp(y|y∈𝒴) ],where η is from Eq. <ref>, ỹ is the predicted label for sample . 
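Before instantiating the priors for seen and unseen classes below, we give a minimal sketch of this training-side adjustment: the loss above reduces to cross-entropy on logits shifted by η log k(s,o). The tensor names are ours, and we read the softmax defining k(s,o) as taken over valid compositions only.

```python
import torch
import torch.nn.functional as F

# Sketch of the adjusted training objective. `state_p` and `object_p` stand
# for the expectations p_hat(s), p_hat(o) estimated above; `valid` encodes
# sigma(s, o), i.e. which (state, object) pairs are real compositions. We
# read the softmax defining k(s, o) as taken over valid compositions only,
# and all tensor names are ours.

def attribute_prior(state_p, object_p, valid):
    scores = torch.log(state_p.unsqueeze(1) * object_p.unsqueeze(0) + 1e-12)
    scores = scores.masked_fill(~valid, float("-inf"))
    return F.softmax(scores.flatten(), dim=0)          # k(s, o) over all pairs

def adjusted_loss(pair_logits, target, k, eta=1.0):
    # the loss above is cross-entropy on logits shifted by eta * log k(s, o)
    return F.cross_entropy(pair_logits + eta * k.clamp_min(1e-12).log(), target)
```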
For conditional class prior p(y|y∈𝒴^S), which represents the true class frequency when y belong to seen classes. Following Eq. <ref>, we similarly replace the prior with the attribute prior estimate in Eq. <ref> here, which is:p(y | (s,o) ∈𝒴^S) := k(s,o),(s,o) ∈𝒴^S,and the attribute prior of unseen classes are not available to the model, we model it here using a combination of the estimation from Importance Sampling <cit.> with the attribute prior from seen samples, which can be denoted as:p(y|y∈𝒴^U):=k(s,o)+k̂_(s,o)/λ k(s,o),(s,o) ∈𝒴^U,where 1/λ is a hyper-parameter denotes the distribution of . The above results are re-transformed into probability distributions in the actual calculation. And k̂_(s,o) is instance-based conditional posterior probability:k̂_(s,o)=softmax [ σ(s,o)p(s|)p(o|) ],the aforementioned setup arises because during testing, we are unable to provide posterior probabilities k̂_(s,o) from multiple samples simultaneously. Furthermore, Importance Sampling results in significant variance when the number of samples is insufficient. To address this, we attempt to augment it by leveraging seen attribute prior. ProLT makes inferences during testing phase based on Eq. <ref>, our aim is to integrate local information during testing with the prior derived from seen classes, to address the disparities between seen and unseen classes. With Eq. <ref> ProLT theoretically achieves the best overall accuracy. §.§ Method OverviewThis section provides a concise summary of the aforementioned methods. Our approach, illustrated in Fig. <ref>, involves training two independent classifiers denoted as 𝒞_s and 𝒞_o. These classifiers are implemented using prototype learners, namely 𝒫_s and 𝒫_o, and visual embedders 𝒱_o, 𝒱_s, to determine the prototypes of states and objects, ,𝒞_s():=cos(𝒱_s(),𝒫_s(s))/τ,𝒞_o():=cos(𝒱_o(),𝒫_o(o))/τ,where τ is the temperature. These classifiers are trained with vanilla cross-entropy loss:ℒ_ic= log[1+∑_s'≠ se^𝒞_s'()-𝒞_s() ][1+∑_o'≠ oe^𝒞_o'()-𝒞_o() ].Once the classifiers reach a specific training stage, we calculate the attribute prior using Eq. <ref>, and employ the loss function ℒ_cls from Eq. <ref> for training the classifier 𝒞_y for compositions:𝒞_y():=cos(𝒱_y(),𝒫_y(y))/τ,where 𝒫_y is the prototype learner for compositions and 𝒱_y is a visual embedder. After training, the model uses Eq. <ref> for inference. § EXPERIMENTS§.§ DatasThere are numerous recent approaches to compositionality research, and three datasets have been primarily employed for evaluation: MIT-States <cit.>, UT-Zappos <cit.>, and C-GQA <cit.>. We utilized a standardized evaluation dataset for a reasonable comparison with previous methods.MIT-States presents a considerable challenge, consists of 53,753 images. It comprises 115 state classes, 245 object classes, and 1,962 compositions. In the total compositions, there are 1,262 seen compositions, and 700 compositions remain unseen. UT-Zappos is a collection of 50,025 images that focuses on various forms of footwear. It consists of 12 object classes and 16 state classes which is a fine-grained dataset, yielding 116 compositions, of which 83 are seen. C-GQA is introduced by <cit.>, which encompasses a wide variety of real-world common objects. It comprises 413 states, 674 objects, and over 27,000 images, along with more than 9,000 compositions, consisting of 5,592 seen and 1,932 unseen compositions. §.§ Evaluation Protocol The setting of GZSL <cit.> requires both seen and unseen compositions during testing. 
We report the best accuracy of seen classes (best seen), the unseen class (best unseen), and its harmonic accuracy (HM). In order to measure the performance on attribute learning, we report the best accuracy of states (best sta) and objects (best obj). Building upon the research of <cit.> and <cit.>, we calculate the Area Under the Curve (AUC) by comparing the accuracy on seen and unseen compositions with various bias terms. §.§ Implementation Details Below we present the details of the implementation of ProLT on ResNet-18 <cit.>. Visual Representations and Semantic: In line with prior methods, we employed ResNet-18 pre-trained on ImageNet <cit.> to extract 512-dimensional visual features from the images. For semantic information, we utilized GloVe <cit.> to extract attribute names as 300-dimensional word vectors. Implementations and Hyper-Parameters: For three prototype learner 𝒫_s,𝒫_o and 𝒫_y are GloVe connects withtwo Fully Connected (FC) layers with ReLU <cit.> following the first layer. And the three visual embedders 𝒱_s,𝒱_o, and 𝒱_y are also two FC layers with ReLUand Dropout <cit.>. All FCs embed the input features in 512 dimensions and the hidden layer is 1024 dimensions. The overall model is trained using the Adam optimizer <cit.> on NVIDIA GTX 2080Ti GPU, and it is implemented with PyTorch <cit.>. We set the learning rate as 5×10^-4 and the batchsize as 128. We train the 𝒞_s,𝒞_o and 𝒞_y with an early-stopping strategy, it needs about 400 epochs on MIT-States, 300 epochs on UT-Zappos and 400 epochs on C-GQA. For hyper-parameters, we set τ as 0.1,0.1,0.01, η as 1.0,1.0,1.0 and λ as 50,10,100 for MIT-States, UT-Zappos, and C-GQA, respectively.§.§ Compared with State-of-the-Arts ProLT is mainly compared with recent methods using fixed ResNet-18 as backbone with the same settings. We also compared ProLT with the CLIP-based approaches <cit.> after using CLIP <cit.> to learn visual and semantic embeddings. The comparison results are shown in Tab. <ref>.The results demonstrate that ProLT achieves a new state-of-the-art performance when using ResNet-18 as backbone on the MIT-States, UT-Zappos, and C-GQA. Specifically, our method achieves the highest AUC of 6.0% on MIT-States, surpassing CANet by 0.6%. On the UT-Zappos, we achieve the highest HM of 49.3%, outperforming CANet by 2.0%. Although ProLT has a slight disadvantage on the C-GQA dataset, it remains competitive with the state-of-the-art methods, achieving an HM of 14.4%. As for the CLIP-based approaches, ProLT has produced remarkable outcomes. Unlike DFSP, our method avoids the incorporation of extra self-attention or cross-attention mechanisms. Despite this, we excel across all three datasets, attaining an HM of 38.2% on MIT-States and 49.4% on UT-Zappos. These results underscore the compatibility of ProLT when combined with CLIP. §.§ Ablation Study In this section, we verify that each of these modules plays an active role by ablating each of its parts on UT-Zappos with ResNet-18. The results are shown in Tab. <ref> and Tab. <ref>.Attribute Prior versus Class Prior: As mentioned above, we use the attribute prior in place of the class prior due to the attribute imbalance. To further validate this, we replaced Eq. <ref> and Eq. <ref> using class prior, shown in Tab. <ref>. To make the results more robust, we tested two different prototype learners, , the GCN from the CGE <cit.> and the FC layers. The results in Tab. <ref> indicate that incorporating a class prior yields improvements over the baseline. 
We attribute this enhancement mainly to <cit.>, the class sizes of datasets are not solely identical. However, ProLT exhibits a substantial advantage over the other methods, which demonstrates the more dominant influence of potential attribute imbalances in CZSL. Effect of Components:We eliminate the effects of each component by adjusting the hyper-parameters η in Eq. <ref> and the attribute prior in Eq. <ref> to verify the role played by each component. In Tab. <ref>, we set η to 0 to convert Eq. <ref> to a vanilla cross entropy loss and the inference phase is converted to same as CGE. For p=0, we remove the attribute prior in inference phase. We also tested on both prototype learners. We can observe that each part of the ablation leads to a decrease in outcome, with η=0 being the most significant. This reflects the effectiveness of our method. §.§ Hyper-Parameter AnalysisOur method primarily comprises the subsequent hyper-parameters: 1) logit-adjusting factor (η), and 2) factor about the sample distribution (λ). We test on the UT-Zappos under various hyper-parameters based on ResNet-18, shown in Fig. <ref>. For η, the best AUC are observed when η=1.6, and the gap between seen and unseen begins to decrease as η increases. Concerning λ, the outcomes are documented within the interval λ∈ [1.0, 50.0] with increments about 5.0. The pinnacle value for the seen class is observed at 20.0, and 35.0 for unseen class. Overall, these hyper-parameter settings yield results characterized by minimal fluctuations, thus underscoring the robustness of our methodology. §.§ Qualitative Results Qualitative results for unseen compositions, accompanied by the top-3 predictions when we use ResNet-18 as backbone, are displayed in Fig. <ref>. Concerning MIT-States, we argue that certain erroneous predictions as partially justifiable. For instance, the phrase tiny dog, for which the model's incorrect predictions involve small dog and tiny animal, exhibits a high degree of semantic similarity. A similar phenomenon can be observed for the brown chair in C-GQA. For UT-Zappos, ProLT's limitation in fine-grained classification persists. An illustrative example is the outcomes for leather boot.M, our approach encounters challenges in making nuanced differentiations within the category of boots.§ CONCLUSIONThis paper presents from an experimental analysis aimed at revealing the concealed proximate long-tail distribution issue within CZSL. In our work, CZSL is transformed into an underlying proximate class imbalance problem, and the logit adjustment technique is employed to refine the posterior probability for individual classes. Diverging from conventional methods for handling long-tailed distributions, the introduced attribute prior is derived from the model's sample estimation of visual bias. Experimental results demonstrate that our approach attains state-of-the-art outcomes without necessitating the introduction of supplementary parameters.§ ACKNOWLEDGEMENTSThis work was supported by National Natural Science Foundation of China (NSFC) under the Grant No. 62371235.§ APPENDIX § SUPPLEMENTARY EXPERIMENTS AND DETAILS WITH VISION-LANGUAGE MODEL §.§ Implementation Details (Supplemental)In this section, we provide details of the setup of ProLT when using CLIP to learn visual and semantic embeddings. Visual Representations and Semantic: We employ the pretrained CLIP Vit-L/14 model as both our image and test encoder. 
Regarding semantics, we employ a learnable soft prompt [v1][v2][v3][state][object], following the approach of <cit.>, where [v1][v2][v3] represent the learnable content. To embed attributes such as state or object, we compute the average embedding value for each composition containing the corresponding state or object. Implementations and Hyper-Parameters: The three prototype learners, 𝒫_s,𝒫_o, and 𝒫_y, adhere to the configuration detailed in Sec. 4.3, except for the omission of GloVe <cit.>. Similarly, the three visual embedders, 𝒱_s,𝒱_o, and 𝒱_y, remain consistent with the specifications in Sec. 4.3. We train the entire model using the Adam optimizer <cit.> on two NVIDIA GTX 3090 GPUs, while configuring the batch size as 16. The other hyper-parameter configurations remain consistent with those in main text. §.§ Ablation Study with CLIPFollowing Sec. 4.5, we conducted an identical experiment on CLIP to validate the effectiveness of ProLT. As demonstrated in Tab. <ref>, we compare the outcomes on UT-Zappos <cit.> under three scenarios: without incorporating any priors but using a model-ensemble method, with the inclusion of class priors, and with the inclusion of attribute priors. Similarly, the results demonstrate the beneficial impact of incorporating the attribute prior. In comparison to the direct utilization of the class prior, our approach leads to a rise of 1.9% in AUC and 2.0% in HM.Moreover, we conduct a comparative analysis by removing the attribute prior during both the training and testing phases. Referencing Tab. <ref>, when η=0, indicating our methods is changed to a simple common embedding space method like <cit.>, which led to a significant drop in results. A significant enhancement is observed when these are combined, similar to the findings in Tab. 4. Collectively, the aforementioned experiments substantiate the favorable impact of ProLT on CLIP.§ ADDITIONAL EXPERIMENTS AND FURTHER INFORMATIONIn this section we add some detailed information from Sec. 4 as well as perform some additional experiments. All experiments are performed with ResNet-18 <cit.> as the backbone.§.§ Training Details Early Stopping:As mentioned in Sec. 3.6, ProLT requires that 𝒞_s and 𝒞_o be trained together first using ℒ_ic. In this process we simply employ an early stopping strategy on the validation set. We trained these module for a maximum of 50 epochs and use AUC for early-stopping. After 𝒞_s and 𝒞_o training is complete, it starts outputting attribute priors and co-training with 𝒞_y. This process we adopt the same early stopping strategy on the validation set. We set the maximum of 1000 epochs and also use AUC for early-stopping. Hyper-Parameter Selection:Hyper-parameter selection involves grid-search on the validation set. For architectural parameters, we explore the 1) hidden layer count for 𝒱_s, 𝒱_o, and 𝒱_y within the range 0,1,2, and 2) hidden layer count for 𝒫_s, 𝒫_o, and 𝒫_y within the same range. Concerning optimization, such as learning rate, we adopt the configuration from <cit.> without extensive modifications. For the remaining hyper-parameters, we search for η∈ [0.0,2.0] with a step of 0.2, λ∈ [5,50] with a step of 5 for UT-Zappos, and λ∈ [10,200] with a step of 10 for MIT-States and C-GQA. Additionally, we perform a search for τ∈ [0.02,0.2] with an increment of 0.02, encompassing the value τ={0.005,0.01}. For the choice of word embedding, we search the word embedding of 1) GloVe <cit.>, 2) Word2Vec <cit.>, 3) Fasttext <cit.>, 4) GloVe+Word2Vec and (5) Fasttext+Word2Vec. 
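A skeleton of this validation-set grid search follows, under the assumption that a routine exists to train one configuration and report its validation AUC; the stub below is ours, not part of the released code.

```python
from itertools import product

# Skeleton of the validation grid search described above. `val_auc` is a stub
# standing in for "train one configuration and report its validation AUC";
# it is not part of the released code.

def val_auc(eta, lam, tau):
    return 0.0  # placeholder: train with early stopping, return validation AUC

etas = [round(0.2 * i, 1) for i in range(11)]             # 0.0, 0.2, ..., 2.0
lams = list(range(5, 51, 5))                               # UT-Zappos range [5, 50]
taus = [0.005, 0.01] + [round(0.02 * i, 2) for i in range(1, 11)]

best = max(product(etas, lams, taus), key=lambda cfg: val_auc(*cfg))
print("best (eta, lambda, tau):", best)
```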
§.§ Further Experiments of Hyper-ParametersThis section delves into the analysis of the effects arising from the subsequent hyper-parameter configurations: 1) temperature τ, 2) hidden layer dimensions, and 3) word embeddings. Illustrated in Fig. <ref>, we present the outcomes achieved across various τ values on UT-Zappos. The peak AUC emerges at τ=0.14 and the difference between seen and unseen is minimized at τ=0.1. Likewise, we present outcomes utilizing diverse word embeddings on UT-Zappos, detailed in Tab. <ref>. ProLT excels when employing GloVe, yet generally, variations in word vectors exhibit minimal impact. Concerning the varying dimension configurations, we document the outcomes obtained using dimensions 256,512,1024,2048,4096 for the hidden layers in three classifiers, as indicated in Tab. <ref>. Notably, we discern that a hidden layer dimension of 1024 consistently yields optimal results. However, when employing dimensions of 2048 or 4096, we posit that the inferior performance could result from the propensity of higher-dimensional hidden layers to manifest overfitting on seen classes. §.§ Explanation of Importance SamplingEq. 16 employs Importance Sampling to estimate the attribute prior for unseen classes. The specifics of this approach are outlined in this section. During this procedure, we introduce an auxiliary proposal distribution to aid in creating an approximate estimation, , the distribution of the seen attribute prior k(s,o). Therefore, the estimation of the prior for unseen classes can be represented as follows: 1/n∑_i^np(_i)k̂_(s,o)/k(s,o).In Eq. 16, p(_i) is replaced by λ, which is a hyper-parameter. This is owing to the unavailability of direct access to the data distribution for the test set. As the posterior can be obtained only for individual samples during testing, we set n to 1 in practice. This approach yields significant variance due to an inadequate sample size. Consequently, we posit that it should be integrated with the another information.§.§ Further Ablation Study on InferenceIn this section, we validate the effectiveness of the approach on UT-Zappos that combining the seen attribute prior with Importance Sampling, as detailed in Sec. 3.5. Tab. <ref> presents the outcomes where we nullify the probability estimated by Importance Sampling via setting λ→∞ in Eq. 16, and the results when we set the seen attribute prior to 0. It is worth noting that with seen attribute prior set to 0, we generalize the probability of Importance Sampling to the seen class for consistency. We conducted experiments on two embedding functions to ensure robustness following Tab. 3. From the table, we can observe that the introduction of the two respectively brings about an improvement in AUC, HM relative to the baseline, while it is not significant in the rest of the metrics. In addition, the combination of the two usually creates complementarities, suggesting that they are not mutually exclusive.§.§ Why Our Method WorksAs shown in Fig. <ref>, we display the adjusted posterior p(y|) for the same compositions in Fig. 1. It is evident that ProLT yields a more balanced distribution stemming from 𝒞_y. Furthermore, it becomes apparent that 𝒞_y exhibits a preference for compositions characterized by significant visual bias in Fig. 1. But the incorporation of the prior, as defined in Eq. 
14, can mitigate this distinction. This aspect reflects ProLT: enhancing the model's grasp of fundamental visual-semantic relationships during the training phase through the maximization of mutual information. Utilizing posterior probability adjustment, ProLT achieves classification by harmonizing both a prior and a posterior during the inference phase.
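As a concluding illustration, the sketch below assembles the adjusted inference rule discussed above from the seen attribute prior and the instance-level estimate. Tensor names are ours, and normalizing within the seen and unseen groups separately is one reasonable reading of re-transforming the results into probability distributions.

```python
import torch

# Sketch of the adjusted inference rule assembled from the pieces above.
# `logits` are the composition logits C_y(x); `k_seen` is the attribute prior
# k(s, o); `k_inst` is the instance-level estimate from p(s|x)p(o|x); `seen`
# marks seen compositions. Normalizing within the seen and unseen groups
# separately is one reasonable reading of the re-normalization step.

def predict(logits, k_seen, k_inst, seen, eta=1.0, lam=50.0):
    prior = torch.where(seen, k_seen, (k_seen + k_inst) / (lam * k_seen))
    for group in (seen, ~seen):
        prior = torch.where(group, prior / prior[group].sum(), prior)
    return torch.argmax(logits + eta * prior.clamp_min(1e-12).log())
```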
http://arxiv.org/abs/2312.15923v1
{ "authors": [ "Chenyi Jiang", "Haofeng Zhang" ], "categories": [ "cs.CV", "cs.AI" ], "primary_category": "cs.CV", "published": "20231226073502", "title": "Revealing the Proximate Long-Tail Distribution in Compositional Zero-Shot Learning" }
§ ABSTRACT

The application of deep learning techniques to aroma-chemicals has resulted in models more accurate than human experts at predicting olfactory qualities. However, public research in this domain has been limited to predicting the qualities of single molecules, whereas in industry applications, perfumers and food scientists are often concerned with blends of many odorants. In this paper, we apply both existing and novel approaches to a dataset we gathered consisting of labeled pairs of molecules. We present a publicly available model capable of generating accurate predictions for the non-linear qualities arising from blends of aroma-chemicals.

§ INTRODUCTION

Although recent breakthroughs in predicting odor labels of molecules have allowed researchers to gain a deeper understanding of how scent and molecular structure are related <cit.>, the non-linear relationships occurring in mixtures of aroma-chemicals have yet to be untangled. In real-world applications, perfumers and food scientists care about the cumulative aromas of blends of molecules in addition to just the notes of isolated molecules. When multiple ingredients are combined, unexpected qualities may emerge, and notes present in the individual aroma-chemicals may become muted or unnoticeable in the blend <cit.>.

Prior to the application of graph neural networks (GNNs) <cit.> to odor prediction, researchers used featurizations of aroma-chemicals based on specific molecular structures, like aromaticity and the presence of certain functional groups. These approaches achieved decent success on benchmarks like the DREAM Olfactory Challenge <cit.>; however, the adoption of GNNs in this domain led to significant improvements in the predictive power of contemporary models <cit.>. Instead of hand-engineered featurizations, these models used backpropagation to train the hidden layers of neural networks. The models generated embeddings for each node in the molecular graph, which could be combined to generate vector representations for the graph as a whole.

We adapt these deep learning methods and also apply new techniques to generate embeddings for blends of aroma-chemicals. Because the majority of research applying graph neural networks to chemistry remains proprietary, we tested a variety of different architectures in order to produce usable results.

§ METHODS

§.§ Dataset

To generate a dataset of aroma-chemical blends, molecular structures (SMILES) and odorant labels were gathered from the Good Scents online chemical repository <cit.>. Although the website listed only ~3.5k molecules, each aroma-chemical's page contained recommended blenders, which, when combined, produced specific aromas. The molecule pages contained many (50+) blender recommendations, so we were able to gather over 160k molecule-pair data points. The dataset contains discrete labels for the presence or absence of 109 olfactory notes.[There was no data available for relative concentrations in the blends.] The collection of all molecule pairs in the database forms a meta-graph in which each node is itself a molecular graph, with edges between nodes if there are odor labels for the blended pair of molecules.
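A minimal sketch of this meta-graph follows, with toy records standing in for real dataset rows; the SMILES strings and labels below are illustrative only.

```python
import networkx as nx

# Toy construction of the blend meta-graph: nodes are molecules (SMILES
# strings here), and an edge carries the odor labels reported for that
# blended pair. The two records below are illustrative, not dataset rows.

pairs = [
    ("CCO", "CC(=O)C", {"fruity", "ethereal"}),
    ("CCO", "CCCCO", {"alcoholic"}),
]

meta = nx.Graph()
for mol_a, mol_b, labels in pairs:
    meta.add_edge(mol_a, mol_b, labels=labels)

print(meta.number_of_nodes(), "molecules;", meta.number_of_edges(), "blend pairs")
```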
In order to ensure train/test separation, the meta-graph was carved into two components with the following requirements: each component must contain blended pair data points covering every label to prevent distributional shift between the training and testing datasets; also, in order to maximize the amount of usable data, the number of edges between the components (known as the edge-boundary degree) should be minimized, as these data points must be thrown out to ensure train/test separation. Optimally minimizing the edge-boundary degree is NP hard in the general case<cit.>, though previous work shows that certain special cases can be solved in polynomial time.In order to generate a "good enough" carving, we used a randomized carving algorithm, assigning pairs to the train and test components randomly and proportionally (80:20 split), as long as the assigned pair did not connect the two components. This resulted in a final dataset with 115,939 training pairs and 3,404 test pairs. Unfortunately, this meant that -.9ex~47k data points had to be discarded to satisfy the separation requirements. While the dataset contained 109 odor labels, only 33 labels appeared frequently enough (1k training pairs) to be included. §.§ ArchitectureA variety of model architectures were tested using a random hyperparameter search. For elaboration on the full hyperparameter space, see the [sec:failed]“Hyperparameter Optimization” section in the appendix. The best architecture was structured as follows:The Graph Isomorphism Network (GIN)<cit.>, built in PyTorch Geometric<cit.>, was selected as the GNN architecture. The GIN was used for three message passing steps, and in order to allow weight-tying between these steps, the initial Open Graph Benchmark<cit.> atomic encodings were padded to the hidden layer dimension (D=832). We used a two layer feedforward neural network, built in PyTorch<cit.>, as the update function.The GIN generated embeddings for every atom/node in both molecules across the pair. The node-embeddings were combined using global mean and add pools concatenated together to generate graph-level embeddings, for each molecule in the pair.The graph-level embeddings for the two molecules were then furthered concatenated (in arbitrary order) and passed through another two-layer feedforward network to generate the pair-level embedding (also of D=832).[This arbitrary ordering of graph-level embeddings worked well for the molecule-pair task, as molecules tended to appear evenly between the first and second position in the pair as a by-product of the graph-carving, but for blends of 3+ molecules, more advanced techniques, like a Set2Set<cit.> model would be needed to combine the graph-level embeddings.] Logits for all 33 odor labels were predicted linearly from the pair-level embeddings. We used a binary cross-entropy loss for the predicted labels with respect to their true values.The model was scheduled for 250 epochs, but it was terminated after 121 epochs using early-stopping (patience=0). We used the Adam optimizer (lr=2.1e-5) with a decaying learning rate (decay=0.08) across the first 90% of training epochs. § RESULTSTo compare the various models, we used mean the area under the receiver operating characteristic curve (AUROC) across all labels. After 100 hyperparameter trials, the strongest model achieved a mean AUROC of 0.80. For context, a naive 0-R model using the mean frequency of each label across all molecules as a constant prediction achieves an AUROC of 0.5 for every label, by definition. 
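Before examining per-label results, we give a sketch of the architecture selected above. It follows the stated design (weight-tied GIN steps, atom encodings padded to the hidden width, concatenated mean and add pooling, and a pair-level feedforward head), while details the text leaves open, such as dropout placement, are omitted or simplified.

```python
import torch
import torch.nn.functional as F
from torch import nn
from torch_geometric.nn import GINConv, global_add_pool, global_mean_pool

D = 832  # hidden width reported above

class PairModel(nn.Module):
    """Sketch of the selected architecture; unstated details are simplified."""

    def __init__(self, atom_dim: int, num_labels: int = 33, steps: int = 3):
        super().__init__()
        assert atom_dim <= D
        # one GIN step with a 2-layer MLP update, weight-tied across `steps`
        self.conv = GINConv(nn.Sequential(nn.Linear(D, D), nn.ReLU(), nn.Linear(D, D)))
        self.steps = steps
        self.pair_mlp = nn.Sequential(nn.Linear(4 * D, D), nn.ReLU(), nn.Linear(D, D))
        self.head = nn.Linear(D, num_labels)

    def embed(self, x, edge_index, batch):
        h = F.pad(x, (0, D - x.size(-1)))        # pad atom encodings to width D
        for _ in range(self.steps):
            h = self.conv(h, edge_index)
        # graph embedding: concatenated global mean and add pools (2D dims)
        return torch.cat([global_mean_pool(h, batch), global_add_pool(h, batch)], -1)

    def forward(self, mol_a, mol_b):
        # each molecule is a (x, edge_index, batch) triple, in arbitrary order
        pair = torch.cat([self.embed(*mol_a), self.embed(*mol_b)], -1)
        return self.head(self.pair_mlp(pair))    # logits for BCEWithLogitsLoss
```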
Our model performs well for many labels, but barely above random for others. As a baseline, we generated 2048 bit Morgan fingerprints (radius = 4) for each molecule and then concatenated the pairs of fingerprints in order to fit a logistic regression model, using scikit-learn<cit.>, to predict the odor labels.The easiest label to predict was alliaceous (garlic) reflecting previous work which suggested that this note simply correlated with the presence of sulfur in the molecule. Unlike in this previous work<cit.>, our model accurately predicted the label musk (AUROC=0.92), which occurs across many different structural classes of molecules. Direct comparison between benchmarks are not straightforward, as previous work predicted continuous ratings for odor, and our dataset contains discrete labels. Regardless, the hardest label for our model to predict was earthy. Further research is needed to understand why different labels are easier or harder to predict depending on model architecture. §.§ Transfer LearningWe also measured the performance of our model on a transfer learning task where the odor labels of single molecules are predicted. To do this, we generated graph-level embeddings for the 2,362 training and 393 test molecules that were assigned to the respective components. From there, we trained a logistic regression classifier to predict the same 33 odor labels from these graph-level embeddings. As above, a 0-R baseline with AUROC = 0.5 was fit for each label and the same Morgan fingerprints model was adapted to the single molecule task. Alliaceous remained easy to predict for the model, but surprisingly, musk was the easiest to predict out of all the labels. In our dataset, musky molecules were often used together in blends; we hypothesize the meta-graph structure produced similar embeddings in our model for paired molecules, regardless of structural class. This provided an advantage over previous work, where models had to discover the similarities between musky molecules from molecular structure alone, across a number classes.The significant improvement of the Morgan fingerprint model on the single molecule prediction task, as compared to the blended pair task suggests that the former task is much harder than the latter. Our model transfers quite well and still outperforms the molecular fingerprint model, overall.§ CONCLUSIONBy applying deep-learning techniques to a novel dataset, we trained a model capable of accurately predicting the non-linear olfactory qualities of aroma-chemical blends. Our GNN model generates molecular embeddings that are also useful for tasks on single aroma-chemicals, and is available on https://github.com/laurahsisson/odor-pairGitHub.In our opinion, the ultimate research goal in this domain is to produce a model capable of predicting continuous labels for blends of many aroma-chemicals at varying concentrations. This mirrors the real-life work done by food scientists and perfumers.Well-labeled public olfactory datasets that would enable this research remain scarce even for the single molecule case. Though fragrance companies likely have extensive libraries of blend recipes, these datasets remain proprietary. Novel techniques must be applied in to build stronger models in the face of this data-scarcity, and new approaches for dataset augmentation should be explored. Our work stands as a proof of concept for further research in this domain. §.§ AcknowledgementsWe thank Dr. Andreas Keller and Dr. 
Ritesh Kumar for mentorship and technical guidance.

§ APPENDIX

§.§ Note Canonicalization

For labels that are difficult to predict because of their appearance across multiple structural classes, researchers and perfumers may benefit from using different labels specific to each structural class. Musk is one such label, and though floral musk and soft musk are frequently used to distinguish between different kinds of musky odors, difficulty arises when musk is used directly as a note instead of as a family of notes. Future work could determine the feasibility of splitting this note/family apart into a number of distinct odor words. Researchers could task a panel of experts to determine whether two molecules, both labelled musk, come from the same or different structural classes. If musks from different classes are easily separable, then new descriptive words are called for.

In the same vein, our paper uses 33 labels out of a set of 109 descriptors, simply based on availability. There is no agreed-upon canonical set of odor descriptors, and previous works have used sets of 138 labels <cit.>, 131 labels <cit.>, and as few as 19 labels <cit.>. Though it is possible to predict ratings on one set of labels from another <cit.>, a canonical set would allow direct comparison between different approaches.

§.§ Hyperparameter Optimization

The full hyperparameter search space was as follows:

Hyperparameter optimization is a challenging task, but research papers rarely provide the final hyperparameters used to train their models, and even fewer fully enumerate their search space. Much of the hyperparameter optimization procedure used here was adapted from “Neural Message Passing for Quantum Chemistry” <cit.>, in which researchers built a GNN model for predicting quantum properties of organic molecules. These hyperparameters transferred well to our dissimilar task/GNN architecture, and are likely useful across disparate deep-learning chemistry domains.
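Finally, a sketch of the transfer-learning probe from the Results section: freeze the trained pair model, extract graph-level embeddings, and fit one logistic regression per label. The random arrays below stand in for real embeddings and labels; only the shapes (2,362 train molecules, 393 test molecules, 2×832-dimensional embeddings, 33 labels) follow the text.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Sketch of the single-molecule transfer probe: freeze the trained GNN, take
# graph-level embeddings, and fit one logistic regression per odor label.
# Random arrays stand in for real embeddings and labels.

rng = np.random.default_rng(0)
E_train, E_test = rng.normal(size=(2362, 1664)), rng.normal(size=(393, 1664))
y_train, y_test = rng.integers(0, 2, (2362, 33)), rng.integers(0, 2, (393, 33))

aucs = []
for j in range(33):
    clf = LogisticRegression(max_iter=1000).fit(E_train, y_train[:, j])
    aucs.append(roc_auc_score(y_test[:, j], clf.predict_proba(E_test)[:, 1]))
print("mean AUROC:", float(np.mean(aucs)))
```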
http://arxiv.org/abs/2312.16124v1
{ "authors": [ "Laura Sisson" ], "categories": [ "cs.LG", "physics.chem-ph", "q-bio.QM" ], "primary_category": "cs.LG", "published": "20231226171809", "title": "Olfactory Label Prediction on aroma-chemical Pairs" }
Observation of χ_cJ→ 3(K^+K^-)M. Ablikim^1, M. N. Achasov^4,c, P. Adlarson^75, O. Afedulidis^3, X. C. Ai^80, R. Aliberti^35, A. Amoroso^74A,74C, Q. An^71,58,a, Y. Bai^57, O. Bakina^36, I. Balossino^29A, Y. Ban^46,h, H.-R. Bao^63, V. Batozskaya^1,44, K. Begzsuren^32, N. Berger^35, M. Berlowski^44, M. Bertani^28A, D. Bettoni^29A, F. Bianchi^74A,74C, E. Bianco^74A,74C, A. Bortone^74A,74C, I. Boyko^36, R. A. Briere^5, A. Brueggemann^68, H. Cai^76, X. Cai^1,58, A. Calcaterra^28A, G. F. Cao^1,63, N. Cao^1,63, S. A. Cetin^62A, J. F. Chang^1,58, G. R. Che^43, G. Chelkov^36,b, C. Chen^43, C. H. Chen^9, Chao Chen^55, G. Chen^1, H. S. Chen^1,63, H. Y. Chen^20, M. L. Chen^1,58,63, S. J. Chen^42, S. L. Chen^45, S. M. Chen^61, T. Chen^1,63, X. R. Chen^31,63, X. T. Chen^1,63, Y. B. Chen^1,58, Y. Q. Chen^34, Z. J. Chen^25,i, Z. Y. Chen^1,63, S. K. Choi^10A, G. Cibinetto^29A, F. Cossio^74C, J. J. Cui^50, H. L. Dai^1,58, J. P. Dai^78, A. Dbeyssi^18, R.  E. de Boer^3, D. Dedovich^36, C. Q. Deng^72, Z. Y. Deng^1, A. Denig^35, I. Denysenko^36, M. Destefanis^74A,74C, F. De Mori^74A,74C, B. Ding^66,1, X. X. Ding^46,h, Y. Ding^34, Y. Ding^40, J. Dong^1,58, L. Y. Dong^1,63, M. Y. Dong^1,58,63, X. Dong^76, M. C. Du^1, S. X. Du^80, Z. H. Duan^42, P. Egorov^36,b, Y. H. Fan^45, J. Fang^59, J. Fang^1,58, S. S. Fang^1,63, W. X. Fang^1, Y. Fang^1, Y. Q. Fang^1,58, R. Farinelli^29A, L. Fava^74B,74C, F. Feldbauer^3, G. Felici^28A, C. Q. Feng^71,58, J. H. Feng^59, Y. T. Feng^71,58, M. Fritsch^3, C. D. Fu^1, J. L. Fu^63, Y. W. Fu^1,63, H. Gao^63, X. B. Gao^41, Y. N. Gao^46,h, Yang Gao^71,58, S. Garbolino^74C, I. Garzia^29A,29B, L. Ge^80, P. T. Ge^76, Z. W. Ge^42, C. Geng^59, E. M. Gersabeck^67, A. Gilman^69, K. Goetzen^13, L. Gong^40, W. X. Gong^1,58, W. Gradl^35, S. Gramigna^29A,29B, M. Greco^74A,74C, M. H. Gu^1,58, Y. T. Gu^15, C. Y. Guan^1,63, Z. L. Guan^22, A. Q. Guo^31,63, L. B. Guo^41, M. J. Guo^50, R. P. Guo^49, Y. P. Guo^12,g, A. Guskov^36,b, J. Gutierrez^27, K. L. Han^63, T. T. Han^1, X. Q. Hao^19, F. A. Harris^65, K. K. He^55, K. L. He^1,63, F. H. Heinsius^3, C. H. Heinz^35, Y. K. Heng^1,58,63, C. Herold^60, T. Holtmann^3, P. C. Hong^34, G. Y. Hou^1,63, X. T. Hou^1,63, Y. R. Hou^63, Z. L. Hou^1, B. Y. Hu^59, H. M. Hu^1,63, J. F. Hu^56,j, S. L. Hu^12,g, T. Hu^1,58,63, Y. Hu^1, G. S. Huang^71,58, K. X. Huang^59, L. Q. Huang^31,63, X. T. Huang^50, Y. P. Huang^1, T. Hussain^73, F. Hölzken^3, N Hüsken^27,35, N. in der Wiesche^68, J. Jackson^27, S. Janchiv^32, J. H. Jeong^10A, Q. Ji^1, Q. P. Ji^19, W. Ji^1,63, X. B. Ji^1,63, X. L. Ji^1,58, Y. Y. Ji^50, X. Q. Jia^50, Z. K. Jia^71,58, D. Jiang^1,63, H. B. Jiang^76, P. C. Jiang^46,h, S. S. Jiang^39, T. J. Jiang^16, X. S. Jiang^1,58,63, Y. Jiang^63, J. B. Jiao^50, J. K. Jiao^34, Z. Jiao^23, S. Jin^42, Y. Jin^66, M. Q. Jing^1,63, X. M. Jing^63, T. Johansson^75, S. Kabana^33, N. Kalantar-Nayestanaki^64, X. L. Kang^9, X. S. Kang^40, M. Kavatsyuk^64, B. C. Ke^80, V. Khachatryan^27, A. Khoukaz^68, R. Kiuchi^1, O. B. Kolcu^62A, B. Kopf^3, M. Kuessner^3, X. Kui^1,63, N.  Kumar^26, A. Kupsc^44,75, W. Kühn^37, J. J. Lane^67, P.  Larin^18, L. Lavezzi^74A,74C, T. T. Lei^71,58, Z. H. Lei^71,58, M. Lellmann^35, T. Lenz^35, C. Li^43, C. Li^47, C. H. Li^39, Cheng Li^71,58, D. M. Li^80, F. Li^1,58, G. Li^1, H. B. Li^1,63, H. J. Li^19, H. N. Li^56,j, Hui Li^43, J. R. Li^61, J. S. Li^59, Ke Li^1, L. J Li^1,63, L. K. Li^1, Lei Li^48, M. H. Li^43, P. R. Li^38,l, Q. M. Li^1,63, Q. X. Li^50, R. Li^17,31, S. X. Li^12, T.  Li^50, W. D. Li^1,63, W. G. Li^1,a, X. Li^1,63, X. H. Li^71,58, X. L. 
Li^50, X. Z. Li^59, Xiaoyu Li^1,63, Y. G. Li^46,h, Z. J. Li^59, Z. X. Li^15, C. Liang^42, H. Liang^71,58, H. Liang^1,63, Y. F. Liang^54, Y. T. Liang^31,63, G. R. Liao^14, L. Z. Liao^50, J. Libby^26, A.  Limphirat^60, C. C. Lin^55, D. X. Lin^31,63, T. Lin^1, B. J. Liu^1, B. X. Liu^76, C. Liu^34, C. X. Liu^1, F. H. Liu^53, Fang Liu^1, Feng Liu^6, G. M. Liu^56,j, H. Liu^38,k,l, H. B. Liu^15, H. M. Liu^1,63, Huanhuan Liu^1, Huihui Liu^21, J. B. Liu^71,58, J. Y. Liu^1,63, K. Liu^38,k,l, K. Y. Liu^40, Ke Liu^22, L. Liu^71,58, L. C. Liu^43, Lu Liu^43, M. H. Liu^12,g, P. L. Liu^1, Q. Liu^63, S. B. Liu^71,58, T. Liu^12,g, W. K. Liu^43, W. M. Liu^71,58, X. Liu^38,k,l, X. Liu^39, Y. Liu^80, Y. Liu^38,k,l, Y. B. Liu^43, Z. A. Liu^1,58,63, Z. D. Liu^9, Z. Q. Liu^50, X. C. Lou^1,58,63, F. X. Lu^59, H. J. Lu^23, J. G. Lu^1,58, X. L. Lu^1, Y. Lu^7, Y. P. Lu^1,58, Z. H. Lu^1,63, C. L. Luo^41, M. X. Luo^79, T. Luo^12,g, X. L. Luo^1,58, X. R. Lyu^63, Y. F. Lyu^43, F. C. Ma^40, H. Ma^78, H. L. Ma^1, J. L. Ma^1,63, L. L. Ma^50, M. M. Ma^1,63, Q. M. Ma^1, R. Q. Ma^1,63, X. T. Ma^1,63, X. Y. Ma^1,58, Y. Ma^46,h, Y. M. Ma^31, F. E. Maas^18, M. Maggiora^74A,74C, S. Malde^69, Y. J. Mao^46,h, Z. P. Mao^1, S. Marcello^74A,74C, Z. X. Meng^66, J. G. Messchendorp^13,64, G. Mezzadri^29A, H. Miao^1,63, T. J. Min^42, R. E. Mitchell^27, X. H. Mo^1,58,63, B. Moses^27, N. Yu. Muchnoi^4,c, J. Muskalla^35, Y. Nefedov^36, F. Nerling^18,e, L. S. Nie^20, I. B. Nikolaev^4,c, Z. Ning^1,58, S. Nisar^11,m, Q. L. Niu^38,k,l, W. D. Niu^55, Y. Niu ^50, S. L. Olsen^63, Q. Ouyang^1,58,63, S. Pacetti^28B,28C, X. Pan^55, Y. Pan^57, A.  Pathak^34, P. Patteri^28A, Y. P. Pei^71,58, M. Pelizaeus^3, H. P. Peng^71,58, Y. Y. Peng^38,k,l, K. Peters^13,e, J. L. Ping^41, R. G. Ping^1,63, S. Plura^35, V. Prasad^33, F. Z. Qi^1, H. Qi^71,58, H. R. Qi^61, M. Qi^42, T. Y. Qi^12,g, S. Qian^1,58, W. B. Qian^63, C. F. Qiao^63, X. K. Qiao^80, J. J. Qin^72, L. Q. Qin^14, L. Y. Qin^71,58, X. S. Qin^50, Z. H. Qin^1,58, J. F. Qiu^1, Z. H. Qu^72, C. F. Redmer^35, K. J. Ren^39, A. Rivetti^74C, M. Rolo^74C, G. Rong^1,63, Ch. Rosner^18, S. N. Ruan^43, N. Salone^44, A. Sarantsev^36,d, Y. Schelhaas^35, K. Schoenning^75, M. Scodeggio^29A, K. Y. Shan^12,g, W. Shan^24, X. Y. Shan^71,58, Z. J Shang^38,k,l, J. F. Shangguan^55, L. G. Shao^1,63, M. Shao^71,58, C. P. Shen^12,g, H. F. Shen^1,8, W. H. Shen^63, X. Y. Shen^1,63, B. A. Shi^63, H. Shi^71,58, H. C. Shi^71,58, J. L. Shi^12,g, J. Y. Shi^1, Q. Q. Shi^55, S. Y. Shi^72, X. Shi^1,58, J. J. Song^19, T. Z. Song^59, W. M. Song^34,1, Y.  J. Song^12,g, Y. X. Song^46,h,n, S. Sosio^74A,74C, S. Spataro^74A,74C, F. Stieler^35, Y. J. Su^63, G. B. Sun^76, G. X. Sun^1, H. Sun^63, H. K. Sun^1, J. F. Sun^19, K. Sun^61, L. Sun^76, S. S. Sun^1,63, T. Sun^51,f, W. Y. Sun^34, Y. Sun^9, Y. J. Sun^71,58, Y. Z. Sun^1, Z. Q. Sun^1,63, Z. T. Sun^50, C. J. Tang^54, G. Y. Tang^1, J. Tang^59, Y. A. Tang^76, L. Y. Tao^72, Q. T. Tao^25,i, M. Tat^69, J. X. Teng^71,58, V. Thoren^75, W. H. Tian^59, Y. Tian^31,63, Z. F. Tian^76, I. Uman^62B, Y. Wan^55,S. J. Wang ^50, B. Wang^1, B. L. Wang^63, Bo Wang^71,58, D. Y. Wang^46,h, F. Wang^72, H. J. Wang^38,k,l, J. J. Wang^76, J. P. Wang ^50, K. Wang^1,58, L. L. Wang^1, M. Wang^50, Meng Wang^1,63, N. Y. Wang^63, S. Wang^12,g, S. Wang^38,k,l, T.  Wang^12,g, T. J. Wang^43, W.  Wang^72, W. Wang^59, W. P. Wang^35,71,o, X. Wang^46,h, X. F. Wang^38,k,l, X. J. Wang^39, X. L. Wang^12,g, X. N. Wang^1, Y. Wang^61, Y. D. Wang^45, Y. F. Wang^1,58,63, Y. L. Wang^19, Y. N. Wang^45, Y. Q. Wang^1, Yaqian Wang^17, Yi Wang^61, Z. 
Wang^1,58, Z. L.  Wang^72, Z. Y. Wang^1,63, Ziyi Wang^63, D. H. Wei^14, F. Weidner^68, S. P. Wen^1, Y. R. Wen^39, U. Wiedner^3, G. Wilkinson^69, M. Wolke^75, L. Wollenberg^3, C. Wu^39, J. F. Wu^1,8, L. H. Wu^1, L. J. Wu^1,63, X. Wu^12,g, X. H. Wu^34, Y. Wu^71,58, Y. H. Wu^55, Y. J. Wu^31, Z. Wu^1,58, L. Xia^71,58, X. M. Xian^39, B. H. Xiang^1,63, T. Xiang^46,h, D. Xiao^38,k,l, G. Y. Xiao^42, S. Y. Xiao^1, Y.  L. Xiao^12,g, Z. J. Xiao^41, C. Xie^42, X. H. Xie^46,h, Y. Xie^50, Y. G. Xie^1,58, Y. H. Xie^6, Z. P. Xie^71,58, T. Y. Xing^1,63, C. F. Xu^1,63, C. J. Xu^59, G. F. Xu^1, H. Y. Xu^66, M. Xu^71,58, Q. J. Xu^16, Q. N. Xu^30, W. Xu^1, W. L. Xu^66, X. P. Xu^55, Y. C. Xu^77, Z. P. Xu^42, Z. S. Xu^63, F. Yan^12,g, L. Yan^12,g, W. B. Yan^71,58, W. C. Yan^80, X. Q. Yan^1, H. J. Yang^51,f, H. L. Yang^34, H. X. Yang^1, Tao Yang^1, Y. Yang^12,g, Y. F. Yang^43, Y. X. Yang^1,63, Yifan Yang^1,63, Z. W. Yang^38,k,l, Z. P. Yao^50, M. Ye^1,58, M. H. Ye^8, J. H. Yin^1, Z. Y. You^59, B. X. Yu^1,58,63, C. X. Yu^43, G. Yu^1,63, J. S. Yu^25,i, T. Yu^72, X. D. Yu^46,h, Y. C. Yu^80, C. Z. Yuan^1,63, J. Yuan^34, L. Yuan^2, S. C. Yuan^1, Y. Yuan^1,63, Y. J. Yuan^45, Z. Y. Yuan^59, C. X. Yue^39, A. A. Zafar^73, F. R. Zeng^50, S. H.  Zeng^72, X. Zeng^12,g, Y. Zeng^25,i, Y. J. Zeng^59, X. Y. Zhai^34, Y. C. Zhai^50, Y. H. Zhan^59, A. Q. Zhang^1,63, B. L. Zhang^1,63, B. X. Zhang^1, D. H. Zhang^43, G. Y. Zhang^19, H. Zhang^80, H. Zhang^71,58, H. C. Zhang^1,58,63, H. H. Zhang^34, H. H. Zhang^59, H. Q. Zhang^1,58,63, H. R. Zhang^71,58, H. Y. Zhang^1,58, J. Zhang^80, J. Zhang^59, J. J. Zhang^52, J. L. Zhang^20, J. Q. Zhang^41, J. S. Zhang^12,g, J. W. Zhang^1,58,63, J. X. Zhang^38,k,l, J. Y. Zhang^1, J. Z. Zhang^1,63, Jianyu Zhang^63, L. M. Zhang^61, Lei Zhang^42, P. Zhang^1,63, Q. Y. Zhang^34, R. Y Zhang^38,k,l, Shuihan Zhang^1,63, Shulei Zhang^25,i, X. D. Zhang^45, X. M. Zhang^1, X. Y. Zhang^50, Y.  Zhang^72, Y.  T. Zhang^80, Y. H. Zhang^1,58, Y. M. Zhang^39, Yan Zhang^71,58, Yao Zhang^1, Z. D. Zhang^1, Z. H. Zhang^1, Z. L. Zhang^34, Z. Y. Zhang^76, Z. Y. Zhang^43, Z. Z.  Zhang^45, G. Zhao^1, J. Y. Zhao^1,63, J. Z. Zhao^1,58, Lei Zhao^71,58, Ling Zhao^1, M. G. Zhao^43, N. Zhao^78, R. P. Zhao^63, S. J. Zhao^80, Y. B. Zhao^1,58, Y. X. Zhao^31,63, Z. G. Zhao^71,58, A. Zhemchugov^36,b, B. Zheng^72, B. M. Zheng^34, J. P. Zheng^1,58, W. J. Zheng^1,63, Y. H. Zheng^63, B. Zhong^41, X. Zhong^59, H.  Zhou^50, J. Y. Zhou^34, L. P. Zhou^1,63, S.  Zhou^6, X. Zhou^76, X. K. Zhou^6, X. R. Zhou^71,58, X. Y. Zhou^39, Y. Z. Zhou^12,g, J. Zhu^43, K. Zhu^1, K. J. Zhu^1,58,63, K. S. Zhu^12,g, L. Zhu^34, L. X. Zhu^63, S. H. Zhu^70, S. Q. Zhu^42, T. J. Zhu^12,g, W. D. Zhu^41, Y. C. Zhu^71,58, Z. A. Zhu^1,63, J. H. Zou^1, J. 
Zu^71,58 (BESIII Collaboration)^1 Institute of High Energy Physics, Beijing 100049, People's Republic of China^2 Beihang University, Beijing 100191, People's Republic of China^3 BochumRuhr-University, D-44780 Bochum, Germany^4 Budker Institute of Nuclear Physics SB RAS (BINP), Novosibirsk 630090, Russia^5 Carnegie Mellon University, Pittsburgh, Pennsylvania 15213, USA^6 Central China Normal University, Wuhan 430079, People's Republic of China^7 Central South University, Changsha 410083, People's Republic of China^8 China Center of Advanced Science and Technology, Beijing 100190, People's Republic of China^9 China University of Geosciences, Wuhan 430074, People's Republic of China^10 Chung-Ang University, Seoul, 06974, Republic of Korea^11 COMSATS University Islamabad, Lahore Campus, Defence Road, Off Raiwind Road, 54000 Lahore, Pakistan^12 Fudan University, Shanghai 200433, People's Republic of China^13 GSI Helmholtzcentre for Heavy Ion Research GmbH, D-64291 Darmstadt, Germany^14 Guangxi Normal University, Guilin 541004, People's Republic of China^15 Guangxi University, Nanning 530004, People's Republic of China^16 Hangzhou Normal University, Hangzhou 310036, People's Republic of China^17 Hebei University, Baoding 071002, People's Republic of China^18 Helmholtz Institute Mainz, Staudinger Weg 18, D-55099 Mainz, Germany^19 Henan Normal University, Xinxiang 453007, People's Republic of China^20 Henan University, Kaifeng 475004, People's Republic of China^21 Henan University of Science and Technology, Luoyang 471003, People's Republic of China^22 Henan University of Technology, Zhengzhou 450001, People's Republic of China^23 Huangshan College, Huangshan245000, People's Republic of China^24 Hunan Normal University, Changsha 410081, People's Republic of China^25 Hunan University, Changsha 410082, People's Republic of China^26 Indian Institute of Technology Madras, Chennai 600036, India^27 Indiana University, Bloomington, Indiana 47405, USA^28 INFN Laboratori Nazionali di Frascati , (A)INFN Laboratori Nazionali di Frascati, I-00044, Frascati, Italy; (B)INFN Sezione diPerugia, I-06100, Perugia, Italy; (C)University of Perugia, I-06100, Perugia, Italy^29 INFN Sezione di Ferrara, (A)INFN Sezione di Ferrara, I-44122, Ferrara, Italy; (B)University of Ferrara,I-44122, Ferrara, Italy^30 Inner Mongolia University, Hohhot 010021, People's Republic of China^31 Institute of Modern Physics, Lanzhou 730000, People's Republic of China^32 Institute of Physics and Technology, Peace Avenue 54B, Ulaanbaatar 13330, Mongolia^33 Instituto de Alta Investigación, Universidad de Tarapacá, Casilla 7D, Arica 1000000, Chile^34 Jilin University, Changchun 130012, People's Republic of China^35 Johannes Gutenberg University of Mainz, Johann-Joachim-Becher-Weg 45, D-55099 Mainz, Germany^36 Joint Institute for Nuclear Research, 141980 Dubna, Moscow region, Russia^37 Justus-Liebig-Universitaet Giessen, II. 
Physikalisches Institut, Heinrich-Buff-Ring 16, D-35392 Giessen, Germany^38 Lanzhou University, Lanzhou 730000, People's Republic of China^39 Liaoning Normal University, Dalian 116029, People's Republic of China^40 Liaoning University, Shenyang 110036, People's Republic of China^41 Nanjing Normal University, Nanjing 210023, People's Republic of China^42 Nanjing University, Nanjing 210093, People's Republic of China^43 Nankai University, Tianjin 300071, People's Republic of China^44 National Centre for Nuclear Research, Warsaw 02-093, Poland^45 North China Electric Power University, Beijing 102206, People's Republic of China^46 Peking University, Beijing 100871, People's Republic of China^47 Qufu Normal University, Qufu 273165, People's Republic of China^48 Renmin University of China, Beijing 100872, People's Republic of China^49 Shandong Normal University, Jinan 250014, People's Republic of China^50 Shandong University, Jinan 250100, People's Republic of China^51 Shanghai Jiao Tong University, Shanghai 200240,People's Republic of China^52 Shanxi Normal University, Linfen 041004, People's Republic of China^53 Shanxi University, Taiyuan 030006, People's Republic of China^54 Sichuan University, Chengdu 610064, People's Republic of China^55 Soochow University, Suzhou 215006, People's Republic of China^56 South China Normal University, Guangzhou 510006, People's Republic of China^57 Southeast University, Nanjing 211100, People's Republic of China^58 State Key Laboratory of Particle Detection and Electronics, Beijing 100049, Hefei 230026, People's Republic of China^59 Sun Yat-Sen University, Guangzhou 510275, People's Republic of China^60 Suranaree University of Technology, University Avenue 111, Nakhon Ratchasima 30000, Thailand^61 Tsinghua University, Beijing 100084, People's Republic of China^62 Turkish Accelerator Center Particle Factory Group, (A)Istinye University, 34010, Istanbul, Turkey; (B)Near East University, Nicosia, North Cyprus, 99138, Mersin 10, Turkey^63 University of Chinese Academy of Sciences, Beijing 100049, People's Republic of China^64 University of Groningen, NL-9747 AA Groningen, The Netherlands^65 University of Hawaii, Honolulu, Hawaii 96822, USA^66 University of Jinan, Jinan 250022, People's Republic of China^67 University of Manchester, Oxford Road, Manchester, M13 9PL, United Kingdom^68 University of Muenster, Wilhelm-Klemm-Strasse 9, 48149 Muenster, Germany^69 University of Oxford, Keble Road, Oxford OX13RH, United Kingdom^70 University of Science and Technology Liaoning, Anshan 114051, People's Republic of China^71 University of Science and Technology of China, Hefei 230026, People's Republic of China^72 University of South China, Hengyang 421001, People's Republic of China^73 University of the Punjab, Lahore-54590, Pakistan^74 University of Turin and INFN, (A)University of Turin, I-10125, Turin, Italy; (B)University of Eastern Piedmont, I-15121, Alessandria, Italy; (C)INFN, I-10125, Turin, Italy^75 Uppsala University, Box 516, SE-75120 Uppsala, Sweden^76 Wuhan University, Wuhan 430072, People's Republic of China^77 Yantai University, Yantai 264005, People's Republic of China^78 Yunnan University, Kunming 650500, People's Republic of China^79 Zhejiang University, Hangzhou 310027, People's Republic of China^80 Zhengzhou University, Zhengzhou 450001, People's Republic of China^a Deceased^b Also at the Moscow Institute of Physics and Technology, Moscow 141700, Russia^c Also at the Novosibirsk State University, Novosibirsk, 630090, Russia^d Also at the NRC "Kurchatov 
Institute", PNPI, 188300, Gatchina, Russia^e Also at Goethe University Frankfurt, 60323 Frankfurt am Main, Germany^f Also at Key Laboratory for Particle Physics, Astrophysics and Cosmology, Ministry of Education; Shanghai Key Laboratory for Particle Physics and Cosmology; Institute of Nuclear and Particle Physics, Shanghai 200240, People's Republic of China^g Also at Key Laboratory of Nuclear Physics and Ion-beam Application (MOE) and Institute of Modern Physics, Fudan University, Shanghai 200443, People's Republic of China^h Also at State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing 100871, People's Republic of China^i Also at School of Physics and Electronics, Hunan University, Changsha 410082, China^j Also at Guangdong Provincial Key Laboratory of Nuclear Science, Institute of Quantum Matter, South China Normal University, Guangzhou 510006, China^k Also at MOE Frontiers Science Center for Rare Isotopes, Lanzhou University, Lanzhou 730000, People's Republic of China^l Also at Lanzhou Center for Theoretical Physics, Lanzhou University, Lanzhou 730000, People's Republic of China^m Also at the Department of Mathematical Sciences, IBA, Karachi 75270, Pakistan^n Also at Ecole Polytechnique Federale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland^o Also at Helmholtz Institute Mainz, Staudinger Weg 18, D-55099 Mainz, GermanyJanuary 14, 2024 ===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Recent research has shown that deep neural networks are vulnerable to adversarial attacks: well-trained samples or patches can be used to trick a neural network detector or human visual perception. However, these adversarial patches, with their conspicuous and unusual patterns, lack camouflage and can easily raise suspicion in the real world. To solve this problem, this paper proposes a novel adversarial patch method called the Latent Diffusion Patch (LDP), in which a pretrained encoder first compresses natural images into a feature space that retains their key characteristics. A diffusion model is then trained on this feature space. Finally, the latent space of the pretrained diffusion model is explored using image-denoising techniques. The powerful generative abilities of diffusion models polish the patches and images, making them more acceptable to the human visual system. Experimental results in both the digital and physical worlds show that LDPs achieve a visual subjectivity score of 87.3%, while still maintaining effective attack capabilities.§ INTRODUCTION Deep learning, an essential branch of artificial intelligence, has recently excelled in many challenging tasks, including object classification, facial recognition, autonomous driving, and license plate recognition. However, current research shows that Deep Neural Networks (DNNs) are vulnerable due to their sensitivity and lack of interpretability, making them susceptible to adversarial examples. Even minor perturbations can lead to incorrect predictions by DNNs. In the digital realm, adversarial attacks are mainly executed by adding subtle pixel disturbances to the original input images<cit.>. Unlike the digital world, adversarial attacks in the physical world are influenced by complex physical factors such as lighting, distance, and angles, making them more challenging. In the physical world, carefully designed physical adversarial examples can also mislead DNNs into making wrong decisions.
For example, Thys and others<cit.> showed that placing a patch on a piece of cardboard in front of a camera can prevent successful detection of a person. Xu and others<cit.> demonstrated that wearing a T-shirt with an adversarial patch can help evade target detectors. Most prior studies on physical-world adversarial attacks focused mainly on the effectiveness and robustness of the attack, overlooking the visual appearance and semantic plausibility of the adversarial patterns. This often leads to bizarre patterns that draw attention and can be easily identified as anomalies by experts in the field, causing the attack to fail. To address this issue, this paper introduces the Latent Diffusion Patch (LDP). Initially, a pretrained autoencoder compresses natural images (such as cats in nature) into a feature space. This space, approximating the image manifold of natural images, reduces feature redundancy and retains only key features. Subsequently, a diffusion model is utilized to learn this feature space, constraining the spatial information of the generated adversarial patterns. Iterative denoising explores the latent space of the diffusion model, mapping random noise into the feature space. The hidden variables discovered through this process are sampled by a decoder to create adversarial patterns. These patterns significantly lower the detection score of the target object while closely resembling natural images in appearance. Experimental results demonstrate that individuals with LDP can easily evade human detectors in both digital and physical worlds. Moreover, the high camouflage of LDP patterns successfully avoids detection by researchers. Our contributions can be summarized as follows: * This study proposes a novel method for generating adversarial patterns in the physical world. By employing a diffusion model, it constrains the spatial information of adversarial patterns to closely resemble the feature space obtained from perceptually compressed natural images. As a result, the generated images appear more natural to human visual perception, while still maintaining robust attack performance. * By limiting the variation range of latent-space hidden variables, this approach ensures that the feature vectors of the Latent Diffusion Patch (LDP) do not deviate excessively from the latent space of natural images. This constraint guarantees that the LDP closely resembles natural images in the physical world to the greatest extent possible. * The Latent Diffusion Patch (LDP) achieves satisfactory adversarial attack results in various environments, including indoor and outdoor settings. Additionally, it demonstrates commendable generalizability and portability across different detector models. § RELATED WORKS In recent years, with the rapid development of Deep Neural Networks (DNNs), research on adversarial attacks targeting DNNs has become increasingly prevalent. This section briefly reviews recent works related to adversarial examples and diffusion models. §.§ Adversarial Example Adversarial examples are meticulously designed inputs aimed at leading neural network models to make incorrect decisions. In 2014, Szegedy and others<cit.> successfully generated the first adversarial example by adding subtle disturbances, trained in the wrong gradient direction, to original digital images. This research posed a challenge to the robustness and generalizability of deep neural networks, giving birth to an entirely new field of study.
Subsequent research on adversarial examples primarily focused on the digital world<cit.>, where researchers added imperceptible adversarial perturbations directly to digital images at the pixel level. In recent years, adversarial attacks in the physical world have been increasingly proposed, targeting mainly classifier and detector models and posing greater risks to DNN applications in real-world scenarios. In the physical world, since it is not feasible to directly alter the pixel values of DNN inputs, the common approach involves placing carefully designed adversarial patterns near the target object to influence the DNN's decision-making regarding that object. In adversarial attacks targeting classifier models, Brown et al.<cit.> placed adversarial patches around a banana, leading the image classifier to misclassify it as a toaster. Sharif et al. created adversarial glasses to attack facial recognition systems. Athalye et al. introduced the Expectation Over Transformation (EoT) method to generate potent 3D adversarial objects. Evtimov et al.<cit.> introduced Robust Physical Perturbations (RPP) to execute physical adversarial attacks. By affixing black and white stickers on road signs, they made the model misidentify a STOP sign as a 45 mph speed limit sign, raising an alarm for the autonomous driving field. In attacks targeting detector models, the primary goal of attackers is often to use adversarial patterns to make target objects evade detection by target detectors, such as human and vehicle detectors. In attacks related to human detectors, Thys et al.<cit.> first created AdvPatch, which, when placed in front of a person, effectively evaded detection by human detectors. Building on this research, Xu et al.<cit.> introduced Thin Plate Spline (TPS) technology, which simulates the deformation of clothing wrinkles during a person's movement. They successfully designed an adversarial T-shirt, transferring the carrier of the adversarial pattern from rigid to flexible materials. However, a person wearing an adversarial T-shirt could only evade detection when facing the detector, a significant limitation. To overcome this, Hu et al.<cit.> subsequently proposed Adversarial Texture, a technique that can cover clothing of any shape, allowing individuals wearing covered clothes to attack target detectors from various angles. In the aforementioned adversarial attacks against human detectors, while each method demonstrated effective evasion of model detection, the pattern-generation process often neglected control over the color appearance of the patterns. Consequently, this led to adversarial patterns with overly vivid and bizarre colors. These attention-grabbing appearances not only deviate from the essence of adversarial examples but also risk being recognized as anomalies by humans before the patterns are input into the model. To address the issue of adversarial patterns being bizarre and conspicuous, researchers such as Duan and Luo<cit.> have attempted to blend adversarial patterns with their surrounding physical environment. This approach aims to camouflage the patterns as much as possible within the physical world, solving the problem from the perspective of image semantic plausibility. Hu and others, meanwhile, have turned to generating natural adversarial patches by searching for adversarially effective natural patterns within the latent space learned by GANs.
Although this method can effectively sample and generate natural images as adversarial patterns from random noise, it faces challenges such as training instability, model collapse<cit.>, and mode collapse during the dynamic training process of GANs. Subsequently, Tan and others proposed a new framework with a two-stage training strategy to generate legitimate adversarial patches (LAP), enhancing the visual rationality of the produced adversarial patterns. Guesmi and others introduced a method involving a similarity loss function to generate natural and more robust DAPs without using GANs<cit.>. However, although both methods aim to create more natural and plausible adversarial patterns, they still exhibit flaws and irrationalities in image quality; they do not achieve true naturalness. §.§ Diffusion model Although GAN networks are known for their excellent image generation capabilities, their inherent issues make them unsuitable for generating adversarial patterns. Addressing the many problems associated with GANs, the recently proposed diffusion models have demonstrated outstanding performance; moreover, they are capable of producing images of higher quality than GANs. Diffusion models have recently achieved remarkable results in fields such as computer vision<cit.>, natural language processing<cit.>, and speech processing. Moreover, recent works have shown that denoising diffusion probabilistic models (DDPMs)<cit.> attain state-of-the-art outcomes in terms of density estimation and the quality of generated samples<cit.>. The Denoising Diffusion Probabilistic Model (DDPM) consists of two parameterized Markov chains and utilizes variational inference to generate samples that match the original data after a given number of time steps. The forward chain incrementally adds Gaussian noise to the data distribution through a pre-designed schedule until the data distribution converges to a standard Gaussian distribution. Conversely, the reverse chain starts from a sample of the standard Gaussian distribution and progressively removes the Gaussian noise learned by the neural network until the sample is restored to the clean original data distribution. Formally, we define a forward noise process q, which at time t introduces Gaussian noise into the data distribution of the previous moment, generating hidden variables denoted x_1, x_2, x_3, …, x_T. Here, t is a specific moment sequentially chosen from a schedule and lies within the interval t∈(0,T]. Given a data distribution x_0 ∼ q(x_0), the overall objective of optimizing the diffusion model is as follows: E_t ∼𝒰(0,T), x_0 ∼ q(x_0), ϵ∼ N(0,I)[||ϵ - ϵ_θ (x_t,t)||^2] Herein, ϵ_θ represents the neural network model, which uses x_t and t to predict the noise ϵ added in the forward process. Equation (1) is akin to denoising score matching under the index t. During the generation process, a sample x_T is initially drawn from a standard Gaussian distribution, and then the learned reverse chain of the neural network is used to sequentially render samples x_t, until a new data sample x_0 is obtained. However, training diffusion models directly in the image's pixel space can be slow and costly. To address this issue, Rombach and others recently proposed the Latent Diffusion Model (LDM). LDM first uses an autoencoder to perceptually compress high-dimensional data images into a low-dimensional feature space.
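Before turning to the latent-space variant, the pixel-space objective in Equation (1) can be summarized in code. The following is a minimal PyTorch-style sketch, not the authors' implementation; `eps_theta` stands for an arbitrary noise-prediction network and `alphas_cumprod` for a precomputed cumulative noise schedule, both of which are illustrative assumptions.

```python
import torch

def ddpm_loss(eps_theta, x0, alphas_cumprod):
    """DDPM objective of Eq. (1): E_{t, x0, eps} ||eps - eps_theta(x_t, t)||^2.

    x_t is drawn in closed form from the forward chain:
    x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps.
    """
    b = x0.shape[0]
    T = alphas_cumprod.shape[0]
    t = torch.randint(0, T, (b,), device=x0.device)   # t chosen uniformly over time steps
    eps = torch.randn_like(x0)                        # eps ~ N(0, I)
    abar = alphas_cumprod[t].view(b, 1, 1, 1)         # cumulative alpha products at step t
    x_t = abar.sqrt() * x0 + (1.0 - abar).sqrt() * eps  # noised sample
    return torch.mean((eps - eps_theta(x_t, t)) ** 2)   # mean squared noise-prediction error
```

The closed-form expression for x_t follows from composing the Gaussian forward steps, which is why no explicit Markov-chain loop is needed during training.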
Specifically, it encodes images x from the RGB space, x ∈ℝ ^ H ×W × 3, into latent variables z = ε(x) using the encoder ε(·), and then reconstructs data images using the decoder D(·). The process can be represented as x = D(z) = D(ε(x)), z ∈ℝ ^ h ×w × c, where c is the number of channels in the latent variables. The encoder ε(·) compresses the natural image x into the feature space as z, with a downsampling factor f = H/h = W/w. The diffusion model then directly learns high-frequency and imperceptible image feature abstractions in this feature space. The overall objective of LDM can be defined as: E_t ∼𝒰(0,T), z = ε(x), ϵ∼ N(0,I)[||ϵ - ϵ_θ(z_t,t)||^2] Since the forward chain process is fixed, it is only necessary to obtain effective latent variables z from the pretrained encoder ε before training; the reverse chain can then be trained to map random noise back into the feature space by progressively operating on z. Additionally, during sampling, it suffices to decode once with the pretrained decoder to map the sampled latent variables from the feature space back to the image space. § METHOD The objective of this paper is to generate highly natural adversarial camouflage patterns that can simultaneously evade detection by detectors and human perception, to be applied in the physical world as adversarial patches. To achieve this, we propose the Latent Diffusion Patch (LDP), which begins by using a pretrained encoder to perceptually compress natural images into a feature space. This feature space, approximating the image manifold of natural images, effectively reduces feature redundancy, retaining only key image features. Then, this feature space is used to train a diffusion model. Finally, by denoising and exploring the latent space of the pretrained diffusion model, we use a decoder to sample the hidden variables found, creating adversarial patterns that approximate natural images. Figure 2 presents the framework flowchart for generating LDP, including the generation and optimization of adversarial patterns. We introduce a patch generator composed of a pretrained diffusion model and decoder. This generator explores the latent space of the diffusion model to obtain latent variables resembling natural image feature vectors and uses the decoder to sample natural adversarial patterns. A specific loss function is designed to iteratively update the latent vectors, maintaining the naturality of the pattern by constraining its mean and variance. §.§ Generating Adversarial Patches In the generation process of the Latent Diffusion Patch (LDP), we first pretrain an autoencoder A using a dataset of natural images from the same category. The autoencoder A consists of an encoder ε and a decoder D. The encoder ε is used to compress data images into the feature space, and this feature space is then used to train the diffusion model M. Since the diffusion model learns the feature vectors of the data images, we can explore the latent space of the diffusion model to constrain the spatial domain of the patch to be close to the feature space of natural images. The patch generator begins by randomly sampling a noise Z_T ∈ℝ ^ h × w × d from a standard Gaussian distribution, where h and w are the height and width of the latent variable, respectively, and d is the dimension of the latent variable. By continuously denoising and exploring the latent space of the diffusion model, the random noise is mapped into the feature space of natural images.
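Continuing the sketch, the latent-space objective of Equation (2) and the patch-generation map P = D(M(Z_T)) described next can be rendered as follows. This reuses `ddpm_loss` from the previous sketch and assumes a frozen pretrained encoder/decoder pair plus a `denoise_chain` callable abstracting the learned reverse process M; all of these names are illustrative rather than the authors' code.

```python
import torch

def ldm_loss(eps_theta, encoder, x0, alphas_cumprod):
    """Eq. (2): the same denoising objective, computed on latents z = encoder(x)."""
    with torch.no_grad():            # the autoencoder is pretrained and frozen
        z0 = encoder(x0)             # perceptual compression into the feature space
    return ddpm_loss(eps_theta, z0, alphas_cumprod)   # reuse the Eq. (1) sketch on latents

@torch.no_grad()
def sample_patch(denoise_chain, decoder, shape, device="cpu"):
    """Patch generation P = D(M(Z_T)): denoise random noise in latent space,
    then decode once back to image space."""
    z_T = torch.randn(shape, device=device)   # Z_T ~ N(0, I), e.g. shape (1, d, h, w)
    z_0 = denoise_chain(z_T)                  # iterative reverse-chain denoising (M)
    return decoder(z_0)                       # single decode (D) to an RGB patch
```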
The adversarial patch P is then obtained through the decoder, sampling from the explored latent variables as P = D(M(Z_T)) ∈ℝ ^ H × W × 3. Next, we iteratively update Z_T to optimize our objective function, which is defined as follows: L_total = L_det + α L_kl + β L_tv + γ L_nps The formula includes four independent loss functions. The first term, L_det, is the adversarial detection loss, and the second term, L_kl, is the regularization loss for the latent variables. These two loss functions are used to control the adversarial attack effectiveness of the LDP (see Section 3.2 for specific details). The third term, L_tv, represents the total variation loss, which is used to control the overall color smoothness of the adversarial pattern. It is defined as follows: L_tv = ∑_i,j√( (P_i+1,j-P_i,j)^2 + (P_i,j+1-P_i,j)^2) Here, P_i,j represents the pixel value of the LDP at coordinates (i,j). The final term, L_nps, is the non-printability loss, which ensures that the pixel values of the LDP are as close as possible to the colors that can be output by printing devices. This ensures that the color of the LDP in the physical world closely matches the color of the pattern generated in the digital world. The specific expression for L_nps is: L_nps = ∑_i,jmin_c ∈ C ||P_i,j - c||_2 Here, C represents a set of three-channel colors that can be printed by a group of N printers. β and γ are two hyperparameters used to control the intensity of the corresponding losses. In our experiments, we set β = 0.1 and γ = 0.01. §.§ Adversarial Gradient and Constraints The process of generating the Latent Diffusion Patch (LDP) involves using adversarial gradients to guide changes in the pattern's pixel values, thereby deceiving the target detector. To obtain the adversarial gradients of the target object, it is first necessary to render the adversarial pattern onto the target object. Then, this composite image is fed into the target detector, and the adversarial loss is calculated from the output prediction vectors. Let G(x) = {P_xywh, C_obj, C_cls} represent a human detector, where x is the input sample image, producing multiple sets of prediction vectors as output. In these vectors, P_xywh denotes the coordinates of the predicted bounding box in x, C_obj represents the probability that the box contains a target, and C_cls denotes the probability of each category. We minimize the product of C_obj and C_cls to reduce the detector G's confidence in recognizing human targets in x, thereby achieving the effect of evading detection. The specific formula is as follows: L_det = 1/N∑_i=1^Nmax [C_obj(x'_i) × C_cls(x'_i)] where x'_i represents the i-th image in a single batch of images with adversarial noise added, and the total number of images in the batch is N. Equation (6) uses the maximum confidence over all human detections in each image as the loss; by iteratively lowering the maximum confidence score of human targets in each round, the Latent Diffusion Patch (LDP) suppresses detection. Moreover, during the adversarial optimization process, the optimization can push the latent vector Z_T away from the high-density region of the standard Gaussian distribution. Therefore, it is necessary to set constraints on it.
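The loss terms of Equations (4)–(6) admit a direct implementation. Below is one possible PyTorch rendering, not the authors' code; the patch P is assumed to be a (3, H, W) tensor, `printable_colors` an (N, 3) tensor of printer-reproducible colors, and the detector outputs per-image tensors of box objectness and person-class scores — all illustrative assumptions.

```python
import torch

def tv_loss(P):
    """Total variation loss of Eq. (4), computed per channel; P has shape (3, H, W)."""
    dh = (P[:, 1:, :] - P[:, :-1, :]) ** 2          # (P_{i+1,j} - P_{i,j})^2
    dw = (P[:, :, 1:] - P[:, :, :-1]) ** 2          # (P_{i,j+1} - P_{i,j})^2
    # crop both difference maps to a common (3, H-1, W-1) grid before combining
    return torch.sqrt(dh[:, :, :-1] + dw[:, :-1, :] + 1e-8).sum()

def nps_loss(P, printable_colors):
    """Non-printability loss of Eq. (5); printable_colors has shape (N, 3)."""
    pix = P.permute(1, 2, 0).reshape(-1, 1, 3)                     # (H*W, 1, 3)
    dist = torch.norm(pix - printable_colors.unsqueeze(0), dim=2)  # (H*W, N)
    return dist.min(dim=1).values.sum()            # distance to nearest printable color

def det_loss(obj_scores, cls_scores):
    """Detection loss of Eq. (6): batch mean of the maximum person confidence
    C_obj * C_cls; each list entry holds one image's per-box scores."""
    per_image = [(o * c).max() for o, c in zip(obj_scores, cls_scores)]
    return torch.stack(per_image).mean()
```

Per Equation (3), these would be combined as det_loss(...) + α·L_kl + β·tv_loss(P) + γ·nps_loss(...), with β = 0.1 and γ = 0.01 as stated above.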
The diffusion model is trained by randomly sampling noise from the standard Gaussian distribution, and during the forward diffusion process, each time-step variable is encoded as a Gaussian distribution dependent only on the variable of the previous time step. For continuous Gaussian diffusion models, the starting point of the reverse denoising process must conform to the standard Gaussian distribution. If a Z_T deviating from the standard Gaussian region is mapped through the diffusion model's denoising process, the generated patches cannot be guaranteed to be sufficiently realistic. To ensure the realism of the generated adversarial patterns, this paper imposes a regularization loss L_kl to constrain the mean and variance of Z_T, thereby keeping Z_T within the vicinity of the standard Gaussian distribution. The regularization term loss is defined as: L_kl = KL(N(μ,σ^2)||N(0,I)) = 1/2 (-log σ^2 + μ^2 + σ^2 -1) where μ and σ are the mean and standard deviation of the distribution in which Z_T resides. However, the process of directly sampling Z_T from the probability distribution N(μ,σ^2) is non-differentiable. Therefore, to effectively optimize the neural network, we shift the optimization target from the latent vector Z_T to μ and σ. Since a linear transformation of a Gaussian distribution remains Gaussian, the optimized μ and σ can be re-sampled through a parameter transformation. This transformation converts the process of sampling a latent vector Z_T from N(μ,σ^2) into first sampling random noise ε from N(0,I) and then setting Z_T = μ + ε×σ. This process facilitates the conformity of Z_T to the standard Gaussian distribution via the KL divergence. We use the weight coefficient α to control the intensity of L_kl, with α = 0.5 in our experiments. § EXPERIMENTS This paper conducts experiments with the Latent Diffusion Patch (LDP) in both digital and physical worlds, providing comprehensive evaluation results under various experimental setups and environments to demonstrate the effectiveness of the proposed method. In this paper, we select Yolov2, Yolov3, Yolov3tiny, Yolov4, and Yolov4tiny as the target detectors to be attacked and use the official COCO dataset weights, with an input image resolution of 416 × 416. In the process of training the diffusion model and constructing the pattern generator, we choose the AFHQ dataset, which includes images of three domains of animal faces: cats, dogs, and wild animals, as shown in Figure X. Each domain provides about 5000 images with a resolution of 512×512. To ensure that the patterns generated by the diffusion model closely resemble creatures in the natural world, we specifically select cats as our training image set. Subsequently, the data images are compressed through the encoder to obtain feature variables with dimensions of 64×64×20, and all the feature variable sets are saved as the feature space. The entire patch optimization process uses the Adam optimizer, with a learning rate of 0.0001, β_1 = 0.5, β_2 = 0.999, and a batch size of 12. §.§ Digital World Experiments §.§.§ Evaluation on INRIA dataset In the evaluation experiments, we utilized the INRIA Person Dataset for training and assessing the proposed method. This dataset comprises 614 training images and 288 test images. To meet the input size requirements of the target detectors, all data images were resized to a resolution of 416×416.
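The reparameterized sampling and the closed-form KL term of Equation (7) from Section 3.2 can be sketched as follows. Parameterizing the variance as log σ² is our own choice for numerical stability, not something specified above; the rest follows the standard closed-form KL between diagonal Gaussians.

```python
import torch

def sample_latent(mu, log_var):
    """Reparameterization: Z_T = mu + eps * sigma, keeping sampling differentiable."""
    eps = torch.randn_like(mu)                 # eps ~ N(0, I)
    return mu + eps * torch.exp(0.5 * log_var)  # sigma = exp(log_var / 2)

def kl_loss(mu, log_var):
    """Eq. (7): KL(N(mu, sigma^2) || N(0, I)) in closed form,
    = 1/2 * sum(mu^2 + sigma^2 - log sigma^2 - 1)."""
    return 0.5 * torch.sum(mu ** 2 + torch.exp(log_var) - log_var - 1.0)
```

Optimizing μ and log σ² rather than Z_T itself is what keeps the sampling step differentiable end-to-end, as described above.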
The experiments used Mean Average Precision (mAP) as the primary metric for evaluating attack performance, a commonly used performance metric in object detection tasks. The detection boxes produced by each detector on the original data were taken as the ground-truth labels, so the target detector's mAP is 100% at this stage. The LDP was then overlaid onto the original data according to the coordinates of the detected boxes, and the target detector was used again to determine the mAP for data images with LDP. Table 1 shows the evaluation results on the INRIA dataset. We trained and evaluated LDP using four different target detectors. LDP achieved lower mAP scores across various target detector combinations, demonstrating the effectiveness and transferability of the method. §.§.§ Comparative Experiment To assess the performance of the Latent Diffusion Patch (LDP) more rigorously, this paper selected four recent related works for comparison: Naturalistic Patches, Adversarial T-shirt, Adversarial Patches, and Universal Physical Camouflage (UPC). Table 2 displays the Mean Average Precision (mAP) of these methods on the INRIA dataset. The experimental results indicate that, compared to the state-of-the-art methods, LDP also achieves competitive attack performance. The focus of this paper is on generating adversarial patterns that are not easily perceptible to the human eye. Measuring the naturality of adversarial patches is a challenging task, and currently there are no suitable metrics for this purpose. Therefore, to evaluate naturality, two subjective surveys were conducted, each with 30 independent participants. In the first part of the naturality assessment, we presented the aforementioned four types of adversarial patterns and LDP sequentially to participants and asked them to score each pattern according to their subjective judgment, with a full score of 100. The average score was then taken as the naturality score for each adversarial pattern. The experimental results, shown in Table 3, indicate that LDP scored notably higher in subjective naturality. In the second part of the naturality assessment, we randomly arranged 3 images of natural cats and 3 different LDPs, asking participants to rate the naturality of each image. This experiment was designed to assess the absolute visual naturality score of the LDPs generated by our method compared to actual natural images, with results also shown in Table 3. The findings demonstrate that the adversarial patterns generated by our proposed method appear more natural visually and are less likely to be judged as malicious inputs. §.§ Physical World Experiment §.§.§ Experimental setup In the physical world experiments, we used Yolov3 as the primary model to attack. The trained LDP was printed and attached to a piece of cardboard to create adversarial patches. Subsequently, videos of individuals holding the LDP were recorded, and random image frames were extracted from these videos and fed into the model for detection. The distance between the subjects and the device was approximately 2 to 3 meters. The recording device was an iPhone 13 smartphone, equipped with a 12-megapixel camera, providing sufficient clarity to capture the adversarial patterns of the LDP.
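To illustrate the overlay step of this evaluation protocol, the following sketch pastes a resized copy of the patch at the center of each detected person box before the detector is re-run. The relative size `scale` and the center placement are illustrative assumptions of ours, since the exact rendering parameters are not given here.

```python
import torch
import torch.nn.functional as F

def paste_patch(image, patch, boxes, scale=0.2):
    """Overlay a resized patch at the center of each detected person box.
    image: (3, H, W); patch: (3, h, w); boxes: iterable of (x1, y1, x2, y2)."""
    _, H, W = image.shape
    out = image.clone()
    for x1, y1, x2, y2 in boxes:
        side = int(scale * max(x2 - x1, y2 - y1))   # patch side proportional to box size
        if side < 2:
            continue
        p = F.interpolate(patch.unsqueeze(0), size=(side, side),
                          mode="bilinear", align_corners=False).squeeze(0)
        cx, cy = int((x1 + x2) / 2), int((y1 + y2) / 2)
        top, left = max(0, cy - side // 2), max(0, cx - side // 2)
        bottom, right = min(H, top + side), min(W, left + side)
        if bottom <= top or right <= left:          # box fell outside the image
            continue
        out[:, top:bottom, left:right] = p[:, : bottom - top, : right - left]
    return out
```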
Additionally, it is important to note that, to protect the privacy of the participants, the faces of individuals in the detection results were pixelated.§.§.§ Physical attack assessment Considering that adversarial patterns in the physical world are primarily influenced by the size of the printed image and by the physical environment, we conducted experiments using two different sizes of LDPs in three distinct settings. Regarding size, we selected LDPs measuring 23×23 cm and 33×33 cm, as shown in Figures 2 and 3, respectively. These sizes were chosen to ensure the integrity and effectiveness of the attack when the LDP is printed as a pattern on clothing. In Figure 4, we demonstrate the effectiveness of LDPs of different sizes. It is evident that LDPs of varying sizes exhibit robust adversarial effects in the physical world. In terms of setting, we chose indoor, outdoor, and corridor environments for recording the experimental videos. Figure 5 displays the attack effectiveness of the LDP in the different environments. In the physical world, natural factors such as lighting and brightness significantly affect the pixel values of adversarial patterns once they are captured and fed into detectors, posing a challenge for adversarial attacks in physical settings. However, as shown in Figure 5, LDPs adapt well to the different scenarios: whether indoors, outdoors, or in dimly lit corridors, LDPs can impair detector recognition. Table 4 shows the Attack Success Rate (ASR) of patches generated by LDP in the different settings. Notably, in indoor environments, LDP achieved an attack success rate of 75%. Furthermore, due to the high naturality of the LDP's image, the detector identified the LDP's adversarial pattern as a cat, a result not typically seen in previous work. In earlier studies, owing to the bizarre appearance of the adversarial patterns, most adversarial outputs, despite being aggressive, were not recognized by detectors as anything they visually represented. For instance, even patches generated by GAN models, which to the human eye resembled a Pomeranian dog, were not recognized as such by the target detectors. In contrast, LDP achieves an adversarial attack whose pattern is also semantically meaningful to detectors. § CONCLUSION This paper presents a method that leverages a pretrained diffusion model to learn the latent space obtained through the perceptual compression of an autoencoder, thereby creating natural adversarial patches targeted at object detectors. Leveraging the impressive generative capabilities of the diffusion model, the LDP framework produces visually more natural adversarial patches. Extensive qualitative and quantitative experiments, together with subjective naturality evaluations, show that LDP maintains competitive attack performance in both the digital and physical domains compared to other similar methods.
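As an illustrative addendum (ours, not part of the paper): the digital-world evaluation described above overlays the patch into each box detected on the clean image before re-running the detector. The following is a minimal sketch of that overlay step; the function names, the fixed 0.3 scale factor, and the centering heuristic are our own assumptions, not the paper's code.

```python
# Minimal sketch (ours) of the patch-overlay step from the digital-world
# evaluation: paste the LDP into each detected person box, then re-run the
# detector on the patched image to measure the drop in mAP.
import numpy as np

def resize_nearest(patch: np.ndarray, h: int, w: int) -> np.ndarray:
    # Nearest-neighbour resize, to keep the sketch dependency-free.
    ys = (np.arange(h) * patch.shape[0] // max(h, 1)).clip(0, patch.shape[0] - 1)
    xs = (np.arange(w) * patch.shape[1] // max(w, 1)).clip(0, patch.shape[1] - 1)
    return patch[ys][:, xs]

def overlay_patch(image: np.ndarray, patch: np.ndarray, boxes, scale: float = 0.3) -> np.ndarray:
    # image: HxWx3; patch: hxwx3; boxes: iterable of (x1, y1, x2, y2) detections.
    out = image.copy()
    for x1, y1, x2, y2 in boxes:
        side = max(1, int(scale * max(x2 - x1, y2 - y1)))  # patch size relative to box
        cx, cy = (x1 + x2) // 2, (y1 + y2) // 2            # centre of the detection
        px1, py1 = max(cx - side // 2, 0), max(cy - side // 2, 0)
        px2, py2 = min(px1 + side, out.shape[1]), min(py1 + side, out.shape[0])
        if px2 > px1 and py2 > py1:
            out[py1:py2, px1:px2] = resize_nearest(patch, py2 - py1, px2 - px1)
    return out
```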
http://arxiv.org/abs/2312.16401v1
{ "authors": [ "Xianyi Chen", "Fazhan Liu", "Dong Jiang", "Kai Yan" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231227040944", "title": "Natural Adversarial Patch Generation Method Based on Latent Diffusion Model" }
Jones Wenzl projectors in Verma modules. Ryoga Matsumoto. We construct special idempotents in End_U_q(𝔰𝔩_2)(M(μ_1)⊗⋯⊗ M(μ_n)), analogous to the Jones Wenzl projector, where M(μ_i) is the Verma module with highest weight μ_i. § INTRODUCTION The Temperley Lieb algebra was introduced in <cit.> to solve problems in statistical physics. Wenzl later introduced special idempotents in the Temperley Lieb algebra, now called Jones Wenzl idempotents <cit.>. The Jones Wenzl idempotent plays an important role in topology: Murakami and Murakami constructed knot invariants using the Jones Wenzl idempotent in <cit.>, and these knot invariants extend the Jones polynomial obtained in <cit.>. It is known that the Temperley Lieb algebra is isomorphic to the endomorphism algebra of tensor powers of the 2-dimensional irreducible representation of U_q(𝔰𝔩_2) <cit.>. It is therefore important to consider idempotents in endomorphism algebras of tensor products of U_q(𝔰𝔩_2) representations. In <cit.>, the structure of the endomorphism algebra of tensor products of Verma modules over U_q(𝔰𝔩_2) is determined using a Howe-duality-like method. However, special idempotents in this endomorphism algebra analogous to the Jones Wenzl idempotent had not been constructed. In this article, we construct such idempotents in End_U_q(𝔰𝔩_2)(M(μ_1)⊗⋯⊗ M(μ_n)). More precisely, we obtain the following: there exists an element P_μ_1, …, μ_n∈End_U_q(𝔰𝔩_2)(M(μ_1)⊗⋯⊗ M(μ_n)) such that P_μ_1, …, μ_n^2 = P_μ_1, …, μ_n. §.§ Acknowledgements We would like to thank Yuji Terashima for valuable discussions. This work was supported by JST, the establishment of university fellowships towards the creation of science technology innovation, Grant Number JPMJFS2102. § REPRESENTATION OF U_q(𝔰𝔩_2) In this section, we define the quantum algebra U_q(𝔰𝔩_2) and its representations. The quantum group U_q(𝔰𝔩_2) of 𝔰𝔩_2 is the algebra over ℂ(q) generated by the elements K, K^-1, E, F subject to the relations KK^-1=1=K^-1K, KE=q^2EK, KF=q^-2FK, EF-FE=(K-K^-1)/(q-q^-1). The coproduct Δ: U_q(𝔰𝔩_2)→ U_q(𝔰𝔩_2)⊗ U_q(𝔰𝔩_2) is defined on the generators by Δ(K^±1):=K^±1⊗ K^±1, Δ(F):=F⊗ 1+K^-1⊗ F, Δ(E):=E⊗ K+1 ⊗ E. The (k+1)-dimensional irreducible representation V_k over ℂ(q) has a basis {v_0, v_1, ⋯ , v_k}, called an induced basis, on which the generators of U_q(𝔰𝔩_2) act by K· v_i:= q^k-2iv_i, E· v_i:= [i]v_i-1, F· v_i:= [k-i]v_i+1, where v_-1=v_k+1=0 and the quantum integer [k] is defined as [k]:=(q^k-q^-k)/(q-q^-1). Let k, l be non-negative integers with l ≤ k. We define [k]! and the quantum binomial coefficient [ k; l ] as follows: [k]! := [k][k-1]⋯ [1], [ k; l ] := [k]!/([l]![k-l]!). We have [ k+1; j ] = q^-k+j-1[ k; j-1 ] + q^j[ k; j ]. This follows by direct computation: q^-k+j-1[ k; j-1 ] + q^j[ k; j ] = [k]!/([j]![k+1-j]!)·(q^-k+j-1[j]+q^j[k+1-j]) = [k]!/([j]![k+1-j]!)·(q^-k+2j-1-q^-k-1+q^k+1-q^-k+2j-1)/(q-q^-1) = [k]!/([j]![k+1-j]!)·[k+1] = [ k+1; j ]. The Verma module M(μ) over ℂ(q,q^μ) has a basis {v_0, v_1, ⋯}, called an induced basis, on which the generators of U_q(𝔰𝔩_2) act by K· v_i:= q^μ-2iv_i, E· v_i:= [i]v_i-1, F· v_i:= [μ-i]v_i+1, where v_-1=0. Given modules M, M' over U_q(𝔰𝔩_2), using the action induced from the coproduct, the tensor product representation M ⊗ M' is defined by the following relations.
For all m ∈ M and m' ∈ M', we haveK^±1·(m⊗ m'):=(K^±1· m)⊗ (K^±1· m') F· (m⊗ m'):=(F· m)⊗ m'+(K^-1· m)⊗ (F· m') E· (m⊗ m'):=(E· m)⊗ (K· m')+m ⊗(E· m')§ ENDOMORPHISM ALGEBRAS OF V_1^⊗ N AND M(Μ_1)⊗⋯⊗ M(Μ_N)In this section, we introduce Temperley-Lieb algebra and Temperley-Lieb algebra of type B. Hereinafter, we simply denote v_i ⊗ v_j ∈ M ⊗ N by v_i,j where M and N are M(μ) or V_k respectively. The intertwining operators cap: ℂ(q) → V_1⊗ V_1 and cup: V_1⊗ V_1 →ℂ(q) over U_q𝔰𝔩_2 are defined as follows.cap(1)=v_0,1-q^-1v_1,0cup(v_0,0)=cup(v_1,1)=0,cup(v_0,1)=-q,cup(v_1,0)=1Temperley Lieb algebra TL_n is a ℂ(q)-algebra End_U_q(𝔰𝔩_2)(V_1^⊗ n). The generators of TL_n is {e_i} (i=1, ⋯, n-1) where e_i is defined as follows.e_i= Id^⊗ (i-1)⊗ (cup ∘ cap)⊗Id^⊗ (n-i-1) We let TL_μ_1,⋯,μ_n:= End_U_q(𝔰𝔩_2)(M(μ_1)⊗⋯⊗ M(μ_n)).Let the ground field of U_q(𝔰𝔩_2), M(μ)⊗ M(λ) and M(μ+λ) be ℂ(q,q^μ,q^λ). Set ℂ(q, q^μ, q^λ) linear maps E_μ, λ: M(μ) ⊗ M(λ) → M(μ+λ) and F_μ, λ: M(μ+λ) → M(μ) ⊗ M(λ) as follows.E_μ, λ(v_i,j) := q^i(λ-j)v_i+jF_μ, λ(v_k) := ∑_j=0^k q^-(k-j)(μ-j)[ k; j ]∏_i=0^j-1[μ-i] ∏_i=0^k-j-1[λ-i]/∏_i=0^k-1[μ+λ-i]v_j,k-jwhere ∏_i=0^-1:=1. Then we have E_μ, λ∈Hom_U_q(𝔰𝔩_2)(M(μ) ⊗ M(λ), M(μ+λ)) and F_μ,λ∈Hom_U_q(𝔰𝔩_2)(M(μ+λ), M(μ) ⊗ M(λ)). We must show the commutativity XE_μ,λ=E_μ, λX and XF_μ, λ=F_μ, λX where X=K, E, F. For i,j ≥ 0, the action below is defined as follows.K, E, F: M(μ)⊗ M(λ) → M(μ)⊗ M(λ) Kv_i,j = q^μ+λ-2(i+j)v_i,jEv_i,j = q^λ-2j[i]v_i-1,j+[j]v_i,j-1Fv_i,j = [μ-i]v_i+1,j+q^-μ+2i[λ-j]v_i,j+1First we will show that E_μ is an intertwining operator over U_q(𝔰𝔩_2). Now we prove the following diagram is commutative.M(μ)⊗ M(λ) [r]^E_μ,λ[d]^XM(μ+λ) [d]^XM(μ)⊗ M(λ) [r]^E_μ,λM(μ+λ)Consider the case X=K, we obtainKE_μ,λv_i,j= q^μ+λ-2(i+j)q^i(λ-j)v_i+jE_μ,λKv_i,j= q^i(λ-j)q^μ+λ-2(i+j)v_i+jThen we have KE_μ,λ=E_μ,λK. Consider the case X=E, we obtainEE_μ,λv_i,j = [i+j]q^i(λ-j)v_i+j-1E_μ,λEv_i,j = q^(i-1)(λ-j)q^λ-2j[i]v_i+j-1+q^i(λ-j+1)[j]v_i+j-1= q^i(λ-j)(q^-j[i]+q^i[j])v_i+j-1= q^i(λ-j)[i+j]v_i+j-1Then we have EE_μ,λ=E_μ,λE. Consider the case X=F, we obtainFE_μ,λv_i,j = [μ+λ-i-j]q^i(λ-j)v_i+j+1E_μ,λFv_i,j = q^(i+1)(λ-j)[μ-i]v_i+j+1+q^i(λ-j-1)q^-μ+2i[λ-j]v_i+j+1= q^i(λ-j)(q^λ-j[μ-i]+q^-μ+i[λ-j])v_i+j+1= q^i(λ-j)[μ+λ-i-j]v_i+j+1Then we have FE_μ,λ=E_μ,λF. From the computations above, E_μ,λ is an intertwining operator over U_q(𝔰𝔩_2). Next we will show that F_μ,λ is an intertwining operator over U_q(𝔰𝔩_2). Now we prove the following diagram is commutative.M(μ+λ) [r]^F_μ,λ[d]^XM(μ)⊗ M(λ) [d]^XM(μ+λ) [r]^F_μ,λM(μ)⊗ M(λ)Consider the case X=K, we obtainKF_μ,λv_k = q^μ+λ-2k∑_j=0^k q^-(k-j)(μ-j)[ k; j ]∏_i=0^j-1[μ-i] ∏_i=0^k-j-1[λ-i]/∏_i=0^k-1[μ+λ-i]v_j,k-jF_μ,λKv_k = ∑_j=0^k q^-(k-j)(μ-j)[ k; j ]∏_i=0^j-1[μ-i] ∏_i=0^k-j-1[λ-i]/∏_i=0^k-1[μ+λ-i]q^μ+λ-2kv_j,k-jThen we have KF_μ,λ=F_μ,λK. Consider the case X=E. If k=0, EF_μ,λv_k= F_μ,λEv_k is trivial. 
If k ≠ 0, we obtainEF_μ,λv_k = ∑_j=1^k q^-(k-j)(μ-j)[ k; j ]∏_i=0^j-1[μ-i] ∏_i=0^k-j-1[λ-i]/∏_i=0^k-1[μ+λ-i]q^λ-2(k-j)[j]v_j-1,k-j +∑_i=0^k-1q^-(k-j)(μ-j)[ k; j ]∏_i=0^j-1[μ-i] ∏_i=0^k-j-1[λ-i]/∏_i=0^k-1[μ+λ-i][k-j]v_j,k-1-j= ∑_j=0^k-1 q^-(k-1-j)(μ-j-1)[ k; j+1 ]∏_i=0^j[μ-i] ∏_i=0^k-j-2[λ-i]/∏_i=0^k-1[μ+λ-i]q^λ-2(k-1-j)[j+1]v_j,k-1-j +∑_i=0^k-1q^-(k-1-j)(μ-j)[ k-1; j ]∏_j=0^j-1[μ-i] ∏_i=0^k-j-2[λ-i]/∏_i=0^k-2[μ+λ-i]q^-μ+j[k][λ-k+j+1]/[μ+λ-k+1]v_j,k-1-j= ∑_j=0^k-1 q^-(k-1-j)(μ-j)[ k-1; j ]∏_i=0^j-1[μ-i] ∏_i=0^k-j-2[λ-i]/∏_i=0^k-2[μ+λ-i]q^λ-k+j+1[k][μ-j]/[μ+λ-k+1]v_j,k-1-j +∑_j=0^k-1q^-(k-1-j)(μ-j)[ k-1; j ]∏_i=0^j-1[μ-i] ∏_i=0^k-j-2[λ-i]/∏_i=0^k-2[μ+λ-i]q^-μ+j[k][λ-k+j+1]/[μ+λ-k+1]v_j,k-1-j= [k]∑_j=0^k-1 q^-(k-1-j)(μ-j)[ k-1; j ]∏_i=0^j-1[μ-i] ∏_i=0^k-j-2[λ-i]/∏_i=0^k-2[μ+λ-i]·q^λ-k+j+1[μ-j]+q^-μ+j[λ-k+j+1]/[μ+λ-k+1]v_j,k-1-j= [k]∑_j=0^k-1 q^-(k-1-j)(μ-j)[ k-1; j ]∏_i=0^j-1[μ-i] ∏_i=0^k-j-2[λ-i]/∏_i=0^k-2[μ+λ-i]v_j,k-1-j F_μ,λEv_i,j= [k]∑_j=0^k-1 q^-(k-1-j)(μ-j)[ k-1; j ]∏_i=0^j-1[μ-i] ∏_i=0^k-j-2[λ-i]/∏_i=0^k-2[μ+λ-i]v_j,k-1-jThen we have EF_μ,λ=F_μ,λE. Consider the case X=F, we obtainFF_μ,λv_k = ∑_j=0^k q^-(k-j)(μ-j)[ k; j ]∏_i=0^j-1[μ-i]∏_i=0^k-j-1[λ-i]/∏_i=0^k-1[μ+λ-i]Fv_j,k-j= ∑_j=0^k q^-(k-j)(μ-j)[ k; j ]∏_i=0^j-1[μ-i]∏_i=0^k-j-1[λ-i]/∏_i=0^k-1[μ+λ-i][μ-j]v_j+1,k-j +∑_j=0^k q^-(k-j)(μ-j)[ k; j ]∏_i=0^j-1[μ-i]∏_i=0^k-j-1[λ-i]/∏_i=0^k-1[μ+λ-i]q^-μ+2j[λ-k+j]v_j,k+1-j= ∑_j=1^k+1 q^-(k+1-j)(μ-j+1)[ k; j-1 ]∏_i=0^j-2[μ-i]∏_i=0^k-j[λ-i]/∏_i=0^k-1[μ+λ-i][μ-j+1]v_j,k+1-j +∑_j=0^k q^-(k+1-j)(μ-j)[ k; j ]∏_i=0^j-1[μ-i]∏_i=0^k-j[λ-i]/∏_i=0^k-1[μ+λ-i]q^j v_j,k+1-j= ∑_j=0^k+1 q^-(k+1-j)(μ-j)(q^-k-1+j[ k; j-1 ] +q^j[ k; j ])∏_i=0^j-1[μ-i]∏_i=0^k-j[λ-i]/∏_i=0^k-1[μ+λ-i]v_j,k+1-j= ∑_j=0^k+1 q^-(k+1-j)(μ-j)[ k+1; j ]∏_i=0^j-1[μ-i]∏_i=0^k-j[λ-i]/∏_i=0^k-1[μ+λ-i]v_j,k+1-j (∵Lemma <ref>) F_μ,λFv_k = [μ+λ-k]F_μ,λv_k+1= [μ+λ-k]∑_j=0^k+1 q^-(k+1-j)(μ-j)[ k+1; j ]∏_i=0^j-1[μ-i]∏_i=0^k-j[λ-i]/∏_i=0^k[μ+λ-i]v_j,k+1-j= ∑_j=0^k+1 q^-(k+1-j)(μ-j)[ k+1; j ]∏_i=0^j-1[μ-i]∏_i=0^k-j[λ-i]/∏_i=0^k-1[μ+λ-i]v_j,k+1-jThen we have FF_μ,λ=F_μ,λF. From the results above, F_μ,λ is an intertwining operator. § MAIN THEOREMIn this section, we define special idempotents in TL_μ_1,⋯,μ_n like the Jones Wenzl projector in TL_n. Hereinafter, let the ground field of U_q(𝔰𝔩_2) and M(μ_1)⊗⋯⊗ M(μ_n) be ℂ(q,q^μ_1,⋯,q^μ_n).Jones Wenzl projector P_n ∈ TL_n is defined as follows.P_1:=Id,P_n:=P_n-1+[n-1]/[n]P_n-1e_n-1P_n-1 We haveP_n^2 =P_nSee Proposition 2 in KauffmanLins.Put E_μ,λ;μ_1,⋯,μ_i:= E_μ,λ⊗Id_μ_1⊗⋯⊗Id_μ_i and F_μ,λ;μ_1,⋯,μ_i:= F_μ,λ⊗Id_μ_1⊗⋯⊗Id_μ_i. By induction on n>2, extended Jones Wenzl projectors P_μ_1,⋯,μ_n∈ TL_μ_1,⋯,μ_n are defined as follows.P_μ_1,μ_2:= F_μ_1,μ_2E_μ_1,μ_2,P_μ_1,⋯,μ_n:=F_μ_1,μ_2;μ_3,⋯,μ_nP_μ_1+μ_2,μ_3,⋯,μ_nE_μ_1,μ_2;μ_3,⋯,μ_nThe definition above is inspired by Definition 2.11 in RoseTubbenhauer. By Corollary 2.13 in RoseTubbenhauer, the Jones Wenzl projectors defined in RoseTubbenhauer coincide with P_n. We haveE_μ,λF_μ,λ= Id_μ+λLet k be a non-negative integer. Then we haveE_μ,λF_μ,λv_k = ∑_j=0^k q^-(k-j)(μ-j)[ k; j ]∏_i=0^j-1[μ-i] ∏_i=0^k-j-1[λ-i]/∏_i=0^k-1[μ+λ-i]q^j(λ-k+j)v_k = ∑_j=0^k q^j(μ+λ)-kμ[ k; j ]∏_i=0^j-1[μ-i] ∏_i=0^k-j-1[λ-i]/∏_i=0^k-1[μ+λ-i]v_kNow we prove E_μ,λF_μ,λv_k= v_k by induction on k. If k=0, it is trivial by the above. 
By the above calculation, it suffices to show that∏_i=0^k[μ+λ-i]=∑_j=0^k+1q^j(μ+λ)-(k+1)μ[ k+1; j ]∏_i=0^j-1[μ-i]∏_i=0^k-j[λ-i]By induction, we obtain∏_i=0^k[μ+λ-i] = [μ+λ-k]∏_i=0^k-1[μ+λ-i] = [μ+λ-k]∑_j=0^k q^j(μ+λ)-kμ[ k; j ]∏_i=0^j-1[μ-i]∏_i=0^k-j-1[λ-i](∵by induction) = ∑_j=0^k q^j(μ+λ)-kμ[ k; j ] q^λ-k+j∏_i=0^j[μ-i]∏_i=0^k-j-1[λ-i]+∑_j=0^k q^j(μ+λ)-kμ[ k; j ] q^-μ+j∏_i=0^j-1[μ-i]∏_i=0^k-j[λ-i](∵ [μ+λ-k]=q^λ-k+j[μ-j]+q^-μ+j[λ-k+j]) = ∑_j=1^k+1q^j(μ+λ)-(k+1)μ∏_i=0^j-1[μ-i]∏_i=0^k-j[λ-i]q^-k+j-1[ k; j-1 ] +∑_j=0^k q^j(μ+λ)-(k+1)μ∏_i=0^j-1[μ-i]∏_i=0^k-j[λ-i]q^j[ k; j ]= ∑_j=1^k+1q^j(μ+λ)-(k+1)μ[ k+1; j ]∏_i=0^j-1[μ-i]∏_i=0^k-j[λ-i](∵Lemma <ref>)Then the result follows. Extended Jones Wenzl projectors P_μ_1,…,μ_n∈ TL_μ_1,…,μ_n satisfy the following conditions.P_μ_1,…,μ_n^2= P_μ_1,…,μ_nWe prove it by induction on n>2. If n=2, from Lemma <ref>, we haveP_μ_1,μ_2^2 = F_μ_1,μ_2E_μ_1,μ_2F_μ_1,μ_2E_μ_1,μ_2= F_μ_1,μ_2E_μ_1,μ_2Then we obtain P_μ_1,μ_2^2=P_μ_1,μ_2. Suppose that P_μ_1,…,μ_n-1^2= P_μ_1,…,μ_n-1. Then we haveP_μ_1,…,μ_n^2 = F_μ_1,μ_2;μ_3,⋯,μ_nP_μ_1+μ_2,μ_3,⋯,μ_nE_μ_1,μ_2;μ_3,⋯,μ_nF_μ_1,μ_2;μ_3,⋯,μ_nP_μ_1+μ_2,μ_3,⋯,μ_nE_μ_1,μ_2;μ_3,⋯,μ_n= F_μ_1,μ_2;μ_3,⋯,μ_nP_μ_1+μ_2,μ_3,⋯,μ_n^2 E_μ_1,μ_2;μ_3,⋯,μ_n (∵Lemma <ref>) = F_μ_1,μ_2;μ_3,⋯,μ_nP_μ_1+μ_2,μ_3,⋯,μ_nE_μ_1,μ_2;μ_3,⋯,μ_n (∵By induction) = P_μ_1,⋯,μ_nThus the result follows. AndersenLehrerZhangarticle author = Andersen, Henning author = Lehrer, Gus author = Zhang, Ruibin, title = Cellularity of certain quantum endomorphism algebras, journal = Pacific Journal of Mathematics, volume = 279, pages = 11-35, year = 2013 MR3263166article AUTHOR = Cautis, Sabin author = Kamnitzer, Joel author = Morrison, Scott, TITLE = Webs and quantum skew Howe duality, JOURNAL = Math. Ann., FJOURNAL = Mathematische Annalen, VOLUME = 360, NUMBER = 1-2, PAGES = 351–390, YEAR = 2014, ISSN = 0025-5831Jonesarticle author = Vaughan F. R. Jones, title = A polynomial invariant for knots via von Neumann algebras, journal = Bulletin (New Series) of the American Mathematical Society, volume = 12, number = 1, pages = 103-111, publisher = American Mathematical Society, year = 1985, KauffmanLinsbook author = Louis H. Kauffman, author = S. Lins, title = Temperley-Lieb Recoupling Theory and Invariants of 3-Manifolds (AM-134), publisher = Princeton University Press, year = 1994 LacabanneTubbenhauerVazarticle author = Lacabanne, Abel author = Tubbenhauer, Daniel author = Vaz, Pedro, title = Verma Howe duality and LKB representations, year = 2022, note = preprint MurakamiMurakamiarticle author = Hitoshi Murakami author = Jun Murakami, title = The colored Jones polynomials and the simplicial volume of a knot, journal = Acta Mathematica, volume = 186, number = 1, pages = 85 - 104, publisher = Institut Mittag-Leffler, year = 2001, RoseTubbenhauerarticle author = Rose, David E. V. author = Tubbenhauer, Daniel, title = Symmetric Webs, Jones-Wenzl Recursions, and q-Howe Duality, journal = International Mathematics Research Notices, volume = 2016, number = 17, pages = 5249-5290, year = 2015, month = 10, TemperleyLiebarticle author = H. N. V. Temperley author = E. H. Lieb, title = Relations between the 'Percolation' and 'Colouring' Problem and other Graph-Theoretical Problems Associated with Regular Planar Lattices: Some Exact Results for the 'Percolation' Problem, journal = Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences, volume = 322, number = 1549, pages = 251–280, publisher = The Royal Society, year = 1971 Wenzlarticle author = H. 
Wenzl, title = On a sequence of projections, journal = C. R. Math. Rep. Acad. Sci. Canada, volume = 9, pages = 5-9, year = 1987 DEPARTMENT OF MATHEMATICS, TOHOKU UNIVERSITY, 6-3, AOBA, ARAMAKI-AZA, AOBA-KU, SENDAI, 980-8578, JAPAN Email Address:
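As an illustrative addendum (ours, not part of the paper): the key identity E_{μ,λ}F_{μ,λ} = Id_{μ+λ} proved above can be checked symbolically for small k by treating a = q^μ and b = q^λ as independent symbols, so that quantum integers such as [μ-i] become rational functions of q, a, b. A minimal SymPy sketch:

```python
# Symbolic sanity check (ours) of E_{mu,lambda} F_{mu,lambda} v_k = v_k for small k.
# The coefficient of v_k in E F v_k is
#   sum_j q^{j(mu+lambda) - k*mu} [k;j] prod[mu-i] prod[lambda-i] / prod[mu+lambda-i],
# exactly the sum appearing in the proof of the lemma.
import sympy as sp

q, a, b = sp.symbols('q a b')  # a = q^mu, b = q^lambda

def product(factors):
    out = sp.Integer(1)
    for f in factors:
        out *= f
    return out

def qint(x):
    # Quantum integer [m] written in terms of x = q^m: (x - 1/x) / (q - 1/q).
    return (x - 1 / x) / (q - 1 / q)

def qbinom(k, j):
    # [k; j] = [k][k-1]...[k-j+1] / [j]!
    return product(qint(q**(k - i)) for i in range(j)) / product(qint(q**(i + 1)) for i in range(j))

def EF_coeff(k):
    total = sp.Integer(0)
    for j in range(k + 1):
        term = a**(j - k) * b**j * qbinom(k, j)                   # q^{j(mu+lambda) - k*mu}
        term *= product(qint(a * q**(-i)) for i in range(j))      # [mu - i]
        term *= product(qint(b * q**(-i)) for i in range(k - j))  # [lambda - i]
        term /= product(qint(a * b * q**(-i)) for i in range(k))  # [mu + lambda - i]
        total += term
    return sp.cancel(sp.together(total))

for k in range(4):
    assert EF_coeff(k) == 1
print("E_{mu,lambda} F_{mu,lambda} = Id verified for k = 0, 1, 2, 3")
```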
http://arxiv.org/abs/2401.02442v1
{ "authors": [ "Ryoga Matsumoto" ], "categories": [ "math.RT", "math.QA" ], "primary_category": "math.RT", "published": "20231227035756", "title": "Jones Wenzl projectors in Verma modules" }
1] Kelsey A. Jackson (kaj22475@umd.edu); 1,2] Carl A. Miller (camiller@umd.edu); 1,3] Daochen Wang (wdaochen@gmail.com). [1] Joint Center for Quantum Information and Computer Science (QuICS), University of Maryland [2] Computer Security Division, National Institute of Standards and Technology (NIST) [3] Department of Computer Science, University of British Columbia. Evaluating the security of CRYSTALS-Dilithium in the quantum random oracle model. In the wake of recent progress on quantum computing hardware, the National Institute of Standards and Technology (NIST) is standardizing cryptographic protocols that are resistant to attacks by quantum adversaries. The primary digital signature scheme that NIST has chosen is CRYSTALS-Dilithium. The hardness of this scheme is based on the hardness of three computational problems: Module Learning with Errors (MLWE), Module Short Integer Solution (MSIS), and SelfTargetMSIS. MLWE and MSIS have been well-studied and are widely believed to be secure. However, SelfTargetMSIS is novel and, though classically as hard as MSIS, its quantum hardness is unclear. In this paper, we provide the first proof of the hardness of SelfTargetMSIS via a reduction from MLWE in the Quantum Random Oracle Model (QROM). Our proof uses recently developed techniques in quantum reprogramming and rewinding. A central part of our approach is a proof that a certain hash function, derived from the MSIS problem, is collapsing. From this approach, we deduce a new security proof for Dilithium under appropriate parameter settings. Compared to the only other rigorous security proof for a variant of Dilithium, Dilithium-QROM, our proof has the advantage of being applicable under the condition q ≡ 1 (mod 2n), where q denotes the modulus and n the dimension of the underlying algebraic ring. This condition is part of the original Dilithium proposal and is crucial for the efficient implementation of the scheme. We provide new secure parameter sets for Dilithium under the condition q ≡ 1 (mod 2n), finding that our public key sizes and signature sizes are about 2.5× to 2.8× larger than those of Dilithium for the same security levels. § INTRODUCTION Quantum computers are theoretically capable of breaking the underlying computational hardness assumptions of many existing cryptographic schemes. Therefore, it is vitally important to develop new cryptographic primitives and protocols that are resistant to quantum attacks. The goal of NIST's Post-Quantum Cryptography Standardization Project is to design a new generation of cryptographic schemes that are secure against quantum adversaries. In 2022, NIST selected three new digital signature schemes for standardization <cit.>: Falcon, SPHINCS+, and Dilithium. Of the three, CRYSTALS-Dilithium <cit.>, or Dilithium in shorthand, was identified as the primary choice for post-quantum digital signing. To practically implement post-quantum cryptography, users must be provided with not only assurance that a scheme is secure in a post-quantum setting, but also the means by which to judge parameter choices and thereby balance their own needs for security and efficiency. The goal of the current work is to provide rigorous assurance of the security of Dilithium as well as implementable parameter sets. A common model for the security of digital signatures is existential unforgeability against chosen message attacks, or EUF-CMA. In this setting, an adversary is allowed to make sequential queries to a signing oracle for the signature scheme, and afterwards the adversary attempts to forge a signature for a new message.
We work in the setting of strong existential unforgeability against chosen message attacks (SUF-CMA), wherein we must also guard against the possibility that an adversary could try to forge a new signature for one of the messages already signed by the oracle. (See <ref> for details.) Additionally, we utilize the quantum random oracle model (QROM) for hash functions. We recall that when a hash function H: X→ Y is used as a subroutine in a digital signature scheme, the random oracle model (ROM) assumes that one can replace each instance of the function H with a black box that accepts inputs from X and returns outputs in Y according to a uniformly randomly chosen function from X to Y. (This model is useful because random functions are easier to work with in theory than actual hash functions.) The random oracle model needs to be refined in the quantum setting because queries to the hash function can be made in superposition: for any quantum state of the form ∑_x ∈ X α_x |x⟩, where α_x ∈ ℂ for all x ∈ X, a quantum computer can efficiently prepare the superposed state ∑_x ∈ X α_x |x⟩|H(x)⟩. The quantum random oracle model (QROM) therefore assumes that each use of the hash function can be simulated by a black box that accepts a quantum state supported on X and returns a quantum state supported on X × Y (computed by a truly random function from X to Y) <cit.>. While no efficient and truly random functions actually exist, the QROM is generally trusted and it enables the application of a number of useful proof techniques. §.§ Known security results for Dilithium. Dilithium is based on arithmetic over the ring R_q ≔ ℤ_q[X]/(X^n+1), where q is an odd prime and n is a power of 2. Following the convention of other Dilithium literature, we generally leave the parameters q, n implicit. For any non-negative integer η, let S_η ⊆ R_q denote the set of all polynomials with coefficients from {-η, -η+1, …, η}. The security analysis for Dilithium in <cit.> is based on three computational problems. The first two are standard problems (<ref>) but the third problem is non-standard (<ref>). The first problem is the Module Learning With Errors (MLWE) problem. Assuming that a matrix A ∈ R_q^m×k and short vectors s_1 ∈ S_η^k and s_2 ∈ S_η^m are chosen uniformly at random, the MLWE problem is to distinguish the matrix-vector pair (A, t ≔ As_1 + s_2) from a uniformly random matrix-vector pair. Let m, k, η ∈ ℕ. The advantage of an algorithm 𝒜 for solving MLWE_m,k,η is defined as: Adv^MLWE_m,k,η(𝒜) ≔ | Pr[b=0 | A ← R_q^m×k, t ← R_q^m, b ← 𝒜(A, t)] - Pr[b=0 | A ← R_q^m×k, (s_1, s_2) ← S_η^k × S_η^m, t ≔ As_1 + s_2, b ← 𝒜(A, t)] |. Here, the notation 𝒜(x) denotes 𝒜 taking input x. We note that the MLWE problem is often phrased in other contexts with the short vectors s_1 and s_2 coming from a Gaussian, rather than a uniform, distribution. The use of a uniform distribution is one of the particular features of Dilithium. The second problem, MSIS, is concerned with finding short solutions to randomly chosen linear systems over R_q. Let m, k, γ ∈ ℕ. The advantage of an algorithm 𝒜 for solving MSIS_m,k,γ is defined as: Adv^MSIS_m,k,γ(𝒜) ≔ Pr[ [I_m|A]·y = 0 ∧ 0 < ‖y‖_∞ ≤ γ | A ← R_q^m×k, y ← 𝒜(A) ]. The third problem is a more complex variant of MSIS that incorporates a hash function H. Let τ, m, k, γ ∈ ℕ and H: {0,1}^* → B_τ, where B_τ ⊆ R_q is the set of polynomials with exactly τ coefficients in {-1,1} and all remaining coefficients zero. The advantage of an algorithm 𝒜 for solving SelfTargetMSIS_H,τ,m,k,γ is defined as[‖ denotes string concatenation.
𝒜^|H⟩ denotes 𝒜 with quantum query access to H – a formal definition can be found in <ref>.]: Adv^SelfTargetMSIS_H,τ,m,k,γ(𝒜) ≔ Pr[ H([I_m | A]·y ‖ M) = y_m+k ∧ ‖y‖_∞ ≤ γ | A ← R_q^m×k, (y, M) ← 𝒜^|H⟩(A) ]. The security guarantee for Dilithium is given in <cit.> by the inequality[Strictly speaking, there should be two other terms (a term measuring the security of the underlying pseudorandom function and 2^-α+1) on the right-hand side of <ref>. However, we ignore them in the introduction as it is easy to set parameters such that these terms are very small. We also mention that the original proof of this inequality uses a flawed analysis of Fiat-Shamir with aborts. The flaw was found and fixed in <cit.>.]: Adv^SUF-CMA_Dilithium(𝒜) ≤ Adv^MLWE_k,l,η(ℬ) + Adv^MSIS_k,l,ζ'(𝒞) + Adv^SelfTargetMSIS_H,τ,k,l+1,ζ(𝒟), where all terms on the right-hand side of the inequality depend on parameters that specify Dilithium, and SUF-CMA stands for strong unforgeability under chosen message attacks. The interpretation of <ref> is: if there exists a quantum algorithm 𝒜 that attacks the SUF-CMA-security of Dilithium, then there exist quantum algorithms ℬ, 𝒞, 𝒟 for MLWE, MSIS, and SelfTargetMSIS that have advantages satisfying <ref> and run in time comparable to 𝒜. <ref> implies that breaking the SUF-CMA security of Dilithium is at least as hard as solving one of the MLWE, MSIS, or SelfTargetMSIS problems. While MLWE and MSIS are known to be no harder than LWE and SIS, respectively, there are no known attacks taking advantage of their module structure, so it is generally believed that they are as hard as their unstructured counterparts <cit.>. In turn, LWE and SIS are at least as hard as the (Gap) Shortest Vector Problem, which is the underlying hard problem of lattice cryptography <cit.>. However, the final problem, SelfTargetMSIS, is novel and so its difficulty is an open question. The problem is known to be as classically hard as MSIS since there exists a reduction from MSIS to SelfTargetMSIS in the ROM <cit.>. The reduction uses the following “rewinding” argument. Any randomized algorithm can be specified by a deterministic circuit with auxiliary random bits. Therefore, given a randomized algorithm for SelfTargetMSIS, we can run its deterministic circuit with some randomly chosen bits to obtain one solution and then rewind and run it again using the same bits chosen from before, while at the same time reprogramming the random oracle at the query corresponding to the output of the first run, to obtain a second solution. Subtracting these two solutions to SelfTargetMSIS yields a solution to MSIS. However, the argument fails for the following reasons in the QROM (where a quantum algorithm can make queries in superposition to a quantum random oracle): * The randomness in a quantum algorithm includes the randomness of measurement outcomes. We cannot run a quantum algorithm twice and guarantee that the “random bits” will be the same in both runs because we cannot control measurement outcomes. More generally, we cannot rewind a quantum algorithm to a post-measurement state. * Since a quantum algorithm can make queries in superposition, it is no longer clear where to reprogram the random oracle. Currently, the only explicit rigorous proof of Dilithium's security based on conventional hardness assumptions <cit.> requires modifying the parameters such that q ≡ 5 (mod 8) and 2γ < √(q/2), where γ is a length upper bound on vectors corresponding to valid signatures. This ensures that all non-zero vectors in S_2γ are invertible, which equips Dilithium with a so-called “lossy mode”. This variant is called Dilithium-QROM. <cit.> then prove that a signature scheme with such a lossy mode is SUF-CMA-secure. However, the Dilithium specification <cit.> uses a value of q satisfying q ≡ 1 (mod 2n) (for n = 256), which is incompatible with the assumption that q ≡ 5 (mod 8).
The fact that q ≡ 1 (mod 2n) is central to claims about the speed of the algorithms in <cit.>: this condition implies that R_q is isomorphic to the direct product ring ℤ_q^×n (or ℤ_q^n in shorthand) via the Number Theoretic Transform, which allows for fast matrix multiplication over R_q. Therefore, it is highly desirable to find a security proof that works under the assumption that q ≡ 1 (mod 2n). Moreover, when q ≡ 5 (mod 8), the ring R_q is structurally different from when q ≡ 1 (mod 2n), since in the former case R_q is isomorphic to a product of two fields, F_q^n/2 × F_q^n/2 <cit.>. Therefore, it may be imprudent to translate any claims of security in the case q ≡ 5 (mod 8) to the case q ≡ 1 (mod 2n). §.§ Overview of main result. The main result of our paper is the first proof of the computational hardness of the SelfTargetMSIS problem, presented in <ref>. This hardness result implies a new security proof for Dilithium which, unlike the previous proof in <cit.>, applies to the case q ≡ 1 (mod 2n). Specifically, we reduce MLWE to SelfTargetMSIS. By <ref>, our result implies that the security of Dilithium (with parameters that are not too far from the original parameters) can be based on the hardness of MLWE and MSIS. Let m, k, τ, γ, η ∈ ℕ. Suppose q ≥ 16, q ≡ 1 (mod 2n), and 2γηn(m+k) < q/32. If there exists an efficient quantum algorithm 𝒜 that solves SelfTargetMSIS_H,τ,m,k,γ with advantage ϵ, under the assumption that H is a random oracle, then there exists an efficient quantum algorithm for solving MLWE_m+k,m,η with advantage at least Ω(ϵ^2/Q^4). Here, Q denotes the number of quantum queries 𝒜 makes to H. We now give a high-level overview of the proof. The first step is to define two experiments: the chosen-coordinate binding experiment CCB and the collapsing experiment Collapse. These experiments are interactive protocols between a verifier and a prover. The protocols end with the verifier outputting a bit b. If b=1, the prover is said to win the experiment. The reduction then proceeds in three steps: (i) reduce winning CCB to solving SelfTargetMSIS, (ii) reduce winning Collapse to winning CCB, and (iii) reduce solving MLWE to winning Collapse. Combining these steps together gives a reduction from MLWE to SelfTargetMSIS. The reduction can be illustrated as SelfTargetMSIS ⟵(i) CCB ⟵(ii) Collapse ⟵(iii) MLWE, where the left arrow means “reduces to”. Step (i): SelfTargetMSIS ⟵ CCB. In the CCB experiment, the prover is first given a uniformly random A ∈ R_q^m×l which it uses to send the verifier some z ∈ R_q^m, the verifier then sends the prover a challenge c chosen uniformly at random from B_τ, and finally the prover sends the verifier a response y ∈ R_q^l. The prover wins if Ay = z, ‖y‖_∞ ≤ γ, and the last coordinate of y is c. We directly apply the main result of <cit.> to reduce winning CCB when l=m+k to solving SelfTargetMSIS_H,τ,m,k,γ when H is a random oracle. In more detail, the result implies that an efficient algorithm that solves SelfTargetMSIS using Q queries with advantage ϵ can be used to construct another efficient algorithm that wins CCB with probability at least Ω(ϵ/Q^2). Step (ii): CCB ⟵ Collapse. In the Collapse experiment, the prover is first given a uniformly random A ∈ R_q^m×l which it uses to send the verifier some z ∈ R_q^m together with a quantum state that must be supported only on y ∈ R_q^l such that Ay = z, ‖y‖_∞ ≤ γ. Then, the verifier samples a uniformly random bit b. If b=1, the verifier measures the quantum state in the computational basis; otherwise, it does nothing. The verifier then returns the quantum state to the prover. The prover responds by sending a bit b' to the verifier and wins if b'=b. The advantage of the prover is 2p-1 where p is its winning probability. By using techniques in <cit.>, we reduce winning Collapse to winning CCB.
More specifically, we show that an efficient algorithm that wins CCB with advantage ϵ can be used to construct another efficient algorithm that wins Collapse with advantage at least ϵ(ϵ - 1/|B_τ|), which is roughly ϵ^2 since 1/|B_τ| is very small for the values of τ we consider. We generalize techniques in <cit.> to work for challenge sets of size greater than 2, which is necessary since the challenge set in the CCB experiment, B_τ, generally has size greater than 2. The key idea of first applying the quantum algorithm for winning CCB to the uniform superposition of all challenges remains the same. Step (iii): Collapse ⟵ MLWE. We build on techniques in <cit.> to reduce solving MLWE to winning Collapse. More specifically, we show that an efficient algorithm that wins Collapse with advantage ϵ can be used to construct another efficient algorithm that solves MLWE_l,m,η with advantage at least ϵ/4. Given a quantum state supported on y ∈ R_q^l with Ay = z and ‖y‖_∞ ≤ γ, as promised in the Collapse experiment, <cit.> considers the following two measurements. Sample b ∈ R_q^l from one of the two distributions defined in the MLWE problem (see <ref>), compute a rounded version of b·y in a separate register, and measure that register. When n=1, <cit.> shows that the effect of the measurement in one case is close to the computational basis measurement and in the other case is close to doing nothing. Therefore, an algorithm for winning Collapse can be used to solve MLWE. Our work extends <cit.> to arbitrary n provided q ≡ 1 (mod 2n). The extension relies on the fact that each coefficient of b·Δ, where 0 ≠ Δ ∈ R_q and b is chosen uniformly at random from R_q, is uniformly random in ℤ_q. (This is despite the fact that b·Δ is generally not uniformly random in R_q.) We establish this fact using the explicit form of the isomorphism between R_q and ℤ_q^n when q ≡ 1 (mod 2n). Finally, in <ref>, we propose explicit sets of parameters using q ≈ 3×10^13 and n=256, such that q ≡ 1 (mod 2n). These sets of parameters achieve different levels of security. We compare our sets of parameters with the sets proposed by the Dilithium specifications <cit.> and the Dilithium-QROM construction of <cit.>. Our parameter sets lead to larger public key and signature sizes for the same security levels. The advantage of our parameter sets is that they provably (in contrast to the parameters proposed by the Dilithium specifications) endow Dilithium with SUF-CMA-security for a q satisfying q ≡ 1 (mod 2n) (in contrast to the parameters proposed for Dilithium-QROM). § PRELIMINARIES ℕ denotes the set of positive integers. For k ∈ ℕ, [k] denotes the set {1,…,k}. An alphabet refers to a finite non-empty set. Given an alphabet S, the notation s ← S denotes selecting an element s uniformly at random from S. Given two alphabets A and B, the notation B^A denotes the set of functions from A to B. We write the concatenation of arbitrary strings a, b as a‖b. Given matrices A_1,…,A_n of the same height, [A_1 | A_2 | ... | A_n] denotes the matrix with the A_i placed side by side. log refers to the base-2 logarithm. We always reserve the symbol q for an odd prime and n for a positive integer that is a power of 2. R_q denotes the ring ℤ_q[X]/(X^n+1) (following the convention of other Dilithium literature, we leave the n-dependence implicit). For k ∈ ℕ, a primitive kth root of unity in ℤ_q is an element x ∈ ℤ_q such that x^k = 1 and x^j ≠ 1 for all j ∈ [k-1]; such elements exist if and only if q ≡ 1 (mod k). Given r ∈ ℤ_q, we define r mod^± q to be the unique element r' ∈ ℤ such that -(q-1)/2 ≤ r' ≤ (q-1)/2 and r' ≡ r (mod q). For any r = a_0 + a_1X + … + a_n-1X^n-1 ∈ R_q, we define r_i ≔ a_i mod^± q for all i ∈ {0,1,…,n-1} and ‖r‖_∞ ≔ max_i |r_i|. For r ∈ R_q^m, we define ‖r‖_∞ ≔ max_i∈[m] ‖r_i‖_∞. For η ∈ ℕ, S_η denotes the set {r ∈ R_q | ‖r‖_∞ ≤ η}.
For τ∈ℕ,denotes the set {r ∈ R_q |r = 1, r = √(τ)} and so = 2^τnτ. §.§ Quantum computation A (quantum) state, or density matrix, ρ on ℂ^d is a positive semi-definite matrix in ℂ^d× d with trace 1. A pure state is a state of rank 1. Since a pure state can be uniquely written as ψ where |ψ⟩∈ℂ^d and ⟨ψ||ψ⟩^†, we usually refer to a pure state by just |ψ⟩. A (projective) measurement is given by a set = {P_1,…,P_k}⊆ℂ^d× d such that ∑_i P_i = 1, and ∀ i,j ∈ [k], P_i = P_i^† and P_iP_j = δ_i,jP_i. The effect of performing such a measurement on ρ is to produce the density matrix ∑_i=1^k P_iρ P_i. A register is either (i) an alphabet Σ or (ii) an m-tuple X = (Y_1,…,Y_m) where m∈ℕ and Y_1,…,Y_m are alphabets.* The size of the register is Σ, a density matrix on the register refers to a density matrix on ℂ^Σ, and the computational basis measurement on the register refers to the measurement {x| x ∈Σ}, where |x⟩ denotes the vector in ℂ^Σ≅ℂ^Σ that is 1 in the xth position and zero elsewhere.* The size of the register is Y_1×…×Y_m, a density matrix on the register refers to a density matrix on ℂ^Y_1⊗…⊗ℂ^Y_m, and the computational basis measurement on the register refers to the measurement {y_1⊗…⊗y_m| y_1∈ Y_1,…, y_m ∈ Y_m}. A quantum algorithmis specified by a register X = (Y_1,…,Y_m) where Y_i = 2 for all i and a sequence of elementary gates, i.e., 2^m× 2^m unitary matrices that are of the formS[ 1 0; 0 i ],H 1/√(2)[11;1 -1 ], orCNOT[ 1 0; 0 0 ]⊗[ 1 0; 0 1 ] + [ 0 0; 0 1 ]⊗[ 0 1; 1 0 ],tensored with 2× 2 identity matrices.[When we later consider a quantum algorithm on a register of size d ∈ℕ, we mean a quantum algorithm on a register (Y_1,…, Y_m) where Y_i = 2 for all i and m is the smallest integer such that 2^m ≥ d.] The unitary matrix U associated withis the product of its elementary gates in sequence. The time complexity of , (), is its number of elementary gates. To perform a computation given an input x∈{0,1}^k where k≤ m,applies U to the starting state |ψ_0⟩|x_1+1⟩⊗…⊗|x_k+1⟩⊗|1⟩^⊗ (m-k) and measures all registers in the computational basis. We also need the definition of a quantum query algorithm. Let t∈ℕ. A quantum query algorithmusing t queries is specified by registers X,Y,Z and a sequence of t+1 quantum algorithms _0,_1,…,_t, each with register (X,Y,Z). The time complexity of , (), is t+∑_i=0^t(_i). Let U_i denote the unitary associated with _i, γY, and ϕ Y →ℤ_γ be a bijection. Given HX→ Y, let O^H denote the unitary matrix defined by O^H|x⟩|y⟩|z⟩ = |x⟩|ϕ^-1(ϕ(y) + ϕ(H(x)))⟩|z⟩ for all (x,y,z)∈ X× Y× Z. Then: * ^|H⟩ denotes the algorithm with register (X,Y,Z) that computes as follows. Apply U_0 to the starting state |ψ_0⟩. Then, for each i = 1,…,t in sequence, apply O^H then U_i. Finally, measure all registers in the computational basis. * ^H denotes the algorithm with register (X,Y,Z) that computes as follows. Apply U_0 to the starting state |ψ_0⟩. Then, for each i = 1,…,t in sequence, measure register X in the computational basis and apply O^H then U_i. Finally, measure all registers in the computational basis. In the definitions of ^|H⟩ and ^H, we have described what it means for a quantum algorithm to make quantum and classical queries to a function H, respectively. Under this description, we can naturally define quantum query algorithms that make classical queries to one function and quantum queries to another. Such algorithms are relevant in the security definition ofas described in the next subsection. 
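As a concrete aside (ours, not the paper's): the challenge set B_τ defined earlier in these preliminaries, consisting of polynomials with exactly τ coefficients in {-1,1} and all remaining coefficients zero, is easy to sample from and to count. A minimal Python sketch follows; this is a plain uniform sampler written for clarity, not the SampleInBall routine of the Dilithium specification.

```python
# Minimal sketch (ours): sampling from B_tau and computing log2 |B_tau|,
# where |B_tau| = 2^tau * C(n, tau) as stated above.
import math
import random

def sample_B_tau(n: int, tau: int) -> list:
    coeffs = [0] * n
    for pos in random.sample(range(n), tau):   # tau distinct coefficient positions
        coeffs[pos] = random.choice((-1, 1))   # each set to +1 or -1
    return coeffs

def log2_size_B_tau(n: int, tau: int) -> float:
    return tau + math.log2(math.comb(n, tau))

# Example values of tau for n = 256 (illustrative only):
for tau in (39, 49, 60):
    print(tau, round(log2_size_B_tau(256, tau), 1))
```

Since 1/|B_τ| enters the bounds of later sections, the point of the computation is that |B_τ| is astronomically large for any reasonable τ, so terms like 1/|B_τ| are negligible.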
§.§ Digital signature schemes Letbe common system parameters shared by all participants.A digital signature scheme is defined by a triple of randomized algorithms = (, , ) such that * The key generation algorithm () outputs a public-key, secret-key pair (pk,sk) such that pk defines the message set .* The signing algorithm (sk,m), where m∈, outputs a signature σ.* The verification algorithm (pk,m,σ) outputs a single bit {0,1}. We sayhas correctness error γ≥ 0 if for all (pk,sk) in the support of () and all m∈, [(pk, m, σ) = 0 |σ←(sk, m)] ≤γ. Let = (, , ) be a signature scheme. Letbe a quantum query algorithm. Then _^()[(pk, m, σ) = 1,m ∉| (pk, sk) ←(), (m, σ)←^(sk, ·)(pk)],_^()[(pk, m, σ) = 1,(m, σ)∉| (pk, sk) ←(), (m, σ)←^(sk, ·)(pk)],whereis the set of queries made byto (sk, ·) and is the set of query-response pairssent to and received from (sk,·).Whenis a function of λ∈ℕ, we say thatis (s)-secure if for every -time quantum query algorithm , we have _^(s)() ≤.In this paper, we use the definition of thesignature scheme as specified in <cit.>. In the concrete parameters section, <ref>, we adopt the same notation as in <cit.>. The definition ofinvolves a function H{0,1}^*→ that is classically accessible by itsandalgorithms. In the definitions ofandsecurity of , we assume that the quantum algorithmhas classical query access to (sk,·) and quantum query access to H. Our proof of 's security will assume that H can be modeled by a random oracle.§.§ Cryptographic problems and experimentsWe now give the formal definitions of the chosen-coordinate binding and collapsing experiments mentioned in the introduction. More general versions of these definitions can be found in, e.g., <cit.>. First, we define a “plain” version of , where the input matrix is not given in Hermite Normal Form. First reducingfromwill be convenient later on. Let τ, m,l,γ∈ℕ and H {0,1}^* → B_τ. The advantage of solving _H,τ,m,l,γ with a quantum query algorithmfor message M ∈{0, 1}^* is defined as^_H,τ,m,l,γ() [H( A y M) = y_l ∧y≤γ| A← R_q^m× l,(y, M) ←^|H⟩(A) ]. Let τ, m,l,γ∈ℕ. The advantage of a quantum algorithm = (_1,_2) for winning _τ,m,l,γ, denoted ^_τ,m,k,γ(), is defined as the probability that the experiment below outputs 1. 0.8_τ,m,l,γ.* Sample A ← R_q^m× l.* (z,T) ←_1(A), where z∈ R_q^m and T is an arbitrary register.* Sample c←. * y←_2(T,c), where y∈ R_q^l.* Output 1 if A y = z, y≤γ, and y_l = c. When τ, m, l, γ are functions of λ∈ℕ, we say that thehash function is chosen-coordinate binding (CCB) if for every -time quantum algorithm , ^_τ, m, l, γ() ≤ 1/B_τ +.Let m,l,γ∈ℕ. The advantage of a quantum algorithm = (_1,_2) for winning _m,l,γ, denoted ^_m,l,γ, is defined as 2p-1 where p is the probability the experiment below outputs 1. 0.8_m,l,γ.* Sample A ← R_q^m× l.* (Y, Z, T) ←_1(A), where Y is a register on R_q^l, Z is a register on R_q^m, and T is an arbitrary register.* Sample b ←{0,1}. If b=1, measure Y in the computational basis.* b'←_2(Y,Z,T).* Output 1 if b'=b. We sayis valid if the state on the register (Y,Z) output by _1 in step 2 is supported on elements (y,z)∈ R_q^l× R_q^m such that A y = z and y≤γ. When m,l,γ are functions of λ∈ℕ, we say that thehash function is collapsing if for every -time quantum algorithm , ^_m, l, γ() ≤ 1/2 +. § SECURITY PROOF FORThe main result of this subsection is the following theorem which follows from <ref>.Let m,k,τ, γ,η∈ℕ. Suppose q≥ 16, q = 12n, and 2γη n (m+k) < q/32. 
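Before proceeding to the security proof, note that the verifier's final check in the CCB experiment is entirely classical. The following minimal sketch (ours, with toy parameters) implements arithmetic in R_q = ℤ_q[X]/(X^n+1) by negacyclic convolution and evaluates the three win conditions Ay = z, ‖y‖_∞ ≤ γ, and y_l = c.

```python
# Minimal sketch (ours) of the classical verification at the end of the CCB
# experiment. Ring elements are length-n coefficient lists; q, n are toy values.
q, n = 257, 8  # toy parameters; the actual scheme uses much larger q and n = 256

def ring_mul(f, g):
    # Multiplication in Z_q[X]/(X^n + 1): negacyclic convolution (X^n = -1).
    out = [0] * n
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            if i + j < n:
                out[i + j] = (out[i + j] + fi * gj) % q
            else:
                out[i + j - n] = (out[i + j - n] - fi * gj) % q
    return out

def ring_add(f, g):
    return [(x + y) % q for x, y in zip(f, g)]

def matvec(A, y):
    # A: m x l matrix of ring elements; y: length-l vector of ring elements.
    rows = []
    for i in range(len(A)):
        acc = [0] * n
        for j in range(len(y)):
            acc = ring_add(acc, ring_mul(A[i][j], y[j]))
        rows.append(acc)
    return rows

def centered(c):
    # Representative of c mod q in [-(q-1)/2, (q-1)/2].
    c %= q
    return c - q if c > (q - 1) // 2 else c

def inf_norm(y):
    return max(abs(centered(c)) for r in y for c in r)

def ccb_wins(A, z, y, c, gamma):
    reduce_vec = lambda v: [[x % q for x in r] for r in v]
    return (matvec(A, y) == reduce_vec(z)
            and inf_norm(y) <= gamma
            and [x % q for x in y[-1]] == [x % q for x in c])
```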
Suppose that there exists a quantum query algorithmfor solving _H,τ,m,k,γ using Q queries and expected advantage ϵ over uniformly random H{0,1}^* →. Then, for all w ∈ℕ, there exists a quantum algorithm ℬ that solves _m+k,m,η with advantage at leastϵ-nq^-k/4(2Q+1)^2(ϵ-nq^-k/(2Q+1)^2 - 1/) -1/41/3^w.Moreover, () ≤() + (log, w,n,log q,m,k). Assuming that the choice of parameters as functions of the security parameter λ is such that nq^-k =, 1/ =, and w =, <ref> shows that the advantage ofis roughly Ω(ϵ^2/Q^4).The proof of <ref> proceeds by the following sequence of reductions, which we have labeled by the number of the section in which they are proven:<ref>⟵<ref>⟵<ref>⟵<ref>⟵.First, we establish some properties of R_q that will be used in <ref>. §.§ Properties of RqSuppose q = 12n. Let w be a primitive (2n)-th root of unity in _q. Then for all m∈ such that 0≠m<n, the following equation holds in _q: ∑_j=0^n-1 w^2mj = 0. Consider the following equation in _q:(1-w^2m)·∑_j=0^n-1 w^2mj = 1-w^2mn = 0,where the first equality uses a telescoping sum and the second uses w^2n=1. But 1-w^2m≠ 0 since 0≠m<n and w is a primitive (2n)-th root of unity in _q. Therefore, since _q is an integral domain when q is prime, ∑_j=0^n-1 w^2mj = 0 as required.Suppose q = 12n. Then, R_q ≅_q^n as algebras over _q.[To be clear, the algebra _q^n over _q refers to the set _q^n equipped with component-wise addition and multiplication, and scalar multiplication defined by α· (c_0,…,c_n-1)(α c_0,…, α c_n-1), where α∈_q and (c_0,…,c_n-1)∈_q^n.] For q prime, the multiplicative group _q^* of non-zero elements in _q is cyclic. Let g be a generator of _q^*. Let w g^(q-1)/(2n), which is well-defined since q=12n. Define the mapping ϕ R_q →_q^n by:ϕ(p(x)) = [ 1 w … w^n-1; 1 w^3 …w^3(n-1); ⋮ ⋱ ⋮; 1w^(2n-1) … w^(2n-1)(n-1) ][ a_0; a_1; ⋮; a_n-1 ],where p(x)a_0 + a_1x + … + a_n-1x^n-1. It is clear that ϕ is a linear map. To see that ϕ is homomorphic with respect to multiplication, observe that for any p̃(x)∈_q[x] such that p(x) = p̃(x)(x^n+1), we haveϕ(p(x)) = (p̃(w^1),p̃(w^3),…, p̃(w^(2n-1))),since (w^2k-1)^n + 1 = 0 in _q for all k ∈ [n].To see that ϕ is bijective, observe its explicit inverse ϕ'_q^n → R_q, defined by ϕ'(c_0,…,c_n-1) =a_0 + a_1 x + … + a_n-1x^n-1, where [ a_0; a_1; ⋮; a_n-1 ]n^-1[11…1; w^-1 w^-3…w^-(2n-1);⋮ ⋱⋮; w^-(n-1)w^-3(n-1)… w^-(2n-1)(n-1) ][ c_0; c_1; ⋮; c_n-1 ]and n^-1 denotes the multiplicative inverse of n in ℤ_q, which exists since q = 1 2nn < q.Since w is a primitive (2n)-th root of unity in _q, <ref> implies that the matrices corresponding to ϕ and ϕ' multiply to the identity in _q. Therefore, ϕ' is the inverse of ϕ.§.§ Reduction from toSuppose q = 12n. Let m,k,γ,τ∈ℕ and H {0,1}^* → B_τ. Suppose that there exists a quantum query algorithmusing Q queries that solves _H,τ,m,k,γ with advantage ϵ, then there exists a quantum query algorithmusing Q queries for solving _H,τ,m,m+k,γ with advantage at least ϵ - n/q^k. Moreover, () ≤() + O(nlog (q) · mk min(m,k)).The probability that a uniformly random B ←_q^m× (m+k)has row-echelon form [I_m|B'] (i.e., rank m) is at least (1-1/q^k). Therefore, by <ref>, the probability that a uniformly random A ← R_q^m× (m+k) does not have row-echelon form [I_m|A'] is at most 1-(1-1/q^k)^n ≤ n/q^k. When A has row-echelon form [I_m|A],first performs row reduction and then runs . Since the time to perform row reduction on A is O(nlog (q) · mk min(m,k)), the proposition follows.§.§ Reduction from toLet S, U, C, R be alphabets, V S× U× C× R →{0,1}, and = (_1,_2) be a quantum algorithm. 
We define the Σ-experiment by: 0.8 Σ-.* s← S.* (u, T)←_1(s), where u∈ U and T is an arbitrary register.* c ← C.* r ←_2(T,c).* Output 1 if V(s,u,c,r) = 1. The advantage offor winning the Σ-experiment is the probability of the experiment outputting 1.In this subsection, we use the following theorem from <cit.>. Letbe a quantum query algorithm using Q queriesthat takes input s∈ S and outputs u∈ U and r∈ R. Then, there exists a two-stage quantum algorithm = (_1,_2) (not using any queries) such that the advantage ofin the Σ-experiment is at least1/(2Q+1)^2[ V(s,u,H(u),r) | H ← C^U,s ← S,(u,r) ←^|H⟩(s) ].Moreover, (_1) + (_2) ≤() + Q Here, f ∈ C^U indicates a function f: U → C. We remark that in the original statement of the theorem,(_1)+ (_2) is upper bounded by ()+(Q, log(U, log(C)). The second term accounts for the cost of instantiating Q queries to a 2(Q+1)-wise independent hash function family from U to C. By the well-known Vandermonde matrix method (see, e.g., <cit.>), this cost can be upper bounded by O(Q^2 ·log(U)·log(C)). However, we follow the convention in <cit.> and equate this cost to Q under the fair assumption that , like , can also query a random oracle at unit cost.Let m,l,γ,τ∈ℕ. Let H{0,1}^* →. Suppose there exists a quantum query algorithmfor solving _H,τ,m,l,γ using Q queries with expected advantage ϵ over uniformly random H. Then there exists a quantum algorithm = (_1,_2) for winning _τ,m,l,γ with advantage at least ϵ/(2Q+1)^2. Moreover (_1) + (_2) ≤() + Q.The quantum query algorithmfor _H,τ,m,l,γ takes input A and outputs (y,M). So there exists another quantum query algorithm ' using Q queries that outputs ((A yM),y).The first part of the proposition follows from applying <ref> to ' with the following parameter settings which make the Σ-experiment identical to the _τ,m,l,γ experiment* Set S=R_q^m× l, U to be the query space of ', C =, and R = R_q^l.* Set V R_q^m× l× U ×× R_q^l →{0,1} byV(A,u,c,y) = [z = A y,y≤γ,y_l = c],where u∈{0,1}^* is parsed as u = (zM) with z ∈ R_q^m and M ∈{0,1}^*. §.§ Reduction from toIn this subsection, we will use the following lemma, which can be found as <cit.>. Let P,Q be projectors in ℂ^d× d and ρ be a density matrix in ℂ^d such that ρ Q = ρ. Then (QPρ P) ≥(Pρ)^2. The following proposition is similar to <cit.> and <cit.> except the size of the challenge set in theexperiment (in step 3 of <ref>) is not restricted to being 2. Let m,l,γ,τ∈ℕ.Suppose that there exists a quantum algorithm = (_1,_2) that succeeds in _τ,m,l,γ with advantage ϵ, then there exists a valid quantum algorithm = (_1,_2) that succeeds in _m,l,γ with advantage at least ϵ(ϵ - 1/). Moreover, (_1) ≤(_1) + (_2) + O(mllog(q)log()) and (_2) ≤(_2) + O(log()).We assume without loss of generality (wlog) that the arbitrary register in step 2 of the _τ,m,l,γ experiment (<ref>) is of the form (Y,T'), where Y is a register on R_q^l and T' is an arbitraryregister. We assume wlog that _1 prepares a state |ϕ⟩ on register (Y,Z,T'), where Z is a register on R_q^m, and measures Z in the computational basis to produce the z in step 2 of the _τ,m,l,γ experiment. We also assume wlog that _2 actson its input register (Y,T',C), where C is a register onthat contains the c from step 3 of the _τ,m,l,γ experiment, as follows: * Apply a unitary U of the form ∑_r∈ U_r⊗r on (Y, T', C).* Measure Y in the computational basis. We proceed to construct = (_1,_2) for the _m,l,γ experiment (<ref>). 
We first construct _1, given input A∈ R_q^m× l, as follows: * Run _1(A) to prepare state |ϕ⟩ on register (Y,Z,T').* Prepare state |ψ⟩^-1/2∑_r∈|r⟩ on register C in time O(log()). The current state on register (Y,Z,T',C) is σϕ⊗ψ. Apply U on register (Y,T',C) and then measure register (Y, Z, T', C) with the projective measurement {Π, 1 - Π}, where Π is defined byΠ∑_r∈ ∑_(y,z) ∈ R_q^l × R_q^my≤γ,Ay=z,y_l = ry,z⊗ 1_T'⊗r.This measurement can be implemented by computing a bit indicating whether the constraints defining Π are satisfied into a separate register and then measuring that register, which takes time O(mllog(q) + log()).* Let B be a bit register. If Π is measured, set the bit stored in B to 1. If (1-Π) is measured, replace the state on register (Y,Z) with |0^l⟩⊗|0^m⟩, set the bit stored in B to 0.Then output the register (Y, Z, T', C, B). Let T(T', C, B). We construct _2, given input register (Y,Z,T), as follows: * If B contains 0, output a uniformly random bit b' ∈{0,1}.* Else apply U^† on register (Y, T', C). Then measure C with the projective measurement {ψ, 1 - ψ} using (the inverse of) the preparation circuit for |ψ⟩ in time O(log(). If the outcome is ψ, output 0; else output 1. It is clear thatis valid by definition. Moreover,(_1)≤(_1) + (_2) + O(mllog(q)log()),(_2)≤(_2) + O(log()). We proceed to lower bound the success probability of . We analyze the probabilities of the following disjoint cases corresponding tobeing successful.* Case 1: In this case, 1-Π is measured and b'=b. The probability that 1-Π is measured is (1-ϵ). Conditioned on 1-Π being measured, b' is a uniformly random bit so the probability b'=b is 1/2. Therefore, the overall probability of this case is (1-ϵ)/2.* Case 2: In this case, Π is measured, b=1, and then 1-ψ is measured.The probability that Π is measured is ϵ and the probability that b=1 is 1/2. We now condition on these two events happening. Since b=1, the state of register C in the input to _2 is a mixture of states of the form r where r∈. This is because b=1 means that register Y is measured in the computational basis and conditioned on Π being measured, the C register is also measured in the computational basis (see the form of Π in <ref>). Therefore, the probability of _2 measuring ψ is 1/. Thus, the overall probability of this case is ϵ· (1/2) · (1- 1/).* Case 3: In this case, Π is measured, b=0, and then ψ is measured.The probability that b=0 is 1/2. Conditioned on b=0,<ref>, applied with projectors ψ and U^†Π U and state σ, shows that the probability of measuring Π and then ψ is least ϵ^2. Therefore, the overall probability of this case is at least ϵ^2/2. Summing up the probabilities of the above cases, we see that the success probability ofis at least1-ϵ/2 + ϵ/2(1 - 1/) + ϵ^2/2 = 1/2 + ϵ/2(ϵ - 1/).Therefore, the advantage ofis at least ϵ(ϵ - 1/), as required.§.§ Reduction from toThe proof structure of the main result of this subsection, <ref>, follows <cit.>. We need to modify a number of aspects of their proof since it applies to thehash function whereas here we consider its module variant, i.e., thehash function.We will use a rounding function ·_q →{0,1,…,t-1}, where t∈ℕ, that is defined as follows. For j ∈{0,1,…, t-1}, defineI_j {jq/t,jq/t+1,…, jq/t + q/t-1} ifj ∈{0,1,…, t-2},{(t-1)q/t,(t-1)q/t+1,…, q-1} ifj=t-1.(Note that I_j contains exactly q/t elements for j∈{0,1,…,t-2} and at least q/t elements for j=t-1 with the constraint that q/t ≤I_t-1≤ q/t + t - 1.) Then, for a∈_q, define a to be the unique j∈{0,1,…,t-1} such that a∈ I_j. 
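The rounding map just defined admits a direct implementation. A minimal sketch (ours), matching the intervals I_j above:

```python
# Minimal sketch (ours) of the rounding map Z_q -> {0, 1, ..., t-1}: round(a) is
# the unique j with a in I_j, where I_j has width floor(q/t) for j <= t-2 and the
# last interval I_{t-1} absorbs the remaining q mod t elements.
def round_t(a: int, q: int, t: int) -> int:
    w = q // t              # width of I_0, ..., I_{t-2}
    j = (a % q) // w        # candidate interval index
    return min(j, t - 1)    # fold the oversized tail into I_{t-1}

# Example: q = 17, t = 4 gives w = 4, so I_0 = {0..3}, I_1 = {4..7},
# I_2 = {8..11}, and I_3 = {12..16} with |I_3| = 5, consistent with the
# constraint q/t <= |I_{t-1}| <= q/t + t - 1.
assert [round_t(a, 17, 4) for a in range(17)] == [0] * 4 + [1] * 4 + [2] * 4 + [3] * 5
```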
We will also use the following convenient notation. Let Y and Z be registers and f: Y → Z. The measurement y↦ f(y) on register Y refers to the measurement implemented by computing f(y) into a separate register Z, measuring Z in the computational basis, and discarding the result.Finally, we will use the following lemma.Let 0≠Δ∈ R_q^l and α∈{0,…,n-1}. If b ← R_q^l,then (b·Δ)_α is uniformly distributed in _q.Writing b = (b_1,…,b_l) and Δ = (Δ_1,…,Δ_l), we have (b ·Δ)_α = (b_1 Δ_1)_α + … + (b_l Δ_l)_α.Since Δ≠ 0, there exists an i∈ [l] such that Δ_i≠0. To prove the lemma, it suffices to prove that (b_iΔ_i)_a is uniformly distributed in _q.Let ϕ, ϕ' be as defined in the proof of <ref>. Write ϕ(Δ_i) = (c_0,…,c_n-1)∈_q. Since Δ_i≠ 0 there exists j∈{0,…,n-1} such that c_j ≠ 0. Since b_i is a uniformly random element of R_q, ϕ(b_i) is a uniformly random element of _q^n. Therefore, the distribution of (b_iΔ_i)_α = ϕ'(ϕ(b_i)ϕ(Δ_i))_α (where we used <ref> for the equality) is the same as the distribution ofϕ'(d_0 c_0, … ,d_n-1c_n-1)_α, where d_0,…,d_n-1←_q.By the linearity of ϕ',ϕ'(d_0 c_0, … ,d_n-1c_n-1)_α = d_j c_jϕ'(e_j)_α + ∑_j'≠ j d_j'c_j'ϕ'(e_j')_α,where e_j denotes the jth standard basis vector of _q. But ϕ'(e_j)_α = n^-1· w^-(2j+1)α≠ 0 (see <ref>). Therefore d_j c_jϕ'(e_j)_α is uniformly distributed in _q if d_j←_q. Hence (b_iΔ_i)_α is uniformly distributed in _q as required. The main result of this subsection is the following proposition.Let m,l,γ,η∈ℕ. Suppose q≥ 16 and 2γη n l < q/32. Suppose there exists a quantum algorithmthat succeeds in _m,l,γ with advantage ϵ. Then, for all w∈ℕ, there exists a quantum algorithmthat solves _l,m,η with advantage at least (ϵ - 3^-w)/4. Moreover, () ≤() + (w). Before proving this proposition, we first prove two lemmas. Let Y be a register on R_q^l and A ∈ R_q^m× l. For t ∈ℕ, we define the following measurements on Y: * M_0: computational basis measurement.* M_1^t: sample e_1 ← S_η^m, e_2← S_η^l, set be_1^⊤ A+ e_2^⊤∈ R_q^l, sample s ← R_q, then perform measurement y ↦(b· y + s)_0.* M_2^t: sample b ← R_q^l, s← R_q, then perform measurement y ↦(b· y + s)_0. Let t∈ℕ be such that 2γη nl < q/t. For all y,y' ∈ R_q^l with Ay = Ay' and y',y≤γ,M_1^t(|y⟩⟨y'|) = (1 - t/q·[e ·(y - y')_0 | e ← S_η^l])|y⟩⟨y'|. We haveM_1^t(|y⟩⟨y'|) = [(b· y + s)_0 = (b· y' + s)_0| e_1 ← S_η^m, e_2← S_η^l, be_1^⊤ A+ e_2^⊤, s← R_q] ·|y⟩⟨y'|Writing z Ay = Ay', we haveb· y + s = (e_1· z + s) + e_2· y and b· y' + s = (e_1· z + s) + e_2· y'.The result follows by observing that e_2·(y-y')_0 ≤e_2·y-y'· nl ≤ 2γη nl < q/t and (e_1· z + s) is a uniformly random element of R_q. Let t∈ℕ be such that t^2≤ q. Then there exists 0≤ p_t ≤ 2/t such that for all y,y' ∈ R_q^l with y'≠ y, we have M_2^t(y) =yand M_2^t(|y⟩⟨y'|) = p_t |y⟩⟨y'|. The first equality is clearly true. For the second, observe thatM_2^t(|y⟩⟨y'|) = [(b· y + s)_0 = (b· y' + s)_0| b ← R_q^l, s ← R_q].Write y' = y + Δ for some 0≠Δ∈ R_q^l. Then,(b·Δ)_0 is uniformly distributed in ℤ_q by <ref>. Therefore, writing p_t [ u = u + v| u,v ←_q], we have[(b· y + s)_0 = (b· y' + s)_0| b ← R_q^l, s ← R_q] =p_t = 1 - ((t-1)q/t/q·q-q/t/q + I_t-1/q·q - I_t-1/q) ≤1/t + t/q≤2/t,where the last inequality uses t^2≤ q. Combining <ref> gives the following corollary. Let t,d∈ℕ be such that 2γη n l < q/(td) and t^2≤ q. Let ρ be a density matrix on register Y. Suppose there exists z∈ R_q^m such that ρ is supported on {y ∈ R_q^l | Ay = z, y≤γ}. 
ThenM_1^t(ρ) = 1/d M_1^t d(ρ) + (1- 1/d)ρ, M_2^t(ρ) = 1/dM_0(M_1^t d(ρ)) + (1- 1/d - p_t) M_0(ρ) + p_t ρ,where p_t is as defined in <ref>. The first equality is immediate. The second equality follows from the observation that M_0(M_1^td(ρ)) = M_1^td(M_0(ρ)) since M_0 and M_1 both act on ρ by entry-wise multiplication. Given the above lemmas, <ref> follows from the proof of <cit.>. The high-level idea of the proof is that M_1^t is close to the identity operation while M_2^t is close to M_0. Therefore, if the identity operation can be efficiently distinguished from M_0, then M_1^t and M_2^t can be efficiently distinguished, which solves theproblem.For completeness, we give the details below.Let t4 and d8 so that g 1 - 1/d - p_t ≥ 3/8 and dg ≥ 3, where p_t is as defined in <ref>. Let = (_1,_2) be a valid algorithm for the _m,l,γ experiment (<ref>) with advantage ϵ. Fix w ∈ℕ and A ∈ R_q^m× l. Let T∑_j=0^w - 1 (dg)^-j and letbe the quantum algorithm defined on input b ∈ R_q^l as follows: * Create state ρ on register (Y, Z, T) by running _1(A).* Sample j∈{0,1,…, w-1} with probability (dg)^-j/T. * Apply M_1^t d to ρ on the Y register for j times. Call the resulting state ρ_j.* Sample s← R_q and apply the measurement x↦(b· x + s)_0 to ρ_j on the Y register to give state ρ_j'.* Compute bit b'∈{0,1} by running _2(ρ_j').* Output b' if j is even and 1-b' if j is odd.For j∈{0,1,…, w-1}, let ϵ_j denote the signed distinguishing advantage of _2 on inputs ρ_j versus M_0(ρ_j), i.e., ϵ_j [_2(ρ_j) = 0] - [_2(M_0(ρ_j)) = 0], and let δ_j denote the signed distinguishing advantage of _2 on inputs M_1^t(ρ_j) versus M_2^t(ρ_j). Then the signed distinguishing advantage ofon input distributions [e_1← S_η^m,e_2← S_η^l,b e_1^⊤ A + e_2^⊤] versus [b^⊤← R_q^l] is δ1/T∑_j=0^w -1 (-dg)^-jδ_j,because ρ_j' = M_1^t(ρ_j) if b is sampled according to [e_1← S_η^m,e_2← S_η^l,b e_1^⊤ A. .+ e_2^⊤] and ρ_j' = M_2^t(ρ_j) if b is sampled according to [b^⊤← R_q^l].By <ref> (which applies by the assumptions in the proposition and the validity of ), we have δ_j = 1/dϵ_j+1 + gϵ_j for all j∈{0,1,…, w-2}. Therefore,ϵ_i(-dg)^-i = ϵ_0 - 1/g∑_j=0^i-1(-dg)^-jδ_j for all i∈{0,1,…, w-1}.Then,δ = g/T(ϵ_0 - ϵ_w(-dg)^-w). We now unfix A∈ R_q^m× l and take the expectation of <ref> over A ← R_q^m× l to see that_A[δ] =g/T|_A[ϵ_0 - ϵ_w(-dg)^-w]| ≥(g-1/d)(ϵ - (dg)^-w) ≥1/4(ϵ - 1/3^w),where the first inequality uses T≤ dg/(dg-1), ϵ_w≤ 1, and ϵ = _A[ϵ_0]. Since () = () + (w) and _A[δ] is the advantage offor solving _l,m,η, the proposition follows. § CONCRETE PARAMETERS In this section, we describe how to adjust the parameter settings ofusing <ref> to achieve different levels of security defined by NIST in the relevant Federal Information Processing Standards (FIPS) <cit.>. We will use the same notation as in thespecification, <cit.>. <cit.> specifiesin terms of the following variablesq, n, k, l, H, τ, d, τ, γ_1, γ_2, η, β.The variables q and n specify the ring R_q as before. The variables k,l are associated with sizes of matrices over R_q. H is the hash function used inand τ is such thatis the codomain of H. For conciseness, we will not explain the variables d,γ_1,γ_2,η,β, and refer the reader to <cit.> for their definitions.The security analysis ofin <cit.> leads to <cit.> which shows the following. 
Given a quantum query algorithm 𝒜 for breaking the SUF-CMA security of Dilithium, there exist quantum algorithms ℬ, 𝒟, ℰ and a quantum query algorithm 𝒞 such that Time(ℬ) = Time(𝒞) = Time(𝒟) = Time(𝒜) and Time(ℰ) ≈ Time(𝒜) with

Adv^SUF-CMA(𝒜) ≤ 2^-α+1 + Adv^MLWE_k,l,η(ℬ) + Adv^SelfTargetMSIS_H,τ,k,l+1,ζ(𝒞) + Adv^MSIS_k,l,ζ'(𝒟) + Adv^PRF(ℰ),

where ζ, ζ' are functions of the parameters γ_1, γ_2, β, d, τ defined as follows:

ζ ≔ max(γ_1 - β, 2γ_2 + 1 + 2^d-1τ) and ζ' ≔ max(2(γ_1 - β), 4γ_2 + 2).

Adv^PRF(ℰ) is the advantage of any algorithm distinguishing between the pseudorandom function used by Dilithium and a randomly selected function; and α is a min-entropy term that can be bounded using <cit.> by

α ≥ min(-n log((2γ_1 + 1)/(2γ_2 - 1)), -kl log(n/q)).

In the QROM, we can construct an optimal pseudorandom function using a random oracle such that Adv^PRF(ℰ) is asymptotically negligible and can be neglected.

<ref> shows that the hardness of SelfTargetMSIS in the QROM is at least that of MLWE. Therefore, <ref> and <ref> rigorously imply the asymptotic result that, under suitable choices of parameters as functions of the security parameter λ, if there are no poly(λ)-time quantum algorithms that solve MLWE or MSIS, then there is no poly(λ)-time quantum algorithm that breaks the SUF-CMA security of Dilithium. This is a very positive sign for the security of Dilithium, as MLWE and MSIS are far better-studied problems and there is substantial support for the assumption that they are hard problems.

We proceed to give concrete estimates of the Core-SVP security of Dilithium under several choices of parameters using <ref> and <ref>. These estimates rely on some heuristic assumptions that we will clearly state. We remark that the concrete security estimates appearing in <cit.> use similar heuristic assumptions. We begin by dividing both sides of <ref> by Time(𝒜). Using Time(ℬ) = Time(𝒞) = Time(𝒟) = Time(𝒜), assuming the approximation in Time(ℰ) ≈ Time(𝒜) can be replaced by equality, and using the "Our Work" parameters in <ref> for which α ≥ 257, we obtain

Adv^SUF-CMA(𝒜)/Time(𝒜) ≤ 2^-256 + Adv^MLWE_k,l,η(ℬ)/Time(ℬ) + Adv^SelfTargetMSIS_H,τ,k,l+1,ζ(𝒞)/Time(𝒞) + Adv^MSIS_k,l,ζ'(𝒟)/Time(𝒟).

By <ref>, for any η' ∈ ℕ with η' < (q/32)/(2ζ n(k+l+1)), there exists a quantum algorithm ℬ' for MLWE_k+l+1,k,η' such that

Adv^SUF-CMA(𝒜)/Time(𝒜) ≤ 2^-256 + Adv^MLWE_k,l,η(ℬ)/Time(ℬ) + 8Q^2 √(Adv^MLWE_k+l+1,k,η'(ℬ'))/Time(𝒜) + Adv^MSIS_k,l,ζ'(𝒟)/Time(𝒟),

where Q is the number of queries 𝒜 uses and we assumed that <ref> is well-approximated by ϵ^2/(64Q^4), in particular, that τ is sufficiently large. Also by <ref>, Time(ℬ') is at most Time(𝒜) plus polynomial terms. Heuristically assuming that we can neglect the polynomial terms and using Q ≤ Time(𝒜), we obtain

Adv^SUF-CMA(𝒜)/Time(𝒜) ≤ 2^-256 + Adv^MLWE_k,l,η(ℬ)/Time(ℬ) + 8Q^3/2 √(Adv^MLWE_k+l+1,k,η'(ℬ')/Time(ℬ')) + Adv^MSIS_k,l,ζ'(𝒟)/Time(𝒟).

Now, for NIST security level l ∈ [5], we upper bound Q by B_l, where B_l is given in <ref>. From the third term on the right-hand side of <ref>, we see that the Quantum Core-SVP security of Dilithium can be estimated by

z/2 - (3/2)log(B_l) - 3,

where z is the Quantum Core-SVP security of the associated MLWE problem.

Having reduced the SUF-CMA security of Dilithium to the security of the standard lattice problems MLWE and MSIS, we proceed to estimate their security. Following the analysis in the Dilithium specification <cit.>, we perform our security estimates via the Core-SVP methodology introduced in <cit.>. In the Core-SVP methodology, we consider attacks using the Block Korkine-Zolotarev (BKZ) algorithm <cit.>. The BKZ algorithm with block size μ ∈ ℕ works by making a small number of calls to an SVP solver on μ-dimensional lattices. The Core-SVP methodology conservatively assumes that the run-time of the BKZ algorithm is equal to the cost of a single run of the SVP solver at its core. The latter cost is then estimated as 2^0.265μ, since this is the cost of the best quantum SVP solver <cit.> due to Laarhoven <cit.>.
Therefore, to estimate the security of an MLWE or MSIS problem, it suffices to estimate the smallest μ ∈ ℕ such that BKZ with block size μ can solve the problem. Then we say 0.265μ is the Quantum Core-SVP security of the problem. To describe how the block size can be estimated, it is convenient to define the function δ: ℕ → ℝ,

δ(μ) ≔ ((μπ)^1/μ · μ/(2πe))^1/(2(μ-1)).

Concrete security of MLWE. Our security analysis of MLWE generally follows the Dilithium specification <cit.>. For a,b,ϵ ∈ ℕ, we first follow <cit.> and assume that MLWE_a,b,ϵ is as hard as the Learning With Errors problem LWE_na,nb,ϵ — for a',b' ∈ ℕ, LWE_a',b',ϵ is defined to be the same as MLWE_a',b',ϵ with n set to 1 so that R_q = ℤ_q. Then, as done in <cit.>, we follow the security analysis in <cit.>. <cit.> considers two attacks based on the BKZ algorithm, known as the primal attack and the dual attack. The block size is then taken as the minimum of the block sizes for the primal and dual attacks. These attacks are analyzed as follows.

* Primal attack <cit.>. Let d ≔ na + nb + 1. Then to solve LWE_na,nb,ϵ, we set the BKZ block size μ to be equal to the smallest integer ≥ 50 such that[In <cit.>, the exponent on δ(μ) is given as 2μ-d-1, but it is corrected to 2μ-d by <cit.>. There can be spurious solutions with 0<μ<50 for which the approximations leading to the inequality break down.]

ξ√(μ) ≤ δ(μ)^2μ-d · q^na/d.

* Dual attack <cit.>. Let d' ≔ na + nb. Then to solve LWE_na,nb,ϵ, we set the BKZ block size μ to be equal to the smallest integer ≥ 50 such that

-2π^2 τ(μ)^2 ≥ ln(2^-0.2075μ/2),

where τ(μ) ≔ δ(μ)^d'-1 · q^nb/d' · ϵ/q.

Concrete security of MSIS. Our security analysis of MSIS uses heuristics in the Dilithium specification <cit.> and <cit.> (which is in turn based on <cit.>).[We were unable to completely reuse the analysis in <cit.> as it is not completely described. Comparing the estimates for μ obtained by the method here with those in <cit.> (also reproduced in <ref>), we find our estimates are consistently around 4/5 times those given in <cit.>. Therefore, our estimates underestimate the security of MSIS compared to <cit.>.] For a,b,ξ ∈ ℕ, we first follow <cit.> and assume that MSIS_a,b,ξ is as hard as the Short Integer Solutions problem SIS_na,nb,ξ — for a',b' ∈ ℕ, SIS_a',b',ξ is defined to be the same as MSIS_a',b',ξ with n set to 1 so that R_q = ℤ_q. Following <cit.>, we estimate the security of SIS_na,nb,ξ by considering the attack that uses the BKZ algorithm with block size μ to find a short non-zero vector in the lattice

L(A) ≔ {y ∈ ℤ^na+nb | [I_na | A] · y = 0 mod q},

where A ← ℤ_q^na×nb. Following <cit.>, the BKZ algorithm is expected to find a vector v ∈ L(A) of Euclidean length[Compared to <cit.>, we do not take the min of <ref> with q since "trivial" vectors of the form q times a standard basis vector have too large an infinity norm to be a solution to SIS_na,nb,ξ when ξ < q, as will be the case for our parameter choices.]

2^2√(na log(q) log(δ(μ))).

We assume that the entries of v have the same magnitudes since a similar assumption is made in <cit.>. Then, to solve SIS_na,nb,ξ, we set the BKZ block size μ to be the smallest positive integer such that

(1/√(na + nb)) · 2^2√(na log(q) log(δ(μ))) ≤ ξ.

To set Dilithium parameters, we also require q ≡ 1 (mod 2γ_2), q > 4γ_2 (see <cit.>), and β = τη (see <cit.>). Moreover, we set parameters to minimize the following metrics <cit.>:

* the public key size in bytes, (nk(log(q) - d) + 256)/8,
* the signature size in bytes, (nl(log(2γ_2)) + nk + τ(log(n) + 1))/8,
* the expected number of repeats to sign a message, exp(nβ(l/γ_1 + k/γ_2)).
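To make the block-size search concrete, the sketch below implements δ(μ), the primal-attack condition, the MSIS condition, and the three metrics above in Python. It is an illustrative reading of the formulas as stated rather than the script used to produce our tables; the function names are ours, and we assume base-2 logarithms in the MSIS length heuristic.

    import math

    def delta(mu: int) -> float:
        # Root-Hermite factor: ((mu*pi)^(1/mu) * mu / (2*pi*e))^(1/(2*(mu-1))).
        return ((mu * math.pi) ** (1.0 / mu) * mu / (2 * math.pi * math.e)) ** (
            1.0 / (2 * (mu - 1)))

    def primal_blocksize(na: int, nb: int, q: int, xi: float) -> int:
        # Smallest mu >= 50 with xi*sqrt(mu) <= delta(mu)^(2*mu - d) * q^(na/d),
        # where d = na + nb + 1; the dual-attack condition can be encoded analogously.
        d = na + nb + 1
        mu = 50
        while xi * math.sqrt(mu) > delta(mu) ** (2 * mu - d) * q ** (na / d):
            mu += 1
        return mu

    def msis_blocksize(na: int, nb: int, q: int, xi: float) -> int:
        # Smallest mu >= 50 with
        # 2^(2*sqrt(na*log2(q)*log2(delta(mu)))) / sqrt(na + nb) <= xi.
        mu = 50
        while (2.0 ** (2 * math.sqrt(na * math.log2(q) * math.log2(delta(mu))))
               / math.sqrt(na + nb) > xi):
            mu += 1
        return mu

    def quantum_core_svp(mu: int) -> float:
        # Core-SVP exponent: the cost of one SVP call with the quantum sieve is 2^(0.265*mu).
        return 0.265 * mu

    def pk_bytes(n, k, q, d):
        # Public key size in bytes: (n*k*(log(q) - d) + 256) / 8.
        return (n * k * (math.log2(q) - d) + 256) / 8

    def sig_bytes(n, k, l, gamma2, tau):
        # Signature size in bytes: (n*l*log(2*gamma2) + n*k + tau*(log(n) + 1)) / 8.
        return (n * l * math.log2(2 * gamma2) + n * k + tau * (math.log2(n) + 1)) / 8

    def expected_repeats(n, beta, k, l, gamma1, gamma2):
        # Expected number of rejection-sampling repeats per signature.
        return math.exp(n * beta * (l / gamma1 + k / gamma2))

For MLWE, the minimum of the primal and dual block sizes would be taken before converting to the Core-SVP exponent.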
In <ref>, we give parameter sets achieving different levels of security that we calculated using the methodology described above. In both tables, we use

q_0 ≔ 29996224302593 = 2^9 · (218107) · (268613) + 1.

In particular, q_0 ≡ 1 (mod 2n).

In <ref>, we compare our parameters with those of the Dilithium specification <cit.>. For the same security levels, we see that our public key sizes are about 12× to 15× those of Dilithium and our signature sizes are about 5× to 6× those of Dilithium. We stress that the main advantage of our parameters compared to Dilithium's is that ours are based on rigorous reductions from hard lattice problems, whereas Dilithium's are based on highly heuristic reductions. In particular, the heuristic reduction from SelfTargetMSIS to (a variant of) MSIS given in <cit.> has been recently challenged <cit.>.

In <ref>, we compare our parameters with those of another scheme whose parameters are likewise based on rigorous reductions from hard lattice problems. For the same security levels, we see that our public key and signature sizes are about 2.5× to 2.8× those of that scheme. We stress that the main advantage of our parameters is that ours have q ≡ 1 (mod 2n), which is crucial for the efficient implementation of the scheme. More specifically, when q ≡ 1 (mod 2n), multiplying two elements in R_q can be performed using the Number Theoretic Transform in time O(n log(q)) (compared to the naive cost of O(n^2 log(q))). In contrast, the other scheme uses q ≡ 5 (mod 8), which is incompatible with q ≡ 1 (mod 2n) as n > 2 is a power of 2.

The main reason why we need to increase the public key and signature sizes is the loss in the reduction from SelfTargetMSIS to MLWE, as stated in <ref>. More specifically, when we calculate the Quantum Core-SVP numbers for the MLWE-based problem, we use <ref>, which considerably lowers security. <ref> is derived from <ref>. We do not know if the loss is inherent or if our reduction could be tightened. Deciding which is the case is the main open question of our work.

§ ACKNOWLEDGMENTS

This work is supported by the National Institute of Standards and Technology (NIST) and the Joint Center for Quantum Information and Computer Science (QuICS) at the University of Maryland. This research paper is partly a contribution of the U.S. federal government and is not subject to copyright in the United States. We thank Marcel Dall'Agnol, Jiahui Liu, Yi-Kai Liu, and Ray Perlner for helpful feedback and correspondence. We thank Amin Shiraz Gilani for his involvement during the early stages of this project.
http://arxiv.org/abs/2312.16619v1
{ "authors": [ "Kelsey A. Jackson", "Carl A. Miller", "Daochen Wang" ], "categories": [ "cs.CR", "quant-ph" ], "primary_category": "cs.CR", "published": "20231227155627", "title": "Evaluating the security of CRYSTALS-Dilithium in the quantum random oracle model" }
Guojian Wang^1,2,3, Faguo Wu^2,3,4,5,*, Xiao Zhang^1,2,3,5,†, Ning Guo^1,2,3, Zhiming Zheng^2,3,4,5

^1 School of Mathematical Sciences, Beihang University, Beijing 100191, China
^2 Key Laboratory of Mathematics, Informatics and Behavioral Semantics, Ministry of Education, Beijing 100191, China
^3 Peng Cheng Laboratory, Shenzhen 518055, Guangdong, China
^4 Institute of Artificial Intelligence, Beihang University, Beijing 100191, China
^5 Zhongguancun Laboratory, Beijing 100194, China
^* Corresponding author at: Institute of Artificial Intelligence, Beihang University, Beijing 100191, China. E-mail address: faguo@buaa.edu.cn
^† Corresponding author at: School of Mathematical Sciences, Beihang University, Beijing 100191, China. E-mail address: xiao.zh@buaa.edu.cn

Deep reinforcement learning (DRL) faces significant challenges in addressing hard-exploration problems in tasks with sparse or deceptive rewards and large state spaces. These challenges severely limit the practical application of DRL. Most previous exploration methods relied on complex architectures to estimate state novelty or introduced sensitive hyperparameters, resulting in instability. To mitigate these issues, we propose an efficient adaptive trajectory-constrained exploration strategy for DRL. The proposed method guides the policy of the agent away from suboptimal solutions by leveraging incomplete offline demonstrations as references. This approach gradually expands the exploration scope of the agent and strives for optimality in a constrained optimization manner. Additionally, we introduce a novel policy-gradient-based optimization algorithm that utilizes adaptively clipped trajectory-distance rewards for both single- and multi-agent reinforcement learning. We provide a theoretical analysis of our method, including a deduction of the worst-case approximation error bounds, highlighting the validity of our approach for enhancing exploration. To evaluate the effectiveness of the proposed method, we conducted experiments on two large 2D grid world mazes and several MuJoCo tasks. The extensive experimental results demonstrate the significant advantages of our method in achieving temporally extended exploration and avoiding myopic and suboptimal behaviors in both single- and multi-agent settings. Notably, the specific metrics and quantifiable results further support these findings. The code used in the study is available at <https://github.com/buaawgj/TACE>.

Keywords: deep reinforcement learning, hard-exploration problem, policy gradient, offline suboptimal demonstrations

§ INTRODUCTION

Deep reinforcement learning has achieved considerable success in various fields over the past few years, for example, playing Atari games with raw pixel inputs <cit.>, mastering the game of Go <cit.>, and acquiring complex robotic manipulation and locomotion skills from raw sensory data <cit.>. Despite these success stories, these DRL algorithms may suffer from poor performance in tasks with sparse and deceptive rewards and large state spaces <cit.>. We refer to this as a hard-exploration problem, which has received increasing attention <cit.>. Such hard-exploration tasks are common in the real world. For example, in a navigation task, a reward is only received after the agent collects certain items or reaches terminal points.
Our proposed method can be applied to such tasks and helps agents explore their environment more systematically.

The hard-exploration problem of reinforcement learning (RL) can be formally defined as exploration in environments where rewards are sparse and even deceptive <cit.>. For both single- and multi-agent RL methods, the difficulty lies primarily in the fact that random exploration rarely results in terminated states or meaningful feedback collection. The sparsity of rewards makes the training of neural networks extremely inefficient <cit.>, because there are no sufficient and immediate rewards as supervisory signals to guide the training of neural networks. The challenge is trickier when troublesome deceptive rewards exist in these tasks because they can lead to myopic behaviors; hence, the agent may often lose the chance to obtain a higher score <cit.>. Efficient exploration is the key to solving these problems by encouraging the agent to visit underexplored states.

Generally, sample efficiency and premature convergence interfere with the exploration of DRL algorithms in environments with sparse and deceptive rewards and large state spaces <cit.>. First, when DRL algorithms are used to solve these hard-exploration tasks in single- and multi-agent settings, simple exploration strategies, such as ϵ-greedy <cit.> or uniform Gaussian exploration noise <cit.>, may cause an exponential difference in the sampling complexity for different goals <cit.>, which places a high demand on computing power. Second, when an agent frequently collects trajectories for goals with deceptive rewards, it tends to adopt myopic behaviors and learn suboptimal policies. Under the current reinforcement learning paradigm, the agent further limits its exploration to small regions of the state space around suboptimal goals because of the myopic behaviors learned from previous experiences <cit.>. Therefore, the agent will permanently lose the opportunity to achieve a higher score and become stuck in local optima <cit.>.

In this study, we developed a novel TrAjectory-Constrained Exploration (TACE) method to overcome these challenges. Our method exploits offline suboptimal demonstration data for faster and more efficient exploration without incurring high computational costs and ensuring stability. Our approach orients its policy away from suboptimal policies from the perspective of constrained optimization by considering offline data as a reference. We developed three practical policy-gradient-based algorithms, TCPPO, TCHRL, and TCMAE, with clipped novelty distance rewards. These algorithms can search for a new policy whose state-action visitation distribution is different from the policies represented by offline data. Furthermore, the scale of novelty for a state-action pair is determined based on the maximum mean discrepancy, and this definition of novelty is comparable across state-action pairs. Therefore, a distance normalization method was introduced to enable the agent to gradually expand the scope of exploration centered on past trajectories. Thus, the proposed method adaptively adjusts the constraint boundaries. To further improve algorithm performance, an adaptive scaling method was developed to ensure that the agent remains inside the feasible region. We then provide a theoretical analysis of our algorithm, deduce the worst-case approximation error bound, and theoretically determine the range of hyperparameter values.
Finally, extensive experimental results demonstrate the effectiveness of TACE compared with other state-of-the-art baseline methods for various benchmarking RL tasks.

In summary, our contributions are summarized as follows:

* This paper investigates a trajectory-constrained exploration strategy that promotes sample efficiency and avoids premature convergence for the hard-exploration challenge of single- and multi-agent tasks.
* We show a feasible instance where offline suboptimal demonstrations are used as a reference to provide dense and sustainable exploration guidance and enhance the sample efficiency of RL methods.
* No additional neural networks are required to model the novelty. Our proposed algorithm is simple in form and explicit in physical meaning, which helps adjust constraint boundaries and expand the exploration scope adaptively.
* A theoretical analysis of our method is provided, explaining the validity of TACE in achieving diverse exploration and the rationality of the TACE design.
* The proposed methods were evaluated for various benchmarking tasks, including two large 2D grid world mazes and two MuJoCo mazes. Our method outperforms other advanced baseline methods in terms of exploration efficiency and average returns.

The remainder of this paper is organized as follows: Section <ref> reviews related work. Section <ref> briefly describes the preliminary knowledge of the article. Section <ref> introduces the proposed trajectory-constrained exploration strategy. The experimental results are presented in Section <ref>. Section <ref> offers concluding remarks.

§ RELATED WORK

Several methods have been proposed in previous works to encourage sufficient exploration of the agent. Some studies suggest adding noise sampled from a stochastic distribution, such as the Gaussian distribution <cit.> or the Ornstein-Uhlenbeck process <cit.>, to actions generated by the policy network, which can motivate the agent to access underexplored areas. Maximum entropy RL methods <cit.> allow the agent to explore the environment by encouraging high-entropy distributions over action spaces, given the state inputs. However, such methods are unlikely to achieve satisfactory performance and may result in suboptimal behavior in tasks with sparse and deceptive rewards, long horizons, and large state spaces <cit.>.

At the same time, some methods use intrinsic rewards <cit.>, such as surprise <cit.> and curiosity <cit.>, to encourage the agent to visit unfamiliar states. For example, curiosity in <cit.> was formulated as an error in an agent's ability to predict the consequences of its actions in a visual feature space. Although these methods can alleviate the sparse reward problem, they usually rely on auxiliary models to calculate intrinsic rewards and therefore increase the complexity of the entire model and may incur high computational costs. Another way to cope with sparse or delayed rewards is to define a reward function as a combination of shaping and terminal rewards <cit.>. However, these methods need to introduce extra hyperparameters to balance the weight of importance between the RL task rewards and intrinsic rewards, which might incur instability.

Diversity-regularized exploration expands an agent's exploration space efficiently, and there have been several recent studies in this area <cit.>.
For instance, collaborative exploration <cit.> employs a team of heterogeneous agents to explore an environment and utilizes a special regularization mechanism to maintain team diversity. Diversity-driven exploration <cit.> proposes the addition of a Kullback-Leibler (KL) divergence regularization term to encourage the DRL agent to attempt policies that differ from previously learned policies. Other studies have enabled RL agents to perform exploration more consistently without incurring additional computational costs by adding random noise to the network parameters <cit.>. However, the diversity term in these methods only considers the divergence in the action space.DIPG <cit.> uses a maximum mean discrepancy (MMD) regularization term to encourage the agent to learn a novel policy that induces a different distribution over the trajectory space. Specifically, the MMD distance was introduced between the trajectory distribution of the current policy and that of the previously learned policies. Moreover, DIPG considers the diversity gradient during training because it adds an MMD regularization term to its objective function. Therefore, its objective function incurs sensitive hyperparameters that cause instability and even lead to failure to solve the specified task. There are many differences between our proposed method and DIPG, including the definition of the distance measure, formulation of the optimization problem, and design of the optimization solution method. These significant differences are described in detail in Section <ref>.Reinforcement Learning from Demonstrations (RLfD) has been proven to be an effective approach for solving problems that require sample efficiency and involve difficult exploration. Demonstrations of RLfD were generated by either the expert or the agent. For example, self-imitation learning <cit.> demonstrates that exploiting previous good experiences can indirectly drive deep exploration. Deep q-learning from demonstrations (DQfD) <cit.> leverages even very small amounts of demonstration data to accelerate learning and enhance the exploration of the agent. Recurrent Replay Distributed DQN from Demonstrations (R2D3) <cit.> extracts information from expert demonstrations in a manner that guides an agent's autonomous exploration in the environment. The Learning Online with Guidance Offline (LOGO) algorithm <cit.> orients the update of its policy by obtaining guidance from offline data, which can significantly reduce the number of exploration actions in sparse reward settings. These methods train the output of the current policies to be close to that of expert policies represented by the demonstration data. However, the cost of obtaining expert demonstration data may be high, and it is often hopeless for the agent to generate highly rewarded trajectories in some hard-exploration tasks. HRL has long been recognized as a promising approach to overcoming sparse reward and long-horizon problems <cit.>. Recent studies proposed a range of HRL methods to efficiently learn policies in long-horizon tasks with sparse rewards <cit.>. Under this paradigm, the state-action search space for the agent is exponentially reduced through several modules of abstraction at different levels, and some subsets of these modules may be suitable for reuse <cit.>. Currently, HRL methods are divided into two main categories. The first category is subgoal-based HRL methods, in which the high-level policy sets a subgoal for the low-level policy to achieve. 
A distance measurement is required to compute the internal rewards of low-level policies according to current states and subgoals. Some algorithms, such as HAC <cit.> and HIRO <cit.>, simply use the Euclidean distance, while FeUdal Networks (FuNs) <cit.> adopt the cosine distance. However, these measurements of state spaces do not necessarily reflect the "true" distance between two states; therefore, these algorithms are sensitive to the state-space representation <cit.>.

The second category of HRL methods allows a high-level policy to select a pre-trained low-level skill over several time steps. Therefore, they typically require training high- and low-level policies for different tasks. Moreover, low-level skills are pre-trained by maximizing diversity objectives <cit.>, proxy rewards <cit.>, or specialized simple tasks <cit.>. When solving downstream tasks, pre-trained skills are often frozen and only high-level policies are trained, which may lead to significant suboptimality in future tasks <cit.>. The Option-Critic algorithm <cit.> makes an effort to train high-level and low-level policies jointly. However, joint training may lead to semantic loss of high-level policies <cit.> and the collapse of low-level skills <cit.>.

Although the single-agent exploration problem is extensively studied and has achieved considerable success, few exploration strategies have been developed for multi-agent reinforcement learning (MARL). MAVEN <cit.> encodes a shared latent variable with a hierarchical policy and learns several separate state-action value functions for each agent. EITI <cit.> uses mutual information (MI) to capture the influence of one agent's behavior on the expected returns of the multi-agent team. Previous work <cit.> demonstrates that exploiting structural information on the reward function in MARL tasks can promote exploration. EMC <cit.> introduces curiosity-driven exploration for episodic MARL by utilizing the results in the episodic memory to regularize the loss function.

§ PRELIMINARIES

§.§ Reinforcement Learning

We consider an infinite-horizon discounted Markov decision process (MDP) defined by a tuple M = (𝒮, 𝒜, P, R_e, ρ_0, γ), where 𝒮 is a state space, 𝒜 is a (discrete or continuous) action space, P: 𝒮×𝒜×𝒮→ℝ_+ is the transition probability distribution, R_e: 𝒮×𝒜→ [R_min, R_max] is the reward function, ρ_0: 𝒮→ℝ_+ is the distribution of the initial state s_0, and γ∈ [0,1] is a discount factor. A stochastic policy π_θ: 𝒮→𝒫(𝒜), parametrized by θ, maps the state space 𝒮 to a set of probability distributions over the action space 𝒜. The standard state-action value function Q_e, the value function V_e, and the advantage function A_e are defined as follows:

Q_e(s_t, a_t) = 𝔼_s_t+1, a_t+1, …[∑_l=0^∞γ^l R_e(s_t+l, a_t+l)],
V_e(s_t) = 𝔼_a_t, s_t+1, a_t+1, …[∑_l=0^∞γ^l R_e(s_t+l, a_t+l)],
A_e(s, a) = Q_e(s, a) - V_e(s),

where a_t ∼π_θ(a_t | s_t), s_t+1∼ P(s_t+1 | s_t, a_t), ∀ t ≥ 0.

Generally, the objective of the RL algorithm is to determine the optimal policy π_θ that maximizes the expected discounted return J(π_θ) = 𝔼_τ[∑_t=0^∞γ^t R_e(s_t,a_t)], where we use τ = (s_0, a_0, s_1, a_1, …) to denote the entire history of state-action pairs in an episode, and s_0 ∼ρ_0(s_0), a_t ∼π_θ(a_t | s_t), and s_t+1∼ P(s_t+1 | s_t,a_t).

The state visitation distribution is also of interest.
When γ<1, the discounted state visitation distribution d^π is defined as d^π(s) = (1-γ)∑_t=0^∞γ^t ℙ(s_t=s|π, ρ_0), where ℙ(s_t=s|π, ρ_0) denotes the probability of s_t=s with respect to the randomness induced by π, P, and ρ_0.

§.§ Multi-Agent Reinforcement Learning

A cooperative multi-agent system can be modeled as a multi-agent Markov decision process. An n-agent MDP is defined by a tuple (𝒮̅, 𝒜̅, P̅, I, R̅_e, γ), where I = {1, 2, …, n} is the finite set of agents, 𝒮̅ = ×_i∈ I𝒮_i is the joint state space, and 𝒮_i is the state space of the i-th agent. At each time step, each agent selects an action a_i ∈𝒜_i at state 𝐬∈𝒮̅ to form a joint action 𝐚∈𝒜̅, a shared extrinsic reward R̅_e(𝐬, 𝐚) can be received for each agent, and the next state 𝐬^' is generated according to the transition function P̅(·|𝐬, 𝐚). The objective of the cooperative multi-agent task is that each agent learns a policy π_i(a_i| s_i) so as to jointly maximize team performance. In this study, different from this original multi-agent optimization objective, we focus on multi-agent exploration (MAE) and design a novel approach to encourage the agent team to cooperatively explore the environment.

§.§ Maximum Mean Discrepancy

Maximum Mean Discrepancy is an integral probability metric that measures the difference (or similarity) between two probability distributions <cit.>. Let the probability distributions p and q be defined on a nonempty compact metric space 𝕏. Let x and y be observations drawn independently from p and q, respectively. The MMD metric is then defined as MMD(p, q, ℱ) = sup_f ∈ℱ(𝔼_x∼ p[f(x)] - 𝔼_y ∼ q[f(y)]), where ℱ is a class of functions on 𝕏. If ℱ is such that p=q holds if and only if 𝔼_x∼ p[f(x)]=𝔼_y∼ q[f(y)] for all f∈ℱ, then MMD is a metric measuring the discrepancy between p and q <cit.>.

What type of function class makes the MMD a metric? According to the literature <cit.>, the space of bounded continuous functions on 𝕏 satisfies the condition; however, it is difficult to compute the MMD distance between p and q from finite samples in such a function class because of its uncountability. If ℱ is the Reproducing Kernel Hilbert Space (RKHS) ℋ, f in Eq. (<ref>) can be replaced by a kernel function k∈ℋ, such as the Gaussian or Laplace kernel functions. Gretton et al. <cit.> show that

MMD^2(p, q, ℋ) = 𝔼[k(x, x^')] - 2𝔼[k(x, y)] + 𝔼[k(y, y^')],

where x, x^' i.i.d.∼ p and y, y^' i.i.d.∼ q.

§.§ Skill-Based HRL Algorithms

Consider a skill-based HRL algorithm with a 2-level hierarchy, whose policy is composed of a high-level policy π_θ_h and a low-level policy π_θ_l. In this framework, the skills of a low-level policy are usually pre-trained and distinguished using different latent codes z. For example, a single stochastic neural network (SNN) <cit.> can encode many low-level skills simultaneously <cit.>. The high-level policy π_θ_h(z | s) does not take actual actions to interact with the external environment directly. Instead, it samples latent codes on a slower timescale than the low-level policy, for example, observing the environment once every p time steps. The inputs of the low-level policy π_θ_l(a | s, z) are not only environment state vectors but also latent codes from the high-level policy. That is, the high-level policy determines the action output of the low-level policy for the next p time steps <cit.>. Fig.
<ref> illustrates the framework of an HRL algorithm with a two-level hierarchy.

§ PROPOSED APPROACH

In this section, we propose a trajectory-constrained exploration strategy for efficient exploration, which solves the hard-exploration problem from the perspective of constrained optimization. The main objective of the proposed trajectory-constrained exploration strategy is to encourage an RL agent to visit underexplored regions of the state space and prevent it from adopting suboptimal and myopic behaviors.

§.§ Trajectory-Constrained Exploration Strategy

Assuming that there is a collection ℳ of offline suboptimal trajectory data that might lead to goals with deceptive rewards, we aim to drive the agent to systematically explore the state space and generate new trajectories by visiting novel regions of the state space. One method to achieve this goal is to maximize the difference between the current and previous trajectories. We used the MMD metric defined in Eq. (<ref>) to measure the disparity between different trajectories. The agent collects a certain number of trajectories and stores them in the on-policy trajectory buffer ℬ at each epoch. Specifically, every trajectory τ in buffer ℬ and offline replay memory ℳ is treated as a deterministic policy. Then we calculate their corresponding state-action visitation distributions ρ_τ. Finally, instead of computing the MMD diversity measurement of the distribution on the trajectory space induced by previous policies <cit.>, in our study, the MMD distance is calculated between different state-action visitation distributions that belong to the offline demonstration trajectory in ℳ and the current trajectory in ℬ.

Let K(·, ·) denote a kernel of the reproducing kernel Hilbert space ℋ, such as a Gaussian kernel; then an unbiased empirical estimate of MMD(τ, υ, ℋ) is given by:

MMD^2(τ, υ, ℋ) = 𝔼_x, x^'∼ρ_τ[k(x, x^')] - 2𝔼_x ∼ρ_τ, y ∼ρ_υ[k(x, y)] + 𝔼_y, y^'∼ρ_υ[k(y, y^')],

where τ∈ℬ, υ∈ℳ, and ρ_τ and ρ_υ are the corresponding state-action visitation distributions of τ and υ, respectively. The function k(·, ·) is given by: k(x, y) = K(g(x), g(y)). Function g provides the flexibility to adjust the focus of the MMD distance metric for different aspects, such as state visits, action choices, or both. In our experiments, we measure the MMD distance only concerning a relevant subset of the information contained in each state-action pair and choose this subset to be the coordinate c of the center of mass (CoM); i.e., the function g maps a state-action pair (s,a) to c. Moreover, we believe it should also make sense to let g(s,a)=(c,a), although it may require us to design a new kernel function K(·,·). Finally, we define the distance D(x, ℳ) of the state-action pair x = (s, a) to replay memory ℳ as follows:

D(x, ℳ) = 𝔼_τ∈ℬ_x[MMD^2(τ, ℳ, ℋ)],

where ℬ_x = {τ | x∈τ, τ∈ℬ}, and MMD^2(τ, ℳ, ℋ) is defined by:

MMD^2(τ, ℳ, ℋ) = min_υ∈ℳ MMD^2(τ, υ, ℋ).

To emphasize that the distance is defined based on the MMD, we add the subscript MMD to the symbol D in Eq. (<ref>). The stochastic optimization problem with MMD distance constraints is defined as follows:

max_θ J(θ), s.t. D_MMD(x, ℳ) ≥ δ, ∀ x ∈ℬ,

where J is an ordinary reinforcement learning objective, and δ is a constant MMD constraint boundary. Replay memory ℳ does not change during any epoch of the training process. Instead, replay memory ℳ is updated only with trajectories generated by a suboptimal policy learned after a previous training process. Furthermore, the offline trajectories in ℳ can be collected from human players.
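For concreteness, the following minimal NumPy sketch (the helper names are ours) computes the unbiased estimate and the trajectory-to-memory distance defined above, with a Gaussian kernel applied to the CoM features g(s,a)=c; each trajectory is represented by the array of its g-features.

    import numpy as np

    def gaussian_kernel(X, Y, bandwidth=1.0):
        # Pairwise Gaussian kernel K(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2)).
        sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

    def mmd2(tau, ups, bandwidth=1.0):
        # Unbiased estimate of MMD^2 between two trajectories given as (T, dim)
        # arrays of g(s, a) features; requires at least two samples in each.
        k_xx = gaussian_kernel(tau, tau, bandwidth)
        k_yy = gaussian_kernel(ups, ups, bandwidth)
        m, n = len(tau), len(ups)
        term_xx = (k_xx.sum() - np.trace(k_xx)) / (m * (m - 1))
        term_yy = (k_yy.sum() - np.trace(k_yy)) / (n * (n - 1))
        term_xy = gaussian_kernel(tau, ups, bandwidth).mean()
        return term_xx - 2.0 * term_xy + term_yy

    def mmd2_to_memory(tau, memory):
        # Trajectory-to-memory distance: the minimum MMD^2 over trajectories in M.
        return min(mmd2(tau, ups) for ups in memory)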
The key insight of our study is that it orients its policy away from suboptimal policies by considering incomplete offline demonstrations as references. In contrast to RLfD methods, our method does not require perfect and sufficient demonstrations, which is more realistic in practice.

DIPG <cit.> uses the MMD distance between probability distributions over trajectory spaces induced by the current policy and the previously learned suboptimal policy as a diversity regularization term. However, DIPG simply stacks the states and actions from the first N steps of a trajectory into a single vector, which is not sufficient for measuring the distance between trajectory distributions. Instead, we regard each past suboptimal trajectory as a deterministic policy and use the MMD distance metric to calculate the distance between the state-action visitation distributions induced by the current and past suboptimal trajectories. Consequently, our method can reduce the sampling and computational complexities compared with DIPG.

The constrained optimization problem (<ref>) can be solved using the Lagrange multiplier method, and its corresponding unconstrained form is as follows:

L(θ, σ) = J(θ) + σ∑_x ∈ℬ min{D_MMD(x, ℳ) - δ, 0}.

The unconstrained form in Eq. (<ref>) is intractable because its second term is numerically unstable, and its gradient with respect to the policy parameters is difficult to calculate directly. To address these challenges, we provide a specialized approach to achieve the goal of introducing policy parameters into the second term of the unconstrained optimization problem and balancing the contributions of the RL objective and constraints. According to statistical theory, frequency is an unbiased estimate of probability when the number of samples is sufficiently large. Therefore, when the sample number N of the on-policy buffer ℬ is sufficiently large, the following formula holds:

lim_N →∞ (1/N)∑_x∈ℬ min{D_MMD(x, ℳ)-δ, 0} = 𝔼_x∼ρ_π[min{D_MMD(x, ℳ)-δ, 0}].

Furthermore, the optimization problem (<ref>) is converted into an unconstrained form:

L(θ, σ) = J(θ) + σ𝔼_x∼ρ_π[min{D_MMD(x, ℳ) - δ, 0}],

where σ > 0 is the Lagrange multiplier. Since N is a constant, it is absorbed by choosing an appropriate coefficient σ.

Second, we estimate the gradient of the unconstrained optimization problem with respect to the policy parameters. The first term of the unconstrained problem is an ordinary RL objective, and hence, the gradient of this term can be calculated easily <cit.>. We then derive the gradient of the MMD term in Eq. (<ref>) with respect to the policy parameters θ, which enables us to efficiently optimize the policy. The result is described in the following lemma.

Lemma (Gradient Derivation of the MMD Term). Let ρ_π(s,a) be the state-action visitation distribution induced by the current policy π. Let D(x,ℳ) be the MMD distance between the state-action pair x and replay memory ℳ. Then, if the policy π is parameterized by θ, the gradient of the MMD term of Eq. (<ref>) with respect to the parameters θ is derived as follows:

∇_θ D_MMD = 𝔼_ρ_π(s,a)[∇_θ logπ_θ(a | s) Q_i(s,a)],

where

Q_i(s_t, a_t) = 𝔼_ρ_π(s, a)[∑_l=0^T-t γ^l R_i(s_t+l, a_t+l)], and R_i(s, a) = min{D_MMD(x, ℳ)-δ, 0}.

<ref> presents the derivation of the MMD term gradient. In Algorithm <ref>, we show that this trajectory-constrained exploration strategy can be readily applied to on-policy algorithms, such as PPO <cit.>.
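As an illustration of how the lemma is used, the following PyTorch-style sketch (our own simplification, not the released implementation) folds the clipped distance term min{D_MMD(x, ℳ) - δ, 0} into a Monte-Carlo policy-gradient loss as an extra reward channel scaled by σ:

    import torch

    def tace_policy_loss(log_probs, env_rewards, mmd_distances, delta, sigma,
                         gamma=0.99):
        # log_probs:     (T,) tensor of log pi_theta(a_t | s_t) along one trajectory
        # env_rewards:   (T,) tensor of environmental rewards R_e(s_t, a_t)
        # mmd_distances: (T,) tensor of D_MMD(x_t, M) for each state-action pair
        intrinsic = torch.clamp(mmd_distances - delta, max=0.0)  # R_i from the lemma
        total = env_rewards + sigma * intrinsic
        # Discounted reward-to-go: a Monte-Carlo estimate of Q_e + sigma * Q_i.
        returns = torch.zeros_like(total)
        running = torch.tensor(0.0)
        for t in reversed(range(len(total))):
            running = total[t] + gamma * running
            returns[t] = running
        # REINFORCE-style surrogate; the PPO clipped surrogate is used analogously.
        return -(log_probs * returns.detach()).mean()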
Whereas the PPO algorithm does not maintain a replay memory, our method maintains n trajectories of each past suboptimal policy in replay memory ℳ to compute the D_MMD distance measure. Generally, n=5 is sufficient to achieve satisfactory performance. The batch of on-policy data is stored in buffer ℬ. Moreover, because the MMD distance term is treated as a diversity reward in Lemma <ref>, we believe that it can also be integrated into bootstrapped Q-learning by adding the MMD diversity bonus to both the immediate reward and the next Q value, such as OB2I <cit.>, which deserves further investigation.

DIPG <cit.> considers the diversity gradient during the training process because it adds an MMD regularization term to its objective function. Therefore, its objective function incurs sensitive hyperparameters that cause instability and even lead to failure to solve the specified task. To resolve the contradiction between the RL and diversity objectives, we reformulate the hard-exploration task as a constrained optimization problem, where the optimization goal is formulated by the naive RL objective, and the MMD constraints keep the agent's exploration region away from the offline demonstrations by at least a certain threshold. Under this formulation, the offline trajectories regulate the policy update only when the current trajectories are outside the constraint region, which ensures stability and enables the agent to eliminate the local optima.

§.§ Fast Adaptation Methods

§.§.§ Adaptive Constraint Boundary Adjustment Method

Generally, the constraint boundary δ in Eq. (<ref>) should be environmentally dependent for different tasks, which makes it challenging to determine the size of the parameter δ in various tasks. Moreover, the expected MMD distance of each epoch gradually increases as the training process continues because of the gradient of the MMD term derived in Lemma <ref>. Consequently, it is not conducive for the agent to achieve temporally extended exploration when adopting a constant distance constraint boundary δ during the entire training process. To resolve the problem of an increased MMD distance, we propose an adaptive constraint distance normalization method, inspired by the batch normalization method <cit.>. After the agent generates a batch of N trajectories {τ_i}_i=1^N and stores them in ℬ, the distance d(x, ℳ) = D_MMD(x,ℳ) of each state-action pair x in the on-policy buffer ℬ is calculated in each epoch. We then compute the normalized distance d̂(x, ℳ) according to:

d̂(x, ℳ) = (d(x, ℳ) - 𝔼[D(ℬ)])/√(Var[D(ℬ)]),

where the expectation 𝔼[D(ℬ)] and variance Var[D(ℬ)] are computed over the distance set D(ℬ) = {d(x_i, ℳ) | x_i∈ℬ} of all state-action pairs in ℬ in every epoch.

Intuitively, our proposed policy gradient updates the policy parameters to drive the agent to visit underexplored state-action pairs while increasing the return value of the agent. Hence, the expected MMD distance of state-action pairs in ℬ for each epoch increases gradually during the training process. Distance normalization enables us to adjust the distance constraint boundaries dynamically because the true distance boundaries represented by the normalized parameter δ adaptively change for each epoch. With distance normalization, the problem of determining the parameter δ becomes relaxed and environmentally independent. In our experiments, we usually choose δ=0.5 as the distance constraint boundary. We compare the performance of our algorithm with different parameter choices in Section <ref>.
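The per-epoch normalization above and the σ-update rule of the adaptive scaling method described in the next subsection can be sketched as follows (the constants 1.05, 0.98, and 1.2 are those reported there; the function names and the small eps guarding against zero variance are ours):

    import numpy as np

    def normalize_distances(distances, eps=1e-8):
        # Standardize d(x, M) over all state-action pairs collected this epoch.
        return (distances - distances.mean()) / (np.sqrt(distances.var()) + eps)

    def update_sigma(sigma, traj_mmds, eps_thresh, hit_suboptimal_goal):
        # traj_mmds: array of MMD(upsilon, M) for every trajectory in buffer B.
        if hit_suboptimal_goal:
            # The current policy reached the same sparse reward as stored trajectories.
            return 1.2 * sigma
        if (traj_mmds <= eps_thresh).any():
            # Some trajectory is too close to the replay memory: strengthen the term.
            return 1.05 * sigma
        if (traj_mmds >= 2.0 * eps_thresh).all():
            # All trajectories are far from the replay memory: relax the term.
            return 0.98 * sigma
        return sigma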
The experimental results demonstrate that this method can stabilize the training process and enable the agent to achieve better temporally extended exploration.

§.§.§ Adaptive Scaling Method

Although the value of the Lagrange multiplier σ can be updated by gradient ascent, we find that this method is less than ideal in some cases, as demonstrated by the experimental results presented in Section <ref>. To solve this problem, similar to <cit.> and <cit.>, we propose an adaptive scaling method (ASM) based on the MMD distance to adjust the contributions of the ordinary RL objective and the MMD term to the gradient. We associate σ with the MMD distance metric D_MMD and adaptively increase or decrease the value of σ depending on whether the MMD distance D_MMD between current trajectories and previous suboptimal trajectories is below or above a certain distance threshold ϵ. Different values of the threshold ϵ are adopted by different methods in our experiments. The simple approach employed to update σ for each training iteration <cit.> is given by:

σ = 1.05σ, if ∃υ∈ℬ s.t. MMD(υ, ℳ) ≤ ϵ,
    0.98σ, if ∀υ∈ℬ: MMD(υ, ℳ) ≥ 2ϵ,

where MMD(υ, ℳ) := min_τ∈ℳ MMD(υ, τ). In addition, if trajectories generated by the current policy lead to the same sparse reward as previously stored trajectories, then we adjust the value of σ to 1.2 times the current value, i.e., σ = 1.2σ. It is worth noting that the values of 1.05, 0.98, and 1.2 are selected empirically. However, they have a certain universality, and we use these values in all the experiments of this study.

§ THEORETICAL ANALYSIS OF TACE PERFORMANCE BOUNDS

In this section, we present the theoretical foundation of TACE and demonstrate how it improves the exploratory performance of the agent. We derive a novel bound for the difference in returns between two arbitrary policies under the proposed MMD constraints. This result can be viewed as an extension of the previous work <cit.> on the new constrained policy search problem in Eq. (<ref>). Against this background, this section provides some guarantees of performance improvement and helps us better understand our proposed algorithms at a theoretical level.

The D_MMD gradient estimation in Eq. (<ref>) in Lemma <ref> is similar to the policy gradients introduced in <cit.>, except that environmental rewards R_e(s, a) are replaced by the MMD-based diversity constraints min{D_MMD(x, ℳ)-δ, 0}. Thus, TACE considers the long-term effects of constraints while first guaranteeing that the constraints are valid. When the MMD gradient of Lemma <ref> is integrated with the gradient of J(θ) to update the policy parameters, the final gradient g_θ for the parameter update is expressed as

g_θ = 𝔼_ρ_π(s,a)[∇_θ logπ_θ(a | s)(Q_e(s, a) + σ Q_i(s,a))].

Due to the similarity between the forms of the D_MMD gradient and the RL gradient of J(θ), the MMD-based diversity constraints min{D_MMD(x, ℳ)-δ, 0} can be viewed as an intrinsic reward R_i(s, a) for each state-action pair and be integrated with the environmental rewards as follows:

Q_e(s_t, a_t) + σ Q_i(s_t,a_t) = 𝔼_s_t+1, a_t+1, …[∑_l=0^∞γ^l (R_e(s_t+l, a_t+l) + σ R_i(s_t+l, a_t+l))],

where R_e(s_t+l, a_t+l) + σ R_i(s_t+l, a_t+l) can be viewed as the total reward R(s,a). The following theorem connects the difference in returns between two arbitrary policies to their average variational divergence.

Theorem (Performance Bound of the Trajectory-Constrained Exploration Strategy). For the proposed constrained optimization problem of Eq. (<ref>),
for any policies π and π^', define

δ_f(s,a,s') ≐ R_e(s, a) + σ R_i(s, a) + γ f(s') - f(s),
ϵ_f^π^' ≐ max_s |𝔼_a ∼π^'[δ_f(s,a,s^')]|,
T_π,f(π') ≐ 𝔼_s∼ d^π, a∼π, s^'∼ P[(π'(a|s)/π(a|s) - 1) δ_f(s,a,s^')], and
D_π,f^±(π') ≐ T_π,f(π')/(1-γ) ± (2γϵ_f^π'/(1-γ)^2) 𝔼_s ∼ d^π[D_TV(π'||π)[s]],

where s^' ∼ P(·| s,a), R_i(s, a) is the MMD-based distance reward min{D_MMD(x, ℳ)-δ, 0}, and γ is the discount factor. D_TV(π'||π)[s] = (1/2)∑_a |π'(a|s) - π(a|s)| is used to represent the total variational divergence between the action distributions of π and π^' at state s. The following bounds hold:

D_π,f^+(π') ≥ L(θ^', σ) - L(θ, σ) ≥ D_π,f^-(π').

Furthermore, the bounds are tight (when π' = π, all three expressions are identically zero). Here, L(·, ·) is defined in Eq. (<ref>), and σ is the Lagrange multiplier.

Before proceeding, it is worth noting that Theorem <ref> is similar to Theorem 1 of <cit.>. When choosing σ=0, the result of this theorem degenerates into Theorem 1 of <cit.>. According to Lemma <ref>, our approach transforms the constrained optimization problem into an unconstrained policy search task. In this manner, it considers the long-term effects of constraints on returns. Moreover, different from Theorem 1 in <cit.>, our method derives a new performance bound D_π,f^±(π^') based on δ_f(s,a,s') ≐ R_e(s, a) + σ R_i(s, a) + γ f(s') - f(s). Hence, this theorem can be used to analyze the effectiveness of our approach in improving exploration. By bounding the expectation of the total variational divergence 𝔼_s∼ d^π[D_TV(π'||π)[s]] with max_s[D_TV(π'||π)[s]] and picking f(s) to be the value function V(s) computed with the total reward R(s,a), the following corollary holds:

Corollary. For any policies π and π^', with ϵ^π' ≐ max_s |𝔼_a ∼π'[A(s,a)]|, in which A(s,a) is the advantage function calculated with the total reward R(s, a) ≐ R_e(s, a) + σ R_i(s, a), the following bound holds:

L(θ^', σ) - L(θ, σ) ≥ (1/(1-γ)) 𝔼_s ∼ d^π, a ∼π'[A(s,a) - (2γϵ^π'/(1-γ)) D_TV(π'||π)[s]].

Here, L(·, ·) is defined in Eq. (<ref>), γ is the discount factor, and σ is the Lagrange multiplier.

The bound in Corollary <ref> can be regarded as the worst-case approximation error. The TV-divergence and KL-divergence are related by D_TV(p||q) ≤ √(D_KL(p||q)/2) <cit.>. Combining this inequality with Jensen's inequality, we obtain:

𝔼_s ∼ d^π[D_TV(π'||π)[s]] ≤ 𝔼_s ∼ d^π[√((1/2) D_KL(π'||π)[s])] ≤ √((1/2)𝔼_s∼ d^π[D_KL(π'||π)[s]]).

It is worth mentioning that the advantage A(s,a) can be decomposed as the sum of the environmental advantage A_e(s,a) and the MMD-based advantage A_i(s,a), which is expressed as:

A(s,a) = A_e(s,a) + A_i(s,a).

Substituting Eq. (<ref>) and (<ref>) into Eq. (<ref>), we obtain the following corollary for determining the value of σ such that the worst-case approximation error in Eq. (<ref>) is greater than the threshold Δ:

Corollary. Suppose a performance improvement threshold Δ for any policies π and π^', and π and π^' satisfy 𝔼_s ∼ d^π[D_KL(π'||π)[s]] ≤ η and 𝔼_s ∼ d^π, a ∼π'[A_i(s,a)] ≥ β > 0. When (1-γ)Δ - 𝔼_s ∼ d^π, a ∼π'[A_e(s,a)] - √(2η)γϵ^π^'(1-γ)^-1 > 0, then if

σ ≥ 𝔼_s ∼ d^π, a ∼π'[A_i(s,a)]^-1 [(1-γ)Δ - 𝔼_s ∼ d^π, a ∼π'[A_e(s,a)] - √(2η)γϵ^π^'/(1-γ)],

we have L(θ^', σ) - L(θ, σ) ≥ Δ. Here, L(·, ·) is defined in Eq. (<ref>), γ is the discount factor, and σ is the Lagrange multiplier.

This corollary illustrates the feasibility of the adaptive scaling method in Section <ref>. According to Corollary <ref>, by choosing suitable parameters σ and δ, we can make L(θ^', σ) - L(θ, σ) ≥ Δ hold.
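For completeness, the substitution step behind the last corollary reads:

L(θ^', σ) - L(θ, σ) ≥ (1/(1-γ))(𝔼_s∼ d^π, a∼π'[A_e(s,a)] + σ𝔼_s∼ d^π, a∼π'[A_i(s,a)] - (2γϵ^π'/(1-γ))√(η/2)),

so requiring the right-hand side to be at least Δ and solving for σ yields exactly the threshold above, since 2√(η/2) = √(2η).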
Note that when 𝔼[A_e(s,a)] decreases, for example, if the agent adopts a single behavioral pattern and learns a suboptimal policy, a larger lower bound of σ is calculated. In this manner, our approach helps the agent avoid myopic behaviors and drives it to stay in the feasible region. In practice, even if we use the minimal value of σ recommended by the corollary above, σ must still be very large to obtain the performance improvement of Δ. Moreover, σ is only computed when the parameters of the policy have been updated. Hence, determining the value of σ before the update of the policy parameters in each iteration is difficult. One way to take a smaller value of σ robustly is to use the heuristic adaptive scaling method.

§ EXPERIMENTAL SETUP

§.§ Environments

Gridworld. We evaluated the performance of the TCPPO (PPO with TACE) algorithm in tasks with discrete state and action spaces, as shown in Fig. <ref>fig:discrete_maze. In this experimental setting, the agent starts from the bottom-left corner of the map, and the optimal goal with the highest reward of 6 is located in the top-right corner. Moreover, there is a suboptimal goal with a relatively small reward of 1 on the right side of the initial position that can be accessed by the agent more easily. The deceptive reward provided by the suboptimal goal can easily distract the agent from finding the goal with the highest reward in the top-right corner. At each time step, the agent observes its coordinates relative to the starting point and chooses from four possible actions: moving east, south, west, and north. An episode terminates immediately once the agent reaches either of the two goals or the maximum number of steps for an episode is exceeded.

Deceptive Reacher. To test the proposed TCPPO method in continuous robotic settings, we used a variant of the classic 3D-Reacher environment with configurable obstacles and misleading rewards <cit.>. In this problem, as shown in Fig. <ref>fig:deceptive_reacher, a two-joint robot arm <cit.> attempts to move its end effector (fingertip) close to the red target position to obtain an optimal reward of 60. Instead, the end effector can obtain a small deceptive reward of 10 more easily by entering the box. At the start of a new episode, the robot arm is spawned at a random position sampled from a specific range. The agent's observation space consists of the angles and angular velocities of the two arms and the coordinates of the reacher's fingertip. Furthermore, the actions performed by the agent are sampled from a two-dimensional continuous action space <cit.>.

Hierarchical Control Tasks. Two MuJoCo mazes with continuous state-action spaces were adapted from the benchmarking hierarchical tasks used in <cit.> and <cit.>, which were used to test TCHRL (SNN4HRL with TACE). The observation space of the agent in these tasks is composed of the internal information S_a of the agent, such as the agent's joint angles, and the task-specific characteristics S_e, for example, walls, goals, and other objects seen through a range sensor. These robots are described in <cit.>. In these tasks, the agent is rewarded for reaching a specified position in the maze, as shown in Fig. <ref>. The problem of sparse rewards and long horizons continues to pose significant challenges to RL because an agent rarely obtains nonzero rewards; therefore, the gradient-based optimization of RL for parameterized policies is incremental and slow.

As shown in Fig.
<ref>fig:maze_0, the structure of Maze 0 is the same as that introduced in <cit.>, except that our maze is larger in scale and has two goals placed in the upper-left and upper-right rooms. The agent receives a reward of 60 for reaching the goal in the top-left room and 50 for reaching the other goal. The agent is initially positioned in the bottom-right room of this maze.

Compared to Maze 0, Maze 1 has a different structure and more goals. Maze 1, shown in Fig. <ref>fig:maze_1, has three different goals, located at the top-left corner, bottom-left corner, and right side of the maze. The agent obtains rewards of 90, 60, and 30 for reaching these goals, respectively, and its initial position is near the bottom-left corner. The agent can be more easily distracted from finding the optimal goal by the two suboptimal goals.

Multi-Agent Control Tasks. TCMAE (Multi-Agent Exploration with TACE) was evaluated in two challenging environments: (1) a discrete version of the multiple-particle environment (MPE) <cit.>; and (2) a modified MuJoCo continuous control task, SparseAnt Maze. In both environments, sparse rewards are only collected when agents reach the specified locations. Moreover, agents can receive deceptive suboptimal rewards from targets that are closer to them. As shown in Fig. <ref>fig:ma_grid, two agents operate within the room of a 70×70 grid in the MPE environment. There is a suboptimal goal with a deceptive reward near the initial position, and the optimal goal is located in the top-right corner of the grid. The agents in the team can receive reward signals only when they reach any goal in the environment. The action space of each agent consists of four discrete actions: moving east, south, west, and north, similar to the grid-world environment. The agent team hopes to get rid of the suboptimal goal and cooperatively collect the optimal reward by sharing trajectory information. In Fig. <ref>fig:ma_ant, two MuJoCo Ant agents are rewarded for reaching a specified position in a maze; other than that, they cannot receive any reward signal. The ant agents start at the same location and cooperatively explore the environment, similar to the agents in the MPE task. The observation space of the ant agent is composed of the internal information S_a of the agent and the task-specific characteristics S_e, as described in the hierarchical control tasks.

§.§ Neural Architectures and Hyperparameters

For grid world tasks, all policies were trained with a learning rate of 0.000018 and a discount factor of 0.99. All neural networks of the policies were represented with fully connected networks that have two layers of 64 hidden units. The batch size and maximum episode length in the 50×50 grid world maze were 8 episodes and 160, respectively. Meanwhile, the batch size was also 8 episodes in the 70×70 grid world maze, while the maximum episode length was 220. The initial value of σ is set to 0.5 for these two tasks.

For the deceptive reacher task, all policies were trained with a learning rate of 0.000006 and a discount factor of 0.99. We used fully connected networks that have two layers of 64 hidden units to implement the agent policy. The batch size and maximum episode length were 8 episodes and 150 steps, respectively. The initial value of σ is set to 0.5 for TCPPO.

For the MuJoCo maze tasks, we used an SNN to learn pre-trained skills that were trained by TRPO. The number of pre-trained skills (i.e., the dimensions of the latent code fed into the SNN) was six.
A high-level policy is implemented using a fully connected neural network trained by PPO. All the neural networks (SNN and fully connected networks) had two layers of 64 hidden units. The other settings were identical to those in <cit.>. The initial value of the parameter σ is set to 0.4.

For the discrete MPE environment, all neural networks of the policies were implemented with fully connected networks that have two layers of 64 hidden units. The learning rate was 0.000018, and the discount factor was 0.99. The batch size and maximum episode length were 8 episodes and 240 steps. For the SparseAnt maze task, the network structure is the same as that of the discrete MPE task. The learning rate is 0.001, and the discount factor is 0.99. The batch size and maximum episode length were 30 episodes and 500 steps.

§.§ Baseline Methods

The baseline methods used for performance comparisons varied for different tasks. For discrete control tasks, we compared TCPPO with the following baseline methods: (1) DIPG <cit.>, (2) vanilla PPO <cit.>, (3) Noisy-A2C: the noisy network <cit.> variant of A2C <cit.>, (4) Div-A2C: A2C with the diversity-driven exploration strategy <cit.>, (5) RIDE <cit.>, and (6) NovelD <cit.>. DIPG and Div-A2C promote agent exploration by adding a diversity regularization term to the original RL objective function. PPO is an on-policy RL method derived from TRPO, which is a practical algorithm that addresses continuous state-action spaces. Noisy-A2C is a variant of A2C that adopts noisy networks to increase the randomness of the action outputs. RIDE and NovelD design novel intrinsic reward functions for driving deep exploration in the sparse setting.

For hierarchical continuous control tasks, we compared TCHRL with the following baseline methods: (1) PPO, (2) SNN4HRL <cit.>, (3) DIPG-HRL, and (4) HAC <cit.>. SNN4HRL is a state-of-the-art skill-based HRL method. DIPG-HRL is a combination of DIPG <cit.> and SNN4HRL, where we use the DIPG objective function to train a high-level policy based on pre-trained skills. Similar to our method, SNN4HRL and DIPG-HRL share the same set of pre-trained low-level skills as TCHRL. However, the pre-trained skills are unadaptable during the training processes of SNN4HRL and DIPG-HRL. Instead, TCHRL sets auxiliary MMD distance rewards for low-level skill training to enable efficient and simultaneous learning of high-level policies and low-level skills without using task-specific knowledge. HAC is a standard end-to-end subgoal-based HRL baseline without delicate techniques of subgoal discovery or quantization.

In multi-agent control tasks, TCMAE is compared with the following baseline methods: (1) SAC <cit.>, (2) QMIX <cit.>, (3) EMC <cit.>, (4) MAPPO <cit.>, and (5) IPPO <cit.>. QMIX is a novel value-based method that trains decentralized policies for multiple agents in a centralized end-to-end learning fashion. EMC is a curiosity-driven exploration method for deep cooperative multi-agent reinforcement learning (MARL). MAPPO is a PPO-based algorithm with centralized value function inputs for MARL. IPPO represents the independent PPO algorithm, where each agent has local inputs for both the policy and value function networks. Recently, MAPPO and IPPO were revisited by <cit.>, which demonstrated that they can achieve competitive or superior results in various challenging tasks.

§ EVALUATION OF RESULTS

We evaluate the proposed method using several discrete and continuous control tasks.
Discrete control tasks comprise 2D grid world environments of different sizes. The continuous control tasks consist of simulated robotic environments based on MuJoCo <cit.>. Furthermore, we directly compare the proposed approach with other state-of-the-art algorithms. §.§ Grid-world Task §.§.§ Performance Comparisons with Baseline Methods In this experiment, we combined our trajectory-constrained exploration strategy with the PPO algorithm <cit.> to obtain the TCPPO algorithm. We evaluated the trajectory-constrained exploration strategy using 2D grid worlds of two different sizes: 50 × 50 and 70 × 70. The performances of the baseline methods and TCPPO are reported in terms of their average returns in Table <ref>. Some results for the 70× 70 maze are presented in Fig. <ref>. All curves in Fig. <ref> were obtained by averaging over eleven different random seeds, and for clarity, the shaded error bars represent 0.35 standard errors. Furthermore, we plotted the state-visitation count graphs of each method (Fig. <ref>) in the 70 × 70 grid world, which illustrate the differences in the exploration behaviors of the different agents. As shown in Table <ref> and Fig. <ref>, the results of TCPPO outperform those of the other methods in these two 2D grid worlds of different sizes. Specifically, TCPPO learns faster and achieves higher average returns. The average return of RIDE increases dramatically, and the agent quickly learns to reach the optimal goal. However, the convergent values of both its average return and success rate are inferior to those of TCPPO. These results verify that TCPPO prevents the agent from adopting myopic behaviors and drives deep exploration. From the state-visitation count graphs (see Fig. <ref>), the four baseline approaches visited only a small part of the state space and rarely collected higher rewards. However, Fig. <ref> shows that the TCPPO method enhances the exploring behavior of the agent and promotes the agent to explore a wider region of the 2D grid world. Consequently, our method helps the agent escape from the region in which the deceptive reward is located and successfully reach the optimal goal with a higher score. Furthermore, we compare the changing trends of the MMD distances of the different methods in Fig. <ref>. According to Fig. <ref>, TCPPO increases the MMD distance between the old and current policies during the training process. Note that DIPG does not learn the optimal policy, but still results in a larger MMD distance, similar to TCPPO. §.§.§ Analysis of Adaptive Scaling Method The adaptive scaling method for the Lagrange multiplier σ serves as a critical component of our proposed trajectory-constrained exploration strategy. It ensures that the agent remains within the feasible region by increasing the Lagrange multiplier if there are state-action pairs outside the feasible region. As shown in Eq. (<ref>), we also adopt a naive linear decay method to stabilize the training process, because keeping the Lagrange multiplier σ at a large constant value throughout the whole training process hinders learning. Table <ref> lists the influence of the adaptive scaling method on the agent performance. The experimental results illustrate that the adaptive scaling method enables the agent to gain a higher average return and outperform other methods during training. Fig. <ref> shows the changing trend of the Lagrange multiplier σ during the training process.
At the beginning of the training phase, the agent occasionally encounters a suboptimal goal, which is the same goal that the demonstration trajectories lead to. According to our design in Section <ref>, our method drastically increases the value of the parameter σ to force the agent away from the local maximum. After several episodes, the agent escapes the suboptimal goal and gradually explores more diverse regions of the state-action space, wandering between the areas of radius ϵ and 2ϵ. After about 200 episodes, the value of the parameter σ decays exponentially, albeit slowly, which illustrates that the agent always stays outside the area with a radius of 2ϵ. Therefore, our adaptive scaling method significantly improves the efficiency of exploration and protects the agent from falling into a local maximum and adopting myopic behaviors. §.§ Deceptive Reacher Task A variant of the classic two-jointed robot arm environment was used to test the proposed method on a continuous robotic control problem. In this deceptive reacher task, we compared our proposed TCPPO algorithm with the same baseline methods used in the grid world task. In this task, to fulfill our design, a replay memory was maintained to store the suboptimal trajectories. Furthermore, we found that it is sufficient to store no more than five trajectories in the replay memory. We report the learning curves of all the methods in Fig. <ref> in terms of the average return and success rate. All learning curves were obtained by averaging the results generated with different random seeds, and the shaded error bars represent the standard errors. Compared with the other baseline methods, our approach succeeded in moving out of the box with deceptive rewards and did not adopt myopic behaviors. Therefore, TCPPO can learn faster and achieve higher return values at the end of the training. RIDE fully explored the environment and quickly reached the optimal goal during training. In contrast, the other baseline methods rarely encountered the optimal goal and always fell into the local optimum by collecting deceptive rewards from the box. PPO is not aimed at long-horizon sparse-reward problems; consequently, the success rate of PPO in this continuous control task was always zero. Noisy-A2C and Div-A2C are designed for efficient exploration and occasionally generate trajectories with the optimal rewards. However, our results indicate that these two algorithms do not eliminate the myopic behaviors and cannot learn the optimal policy after training. §.§ Performance in Four-Room Maze §.§.§ Results and Comparisons In this task, we compared the TCHRL algorithm with the state-of-the-art skill-based method SNN4HRL <cit.>. In addition, we conducted comparative experiments using a DIPG <cit.> variant that is combined with SNN4HRL (denoted DIPG-HRL). In this algorithm, DIPG is only used to train the high-level policy and not to adapt the pre-trained skills along with the high-level policy during the training process. We maintain empty replay memories ℳ for DIPG-HRL and TCHRL at the beginning of the training. When ℳ is empty, TCHRL and DIPG-HRL degenerate into SNN4HRL. The number of prior suboptimal trajectories, which are generated by the same previous policy, is at most n. In our experiments, n=5 was sufficient to produce satisfactory performance. In the worst case, an agent learns the optimal policy only after it learns all the suboptimal policies and stores all the corresponding trajectories in ℳ.
Consequently, the agent may need to sequentially train g different policies in a complete training session, where g represents the number of goals in the maze. For fairness, we trained the PPO agent the same number of times. We plotted the statistical results of the different methods based on different goals. All the curves were obtained by averaging over different random seeds, and the shaded error bars represent the confidence intervals. Note that the learning curves shown in Fig. <ref> are drawn when the replay memory stores the previous suboptimal trajectories leading to a suboptimal goal. In Fig. <ref>, we compare the different methods from the two aspects of average return and success rate. Our method is superior to the other methods in both aspects. Specifically, PPO is not designed for long-horizon tasks with sparse rewards; hence, the success rate of PPO in achieving the optimal goal is zero in this maze. Although HAC is a subgoal-based HRL algorithm, HAC cannot find the goals or obtain any sparse reward according to Fig. <ref>, which may be caused by the sensitivity of the goal space design <cit.>. SNN4HRL learned only the myopic policy leading to the suboptimal goal and was rewarded with 50 during training, which indicates that the agent was trapped in the local optimum. DIPG-HRL reached the optimal goal at the top-left corner with a high percentage. Moreover, it can be seen from Fig. <ref> that its success rate in reaching the global optimal goal is lower than that of TCHRL, and it learns more slowly. Compared with SNN4HRL and DIPG-HRL, the experimental results in Fig. <ref> demonstrate that TCHRL drives deep exploration and avoids suboptimal and misguided behaviors, thereby attaining a higher score and learning rate. §.§.§ Effect of the Distance Normalization In this section, we examine the influence of the distance normalization method on the performance of agents in continuous control tasks. We chose different values of the parameter δ=0, 0.5, and 0.75 as the boundary constraints for comparative experiments. The TCHRL algorithm without distance normalization is viewed as a basic comparison baseline, in which the value of the parameter δ needs to be chosen manually according to the different experimental settings. We use “none” to represent this method in Fig. <ref>. The experimental results reported in Fig. <ref> demonstrate that, by simply setting δ=0.5, TCHRL achieved the best results among the four different parameter settings. Hence, the distance normalization method can reduce the dependence of the parameter δ on the environment and stabilize the learning process. §.§ Multiple-Goal Maze Task §.§.§ Performance Comparisons We compared our algorithm with the state-of-the-art skill-based HRL method SNN4HRL <cit.> and with DIPG-HRL in a multiple-goal maze task. When the replay memory ℳ maintains trajectories leading to suboptimal goals, TCHRL encourages the agent to generate new trajectories that visit novel regions of the state-action space, gradually expanding the exploration range. As shown in Fig. <ref>, the TCHRL agent collects trajectories ending with the optimal goal and learns the policy to collect optimal rewards. In the early training phase, the average return of the TCHRL algorithm was smaller than that of SNN4HRL because our approach prefers to explore the environment more systematically at the beginning of training. Moreover, TCHRL gradually adapted its pre-trained skills during the training process.
Hence, TCHRL does not immediately adopt myopic behaviors that obtain deceptive rewards more easily. All the curves are obtained by averaging over different random seeds, and the shaded error bars represent the confidence intervals. As shown in Fig. <ref>, PPO is not an algorithm specifically designed for long-horizon tasks with sparse or deceptive rewards. Therefore, the success rate of PPO in this maze was zero, as it was in Maze 0. HAC did not receive any reward or learn meaningful policies, which is consistent with the previous results in Maze 0. SNN4HRL was rewarded with 30 from the suboptimal blue goal, and DIPG-HRL reached the suboptimal green goal and obtained a reward of 60. Neither received the reward with the highest score from the red goal. Therefore, this result verifies that SNN4HRL and DIPG-HRL cannot fully explore the environment, always adopt myopic behaviors, and learn suboptimal policies. In contrast, our TCHRL can escape suboptimal behaviors and reach the global optimal goal. The MMD distances between the different trajectories that reached the two suboptimal goals were larger than those between the other two trajectories. Because DIPG uses a regularization term based on trajectory distributions, it is difficult for DIPG to adjust the contributions of the RL objective and the regularization term. Therefore, the DIPG-HRL agent tends to learn to reach another suboptimal goal if it has already learned a suboptimal policy. Notably, these trajectory distributions are induced by the corresponding policies, and additional trajectory data are required to achieve a good estimation of these distributions. Moreover, DIPG-HRL does not adapt the parameters of the pre-trained skills along with the high-level policy, which further degrades the performance of the algorithm. This point is discussed in detail in the following section. In this manner, we illustrate the inability of the DIPG-HRL agent to learn the optimal policy when the replay memory ℳ maintains trajectories that lead to two suboptimal goals. §.§.§ Adaptation of Pretrained Skills To further explain why TCHRL achieves such excellent performance compared with other baseline methods, we conducted a more in-depth study of the experimental results. TCHRL is a skill-based HRL method. To train the TCHRL agent, we first use stochastic neural networks to learn diverse low-level skills <cit.>. In Fig. <ref>, we compare the low-level skills before and after training in the Swimmer Maze task. The swimmer agent was always initialized at the center of the maze and used a single skill to travel for a fixed number of timesteps, where the colors indicate the latent code sampled at the beginning of the rollout. Each latent code generates a particular interpretable behavior. Given that the initialization orientation of the agent is always the same, different skills are truly distinct ways of moving: forward, backward, or sideways. Many other existing HRL algorithms either use pre-trained skills that are not adaptable or require domain-specific information to define the low-level rewards. In contrast, TCHRL simultaneously adapts low-level skills to downstream tasks while maintaining the generality of the low-level reward design by setting auxiliary rewards for low-level skill training based on the MMD distance. Comparing Fig. <ref> with Fig. <ref>, we note that the swimmer agent learns to turn right (skill in baby blue) and left (skill in navy blue). It is favorable to perform these two skills in the maze task in Fig.
<ref> when attempting to reach the green and blue goals. Therefore, the adaptation of pre-trained skills distinctly leads to more diverse skills and effectively drives the agent to explore a wider range of the state space, which, based on the experimental results, is beneficial for the downstream tasks. §.§ Results of Multi-Agent Tasks §.§.§ Performance Comparisons in the Discrete MPE Environment In this discrete MPE environment, TCMAE was used to encourage the multi-agent team to fully explore the grid world, and its experimental results were compared with those of the baseline methods. In this task, to fulfill our design, the replay memory of each agent was maintained to store the suboptimal trajectories. The agents of the team shared trajectory information during each epoch. We report the learning curves of all the methods in Fig. <ref>. All learning curves were obtained by averaging the results generated with different random seeds, and the shaded error bars represent the standard errors. Since TCMAE focuses on enhancing the exploration of the multi-agent team, we choose to report the results of the agents that generate the highest return values and learning rates. As shown in Fig. <ref>, TCMAE achieves competitive or superior performance in terms of average return and success rate compared to the other baseline methods. The large performance gap can be observed from Figs. <ref> and <ref>. Due to its exploratory learning behavior, the learning rate of EMC is significantly slower than that of TCMAE, and the learning process of EMC is more unstable than that of TCMAE. QMIX and IPPO can occasionally find the optimal goal with a simple heuristic exploration strategy, which can lead to inefficient exploration, especially for IPPO. The experimental results demonstrate the great capability of TCMAE to help the multi-agent team explore the environment and reach the optimal goal. §.§.§ Experimental Results of the SparseAnt Maze Task To evaluate TCMAE in environments with continuous state-action spaces, we designed a SparseAnt maze task with sparse and deceptive rewards and compared the performance of TCMAE with several baseline methods. The agents of the team shared the good trajectory information during the training process, which can be used to compute the MMD-based intrinsic reward. We report the learning curves of all the methods in Fig. <ref>. All learning curves were obtained by averaging the results generated with different random seeds, and the shaded error bars represent the 95% confidence intervals. The available code of QMIX and EMC released by the authors can only be applied to environments with discrete action spaces; hence, they cannot be used as baseline methods in the SparseAnt maze task. In Fig. <ref>, the experimental results are reported in terms of average return and success rate. TCMAE achieves a remarkable performance level, and its results considerably outperform the baseline methods. ISAC did not obtain a significant policy performance improvement after training and could not find a policy to collect the optimal rewards. Noticeably, MAPPO can achieve competitive results in both performance metrics; however, its final results are inferior to those of TCMAE, and its learning rate is slower than that of TCMAE. We also compared TCMAE with the multi-agent RL algorithm IPPO to further demonstrate the effectiveness of TCMAE in facilitating multi-agent exploration.
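All of the comparisons above rely on the trajectory-level MMD distance between the current policy's rollouts and the suboptimal trajectories stored in the replay memory. Since the released code is not reproduced in this text, the following minimal Python sketch illustrates one standard way to estimate this quantity with a Gaussian kernel; the function names, the kernel choice, and the bandwidth are our own illustrative assumptions, not the authors' implementation.

    import numpy as np

    def gaussian_kernel(X, Y, bandwidth=1.0):
        # Pairwise Gaussian kernel matrix between two sets of
        # state-action pairs, shapes [T, d] and [M, d].
        sq = (np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :]
              - 2.0 * X @ Y.T)
        return np.exp(-sq / (2.0 * bandwidth**2))

    def mmd_distance(tau, memory, bandwidth=1.0):
        # Biased MMD estimate between a trajectory tau and the union
        # of the trajectories stored in the replay memory.
        k_xx = gaussian_kernel(tau, tau, bandwidth).mean()
        k_yy = gaussian_kernel(memory, memory, bandwidth).mean()
        k_xy = gaussian_kernel(tau, memory, bandwidth).mean()
        return np.sqrt(max(k_xx + k_yy - 2.0 * k_xy, 0.0))

In practice, the bandwidth can be set by the median heuristic over the pairwise distances, and the clipped intrinsic reward of Lemma <ref> is then min{D_MMD - δ, 0}.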
§ CONCLUSION In this study, we present a trajectory-constrained exploration strategy for long-horizon tasks with large state spaces and sparse or deceptive rewards. We propose to promote the agent's exploration by treating incomplete offline demonstration data as references, and we demonstrate that this goal can be achieved by introducing an effective distance metric to measure the disparity between different trajectories. We reformulated the policy optimization for RL as a constrained optimization problem, which enhances the agent's exploration behavior and avoids sensitive hyperparameters. Subsequently, we developed a novel policy-gradient-based algorithm with adaptive clipped trajectory-based distance rewards. Our method can be effectively combined with non-hierarchical and hierarchical RL methods and can continuously adapt pre-trained skills along with high-level policies when the agent employs a hierarchical policy. Furthermore, we introduced an adaptive scaling method and a distance normalization strategy to achieve better performance. The proposed trajectory-constrained strategy was evaluated in large 2D grid worlds and MuJoCo maze environments, and the experimental results show that our method outperforms other baseline algorithms in terms of improving exploration efficiency in large state spaces and avoiding local optima. Our method encourages agents to visit underexplored regions by considering imperfect offline demonstrations as references. However, when offline imperfect trajectories are on the way to the optimal goal, our method may ignore the fact that exploiting such experiences can indirectly drive deep exploration. Further research may focus on considering the diverse exploration problem in a teamwork setting and on exploiting imperfect demonstrations to indirectly accelerate learning and drive deep exploration. § REPRODUCING KERNEL HILBERT SPACES This section mainly refers to <cit.>. §.§ Reproducing Kernel Hilbert Spaces An inner product ⟨μ, υ⟩ can be * a dot product: ⟨μ, υ⟩ = υ^'μ = ∑_iυ_iμ_i; * a kernel product: ⟨μ, υ⟩ = k(υ, μ) = ψ(υ)^'ψ(μ) (where ψ(μ) may have infinite dimensions). Obviously, an inner product ⟨·, ·⟩ must satisfy the following conditions: * Symmetry: ⟨μ, υ⟩ = ⟨υ, μ⟩ ∀μ, υ∈𝒳; * Bilinearity: ⟨αμ + βυ, ω⟩ = α⟨μ, ω⟩ + β⟨υ, ω⟩ ∀μ, υ, ω∈𝒳, ∀α, β∈ℝ; * Positive definiteness: ⟨μ, μ⟩≥0 ∀μ∈𝒳, and ⟨μ, μ⟩=0 ⟺μ=0. A Hilbert space is an inner product space that is complete and separable with respect to the norm defined by the inner product. The vector space ℝ^n with the vector dot product ⟨ a, b ⟩ = b^' a, ∀ a, b∈ℝ^n, is an example of a Hilbert space. A map k: 𝒳×𝒳→ℝ is a kernel if * k is symmetric: k(x, y) = k(y, x); * k is positive semi-definite, i.e., ∀ x_1, x_2, …, x_n ∈𝒳, the Gram matrix K defined by K_ij = k(x_i, x_j) is positive semi-definite. A kernel k(·, ·) is a reproducing kernel of a Hilbert space ℋ if ∀ f∈ℋ, f(x) = ⟨ k(x, ·), f(·)⟩. A Reproducing Kernel Hilbert Space (RKHS) is a Hilbert space ℋ with a reproducing kernel whose span is dense in ℋ. Therefore, an RKHS is a Hilbert space of functions in which all evaluation functionals are bounded and linear. §.§ Build a Reproducing Kernel Hilbert Space Given a kernel k, we can define a reproducing kernel feature map Φ: 𝒳→ℝ^𝒳 as Φ(x) = k(·, x). Consider the vector space: span({Φ(x): x∈𝒳}) = {∑_i=1^nα_i k(·, x_i): n∈ℕ, x_i∈𝒳, α_i∈ℝ}. For f = ∑_iα_i k(·, μ_i) and g = ∑_iβ_i k(·, υ_i), define ⟨ f,g⟩=∑_i,jα_iβ_j k(μ_i, υ_j).
Note that ⟨ f, k(·, x)⟩= ∑_i α_i k(x, μ_i) = f(x), i.e., k has the reproducing property. We show that ⟨ f, g⟩ is an inner product on the vector space defined by Eq. (<ref>) by checking the following conditions: * Symmetry: ⟨ f, g⟩ = ∑_i,jα_iβ_j k(μ_i, υ_j) = ∑_i,jβ_jα_i k(υ_j, μ_i) = ⟨ g, f⟩; * Bilinearity: ⟨ f, g⟩ = ∑_i α_i g(μ_i) = ∑_j β_j f(υ_j); * Positive definiteness: ⟨ f, f⟩ = α^' Kα≥ 0, with equality if f=0. Then we can define a new Hilbert space by completing the inner product space defined by ⟨·, ·⟩: For a (compact) 𝒳⊆ℝ^d and a Hilbert space ℋ of functions f: 𝒳→ℝ, we say that ℋ is a Reproducing Kernel Hilbert Space if ∃ k: 𝒳×𝒳→ℝ, s.t. * k has the reproducing property, i.e., f(x) = ⟨ f(·), k(·, x)⟩; * k spans ℋ, i.e., ℋ = span{k(·, x): x∈𝒳}. § PROOF OF LEMMA 1 Let x = (s, a) denote a state-action pair. Let R_i(s, a) be the intrinsic reward function derived from the maximum mean discrepancy, R_i(s, a) = min{D_MMD(x, ℳ) - δ, 0}, and let Q_i(·, ·) be the Q-function calculated using R_i(s, a) as the reward: Q_i(s_t, a_t) = 𝔼[∑_l=0^T-tγ^l R_i(s_t+l, a_t+l)]. We can then easily derive the formula from the policy gradient theorem <cit.>: ∇_θ D_MMD = 𝔼_ρ_π(s, a)[∇_θlogπ_θ(a | s) Q_i(s, a)]. § PROOF OF THEOREM 1 AND TWO RELEVANT COROLLARIES The following lemmas are proved in <cit.>, and we have excerpted them here. The detailed proofs can be found in the appendix of <cit.>. For any function f: S→ℝ and any policy π, (1-γ) 𝔼_{s∼ρ_0}[f(s)] + 𝔼_{s∼d^π, a∼π, s'∼P}[γ f(s')] - 𝔼_{s∼d^π}[f(s)] = 0, where γ is the discount factor, ρ_0 is the starting state distribution, and P is the transition probability function. For any function f: S→ℝ and any policies π' and π, define T_π,f(π') ≐𝔼_{s∼d^π, a∼π, s'∼P}[(π'(a|s)/π(a|s) - 1)(R(s,a) + γ f(s') - f(s))], and ϵ_f^π'≐max_s |𝔼_{a∼π', s'∼P}[R(s,a) + γ f(s') - f(s)]|. Considering the standard RL objective function J(π_θ) = 𝔼_τ[∑_t=0^∞γ^t R(s_t,a_t)], the following bounds hold: J(π') - J(π) ≥1/(1-γ)(T_π,f(π') - 2ϵ_f^π' D_TV(d^π' || d^π)), J(π') - J(π) ≤1/(1-γ)(T_π,f(π') + 2ϵ_f^π' D_TV(d^π' || d^π)), where D_TV is the total variation divergence. Furthermore, the bounds are tight (when π' = π, the LHS and RHS are identically zero). Here, γ is the discount factor and d^π is the discounted future state distribution. The divergence between discounted future state visitation distributions, ||d^π' - d^π||_1, is bounded by an average divergence of the policies π' and π: ||d^π' - d^π||_1 ≤ (2γ/(1-γ)) 𝔼_{s∼d^π}[D_TV(π' || π)[s]], where D_TV(π'||π)[s] = (1/2)∑_a |π'(a|s) - π(a|s)| is the total variation divergence at s. When the MMD gradient of Lemma <ref> is integrated with the gradient of J(θ) to update the parameters of the policy, the final gradient g_θ for the parameter update of Eq. (<ref>) can be expressed as: g_θ = 𝔼_ρ_π(s,a)[∇_θlogπ_θ(a | s)(Q_e(s, a) + σ Q_i(s,a))]. Due to the similarity of form between the D_MMD gradient and the RL gradient of J(θ), the MMD-based constraint min{D_MMD(x, ℳ)-δ, 0} can be viewed as an intrinsic reward r(s, a) = min{D_MMD(x, ℳ)-δ, 0} for each state-action pair and be integrated with the environmental rewards as follows: Q_e(s_t, a_t) + σ Q_i(s_t,a_t) = 𝔼_{s_t+1, a_t+1, …}[∑_l=0^∞γ^l (R_e(s_t+l, a_t+l) + σ R_i(s_t+l, a_t+l))]. Then, using the bounds from Lemma <ref> and bounding the divergence D_TV(d^π' || d^π) by Lemma <ref>, we can easily come to the conclusion. Theorem <ref> is similar to Theorem 1 of <cit.>.
Unlike Theorem 1 in <cit.>, our approach transforms the constrained optimization problem into an unconstrained one and considers the long-term effects of the constraints on the returns. The proposed theorem can be used to analyze the effectiveness of our approach in improving exploration. Let f=Ṽ in Theorem <ref>, where Ṽ is the value function computed with the environmental and MMD-based rewards. Then, we can derive this corollary. The TV-divergence and KL-divergence are related by D_TV(p||q) ≤√(D_KL(p||q)/2) <cit.>. Combining this inequality with Jensen's inequality, we obtain: 𝔼_{s∼d^π}[D_TV(π'||π)[s]] ≤𝔼_{s∼d^π}[√((1/2) D_KL(π'||π)[s])] ≤√((1/2)𝔼_{s∼d^π}[D_KL(π'||π)[s]]). Further, notice that the advantage A(s,a) can be decomposed as the sum of the environmental advantage A_e(s,a) and the MMD-based advantage A_i(s,a), which is expressed by: A_π(s,a) = A_e(s,a) + A_i(s,a). Substituting Eqs. (<ref>) and (<ref>) into the right-hand side of Eq. (<ref>), and supposing that π and π' satisfy 𝔼_{s∼d^π}[D_KL(π'||π)[s]] ≤η, we obtain: 1/(1-γ) 𝔼_{s∼d^π, a∼π'}[A(s,a) - (2γϵ^π'/(1-γ)) D_TV(π'||π)[s]] ≥1/(1-γ)[𝔼_{s∼d^π, a∼π'}[A_e(s,a)] + σ𝔼_{s∼d^π, a∼π'}[A_i(s,a)] - (√(2)γϵ^π'/(1-γ))√(𝔼_{s∼d^π}[D_KL(π'||π)[s]])] ≥1/(1-γ)[𝔼_{s∼d^π, a∼π'}[A_e(s,a)] + σ𝔼_{s∼d^π, a∼π'}[A_i(s,a)] - √(2η)γϵ^π'/(1-γ)]. Let the last term of Eq. (<ref>) be greater than Δ, i.e., 1/(1-γ)[𝔼_{s∼d^π, a∼π'}[A_e(s,a)] + σ𝔼_{s∼d^π, a∼π'}[A_i(s,a)] - √(2η)γϵ^π'/(1-γ)] ≥Δ. Rearranging Eq. (<ref>), we get: σ≥(𝔼_{s∼d^π, a∼π'}[A_i(s,a)])^-1[(1-γ)Δ - 𝔼_{s∼d^π, a∼π'}[A_e(s,a)] + √(2η)γϵ^π'/(1-γ)]. That is to say, L(θ', σ) - L(θ, σ) ≥Δ when σ satisfies Eq. (<ref>). § AN OVERVIEW OF HYPERPARAMETER CONFIGURATIONS & SEARCH SPACES §.§ Stable Baselines Default Configurations Table <ref> shows the default hyperparameters used in the experiments of Section <ref>. TCHRL is based on TRPO in the hierarchical navigation tasks; hence, some hyperparameters are not necessary for it, and we use "-" to indicate this situation. §.§ Sweep Values of Hyperparameters To determine the optimal hyperparameter values, we first swept over the different values of each hyperparameter in a relatively large range. For some hyperparameters, we further refined the scope of the search. For the learning rate, the first search scope was {1e-2,5e-3,1e-3,5e-4,1e-4,5e-5,1e-5,5e-6,1e-6,5e-7}. Then, we narrowed down the search and reduced the search step size, and the second search scope was {5e-5,4e-5,3e-5,2e-5,1e-5,9e-6,8e-6,7e-6,6e-6,5e-6}. After two rounds of searching, we found that the highest performance of TACE was obtained when the learning rate was 2e-5. Finally, we fine-tuned its value around 2e-5 and obtained the final learning rate of 1.8e-5. For the clip range hyperparameter of PPO, which is the basis of our algorithm TCPPO, we selected its proper value by scanning the values in {0.0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9}. We selected the initial Lagrange multiplier σ from the search space {0.0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9}. The suitable value of ϵ was determined by scanning over the set {0.0, 0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4}. Because we adopted the heuristic adaptive scaling method, the value of δ is environment-independent, and hence the optimal value of δ can be selected for different environments by sweeping over the set {0.0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9}. § PSEUDOCODE OF TCPPO ALGORITHMS Algorithm <ref> describes our method TCPPO in detail.
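Since the algorithm environment itself is not rendered in this text-only version, we complement it with a compact Python sketch of one TCPPO iteration. All helper names (collect_rollouts, mmd_distance, policy.ppo_step, decay_step) are hypothetical placeholders, and the multiplicative increase of σ is a simple stand-in for the adaptive scaling rule of Eq. (<ref>).

    def tcppo_iteration(policy, env, memory, sigma, eps, delta, decay_step):
        # 1. Collect an on-policy batch with the current policy.
        batch = collect_rollouts(policy, env)
        # 2. Clipped MMD-based intrinsic reward (Lemma 1): it vanishes
        #    once a trajectory is at least delta away from the stored
        #    suboptimal trajectories in the replay memory.
        intrinsic = [min(mmd_distance(tau, memory) - delta, 0.0)
                     for tau in batch.trajectories]
        # 3. Combine environmental and MMD-based advantages, A_e + sigma*A_i,
        #    and take a standard clipped PPO step on the combined objective.
        advantages = batch.env_advantages + sigma * batch.mmd_advantages(intrinsic)
        policy.ppo_step(batch.observations, batch.actions, advantages)
        # 4. Adaptive scaling: enlarge sigma while some trajectories still
        #    violate the trajectory constraint; otherwise decay it linearly.
        if any(mmd_distance(tau, memory) < eps for tau in batch.trajectories):
            sigma *= 2.0
        else:
            sigma = max(sigma - decay_step, 0.0)
        return sigma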
Notations: θ = policy parameters; σ = Lagrange multiplier; α = learning rate; N = size of the on-policy buffer; G = number of goals in the environment. § TRAINING PROCESS OF TCHRL ALGORITHMS Algorithm <ref> describes our method TCHRL in detail. At each time step, the algorithm is executed according to the framework shown in Fig. <ref>. State-action pairs generated by the current policy are stored in the on-policy set ℬ. Then we use these experiences to estimate the policy gradient ∇_θ L according to Eq. (<ref>) and calculate the diversity measurement D_MMD between different policies. Finally, we update the parameters of π_θ with the gradient ascent algorithm and adapt the penalty factor according to Eq. (<ref>). Notations: θ := {θ_h, θ_l} = policy parameters; σ = Lagrange multiplier; α = learning rate; N = size of the on-policy buffer; G = number of goals in the environment. § TCHRL LEARNING FRAMEWORKS In this section, we introduce the implementation of our exploration strategy using a hierarchical policy. Our implementation is based on the state-of-the-art skill-based hierarchical reinforcement learning algorithm SNN4HRL <cit.>. Instead of freezing the low-level skills during the training phase of the downstream task, our proposed diversity incentive adapts the pre-trained skills along with the high-level policy training. Fig. <ref> shows the execution process of our trajectory-constrained hierarchical reinforcement learning algorithm. In this HRL algorithm with a 2-level hierarchy, the agent takes a high-level action (or a latent code) z_t every p timesteps after receiving a new observation s_t, i.e., z_t = z_kp if kp ≤ t ≤ (k+1)p-1. The low-level policy π_θ_l is another neural network that treats the current observation s_t and the high-level action z_t as inputs, and its outputs are low-level actions a_t used to interact with the environment directly. The skills selected by the high-level policy are executed by the low-level policy for the next p time steps. In our framework, different skills of the low-level policy are distinguished by different latent codes z, and a single stochastic neural network <cit.> is employed to encode all the pre-trained skills <cit.>. Under our framework, a trajectory τ can be expressed as: τ = (s_0, a_0, s_1, a_1, …, s_T, a_T), and the probability of generating this trajectory can be expressed as <cit.>: p(τ) = (∏_k=0^T/p[∑_j=1^mπ_θ_h(z_j | s_kp) ∏_t=kp^(k+1)p - 1π_θ_l(a_t | s_t, z_j) ] )·[ρ(s_0)∏_t=1^TP(s_t+1| s_t, a_t)], where m represents the number of different low-level skills, s_0∼ρ_0(s_0), z_kp∼π_θ_h(z_kp| s_kp), a_t ∼π_θ_l(a_t | s_t, z_t), and s_t+1∼ P(s_t+1| s_t, a_t). Hence, we define the sequence of a high-level action z_kp followed by p low-level actions (a_kp, …, a_(k+1)p - 1) as a macro action ã; then the probability of a macro action ã can be written as π(ã| s_kp) = π_θ_h(z_j | s_kp)∏_t=kp^(k+1)p-1π_θ_l(a_t | s_t, z_j). TCHRL allows a high-level policy to select a pre-trained skill to perform low-level actions over several time steps. Unlike general sub-goal-based HRL methods <cit.>, these pre-trained skills do not need to reach the sub-goals set by a high-level policy. Moreover, these low-level skills can be rewarded by environmental and MMD distance rewards, which can be used to compute the Q-functions Q̃_h(s_kp, z_kp) and Q̃_l(s_t, z_kp, a_t) for the high-level and low-level policies, respectively. By Lemma <ref>, combining Eqs.
(<ref>) and (<ref>), the following hierarchical MMD gradient formula holds: ∇_θ D_MMD=𝔼_ρ_π(s, ã)[∇_θ_hlogπ_θ_h(z_kp| s_kp)Q̃_h(s_kp, z_kp) +∑_t=kp^(k+1)p - 1∇_θ_llogπ_θ_l(a_t | s_t, z_kp)Q̃_l (s_t, z_kp, a_t)], where Q̃_h(s_kp, z_kp) is calculated with high-level trajectories based on Eq. (<ref>), and Q̃_l(s_t, z_kp, a_t) is calculated with low-level trajectories in a similar way. In this framework, a replay memory ℳ is maintained to store the suboptimal trajectories generated by past suboptimal policies learned during the training process. Instead, state-action pairs collected by the current policy are stored in the on-policy buffer ℬ. Our method uses these trajectories in ℳ and ℬ to compute the MMD distance. § TRAINING PROCESS OF TCMAE ALGORITHMS Algorithm <ref> describes our method TCMAE in detail. Notations: I := {1, 2,…, n} = the finite set of agents; θ := {θ_h, θ_l} = policy parameters; σ = Lagrange multiplier; α = learning rate; N = size of the on-policy buffer; G = number of goals in the environment. § ADDITIONAL EXPERIMENTAL RESULTS In this section, we further evaluate the performance of TCPPO in tasks with discrete state-action spaces, as shown in Fig. <ref>. Compared with the maze in Fig. <ref>, the only difference in Fig. <ref> is the addition of a new deceptive goal with a reward of 2. In addition, the settings for both grid world mazes were identical. The size of the 2D grid world maze was 70×70. All curves are obtained by averaging over different random seeds, and for clarity, the shaded error bars represent 0.35 standard errors. As shown in Fig. <ref>, the TCPPO algorithm significantly outperformed the other baseline methods in this 2D grid world maze. Similar to the results in Section <ref>, TCPPO learned faster and achieved higher average returns. Meanwhile, with the help of TCPPO, the agent avoided adopting myopic behaviors and performed deep exploration. In particular, the performance gap between our approach and the baseline methods was even greater than that described in Section <ref>. Hence, the proposed method has obvious advantages for this more difficult task. Furthermore, we compared the changing trends of the MMD distances of the different methods, as shown in Fig. <ref>. The results illustrate that TCPPO increases the MMD distance between the old and current policies during the training process. By contrast, the four baseline methods could not achieve such a large MMD distance. § ACKNOWLEDGMENT This work was supported by the National Key R&D Program of China (2022ZD0116401). § CONFLICT OF INTEREST The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. § AUTHORS' CONTRIBUTIONS Guojian Wang: Conceptualization, Methodology, Software, Formal analysis, Writing–Original Draft. Faguo Wu: Writing–Review and Editing, Supervision. Xiao Zhang: Validation, Supervision, Funding acquisition. Ning Guo: Formal analysis, Visualization. Zhiming Zheng: Supervision, Validation, Funding acquisition. § AVAILABILITY OF DATA AND MATERIALS The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request. Code for this paper is available at <https://github.com/buaawgj/TACE>.
http://arxiv.org/abs/2312.16456v1
{ "authors": [ "Guojian Wang", "Faguo Wu", "Xiao Zhang", "Ning Guo", "Zhiming Zheng" ], "categories": [ "cs.LG" ], "primary_category": "cs.LG", "published": "20231227075715", "title": "Adaptive trajectory-constrained exploration strategy for deep reinforcement learning" }
Graph Context Transformation Learning for Progressive Correspondence Pruning Myung-Ki Cheoun January 14, 2024 ============================================================================ An a posteriori error estimator based on an equilibrated flux reconstruction is proposed for defeaturing problems in the context of finite element discretizations. Defeaturing consists in the simplification of a geometry by removing features that are considered not relevant for the approximation of the solution of a given PDE. In this work, the focus is on the Poisson equation with Neumann boundary conditions on the feature boundary. The estimator accounts both for the so-called defeaturing error and for the numerical error committed by approximating the solution on the defeatured domain. Unlike other estimators that were previously proposed for defeaturing problems, the use of the equilibrated flux reconstruction allows one to obtain a sharp bound for the numerical component of the error. Furthermore, it does not require the evaluation of the normal trace of the numerical flux on the feature boundary: this makes the estimator well-suited for finite element discretizations, in which the normal trace of the numerical flux is typically discontinuous across elements. The reliability of the estimator is proven and verified on several numerical examples. Its capability to identify the most relevant features is also shown, in anticipation of a future application to an adaptive strategy. Keywords: Geometric defeaturing problems, a posteriori error estimation, equilibrated flux. MSC codes: 65N15, 65N30 § INTRODUCTION The need to solve problems on complex domains, characterized by the presence of geometrical features of different scales and shapes, arises in many practical applications. In particular, in the process of simulation-based manufacturing, repeated simulations often have to be performed in order to analyze the impact of design changes or to adjust geometric parameters. In many cases, before even solving the problem at hand, the first issue to overcome is the definition of the features themselves and the construction of a suitable computational mesh. For this reason it can be fundamental to simplify the geometry as much as possible, avoiding the definition of those features which may not have an actual impact on the accuracy of the solution. This process is commonly called defeaturing. Criteria based on a priori knowledge of the computational domain and of the properties of the materials have been used in the past (see, e.g., <cit.>). However, in order to fully automate the process, an a posteriori criterion is necessary, and many different proposals can be found in the literature (see <cit.>). In this paper we start from the work presented in <cit.>, which proposes an a posteriori error estimator for analysis-aware defeaturing, in the context of the Poisson equation with Neumann boundary conditions on the feature boundary. In particular, in <cit.> an estimator is designed to control the overall error between the exact solution of the PDE defined on the exact domain and the numerical approximation of the solution of the corresponding PDE defined on the defeatured domain. This estimator is made of two components, one accounting for the defeaturing error, i.e. the error committed by neglecting the features, and the other accounting for the numerical error committed when solving the problem on the defeatured geometry.
The first component has the big advantage of being explicit with respect to the size of the geometrical features, and in <cit.> the authors prove that it is a reliable and efficient bound for the energy norm of the defeaturing error. The second component is instead built as a residual-based estimator of the numerical error. The overall estimator is defined up to two positive parameters, related to the unknown constants appearing in the bounds of the defeaturing and of the numerical errors. Such parameters need to be tuned in order to correctly weight the two components. In order to partially overcome this issue, in this work we propose a novel a posteriori error estimator that is strongly based on <cit.> for what concerns its defeaturing component, but which resorts to an equilibrated flux reconstruction (see, among others, <cit.>). Indeed, one of the main drawbacks of residual-based error estimators is that the reliability constants are usually unknown and problem dependent. On the contrary, the difference between the numerical and the equilibrated flux provides an upper bound for the energy norm of the numerical error with a reliability constant equal to 1. Although we do not get rid of the unknown constant related to the defeaturing component, the use of the equilibrated flux reconstruction also allows us to avoid the computation of the normal trace of the numerical flux on the feature boundary. This makes the estimator well-suited for finite element discretizations, in which the normal trace of the numerical flux is typically not continuous. On the contrary, the estimator proposed in <cit.> was designed to be applied along with an IGA discretization. The equilibrated flux reconstruction is built following the steps in <cit.>, solving mixed local problems on patches of elements and leading to a discrete reconstructed flux in a Raviart–Thomas finite element space. The paper is organized as follows. In Section <ref> we introduce some notation and the defeaturing model problem, while in Sections <ref> and <ref> we derive and analyze an a posteriori error estimator resorting to a generic equilibrated flux reconstruction and providing a bound for the overall error. Section <ref> describes a practical way to build the equilibrated flux reconstruction and, finally, in Section <ref> the proposed estimator is validated by some numerical experiments. § NOTATION AND MODEL PROBLEM In the following we adopt the notation introduced in <cit.>, which is here recalled for the sake of clarity. Let ω be any open k-dimensional manifold in ℝ^d, d=2,3 and k≤ d. We denote by |ω| the measure of ω, and for any function φ defined on ω, we denote by φ^ω its average over ω. We will denote by (·,·)_ω the L^2-inner product on ω and by ||·||_ω the corresponding norm. If k<d, then ⟨·,·⟩_ω stands for a duality pairing on ω. For future use, let us define the quantity c_ω:= max(-log(|ω|),ζ)^1/2 if k=1, d=2, and c_ω:=1 if k=2, d=3, where ζ∈ℝ is the unique solution of ζ=-log(ζ). Let us consider an open Lipschitz domain Ω⊂ℝ^d and let us denote by ∂Ω its boundary. We suppose that Ω contains one feature F⊂ℝ^d, i.e. a geometrical detail of smaller scale, which is assumed to be an open Lipschitz domain as well. The boundary of F is denoted by ∂ F. We consider two main types of features. In particular, a feature F is said to be * negative, if (F∩Ω)⊂∂Ω; * positive, if F⊂Ω. In the following we will refer to Ω as the exact or original geometry.
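To fix ideas on the size of the constant c_ω introduced above, a small Python snippet can be used; the constant ζ≈0.5671, i.e. the unique root of ζ=-log(ζ), is hard-coded here from a standard numerical solve.

    import math

    ZETA = 0.5671432904097838  # unique root of z = -log(z)

    def c_omega(measure, k=1, d=2):
        # Constant c_omega from the text: logarithmic growth in 2D (k=1),
        # identically 1 for surface features in 3D (k=2).
        if k == 1 and d == 2:
            return math.sqrt(max(-math.log(measure), ZETA))
        return 1.0

    for eps in (1e-1, 1e-2, 1e-4):
        print(eps, c_omega(eps))  # grows only like sqrt(log(1/eps))

For instance, for a feature boundary of measure |ω|=10^-2 in 2D one gets c_ω=(log 100)^1/2≈2.15, so the constant grows only very mildly as the feature shrinks.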
For the sake of simplicity we restrict ourselves to the case of an exact geometry with a single feature, but the generalization to the multiple feature case easily follows from <cit.>. Let us now define the so-called defeatured geometry, i.e. Ω_0⊂ℝ^d such that Ω_0:= int(Ω∪F) if F is negative, and Ω_0:=Ω∖F if F is positive. Hence, if the feature is negative, Ω⊂Ω_0 (Figure <ref>), while if the feature is positive, Ω_0 ⊂Ω (Figure <ref>). In the following, the boundary of Ω_0 is denoted by ∂Ω_0. We denote by n, n_0 and n_F the unitary outward normals of Ω, Ω_0 and F, respectively. Let ∂Ω=Γ_D∪Γ_N, with Γ_D∩Γ_N=∅ and Γ_D≠∅, and let us assume that ∂ F∩Γ_D=∅. Let γ_0:=∂ F ∖Γ_N⊂∂Ω_0 and, finally, let γ:=∂ F∖γ_0⊂∂Ω, so that ∂ F=γ_0∪γ and γ_0∩γ=∅. Let us observe that, if γ_0=∅, then we are in the case of a negative internal feature, i.e. Ω is a perforated domain (see, for an example, Figures <ref> and <ref> in Section <ref>). On the exact geometry Ω we use the Poisson problem as a model problem: -Δ u =f in Ω, u=g_D on Γ_D, ∇ u ·n=g on Γ_N, to which we will also refer as the original problem. Defining H_0,Γ_D^1(Ω)={ v ∈ H^1(Ω): v|_Γ_D=0 }, H_g_D,Γ_D^1(Ω)={ v ∈ H^1(Ω): v|_Γ_D=g_D}, the variational formulation of Problem (<ref>) reads: find u ∈ H^1_g_D,Γ_D(Ω) which satisfies (∇ u,∇ v)_Ω=(f,v)_Ω+⟨ g,v⟩_Γ_N ∀ v ∈ H^1_0,Γ_D(Ω). On the defeatured geometry Ω_0 we consider instead the problem -Δ u_0 =f in Ω_0, u_0=g_D on Γ_D, ∇ u_0 ·n_0=g on Γ_N∖γ, ∇ u_0 ·n_0=g_0 on γ_0, to which we will also refer as the defeatured problem. With an abuse of notation, in the negative feature case we denote by f∈ L^2(Ω_0) a suitable L^2-extension of f ∈ L^2(Ω) to F, while the Neumann datum g_0 has to be chosen. The variational formulation of problem (<ref>) reads: find u_0 ∈ H^1_g_D,Γ_D(Ω_0) which satisfies, ∀ v ∈ H^1_0,Γ_D(Ω_0), (∇ u_0,∇ v)_Ω_0=(f,v)_Ω_0+⟨ g,v⟩_Γ_N∖γ+⟨ g_0,v⟩_γ_0. Let us consider a partition 𝒯_h of Ω_0 consisting of closed triangles K for d=2, or tetrahedra for d=3, such that Ω_0=⋃_K ∈𝒯_hK. Here, we suppose that the mesh faces match with the boundaries Γ_D, Γ_N∖γ and γ_0. Let us then introduce the set Q_h=𝒫_p(𝒯_h):={ q_h∈ L^2(Ω_0): q_h|_K∈𝒫_p(K), ∀ K ∈𝒯_h}, with 𝒫_p(K) denoting the set of polynomials of degree at most p≥ 1 on K ∈𝒯_h, and V_h^0={ q_h ∈𝒞^0(Ω_0)∩ Q_h: q_h|_Γ_D=0}, V_h:={ q_h∈𝒞^0(Ω_0)∩ Q_h: q_h|_Γ_D=g_D}. In the following, for the sake of simplicity, we assume f∈ Q_h. Similarly, let us consider the partition of ∂Ω_0 induced by the elements of 𝒯_h and let us denote its restriction to (Γ_N∖γ)∪γ_0 by ∂Ω_0,h^N. Introducing g_N= g on Γ_N∖γ and g_N=g_0 on γ_0, we assume g_N to be an element of the broken space 𝒫_p(∂Ω_0,h^N), defined in the same manner as (<ref>). Hence, the finite element approximation of (<ref>) reads: find u_0^h ∈ V_h which satisfies, ∀ v_h ∈ V_h^0, (∇ u_0^h,∇ v_h)_Ω_0=(f,v_h)_Ω_0+⟨ g,v_h⟩_Γ_N∖γ+⟨ g_0,v_h⟩_γ_0. Let us remark that our aim is to never solve Problem (<ref>), but to design a proper a posteriori error estimator capable of controlling the energy norm of the error committed by approximating the exact solution of (<ref>) by u_0^h. We will refer to this error as the overall error, as it accounts both for the error introduced by defeaturing and for the error introduced by the numerical approximation of u_0. In particular, we aim at designing an estimator based on an equilibrated flux reconstruction, which has the advantage of bounding the numerical error with a sharp reliability constant equal to 1.
The flux reconstruction will also be used to bound the defeaturing error, even if in this case we will not get rid of the unknown constant. In the following we provide the definition of the overall error for the negative and positive feature cases, referring again to <cit.>. Negative feature: in this case Ω⊂Ω_0, hence we restrict u_0 to Ω and we define the overall error as ||∇ (u-u_0^h|_Ω)||_Ω. Positive feature: this case is slightly more complicated, since u_0 and its finite element approximation are defined only on Ω_0 and Ω_0⊂Ω. Hence, in order to define the overall error, we need to extend u_0 to the feature F. However, meshing F and solving a problem on it may be non-trivial, in particular if F has a complex boundary. Hence we follow the steps in <cit.>: we consider a suitable extension F̃⊂ℝ^d of F, as simple as possible, such that F ⊂F̃ and γ_0⊂(∂F̃∩∂F), as reported in Figure <ref>. Let γ be decomposed as γ= int(γ_s∪γ_r), where γ_s=γ∩∂F̃ is the portion of γ shared by ∂ F and ∂F̃, and γ_r=γ∖γ_s (Figure <ref>). We denote by ñ the unitary outward normal of F̃ and we set γ̃=∂F̃∖∂F. On F̃ we solve the problem -Δũ_0 =f in F̃, ũ_0=u_0 on γ_0, ∇ũ_0 ·ñ=g̃ on γ̃, ∇ũ_0 ·ñ=g on γ_s, where, with an abuse of notation, we still denote by f any L^2-extension of the forcing term to F̃, and the Neumann datum g̃ on γ̃ has to be chosen. Introducing H_u_0,γ_0^1(F̃)={ v ∈ H^1(F̃): v|_γ_0=u_0|_γ_0}, the variational formulation of (<ref>) is: find ũ_0 ∈ H^1_u_0,γ_0(F̃) which satisfies, ∀ v ∈ H^1_0,γ_0(F̃), (∇ũ_0,∇ v)_F̃=(f,v)_F̃+⟨g̃,v⟩_γ̃+⟨ g,v⟩_γ_s. We denote by ũ_0^h the finite element approximation of ũ_0 on a partition 𝒯̃_h of F̃. Note that this partition does not need to be conforming to γ. We suppose, however, that the mesh faces match with γ_0, γ̃ and γ_s, and that 𝒯̃_h matches with 𝒯_h on γ_0. Let Q̃_h=𝒫_p(𝒯̃_h):={ q_h∈ L^2(F̃): q_h|_K∈𝒫_p(K), ∀ K ∈𝒯̃_h}, and let us introduce Ṽ_h^0={ q_h ∈𝒞^0(F̃)∩Q̃_h: q_h|_γ_0=0}, Ṽ_h:={ q_h∈𝒞^0(F̃)∩Q̃_h: q_h|_γ_0=u_0^h|_γ_0}. We assume for simplicity that f|_F̃∈Q̃_h. Considering the partition of ∂F̃ induced by the elements of 𝒯̃_h and denoting its restriction to γ_s∪γ̃ as ∂F̃_h^N, we also assume that g̃_N= g on γ_s and g̃_N=g̃ on γ̃ is an element of the broken space 𝒫_p(∂F̃_h^N). The finite element approximation of Problem (<ref>) is hence: find ũ_0^h ∈Ṽ_h which satisfies, ∀ v_h ∈Ṽ_h^0, (∇ũ_0^h,∇ v_h)_F̃=(f,v_h)_F̃+⟨g̃,v_h⟩_γ̃+⟨ g,v_h⟩_γ_s. Finally, we define the extended defeatured solution and its numerical approximation as u_d:= u_0 in Ω_0, u_d:=ũ_0 in F̃, and u_d^h:= u_0^h in Ω_0, u_d^h:=ũ_0^h in F̃, while the overall error is ||∇ (u-u_d^h)||_Ω. § NEGATIVE FEATURE A POSTERIORI ERROR ESTIMATOR In this section we propose a reliable estimator for the overall error ||∇(u-u_0^h|_Ω)||_Ω, in the case of a single negative feature. To simplify the notation, in the following we omit the explicit restriction of u_0^h (and u_0) to Ω. Let us consider the solution to problem (<ref>): introducing the flux σ=-∇ u_0, we have that σ∈ H(div,Ω_0), ∇·σ=f, -σ·n_0=g on Γ_N∖γ and -σ·n_0=g_0 on γ_0. At the discrete level, a suitable definition of the flux is more involved. Indeed, ∇ u_0^h∉ H(div,Ω_0) in general, and hence the divergence equation and the Neumann boundary conditions are not exactly satisfied. The idea behind the equilibrated flux reconstruction is to use the discrete solution u_0^h to build a discrete flux, which we denote by σ_h, such that σ_h∈ H(div,Ω_0) is an approximation of σ satisfying ∇·σ_h=f in Ω_0, σ_h·n_0=-g on Γ_N∖γ, σ_h·n_0=-g_0 on γ_0. We will give more details about how such an equilibrated flux reconstruction can actually be computed in Section <ref>, following <cit.>.
For the time being we assume that we have σ_h at our disposal. Referring to the notation introduced in Section <ref>, let us introduce on γ the quantity d_γ^h:=g+σ_h·n on γ, which is the error between the Neumann datum g on γ and the normal trace of the equilibrated flux reconstruction. Following <cit.>, denoting by d_γ^h^γ the average of d_γ^h over γ, let us define ℰ_γ:=(|γ|^1/(d-1)||d_γ^h-d_γ^h^γ||_γ^2+c_γ^2|γ|^d/(d-1)|d_γ^h^γ|^2)^1/2, ℰ_0:=||σ_h+∇ u_0^h||_Ω_0, where c_γ is defined as in (<ref>). Let us remark that, unlike in <cit.>, the quantity d_γ^h does not depend on the normal trace of the numerical flux, but on the normal trace of the equilibrated flux reconstruction, which is continuous across the elements of the mesh 𝒯_h. The following proposition establishes our a posteriori bound: Let u be the solution of (<ref>) and u_0^h the solution of (<ref>). Then ||∇(u-u_0^h)||_Ω≤ C_D ℰ_γ+ℰ_0, with C_D>0 being a constant independent of the size of the feature F. Let v ∈ H_0,Γ_D^1(Ω). Adding and subtracting (σ_h,∇ v)_Ω, exploiting (<ref>), applying Green's theorem and using the characterization of σ_h provided in (<ref>), we have (∇(u-u_0^h),∇ v)_Ω =(∇ u+σ_h,∇ v)_Ω-(σ_h +∇ u_0^h,∇ v)_Ω=(f-∇·σ_h,v)_Ω+⟨ g +σ_h·n,v⟩_Γ_N-(σ_h +∇ u_0^h,∇ v)_Ω=⟨ g +σ_h·n,v⟩_γ-(σ_h +∇ u_0^h,∇ v)_Ω=⟨ d_γ^h,v⟩_γ-(σ_h +∇ u_0^h,∇ v)_Ω. Referring the reader to the steps reported in <cit.>, with the difference that the numerical flux is here substituted by the equilibrated flux reconstruction, it is possible to prove that ⟨ d_γ^h ,v⟩_γ≤ C_D ℰ_γ||∇ v ||_Ω, with C_D>0 being a constant independent of the size of the feature F (see Theorem 4.3 in <cit.>). If we choose v=u-u_0^h∈ H^1_0,Γ_D(Ω) in (<ref>), apply (<ref>) and the Cauchy–Schwarz inequality, we have ||∇(u-u_0^h)||_Ω^2 ≤ C_D ℰ_γ||∇(u-u_0^h)||_Ω+||σ_h +∇ u_0^h||_Ω||∇(u-u_0^h)||_Ω≤ C_D ℰ_γ||∇(u-u_0^h)||_Ω+||σ_h +∇ u_0^h||_Ω_0||∇(u-u_0^h)||_Ω= (C_D ℰ_γ+ℰ_0)||∇(u-u_0^h)||_Ω, where we have also exploited the fact that, in the negative feature case, Ω⊂Ω_0. Simplifying on both sides yields (<ref>). It is well known from the literature (see, among others, <cit.>) that the quantity ℰ_0 provides a sharp upper bound for the numerical error ||∇(u_0-u_0^h)||_Ω_0. Let us remark that, if no feature is present, the same result is provided also by (<ref>). Indeed, if γ=∅, then u=u_0, Ω=Ω_0 and (<ref>) reduces to ||∇( u_0-u_0^h)||_Ω_0≤ ||σ_h+∇ u_0^h||_Ω_0. For this reason we will refer to ℰ_0 as the numerical component of the estimator and to ℰ_γ as the defeaturing component. § POSITIVE FEATURE A POSTERIORI ERROR ESTIMATOR In this section we propose a reliable estimator for the overall error ||∇(u-u_d^h)||_Ω, in the case of a single positive feature F. For the sake of generality, we consider the case in which F is embedded in a smooth extension F̃, as detailed in Section <ref>. Let us introduce an equilibrated flux reconstruction on F̃, i.e. a discrete flux σ̃_h∈ H(div,F̃) built from ũ_0^h such that ∇·σ̃_h=f in F̃, σ̃_h·ñ=-g̃ on γ̃, σ̃_h·ñ=-g on γ_s. Again, the details on the construction of this flux will be provided in Section <ref>; for the time being, we assume we have σ̃_h. In this case we define on γ_0 the quantity d_γ_0^h:=σ̃_h·n_F-g_0 on γ_0, which approximates the jump in the normal derivative of u_d on γ_0, while on γ_r we define d_γ_r^h:=σ̃_h·n_F+g on γ_r, which is the error between the Neumann datum g on γ_r and the normal trace of the equilibrated flux reconstruction computed on F̃. Again, we observe that, unlike in <cit.>, the normal trace of the numerical flux is not involved in the definition of these quantities.
Denoting by d_γ_0^h^γ_0 the average of d_γ_0^h on γ_0 and by d_γ_r^h^γ_r the average of d_γ_r^h on γ_r, and following <cit.>, let us introduce ℰ_γ_0:=(|γ_0|^1/(d-1)||d_γ_0^h-d_γ_0^h^γ_0||_γ_0^2+c_γ_0^2|γ_0|^d/(d-1)|d_γ_0^h^γ_0|^2)^1/2, ℰ_γ_r:=(|γ_r|^1/(d-1)||d_γ_r^h-d_γ_r^h^γ_r||_γ_r^2+c_γ_r^2|γ_r|^d/(d-1)|d_γ_r^h^γ_r|^2)^1/2, where c_γ_0 and c_γ_r are defined as in (<ref>). Let us also define ℰ̃_0:=||σ̃_h+∇ũ_0^h||_F̃, and let us recall that ℰ_0=||σ_h+∇ u_0^h||_Ω_0, where σ_h is an equilibrated flux reconstruction defined in Ω_0 as in (<ref>). Let u be the solution of (<ref>) and u_d^h be defined as in (<ref>). Then ||∇(u-u_d^h)||_Ω≤C̃_D(ℰ_γ_0^2+ℰ_γ_r^2)^1/2+(ℰ_0^2+ℰ̃_0^2)^1/2, with C̃_D being a constant independent of the size of the feature F. Let us consider the restriction to Ω_0 of the solution u of problem (<ref>), which satisfies -Δ u|_Ω_0 =f in Ω_0, u|_Ω_0 =g_D on Γ_D, ∇ u|_Ω_0·n=g on Γ_N∖γ, ∇ u|_Ω_0·n_0=∇ u·n_0 on γ_0. Omitting the explicit restriction of u to Ω_0, the variational formulation of problem (<ref>) reads: find u ∈ H^1_g_D,Γ_D(Ω) which satisfies, ∀ v ∈ H^1_0,Γ_D(Ω), (∇ u,∇ v)_Ω_0=(f,v)_Ω_0+⟨ g,v⟩_Γ_N∖γ+⟨∇ u ·n_0,v⟩_γ_0. Let v ∈ H_0,Γ_D^1(Ω_0). Adding and subtracting (σ_h,∇ v)_Ω_0, exploiting (<ref>), applying Green's theorem and using the characterization of σ_h provided in (<ref>), we have (∇(u-u_0^h),∇ v)_Ω_0 =(∇ u+σ_h,∇ v)_Ω_0-(σ_h +∇ u_0^h,∇ v)_Ω_0=(f-∇·σ_h,v)_Ω_0+⟨ g +σ_h·n,v⟩_Γ_N∖γ+⟨∇ u·n_0 +σ_h·n_0,v⟩_γ_0-(σ_h +∇ u_0^h,∇ v)_Ω_0=⟨∇ u·n_0 -g_0,v⟩_γ_0-(σ_h +∇ u_0^h,∇ v)_Ω_0. In order to obtain an actual error indicator we need to estimate the quantity ⟨∇ u·n_0 -g_0,v⟩_γ_0 on the right-hand side, and for this reason we must also consider the error committed on the feature. Hence, let us consider the restriction to the positive feature F of the solution u of (<ref>), satisfying -Δ u|_F =f in F, ∇ u|_F·n_F=g on γ, ∇ u|_F·n_F=∇ u·n_F on γ_0, so that, omitting the explicit restriction of u to F, u∈ H^1(F) is one of the infinitely many solutions, defined up to a constant, of (∇ u,∇ v)_F=(f,v)_F+⟨ g,v⟩_γ+⟨∇ u ·n_F,v⟩_γ_0 ∀ v ∈ H^1(F). Let v ∈ H^1(F). Adding and subtracting (σ̃_h,∇ v)_F, exploiting (<ref>), applying Green's theorem and using the characterization of σ̃_h provided in (<ref>), we have (∇(u-ũ_0^h),∇ v)_F=(∇ u+σ̃_h,∇ v)_F-(σ̃_h +∇ũ_0^h,∇ v)_F=(f-∇·σ̃_h,v)_F+⟨ g +σ̃_h·n_F,v⟩_γ_r∪γ_s +⟨∇ u·n_F +σ̃_h·n_F,v⟩_γ_0-(σ̃_h +∇ũ_0^h,∇ v)_F=⟨ g +σ̃_h·n_F,v⟩_γ_r+⟨∇ u·n_F +σ̃_h·n_F,v⟩_γ_0-(σ̃_h +∇ũ_0^h,∇ v)_F. Choosing v ∈ H_0,Γ_D(Ω), observing that v|_Ω_0∈ H_0,Γ_D^1(Ω_0) and v|_F∈ H^1(F), and summing (<ref>) and (<ref>), we obtain (∇(u-u_d^h), ∇ v)_Ω =(∇(u-u_0^h), ∇ v)_Ω_0+(∇(u-ũ_0^h), ∇ v)_F=⟨ g +σ̃_h·n_F,v⟩_γ_r+⟨σ̃_h·n_F-g_0,v⟩_γ_0-(σ_h +∇ u_0^h,∇ v)_Ω_0-(σ̃_h +∇ũ_0^h,∇ v)_F=⟨ d_γ_r^h ,v⟩_γ_r+⟨ d_γ_0^h ,v⟩_γ_0-(σ_h +∇ u_0^h,∇ v)_Ω_0-(σ̃_h +∇ũ_0^h,∇ v)_F, where we have used that n_F=-n_0 on γ_0. For the terms involving d_γ_r^h and d_γ_0^h we proceed similarly to the negative feature case: referring the reader to Theorem 5.5 in <cit.>, it is possible to prove that there exists a constant C̃_D>0 such that ⟨ d_γ_0^h ,v⟩_γ_0+ ⟨ d_γ_r^h , v⟩_γ_r≤C̃_D(ℰ_γ_r^2+ℰ_γ_0^2)^1/2||∇ v||_Ω. If we choose v=u-u_d^h∈ H_0,Γ_D(Ω) in (<ref>) and use (<ref>) and the Cauchy–Schwarz inequality, we obtain ||∇(u-u_d^h)||_Ω^2≤C̃_D(ℰ_γ_r^2+ℰ_γ_0^2)^1/2||∇(u-u_d^h)||_Ω+||σ_h+∇ u_0^h||_Ω_0||∇(u-u_d^h)||_Ω_0+||σ̃_h+∇ũ_0^h||_F||∇(u-u_d^h)||_F≤C̃_D(ℰ_γ_r^2+ℰ_γ_0^2)^1/2||∇(u-u_d^h)||_Ω+(||σ_h+∇ u_0^h||_Ω_0^2+||σ̃_h+∇ũ_0^h||_F^2)^1/2(||∇(u-u_d^h)||_Ω_0^2+||∇(u-u_d^h)||_F^2)^1/2≤(C̃_D(ℰ_γ_r^2+ℰ_γ_0^2)^1/2+(ℰ_0^2+ℰ̃_0^2)^1/2)||∇(u-u_d^h)||_Ω, where, in the last step, we have exploited the fact that F⊆F̃. The thesis follows by simplifying on both sides.
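From the implementation standpoint, the defeaturing components ℰ_γ, ℰ_γ_0 and ℰ_γ_r only require the values of the corresponding defects at quadrature nodes on the feature boundary. The following minimal Python sketch (the quadrature data and the function name are our own hypothetical illustration) evaluates one such term:

    import numpy as np

    def defeaturing_term(d_vals, weights, c_gamma, d=2):
        # d_vals: values of the defect (e.g. g + sigma_h . n) at quadrature
        # nodes on gamma; weights: quadrature weights, summing to |gamma|.
        measure = weights.sum()                         # |gamma|
        mean = (weights * d_vals).sum() / measure       # average over gamma
        fluct = (weights * (d_vals - mean) ** 2).sum()  # ||d - mean||^2_gamma
        return np.sqrt(measure ** (1.0 / (d - 1)) * fluct
                       + c_gamma ** 2 * measure ** (d / (d - 1)) * mean ** 2)

Since the equilibrated flux belongs to a Raviart–Thomas space, its normal trace is single-valued across element interfaces, so the values d_vals can be sampled elementwise without ambiguity.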
If the feature F is relatively simple, there is no need to use an extension, and problem (<ref>) is solved directly on F. In this case, maintaining the tilde notation (since F=F̃), expression (<ref>) simplifies into ||∇(u-u_d^h)||_Ω≤C̃_D ℰ_γ_0+(ℰ_0^2+ℰ̃_0^2)^1/2. § EQUILIBRATED FLUXES RECONSTRUCTION In this section we describe how to build, in practice, an equilibrated flux starting from the discrete solution of the defeatured problem, u_0^h, or from ũ_0^h in the positive feature case. The proposed procedure is directly adapted from <cit.> and resorts to a local reconstruction of the fluxes. Given the triangular/tetrahedral mesh 𝒯_h built on the defeatured geometry Ω_0, let us denote by 𝒩_h the set of its vertices and let us divide it into interior vertices 𝒩_h^int and boundary vertices 𝒩_h^ext. We aim at reconstructing the flux in the Raviart–Thomas space of order p≥ 1, namely in M_h:={v_h∈ H(div,Ω_0): v_h|_K∈ [𝒫_p(K)]^d+x𝒫_p(K), ∀ K ∈𝒯_h}. The best choice for the equilibrated flux reconstruction would then be σ_h=arg min_v_h ∈M_h||v_h+∇ u_0^h||_Ω_0 subject to ∇·v_h =f in Ω_0, v_h ·n=-g on Γ_N∖γ, v_h ·n_0=-g_0 on γ_0. However, finding σ_h through (<ref>) implies solving a global optimization problem in the domain Ω_0. Following <cit.> we adopt instead a different strategy, in which local flux reconstructions are built on patches ω_a of elements sharing a vertex a∈𝒩_h. Let us denote by ψ_a the hat function in 𝒫_1(𝒯_h)∩ H^1(Ω_0) taking value 1 at the vertex a and 0 at all the other vertices. Let us denote by ∂ω_a the boundary of the patch ω_a and let ∂ω_a^0⊆∂ω_a be defined as ∂ω_a^0={x∈∂ω_a :ψ_a(x)=0}, and ∂ω_a^ψ=∂ω_a∖∂ω_a^0. Let us remark that, if a∈𝒩_h^int, then ∂ω_a^0=∂ω_a. Let Γ_N^0=(Γ_N∖γ)∪γ_0 and let us introduce M_h^a,0={v_h ∈M_h(ω_a): v_h·n_ω_a=0 on ∂ω_a^0∪(∂ω_a^ψ∩Γ_N^0)} and M_h^a:=M_h^a,0 if a∈𝒩_h^int, M_h^a:={v_h ∈M_h(ω_a): v_h·n_ω_a=0 on ∂ω_a^0, v_h·n_ω_a=- g on ∂ω_a^ψ∩ (Γ_N∖γ), v_h·n_ω_a=- g_0 on ∂ω_a^ψ∩γ_0 } if a∈𝒩_h^ext, Q_h^a:={ q_h ∈ Q_h(ω_a):(q_h,1)_ω_a=0 } if a∈𝒩_h^int or a∈ int(Γ_N^0), Q_h^a:=Q_h(ω_a) if a∈𝒩_h^ext and a∉ int(Γ_N^0), where M_h(ω_a) and Q_h(ω_a) are respectively the restrictions of M_h and Q_h to the patch ω_a, and Q_h is defined as in (<ref>). We then look for the local equilibrated flux reconstructions as σ_h^a=arg min_v_h ∈M_h^a||v_h+ψ_a∇ u_0^h||_ω_a, subject to ∇·v_h = ψ_a f-∇ψ_a·∇ u_0^h, and then we set σ_h=∑_a∈𝒩_hσ_h^a. The optimization problem (<ref>) is equivalent to looking for σ_h^a∈M_h^a and λ_h^a∈ Q_h^a such that (σ_h^a,v_h)_ω_a-(λ_h^a,∇·v_h)_ω_a=-(ψ_a∇ u_0^h,v_h)_ω_a ∀v_h ∈M_h^a,0, (∇·σ_h^a,q_h)_ω_a=(ψ_a f,q_h)_ω_a-(∇ψ_a·∇ u_0^h,q_h)_ω_a ∀ q_h ∈ Q_h^a, which is the strategy that we actually adopt in practice. The equilibrated flux on the extension F̃ of a positive feature F is reconstructed exactly in the same manner. Denoting by 𝒩̃_h^int and 𝒩̃_h^ext respectively the internal and the boundary vertices of the mesh 𝒯̃_h of F̃, introducing M̃_h:={v_h∈ H(div,F̃): v_h|_K∈ [𝒫_p(K)]^d+x𝒫_p(K), ∀ K ∈𝒯̃_h}, and recalling the definition of Q̃_h given in (<ref>), we look for the couple (σ̃_h^a,λ̃_h^a) in the sets and spaces M̃_h^a :={v_h ∈M̃_h(ω_a): v_h·n_ω_a=0 on ∂ω_a^0∪(∂ω_a^ψ∩(γ̃∪γ_s))} if a∈𝒩̃_h^int, M̃_h^a:={v_h ∈M̃_h(ω_a): v_h·n_ω_a=0 on ∂ω_a^0, v_h·n_ω_a=-g̃ on ∂ω_a^ψ∩γ̃, v_h·n_ω_a=- g on ∂ω_a^ψ∩γ_s } if a∈𝒩̃_h^ext, Q̃_h^a :={ q_h ∈Q̃_h(ω_a): (q_h,1)_ω_a=0 } if a∈𝒩̃^int_h or a∈ int(γ̃∪γ_s), Q̃_h^a:=Q̃_h(ω_a) if a∈𝒩̃_h^ext and a∉ int(γ̃∪γ_s), solving a problem analogous to (<ref>)-(<ref>). § NUMERICAL EXPERIMENTS In this section we propose some numerical experiments to validate the proposed estimator. We here focus on the case d=2 and p=1. All the simulations were performed in Matlab, and the meshes were built using the Triangle mesh generator <cit.>. For each element K of a mesh 𝒯_h we denote by h_K the diameter of the element, and we choose as mesh parameter h=max_K ∈𝒯_hh_K.
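Before presenting the tests, we illustrate the patchwise reconstruction of the previous section with a minimal Python sketch: once the local Raviart–Thomas mass matrix M, the divergence matrix B and the right-hand sides of (<ref>)-(<ref>) have been assembled on a patch ω_a, each local problem is a small saddle-point linear system. All inputs below are hypothetical placeholders for the assembled local matrices.

    import numpy as np
    from scipy.sparse import bmat
    from scipy.sparse.linalg import spsolve

    def solve_patch(M, B, F, G):
        # One local equilibration problem on a vertex patch, written as
        #     [ M  -B^T ] [ sigma_a  ]   [ F ]
        #     [ B   0   ] [ lambda_a ] = [ G ],
        # with M the local Raviart-Thomas mass matrix, B the divergence
        # matrix, and F, G the right-hand sides of the two equations.
        # The zero-mean constraint on the multiplier space (interior
        # vertices) is assumed to be already built into B and G.
        n = M.shape[0]
        A = bmat([[M, -B.T], [B, None]], format='csc')
        rhs = np.concatenate([F, G])
        sol = spsolve(A, rhs)
        return sol[:n]  # coefficients of the local flux sigma_h^a

The global reconstruction is then obtained by summing the local fluxes over all mesh vertices, mapped to global degrees of freedom by the usual local-to-global assembly.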
In the following we define the effectivity index as the ratio between the total estimator and the overall error, i.e. η=η_tot/||∇ (u-u_0^h)||_Ω. In the case of a domain Ω characterized by a single negative feature, following Proposition <ref>, the total estimator is defined as η_tot=C_D η_γ+η_0, while in the case of a single positive feature it is η_tot=C̃_D η_γ_0+(η_0^2+η̃_0^2)^1/2, assuming F̃=F (see Remark <ref>). For all the proposed experiments, a reference solution is built by solving the problem on the original geometry Ω by linear finite elements on a very fine mesh. With an abuse of notation this reference solution is still denoted by u. Three numerical examples are proposed. In Test 1 we consider the case of a single negative internal feature, analyzing the convergence of the total estimator and of the overall error under mesh refinement and feature size reduction. Test 2 deals instead with the case of positive and negative boundary features, and the convergence of the estimator and of the error is again analyzed. Finally, in Test 3 we consider the presence of multiple internal negative features, showing how the proposed estimator makes it possible to point out which features have the greatest impact on the error. §.§ Test 1: negative internal feature For this first numerical example we consider a square domain characterized by a single negative feature (see Figure <ref>). We denote by ϵ the characteristic size of the feature, i.e. the radius of the circle circumscribing the feature itself. Setting Ω_0=(0,1)^2 and Ω_ϵ=Ω_0∖F_ϵ, we consider on the exact geometry Ω_ϵ the problem -Δ u_ϵ=f in Ω_ϵ, u_ϵ=0 on ∂Ω_ϵ∖γ_ϵ, ∇ u_ϵ·n=0 on γ_ϵ, with f(x,y)=x. As for the defeatured problem, γ_0=∅, since the feature is internal. Figure <ref> reports an example of the computational mesh 𝒯_h used to solve the defeatured problem on Ω_0. We remark that the mesh need not be conforming to the feature boundary, since the equilibrated flux σ_h is reconstructed on the defeatured geometry, which is blind to the feature, and η_γ is computed by simply defining a proper quadrature rule on the feature boundary itself and evaluating the normal trace of σ_h at the chosen quadrature nodes. In the numerical experiments that follow we choose C_D=1 in (<ref>). Figure <ref> shows the convergence of the estimator η_tot and of the energy norm of the overall error ||∇(u_ϵ-u_0^h)||_Ω, under mesh refinement. The values of η_γ and η_0 are also reported. Three fixed values of ϵ are considered, namely ϵ=7.00· 10^-2, 1.75· 10^-2, 4.83· 10^-3. As expected, going from a coarse to a fine mesh, the error reaches a plateau when the defeaturing error becomes more relevant than the numerical one. The bigger the feature is, the earlier the plateau is reached. The same behavior is captured also by the estimator. For a fixed feature size, the value of η_γ remains constant while η_0 converges as 𝒪(h), as expected. The trend of the effectivity index η under mesh refinement and for the three considered feature sizes is reported in Figure <ref>. As expected, since the numerical source of the error is sharply bounded by η_0, when η_0≫η_γ we have η∼1. The effectivity index appears instead to be around 2.5 when the defeaturing component is dominating. The highest values of η, namely η∼ 3, are registered when η_γ>η_0 but the two components still have a comparable magnitude. Figure <ref> focuses instead on the convergence of the estimator and of the error under the reduction of the feature size, for three fixed mesh sizes, namely h=1.25· 10^-1, 3.13· 10^-2, 7.81· 10^-3.
As expected, both the error and the estimator reach a plateau when the numerical error dominates over the defeaturing one. The value of the effectivity index η is reported in Figure <ref>, with the same considerations made for Figure <ref> still holding. §.§ Test 2: positive and negative boundary features In this second numerical example we consider the case of a positive and a negative boundary feature. As in the previous test case we choose Ω_0=(0,1)^2, while we define F_n=(1-ϵ/2, 1+ϵ/2)×(1-ϵ, 1), F_p=(1-ϵ/2, 1+ϵ/2)× (-ϵ, 0). For the negative feature case we choose as exact geometry Ω_n=Ω_0∖F_n, while for the positive feature case we choose Ω_p=int(Ω_0∪F_p), as reported in Figure <ref>. In both cases we consider the problem -Δ u=f in Ω_⋆, u=0 on Γ_D, ∇ u·n=0 on Γ_N, with ⋆∈{ n,p }, f=1, Γ_D={ (x,y): x=0 ∨ x=1} and Γ_N =∂Ω_⋆∖Γ_D. We recall that, according to its definition, Γ_N also includes the feature boundary. Figure <ref> compares the convergence under mesh refinement of the total estimator and error in the negative and in the positive feature case and for two different values of ϵ. Let us recall how, in presence of a negative feature, the total estimator is defined as in (<ref>), with η_γ=η_γ_n computed on γ_n (see Figure <ref>) from the equilibrated flux σ_h reconstructed on Ω_0; in the case of a positive feature, instead, the definition is provided by (<ref>) with η_γ_0=η_γ_0,p computed on γ_0,p (see Figure <ref>) from the equilibrated flux σ̃_h reconstructed on the feature itself. In the following we choose C_D=C̃_D=1 and, for the sake of clarity, we denote respectively by η_tot^n and η_tot^p the total estimator in the negative and in the positive feature case. As in the previous test case, fixing the feature size and refining the mesh, we can observe how the overall error reaches a plateau, and how this behavior is captured also by the total estimator, both in the negative and in the positive feature case. As expected, a bigger feature (Figure <ref>) produces a stagnation of the error and of the estimator already for coarse meshes, while if the feature is smaller (Figure <ref>) the defeaturing source of error becomes relevant only for finer meshes. Figure <ref> shows the trend of the effectivity index related to the curves reported in Figure <ref>. Both for the negative and the positive feature case we observe that, as expected, η∼ 1 when the numerical component is dominating (coarse meshes in Figure <ref>). We can instead observe how η∼ 1.5 when the defeaturing component dominates (fine meshes in Figure <ref>), and how the effectivity index is in general lower with respect to the internal negative feature case (Test 1), with η<2 even when both the defeaturing and the numerical component have a significant impact. Finally, Figure <ref> refers to the case in which the negative and the positive features are simultaneously present, i.e. Ω=int(Ω_0∪F_p)∖F_n. For both features we choose ϵ=0.2. Figure <ref> reports the convergence of the error and of the estimator under mesh refinement. The total estimator is, in this case, defined as η_tot=C(η_γ_0,p^2+η_γ_n^2)^1/2+(η_0^2+η̃_0^2)^1/2, with C>0 being a constant independent of the size of both features. In particular, we choose here C=1. As in the previous test cases, the estimator appears to correctly capture the behavior of the overall error. The corresponding effectivity index is reported in Figure <ref>. §.§ Test 3: multiple internal features For this last numerical example we consider a case with multiple internal features, similar to the one proposed in <cit.>.
Our aim is to show the capability of the proposed estimator to identify the most relevant features and to provide a criterion for deciding whether a feature should be added or not, taking into account also the magnitude of the numerical source of the error. Let us define the defeatured geometry again as Ω_0=(0,1)^2 and let us consider a set of I features ℱ={F_i}_i∈ℐ, ℐ={1,...,I}, each of which is a polygon with 16 edges, inscribed in a circle of radius ϵ_i and centered at x_C^i. In particular we choose I=5 and x_C^1=(0.12,0.12), ϵ_1=0.02; x_C^2=(0.35,0.35), ϵ_2=0.05; x_C^3=(0.65,0.65), ϵ_3=0.10; x_C^4=(0.20,0.68), ϵ_4=0.05; x_C^5=(0.65,0.16), ϵ_5=0.05. The boundary of the i-th feature is denoted by γ_i. On the exact geometry Ω=Ω_0∖⋃_i ∈ℐF_i, which is reported in Figure <ref>, we consider the problem Δ u=0 in Ω, u=g_D on Γ_D, ∇ u·n=0 on Γ_N, with Γ_D={ (x,y): x=0∨ y=0}, g_D(x,y)=e^-8(x+y) and Γ_N=∂Ω∖Γ_D, including also the feature boundaries. The reference solution u is also reported in Figure <ref>. Let ℳ⊆ℱ be a subset of features indexed by j ∈ℐ^⋆⊆ℐ and let us denote by Ω_0^ℳ a generic partially defeatured geometry, obtained by including the features in ℳ in the defeatured geometry, i.e. Ω_0^ℳ=Ω_0∖⋃_j ∈ℐ^⋆F_j. If ℳ=∅, then Ω_0^ℳ=Ω_0. An example of a computational mesh defined on Ω_0 is reported in Figure <ref>. We will use u_0^h to refer both to the numerical solution computed on Ω_0 and to the one computed on Ω_0^ℳ, the meaning being clear from the context. In presence of multiple negative features the total estimator is defined as η_tot=α_D η_γ+η_0, where η_γ=(∑_k ∈ℐ∖ℐ^⋆η_γ_k^2)^1/2. Table <ref> reports the value of the components of the total estimator for differently refined meshes and for different choices of ℳ, i.e. of the partially defeatured geometry on which u_0^h is computed. The rows of the table are divided into three sets, corresponding to three differently refined meshes. The variation in the number of degrees of freedom which can be observed when a feature is included in the geometry is related to the adaptation of the mesh to the feature boundary and to the deletion of the degrees of freedom lying inside the feature itself. Looking at columns 5 to 9 we can see that feature F_1 is clearly the most relevant, since η_γ_1>η_γ_i for all i>1. This is expected since, despite being the smallest feature, it is located in a region in which the gradient of the solution is very steep. Feature F_3 is, instead, almost irrelevant: despite being the biggest one, it is located in a region in which the solution is rather flat and hence its impact on the solution accuracy tends to be negligible. As expected, the values of η_γ_i are independent of the mesh size, meaning that the relevance of the features can be evaluated even on a coarse mesh. However, the choice of including the i-th feature in the geometry should be made by comparing η_γ_i with η_0, which is a sharp indicator of the numerical source of error. In particular, a value of η_γ_i considerably bigger than η_0 means that we will not be able to significantly reduce the error by mesh refinement, unless the feature is added. This is true, for example, for feature F_1 with the second considered mesh and for features F_1 and F_2 for the finest mesh. Table <ref> focuses exactly on these cases, reporting the values of the energy norm of the overall error and of the total estimator, along with the corresponding effectivity index. In particular, η_tot is computed with α_D=1.
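The inclusion criterion just described (compare η_γ_i with η_0) can be condensed into a few lines. The sketch below is our own illustrative rendering, not code from the paper; the function name, the safety factor, and the numerical values are hypothetical.

```python
def features_to_include(eta_gamma, eta_0, safety=1.0):
    """Indices i with eta_gamma[i] > safety * eta_0: the features whose
    absence prevents further error reduction by mesh refinement alone."""
    return [i for i, g in enumerate(eta_gamma) if g > safety * eta_0]

# Illustrative magnitudes only (not the values of the table): the first
# feature dominates, the third is negligible, and the outcome depends on
# the current size of the numerical indicator eta_0.
print(features_to_include([3e-2, 8e-3, 1e-4, 5e-3, 4e-3], eta_0=6e-3))  # [0, 1]
```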
In Table <ref> we can observe that, if ℳ=∅, the reduction of η_0 when going from h=3.13· 10^-2 (∼5k degrees of freedom) to h=1.56· 10^-2 (∼20k degrees of freedom) is of about 50%. This is expected, since η_0 should converge as 𝒪(h). However, looking at rows 1 and 3 in Table <ref>, we see that the drop in the total estimator, and hence in the error, is under 20%. Adding feature F_1 and refining the mesh at the same time, the drop is instead of about 60%, as can be seen by comparing rows 1 and 4 in Table <ref>. Let us remark that, for the finest considered mesh, feature F_2 also becomes rather relevant. However, η_γ_2 is closer to η_0, and hence adding it to the geometry has a smaller impact on the solution accuracy. This experiment is intended as a preliminary test for the use of the proposed estimator in an adaptive strategy, involving both geometrical adaptation (i.e., feature inclusion) and local mesh refinement. We decide to leave this to a forthcoming work: indeed, the procedure adopted for the computation of the equilibrated flux requires the mesh to be conforming to the domain boundaries, and hence also to the boundary of the features which are actually included in the partially defeatured geometry. However, to build an efficient and flexible adaptive algorithm we do not want to remesh the geometry each time a feature is added, and for this reason a generalization of the equilibrated flux reconstruction to trimmed meshes needs to be considered. § CONCLUSIONS In this work we have proposed a new a posteriori error estimator for defeaturing problems, based on an equilibrated flux reconstruction and designed for finite elements. The Poisson equation with Neumann boundary conditions on the feature boundary was taken as a model problem. The reliability of the estimator has been proven both in the negative and in the positive feature case, and tested with several numerical examples. The choice of using an equilibrated flux reconstruction leads to an estimator which is able to bound sharply the numerical component of the error and which never requires evaluating the normal trace of the numerical flux, which is typically discontinuous on element edges in a standard finite element discretization. This work is intended as a preliminary analysis for the use of the proposed estimator in an adaptive strategy, allowing not only for mesh refinement, but also for an automatic inclusion of those features whose absence causes most of the accuracy loss. The proposed estimator does not require the mesh to be conforming to the feature boundary until the feature is included in the computational domain itself. Indeed, computing the integral of the normal trace of the equilibrated flux reconstruction on a generic curve is always possible, regardless of the intersections with the mesh elements. However, the procedure which was adopted to reconstruct the equilibrated flux is designed for meshes which are conforming to the computational domain boundary, and this would require remeshing the domain each time a feature is added by the adaptive procedure, hence increasing the complexity of the algorithm. For this reason, an extension of the equilibrated flux reconstruction to the case of trimmed meshes needs to be considered, so that the geometry never needs to be remeshed. This generalization is left to a forthcoming work that is currently under preparation.
Although the proof of the reliability of the estimator holds in ℝ^d, d=2,3, and for any polynomial order p≥ 1, we decided to propose numerical experiments only in ℝ^2 and for p=1. The application of the estimator to more complex, realistic and three-dimensional geometries, or the use of a higher order finite element approximation, are left to a forthcoming work as well, both extensions having an impact only on implementation aspects. § ACKNOWLEDGMENTS The authors are grateful to Zhaonan Dong (Inria, Paris) for sharing his code on equilibrated fluxes and to Paolo Bardella (DET, Politecnico di Torino) for the MeshToolbox library. The authors acknowledge the support of the Swiss National Science Foundation (via project MINT n. 200021_215099, PDE tools for analysis-aware geometry processing in simulation science) and of the European Union Horizon 2020 FET program (under grant agreement No 862025 (ADAM2)). Author O. Chanon acknowledges the support of the Swiss National Science Foundation through the project n. P500PT 210974.
http://arxiv.org/abs/2312.15968v1
{ "authors": [ "Annalisa Buffa", "Ondine Chanon", "Denise Grappein", "Rafael Vázquez", "Martin Vohralík" ], "categories": [ "math.NA", "cs.NA", "65N15, 65N30" ], "primary_category": "math.NA", "published": "20231226092939", "title": "An equilibrated flux a posteriori error estimator for defeaturing problems" }
Key Lab of advanced optoelectronic quantum architecture and measurement (MOE), Beijing Key Lab of Nanophotonics & Ultrafine Optoelectronic Systems, and School of Physics, Beijing Institute of Technology, Beijing 100081, China School of Physics, Jiangsu Normal University, Xuzhou 221116, China wxfeng@bit.edu.cn Key Lab of advanced optoelectronic quantum architecture and measurement (MOE), Beijing Key Lab of Nanophotonics & Ultrafine Optoelectronic Systems, and School of Physics, Beijing Institute of Technology, Beijing 100081, China Laboratory of Quantum Functional Materials Design and Application, School of Physics and Electronic Engineering, Jiangsu Normal University, Xuzhou 221116, China Institute of Physics, Johannes Gutenberg University Mainz, 55099 Mainz, Germany Peter Grünberg Institut and Institute for Advanced Simulation, Forschungszentrum Jülich and JARA, 52425 Jülich, Germany ygyao@bit.edu.cn Key Lab of advanced optoelectronic quantum architecture and measurement (MOE), Beijing Key Lab of Nanophotonics & Ultrafine Optoelectronic Systems, and School of Physics, Beijing Institute of Technology, Beijing 100081, China Mn_3Sn has garnered significant attention due to its kagome lattice, 120^∘ noncollinear antiferromagnetic order, and substantial anomalous Hall effect. In this study, we comprehensively explore intrinsic and extrinsic contributions to anomalous Hall, anomalous Nernst, and anomalous thermal Hall effects, employing first-principle calculations and group theory analysis.Comparative analysis between our theoretical results and available experimental data underscores the predominance of intrinsic mechanism in shaping anomalous transport properties at low temperatures.Specifically, Weyl fermions are identified as the primary contributors to intrinsic anomalous Hall conductivity. The significance of extrinsic mechanisms becomes evident at high temperatures, especially when the longitudinal charge conductivity falls into the dirty regime, where the side jump mechanism plays a vital role.Extrinsic contributions to anomalous transport properties are primarily influenced by the electronic states residing at the Fermi surfaces.Furthermore, anomalous transport properties exhibit periodic variations when subjected to spin rotations within the kagome plane, achievable by applying an external magnetic field.Our findings advance the understanding of anomalous transport phenomena in Mn_3Sn and offer insights into potential applications of noncollinear antiferromagnetic materials in spintronics and spin caloritronics. Intrinsic and extrinsic anomalous transport properties in noncollinear antiferromagnetic Mn_3Sn from first-principle calculations Yugui Yao January 14, 2024 =================================================================================================================================§ INTRODUCTION The anomalous Hall effect (AHE), discovered by Hall in 1881, refers to the emergence of a transverse charge current in response to a longitudinal electric field without the presence of an external magnetic field <cit.>. It remains a fundamental aspect of condensed matter physics, shedding light on the intricate nature of magnetism <cit.>. Over time, the understanding of the physical mechanisms underlying AHE has evolved, dividing the effect into intrinsic and extrinsic components. 
The intrinsic mechanism, which is not influenced by electron scattering, was initially proposed by Karplus and Luttinger <cit.>, and is now well explained by Berry phase theory, relying solely on the electronic band structure of pristine crystals <cit.>. In contrast, the extrinsic mechanisms, such as skew scattering <cit.> and side jump <cit.>, hinge on electron scattering caused by impurities or disorder.Moreover, there are two other remarkable anomalous transport phenomena: the anomalous Nernst effect (ANE)<cit.> and the anomalous thermal Hall effect (ATHE)<cit.>, which involve the emergence of transverse charge and heat currents driven by longitudinal temperature gradients, respectively.Analogous to the anomalous Hall conductivity (AHC) σ_ij, the anomalous Nernst and anomalous thermal Hall conductivities (ANC and ATHC), α_ij and κ_ij, can be decomposed into three distinct parts: σ_ij=σ_ij^int+σ_ij^sj+σ_ij^isk,α_ij=α_ij^int+α_ij^sj+α_ij^isk, κ_ij=κ_ij^int+κ_ij^sj+κ_ij^isk, Here, the subscripts i,j∈x,y,z represent Cartesian coordinates, and the superscripts int, sj, and isk denote the intrinsic, side jump, and skew scattering contributions, respectively. The AHE is commonly observed in ferromagnetic conductors and it is assumed to be proportional to the magnetization. In contrast, antiferromagnets (AFMs) have long been considered to lack the AHE due to their zero net magnetization <cit.>. However, recent advancements have challenged this notion. For example, a significant AHC was predicted in the noncollinear antiferromagnetic Mn_3Ir through a combination of symmetry analysis and first-principles calculations <cit.>. Subsequent experiments have confirmed substantial AHC in noncollinear AFMs Mn_3X (X = Sn, Ge) even in the absence of an external magnetic field <cit.>.Compared to ferromagnets, AFMs exhibit an array of exotic properties, including insensitivity to magnetic-field perturbations <cit.>, ultrafast spin dynamics <cit.>, and high-frequency uniform spin precession <cit.>. These attributes position AFMs as an excellent platform for antiferromagnetic spintronics <cit.>. Mn_3X, as a representative family of noncollinear AFMs, has garnered significant attention due to its intriguing features, including substantial AHE <cit.>, ANE <cit.>, magneto-optical effects <cit.>, magnetic Weyl fermions <cit.>, magnetic spin Hall effect <cit.>, and spin–orbit torque <cit.>.Moreover, Mn_3X possesses a unique breathing-type kagome lattice structure formed by Mn atoms, as shown in Fig. <ref>. This lattice hosts intriguing topological electronic bands, superconducting phases, and strong electromagnetic and transport responses<cit.>, making it an ideal platform for exploring novel states of quantum matter.However, previous theoretical investigations on Mn_3X <cit.> have primarily focused on the anomalous transport properties induced by intrinsic Berry curvature mechanism, with limited attention paid to the extrinsic mechanisms related to the scattering of electrons off impurities or disorder.In reality, understanding the contribution of extrinsic mechanisms to the AHE in kagome materials is crucial.For instance, remarkable AHC values ranging from 10^4 to 10^5 S/cm, driven by extrinsic mechanisms, have been discovered in other kagome materials such as KV_3Sb_5 <cit.>, CsV_3Sb_5 <cit.>, and Nd_3Al <cit.>. These observations underscore the predominant role played by extrinsic mechanisms in governing the AHE, ANE, and ATHE in the kagome antiferromagnetic materials. 
In this work, we conduct a comprehensive investigation of the intrinsic and extrinsic mechanisms of anomalous transport properties, including the AHE, ANE, and ATHE, in noncollinear antiferromagnetic Mn_3Sn, using state-of-the-art first-principles calculations. By collectively rotating all spins within the kagome plane, we discern the tensor shapes of the AHC, ANC, and ATHC through magnetic group theory. For the nonzero tensor elements, we compute the intrinsic, side jump, and skew scattering contributions individually. A profound anisotropy in the AHE, intricately connected to the evolving coplanar noncollinear spin configurations, is unveiled. Through careful comparisons with available experimental data, we establish the consistent prevalence of the intrinsic mechanism in driving the AHE at low temperatures, notably when the longitudinal conductivity exceeds 10^4 S/cm. Our study highlights the influential role of Weyl fermions near the Fermi energy in shaping the intrinsic AHC in Mn_3Sn. Nevertheless, we also observe a significant increase in the impact of extrinsic mechanisms, especially the side jump component, as the longitudinal conductivity falls below 10^4 S/cm. The extrinsic AHC predominantly emanates from electronic states positioned precisely at the Fermi surface sheets. Furthermore, our calculations of the ANC and ATHC, as well as the anomalous Lorenz ratio, consistently align with experimental observations at low temperatures. Through these findings, we advance the understanding of the intricate competition between intrinsic and extrinsic mechanisms that govern anomalous transport phenomena in noncollinear antiferromagnetic Mn_3Sn. § THEORY AND COMPUTATIONAL DETAILS The AHE, ANE, and ATHE are interconnected through the generalized Landauer-Büttiker formalism <cit.>, as expressed by the anomalous transport coefficients: R^(n)_ij=∫^∞_-∞(E-μ)^n(-∂ f/∂ E)σ_ij(E)dE, where μ is the chemical potential, f = 1/[exp((E-μ)/k_BT) + 1] represents the Fermi-Dirac distribution function, and σ_ij is the AHC at zero temperature. The temperature-dependent ANC and ATHC can be expressed as follows: α_ij=-R^(1)_ij/eT, κ_ij= R^(2)_ij/e^2T. From Eqs. (<ref>)-(<ref>), it is evident that the AHC σ_ij plays a crucial role in determining the other anomalous transport properties. Following the Kubo formalism within linear-response theory <cit.>, the AHC can be partitioned into Fermi surface (σ_ij^I) and Fermi sea (σ_ij^II) components <cit.>: σ_ij^I = -e^2ħ/2π∫d^3k/(2π)^3∑_m≠ n Im[v_mn^i(k)v_nm^j(k)] × (E_mk-E_nk)Γ/{[(E_f-E_mk)^2+Γ^2][(E_f-E_nk)^2+Γ^2]}, and σ_ij^II = e^2ħ/π∫d^3k/(2π)^3∑_m≠ n Im[v_mn^i(k)v_nm^j(k)] × {Γ/[(E_mk-E_nk)((E_f-E_mk)^2+Γ^2)] - 1/(E_mk-E_nk)^2 Im[ln((E_f-E_mk+iΓ)/(E_f-E_nk+iΓ))]}, where i,j∈{x,y,z} represent Cartesian coordinates, v is the velocity operator, E_f is the Fermi energy, E_nk is the energy eigenvalue with band index n at momentum k, and Γ is an adjustable smearing parameter (0 ∼ 0.09 eV). This constitutes the constant smearing (CS) model, which describes the intrinsic AHE. In this model, a constant Γ parameter is assigned, providing all electronic states with the same finite lifetime. In the clean limit (i.e., Γ→ 0), the summation of Eqs. (<ref>) and (<ref>) converges to the well-established Berry curvature expression <cit.>: σ^int_ij = e^2ħ∫d^3k/(2π)^3∑^occ_n∑_m ≠ n 2Im[v^i_mn(k)v^j_nm(k)]/(E_mk-E_nk)^2. It should be noted that complex scattering mechanisms are not explicitly considered within the CS model.
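As an illustration only, here is a minimal sketch of how the clean-limit Berry curvature expression above and the thermal integrals R^(n) of Eqs. (<ref>)-(<ref>) can be evaluated numerically for a generic tight-binding or Wannier Hamiltonian. This is not the FLEUR/wannier90 pipeline used in this work; the function names and the convention e = ħ = 1 (energies in eV) are our assumptions.

```python
import numpy as np

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def berry_curvature_xy(Hk, dHx, dHy, n_occ):
    """k-resolved integrand of the clean-limit (intrinsic) AHC:
    sum over occupied n and all m != n of
    2 Im[v^x_mn v^y_nm] / (E_m - E_n)^2, with v^i = dH/dk_i expressed
    in the band basis and e = hbar = 1."""
    E, U = np.linalg.eigh(Hk)
    vx = U.conj().T @ dHx @ U
    vy = U.conj().T @ dHy @ U
    omega = 0.0
    for n in range(n_occ):
        for m in range(len(E)):
            if m != n:
                omega += 2.0 * np.imag(vx[m, n] * vy[n, m]) / (E[m] - E[n]) ** 2
    return omega  # integrate over the Brillouin zone to obtain sigma^int_xy

def thermal_coefficients(E, sigma_E, mu, T):
    """Moments R^(n) = int (E-mu)^n (-df/dE) sigma(E) dE for n = 0, 1, 2,
    from a zero-temperature AHC curve sigma_E sampled on the grid E (eV).
    Returns (sigma(T), alpha(T), kappa(T)) with the charge e set to 1."""
    x = np.clip((E - mu) / (K_B * T), -60.0, 60.0)       # avoid cosh overflow
    mdf = 1.0 / (4.0 * K_B * T * np.cosh(x / 2.0) ** 2)  # -df/dE
    R = [np.trapz((E - mu) ** n * mdf * sigma_E, E) for n in range(3)]
    return R[0], -R[1] / T, R[2] / T
```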
Alternatively, the inclusion of a short-range Gaussian disorder potential allows for the consideration of scattering-dependent AHC, encompassing the side jump and skew scattering mechanisms. Within the Gaussian disorder (GD) model, the impurity potential is described as V=U∑_i^Nδ(r-R_i), where U signifies the scattering strength, δ is the delta function, and R_i corresponds to the i-th random atomic position among a total of N impurities. Consequently, the impurity concentration is denoted as n_i = N/V, with V being the volume of the cell. For convenience, the disorder parameter is expressed as 𝒱=U^2n_i (0∼80 eV^2a_0^3).It is crucial to emphasize that this impurity potential is spin-independent as it does not encompass spin degrees of freedom. With the incorporation of spin-orbit coupling, the electron's spin becomes intricately reliant on the modification of its orbital angular momentum during scattering. Although the impurity potential utilized in this context can only be interpreted as nonmagnetic impurities in magnetic materials, the possibility of a transverse flow of spin-polarized electrons induced by scattering (i.e., extrinsic anomalous Hall conductivity) is feasible, as demonstrated in previous works <cit.>. The self-energy Σ(E,k), which accounts for the impact of electron scattering off impurities, can be expressed as follows, truncated to the lowest order <cit.>: Σ(E,k)=𝒱∫d^3k'/(2π)^3O_kk'G_0(E,k')O_k'k. Here, O_kk' represents the overlap matrix for the eigenstates at different momenta, and G_0(E,k')=[E-H(k')]^-1 stands for the bare Green's function with the unperturbed Hamiltonian H(k'). After accounting for the scattering effects, the AHC can be formulated using the full Green's functions G^R/A (R: retarded and A: advanced) <cit.> as follows: σ_ij^I = e^2ħ/4π∫d^3k/(2π)^3 Tr[Γ^i (E_f, k)G^R(E_f, k)v^jG^A(E_f, k)-(i↔ j)], and σ_ij^II = e^2ħ/2π∫d^3k/(2π)^3∫^E_f_-∞ Re{ Tr[Γ^i(E, k)G^R(E, k) ×γ(E, k) G^R(E, k)Γ^j(E, k)G^R(E, k) -(i↔ j)]}dE. Here, γ(E, k) and Γ(E, k) are scalar and vector vertex functions, respectively, defined as γ(E, k) = I+𝒱∫d^3k'/(2π)^3O_kk'G^R(E, k')γ(E, k') × G^R(E,k')O_k'k, and Γ(E, k) = v(k)+𝒱∫d^3k'/(2π)^3O_kk'G^A(E, k')Γ(E, k')× G^R(E, k')O_k'k, where I and v are identity and velocity vector operators, respectively.The Fermi sea term, Eq. (<ref>), is conventionally regarded as intrinsic, devoid of any scattering-driven behavior.In contrast, the Fermi surface term, Eq. (<ref>), encompasses intrinsic, side jump, and skew scattering contributions.Examining Eq. (<ref>), if the bare Green function G_0 replaces the full Green function G and the vertex correction is not considered (i.e., Γ^i→ v^i), it reflects an intrinsic contribution and yields intrinsic AHC (σ_ij^int) when combined with the Fermi sea term.When the full Green function G is used and the vertex correction is not considered (i.e., Γ^i→ v^i), the side jump contribution to the AHC (σ_ij^sj) emerges.Finally, if the full Green function G is used and the vertex correction is included (i.e., using Γ^i), the skew scattering contribution to the AHC (σ_ij^isk) is introduced. The decomposition of AHC can be elucidated through Feynman diagrams, referring to Czaja et al.'s work <cit.>. By plugging the decomposed AHC into Eqs. (<ref>)-(<ref>), the corresponding components of ANC and ATHC can be obtained accordingly. In the GD model, the skew scattering term is also known as “intrinsic" skew scattering (σ_ij^isk), originally proposed by Sinitsyn and co-workers. 
<cit.>.Similar to conventional skew scattering, “intrinsic" skew scattering also arises from the asymmetric part of the collision kernel. However, it converges to a finite value in the clean limit (𝒱→0). In contrast, conventional skew scattering is inversely proportional to impurity concentrations and becomes divergent in the clean limit. Diagrammatically speaking, “intrinsic" skew scattering solely results from Gaussian disorder correlations, while conventional skew scattering involves vertex corrections that include correlators of three or more disorder vertices <cit.>. The Gaussian disorder model utilized in this study does not explicitly define the types (such as crystal defect or phonon) and spin structures of impurities. In adopting a “mean-field" approach, the Gaussian disorder model accommodates various scattering channels without delving into the detailed characteristics of the internal nature of scattering sources. Taking into account temperature effects, the microscopic motions within the crystal become more intricate, potentially introducing variations between theoretical calculations and experimental measurements. A comprehensive disorder potential that encompasses all these details could offer a more accurate representation of electronic conductivity and its individual decomposed components. However, the computational treatment of these scattering processes at a detailed microscopic level remains a challenging task for first-principles methods.Thus, the Gaussian disorder model proves suitable for Mn_3Sn, identified as a moderately disordered metal, given that its longitudinal conductivity falls within the dirty and intrinsic regimes (σ_ii<10^6 S/cm), but not the clean regime (σ_ii>10^6 S/cm), as illustrated in Fig. <ref>. The first-principle calculations are carried out using the full-potential linearized augmented plane-wave (FP-LAPW) method implemented in the fleur code <cit.>. The exchange-correlation functional is treated within the generalized gradient approximation using the Perdew-Burke-Ernzerhof parameterization <cit.>.The spin-orbit coupling is included in all calculations. For Mn_3Sn, a plane-wave cutoff energy of 3.80 a_0^-1 is selected, and the experimental lattice constants (a = b = 5.66 Å and c = 4.53 Å) are adopted. The self-consistent calculations and magnetic anisotropy energy calculations are conducted with a 16×16×18 mesh of k-points.To construct maximally localized Wannier functions, s, p, and d orbitals of Mn atoms, as well as s and p orbitals of Sn atoms, are projected onto a uniform k-mesh of 8×8×8 using the wannier90 package <cit.>. For calculating the AHC, an ultra-dense k-mesh of 300×300×300 is employed. For the calculations of the ANC and ATHC using Eq. (<ref>), the AHC is computed with an energy interval of 0.1 meV. § RESULTS AND DISCUSSION Bulk Mn_3Sn alloy crystallizes in a layered hexagonal structure with the crystallographic space group of P6_3/mmc. The primitive unit cell consists of two atomic layers stacked along the c axis. Within each layer, the arrangement of three Mn atoms forms a kagome lattice, while the Sn atom is positioned at the center of each hexagon, as depicted in Fig. <ref>. The spin magnetic moments of the three Mn atoms on the same kagome plane adopt a 120^∘ noncollinear antiferromagnetic order with a Néel temperature (T_N) of 430 K <cit.>. 
Our calculated spin magnetic moment for each Mn atom is 3.26 μ_B, which closely matches the experimental value of ∼ 3.0 μ_B <cit.>. Despite being classified as a noncollinear AFM, Mn_3Sn exhibits a very small net magnetic moment (∼ 0.002 μ_B) <cit.>. This residual magnetic moment allows for the manipulation of the spin orientation within the kagome plane, for instance, through an external magnetic field. Such spin rotations alter the magnetic group and total energy of the system, and consequently impact the anomalous transport properties. In this context, examining the variations in the AHC tensor due to spin rotation is adequate, since the ANC and ATHC share the same symmetry requirements according to Eqs. (<ref>)-(<ref>). The off-diagonal elements of the AHC can be represented in vector notation as σ = [σ^x, σ^y, σ^z] = [σ_yz, σ_zx, σ_xy]. Notably, the anomalous Hall vector σ can be analogously regarded as a pseudovector, akin to spin. Given that the translational operation (τ) does not alter the anomalous Hall vector <cit.>, i.e., τσ= σ, our subsequent analysis is effectively limited to magnetic point groups. This magnetic symmetry analysis has been previously employed in the study of other two- and three-dimensional magnetic materials <cit.>. Table <ref> underscores that the magnetic point group of Mn_3Sn demonstrates a periodicity of 30^∘: m'm'm → 2'/m' → m'm'm, as the spin rotates within the kagome plane (x-y plane). Here, it is worth discussing only two non-repetitive groups, namely m'm'm and 2'/m', in relation to four distinct spin configurations (θ= 0^∘, 15^∘, 30^∘, and 90^∘), illustrated in Fig. <ref>. Note that the specific mirror symmetries plotted in Fig. <ref> only characterize the spin configurations but do not preserve the crystal structure. First, the m'm'm group (θ= 0^∘) consists of a mirror symmetry ℳ_1 and two combined symmetries 𝒯ℳ_2 and 𝒯ℳ_3, as depicted in Fig. <ref>(a). The mirror plane ℳ_1 is perpendicular to the x-axis and parallel to the y-axis, which leads to a sign change in σ^y and σ^z while leaving σ^x unchanged. Similarly, the mirror plane ℳ_2 (parallel to the x-axis, perpendicular to the y-axis) changes the signs of σ^x and σ^z but preserves σ^y. Since the time-reversal symmetry alters the signs of all σ^x, σ^y, and σ^z, the combined 𝒯ℳ_2 symmetry changes the sign of σ^y but preserves σ^x and σ^z. Since ℳ_3 (perpendicular to the z-axis, parallel to the x-y plane, between two kagome planes) changes the signs of σ^x and σ^y, the combined symmetry 𝒯ℳ_3 preserves σ^x and σ^y. Consequently, for the m'm'm group (θ= 0^∘), we find σ = [σ^x, 0, 0] = [σ_yz, 0, 0]. Second, when the spin rotates to 30^∘ within the same m'm'm group, the positions of the ℳ_1 and ℳ_2 mirror planes change accordingly [Fig. <ref>(c)]. Now, both σ^x and σ^y become nonzero under the ℳ_1 and 𝒯ℳ_2 operations. Hence, for the m'm'm group (θ= 30^∘), we obtain σ = [σ^x, σ^y, 0] = [σ_yz, σ_zx, 0]. Third, when θ = 90^∘, the anomalous Hall vector takes the form σ = [0, σ^y, 0] = [0, σ_zx, 0], because the ℳ_1 and ℳ_2 mirror planes are parallel and perpendicular to the x-axis, respectively, as shown in Fig. <ref>(d). Finally, the 2'/m' group (θ = 15^∘) only possesses a combined symmetry operation 𝒯ℳ_3, where the mirror plane ℳ_3 is parallel to the x-y plane and perpendicular to the z-axis [Fig.
<ref>(b)]. For this group, the nonzero elements of the AHC are σ = [σ^x, σ^y, 0] = [σ_yz, σ_zx, 0]. The above results of the symmetry analysis can also be obtained from the Neumann principle <cit.>, wherein all symmetry operations of the corresponding magnetic point group are applied to the conductivity tensor. Additionally, the cluster multipole theory <cit.> serves as another valuable analysis tool, revealing the shape of the conductivity tensor by assessing the cluster multipole moment, which acts as a macroscopic magnetic order. In the magnetic group analysis detailed above, we have determined all possible nonzero elements of the AHC vector corresponding to different spin rotation angles, as summarized in Table <ref>. However, one may wonder which angle θ represents the magnetic ground state of Mn_3Sn. For the noncollinear AFMs considered here, the magnetic anisotropy energy (MAE) can be defined as the total energy difference between distinct spin configurations: MAE(θ)=E_θ≠0-E_θ=0. The variation of the MAE with respect to θ is illustrated in Fig. <ref>(a), from which it becomes evident that the spin configuration with θ = 0^∘ (magnetic space group Cmc'm') represents the magnetic ground state of Mn_3Sn, being 0.019 meV lower in energy than the configuration with θ = 90^∘ (magnetic space group Cm'cm'). This finding is consistent with a prior theoretical calculation <cit.>. Figure <ref>(a) only presents the MAE results within the θ range from 0 to π, as the 120^∘ noncollinear spin order exhibits a discrete two-fold energy degeneracy, rendering MAE(θ) = MAE(θ+π). This degeneracy characteristic aligns with that observed in the cases of noncollinear antiferromagnetic Mn_3XN (X = Ga, Zn, Ag, or Ni) <cit.> as well as two-dimensional van der Waals layered magnets 1T-CrTe_2 <cit.>, Fe_nGeTe_2 (n = 3, 4, 5) <cit.>, and CrXY (X = S, Se, Te; Y = Cl, Br, I) <cit.>. Correspondingly, Fig. <ref>(b) portrays the total AHC (σ_ij) as a function of θ, computed using two representative disorder parameters. At θ = 0^∘ or 180^∘, solely the yz component of the AHC exhibits a nonzero value, whereas at θ = 90^∘, only the zx component is nonzero. For other θ values, both the yz and zx components contribute to the AHC, in full agreement with our magnetic group analysis. The AHC is depicted over the range 0≤θ≤π, while the results for π≤θ≤2π can be acquired from the relation σ(θ)=-σ(θ+π). This arises from the fact that the spin state at θ+π constitutes the time-reversed counterpart of the state at θ, and the AHC is odd under time-reversal operations <cit.>. Another intriguing observation from Fig. <ref>(b) is that the AHC is enhanced across all spin rotation angles when the disorder parameter is decreased. This phenomenon aligns precisely with the disorder-induced amplification of anomalous transport phenomena previously observed in topological semimetals MF_3 (M = Mn, Pd) <cit.>.
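The mirror-plane bookkeeping used in the symmetry analysis above can be automated by imposing invariance of the pseudovector σ under every operation of the magnetic point group. The short sketch below is our own illustration (function name and tolerance are hypothetical); it reproduces σ = [σ_yz, 0, 0] for the m'm'm group at θ = 0^∘.

```python
import numpy as np

def allowed_ahc(ops):
    """Components of the AHC pseudovector allowed by a magnetic point group.
    ops: list of (R, t) with R a 3x3 orthogonal matrix and t = -1 if the
    operation is combined with time reversal, else +1.  A pseudovector
    transforms as sigma -> t * det(R) * R @ sigma, so invariance requires
    (t * det(R) * R - I) sigma = 0 for every operation; the common null
    space spans the allowed components."""
    A = np.vstack([t * np.linalg.det(R) * R - np.eye(3) for R, t in ops])
    _, s, Vh = np.linalg.svd(A)
    return Vh[np.sum(s > 1e-9):]   # basis of the invariant subspace

# m'm'm at theta = 0: M_1 (x -> -x), T*M_2 (y -> -y), T*M_3 (z -> -z)
M1, M2, M3 = np.diag([-1.0, 1, 1]), np.diag([1.0, -1, 1]), np.diag([1.0, 1, -1])
print(allowed_ahc([(M1, +1), (M2, -1), (M3, -1)]))  # spans [1, 0, 0] ~ sigma_yz
```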
Based on the dependence of the total AHC (σ_ij) on the longitudinal conductivity (σ_ii), three distinct scaling relations have been proposed for various magnetic materials <cit.>: σ_ij∝σ_ii^2 or σ_ii^1 in the clean regime (σ_ii > 10^6 S/cm), σ_ij∝σ_ii^0 in the intrinsic regime (10^4 < σ_ii < 10^6 S/cm), and σ_ij∝σ_ii^1.6 in the dirty regime (σ_ii < 10^4 S/cm). While earlier theoretical investigations have primarily focused on the intrinsic AHE in Mn_3Sn <cit.>, the influence of scattering-dependent extrinsic mechanisms has yet to be explored comprehensively. Figure <ref>(a) showcases the total AHC (σ_yz) and its decomposition (σ^int_yz, σ^sj_yz, and σ^isk_yz) as a function of the longitudinal conductivity σ_xx for Mn_3Sn in its magnetic ground state (θ=0^∘). Notably, Mn_3Sn predominantly lies within the dirty and intrinsic regimes due to its σ_xx< 10^6 S/cm. As σ_xx increases, the total AHC σ_yz rises and gradually approaches a constant plateau of -230 S/cm for σ_xx >10^4 S/cm. In the intrinsic regime, the intrinsic AHC σ^int_yz (≈ -146 S/cm) plays a dominant role, aligning well with previous theoretical calculations <cit.>. Meanwhile, the extrinsic mechanisms (σ^sj_yz + σ^isk_yz) assume a secondary role, contributing 34% of the total AHC. Given that the intrinsic mechanism is independent of scattering, it should be much less affected by changes in the longitudinal conductivity than the extrinsic mechanisms. Our calculations indeed demonstrate that the intrinsic contribution remains fairly stable within the dirty regime (σ_xx< 10^4 S/cm). Conversely, skew scattering rapidly diminishes towards zero, while side jump experiences a significant increase, primarily governing the declining trend of the total AHC. A recent experimental study has reported a reduction in the total AHC as σ_xx decreases below 10^4 S/cm <cit.>. However, this work <cit.> mentioned that the contribution of the side jump mechanism can be ruled out due to the weak spin-orbit coupling strength of Mn 3d electrons, suggesting that the reduction in total AHC is driven by the intrinsic mechanism. In direct comparison, Fig. <ref>(b) demonstrates the excellent agreement between our calculations and the experimental results <cit.>. In the dirty regime (σ_xx < 10^4 S/cm), a scaling relation of σ_yz∼σ_xx^1.6 is evident, highlighting the pronounced significance of the extrinsic side jump mechanism. Consequently, our analysis leads to the conclusion that, whereas the intrinsic mechanism dominates in the intrinsic regime, the large reduction of the AHC in the dirty regime is primarily attributed to the contribution of the side jump mechanism. The intrinsic mechanism of the AHC stems from the presence of nonvanishing Berry curvature in momentum space and can be largely enhanced by topological features in the band structure, such as Weyl nodal points. Nevertheless, the link between the extrinsic mechanisms of the AHC and the underlying band structure remains less elucidated. Recent theoretical and experimental investigations have illuminated the existence of magnetic Weyl fermions near the Fermi energy (E_f) in Mn_3Sn <cit.>. As these Weyl points can be interpreted as effective magnetic monopoles in momentum space, the increased Berry curvature in proximity to these points contributes to a substantially amplified AHC. The band structure calculated with spin-orbit coupling for the ground state spin configuration (θ=0^∘) of Mn_3Sn is presented in Fig.
<ref>(a). The breaking of time-reversal symmetry triggers the emergence of multiple pairs of Weyl points at varying energy levels. For our analysis, we focus on those near E_f, as they are pertinent to the anomalous transport properties. Owing to the ℳ_1 and 𝒯ℳ_2 symmetries within the m'm'm group, all K points in the first Brillouin zone are equivalent, while two inequivalent M points are labeled M and M'. In the vicinity of the M point, an intersection between a parabolic band and an anti-parabolic-like band engenders Weyl points (W^+_1, W^-_2) at E_f + 36 meV, accompanied by their counterparts (W^-_1, W^+_2) at E_f + 72 meV. However, no Weyl points are present near the M' point. The spatial distribution of Weyl points along the K-M-K path on the k_z = 0 plane is showcased in Fig. <ref>(b). Upon shifting the Fermi energy upwards to 36 and 72 meV, the intrinsic AHC around the Weyl points exhibits a sharp increase, as depicted in Fig. <ref>(c). This observation affirms the inherent enhancement of the intrinsic AHC through topological Weyl nodal structures. Furthermore, Fig. <ref>(d) illustrates the extrinsic AHC at the same Fermi energies. In contrast to the intrinsic AHC, the extrinsic AHC is primarily distributed along the Fermi surface sheets, indicating a more substantial contribution from the Fermi surfaces as compared to the Fermi sea. Next, we turn to the variation in the anomalous transport properties of Mn_3Sn with temperature. By utilizing the experimentally derived relationship between the longitudinal conductivity σ_ii and temperature, we can easily map our results of σ_ij to temperature. Figure <ref>(a) illustrates the calculated AHC as a function of temperature, compared with available experimental data. Notably, three previous experimental studies <cit.> show noticeable discrepancies in the magnitude of the AHC. This can be attributed to variations in the chemical composition of the samples, as the Mn-to-Sn atomic ratio deviates from the ideal 3:1, leading to differences in Mn content across different samples. However, the crucial observation is that all of these studies consistently demonstrate a declining trend in the AHC with increasing temperature. We observe that the intrinsic AHC (σ^int_yz) remains relatively constant over the entire temperature range, and the reduction in the AHC with increasing temperature can be attributed to the behavior of the side jump and skew scattering (σ^sj_yz+σ^isk_yz) contributions. This underlines the significant role that the extrinsic mechanisms play in shaping the AHE of Mn_3Sn at higher temperatures. We have to remark that with increasing temperature, the spin fluctuations away from the equilibrium direction, which are not explicitly included in our analysis, may become more prominent. This will eventually result in a suppression of the AHC values averaged over the possible angles [Fig. <ref>(b)]. The presence of this mechanism may explain the stronger decay of the AHC with T observed experimentally. Since this effective averaging also results in a smoother behavior of the AHC with band filling, a similar suppression effect at higher temperatures can be expected for the ANC, discussed below. The total ANC (α_yz) along with its components (α^int_yz, α^sj_yz, and α^isk_yz) are computed using Eq. (<ref>) and presented as a function of temperature in Fig. <ref>(b). It can be observed that the ANC gradually increases with rising temperature, with a particularly pronounced rise occurring when the temperature exceeds 100 K.
This upward trend contrasts with experimental findings, which have demonstrated a decrease in the ANC as the temperature goes beyond 150 K <cit.> or 200 K <cit.>. Notably, it is worth mentioning that a phase transition from a noncollinear antiferromagnetic structure to a helical spin structure has been reported around 200 K <cit.>. In our calculations, we have exclusively considered a perfect magnetic crystal featuring a 120^∘ noncollinear antiferromagnetic structure, thereby excluding the influence of any additional phase transitions. Furthermore, in addition to the mechanism discussed above, as the temperature increases, phonon thermal dynamics becomes more pronounced, which undoubtedly impacts the anomalous transport properties of magnetic materials. The Gaussian disorder model <cit.> employed in our work broadly encompasses all "mean-field" scattering channels. However, the intricate details of electron scattering arising from phonons are not explicitly accounted for. Consequently, as temperature rises, our calculated ANC is expected to increase due to its positive temperature-dependent nature. Subsequently, we delve into the ATHE of Mn_3Sn, the thermal counterpart of the AHE, as illustrated in Fig. <ref>(c). At lower temperatures (below 100 K), the calculated ATHC is in good agreement with experimental results <cit.>. As the temperature increases, the intrinsic ATHC (κ^int_yz) exhibits a monotonic increase, leading to an overall rising trend in the total ATHC (κ_yz). However, experimental observations have indicated a relatively minor temperature dependence in the ATHC <cit.>, fitting the behavior of our calculated extrinsic ATHC (κ^isk_yz + κ^sj_yz). The anomalous thermal and electrical transports can be interconnected through the anomalous Lorenz ratio, defined as L_ij = κ_ij / (σ_ijT), which has been employed to judge the contributions of the intrinsic mechanism and of scattering to the AHE <cit.>. Similar to the AHC, ANC, and ATHC, L_ij can also be decomposed into three parts: L_ij = L_ij^int + L_ij^sj + L_ij^isk, where L_ij^int represents the intrinsic contribution, and L_ij^sj and L_ij^isk are the extrinsic contributions from side jump and skew scattering, respectively. As the temperature approaches zero, L_ij converges to the free-electron Lorenz number, in accordance with the Wiedemann-Franz law: L_ij (T→0) ≈ L_0 = π^2k_B^2/3e^2 = 2.44×10^-8 V^2/K^2. Examining Fig. <ref>(d), we observe that when the temperature is less than 50 K, the calculated L_yz closely aligns with L_0. This indicates that at low temperatures, the intrinsic mechanism predominantly governs the anomalous electrical and thermal transport in Mn_3Sn, suggesting that transverse charge and heat currents flow in a nearly dissipationless way. As the temperature rises, the transverse heat current carried by conducting electrons is expected to experience progressively increased dissipation due to inelastic scattering with phonons. For instance, above 50 K, the experimentally measured L_yz for Mn_3.06Sn_0.94 <cit.> deviates noticeably from L_0, signaling a crossover in the dominant role from the intrinsic mechanism to the extrinsic ones. This is consistent with our calculations of the extrinsic contributions (L^sj_yz+L^isk_yz) to the anomalous Lorenz ratio as the temperature increases. However, for the Mn_3.09Sn_0.91 sample <cit.> and another experimental study of Mn_3Sn <cit.>, the deviation of L_yz from L_0 is not substantial, indicating the potential persistence of the intrinsic mechanism's predominance over the extrinsic ones.
Thus, in our calculations, the anomalous Lorenz ratio L_yz remains relatively close to L_0 across the entire temperature range. § SUMMARY In summary, we have systematically studied the intrinsic and extrinsic anomalous Hall, anomalous Nernst, and anomalous thermal Hall effects in noncollinear antiferromagnetic Mn_3Sn, utilizing advanced first-principle calculations and magnetic group analysis. In our study, the intrinsic contribution is associated with the Berry phase effect of relativistic bands within a pristine crystal, free from impurities. All additional contributions arising from scattering on impurities or disorder are classified as extrinsic. The spin-independent impurity potential utilized in our study can be understood as representing nonmagnetic impurities in magnetic materials. With the incorporation of spin-orbit coupling, the electron's spin becomes intricately dependent on the modification of its orbital angular momentum during scattering. Consequently, the transverse flow of spin-polarized electrons induced by scattering on nonmagnetic impurities is indeed feasible. The definitions of intrinsic and extrinsic contributions to the anomalous transport properties align with the established conventions of most previous research. We first identified the nonvanishing tensor elements of the anomalous Hall conductivity for diverse coplanar noncollinear spin configurations, according to the symmetry requirements of the relevant magnetic point groups. Upon the collective rotation of all spins within the kagome plane, the anomalous Hall conductivity showcases periodic patterns, giving rise to a pronounced magnetic anisotropy. Previous theoretical works have primarily focused on studying the intrinsic anomalous transport properties of Mn_3Sn through Berry curvature calculations, with relatively less attention given to extrinsic mechanisms. Through computations of the total anomalous Hall conductivity and its constituent components, we have unveiled that the intrinsic mechanism uniformly dominates within the intrinsic regime, especially when the longitudinal conductivity σ_xx surpasses 10^4 S/cm. The intrinsic mechanism can be traced to the substantial Berry curvatures encircling Weyl points proximate to the Fermi energy. In the dirty regime (σ_xx<10^4 S/cm), extrinsic mechanisms, notably the side jump, emerge as potent contributors, which brings our theoretical results closer to experimental measurements. This extrinsic anomalous Hall conductivity predominantly stems from electronic states positioned precisely at the Fermi surfaces. Moreover, our findings with regard to the anomalous thermal Hall effect and the anomalous Lorenz ratio compare well with experimental outcomes at low temperatures, consistently indicating the dominant role of the intrinsic mechanism. As the temperature rises, a certain degree of deviation between theoretical and experimental results becomes apparent. These deviations may be attributed to enhanced phonon scattering and increased complexity in the internal structure of the crystal, factors that are not fully accounted for in the Gaussian disorder model. Through these comprehensive insights, our study has substantially enriched the understanding of anomalous transport phenomena in noncollinear antiferromagnetic Mn_3Sn. Furthermore, our work offers valuable perspectives for potential applications in the realms of spintronics and spin caloritronics, harnessing the distinctive attributes of noncollinear antiferromagnetic materials.
This work is supported by the National Key R&D Program of China (Grants No. 2022YFA1403800, No. 2022YFA1402600, and No. 2020YFA0308800), the National Natural Science Foundation of China (Grants No. 12274027 and No. 11874085), the Science & Technology Innovation Program of Beijing Institute of Technology (Grant No. 2021CX01020).Y.M. acknowledges the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) TRR 288—422213477 (Project No. B06). Y.M., W.F., and Y.Y. acknowledge the funding under the Joint Sino-German Research Projects (Chinese Grant No. 12061131002 and German Grant No. 1731/10-1) and the Sino-German Mobility Program (Grant No. M-0142).
http://arxiv.org/abs/2312.16050v1
{ "authors": [ "Xiuxian Yang", "Wanxiang Feng", "Xiaodong Zhou", "Yuriy Mokrousov", "Yugui Yao" ], "categories": [ "cond-mat.mtrl-sci" ], "primary_category": "cond-mat.mtrl-sci", "published": "20231226133426", "title": "Intrinsic and extrinsic anomalous transport properties in noncollinear antiferromagnetic Mn$_3$Sn from first-principle calculations" }
We study embedded rational curves in projective toric varieties. Generalizing results of the first author and Zotine for the case of lines, we show that any degree d rational curve in a toric variety X can be constructed from a special affine-linear map called a degree d Cayley structure. We characterize when the curves coming from a degree d Cayley structure are smooth and have degree d. We use this to establish a bijection between the set of irreducible components of the Hilbert scheme whose general element is a smooth degree d curve, and so-called maximal smooth Cayley structures. Furthermore, we describe the normalization of the torus orbit closure of such rational curves in the Chow variety, and give partial results for the orbit closures in the Hilbert scheme. § INTRODUCTION An insightful approach to better understanding the geometry of a projective variety X⊆ℙ^n involves studying all subvarieties contained in X. Of particular interest are rational curves. In this paper, we will contribute to the study of rational curves in projective toric varieties. Rational curves are often studied by considering (log stable) maps ℙ^1 → X, see <cit.> and <cit.> for results in the toric setting. However, our focus will not be on the map, but on the curve itself, that is, on the image of the map. In particular, the underlying moduli spaces we are primarily interested in are the Hilbert schemes Hilb_dm+1(X) (parametrizing subschemes of X with Hilbert polynomial P(m)=dm+1 equal to that of a smooth rational curve of degree d) and Chow_d(X) (parametrizing one-cycles of X of degree d). This paper builds on previous work of the first author and Zotine <cit.>, in which they study Fano schemes of toric varieties. The special case of the Fano scheme of lines is that of rational curves of degree d=1, in which case the Chow and Hilbert schemes coincide. We now summarize our main results. For this purpose, we first introduce a bit of notation. Throughout, we will work over an algebraically closed field 𝕜 of characteristic zero. Let 𝒜 be a finite subset of a lattice M≅ℤ^n. Associated to 𝒜 is a projective toric variety X_𝒜⊆ℙ^#𝒜-1, see <ref>. The key combinatorial gadgets in our study of Hilb_dm+1(X_𝒜) are degree d Cayley structures of length ℓ on faces of the set 𝒜, see <ref> for a definition. Roughly speaking, a degree d Cayley structure of length ℓ is an affine linear map mapping a subset τ of 𝒜 to the dth dilate of a standard simplex of dimension ℓ.
Any two Cayley structures are equivalent if they differ by a permutation of the vertices of the simplex. A degree d Cayley structure π defines a family over M_0,ℓ+1× T_τ of non-constant basepoint-free maps from ℙ^1 to X_𝒜 (see <ref>). Here, T_τ is a quotient of the dense torus of X_𝒜, and M_0,ℓ+1 the moduli space of ℓ+1 marked points on ℙ^1. (In the special case ℓ=1, the family is over T_τ/𝕜^*.) We identify a combinatorial criterion on the Cayley structure π that characterizes when the image of a generic map in this family has degree d (corresponding to π being primitive, see Definition <ref> and Theorem <ref>). Likewise, we identify a combinatorial criterion that characterizes when the image of a generic map in the family is smooth (corresponding to π being smooth, see Definition <ref> and Theorem <ref>). For a smooth primitive degree d Cayley structure π, we thus have a rational map

M_0,ℓ+1× T_τ ⇢ Hilb_dm+1.

In fact, this map is generically finite. We denote the closure of its image by Z_π. There is a natural combinatorially-defined partial order ≤ on the set of smooth primitive Cayley structures of degree d defined on faces of 𝒜 (see Definition <ref>). Using this, we obtain:

[See Corollary <ref>] The map π↦ Z_π induces a bijection between equivalence classes of maximal smooth primitive degree d Cayley structures and irreducible components of Hilb_dm+1(X_𝒜) whose general element is a smooth rational curve.

The most interesting behaviour of many moduli spaces is found along the boundary. Motivated by this, we study the limiting behaviour of a general element η of Z_π under a one-parameter subgroup of T_τ. In Theorem <ref>, we give a combinatorial description of this limit as a one-cycle. Using this, we are able to describe the normalization of the closure of the T_τ-orbit of η in the Chow variety Chow_d(X_𝒜). Indeed, using the combinatorics of π, we construct a fan Σ_π (see Definition <ref>) and prove the following:

[See Theorem <ref>] The normalization of the T_τ-orbit closure of a general curve corresponding to π is the toric variety corresponding to the fan Σ_π.

Understanding the normalization of the T_τ-orbit closure in Hilb_dm+1 of a general point η∈ Z_π is much more subtle. Using the Hilbert-Chow morphism, we see that this toric variety is described by a fan given by a refinement of Σ_π (see Proposition <ref>). However, in the case of the Hilbert scheme of conics, we are able to say exactly what happens: the fan is the coarsest common refinement of Σ_π with the normal fan Σ' of a certain matroid polytope (see Theorem <ref>).

We briefly comment on the relationship between this paper and <cit.>. In <cit.>, Ranganathan considers the moduli space of log stable maps from ℙ^1 to a toric variety X. The interior of this moduli space, corresponding to maps from ℙ^1 whose images meet the dense torus of X and intersect the boundary in prescribed fashion, is similar to the family of maps we obtain from a Cayley structure π. However, the behaviour at the boundary is very different from what happens in the Hilbert or Chow schemes. In <cit.>, Banerjee gives a combinatorial description of the space of morphisms of fixed multidegree from ℙ^1 to a simplicial toric variety X. This is perhaps more similar to our approach, in the sense that such maps can have images that are contained in the toric boundary. Nonetheless, the focus there remains on the morphism, not the image as in our case.

The remainder of this paper is organized as follows.
In <ref> we introduce some basic notation for toric varieties and define degree d Cayley structures and related notions. We show in <ref> how to construct a family of rational curves from any Cayley structure, and conversely how a rational curve in a toric variety determines a corresponding Cayley structure. In <ref> we more closely study the geometry of the rational curves obtained from a degree d Cayley structure, characterizing in particular when these curves are smooth and of degree d. Our discussion on irreducible components of the Hilbert scheme is found in <ref>. We conclude in <ref> with a study of torus orbits in the Chow and Hilbert schemes. We finish this introduction with an example illustrating some of our results.

[A singular Fano threefold] Let 𝒜 be the subset of ℤ^3 whose elements are the columns of the matrix

( [ -1 0 1 1 0 -1 0 0 0; -1 -1 0 1 1 0 0 0 0; 0 0 0 0 0 0 0 1 -1 ]),

see Figure <ref>. The toric variety X_𝒜 is a singular Fano threefold in ℙ^8. The set 𝒜 admits 9 non-equivalent maximal degree two Cayley structures of length 1:

(u_1,u_2,u_3) ↦ (1+u_1+c· u_3, 1-u_1-c· u_3)   c∈{-1,0,1}
(u_1,u_2,u_3) ↦ (1+u_2+c· u_3, 1-u_2-c· u_3)   c∈{-1,0,1}
(u_1,u_2,u_3) ↦ (1+u_1-u_2+c· u_3, 1-u_1+u_2-c· u_3)   c∈{-1,0,1}.

The set 𝒜 has 12 two-dimensional faces, each consisting of exactly three points. Up to equivalence, each of these faces has a unique maximal degree two Cayley structure of length 5. See also Example <ref> and Figure <ref>. All of these Cayley structures give rise to smooth conics in X_𝒜, and hence to irreducible components in the Hilbert scheme of conics in X_𝒜; see Example <ref>. The 9 length 1 Cayley structures yield 9 components of dimension 2; these components are themselves toric. After appropriate choice of coordinates, the fans corresponding to the toric varieties appearing as the normalizations of these components are pictured in Figure <ref>. See Examples <ref> and <ref>. The 12 length 5 Cayley structures yield 12 components of dimension 5. Each of these components is isomorphic to the ℙ^5 parametrizing conics in the plane.

§.§ Acknowledgements

Both authors were supported by NSERC Discovery Grants. We thank Dhruv Ranganathan, Sandra Di Rocco, and Luca Schaffler for helpful discussions.

§ PRELIMINARIES

§.§ Toric Varieties

We will always be working over an algebraically closed field 𝕜 of characteristic zero. Fix a lattice M. To a finite subset 𝒜⊂ M, we associate the projective toric variety

X_𝒜 = Proj 𝕜[S_𝒜] ⊂ ℙ^#𝒜-1,

where S_𝒜 is the semigroup generated by elements (u,1)∈ M×ℤ for u∈𝒜, and 𝕜[S_𝒜] is the corresponding semigroup algebra. For any subset τ of M, we will use ⟨τ⟩ to denote the sublattice consisting of all differences of elements of τ. Given v∈𝒜, we denote the associated homogeneous coordinate of X_𝒜 by x_v. The variety X_𝒜 comes equipped with an action of the torus T_𝒜=Spec 𝕜[⟨𝒜⟩]; the action on the projective coordinate x_v has weight v. We will use the notation T=Spec 𝕜[M] and note that the tori T_τ (for τ a subset of 𝒜) are quotients of T. In particular, T also acts on X_𝒜. For more details on toric varieties see <cit.>.

A face τ of 𝒜 is the intersection of 𝒜 with a face of the convex hull of 𝒜, and we write τ≼𝒜. Note that we consider 𝒜 to be a face of itself. There is a natural closed embedding X_τ⊂ X_𝒜 determined by the homomorphism 𝕜[S_𝒜]→𝕜[S_τ] which for any v∈𝒜 sends x_v↦ x_v if v∈τ, and x_v↦ 0 if v∉τ. Given a face τ of 𝒜, we let x_τ be the point of X_𝒜 such that x_u=0 for u∉τ and x_u=1 for u∈τ. We note that the T_τ-orbit of x_τ is dense in X_τ. We write ∂ X_τ for the complement of this orbit, i.e. ∂ X_τ = ⋃_τ' ≺τ X_τ'.
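As a simple illustration of these definitions (not needed in the sequel), take M=ℤ and 𝒜={0,1,2}. Then S_𝒜 is generated by (0,1),(1,1),(2,1)∈ℤ×ℤ, and

X_𝒜 = Proj 𝕜[S_𝒜] ⊂ ℙ^2

is the plane conic cut out by x_0x_2=x_1^2. The torus T_𝒜=Spec 𝕜[⟨𝒜⟩]≅𝕜^* acts with weights 0,1,2 on the coordinates x_0,x_1,x_2. The proper faces of 𝒜 are the vertices {0} and {2}, and ∂ X_𝒜=X_{0}∪ X_{2} consists of the two torus-fixed points (1:0:0) and (0:0:1).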
§.§ Degree-d Cayley Structures

We now generalize the notion of a Cayley structure from <cit.>. For natural numbers d and ℓ, we set

Δ_ℓ(d)= {(u_0,…,u_ℓ)∈ℤ^ℓ+1_≥ 0 : u_0+…+u_ℓ=d}.

Note that the convex hull of Δ_ℓ(d) is the d-th dilation of an ℓ-dimensional standard simplex. We will use e_0,…,e_ℓ to refer to the elements of the standard basis of ℤ^ℓ+1, and e_0^*,…,e_ℓ^* to refer to the dual basis elements.

Let τ be a face of 𝒜. A weak Cayley structure of length ℓ and degree d on τ is a non-constant affine-linear map π:τ→Δ_ℓ(d). Consider any map π:τ→Δ_ℓ(d).

* We say i∈{0,…, ℓ} is a basepoint for π if for every w∈π(τ), w_i>0. We say π is basepoint-free if no element of {0,…,ℓ} is a basepoint.
* We say that π is concise if for every 0≤ i ≤ℓ there exists v∈π(τ) such that v_i ≠ 0.

A Cayley structure is a weak Cayley structure that is basepoint-free and concise. Note that in the special case d=1, basepoint-freeness and conciseness are equivalent to surjectivity of π. We call any two (weak) Cayley structures equivalent if they differ only by a permutation of the basis vectors of ℤ^ℓ+1. This is the same as identifying any two (weak) Cayley structures differing by an affine automorphism of Δ_ℓ(d); this defines an equivalence relation on the set of (weak) Cayley structures.

Let ℬ⊆Δ_ℓ(d) be the image of a weak Cayley structure π:τ→Δ_ℓ(d). The map π determines a surjective ring homomorphism

𝕜[S_τ] →𝕜[S_ℬ],  x_u ↦ x_π(u),

and hence an embedding X_ℬ↪ X_τ. This induces an inclusion of tori T_ℬ→ T_τ; we will denote by T_π the image of T_ℬ in T_τ. Let N_τ=Hom(⟨τ⟩,ℤ) be the cocharacter lattice of T_τ. The weak Cayley structure π:τ→Δ_ℓ(d) induces a linear map

π^*:(ℤ^ℓ+1)^* → N_τ,

where for v∈(ℤ^ℓ+1)^* and u,w∈τ, π^*(v) sends u-w∈⟨τ⟩ to v(π(u)-π(w)).

Let π:τ→Δ_ℓ(d) be a weak Cayley structure. Let c ∈ℤ^ℓ+1 be the coordinatewise minimum of π, that is, c_i = min{π(u)_i : u ∈τ}. The resolution of π is the map

res(π) : τ →Δ_ℓ(d'),  u ↦π(u)- c,

where d' = d - ∑ c_i. For any weak Cayley structure π, the resolution res(π) will be basepoint-free. If π is already basepoint-free, res(π)=π.

Let π:τ→Δ_ℓ(d) be a weak Cayley structure, and let F be the minimal face of Δ_ℓ(d) containing π(τ). A concision of π is the composition of π with any affine-linear bijection F→Δ_ℓ'(d), where ℓ'= dim F. For any weak Cayley structure π, its concisions will all be equivalent. Furthermore, they will be concise. If π was basepoint-free, then so is any concision. Hence, given a weak Cayley structure π, we may always obtain a Cayley structure by considering a concision of res(π). Note that equivalent Cayley structures have equivalent resolutions and concisions.

The notion of a weak Cayley structure is closely related to the generalized π-twisted Cayley sums of <cit.>. Indeed, consider R a π-twisted Cayley sum over the polytope F. If F is a dilate of a standard simplex, then by restricting to the lattice points of R, the map π gives a Cayley structure. For more general F we may choose an (affine-linear) inclusion of F in a dilate of a standard simplex; by restricting to the lattice points of R we obtain a weak Cayley structure (which may be made into a Cayley structure by taking a concision of its resolution).

[Cayley structures on a Fano threefold] We consider the set 𝒜 from Example <ref> and pictured in Figure <ref>. Up to equivalence, there are exactly nine length 1 degree 2 Cayley structures defined on the entire set 𝒜. The images of the elements of 𝒜 under the two Cayley structures

π:(u_1,u_2,u_3) ↦ (1+u_2,1-u_2);  π':(u_1,u_2,u_3) ↦ (1+u_2+u_3,1-u_2-u_3)

are depicted in the top of Figure <ref>.
A length 5 degree 2 Cayley structure π” on the face τ={(1,0,0),(0,-1,0),(0,0,1)} is depicted in the bottom of Figure <ref>. Up to equivalence, this is the only length 5 degree 2 Cayley structure on τ.

Restricting the Cayley structure π to the face τ, we obtain a weak Cayley structure π|_τ that is concise, but not basepoint-free: 1 is a basepoint. The resolution of π|_τ is the degree 1 Cayley structure

res(π|_τ):(u_1,u_2,u_3)↦ (1+u_2,-u_2),

sending (1,0,0) and (0,0,1) to e_0, and (0,-1,0) to e_1. Let τ' be the face τ'={(0,-1,0),(0,0,1)}. Restricting π” to τ', we obtain a weak Cayley structure π”|_τ' that is basepoint-free but not concise. A concision for this weak Cayley structure is given by the length 3 Cayley structure sending (0,-1,0) to e_2+e_3 and (0,0,1) to e_0+e_1.

§ CAYLEY STRUCTURES AND RATIONAL CURVES

§.§ Constructing Curves

Fix a natural number ℓ. For i=0,…, ℓ, we consider linear forms f_i∈𝕜[y_0,y_1]. We will write

𝐟=(f_0,…,f_ℓ),

and for v=(v_0,…,v_ℓ)∈ℤ^ℓ+1_≥0 set

𝐟^v=∏_i=0^ℓ f_i^v_i.

Let π:τ→Δ_ℓ(d) be a weak Cayley structure. Let 𝐟 be a tuple of linear forms. We define

ρ_π,𝐟:ℙ^1 ⇢ X_τ⊂ℙ^#𝒜-1,  y↦(𝐟^π(u))_u∈𝒜,

where we adopt the convention that for u∉τ, 𝐟^π(u):= 0. We denote the closure of the image of ρ_π,𝐟 by C_π,𝐟. Note that C_π,𝐟 ⊆ X_τ because π is affine-linear. It is straightforward to observe that C_π,𝐟=C_res(π),𝐟. In the special case that π has length 1, the curve C_π,𝐟 is independent of 𝐟 as long as its entries have distinct roots. In this case, we may simply write C_π for the image of ρ_π,𝐟.

Let π and 𝐟 be as in Definition <ref>. Suppose the entries of 𝐟 have distinct roots and π is basepoint-free.

* The map ρ_π,𝐟 is a basepoint-free morphism.
* If in addition π is concise, the preimage of the boundary, ρ_π,𝐟^-1(∂ X_τ), is the set of roots of the f_i.

Let y ∈ℙ^1. By assumption, at most one f_i vanishes at y, so let i be such that f_j(y)≠0 for all j≠ i. Since π is basepoint-free, there exists u ∈τ such that π(u)_i = 0, that is, f_i ∤ 𝐟^π(u). Then 𝐟^π(u)(y)≠0 as it is a product of nonvanishing forms. This shows (<ref>). For (<ref>), we have ρ_π,𝐟(y) ∈∂ X_τ if and only if 𝐟^π(u)(y) = 0 for some u∈τ. Since π is concise, this holds if and only if y is a root of some f_i.

For the remainder of this section, we will assume that π is a Cayley structure, that is, is both basepoint-free and concise. We may reduce to this situation by replacing a weak Cayley structure by a concision of its resolution. By (<ref>), the image curve C_π,𝐟 intersects the dense torus orbit of X_τ. By acting on the curve by the torus T_τ, we obtain a family of curves {t· C_π,𝐟}_t∈ T_τ. Note that the roots of the f_i (as in Proposition <ref>(<ref>)) give well-defined marked points on ℙ^1, but determine the forms f_i only up to scalar multiple. These scalars are captured by the action of the torus T_τ:

Let 𝐟 = (f_0, …, f_ℓ) and let 𝐠 = (c_0 f_0, …, c_ℓ f_ℓ) where c_i ∈𝕜^* for all i. Then ρ_π,𝐠= t ·ρ_π,𝐟 for some unique t ∈ T_τ.

We have ρ_π,𝐟(y) = (𝐟^π(u) : u ∈τ) and ρ_π,𝐠(y) = (𝐠^π(u) : u ∈τ). For each u ∈τ, let λ_u = (c_0, …, c_ℓ)^π(u)∈𝕜^*, so

𝐠^π(u) = λ_u 𝐟^π(u).

Note that both ρ_π,𝐟 and ρ_π,𝐠 satisfy the defining equations of X_τ. In particular, consider an affine relation ∑_u ∈τ a_u u = ∑_u ∈τ b_u u for some coefficients a_u, b_u ∈ℤ_≥0. Applying this to Equation <ref>, we see that the point λ = (λ_u : u ∈τ) ∈ℙ^|τ|-1 itself satisfies these relations, i.e. is a point of the interior of X_τ and so a translate of the distinguished point x_τ∈ X_τ. That is, λ = t · x_τ for some unique t ∈ T_τ.
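To make this construction concrete, consider again the length 1 Cayley structure π:(u_1,u_2,u_3)↦(1+u_2,1-u_2) on the set 𝒜 of Example <ref>, together with 𝐟=(y_1,y_0). Then ρ_π,𝐟 sends y=(y_0:y_1) to the point with coordinates

x_u = 𝐟^π(u) = y_1^1+u_2 y_0^1-u_2,

which equals y_0^2, y_0y_1, or y_1^2 according as u_2=-1, 0, or 1. The image C_π is a smooth conic meeting ∂ X_𝒜 exactly at the images of (1:0) and (0:1), the roots of f_0=y_1 and f_1=y_0, as predicted by Proposition <ref>. The translates t· C_π for t∈ T_𝒜 sweep out the corresponding two-dimensional family of conics appearing in Example <ref>.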
Recall that M_0,ℓ+1 is the moduli space of ℓ+1 distinct marked points on ℙ^1 up to automorphism.

Let 𝐏 = (P_0, …, P_ℓ) ∈ M_0, ℓ+1. We choose coordinates P_0 = (1 : 0) and P_i = (c_i : 1) for i > 0, with c_1 = 0, c_2 = 1. We let 𝐟(𝐏) = (y_1, y_0, y_0 - y_1, …, y_0 - c_ℓ y_1) and define

ρ_π : M_0,ℓ+1× T_τ×ℙ^1 → X_τ,  ρ_π(𝐏, t, s) = t ·ρ_π,𝐟(𝐏)(s).

When ℓ=1, we abuse notation and set M_0,ℓ+1 to be a single point 𝐏, and 𝐟(𝐏)=(y_1,y_0).

We show below in Proposition <ref> that the family ρ_π induces a quasifinite map from M_0,ℓ+1× T_τ to the Hilbert scheme as long as ℓ>1, which we use for dimension counting.

If ℬ⊆Δ_ℓ(d) is the image of a Cayley structure π, we may also consider the Cayley structure ι:ℬ→Δ_ℓ(d), where ι is just the inclusion. The curve C_ℬ,𝐟:=C_ι,𝐟 is an isomorphic linear projection of C_π,𝐟. Hence, when considering properties of C_π,𝐟 such as degree or arithmetic genus, we may consider instead the corresponding properties of C_ℬ,𝐟. We will also denote the map ρ_ι,𝐟 by ρ_ℬ,𝐟.

§.§ From Curve to Cayley Structure

We show that all rational curves in X_𝒜 arise via the above construction.

Consider a rational curve C⊆ X_τ of degree d that intersects the dense torus orbit of X_τ⊆ X_𝒜.

* There exists ℓ∈ℕ, a Cayley structure π:τ→Δ_ℓ(d), a point 𝐏∈ M_0,ℓ+1, and t ∈ T_τ such that C=t· C_π,𝐟(𝐏).
* The Cayley structure π and 𝐏∈ M_0,ℓ+1 are uniquely determined up to permutation by an element of the symmetric group S_ℓ+1.
* If ℓ>1, there are only finitely many t∈ T_τ such that t · C = C.

One may prove this proposition using Cox's notion of a Δ-collection to describe maps from ℙ^1 to X_τ in terms of the Cox ring of a resolution of X_τ <cit.>. This is the approach used in <cit.> to describe genus zero log stable maps to a toric variety. Here, we will instead take a direct, more elementary approach.

Consider the normalization ℙ^1 → C of C. We consider the preimage { P_0, …, P_ℓ} of the finite, nonempty set C ∩∂ X_τ under this map. This defines ℓ; after ordering the elements of this preimage, we obtain 𝐏∈ M_0,ℓ+1. We choose coordinates on ℙ^1 so that P_0 = (1 : 0) ∈ℙ^1 and P_i = (c_i : 1) for i = 1, …, ℓ. We put 𝐟(𝐏) = (y_1, y_0, y_0-y_1, …, y_0 - c_ℓ y_1) as in Definition <ref>.

The normalization map ℙ^1 → X_τ has degree one onto its image and is thus given by a tuple of forms (F_u ∈𝕜[y_0, y_1]_d : u ∈τ). We have F_u ≢0 for each u since C intersects the dense torus orbit of X_τ. The factors of F_u are all from 𝐟(𝐏) up to scalar, so we define π : τ→Δ_ℓ(d) by

F_u = λ_u 𝐟(𝐏)^π(u)

for each u and for some tuple of constants λ = (λ_u ∈𝕜^* : u ∈τ). Our choice of the P_i implies that π is basepoint-free and concise (cf. Definition <ref>). To see that π is affine-linear, consider an affine relation ∑ a_u u = ∑ b_u u for some coefficients a_u, b_u ∈ℤ_≥0. Then X_τ has the defining equation ∏ x_u^a_u = ∏ x_u^b_u. Since C ⊆ X_τ, we may pull this back to ℙ^1 to obtain

∏ (λ_u 𝐟(𝐏)^π(u))^a_u = ∏ (λ_u 𝐟(𝐏)^π(u))^b_u.

Since the factors of 𝐟(𝐏) are all distinct, by uniqueness of factorization we have ∑ a_u π(u) = ∑ b_u π(u). This shows π is affine-linear, so π is a Cayley structure. The argument from the proof of Lemma <ref> then shows that there exists a unique t ∈ T_τ such that ℙ^1 → C is exactly the map t ·ρ_π,𝐟(𝐏). This shows the existence of ℓ, π, 𝐏, and t.

For any ℓ', π' of degree d, 𝐏', and t' satisfying C=t'· C_π',𝐟(𝐏'), the map t'·ρ_π',𝐟(𝐏'):ℙ^1 → C must also be the normalization of C. The uniqueness claims follow.

Consider now all t∈ T_τ such that t· C=C. If this set is infinite, it contains a one-parameter subgroup, and C is a torus translate of the orbit closure of this subgroup.
In particular, C is a torus translate of a toric curve, and is thus parametrized by monomials. It follows that ℓ=1.§.§ Stabilizers For a rational curve C ⊆ X_τ, any t ∈ T_τ such that t · C = C induces a permutation of the ℓ+1 points of the normalization of C lying over C ∩∂ X_τ. If ℓ > 1, this permutation determines t as an automorphism of the curve, hence identifies the stabilizer of C with a subgroup of S_ℓ+1 via its action on M_0, ℓ+1.In the setting of Proposition <ref>, when ℓ=1 the stabilizer in T_τ of the curve C consists exactly of the one-dimensional torus T_π (and C is a translate of the closure of T_π). We briefly examine the stabilizers when ℓ > 1. Note that the finite subgroups of PGL_2() are, uniquely up to conjugacy, cyclic, dihedral, or A_4, S_4 or A_5. The stabilizer of C is moreover abelian since it is also a subgroup of T_τ, so it is either cyclic or μ_2×μ_2. In any case, the existence of a nontrivial stabilizer highly constrains the Cayley structure, the choice of marked points and the cycle type of the resulting permutation σ.Let π : τ→Δ_ℓ(d) be a Cayley structure, with ℓ > 1. Let σ∈ S_ℓ+1 be a permutation. Then ρ_π admits fibers C_π, () with stabilizers containing σ if and only if the following holds: * π∘σ = π, and * the cycle type of σ is (1, 1, k, …, k) for some k. Without loss of generality assume σ fixes 0 and 1 and let ζ_k denote a k-th root of unity. The corresponding fibers are given by putting P_0 = (1:0), P_1 = (0:1), and the remaining marked points in disjoint orbits each of the form{(1:ζ_k^i c) | 0 ≤ i < k}for some c ∈^*.If ℓ > 1, C_π, has trivial stabilizer for general .Let ρ_π, () : ^1 → C_π,() be a rational curve using π. Let t ∈ T_τ be such that t · C_π, () = C_π, (). There is a unique ϕ : ^1 →^1 such that t ·ρ_π, () = ρ_π, ()∘ϕ as maps. As discussed above, ϕ induces a permutation σ of {0, …, ℓ} and likewise an automorphism of Δ_ℓ(d).Claim (<ref>) follows by examining factors of the forms F_u as in the proof ofLemma <ref>, using uniqueness of factorization.For claim (<ref>), we use the fact that an order-k automorphism of ^1 is conjugate to multiplication by ζ_k. In particular, ϕ has two fixed points, which must lie over C ∩∂ X_τ since T_τ acts freely on the interior of X_τ. That is, the fixed points of ϕ are among the marked points. All other orbits of ϕ, hence of σ, are of size k and of the stated form. This shows (<ref>) and the last claim.Conversely, suppose (<ref>)-(<ref>) hold and the marked points are chosen as described; we check that σ arises from a torus element. We denote the marked points as P_0 = (1 : 0), P_1 = (0: 1) and P_ij = (ζ_k^i c_j : 1) for some choices of c_j and for 0 ≤ i < k. Note that∏_0 ≤ i < k (y_0 - ζ_k^i c_j y_1) = y_0^k - c_j^k y_1^k.By abuse of notation, we write π(u)_*j for the common value of π(u)_ij for all i. ThenF_u(y_0, y_1) = y_1^π(u)_0 y_0^π(u)_1∏_i,j (y_0 - ζ_k^i c_j y_1)^π(u)_ij = y_1^π(u)_0 y_0^π(u)_1∏_j (y_0^k - c_j^k y_1^k)^π(u)_*j.Applying the substitution y_0 ↦ζ_k y_0, we findF_u(ζ_k y_0, y_1) = ζ_k^π(u)_1 F_u(y_0, y_1).In particular, setting t_u := ζ_k^π(u)_1, the tuple (t_u | u ∈τ) satisfies the defining relations of X_τ, since π does, hence corresponds to an element t ∈ T_τ. We have t ·ρ_π,= ρ_π, ∘ϕ, as required. A stabilizer isomorphic to μ_2×μ_2 can be represented by the automorphisms z ↦ z, -z, 1/z, -1/z of ^1. For such C_π,, the non-free orbits {0, ∞}, {1, -1} and {i, -i} must all be among the marked points, since T_τ acts freely on the interior of X_τ. 
Any additional marked points must come in disjoint 4-tuples of the form {c, -c, 1/c, -1/c} for various c. The Cayley structure itself must also satisfy strong constraints: for indices j,k corresponding to any of the pairs {0, ∞}, {1, -1} and {i, -i} we must have π(u)_j = π(u)_k for all u. The remaining marked points come in 4-tuples j,k,l,m such that π(u)_j = π(u)_k = π(u)_l=π(u)_m for all u.

Later we will need to distinguish between T_τ and the torus acting on C_π,𝐟 when the stabilizer is nontrivial.

Let π : τ→Δ_ℓ(d) be a Cayley structure and 𝐟 a choice of forms. Let Stab_π,𝐟 ⊂ T_τ be the corresponding stabilizer and T_π,𝐟:= T_τ / Stab_π,𝐟 the quotient torus. We denote the cocharacter lattice of T_π,𝐟 by N_π,𝐟. We have a natural map

N_τ→ N_π,𝐟.

When ℓ=1, this map is surjective with kernel Hom(𝔾_m, T_π) ≅ℤ, and N_π,𝐟 may be identified with the quotient of N_τ by the image of π^*. When ℓ > 1, this map is an inclusion of lattices with index k=|Stab_π,𝐟|. The map is dual to an inclusion of lattices M_π,𝐟→ M_τ=⟨τ⟩. When Stab_π,𝐟 is cyclic, M_π,𝐟 is the kernel of the map

M_τ →ℤ/kℤ,  u-v ↦π(u)_1-π(v)_1,

where we have ordered coordinates as in the statement of Proposition <ref>. In the case of a μ_2×μ_2 stabilizer, we obtain M_π,𝐟 by intersecting the kernels of two such maps.

§ NODES AND CUSPS

§.§ Primitive and Smooth Cayley Structures

In this section, we seek to clarify when a general rational curve determined by a degree d Cayley structure has degree d and is smooth.

A Cayley structure π:τ→Δ_ℓ(d) is imprimitive if either

* dim π(τ)=1 and ℓ>1; or
* ℓ=1 and ⟨π(τ) ⟩⊂ m·ℤ^2 for some m>1.

Note that dim π(τ) = 1 in either case. We say that π is primitive if it is not imprimitive.

Consider an imprimitive Cayley structure π of degree d with image ℬ. Let d' be the lattice length of the convex hull of ℬ with respect to the one-dimensional lattice ⟨ℬ⟩. Then up to transposition of e_0,e_1, there is a unique affine-linear inclusion ℬ→Δ_1(d'). The reduction of π is the composition

red(π) : τ→ℬ→Δ_1(d').

It is straightforward to see that this is a primitive Cayley structure, and is well-defined up to equivalence. The multiplicity of the imprimitive Cayley structure π is the ratio d/d'. If π is already primitive, we define it to be its own reduction and to have multiplicity one.

Let π be a degree d Cayley structure. Then deg C_π,𝐟 = d for general choice of 𝐟 if and only if π is primitive. If π is imprimitive, C_π,𝐟=C_red(π) and deg C_π,𝐟 is the degree of red(π).

We complete the proof of this theorem in <ref>.

Continuing Example <ref>, recall that a concision of the restriction of π” to τ' was the length ℓ=3 degree two Cayley structure sending (0,-1,0) to e_2+e_3 and (0,0,1) to e_0+e_1. Since this Cayley structure has one-dimensional image but ℓ>1, we see that it is imprimitive. A reduction of this Cayley structure is the map taking (0,-1,0) to e_0 and (0,0,1) to e_1, which has degree one. The original imprimitive Cayley structure had multiplicity 2. By Theorem <ref>, we see that the curves coming from the restriction of π” to τ' are actually just lines, with the parametrizing map having degree 2.

We now let π:τ→Δ_ℓ(d) be a primitive Cayley structure. The Cayley structure π is cuspidal if there exists i∈{0,…,ℓ} and v ∈π(τ) such that v_i = 0 and v'_i > 1 for all v'≠ v. For a tuple of forms 𝐟 with roots P_0,…,P_ℓ, this condition says 𝐟^v(P_i)≠0, and that 𝐟^v' vanishes to order at least two at P_i for all v'≠ v.

The Cayley structure π is nodal if either:

* there exists 0≤ i< j ≤ℓ and v ∈π(τ) such that v_i = v_j = 0, and v'_i, v'_j > 0 for all v'≠ v.
* dim π(τ)=2, e_i-e_j∉⟨π(τ)⟩ for all i≠ j, and, up to permutation of the coordinates, ⟨π(τ)⟩ is not one of the exceptional lattices listed in Table <ref>.

For a tuple of forms 𝐟, the condition in the first bullet point says 𝐟^v(P_i), 𝐟^v(P_j)≠0, and that 𝐟^v'(P_i) = 𝐟^v'(P_j) = 0 for all v'≠ v.

The Cayley structure π is smooth if it is neither nodal nor cuspidal.

See Figure <ref> for examples of the images of imprimitive, cuspidal, and nodal Cayley structures.

Any singular point of a curve with a single branch we call cuspidal. We call singular points with multiple branches nodal. The definition of a cuspidal Cayley structure says C_π,𝐟 has a cusp at the image of P_i (for some i). Likewise, the first case of the definition of a nodal Cayley structure says C_π,𝐟 has a node at the common image of P_i and P_j for some i≠ j. We will call these marked cusps and nodes. Only in the second type of nodal Cayley structure is the singularity unmarked.

Let π be a primitive Cayley structure. For general choice of 𝐟,

* the curve C_π,𝐟 has a cuspidal singularity if and only if π is cuspidal;
* the curve C_π,𝐟 has a nodal singularity if and only if π is nodal;
* the curve C_π,𝐟 is smooth if and only if π is smooth.

We will complete the proof of this theorem in <ref> and <ref>.

We will say a subset ℬ⊆Δ_ℓ(d) is primitive (resp. cuspidal, nodal, smooth) if the inclusion ℬ↪Δ_ℓ(d) is. For a general Cayley structure π, each of these properties depends only on the image ℬ = π(τ), so π is primitive (resp. cuspidal, nodal, smooth) if and only if ℬ is. Indeed, as noted in <ref>, C_ℬ,𝐟 is in general an isomorphic linear projection of C_π,𝐟.

§.§ Setup and Projections

We describe our approach to proving Theorems <ref> and <ref>. Let ℬ⊂Δ_ℓ(d) be the image of a Cayley structure π. We may simply assume π is the inclusion ℬ↪Δ_ℓ(d), following Remark <ref>. We denote by C_d the rational normal curve in ℙ^d. The map ρ_ℬ,𝐟 arises as the composition of ℙ^1→ C_d⊆ℙ^d with a linear projection ℙ^d⇢ℙ^#ℬ-1. The center L of this projection is the projectivization of the orthogonal complement of ⟨𝐟^v : v∈ℬ⟩⊆𝕜[y_0,y_1]_d. Since the projection is basepoint-free, L does not intersect C_d. Since char 𝕜=0, ρ_ℬ,𝐟 is generically unramified, so deg C_ℬ,𝐟=d if and only if ρ_ℬ,𝐟 is birational. This is equivalent to L intersecting only finitely many secant or tangent lines of C_d. When this holds, C_ℬ,𝐟 has a nodal singularity if and only if L intersects a secant line of C_d. Likewise, C_ℬ,𝐟 has a cuspidal singularity if and only if L intersects a tangent line of C_d. Thus, in the following we will be analyzing the intersection behaviour of L with secants and tangents of C_d for generic choice of 𝐟.

First we will deal with the case of imprimitive Cayley structures.

If π is imprimitive, then for any 𝐟 with distinct roots, C_π,𝐟=C_red(π). Moreover, deg C_ℬ,𝐟<d.

In both cases in the definition of imprimitive, dim ℬ=1, so the curve C_ℬ,𝐟 is just the toric variety X_ℬ. Let d' be the length of ℬ with respect to the lattice ⟨ℬ⟩, and let ℬ' be the image of ℬ in Δ_1(d') as in the definition of the reduction. The variety X_ℬ is projectively equivalent to X_ℬ', which has degree d'. It follows from the construction that C_π,𝐟=C_red(π). If ℓ=1 and ⟨ℬ⟩⊂ m·ℤ^2, then clearly d' ≤ d/m. Suppose instead that ℓ>1. Let v,w∈ℤ^ℓ+1 be the endpoints of the convex hull of ℬ. Since π is a Cayley structure and ℓ>1, there must be some index i such that either 0<v_i<d or 0<w_i<d. Without loss of generality, assume that 0<v_0<d. Since π is a Cayley structure and w is the other endpoint of ℬ, w must have the form (0, w_1, …, w_ℓ).
Then v+⟨⟩ intersects Δ_ℓ(d) in at most v_0+1 lattice points, and we conclude that d'≤ v_0<d. §.§ CuspsIn this section, we assume that π is primitive. Letbe general, with corresponding roots P_0,…,P_ℓ∈^1. We identify these points with their images on the rational normal curve C_d.The center of projection L meets the tangent lineT_P_i C_dfor some i = 0, …, ℓ if and only ifis cuspidal. After changing coordinates on ^1 and rescaling the f_j, we may assume without loss of generality that P_i=(1:0), f_i=y_1,and f_j=y_0-a_jy_1 for all j≠ i. We give ^d the dual coordinates to {y_0^d, y_0^d-1y_1, …, y_1^d}, so the tangent line T_P_i C_d is the span of (1:0:…:0) and (0:1:0:…:0). The center of projection L meets T_P_i C_d at the point (λ:1:0 :…:0) for some λ∈^* if and only if for every v∈, ^v=y_1^v_i∏_j≠ i (y_0-a_jy_1)^v_jis orthogonal to (λ,1,0,…,0). This is equivalent to requiring that for every v∈, v_i≠ 1, and if v_i=0, we have ∑_j a_jv_j=λ.If two distinct elements v, v' have v_i = 0, then for L to meet T_P_iC_d we must have∑_j a_j(v_j-v_j')=0.This is impossible ifis general, so if L meets T_P_iC_d, we see (since π is a Cayley structure) that there is a unique v such that v_i = 0. Thusis cuspidal.Conversely, ifis cuspidal, then for some index i and for every v∈, v_i≠ 1, and there is a unique v such that v_i = 0. Sinceis general, for that v we have ∑_j a_jv_j≠ 0 and so L intersects T_P_i at a point other than P_i. The center of projection L does not intersect any T_Q C_d for Q∉{P_0,…,P_ℓ}. If ℓ=1, the map ρ_, is a monomial map, and the only points of intersection with tangents can be P_0 and P_1. We now assume ℓ>1. Since we assume π is primitive,⟨⟩ has rank at least two. Consider the moduli space M_0,ℓ+2 of ℓ+2 marked points P_0,…,P_ℓ,Q on ^1. Let V be the locus inside M_0,ℓ+2 of those P_0,…,P_ℓ,Q such that L intersects the tangent line of C_d at Q. Note that the center of projection L depends on the points P_0,…,P_ℓ. Consider the map ϕ:_0,ℓ+2→_0,ℓ+1obtained by forgetting Q. We wish to show ϕ(V) _0,ℓ+1, that is, for general , the center of projection L does not intersect any T_Q C_d for Q∉{P_0,…,P_ℓ}. It is enough to showϕ(V)<ℓ-2=_0,ℓ+1.Fix coordinates on ^1 so that Q=(0:1) and P_i=(1:z_i) with z_0=0 and z_1=1. Using a computation similar to in the proof of Lemma <ref>, it is straightforward to show that V is cut out by the equationsz· v=0 v∈⟨⟩.In particular, the codimension of V is the rank of ⟨⟩, which is at least two. Hence V≤ℓ+2-3-2=ℓ-3, so ϕ(V)<ℓ-2 as required.§.§ NodesWe continue under the assumption that π is primitive andis general.The center of projection L meets a secant line through P_i and P_j if and only ifthere exists v ∈π(τ) such that v_i = v_j = 0, and v'_i, v'_j > 0 for all v'v.Moreover, if L meets a secant line passing through P_i, then that secant line must also pass through P_j for some j≠ i. If ℓ=1, then the map ρ_, is a monomial map, and L cannot intersect the secant line through P_0 and P_1. Likewise, the condition in the statement of the lemma cannot be fulfilled. Thus, we may assume going forward that ℓ>1.We first show that if L meets a secant line passing through P_i and some point Q∈^1, then Q = P_j for some j. Fix coordinates on ^1 so that Q=(0:1) and P_i=(1:0). Then the secant line through P_i and Q is the span of (1:0:⋯:0) and (0:⋯:0:1). Let v∈ be any element with v_i>0. Then y_1 divides ^v. The secant line must contain a non-zero vector orthogonal to ^v, so the coefficients of y_0^d and y_1^d in ^v are proportional, hence both 0. 
Then y_0 divides ^v as well, and so Q = P_j for some j≠ i. We also see that, for any v∈, if v_i>0, then v_j>0 (and vice versa).We now show the first claim of the lemma. Suppose that for all v ∈, v_i > 0 if and only if v_j > 0. Let _ij = {v ∈ : v_i = v_j = 0}. As above, we fix coordinates on ^1 so that P_i=(1:0), P_j=(0:1), and write P_k=(z_k:1) for k≠ i,j (where we fix z_k=1 for some k≠ i,j).Consider the moduli space M_0,ℓ+1 of ℓ+1 marked points P_0,…,P_ℓ on ^1. Let V be the locus inside M_0,ℓ+1 of those P_0,…,P_ℓ such that L intersects the secant line through P_i and P_j. As above, we determine when VM_0, ℓ+1 by dimension counting. It is straightforward to see that V is cut out by the equationsz^v=1∀ v∈⟨_ij⟩.This locus has codimension equal to the rank of ⟨_ij⟩.We conclude that for general , L intersects the secant through P_i and P_j if and only if |_ij| = 1. We next analyze the condition that L meets an unmarked secant line, i.e. a secant line through points Q_0,Q_1 both distinct from P_0,…,P_ℓ. We work over the moduli space M_0,ℓ+3. Let V ⊆ M_0,ℓ+3 be the locus of ℓ + 3 marked points P_0,…,P_ℓ,Q_0,Q_1 ∈ℙ^1 such that L intersects the secant line through Q_0 and Q_1. We choose coordinates on ^1 so that Q_0=(1:0) and Q_1=(1:0). Write P_i=(z_i:1) for i=0,…,ℓ. It is straightforward to check that V is cut out by z^v=1 v∈⟨⟩ . If ≥ 3, =1, or e_i - e_j ∈⟨⟩ for some i≠ j, then L does not intersect any unmarked secant lines. If =2, then the center of projection L only intersects finitely many unmarked secant lines. If =1, then since π is primitive, we have ℓ=1 and the map ρ_, is a monomial map. The claim is easily verified in this case. Likewise, if e_i - e_j ∈⟨⟩ for some i≠ j, the locus V is empty in _0,ℓ+3, so L cannot intersect an unmarked secant. We now assume ≥ 2 (and in particular ℓ≥ 2). Consider the map ϕ : M_0,ℓ+3→ M_0,ℓ+1 forgetting Q_0 and Q_1. We see that ϕ(V) ≤ V=ℓ-. If ≥ 3, we see that ϕ(V) < ℓ - 2 = _0, ℓ+1. If =2 and the dimension of the image is ℓ-2, then a generic fiber is zero-dimensional, that is, there are at most finitely many unmarked secant lines. If the dimension of the image is any smaller, the generic fiber is empty. We have seen in Lemma <ref> that if π is imprimitive, C_π,<d. Conversely, by Lemma <ref>, the center of projection L intersects at most finitely many unmarked secant lines, and hence at most finitely many secant lines. It follows that the map ρ_π, is generically injective, hence C_π,=d. In the imprimitive case, C_π,=C_(π) by construction. The Cayley structure (π) is primitive, and so the claim regarding its degree follows from the first statement of the theorem. The claim regarding cuspidal singularities follows from Lemmas <ref> and <ref>. The claim regarding C_π, being smooth will then follow from the claim on nodal singularities. If ≥ 3, = 1, or e_i - e_j ∈⟨⟩ for some i≠ j, we see from Lemmas <ref> and <ref> that C_π, has a nodal singularity if and only if π is nodal. It remains to deal with the case where =2 and e_i - e_j ∉⟨⟩ for all i≠ j. We address this case in the following section with Theorem <ref>. §.§ Nodes in Rank Two CaseWe analyze whether L intersects unmarked secant lines in the exceptional case: we assume, throughout this subsection, =2 and e_i - e_j ∉⟨⟩ for all i≠ j. The center of projection L intersects unmarked secant lines if and only if the image of V in _0,ℓ+1 has dimension ℓ-2. Consider the solution set of (<ref>) as a codimension-two subset Y ⊆^ℓ+1. 
The condition that the image of V in _0,ℓ+1 has dimension ℓ-2 is equivalent to the condition that the orbit of Y under the natural rational GL(2)-action is dense in (^*)^ℓ+1. Here, GL(2) acts by[ a_11 a_12; a_21 a_22 ]· (z_i)=(a_11z_i+a_12/a_21z_i+a_22).The orbit of Y under the natural rational GL(2)-action fails to be dense in (^*)^ℓ+1 if and only if for all z∈ Y, the linear space generated by z and z^-1 is linearly dependent with ⟨⟩ ^⊥. We consider the image of the differential of the rational mapGL(2)× Y (^*)^ℓ+1at the point (e,z) where e is the identity element of GL(2).Differentiating with respect to the GL(2) directions, we obtain the span of z^2:=(z_i^2)_i=0,…,ℓ, z:=(z_i)_i=0,…,ℓand z^0:=(1)_i=0,…,ℓ. On the other hand, the variety Y∩ (^*)^ℓ+1 is the quasitorus with character group ^ℓ+1/⟨⟩, and hence cocharacter lattice ⟨⟩^⊥∩^ℓ+1. It follows that the tangent space of Y at z is the span of the vectors z ⟨⟩^⊥. Since ⟨⟩ ^⊥ contains (1)_i=0, …, ℓ, this tangent space contains z.Putting this together, we obtain that the image of the differential at (e,z) is the span of z^2, z^0 and z⟨⟩^⊥. The orbit of Y fails to be dense in (^*)^ℓ+1 if and only if for general (or equivalently all) z∈ Y, this linear space has dimension less that ℓ+1=⟨⟩ ^⊥+2, in other words, the linear space generated by z^2 and z^0 is linearly dependent with z⟨⟩^⊥. Dividing each coordinate by z_i, this is equivalent to requiring thatthe linear space generated by z and z^-1 is linearly dependent with ⟨⟩^⊥. We will now characterize lattices ⟨⟩ for which the condition of Lemma <ref> is fulfilled. We note the following: if I is an ideal generated by binomials in a polynomial ring, and f ∈ I is a polynomial with support containing a monomial z^v, then there is a monomial z^v' in the support of f, with v'v, such that z^v - z^v'∈ I. This can be shown, for example, by reducing f using a Gröbner basis for I consisting of binomials. Suppose that for all z∈ Y, z and z^-1 is linearly dependent with ⟨⟩ ^⊥. Let 0≤α ,β≤ℓ with α≠β and suppose further that the projection of ⟨⟩ to the αth and βth coordinates has rank 2. Then ⟨⟩ contains an element of the form e_α-e_β+e_γ-e_δ for someγ≠δ with (α,β)≠ (δ,γ) . Let ⟨⟩_ be the -span of ⟨⟩, and let w_2,…,w_ℓ be a basis for ⟨⟩_^⊥. Let W be the matrix with rows z, z^-1, w_2,…,w_ℓ, where z is treated as a vector of indeterminates. We consider the Laurent polynomial (W). Since for all z∈ (^*)^ℓ+1 satisfying (<ref>), z and z^-1 are linearly dependent with ⟨⟩ ^⊥, and the ideal I_⟨⟩⊆[z_i^± 1] corresponding to the equations (<ref>) is radical, we have (W)∈ I_⟨⟩.We may write (W)=∑_α< β±Δ_αβ (z_α z_β^-1-z_β z_α^-1)where Δ_αβ is the determinant of the matrix obtained from the matrix W by deleting the first two rows and the αth and βth columns. We note that Δ_αβ0 if and only if the projection of ⟨⟩ to the αth and βth coordinates has rank two.Suppose that Δ_αβ≠ 0. Then the monomial z_α z_β^-1 appears with non-zero coefficient in (W). As discussed above, it follows thatthere must also exist γ≠δ with (α,β)≠ (δ,γ) such that Δ_γδ≠ 0 and z_α z_β^-1-z_δ z_γ^-1∈ I_⟨⟩, or equivalently,e_α-e_β+e_γ-e_δ∈⟨⟩. The element given above is sufficiently close to one of the form e_i - e_j ∉⟨⟩ that we may directly characterize the possible lattices. Note that if p (e_i - e_j) ∈⟨⟩ where p is a prime integer, then p divides the index of ⟨⟩ in ⟨⟩_∩ℤ^ℓ+1. 
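To spell out this last observation: e_i-e_j lies in ⟨ℬ⟩_ℝ∩ℤ^ℓ+1 (since p(e_i-e_j)∈⟨ℬ⟩) but not in ⟨ℬ⟩ by our standing assumption, so its class in the finite group (⟨ℬ⟩_ℝ∩ℤ^ℓ+1)/⟨ℬ⟩ is a nonzero element whose order divides the prime p; by Lagrange's theorem, p then divides the index. For example, if ⟨ℬ⟩ is generated by 2e_0-2e_1 and 2e_1-2e_2, then ⟨ℬ⟩_ℝ∩ℤ^ℓ+1 is generated by e_0-e_1 and e_1-e_2, and the index is 4.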
Suppose that L does not intersect any unmarked secants, and e_i-e_j∉⟨⟩ for all i≠ j.The lattice ⟨⟩ is generated by two elementse_α-e_β+e_γ-e_δ∈⟨⟩of the form promised by Lemma <ref> whose supports intersect in at least one coordinate. Since L does not intersect any unmarked secants, and there must exist some coordinates α,β such the projection of ⟨⟩ to these coordinates has rank two, it follows from Lemmas <ref> and <ref> that there exists some element in ⟨⟩ of the forme_α-e_β+e_γ-e_δ.Since e_i-e_j∉⟨⟩, we cannot have α=δ or β=γ. This means that we must be in one of the following cases: * 2e_α-2e_β∈⟨⟩ (two-term relation); * 2e_α-e_β-e_δ∈⟨⟩ (α,β,δ distinct) or 2e_β-e_α-e_γ∈⟨⟩ (α,β,γ distinct) (three-term relation); * e_α-e_β+e_γ-e_δ∈⟨⟩ (α,β,γ,δ distinct) (four-term relation). We analyze of each these cases in turn. Suppose first that ⟨⟩ has a two-term relation, without loss of generality 2e_0-2e_1. Since ℓ≥ 2, there must be an element v of ⟨⟩ with v_2≠ 0, so we can apply Lemma <ref> to α=0,β=2 to obtain another two, three or four-term relation w whose support overlaps with 2e_0-2e_1.Let Γ be the lattice generated by 2e_0-2e_1 and w. Clearly ⟨⟩_=Γ_, but we claim that in fact ⟨⟩=Γ. Indeed, if w_i=1 for some i≥ 2, the index of Γ in Γ_∩^ℓ+1 is 2. However, e_0-e_1∈⟨⟩_∖⟨⟩, so ⟨⟩ also has index two in Γ_∩^ℓ+1 and we are done. Only two possibilities remain: either w=2e_1-2e_2 or w=-e_0-e_1+2e_2. In both cases Γ_∩^ℓ+1 = ⟨ e_0-e_1, e_1-e_2⟩, in which Γ and ⟨⟩ must both then have index four, so they coincide.We next suppose that ⟨⟩ has a three-term relation, without loss of generality 2e_0-e_1-e_2, but no two-term relations. If ℓ=2, then since ∑_i v_i=0 for any v∈⟨⟩, it follows that the projection of ⟨⟩ to the coordinates 1 and 2 must have full rank, so by Lemma <ref> we may assume without loss of generality that -e_0-e_1+2e_2∈⟨⟩. The sublattice generated by2e_0-e_1-e_2 and-e_0-e_1+2e_2 has index 3 in ⟨⟩_∩^3 and contains 3e_0 - 3e_2. Since e_0-e_2∉⟨⟩ we conclude that it must coincide with ⟨⟩. If instead ℓ≥ 3, then by Lemma <ref> we must have a three or four-term relation w whose support includes 2 and 3. If w=-e_1-e_2+2e_3, the sublattice generated by 2e_0-e_1-e_2 and w contains 2e_0 - 2e_3, contradicting our assumption on ⟨⟩. For all other w,the sublattice generated by 2e_0-e_1-e_2 and w is saturated,so it must coincide with ⟨⟩.Finally, suppose that ⟨⟩ only has four-term relations. Without loss of generality, e_0-e_1+e_2-e_3∈⟨⟩. If ℓ≥ 4, Lemma <ref> guarantees a four-term relation whose support includes 0 and 4; the lattice this generates is saturated, hence must coincide with ⟨⟩. If instead ℓ=3, it follows from Lemma <ref> that there must be another linearly independent four-term relation, without loss of generality e_0+e_1-e_2-e_3. But then we also obtain the two-term relation 2e_0-2e_3, contradicting our assumption. Suppose that L does not intersect any unmarked secants, and e_i-e_j∉⟨⟩ for all i≠ j. Then ⟨⟩=⟨'⟩ for ' the image of a Cayley structure of degree at most three and with #'=3. By Lemma <ref>, ⟨⟩ is generated by elements v,w of the form stated in Lemma <ref> whose support intersects in some index i. After possibly scaling by -1, we assume that v_i,w_i<0. Define u=(u_j)∈^ℓ+1 as u_j=max{0,-v_j,-w_j}.Since the sum of the negative entries of v and w are both -2, it follows that d:=∑_j u_j ≤ 3. We set ':={ u, u+v,u+w}⊆Δ_ℓ(d).Then ' is the image of a Cayley structure of degree d. Indeed, sinceis the image of a Cayley structure, for every j either u_j, (u+v)_j, or (u+w)_j≠ 0. 
Likewise, by definition of u, for each j either u_j=0, u_j+v_j=0, or u_j+w_j=0.

Suppose that #ℬ=3 and ℬ is the image of a Cayley structure of degree at most three. Assume that e_i-e_j∉⟨ℬ⟩ for any i≠ j. Then L does not intersect any marked secants, or any tangents.

Let ℬ={u,v,w}. By Lemma <ref>, L does not intersect any unmarked tangents. Suppose L intersects the tangent T_P_k C_d. Then without loss of generality, by Lemma <ref> we have u_k=0 and v_k,w_k≥ 2. But since ∑ v_j=∑ w_j ≤ 3, this implies v-w=e_α-e_β∈⟨ℬ⟩ for some α,β, a contradiction. Suppose instead that L intersects the marked secant through P_i and P_j. Then without loss of generality, by Lemma <ref> we have u_i=u_j=0 and v_i,v_j,w_i,w_j>0. Again since ∑ v_j=∑ w_j ≤ 3, this implies v-w=e_α-e_β∈⟨ℬ⟩ for some α,β, a contradiction.

Suppose that rank ⟨ℬ⟩=2, #ℬ=3, and ℬ is the image of a Cayley structure of degree 2. Then up to permutation of the coordinates, ⟨ℬ⟩ is one of the lattices from Table <ref>.

Since the degree is two and #ℬ=3, we obtain ℓ≤ 5. The claim follows from a straightforward case-by-case analysis.

Suppose that rank ⟨ℬ⟩=2 and e_i-e_j∉⟨ℬ⟩ for all i≠ j. Then L does not intersect an unmarked secant if and only if ⟨ℬ⟩ is one of the lattices listed in Table <ref>.

The property of L intersecting unmarked secant lines depends only on ⟨ℬ⟩, not on ℬ. If ⟨ℬ⟩ is one of the lattices from Lemma <ref> listed in Table <ref>, then we may replace ℬ by the image of a Cayley structure with d=2 and #ℬ=3. Then C_ℬ,𝐟 is a smooth (plane) conic, so L cannot intersect any secant lines at all. Suppose we are not in one of these cases. Then by Lemma <ref>, we may replace ℬ by the image of a Cayley structure with d=3 and #ℬ=3. Then C_ℬ,𝐟 is a degree three rational plane curve, hence singular. By Lemma <ref>, L does not intersect any tangents or marked secants. Hence, in order for C_ℬ,𝐟 to be singular, L must intersect an unmarked secant.

§ STRATIFICATION OF THE HILBERT SCHEME

§.§ Map to the Hilbert Scheme

As noted in the introduction, given a polynomial P(m)∈ℚ[m] and a projective variety X⊂ℙ^n, we let Hilb_P(m)(X) denote the Hilbert scheme parametrizing closed subschemes of X with Hilbert polynomial P(m). In particular, Hilb_d· m +1(X_𝒜) is the fine moduli space parametrizing closed one-dimensional subschemes of the toric variety X_𝒜 with the same Hilbert polynomial as a smooth rational curve of degree d.

Fix a smooth degree d Cayley structure π. By Theorems <ref> and <ref>, for a general point 𝐏∈ M_0,ℓ+1 and any t∈ T_τ, the rational curve t· C_π,𝐟(𝐏) has degree d and is smooth, hence corresponds to a point [t· C_π,𝐟(𝐏)]∈ Hilb_d· m+1(X_𝒜). We thus obtain a rational map, when ℓ > 1,

M_0,ℓ+1× T_τ ⇢ Hilb_dm+1.

When ℓ = 1, we instead have T_τ/T_π ⇢ Hilb_dm+1. Let Z_π^∘ denote the image of this map, and Z_π its closure in Hilb_dm+1(X_𝒜). By Proposition <ref>, every degree d smooth rational curve in X_𝒜 corresponds to a point in some Z_π^∘. By Remark <ref> and Proposition <ref>, the above rational maps have finite fibers. In particular, dim Z_π = ℓ-2 + dim τ.

§.§ Partial Order on Cayley Structures

We now define a partial order on the set of all degree d smooth Cayley structures defined on some face of 𝒜. Given a map

ϕ:{0,…,ℓ}→{0,…,ℓ'},

we obtain a linear map ℤ^ℓ+1→ℤ^ℓ'+1 sending e_i to e_ϕ(i). We also denote this linear map by ϕ. Note that ϕ(Δ_ℓ(d)) ⊆Δ_ℓ'(d).

Let τ,τ' be faces of 𝒜. Consider degree d Cayley structures π:τ→Δ_ℓ(d) and π':τ'→Δ_ℓ'(d). We say that π'≤π if τ' is a face of τ and there exists a map ϕ:{0,…,ℓ}→{0,…,ℓ'} such that π'=ϕ∘π|_τ'. Note that ϕ is necessarily surjective, or else π'(τ') would lie in a proper face of Δ_ℓ'(d), contradicting that π' is concise.
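As an illustration of this definition, take ℓ=2, ℓ'=1, and ϕ:{0,1,2}→{0,1} with ϕ(0)=ϕ(1)=0 and ϕ(2)=1. The induced linear map ℤ^3→ℤ^2 sends (v_0,v_1,v_2) to (v_0+v_1,v_2) and maps Δ_2(d) onto Δ_1(d). Thus, for Cayley structures of lengths 2 and 1, we have π'≤π precisely when π' is obtained from π|_τ' by merging two of the three coordinates in this fashion. In the curve families of <ref>, this corresponds to letting the marked points indexed by the merged coordinates collide (cf. the proof of Theorem <ref> below).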
Clearly π and π' are equivalent Cayley structures if and only if π≤π'≤π. Let π and π' be smooth Cayley structures on faces of . Then π'≤π if and only if Z_π'⊆ Z_π. We will prove this theorem in the following subsection. The map π↦ Z_π induces a bijection between equivalence classes of maximal smooth primitive degree d Cayley structures and irreducible components of _dm+1(X_) whose general element is a smooth rational curve. Consider any point z of _dm+1(X_) corresponding to a smooth curve C. Then C is rational of degree d. By Proposition <ref>, C is a torus translate of some curve C_π,(), where π is a Cayley structure. It follows from Theorems <ref> and <ref> that π is smooth (and thus primitive). Thus, the point z is contained in some Z_π. The irreducible components of _dm+1(X_) thus correspond to the maximal elements of {Z_π}, as π ranges over smooth degree d Cayley structures. But by Theorem <ref>, these maximal elements are given by exactly those Z_π where π is maximal among all smooth degree d Cayley structures. Since Z_π=Z_π' if and only if π and π' are equivalent, we obtain the desired bijection.Let = Δ_ℓ(1), the unimodular simplex. Then up to equivalencehas a unique maximal Cayley structure of degree d, namely π : →Δ_(ℓ+1)d-1(d) given bye_j ↦ e_jd + e_jd+1 + ⋯ + e_jd+d-1 j ∈{0, …, ℓ}. We continue with the setfrom Example <ref> and pictured in Figure <ref>. As in Example <ref>, consider the face τ={(1,0,0),(0,-1,0),(0,0,1)}. We may consider the length 1 degree 2 Cayley structure π”' on τ sending (1,0,0) to e_0+e_1, (0,-1,0) to 2e_1, and (0,0,1) to 2e_0. This is pictured in Figure <ref>. The Cayley structure π”' is not maximal: letting π' and π” be as in Example <ref>, we have π”' ≤π' and π”'≤π”. Indeed, we obtain π”' from π' by restricting it fromto the face τ. On the other hand, we obtain π”' from π” by composing π” with the map sending e_0,e_1,e_2 to e_0 ande_3,e_4,e_5 to e_1. By Theorem <ref>, we see that the conics corresponding to the Cayley structure π”' may be deformed into two different kinds of conics: those corresponding to π' (certain conics intersecting the open orbit of X_ that meet the boundary in exactly two points) and those corresponding to π” (conics contained in a boundary stratum of X_ isomorphic to ^2 that meet the boundary of this ^2 in six points). It is straightforward to determine all the maximal Cayley structures (up to equivalence) onand its faces. Because of the geometry of , any Cayley structure defined on all ofmust have length one; the only possibilities are exactly the nine described in Example <ref>. By Corollary <ref>, this gives us nine components of the Hilbert scheme of conics. Each of these components has dimension ℓ-2+=2. On the other hand, any facet ofis a unimodular 2-simplex, which has a unique maximal Cayley structure (cf. Example <ref>). The resulting 12 components of the Hilbert scheme of conics all have dimension ℓ-2+τ=5-2+2=5. In fact, each of these components is parametrizing conics in one of the ^2-boundary strata, so it is just a copy of ^5.The set of Cayley structures on τ of degree d is always finite, since the length ℓ is bounded, e.g. ℓ < |τ| · d. As such it is possible to find the maximal Cayley structures onby enumerating all possible Cayley structures. It would be interesting to have a direct characterization of maximality: Is there a combinatorial criterion for maximality of a Cayley structure π : τ→Δ_ℓ(d) on ?§.§ Smooth Degenerations We first show that if π'≤π, then Z_π'⊆ Z_π. Let ϕ be as in Definition <ref>. 
Consider some ' such that C_π',' is smooth of degree d. Let v∈(M,) be such that the face of τ on which v is minimal is exactly τ'. We may view v as a one-parameter subgroup of T=[M], giving us a map v:^*→ T. Let g_0,…,g_ℓ∈[y_0,y_1] be general linear forms. For t∈^1, set (t):=(f'_ϕ(0)+t· g_0,…,f'_ϕ(ℓ)+t· g_ℓ). We thus obtain a morphism ^1 → Z_π t ↦ [v(t)· C_π,(t)]. At the point t=0, the corresponding curve is exactly C_π','. Hence, [C_π','] is in the closure of Z_π. Since Z_π is T-invariant, it follows that all of Z_π' is contained in Z_π. We now show instead that Z_π'⊆ Z_π implies that π'≤π. We first note that if Z_π' is in the closure of Z_π, clearly τ' must be contained in τ, and hence a face of it. Now, take a curve Y in _dm+1 passing through a general point of Z_π and a general point of Z_π', say at η∈ Y. After pulling back along an étal map, we may assume that the family over Y is a trivial family of rational curves over an irreducible affine curve Y= R, see e.g. <cit.>. Thus, we have a map^1× Y→ X_τ× Ygiven by degree d forms F_u∈ R[y_0,y_1]_d for each u∈τ. After possibly further pullback along a finite map, we may assume that each F_u factors as a product of linear factors. From Proposition <ref>, the factors of the {F_u} at a general point determine a Cayley structure τ→Δ_ℓ+1, which is exactly the Cayley structure π. In particular, the {F_u} have ℓ+1 distinct factors f_0,…,f_ℓ∈ R[y_0,y_1]_1 up to scaling by non-zero elements of the field of fractions of R. Without loss of generality, we will order them so that the factors of the F_u for u∈τ' are f_0, …, f_j for some j.Similarly, the factors of the {F_u(η)} for u∈τ' determine a Cayley structure τ'→Δ_ℓ'+1 (and in particular, F_u(η)=0 for u∉τ'). After enumerating the factors of the {F_u(η)}_u∈τ' up to scaling, the map f_i↦ f_i(η) for i≤ j induces a surjective map {0,…,j}→{0,…,ℓ'}. We may extend this arbitrarily to a surjective map ϕ:{0,…,ℓ}→{0,…,ℓ'}.It follows from the construction of ϕ that π'=ϕ∘π, and hence π'≤π.§ TORUS ORBITS §.§ Limiting CyclesLet _d(X_) denote the Chow variety parametrizing degree d one-cycles in X_. For a curve C⊆ X_, we denote the corresponding one-cycle by {C}. Given a primitive Cayley structure π of degree d, we wish to describe the closure in _d(X_) of the torus orbitT·{C_π,}for any choice ofwith whose entries have distinct roots. This orbit closure is a (potentially non-normal) complete toric variety. In order to understand this orbit closure, we will describe the limit of {C_π,} under a one-parameter subgroup.We first introduce a bit of notation. We refer the reader to <cit.> for the correspondence between normal toric varieties and fans. Let τ be a face of , let V=(⟨τ⟩,) and consider any v∈ V. We denote by τ^v≺τ the face on which v is minimal. Note that τ^v is well-defined, although the value of v on a point is τ is defined only up to a constant. The dependence of τ^v on v can be understood combinatorially. Let Σ be the inner normal fan of τ viewed as a subset of V.Then τ^v=τ^v' if and only if v and v' belong to the relative interior of the same cone of Σ. Accordingly, for any cone σ∈Σ, we denote by τ^σ the face τ^v where v is any element in the relative interior of σ.Let π:τ→Δ_ℓ(d) be a primitive Cayley structure and let 0 ≤ i ≤ℓ. A face τ'≼τ is an i-face if e_i^* is non-constant on π(τ'). Equivalently, π^*(e_i^*) ∈ V is not in the linear span of the cone corresponding to τ'.We denote by Σ_i the subfan of Σ consisting of the cones corresponding to i-faces. 
Finally, for v∈ V let Σ_i^v consist of those cones in Σ_i such that the ray v+_>0π^*(e_i^*) intersects their relative interior.A minimal i-face is always of dimension 1 (if e_i^* is nonconstant on π(τ'), it is nonconstant on some edge of τ'). Equivalently, the maximal cones of Σ_i are of codimension 1. Accordingly, for general v ∈ V, Σ_i^v consists only of cones of codimension 1 and corresponds only to edges of τ. Note that Σ^i_v is not a fan, only a collection of cones. We will use the restrictions of π to τ^v and to each cone of Σ_i^v to describe the limit of {C_π, }, as follows.For any i=0,…,ℓ, let κ_i:^ℓ+1→^2 be the linear map defined byκ_i(e_j)= e_0j=ie_1j≠ i .Let v∈(⟨τ⟩,). First, if π|_τ^v is non-constant, let m^v be the multiplicity of (π|_τ^v) and letπ^v=((π|_τ^v)),a basepoint-free weak Cayley structure.For any basepoint i of π|_τ^v and any σ∈Σ_i, let m_i^σ be the multiplicity of (κ_i∘π|_τ^σ)) and letπ_i^σ :=((κ_i∘π|_τ^σ)) : τ^σ→Δ_1(d/m_i^σ).Then π_i^σ is a Cayley structure of length one. Finally, for linear forms f, g, setf∧ g=f(1,0)g(0,1)-g(1,0)f(0,1).Note that f ∧ g = 0 if and only if f and g have a common root. Viewing π^*(e_j^*) ∈ N_τ≅(^*, T_τ), we definet_i = ∏_j π^*(e_j^*)(f_j ∧ f_i).Let π:τ→Δ_ℓ(d) be a primitive degree d Cayley structure. For general choice of , the limit under v∈ N_τ of the degree d cycle {C_π,} ism^v·{C_π^v,}+∑_i,σ∈Σ_i^v m_i^σ·{t_i· C_π_i^σ} if π|_τ^v non-constant; ∑_i,σ∈Σ_i^v m_i^σ·{t_i· C_π_i^σ} if π|_τ^vconstant.The sum is taken over all basepoints i of π|_τ^v. Furthermore, if we assume that π|_τ^v is constant, the claim is true for anywhose entries have distinct roots such that C_π,=d.We will prove this theorem in <ref>. We apply Theorem <ref> to compute the limit of C_π, under the one-parameter subgroup v=(-1,-1,-1), where π is the Cayley structure from Example <ref>. The face τ^v is just the point (1,1,0), so π|_τ^v=2e_0 is constant and has 0 as a basepoint. Note that π^*(e_0^*)=(0,1,0). The cones of Σ_0^v are exactly the rays generated by (-1,0,-1) and (-1,1,-1); the corresponding faces ofare the convex hulls of {(1,1,0),(0,0,1),(1,0,0)} and {(0,0,1),(1,0,0),(0,-1,0)}. These faces, along with the images of the corresponding Cayley structures π_i^σ are pictured in Figure <ref>. Note that for the first face we have permuted the roles of e_0 and e_1 for convenience. Both of these Cayley structures are primitive of degree one, hence all m_i^σ=1. Furthermore, since π already had length 1, t_0=1. We see that the limit of C_π, is the union of two distinct lines intersecting in a point. Each of these lines is contained in one of the ^2 boundary strata depicted in Figure <ref>; the intersection of these lines is not a torus fixed point.§.§ Orbits in _d(X_)We will now describe the closure ofT·{C_π,}in _d(X_).Let ε be a minimal i-face of τ. (Recall that ϵ = 1, cf. Remark <ref>). We define its i-multiplicity to be _i(ε):=max_u,u'∈ε (π(u)_i-π(u')_i)/L(ε),where L(ε) is the length of ε with respect to the lattice ⟨ε⟩. This is, equivalently, the multiplicity of (κ_i ∘π|_ε). If ε is not minimal, we set _i(ε)=0.Let Z(τ) be the group of formal integral linear combinations of faces of τ. Define a mapϕ:V → Z(τ) ϕ(v) =∑_i,σ∈Σ_i v∈σ-_≥ 0π^*(e_i^*)_i(τ^σ)·τ^σ. Let Σ_π consist of the closures in V of those full-dimensional regions on which ϕ is constant, along with all faces of these sets. Recall the definition of the lattice N_π, along with the map N_τ→ N_π, from Definition <ref>. Let π be a primitive degree d Cayley structure. 
Lethave distinct roots and be such that C_π, has degree d. Let Z be the normalization of T·{C_π,}⊆_d(X_). * If ℓ>1, the set Σ_π is a fan. If ℓ=1, the set Σ_π modulo its one-dimensional lineality space is a fan. * If ℓ>1 andis sufficiently general, then Z is the toric variety associated to the fan Σ_π with respect to the lattice N_τ. * More generally, for ℓ arbitrary and anysatisfying the above hypotheses, Z is the toric variety associated to the image of Σ_π in N_π,⊗ with respect to the lattice N_π,. The variety Z is a complete toric variety. The action by the torus T on Z factors through the quotient torus T_τ. The kernel of this action is exactly the stabilizer of {C_π,} which is described in Proposition <ref>. The quotient of T_τ by this kernel is the torus whose character lattice is N_π,; this is the lattice whose associated vector space contains the fan Σ' for Z. We see that the second claim of the theorem will follow from the third, since forgeneral and ℓ>1, N_τ=N_π,, see Corollary <ref>. In Lemma <ref> below we will show that for general v,v'∈ N_τ, the limits of {C_π,} under the corresponding one-parameter subgroups of T_τ coincide if and only if v and v' belong to the same (full-dimensional) region in Σ_π. On the other hand, general w,w'∈ N_π, belong to the same (full-dimensional) cone of Σ' if and only if they give the same limit of {C_π,}, see e.g. <cit.>. Since any complete fan is determined by its full-dimensional cones, it follows that Σ_π is the preimage in V of Σ'. The first and third claims now follow. For general v∈ N_τ, the limit of {C_π,} under v is the composition of ϕ(v) with the map sending a face ε≺τ to the cycle {X_ε}. In particular, for generic v,v'∈ N_τ, the limits of {C_π,} coincide if and only if v and v' belong to the same full-dimensional region of Σ_π.Fix a general v∈ N_τ. By Theorem <ref>, the limit of {C_π,} is∑_i,σ∈Σ_i^v m_i^σ·{t_i· C_π_i^σ}. Indeed, since v is general, it follows that τ^v is a vertex, so π_τ^v is constant and there is no m^v·{C_π^v,} term in the limit. We claim that (<ref>) agrees with ∑_i,σ∈Σ_i^v_i(τ^σ)·{X_τ^σ}.Indeed, since v is general, every σ∈Σ_i^v has codimension one, so every corresponding τ^σ is an edge of τ. It follows that C_π_i^σ is torus fixed, so we may dispense with the action by the t_i. Moreover, for any i, the Cayley structure π_i^σ results in the curve X_τ^σ . Finally, it is clear from the definition that _i(σ) is just the multiplicity of (κ_i ∘π|_τ^σ). This shows the first claim. The second is immediate. We continue our analysis of the Cayley structures π and π' from Example <ref>. For ease of referring to the edges of , we label the vertices in the left of Figure <ref>. The normal fan of the convex hull ofis pictured in Figure <ref>. For the Cayley structure π, the behaviour of the map ϕ is described in Figure <ref>. Depicted there are the z=1 and z=-1 hyperplanes in ^3. In each region, the label [α,β] means that the contribution of the 0-basepoint to ϕ is α∈ Z() and the 1-basepoint contribution to ϕ is β∈ Z(). The value of ϕ is α+β. As is predicted by Theorem <ref>, the lineality space of Σ_π is exactly the image of π^*:the span of (0,1,0). Projecting onto the first and third coordinates, we obtain exactly the fan on the left of Figure <ref>. The values of ϕ on the interior of these regions are exactly the generic limits of C_π,. Each of these limits is a pair of torus-invariant lines meeting in a torus fixed point. The corresponding edges ofare colour-coded on the right of Figure <ref>. 
The six uncoloured edges (23, 56, 1A, 1B, 4A, 4B) are not 0- or 1-faces and so do not arise in any generic limit. We note that since all these limits are reduced conics, in this case the Hilbert-Chow morphism induces an isomorphism of normalizations of the orbit closures in the Hilbert scheme and the Chow variety. The orbit has the same dimension as the component Z_π of the Hilbert scheme, so the normalization of Z_π is the toric variety corresponding to the left fan of Figure <ref>.

We now consider the Cayley structure π' instead. By Theorem <ref>, we know that the quasifan Σ_π' will have lineality space spanned by (0,1,1). A slice of this quasifan is depicted on the right of Figure <ref>. The maximal cones are labeled with the corresponding values of ϕ; we notice that in this case, some of the generic limits are non-reduced. After an appropriate choice of coordinates, the projection of Σ_π' to the quotient space is exactly the fan on the left of Figure <ref>. One may conduct a similar analysis for the other length 1 Cayley structures of Example <ref>. The resulting fans are, up to change of coordinates, exactly those pictured in Figure <ref>.

§.§ Chow Polytopes

Let Z be any cycle in ^ℓ. Recall that its Chow polytope (Z) is the convex hull of the weights in ^ℓ+1 appearing in the corresponding Chow form <cit.>. This is the moment polytope of the orbit closure of {Z} in the Chow variety under the action by (^*)^ℓ+1; its normal fan is the fan corresponding to this toric variety. More generally, consider any subtorus T' of (^*)^ℓ+1 with character lattice M'. The Chow polytope of Z with respect to T' is the linear projection _T'(Z) of (Z) induced by the map ^ℓ+1→ M', see e.g. <cit.>. This is the moment polytope of the T'-orbit closure of {Z}.

Returning to our previous setup with a Cayley structure π:τ→Δ_ℓ(d), we see from Theorem <ref> that the image of the fan Σ_π is the normal fan to _T(C_π,). We may explicitly describe this polytope using the combinatorics of π. Let μ:Z(τ)→ M be the linear map sending an edge ϵ≼τ to 2·L(ϵ) times the midpoint of ϵ.

Assume that the tuple has distinct roots. The Chow polytope _T(C_π,) is the convex hull of the points μ(ϕ(v))∈ M as v ranges over general elements of each cone in Σ_π.

Before proving the corollary, we recall several facts about Chow polytopes we will use; these are stated in <cit.> for the “classical” case (Z) but are straightforward to generalize to the case of _T'(Z). Firstly, _T'(Z) is the convex hull of the polytopes _T'(Z_v), where v ranges over general one-parameter subgroups of T' and Z_v is the limit cycle of Z under v. Secondly, for a cycle Z=∑ c_iZ_i, _T'(Z) is the Minkowski sum _T'(Z)=∑ c_i_T'(Z_i). Finally, we need the following:

Let ϵ be an edge of the polytope. Then _T(X_ϵ)=μ(ϵ).

First consider the special case where the polytope is Δ_n(1). Then indeed (X_ϵ)=μ(ϵ), see <cit.>. Returning to the situation of a general polytope, by taking n=#ϵ-1, we may obtain an affine-linear bijection Δ_n(1)→ϵ with some edge ϵ' of Δ_n(1) mapping to the endpoints of ϵ. There is an affine-linear Cayley structure ϵ→Δ_1(L(ϵ)); composition gives a Cayley structure π':Δ_n(1)→Δ_1(L(ϵ)). The toric stratum X_ϵ is just C_π'. We may apply Theorem <ref> to any v minimized on ϵ' to obtain L(ϵ)·{X_ϵ'} as a limit of C_π'. Since this is torus-fixed with respect to (^*)^n+1, L(ϵ)·μ(ϵ') is a vertex of (X_ϵ). Since X_ϵ is torus fixed with respect to T, _T(X_ϵ) is a single vertex. It follows that this is simply the projection of L(ϵ)·μ(ϵ') under the map induced by Δ_n(1)→ϵ.
This map takes μ(ϵ') to the sum of the endpoints of ϵ, so L(ϵ)·μ(ϵ') maps to μ(ϵ) as desired.

The corollary is now immediate: This follows from the above discussion, Lemma <ref>, and Lemma <ref>.

One could also prove Corollary <ref> and Theorem <ref> using tropical geometry. Indeed, since ρ_π, is the composition of a linear map with a monomial map, the tropicalization of C_π, (as a subvariety of the toric variety X_τ) is given by the image of the standard tropical line in ℓ-dimensional tropical projective space under the linear map π^*⊗ℝ:ℝ^ℓ+1→ V. One may compute the multiplicities of its cones using <cit.>. An application of e.g. <cit.> in this situation yields a description of the vertices of the Chow polytope. It is straightforward to see that this coincides with the result of Corollary <ref>; we leave this as an exercise to the tropically-minded reader. Although this approach to Corollary <ref> and Theorem <ref> is arguably simpler than the one we have taken here, we do not see how to obtain the full strength of Theorem <ref> (for non-general v) using tropical methods.

Continuing Example <ref>, we use Corollary <ref> to compute _T(C_π',), the Chow polytope for the Cayley structure π'. The result is pictured on the right of Figure <ref>. Note that the normal fan of this polytope is exactly the quasifan Σ_π'.

§.§ Blowing Up

In this subsection we will prove Theorem <ref>. Before doing so, we discuss the behaviour of families induced from Cayley structures under blowup. Fix a Cayley structure π:τ→Δ_ℓ(d). Suppose that for some natural number j, we have the following data: polynomials a_i^(j),b_i^(j)∈ℂ[z] for i=0,…,ℓ, and an affine linear map λ^(j):τ→ℤ. Assume that for each i, a_i^(j) and b_i^(j) are not both divisible by z. We then set f_i^(j)=a_i^(j)y_0^(j)+b_i^(j)y_1^(j) and [j]=(f_0^(j),…,f_ℓ^(j)).

We consider the rational map ϕ:^1×^1⇢ X_τ where, for z∈^1 and y_0^(j),y_1^(j) homogeneous coordinates on ^1, we have x_u=0 if u∉τ, and x_u=z^{λ^(j)(u)}·[j]^{π(u)} if u∈τ.

Fix 0≤ k≤ℓ and assume that a_k^(j),b_k^(j) are constants, b_k^(j)≠ 0, and V(f_k^(j),z) is distinct from V(f_i^(j),z) for i≠ k. Set γ_j=a_k^(j)/b_k^(j). We blow up ^1×^1 at the point V(γ_j y_0^(j)+y_1^(j),z)=V(f_k^(j),z) and consider the induced map, which we again call ϕ, ϕ:Bl_{V(f_k^(j),z)}(^1×^1)⇢ X_τ.

We first consider the chart of the blowup with local coordinates z',y_0^(j),y_1^(j) where z=z'(y_1^(j)/y_0^(j)+γ_j)=z'f_k^(j)/(b_k^(j)y_0^(j)). In these coordinates, we have x_u=0 if u∉τ, and x_u=(z')^{λ^(j)(u)}(b_k^(j)y_0^(j))^{-λ^(j)(u)}[j]^{π(u)+λ^(j)(u)·e_k} if u∈τ.

Suppose that there exists w∈τ that minimizes both λ^(j) and λ^(j)+π^*(e_k^*). Then in the above coordinates, ϕ does not have a basepoint at f_k^(j)=z'=0.

We show that for any u∈τ, x_u/x_w is regular at f_k^(j)=z'=0. Since x_w/x_w=1, this implies there is no basepoint. Fix any u∈τ. Using the above description of x_u, we have x_u/x_w=(z')^{λ^(j)(u)-λ^(j)(w)}(f_k^(j))^{e_k^*(π(u))+λ^(j)(u)-(e_k^*(π(w))+λ^(j)(w))}·ζ, where ζ is regular at f_k^(j)=z'=0. But by assumption, λ^(j)(u)-λ^(j)(w)≥ 0 and e_k^*(π(u))+λ^(j)(u)-(e_k^*(π(w))+λ^(j)(w))≥ 0, so x_u/x_w is regular.

We now consider the other chart of the blowup with local coordinates z and y_0^(j+1),y_1^(j+1) with y_1^(j)/y_0^(j)+γ_j=z·y_1^(j+1)/y_0^(j+1). Let E_j+1 denote the closure of the line z=0 in this chart.
For u∈τ, define λ^(j+1)(u)=λ^(j)(u)+e_k^*(π(u)). Likewise, we set a_i^(j+1)=0 if i=k and a_i^(j+1)=a_i^(j)-b_i^(j)γ_j otherwise; b_i^(j+1)=b_i^(j) if i=k and b_i^(j+1)=z·b_i^(j) otherwise; and f_i^(j+1)=a_i^(j+1)y_0^(j+1)+b_i^(j+1)y_1^(j+1), [j+1]=(f_0^(j+1),…,f_ℓ^(j+1)). It is straightforward to check that in these coordinates, ϕ is given by x_u=0 if u∉τ, and x_u=z^{λ^(j+1)(u)}·[j+1]^{π(u)} if u∈τ.

Fix a tuple of forms such that the entries have distinct roots. After an automorphism of ^1, we may assume that none of the f_i have V(y_0) as a root. Let a_i^(0),b_i^(0) be constants such that f_i^(0)=f_i. Consider v∈ N_τ; after replacing v by a sufficiently divisible multiple, we may assume that for every cone σ∈Σ whose relative interior intersects v+ℝ_≥0 π^*(e_i^*), there is an integer j≥ 0 such that v+j·π^*(e_i^*) is in the relative interior of σ. Note that taking a multiple of v does not change the limit cycle. Let λ^(0):τ→ℤ be any affine-linear map whose linear part is exactly v.

We claim that for any j≥ 0, and for any 0≤ k≤ℓ, there exists w∈τ that minimizes both λ^(0)+j·e_k^*∘π and λ^(0)+(j+1)·e_k^*∘π. Indeed, by our assumption on v, the linear parts of these two maps either belong to the same cone of Σ, or one belongs to a cone which is a face of the other. It follows that one of the corresponding faces of τ must be a subface of the other, hence such w exists.

Considering the map ϕ as above, for z∈^* we have v(z)·C_π,=ϕ(^1×{z}) by construction. The restriction of ϕ to E_0, the line where z vanishes, is given by ρ_π|_τ^v,. The basepoints are given by V(f_i) for those i that are basepoints of π|_τ^v. If π|_τ^v is constant, the line z=0 gets contracted to a point by ϕ. Otherwise, assuming that the tuple is sufficiently general, the image of this line is C_π^v, by construction of π^v and Theorem <ref>. Moreover, by loc. cit. the multiplicity of the pushforward of the line E_0 is exactly m^v.

It remains to resolve the basepoints of ϕ and determine their contributions to the limiting cycle. Fix a basepoint k of π|_τ^v and let γ_0=a_k^(0)/b_k^(0). We blow up and use the coordinates discussed above; note that a_k^(0) and b_k^(0) are constants. By Lemma <ref> and the discussion above, ϕ has no basepoint at E_0∩ E_1. We obtain λ^(1)(u)=λ^(0)(u)+e_k^*(π(u)), with a_i^(1)=0 if i=k and a_i^(1)=a_i^(0)-b_i^(0)γ_0 otherwise; b_i^(1)=b_i^(0) if i=k and b_i^(1)=z·b_i^(0) otherwise.

The function f_k^(1) then vanishes at (1:0); note that a_k^(1) and b_k^(1) are again constants. We continue in this manner: taking γ_1=…=γ_j=0 and blowing up j≥ 0 more times leads to λ^(j+1)(u)=λ^(0)(u)+(j+1)·e_k^*(π(u)), with a_i^(j+1)=0 if i=k and a_i^(j+1)=a_i^(0)-b_i^(0)γ_0 otherwise; b_i^(j+1)=b_i^(0) if i=k and b_i^(j+1)=z^{j+1}·b_i^(0) otherwise.

The face τ' of τ on which λ^(j+1) is minimal is exactly τ^(v+(j+1)π^*(e_k^*)). Furthermore, restricting to the line E_j+1 on this chart, we obtain (f_i^(j+1))|_E_j+1=b_i^(0)y_1 if i=k and (f_i^(j+1))|_E_j+1=(a_i^(0)-b_i^(0)γ_0)y_0 otherwise, and the map ϕ|_E_j+1 is given by x_u=0 if u∉τ', and x_u=[j+1]^{π(u)}|_E_j+1 if u∈τ'.

Again by Lemma <ref>, the only possible basepoint of ϕ on E_j+1 is at (1:0). Hence, by continuing this blowup procedure, we eventually resolve the basepoints coming from f_k. The map ϕ|_E_j+1 is non-constant if and only if τ' is a k-face. Hence, the map becomes non-constant exactly when choosing j≥ 0 such that v+(j+1)π^*(e_k^*)∈Σ_k, and in this case the cone corresponding to τ' is in Σ_k^v. For such j≥ 0, set f_i'=(f_i^(j+1))|_E_j+1.
All the f_i' have the same roots except for f_k', so the map ϕ|_E_j+1 equals t_k'·ρ_{κ_k∘π|_τ'}, where, thinking of V as (^*, T_τ),

t_k' = π^*(e_k^*)(b_k^(0))·∏_{i≠ k} π^*(e_i^*)(a_i^(0)-b_i^(0)γ_0).

Since f_i∧ f_k=b_k^(0)(a_i^(0)-b_i^(0)γ_0), we may act on ^1 by replacing y_0^(j+1) with b_k^(0)y_0^(j+1) and y_1^(j+1) with y_1^(j+1)/b_k^(0) to see that, after rescaling coordinates, we may replace t_k' with t_k as defined prior to the statement of the theorem. Finally, by Theorem <ref>, the pushforward of E_j+1 under ρ_{κ_k∘π|_τ'} is exactly m_k^τ'·C_π_k^τ'. Resolving each basepoint of ϕ in this manner, we obtain exactly the formula from the statement of the theorem.

We note that in order to apply Theorem <ref> for the Cayley structure π^v, we need to assume that the tuple is general. However, to apply it for the Cayley structures obtained after blowing up, we need no such assumption since these are Cayley structures of length one (whose input forms are thus automatically general).

§.§ Orbits in the Hilbert Scheme

To probe the boundary of the Hilbert scheme _dm+1(X_) it is also of interest to ask for a description of the orbit closure of [C_π,] there. This seems challenging in general, but we do know the following:

Let π be a primitive smooth Cayley structure of degree d and assume that the tuple is general. Then the fan describing the normalization of the orbit closure of [C_π,] in _dm+1(X_) is a refinement of the fan from Theorem <ref>.

Let Z' be the normalization of T·[C_π,] and Z the normalization of T·{C_π,}. Here, closures are being taken respectively in the Hilbert scheme and Chow variety. There is a natural T-equivariant morphism _dm+1(X_)→_d(X_) taking a scheme to its underlying cycle; this map induces an isomorphism from the T-orbit of [C_π,] to the T-orbit of {C_π,} and hence a birational torus equivariant morphism Z'→ Z. This means that the fan for Z' is a subdivision of the fan for Z, see e.g. <cit.>. The fan for Z is described in Theorem <ref>.

In the case of conics, that is, degree two Cayley structures, we can give a more explicit answer. It is well-known that any subscheme of ^n with Hilbert polynomial 2m+1 is a plane conic. For the sake of the reader, we provide a short proof of this fact:

Any subscheme of ^n with Hilbert polynomial 2m+1 is contained in a plane.

Let Y⊂^n be any plane conic. Then Y is a complete intersection of a conic with n-2 linear forms. It is straightforward to check that the Piene-Schlessinger comparison theorem applies to Y <cit.>. Since Y is a complete intersection, it follows that _2m+1(^n) is smooth at [Y]. By connectedness of the Hilbert scheme <cit.>, we conclude that _2m+1(^n) is irreducible and only consists of plane conics.

Let π:τ→Δ_ℓ(2) be a degree two primitive Cayley structure. We define Σ_π' to be the normal fan in V=(⟨τ⟩,ℝ) of the convex hull of {u+v+w∈ M | u,v,w∈τ, π(u),π(v),π(w) distinct}.

Let π be a primitive Cayley structure of degree two and the tuple sufficiently general. Let Z' be the normalization of T·[C_π,]⊆_2m+1(X_). Then the fan describing Z' is the image in N_π,⊗ℝ of the coarsest common refinement of Σ_π and Σ_π'.

Let Λ be the unique plane containing C_π,. We will see that Σ_π' is the fan corresponding to the closure in the Grassmannian (3,#) of the torus orbit of Λ under the action by T. Since any (possibly degenerate) conic is determined by the underlying cycle and the plane containing it (cf. Lemma <ref>), the claim of the theorem follows from Theorem <ref>. Let (Λ) be the matroid of Λ⊆^#-1.
The elements of the ground set are labeled by the points u∈τ, and a collection of elements S⊂τ is a basis of (Λ) if the projection of Λ to the projective space with coordinates indexed by S is bijective. It follows that any basis has exactly three elements, and by construction of C_π,, these elements must lie in different fibers of π. In fact, this condition also suffices for three elements to form a basis. Indeed, consider the composition of the embedding ^1→^ℓ determined by the tuple with the second Veronese map ^ℓ→^{\binom{ℓ+1}{2}-1}. The image of this map is a conic C; let Λ' denote the plane containing it. Since the tuple is general, the matroid of Λ' is the uniform rank 3 matroid on \binom{ℓ+1}{2} elements. By construction of C_π,, u,v,w∈τ form a basis for (Λ) if and only if π(u),π(v),π(w) form a basis for (Λ'). Since the latter matroid is uniform, we see that u,v,w form a basis if and only if they lie in different fibers of π. The matroid polytope for (Λ) is thus P={e_u+e_v+e_w∈^# | u,v,w∈τ, π(u),π(v),π(w) distinct}, see <cit.>. The closure of the orbit of Λ under the big torus (^*)^# is the toric variety associated to the polytope P. Since we are interested instead in the T-orbit, we consider the projection of P to M under the map e_u↦ u. The resulting polytope describes the T-orbit closure of Λ, and its normal fan Σ_π' the normalization thereof.

[General conics in ^3] Let the polytope be Δ_3(1). The associated toric variety is simply ^3. We consider the length seven Cayley structure π:Δ_3(1)→Δ_7(2) sending e_i to e_i+e_{i+4} for i=0,1,2,3. This Cayley structure gives the most general conics in ^3. We will compare torus orbit closures in the Chow variety and Hilbert scheme. The normal fan of Δ_3(1) has rays generated by the images of e_0^*,…,e_3^* in (^4)^*/⟨e_0^*+…+e_3^*⟩. The maximal cones of the normal fan are generated by any three of these rays. One may calculate that the fan Σ_π has rays generated by ±e_i^*, i=0,…,3. The six maximal cones are generated by collections of rays of the form e_i^*,e_j^*,-e_k^*,-e_ℓ^* for {i,j,k,ℓ}={0,1,2,3}. See Figure <ref> for a schematic representation of this fan: two ray generators are joined by a dashed black line segment if and only if they generate a face of a cone in Σ_π. This fan describes the normalization Z of the orbit closure in _2(^3) by Theorem <ref>. This toric variety has six isolated singularities, each of which is a cone over the quadric surface.

On the other hand, the fan Σ_π' has rays generated by -e_i^*, i=0,…,3. The four maximal cones are generated by any set of three rays. In the schematic representation of Figure <ref>, two ray generators are joined by a gray line segment if and only if they generate a face of a cone in Σ_π'. The coarsest common refinement of Σ_π and Σ_π' thus has twelve maximal cones, generated by e_i^*,-e_j^*,-e_k^* for i,j,k distinct. Each of the six cones of Σ_π is subdivided into two maximal cones, see Figure <ref>, taking both the dashed and gray line segments into account. By Theorem <ref> this fan describes the normalization Z' of the orbit closure in the Hilbert scheme. Geometrically, in this example the map Z'→ Z is a crepant resolution of the singularities of Z.

We continue our analysis of Example <ref> by applying Theorem <ref>. For the Cayley structure π, we had already noted in Example <ref> that the fan Σ_π describes the orbit closure in the Hilbert scheme. We can now see this in another way: the polytope used to construct the fan Σ_π' is depicted on the left of Figure <ref>.
Its normal fan is exactly the quasifan Σ_π, so by Theorem <ref> we recover that the orbit closures in the Hilbert scheme and in the Chow variety coincide. We may instead consider the Cayley structure π'. The polytope used to construct the fan Σ_π'' is depicted on the right of Figure <ref>. Its normal fan is a coarsening of the quasifan Σ_π', so again by Theorem <ref> we recover that the orbit closures in the Hilbert scheme and in the Chow variety coincide. A similar analysis shows that the same statement holds for any of the length one Cayley structures considered in Example <ref>. Since the orbit closures have dimension two, and the corresponding Hilbert scheme components also have dimension two, we obtain that the normalizations of the Hilbert scheme components are the toric surfaces corresponding to the fans of Figure <ref>.
Citizen science for social physics: Digital tools and participation

Josep Perelló^1,2 (josep.perello@ub.edu), Ferran Larroya^1,2 (ferran.larroya@ub.edu), Isabelle Bonhoure^1,2 (isabelle.bonhoure@ub.edu), Franziska Peter^1,2 (fpeter@ub.edu)

[1] OpenSystems Research Group, Departament de Física de la Matèria Condensada, Universitat de Barcelona, Martí i Franquès, 1, Barcelona, 08028, Spain
[2] Universitat de Barcelona Institute of Complex Systems UBICS, Martí i Franquès, 1, Barcelona, 08028, Spain

January 14, 2024

Abstract. Social physics is an active and diverse field in which many scientists with formal training in physics study a broad class of complex social phenomena. Social physics investigates societal problems but most often does not count on the active and conscious participation of the citizens. We here want to support the idea that citizen science, and more particularly citizen social science, can contribute to the broad field of social physics. We do so by sharing some of our own experiences during the last decade. We first describe several human mobility experiments in urban contexts with the participation of concerned young students, older women and other groups of neighbours. We then share how we have studied community mental health care provision in collaboration with a civil society organisation and with the intense involvement of persons with lived experience in mental health. In both cases, we narrow down the discussion to the digital tools being used and the participatory dynamics involved. In this way, we share key learnings to enhance a synergistic relationship between social physics and citizen science, with the aim to increase the societal impact of research on complex social phenomena.

§ INTRODUCTION

Human-behaviour research topics can today be grounded on a data-driven basis which was inconceivable only 20 years ago. The digital transformation of our societies, with the broad use of the internet, has led to a variety of digital interactive platforms and fully-equipped mobile phones. These devices can extensively and intensively record human movements, decision-making processes or emotional reactions. They have opened the door to analysing and modelling both individual and aggregated human behaviours in an unprecedented manner.

This sort of data-driven research has deep interdisciplinary and multidisciplinary roots. Many of the academics contributing to a better understanding of the related social phenomena are physicists <cit.>, generally with a strong background in statistical physics <cit.> and complex systems science <cit.>. Physicists have seen the need of cutting across academic boundaries. Their curiosity has led them to look outside their traditional domains <cit.>, expecting to unveil laws similar to those in physics <cit.>. Jusup and coauthors <cit.> define social physics “as a collection of active research topics aiming to resolve societal problems to which scientists with formal training in physics have contributed and continue to contribute substantially”. The authors also qualify social physics as an extremely active and diverse field which broadly includes human behavior and interaction, but also human cooperation or human mobility, to name just a few topics <cit.>.

Citizen science has also been a growing research practice during the last decade <cit.>. Citizen science broadly refers to the active engagement of the general public in scientific research tasks <cit.>.
In citizen science, scientists and citizens collaborate to produce new knowledge for science and society. Crowdsourced data can be collected with Apps on mobile phones or web-based platforms. Amateurs have volunteered to take pictures of, for instance, bird species, invasive marine species or tiger mosquitoes. The outcomes of this joint effort are already showing relevant scientific impact <cit.>. In the particular field of physics, participants have classified millions of galaxies thanks to a well-documented website and a carefully guided interaction. This articulated effort has been key in numerous scientific publications <cit.>. On another level, but still within the context of physics, local communities have created a network of low-cost earthquake sensors <cit.> or became key actors in the quick reaction after the Fukushima disaster, collecting extensive radiation measurements <cit.>. A scientific research effort can thus be organised by building collaborative networks of participants. These networks can also serve to effectively report volcano observations in Europe or eventually in all parts of the world <cit.>. All these cases are just a few examples of the wide set of initiatives flourishing worldwide following very different formats and strategies.

However, it is not that frequent to find citizen science projects within the specific field of social physics. In research on human-related activities and human behaviour, the roles and the position of the participants with respect to the social topic under investigation are less clear, and that topic very often relates to social concerns that are at stake <cit.>. There is a sophistication and a subtlety in this research that contrasts with more established citizen science practices on, for instance, biodiversity monitoring or astronomy observation.

We want here to further support the idea that citizen science practices can effectively bring everyone into the important task of better understanding those complex social phenomena <cit.>. To encourage researchers from social physics to further embrace citizen science, more reflection is needed. There are not enough spaces for sharing experiences from which to extract key learnings. These spaces can help to further reflect on the configuration of the related digital tools, which need to be adaptive to participants' perspective, skills, concerns and motivation. The building or adaptation of digital tools for crowdsourced data collection is not the only aspect to be considered in citizen science practices. To favour a citizen science for social physics, it is also necessary to further reflect on why and how citizens can be involved and to carefully structure participation in research activities. At least part of the reflection can be nurtured by the so-called citizen social science or social citizen science <cit.>, which enhances social dimensions in citizen science projects and increases participatory research on shared social concerns with communities, sometimes in a vulnerable situation.

Our OpenSystems research group <cit.> has been developing this kind of citizen science research during the last decade in urban contexts, mostly in the Barcelona (Spain) metropolitan area. We will restrict the paper to the presentation of two distinct themes with the aim to describe, in very practical terms, the development and the implementation of digital tools to collect crowdsourced data, and the ways and reasons to incorporate citizens' participation into the research activities described.
The first theme has traditionally been extensively explored in the context of social physics: human mobility <cit.>. The second theme focuses on community mental health care provision and shows its link to social interactions <cit.>, human cooperation <cit.> and complex systems science in general <cit.>. The discussion and conclusion section summarizes and reflects on the main aspects presented.

§ HUMAN MOBILITY

Human mobility has been gaining attention within the physics community <cit.>. Contributions have analyzed location data to learn about existing patterns <cit.>. Some of them have also provided new models or, alternatively, improved or assessed the existing ones <cit.>. Some of the papers have also brought out relevant insights about urban contextual issues like walkability or COVID response policies <cit.>. Census data has been for decades the (public) data source, the same for all the academic community. It has made it possible to assess the quality of well-known models in the literature such as the radiation model <cit.>. However, census data might be unsatisfactory for the current research agenda. A higher level of data granularity is often required, with higher time and space resolution. The wide use of mobile phones and their related technologies gives the capacity to collect such location data <cit.>. Social media accesses/posts collected by the researchers themselves <cit.> or mobile phone call data records (CDR) provided to specific research groups are an alternative to gain knowledge on human mobility patterns at a mesoscopic level <cit.>. If it is of interest to look at an even higher resolution scale, then other tracking systems might be necessary. This can be for instance the case of specific mobile application (App) tracking systems <cit.>.

Highest-resolution mobility data is generally owned by private companies. However, in some cases, societal challenges and humanitarian purposes have motivated a closer collaboration with academia. A mobile company worked with academic researchers and the researchers analyzed the movements of 1.9 million mobile phone users around Port au Prince city before and after the Haiti earthquake in 2010 <cit.>. Some companies opted to launch open calls for receiving specific academic demands (related to humanitarian purposes) and the companies selected an identified group of scientists to have access to very valuable location data <cit.>. Hackathons and similar gatherings have also been a strategy by telecommunications companies in Africa to mobilize the academic community towards sustainable or societal challenges by releasing partial data from some countries <cit.>, but without questioning ethical and privacy related issues. All in all, it is still difficult to find open mobility data. Restricted data reuse is thus an obstacle for scientific reproducibility and the further advancement of the related science <cit.>. The research agenda is thus mostly driven by which data a researcher has access to. In scientific papers, raw data is most often said to be not available to other researchers, and only an anonymized and coarse-grained data set might be offered upon request (see for instance the cases of Refs. <cit.>). Some scientific papers alternatively recommend contacting the company owning the data if the reader wants to access it and perform new research over the same data.
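To make the modelling side concrete, the radiation model mentioned above predicts the average flux between an origin i and a destination j from populations alone, in the form proposed by Simini and coauthors. A minimal sketch in Python, with a function name and toy numbers that are ours and purely illustrative, could read:

def radiation_flux(T_i, m_i, n_j, s_ij):
    # T_i: total trips leaving origin i; m_i, n_j: origin/destination populations
    # s_ij: population within a circle of radius r_ij around i, excluding m_i and n_j
    return T_i * (m_i * n_j) / ((m_i + s_ij) * (m_i + n_j + s_ij))

# Toy example: 1000 commuters leave a neighbourhood of 5000 inhabitants
print(radiation_flux(T_i=1000, m_i=5000, n_j=2000, s_ij=10000))

Part of the appeal of the model, and the reason census data suffices to test it, is that it is parameter-free: only populations and trip counts enter the prediction.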
§.§ Digital tools: crowdsourced data collection and human mobility

Concerns within the human mobility research community are not limited to data accessibility and scientific reproducibility. There is also an intricate ethical debate that involves informed consent by the owners of the mobile phones. The debate indeed puts at the center of the discussion whether it is feasible to use highly sensitive data as a public good <cit.>. There are initiatives that have built the whole research around a digital tool, avoiding the researchers' intermediation with telecommunication companies <cit.>. This approach is deeply linked with citizen science practices and can overcome privacy and ethical problems, as participants are clearly informed and specifically consent to their data being used.

§.§.§ Building and updating an App

Under this more open crowdsourcing basis, some of us and other collaborators started new citizen science research in human mobility by building from scratch an App called BeePath (see a screenshot in Figure <ref>). The App worked on iOS and Android mobile phones and the first version was used in 2012 and left open on GitHub <cit.>. We were able to collect high-resolution GPS data every second and send it to our university servers. The data was also shown in real time in an anonymized manner on a screen placed in a hotspot within the Barcelona Science Festival (Festa de la Ciència, in Catalan) located at Parc de la Ciutadella (4 ha, Barcelona). Visitors were recruited as participants of the mobility experiment. This action was part of the efforts to promote citizen science within the frame of the Barcelona Citizen Science Office <cit.> (an initiative led by the Barcelona municipality).

We updated and modified the BeePath App several times since then. Several other citizen science investigations we developed were valuable to participants themselves but less relevant for academic audiences and scientific journals. In 2017, we finally decided to run a large-scale pedestrian mobility experiment at a broader level, in urban contexts around 10 schools in the Barcelona metropolitan area (see Subsection <ref> below for further details). The data collected is carefully described and fully accessible in Ref. <cit.>. We considered pedestrian mobility at a neighbourhood level, covering a typical distance range of hundreds of metres (≈ 700m) around the schools participating in the experiment. The typical journey duration was about 8 minutes <cit.>. This latter version allowed participants to take further ownership of the data collected. We built some basic protocols that must be followed by the participants so that GPS datasets are anonymized from the very start. The data from this experiment was sent to our server and the participants could consult a brief report on their own data and download the data as a .csv file. The data collected however carried some additional potential risks of identifying participants as natural persons by inferring their home addresses. The risk thus needed to be handled with extra work by professional scientists, using geo-masking techniques (k-anonymity). Privacy protection was also the reason why we did not collect gender identity or any other socio-demographic trait. All these measures can become a limitation on the scientific outcomes expected from participatory mobility experiments.

§.§.§ Reusing existing Apps

The constant changes in the technical requirements of mobile phone Apps oblige developers to release periodic upgrades.
Unfortunately, these improvements add new challenges to citizen science research led by a university and/or any organisation with limited resources. Besides, funding schemes by research agencies are most often not aligned with the stable funding necessary to maintain a long-term vision in citizen science projects, even if the amount of money is small. These funding schemes even sometimes determine that the expenses to keep an App alive might not be eligible, as these sorts of activities are still not seen as part of the daily research activity. A possible way out is to build a partnership with specific companies. They generally have the capacity and the flexibility to dedicate a stable taskforce to develop the new code and the related upgrades. This was for instance the case of some of the BeePath initiatives mentioned above.

Alternatively, it is also possible to take advantage of existing Apps in the market which have quite different purposes (e.g. trekking or other outdoor sports activities). We used this strategy to run a new mobility experiment with very few resources available and to set up the related logistics very quickly. Before running the experiment, we checked the GPS precision of the App. We also tested, through reverse engineering, the procedure of how data is collected and how stopping times were treated by one specific App (Wikiloc <cit.>). See a screenshot of the Wikiloc App in Figure <ref>. This approach minimized the time needed to set up the digital infrastructure and the related taskforce, but it required redefining the privacy and consent protocols with care to comply with data privacy regulations (GDPR, in the European case). For privacy reasons, we opted to offer the mobile devices to the participants, but this in turn limited the number of participants. The devices already had an account activated so that we could easily collect data after the experiment. These aspects might however limit the scope and goals of the experiment.

§.§ Participation: engaging people in pedestrian mobility experiments

Existing research papers in the literature without active and conscious participation show very large amounts of records. For instance, in Ref. <cit.>, 67.0 billion GPS records from 4.5 million unique smartphones (anonymized location data from the Cuebiq company, collected from applications with opted-in users) are used to explore how mobility patterns relate to economic segregation in large US cities. Similarly, Ref. <cit.> reports the GPS locations of 1.62 million users in 10 US metropolitan areas with the aim to analyse the impact of COVID-19 pandemic response measures on walking behaviour. Our own citizen science experiences <cit.> reported in Table <ref> lack the large amounts of GPS records owned by telecommunication or digital companies that other scientific publications use. However, citizen science mobility experiments bring the opportunity to narrow down the scope of the research and obtain only the data that is most capable of answering a predefined research question. We thus present below how to study purpose-based mobility and how to activate emergent uses of public space with a mobility experiment. Citizen science also offers the opportunity to work with specific communities so that specific socio-demographic groups can be identified beforehand.
This sort of information can be precious to better understand mobility patterns. The following subsections report our experience with purpose-based pedestrian mobility in urban contexts <cit.>, from which further inspiration can be taken about the kind of contributions that citizen science can bring into this field. Table <ref> shows basic information from each of the citizen science experiments developed in 4 different urban contexts.

§.§.§ Purpose-based pedestrian mobility

In Subsection <ref>, we already mentioned a very specific experimental setting within the Barcelona Science Festival involving its visitors (101 participants). The scientific results relate to the exploration of the different spots in a public park and thus in a very well defined area. Active and reactive mobility patterns were identified <cit.>. Figure <ref> shows a frame of the video explaining the experiment <cit.>. We complied with data privacy regulations and signed the related informed consent on paper. We were also able to talk and discuss with citizens interested in participating in the experiment. We wanted to raise public awareness about the fact that our mobile phones are constantly collecting location data.

Citizen science also brings the possibility to imagine repeated experiments with the same group. For health and well-being purposes, a group of about 10-15 older women used to come out together to walk for about an hour twice a week. The activity was part of a particular community center program (Centre Cívic Pere Quart, Les Corts, Barcelona). At least one member of the group activated the App on every journey. The journeys always started and ended at the two possible locations (depending on the day of the week). The women learned about new uses of their mobile phones. Also, the data collected served to identify the most pleasant locations for pedestrians (streets with greenery or tree shade, wider sidewalks...). The community center shared the routes taken by the group with the neighbourhood in the form of maps (see one sample in Figure <ref>). The joint effort appeared on local TV and two of the women were able to explain the main results of the research to the audience <cit.>. Different scientific analyses were also made by one last-year graduate student in Physics from Universitat de Barcelona as part of his Bachelor's Degree Final Project.

Another experiment using the BeePath App involved teachers and students from 10 schools in the Barcelona metropolitan area. The effort was indeed considered an innovative STEAM activity in formal education. School participants were actively engaged and contributed to the project in different stages. They decided the scientific question about mobility: “How do we arrive at and leave the school?”, with the aim to study and learn how mobility works in the schools' vicinity and what obstacles students face in their daily home-to-school (or school-to-home) journey (e.g. narrow sidewalks or absence of pedestrian crossings) <cit.>. In the different stages of the co-design process, participants developed and refined the experimental protocol for collecting data. They recorded their journey to/from school using a smartphone device with the BeePath App in a predetermined time window. The students also became testers of the technology developed, interpreted their own school group's mobility data based on their knowledge of the school and the neighbourhood, and produced visualisations of the trajectories.
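As a hedged illustration of the kind of processing behind such visualisations, the instantaneous speed along one recorded journey can be derived from consecutive GPS fixes. The sketch below assumes a .csv file with columns timestamp, lat and lon; the file name and column names are ours and need not match the exact format of the released data:

import numpy as np
import pandas as pd

EARTH_RADIUS_M = 6371000.0

def haversine(lat1, lon1, lat2, lon2):
    # great-circle distance in metres between two GPS fixes
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = np.sin((lat2 - lat1) / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_M * np.arcsin(np.sqrt(a))

traj = pd.read_csv("journey.csv", parse_dates=["timestamp"]).sort_values("timestamp")
dist = haversine(traj.lat.shift(), traj.lon.shift(), traj.lat, traj.lon)  # metres
dt = traj.timestamp.diff().dt.total_seconds()                             # seconds
traj["speed"] = dist / dt  # instantaneous speed in m/s (NaN for the first fix)
print(traj.speed.median())  # typical walking speeds are around 1.2-1.4 m/s

Summary statistics of this kind (medians, distributions of speeds, stopping times) are the raw material for the walkability analyses mentioned below.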
The students finally presented the results in a public event in front of other school students and municipality representatives. The research done by the students resulted in the delivery of a set of evidence-based recommendations to the municipality representatives, in order to reach a wider urban perspective and thus improve accessibility to schools <cit.>. A total of 427 Secondary school students (14-16 years old) and 31 teachers participated in the experiment. 262 out of the 427 students recorded the journey to (or from) school with their own smartphone. All of the details about the co-design and co-creation phases of the experimental protocol, the data gathering, the data processing and filtering, and the results obtained can be found in Ref. <cit.>. Figure <ref> shows the visualisation of the trajectories to arrive at or leave the Sant Gabriel de Viladecans school. In this case, 28 unique trajectories were gathered after filtering and processing the collected data. Different scientific analyses were also made by one last-year graduate student in Physics from Universitat de Barcelona as part of his Bachelor's Degree Final Project. Also, we will soon publish scientific results on walkability based on the different statistical analyses typically performed by physicists, using magnitudes such as the instantaneous velocity and many related statistical features.

§.§.§ Activating public spaces in a neighborhood through mobility

To close this Section <ref>, we provide specific details of our most recent citizen science urban mobility experiment, which does not focus on purpose-based mobility and sees pedestrian mobility and public participation as a way to imagine and discuss together possible transformations of the existing public space. The experiment was conducted in the “Primer de Maig” neighbourhood (3 ha, Granollers, Spain). The experiment was part of the research project “Civic Placemaking: Design, Public Space and Social Cohesion”. This project was led by architects and urbanists and some of us were invited to join the research. The project broadly sought to promote social cohesion and inter-cultural integration through ephemeral architecture projects that have an impact on public space and rely on the active engagement of citizens <cit.>.

We there implemented citizen science participatory strategies and the sophisticated methods for data analysis that a physicist working on complex social phenomena would apply (e.g., waiting time distributions, transect lengths and reorientation angles, or the identification of specific key locations through GPS data clustering). 72 people got involved in 19 groups. They became explorers of the neighbourhood, walking and seeking out specific places to perform a set of festive actions. The objective was to study the urban spaces of appropriation in an emergent manner, with a bottom-up approach.

The call for participation was launched through three local neighborhood associations. The engaged participants therefore had different socio-demographic profiles. Groups used tablet devices (one per group) with the Wikiloc App <cit.> previously installed and registered with Google accounts by the research staff. In this way, participants did not have to provide any personal information that could compromise their anonymity. In addition, running the experiment in groups and starting data gathering from the associations' public sites further guaranteed the anonymity of the participants (for example, the location of their homes cannot be inferred).
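For trips that do start or end at home, such as the home-to-school journeys above, geo-masking of the endpoints is the complementary protection. As a minimal sketch, and without assuming anything about the exact procedure applied to the released dataset, one common variant (random donut displacement, an ingredient of k-anonymity pipelines) can be written as:

import numpy as np

def donut_mask(lat, lon, r_min=50.0, r_max=200.0, rng=None):
    # displace a sensitive endpoint by a random distance between r_min and
    # r_max metres, in a uniformly random direction (donut geo-masking);
    # the radii here are illustrative choices, not the values actually used
    rng = rng or np.random.default_rng()
    r = rng.uniform(r_min, r_max)
    theta = rng.uniform(0.0, 2.0 * np.pi)
    dlat = r * np.cos(theta) / 111320.0                           # metres to degrees
    dlon = r * np.sin(theta) / (111320.0 * np.cos(np.radians(lat)))
    return lat + dlat, lon + dlon

# mask only the first and last fixes of a trajectory before any release
print(donut_mask(41.3851, 2.1734))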
Data collection was divided into 2 consecutive days. 4 groups of teachers did the experiment in the morning and 7 groups of students (Primary school) in the afternoon. On the following day, the rest of the groups (8) participated in the experiment. The duration of the experiment and data collection was 1 hour and 30 minutes, in which participants explored the neighbourhood and completed 6 festive “missions” (e.g., choosing a place to open a drink and toast as a group, or finding a place to play a song on the speaker and dance).

The collected high-resolution pedestrian mobility data helped us to capture movement flows within the neighbourhood and to identify those sites chosen by participants to perform the actions (and their duration). The chosen actions had urban relevance. The data is planned to be released during 2024 jointly with a scientific publication. The municipality is currently using the results as an additional source of information to guide the urban transformation of this rather small but complex neighborhood <cit.>.

§ MENTAL HEALTH

As already mentioned in the introduction, Jusup and coauthors <cit.> define social physics “as a collection of active research topics aiming to resolve societal problems to which scientists with formal training in physics have contributed and continue to contribute substantially”. Societal problems are key drivers in funding schemes from research agencies. Societal problems are intricate, wicked and intrinsically involve uncertainty, complexity and divergence of perspectives <cit.>. Societal problems require the joint effort of a wide range of actors and stakeholders <cit.>. A recent report <cit.> by the European Research Executive Agency has proposed a mission-oriented research framework where “bold missions can provide new syntheses that are today impossible and thus will hopefully achieve the breakthroughs that are urgently needed to solve some of the most pressing issues facing our citizens.” The same report states that “citizens can possibly be mobilised to become active participants in missions, for example by cleaning plastics from beaches or by providing real-time monitoring data as enabling technologies develop and become more universally present in society”. Citizen science and social physics may therefore together meaningfully contribute to a societal issue and initiate research differently.

We here take one specific societal problem: mental health and the related care provision. The World Health Organisation (WHO) understands mental health as a “state of well-being in which the individual realizes his or her own abilities, can cope with the normal stresses of life, can work productively and fruitfully, and is able to make a contribution to his or her community”. WHO advocates for urgent change in mental health care, shifting from a biomedical approach to a recovery model based on principles that include self-determination, resources beyond professional care, and a community approach. WHO also states that “recovery-oriented care is not about treatment of symptoms but about empowering people to have control of their own lives” and that “we must intensify our collective actions to reform mental health systems towards comprehensive community-based networks of support” <cit.>.

Below, we will share two experiences where the participation of people with lived experience in mental health is enhanced and in which the partnership with a civil society organisation becomes totally relevant. These two experiences are summarized in Table <ref>.
The next subsections will first explain the related digital tools to collect data and the methodologies involved, some of them straightforwardly related to social physics. We will later emphasize those aspects related to the participatory dynamics and the specificities of the research topic.

§.§ Digital tools: crowdsourced data collection on mental health and human behaviour

The mental health research reported here looks for behavioural traits, organising a set of experiments to collect data related to these traits. The participants contribute using digital devices through which they can express themselves. Their interaction with the digital devices may only take a few minutes, but it is still possible to get a benefit as a participant. The interactions trigger self-reflection about one's own actions during the experiment. Specific personalized reports or updated information about the research progress were also offered. It is however also possible to build mobile-based digital tools that allow for brief but repeated interactions during a longer period of time (several months), as we will also describe below.

§.§.§ Social dilemmas in digital platforms

Some of us have recently developed Games for Mental Health (Jocs per la Salut Mental, in Catalan) <cit.>. The project (2015-2018) was a joint collaboration with the civil society organisation called Catalonia Mental Health Federation (Federació Salut Mental Catalunya, in Catalan). The Federation is currently composed of 79 associations of families and users of mental health care services.

As a point of departure, Games for Mental Health took social dilemmas and games which have been extensively explored by the social physics community <cit.>. The broad aim was to better understand, in a stylized manner, social interactions and human cooperation within the mental health ecosystem formed by persons with a mental health condition and their families, jointly with social and health-care professionals. Behavioural experiments have traditionally been conducted in laboratories generally placed in universities and research centers <cit.>. Individuals usually do not know who they are paired with. They take decisions through a computer, in front of a screen that provides the instructions to perform the experiment. Also, online experiments have become increasingly popular during the last years, particularly through Amazon Mechanical Turk. Online experiments allow researchers to recruit very large samples (thousands of subjects) worldwide, and several authors claim that online experiments show results similar to physical laboratory experiments <cit.>.

However, neither the physical laboratories nor the online experiments seemed to us the best way to involve the mental health care provision community. We opted to run in-the-field (or in-the-wild) experiments <cit.>. We relied on our own previous experiences in running several public experiments <cit.> on human behaviour <cit.>. This option required a flexible digital platform to run the experiments in more natural conditions, where the social interactions of the mental health ecosystem generally take place, as these are the spaces of the mental health care community. Non-permanent and pop-up infrastructure <cit.> with electronic tablets, a laptop that acted as a local server and a router to set up a local wi-fi network allowed us to run behavioural experiments easily and in a flexible manner.
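As an example of the game logic such a pop-up platform has to implement, a one-shot Prisoner's Dilemma round between two anonymously paired tablets reduces to a payoff lookup. The payoff values below are illustrative and are not the ones used in the actual experiments:

# illustrative payoffs satisfying T > R > P > S and 2R > T + S
PAYOFFS = {("C", "C"): (7, 7),    # mutual cooperation: (R, R)
           ("C", "D"): (0, 10),   # sucker's payoff vs temptation: (S, T)
           ("D", "C"): (10, 0),
           ("D", "D"): (4, 4)}    # mutual defection: (P, P)

def play_round(action_a, action_b):
    # returns the payoff pair for one anonymous pairing of participants
    return PAYOFFS[(action_a, action_b)]

print(play_round("C", "D"))  # -> (0, 10)

In an in-the-field setting, such a lookup would run on the local server mentioned above, logging each decision under a pseudonymous participant identifier.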
Figure <ref> illustrates these settings with one participant holding an electronic tablet during an experiment on the World Mental Health Day 2016. A specific platform was developed under the name of Citizen Social Lab <cit.>. The source code developed by one of our collaborators is available on GitHub <cit.>.

§.§.§ A chatbot to maintain a long-term interaction

An alternative approach to the event-oriented platforms described in the previous section is to build a chatbot within a popular messaging App. The main motivation is to set up a long-term interaction with the participant. The periodic interactions of the participants from within their different private spaces give the researchers access to a much more natural context <cit.>. We have recently developed the chatbot CoActuem per la Salut Mental <cit.> (CoAct for Mental Health, in English) in Telegram. The chatbot is part of a subproject of the Horizon 2020 project titled CoAct <cit.> (2020-2022). The Federation was also part of the consortium.

The chatbot <cit.> meets high standards of data privacy and is therefore especially apt for exploring any stigmatized social topic, not only mental health. The automated data collection results in well-structured digital data and can be adapted to research questions and scientific methods, especially in the field of social physics, comprising network theory, game theory, epidemics or opinion dynamics <cit.>.

Anyone, with or without a mental health problem, can subscribe to the CoActuem Telegram chatbot. The chatbot automatically sends messages and particularly questions to mutually unconnected participants. The participants receive the contents with a flexible rhythm, e.g. daily, and can decide on the timing of their answers themselves. Figure <ref> shows the chatbot and the messages that a participant receives on their own mobile device.

The first dialogue that a subscribed participant receives welcomes them and asks for informed consent. The second dialogue serves as capacity building and gives examples of what later will be the two types of dialogues the participants are asked to answer. The participant is also asked to complete an extensive socio-demographic survey, comprising 32 questions.

After these interactions, the participant periodically receives a dialogue which includes a micro-story experienced and put into words by persons with lived experience in mental health (see the co-creative sessions reported below in Section <ref>). One type of micro-story, “share experiences”, asks the participants whether they lived the shared experience, too, and whether someone in their surroundings lived it. The other type, “find solutions”, asks the participants to decide between two different reactions to the presented micro-story, also written down by persons with lived experience in mental health. All in all, participants can react to up to 222 micro-stories and receive a total of more than 300 dialogues. The source code is available on GitHub <cit.> and is also documented in Zenodo <cit.>. The interaction via buttons facilitates the later analysis considerably. All sent contents and data collected from the participants are stored in a database in a way that is easily accessible during its runtime. After having collected the participants' answers, and for scientific research, the unstructured data is converted into several tables that grant quick access to the data analysts. One possible analysis is to build a network of micro-stories of type “share experiences”.
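As the next paragraph details, a link can join two participants when they react in the same way to the same micro-story. A minimal sketch of this construction, assuming a hypothetical answers table with columns participant, story and answer (the real database schema may differ), could be:

import pandas as pd
import networkx as nx
from itertools import combinations

answers = pd.DataFrame({
    "participant": ["p1", "p1", "p2", "p2", "p3"],
    "story":       ["s1", "s2", "s1", "s2", "s1"],
    "answer":      ["yes", "no", "yes", "no", "no"],
})

G = nx.Graph()
G.add_nodes_from(answers.participant.unique())
for story, group in answers.groupby("story"):
    for (_, a), (_, b) in combinations(group.iterrows(), 2):
        if a.answer == b.answer:  # same reaction to the same micro-story
            old = G.get_edge_data(a.participant, b.participant, default={"weight": 0})
            G.add_edge(a.participant, b.participant, weight=old["weight"] + 1)

print(list(G.edges(data=True)))  # p1 and p2 end up linked with weight 2

Keeping one such layer per answer type would give a multiplex structure of the kind shown in Figure <ref>.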
A link can be built based on whether a pair of participants answer different micro-stories in the same manner. It is then possible to build a network. The nodes can carry information from the socio-demographic survey (e.g. self-reported gender identity). We exemplify these possibilities in Figure <ref>, where we select a group of participants and build a multiplex network with the two answers from each participant to a set of micro-stories of type “share experiences”.

§.§ Participation: lived experiences in mental health

WHO is emphasizing that to transform mental health care provision it is necessary to enhance the participation of people with lived experience in mental health <cit.>. There are also several other official recommendations to ensure participation of persons with mental health problems and their families, at all levels, including research, design, and implementation of services and programmes <cit.>. A citizen science project in this context can therefore provide new tools and strategies to valorise, within scientific research, the knowledge held by people with lived experience. This expertise is mostly held at an individual level and is barely socialized due to social stigma or the limited number of spaces available to put knowledge in common. The scientific research in this community-based frame is still limited and there is a lack of data to provide evidence on the benefits and challenges behind community-based mental health care provision <cit.>.

§.§.§ Behavioural traits within the mental health care community

With the support of the Federation, a working group was created to co-design the experiments. The working group was formed by a range of 20 people with diverse experiences and expertise: people with mental health conditions, non-professional caregivers, relatives, social workers, mental health nurses, psychologists, and psychiatrists, as well as experts and board members of the Federation. During a set of workshops, we together decided which were the most relevant behavioural traits (cooperation, trust, reciprocity and sense of collectiveness) and the most interesting games (Prisoner's Dilemma, Trust Game and Collective Risk Dilemma). The working group also framed the experimental settings and the related methods and protocols, so that the experiments could also be seen as an experiential activity that enables participants to self-reflect about their position within the mental health care community and in the mental health ecosystem.

As carefully described in Ref. <cit.>, we first ran the Games for Mental Health (120 participants) as one of the activities in a mass event at a regional level during the week of the World Mental Health Day. Participants belonged to the mental health associative movement, including people with mental health conditions, their families, and social and health-care professionals from the sector. We also embedded the experiment in three social spaces. In these settings, 150 more people played the games, thus reaching a total of 270 participants.
This number of participants is below the number of experimental subjects gathered in Mechanical Turk behavioural experiments, but it is comparable to the numbers gathered in physical laboratories that have studied mental conditions in clinical settings <cit.>.

The results obtained from the public experiments, with the active and conscious participation of people with lived experience in mental health, non-professional caregivers (e.g., family members) and professional caregivers (e.g., social workers), were published in a scientific journal <cit.> and the experimental data was also released <cit.>. Results showed that caregivers are the strong ties in the community, as they showed high levels of cooperation and optimism. Participants with lived experience were also shown to play a leading role in the ecosystem because they put larger efforts towards reaching a collective goal in a specific game. The study also emphasized the need to take a complex systems science perspective when looking at the community involved in mental health care provision. A press conference was also jointly organised by the Universitat de Barcelona and the Federation. An easy-to-read document with policy recommendations was also released jointly with the press release of the scientific paper <cit.>.

§.§.§ The social support networks in mental health

Social physics and complex systems science can also find the so-called social support networks in mental health of interest. These networks are a broad construct of social resources that an individual perceives <cit.>. They also refer to mutual assistance, guidance, and validation about life experiences and decisions <cit.>. However, there is not much data to further understand how social support networks work in the context of mental health. This lack of empirical data is hindering the transformation of mental health care provision towards enhanced community-based health services.

With the high ambition of maximizing the collaboration with citizens, a group of 32 co-researchers (either persons with mental health issues or their family members) was gathered in the earliest stage of the project. They were part of the research team as they were considered competent experts in the field, based on their daily lived experiences. This hand-in-hand contribution was parallel to another effort in creating a community that we gave the name of knowledge coalition. It was formed by 65 institutional representatives from public administrations, civil society organisations, educational organisations, and academia working on mental health. The first step was to frame the research together. Collaborative documents and definitions were also prepared, to establish a common definition of social support network and a shared basis on the research approach being taken. Then, co-researchers were invited to write personal stories about their lived experiences of social support, accompanied by a professional writer and a professional graphic artist who illustrated the stories. The process of writing micro-stories was not a straightforward task and a Research Diary was produced to give support to co-researchers. The Research Diary <cit.> was meant to be used as physical supporting material after, during and between the sessions to favour individual and self-reflective processes. The resulting micro-stories were then shared with anyone subscribed to the CoActuem per la Salut Mental chatbot, as described in Section <ref>.
In this way, we wanted to explore whether participants in the chatbot and the people around them have lived similar experiences. We jointly hypothesized that this would serve as a first proxy for exploring everyone's roles and the experiences that can help to better identify key aspects of social support networks (see also Figure <ref>). This is still ongoing work with the co-researchers and the knowledge coalition. However, it has already served to support, with evidence, a set of 14 policy recommendations delivered to local public authorities (the Barcelona City Council and the commissioner for the National Mental Health Plan at a regional level). The document, in the form of a policy brief, was discussed during an assembly (November 2023) with knowledge coalition members and co-researchers <cit.>. The event was also open to anyone who had participated in the chatbot or who was simply interested in mental health. Both the co-researchers and the knowledge coalition have been involved over a fairly long period (2-3 years) and in a wide variety of tasks within a research effort that has not yet fully concluded. Figure <ref> describes the co-researchers' contributions so far in the form of a journey. The same dynamics can be applied to any other citizen science project facing a societal problem where a group of co-researchers can contribute their lived experiences. We are still in the process of writing several related scientific publications, some of which will be co-authored by co-researchers.
§ DISCUSSION AND CONCLUSION
We have aimed to call attention to some citizen science practices and to encourage the social physics community to adopt and expand them. We believe that citizen science offers ways to circumvent current obstacles to the further advancement of social physics and, by extension, the broader field of computational social science <cit.>. Experts have observed inadequate data sharing paradigms for consolidating scientific outcomes and the absence of clear mechanisms for performing ethical research <cit.>. Running ethical research is not only a matter of privacy issues and regulation compliance. It also refers to a fuller consideration of socio-economic inequalities and vulnerabilities as part of the research agenda, to the inclusion in research activities of concerned citizen collectives and eventually those in a vulnerable position, and to increasing citizens' sense of ownership of their own digital data <cit.>. Along these lines, citizen science practices can become a very relevant approach. In citizen science, participants take an active role in defining the research, the research question, the interpretation of results, and the transformation of scientific results into valuable knowledge <cit.>. When dealing with social phenomena, both the motivation and the value of the co-produced knowledge are deeply embedded in participants' everyday lives, and the results can in turn influence policies and actions to promote social change <cit.>. At a broader level, the benefits cover a wide list of aspects that surpass scientific interest, including innovative STEM or STEAM education <cit.>, the enhancement of democratic values <cit.>, the promotion of social inclusion <cit.>, and evidence-based policy making <cit.>. Furthermore, citizen science practices can be key in problem identification and agenda setting in scientific research related to societal problems <cit.>, or become a key element in sustainability transitions <cit.>.
Moreover, both globally and locally, citizen science projects can bring to the table non-traditional data sources that contribute to the United Nations Sustainable Development Goals <cit.>. We have focused here on our own experience in two particular topics that can be linked to the approaches and methods that social physics typically uses <cit.>. The lessons from the experiences presented in this paper are many. We would like to stress the importance of building new digital tools that are versatile and adaptive to participants' needs, concerns and constraints. This generally leads to the use of electronic tablets and mobile devices, which are more accessible and closer to participants' daily-life activities. Sometimes, however, it is more effective to customise existing digital tools rather than build new ones. Customisation may involve new features and uses, but most importantly it allows in-the-field human behaviour experiments to run under more natural conditions. These efforts allow data to be collected without the intermediation of tech companies, or yield data much better aligned with concerned groups of citizens. In both cases, such data can complement existing data, narrow down the research question, and more effectively address very specific issues. Participatory dynamics also need to be carefully organised. One can take advantage of windows of opportunity at massive events, such as a science festival or the public events during the World Mental Health Day. It is, however, also important to approach contexts much closer to daily social interactions, whether for a few minutes or for a much longer period of time. In any case, it is particularly relevant to establish a long-lasting relationship with specific groups that have a clear motivation (e.g., young students or a group of older women). Building participatory research with a civil society organisation can also make the whole effort much more robust, as the research can be better framed. The knowledge co-produced then has a greater chance of becoming actionable in terms of policies and collective actions. These collaborations are valuable for constructing communities around the research (such as the knowledge coalition described in CoAct for Mental Health), but it is even more important to incorporate persons with lived experience as co-researchers, as they already hold invaluable knowledge. Further reflections on a citizen science for social physics can benefit greatly from the ongoing discussion around so-called citizen social science or social citizen science <cit.>. Citizen social science enhances the social dimension of citizen science and can be understood as participatory research co-designed and directly driven by citizen groups sharing a social concern <cit.>. Several other topics that social physics investigates can also be part of these ongoing efforts. Many techniques from social physics can be further developed, or even reformulated, when considering citizen science data. For instance, GPS-collected data can help to build completely new stochastic models of special interest for human micro-mobility <cit.>. Social interactions can also be characterised through the definition of specific traits that quantify ties among individuals within the mental health care ecosystem <cit.>. The sample of participants in citizen science projects might be more limited (hundreds of participants) compared to other efforts in social physics, complex systems and computational social science.
However, the smaller size of the datasets is compensated by the fact that citizen science can gather richer, or at least more meaningful, data. The collected data can be more closely oriented to specific research questions. In the case of pedestrian mobility, the location data come from specific socio-demographic profiles and belong to already identified purpose-based mobility, or narrow the analysis down to a concrete neighbourhood. In the cases related to mental health care, it is possible to run behavioural experiments to characterise social interactions in terms of traits such as cooperation or trust, and to reinterpret these traits to better understand the mental health ecosystem. Alternatively, it is possible to map out and better understand key elements in social support networks, based on lived experiences narrated in the form of micro-stories, with complex networks, clustering analysis, and machine learning algorithms. Overall, the adoption of a citizen science for social physics increases research legitimacy and the potential to transform scientific knowledge into specific actions and policies. Crowd-sourced citizen science practices around complex social phenomena extend the research effort to the general public. A deep involvement of co-researchers and knowledge coalition members in scientific research can promote an evidence-based culture beyond the academic context and can open new deliberative spaces to a wide variety of groups and organisations in our societies. This joint effort has the potential to nurture richer public debates around many crucial societal challenges.
Acknowledgments
We acknowledge the participation of all volunteers involved and all collaborators and former members of the group OpenSystems. We particularly thank the Consorci d’Educació de Barcelona and the Barcelona City Council through its Citizen Science Office and Salut Mental Catalunya for their commitment to the projects reported and their support to citizen science practices. This work was partially supported by Ministerio de Ciencia e Innovación (MCIN, Spain), Agencia Estatal de Investigación (AEI) AEI/10.13039/501100011033 and Fondo Europeo de Desarrollo Regional (FEDER) [grant number PID2019-106811GB-C33, JP, FL, IB and FP]; by the ERA-Net Urban Transformation Capacities (ENUTC) program [OPUSH, contract number 101003758, JP, FL, IB and FP] and by MCIN/AEI/10.13039/501100011033 and European Union NextGenerationEU/PRTR [grant number PCI2022-132996, JP, FL, IB and FP]; by Horizon 2020 program [COACT, contract number 873048, JP, FL and IB]; by Horizon Europe WIDERA program [SENSE., contract number 101058507, JP]; by Generalitat de Catalunya (Spain) through Complexity Lab Barcelona [grant numbers 2017 SGR 608 and 2021 SGR 00856; JP, FL, IB and FP].
§ STATEMENTS AND DECLARATIONS
Competing Interests The authors declare no competing interests.
Jusup2022 Jusup, M., Holme, P., Kanazawa, K., Takayasu, M., Romić, I., Wang, Z., ... & Perc, M. (2022). Social physics. Physics Reports, 948, 1–148. <https://doi.org/10.1016/j.physrep.2021.10.005>Barbosa2018 Barbosa, H., Barthelemy, M., Ghoshal, G., James, C. R., Lenormand, M., Louail, T., ... & Tomasini, M. (2018). Human mobility: Models and applications. Physics Reports, 734, 1-74. <https://doi.org/10.1016/j.physrep.2018.01.001>Barthelemy2019 Barthelemy, M. (2019). The statistical physics of cities. Nature Reviews Physics, 1(6), 406–415. <https://doi.org/10.1038/s42254-019-0054-2>Castellano2009 Castellano, C., Fortunato, S., & Loreto, V. (2009).
Statistical physics of social dynamics. Reviews of Modern Physics, 81(2), 591. <https://doi.org/10.1103/RevModPhys.81.591>Perc2017 Perc, M., Jordan, J. J., Rand, D. G., Wang, Z., Boccaletti, S., & Szolnoki, A. (2017). Statistical physics of human cooperation. Physics Reports, 687, 1–51. <https://doi.org/10.1016/j.physrep.2017.05.004>Sanchez2018 Sánchez, A. (2018). Physics of human cooperation: experimental evidence and theoretical models. Journal of Statistical Mechanics: Theory and Experiment, 2018(2), 024001. <https://doi.org/10.1088/1742-5468/aaa388>Wang2016 Wang, Z., Bauch, C. T., Bhattacharyya, S., d'Onofrio, A., Manfredi, P., Perc, M., ... & Zhao, D. (2016). Statistical physics of vaccination. Physics Reports, 664, 1–113. <https://doi.org/10.1016/j.physrep.2016.10.006>Ball2012 Ball, P. Why Society is a Complex Matter: Meeting twenty-first century challenges with a new kind of science (Springer, Berlin, 2012). <https://doi.org/10.1007/978-3-642-29000-8>Stewart1947 Stewart, J. Q. (1947). Suggested Principles of Social Physics. Science, 106(2748), 179–180. <https://doi.org/10.1126/science.106.2748.179>Stewart1950 Stewart, J. Q. (1950). The development of social physics. American Journal of Physics, 18(5), 239–253. <https://doi.org/10.1119/1.1932559>Perc2019 Perc, M. The social physics collective. Sci Rep 9, 16549 (2019). <https://doi.org/10.1038/s41598-019-53300-4>Irwin2018 Irwin, A. N. PhDs needed: how citizen science is transforming research. Nature 562, 480–483 (2018). <https://doi.org/10.1038/d41586-018-07106-5>Vohland2021 Vohland, K. et al. (2021). Editorial: The Science of Citizen Science Evolves. In: Vohland, K., et al. The Science of Citizen Science. Springer, Cham. <https://doi.org/10.1007/978-3-030-58278-4_1>Cooper2014 Cooper, C. B., Shirk, J., & Zuckerberg, B. (2014). The invisible prevalence of citizen science in global research: migratory birds and climate change. PloS one, 9(9), e106508. <https://doi.org/10.1371/journal.pone.0106508>Marshall2015 Marshall, P. J, Lintott, C. J., Fletcher, L. N. (2015). Ideas for Citizen Science in Astronomy. Annual Review of Astronomy and Astrophysics 53:1, 247–278. <https://doi.org/10.1146/annurev-astro-081913-035959>Calais2022 Calais, E., Symithe, S., Monfret, T., Delouis, B., Lomax, A., Courboulex, F., ... & Meng, L. (2022). Citizen seismology helps decipher the 2021 Haiti earthquake. Science, 376(6590), 283–287. <https://doi.org/10.1126/science.abn1045>Brown2016 Brown, A., Franken, P., Bonner, S., Dolezal, N., & Moross, J. (2016). Safecast: successful citizen-science for radiation measurement and communication after Fukushima. Journal of Radiological Protection, 36(2), S82. <https://doi.org/10.1088/0952-4746/36/2/S82>Sandri2023 Sandri, L., Ilyinskaya, E., Traver, A. G., Barsotti, S., Duncan, M., & Loughlin, S. (2023). The EUROVOLC citizen-science tool: collecting volcano observations from Europe. Europhysics News, 54(2), 24–27. <https://doi.org/10.1051/epn/2023205>Albert2021 Albert, A., Balázs, B., Butkevičienė, E., Mayer, K., & Perelló, J. (2021). Citizen social science: New and established approaches to participation in social research. Chapter 7. In: Vohland K. et al.(Eds). 2021. The Science of Citizen Science. Springer. <https://doi.org/10.1007/978-3-030-58278-4>. pp: 119–138Bonhoure2023 Bonhoure, I., Cigarini, A., Vicens, J. et al. Reformulating computational social science with citizen social science: the case of a community-based mental health care research. Humanit Soc Sci Commun 10, 81 (2023). 
<https://doi.org/10.1057/s41599-023-01577-2>OpenSystems <http://www.ub.edu/opensystems>Senabre2018 Senabre, E., Ferran Ferrer, N., & Perelló, J. (2018). Participatory design of citizen science experiments, Comunicar - Scientific Journal of Media Education 54, 29–38. <https://doi.org/10.3916/C54-2018-03>Gutierrez2014 Gutiérrez-Roig M, Gracia-Lázaro C, Perelló J, Moreno Y, Sánchez A (2014) Transition from reciprocal cooperation to persistent behaviour in social dilemmas at the end of adolescence. Nat Commun 5(1):1–7. <https://doi.org/10.1038/ncomms5362>Gutierrez2016 Gutiérrez-Roig, M. et al. Active and reactive behaviour in human mobility: the influence of attraction points on pedestrians. R. Soc. Open Sci. 3, 160177 (2016). <https://doi.org/10.1098/rsos.160177>Gutierrez2016b Gutiérrez-Roig M, Segura C, Duch J, Perelló J (2016) Market imitation and win-stay lose-shift strategies emerge as unintended patterns in market direction guesses. PLoS ONE 11(8):e0159078. <https://doi.org/10.1371/journal.pone.0159078>Sagarra2016 Sagarra O, Gutiérrez-Roig M, Bonhoure I, Perelló J (2016) Citizen science practices for computational social science research: the conceptualization of pop-up experiments. Front Phys 3:93. <https://doi.org/10.3389/fphy.2015.00093>Vicens2018b Vicens, J., Perelló, J., & Duch, J. (2018). Citizen Social Lab: A digital platform for human behavior experimentation within a citizen science framework. PloS one, 13(12), e0207219. <https://doi.org/10.1371/journal.pone.0207219>Poncela2016 Poncela-Casasnovas J, Gutiérrez-Roig M, Gracia-Lázaro C, Vicens J, Gómez-Gardeñes J, Perelló J, Moreno Y, Duch J, Sánchez A (2016) Humans display a reduced set of consistent behavioral phenotypes in dyadic games. Sci Adv 2(8):e1600451. <https://doi.org/10.1126/sciadv.1600451>Vicens2018 Vicens J, Bueno-Guerra N, Gutiérrez-Roig M, Gracia-Lázaro C, Gómez-Gardeñes J, Perelló J, Sánchez A, Moreno Y, Duch J (2018) Resource heterogeneity leads to unjust effort distribution in climate change mitigation. PLoS ONE 13(10):e0204369. <https://doi.org/10.1371/journal.pone.0204369>Cigarini2018 Cigarini, A., Vicens, J., Duch, J. et al. Quantitative account of social interactions in a mental health care ecosystem: cooperation, trust and collective action. Sci Rep 8, 3794 (2018). <https://doi.org/10.1038/s41598-018-21900-1>Cigarini2020 Cigarini, A., Vicens, J. & Perelló, J. Gender-based pairings influence cooperative expectations and behaviours. Sci Rep 10, 1041 (2020). <https://doi.org/10.1038/s41598-020-57749-6>Cigarini2021 Cigarini, A., Bonhoure, I., Vicens, J., & Perelló, J. (2021). Public libraries embrace citizen science: Strengths and challenges. Library & Information Science Research, 43(2), 101090. <https://doi.org/10.1016/j.lisr.2021.101090>Perello2021 Perelló, J., Cigarini, A., Vicens, J., Bonhoure, I., Rojas-Rueda, D., Nieuwenhuijsen, M. J., ... & Ripoll, A. (2021). Large-scale citizen science provides high-resolution nitrogen dioxide values and health impact while enhancing community knowledge and collective action. Science of the Total Environment, 789, 147750. <https://doi.org/10.1016/j.scitotenv.2021.147750>Perello2022 Perelló, J. (2022). New knowledge environments: On the possibility of a citizen social science. Metode Science Studies Journal, 12, 25-–31. <http://dx.doi.org/10.7203/metode.12.18136>Larroya2023 Larroya, F., Díaz, O., Sagarra, O. et al. Home-to-school pedestrian mobility GPS data from a citizen science experiment in the Barcelona area. Sci Data 10, 428 (2023). 
<https://doi.org/10.1038/s41597-023-02328-3>Gonzalez2008 González, M., Hidalgo, C. & Barabási, AL. Understanding individual human mobility patterns. Nature 453, 779–782 (2008). <https://doi.org/10.1038/nature06958>Chen2018 Chen, Y., & Huang, L. (2018). A scaling approach to evaluating the distance exponent of the urban gravity model. Chaos, Solitons & Fractals, 109, 303-313. <https://doi.org/10.1016/j.chaos.2018.02.037>Gallotti2016 Gallotti, R., Bazzani, A., Rambaldi, S. et al. A stochastic model of randomly accelerated walkers for human mobility. Nat Commun 7, 12600 (2016). <https://doi.org/10.1038/ncomms12600>Simini2012 Simini, F., González, M., Maritan, A. & Barabási, A.-L. (2012). A universal model for mobility and migration patterns. Nature 484, 96–100. <https://doi.org/10.1038/nature10856>Hunter2021 Hunter, R.F., Garcia, L., de Sa, T.H. et al. Effect of COVID-19 response policies on walking behavior in US cities. Nat Commun 12, 3652 (2021). <https://doi.org/10.1038/s41467-021-23937-9>Rhoads2021 Rhoads, D., Solé-Ribalta, A., González, M.C. et al. A sustainable strategy for Open Streets in (post)pandemic cities. Commun Phys 4, 183 (2021). <https://doi.org/10.1038/s42005-021-00688-z>Blondel2015 Blondel VD, Decuyper A, Krings G. 2015 A survey of results on mobile phone datasets analysis. EPJ Data Sci. 4, 10. <https://doi.org/10.1140/epjds/s13688-015-0046-0>Llorente2016 Llorente, A., Garcia-Herranz, M., Cebrian, M., & Moro, E. (2015). Social media fingerprints of unemployment. PloS one, 10(5), e0128692. <https://doi.org/10.1371/journal.pone.0128692>Schlapfer2021 Schläpfer, M., Dong, L., O’Keeffe, K. et al. The universal visitation law of human mobility. Nature 593, 522–527 (2021). <https://doi.org/10.1038/s41586-021-03480-9> Moro2021 Moro, E., Calacci, D., Dong, X. et al. Mobility patterns are associated with experienced income segregation in large US cities. Nat Commun 12, 4633 (2021).<https://doi.org/10.1038/s41467-021-24899-8>Lu2012 Lu, X., Bengtsson, L., & Holme, P. (2012). Predictability of population displacement after the 2010 Haiti earthquake. Proceedings of the National Academy of Sciences, 109(29), 11576-11581. <https://doi.org/10.1073/pnas.1203882109>Taylor2016 Taylor, L. (2016). The ethics of big data as a public good: which public? Whose good?. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083), 20160126. <https://doi.org/10.1098/rsta.2016.0126> Lazer2020 Lazer, D. M., Pentland, A., Watts, D. J., Aral, S., Athey, S., Contractor, N., ... & Wagner, C. (2020). Computational social science: Obstacles and opportunities. Science, 369(6507), 1060-1062. <https://doi.org/10.1126/science.aaz8170>Eagle2006 Eagle, N., & Pentland, A. (2006). Reality mining: sensing complex social systems. Personal and ubiquitous computing, 10, 255-268. <https://doi.org/10.1007/s00779-005-0046-3>gitbee-path <https://github.com/bee-path>Barcelona<https://www.barcelona.cat/barcelonaciencia/en/science-city/science-and-citizenship/citizen-science/citizen-science-office> wikiloc <https://www.wikiloc.com>videofesta <https://vimeo.com/99609723>beteve <https://beteve.cat/general/centre-civic-pere-quart-bee-path-ciencia-ciutadana>Diaz2023 Díaz, O., Sagarra, O., Colomer Simón, P., Ferré, S. & Perelló, J. Beepath a les escoles - Resultats. Zenodo (2023) <https://doi.org/10.5281/zenodo.7948586>Elisava2022 Paez, R., Valtchanova, M., Perelló, J., Larroya, F. & Sànchez, E. Civic Placemaking 3: Disseny, espai públic i cohesió social. Elisava (2022). 
<https://doi.org/10.46467/ElisavaResearch_CivicPlacemaking3> Head2022 Head, B.W. (2022). The Rise of ‘Wicked Problems’—Uncertainty, Complexity and Divergence. In: Wicked Problems in Public Policy. Palgrave Macmillan, Cham. <https://doi.org/10.1007/978-3-030-94580-0_2>Mazzucato European Commission, Directorate-General for Research and Innovation, Mazzucato, M., Mission-oriented research & innovation in the European Union: a problem-solving approach to fuel innovation-led growth, Publications Office, 2018, <https://data.europa.eu/doi/10.2777/360325>WHO2022 World Health Organization. (2022). World mental health report. Transforming mental health for all. <https://www.who.int/publications/i/item/9789240049338>Perello2012 Perelló, J., Murray-Rust, D., Nowak, A. et al. Linking science and arts: Intimate science, shared spaces and living experiments. Eur. Phys. J. Spec. Top. 214, 597–634 (2012). <https://doi.org/10.1140/epjst/e2012-01707-y>Citizen <https://github.com/CitizenSocialLab>CoActuem <https://coactuem.ub.edu>CoAct <https://coactproject.eu>CoActgithub <https://github.com/Chaotique/CoActuem_per_la_Salut_Mental_Chatbot>Peter2021 Peter, F., Bonhoure, I., & Perelló, J. (2021). CoActD3.2: Digital and non-digital tools for conducting research. Zenodo. <https://doi.org/10.5281/zenodo.6078916>UN2006 United Nations. (2006). Convention on the Rights of Persons with Disabilities and Optional Protocol. <https://www.un.org/disabilities/documents/convention/convoptprot-e.pdf>EC2016 European Commission. (2016). European Framework for Action on Mental Health and Wellbeing. EU Joint action on mental health and wellbeing. <https://ec.europa.eu/research/participants/data/ref/h2020/other/guides_for_applicants/h2020-SC1-BHC-22-2019-framework-for-action_en.pdf>Cigarini2018c Cigarini A, Vicens J, Duch J, Sánchez A, Perelló J (2018a) Dataset. Quantitative account of social interactions in a mental health care ecosystem: cooperation, trust and collective action. In Scientific Reports. Zenodo. <https://doi.org/10.5281/zenodo.1175627>Cigarini2018b Cigarini A et al. (2018) Jocs x La Salut Mental: Resultats de La Recerca Amb Salut Mental Catalunya. In Scientific Reports. Zenodo. <https://doi.org/10.5281/ZENODO.1186978>Turner2009 Turner, R., & Brown, R. (2009). Social Support and Mental Health. In T. Scheid & T. Brown (Eds.), A Handbook for the Study of Mental Health: Social Contexts, Theories, and Systems (pp. 200-212). Cambridge: Cambridge University Press. <https://doi.org/10.1017/CBO9780511984945.014>Zhou2014 Zhou, E.S. (2014). Social Support. In: Michalos, A.C. (eds) Encyclopedia of Quality of Life and Well-Being Research. Springer, Dordrecht. <https://doi.org/10.1007/978-94-007-0753-5_2789>Mitats2022 Mitats, B., Bonhoure, I., Perelló, J., & González Virós, I. (2022). Brief on policy recommendations to promote and strengthen mental health social support networks. Zenodo. <https://doi.org/10.5281/zenodo.7244146>Mitats2023 Mitats, B., Bonhoure, I., Perelló, J., Peter, F., & González Virós, I. (2023). Recomanacions polítiques: promoció i enfortiment de les xarxes de suport social en salut mental. Zenodo. <https://doi.org/10.5281/zenodo.7657603>Harrison2022 Harrison, V. (2022). The Co-Researchers Journey in CoAct for Mental Health. Zenodo. <https://doi.org/10.5281/zenodo.7729269>Verhulst2023 Verhulst, S.G. (2023). Computational Social Science for the Public Good: Towards a Taxonomy of Governance and Policy Challenges. In: Bertoni, E., Fontana, M., Gabrielli, L., Signorelli, S., Vespe, M.
(eds) Handbook of Computational Social Science for Policy. Springer, Cham. <https://doi.org/10.1007/978-3-031-16624-2_2>Leslie2023 Leslie, D. (2023). The Ethics of Computational Social Science. In: Bertoni, E., Fontana, M., Gabrielli, L., Signorelli, S., Vespe, M. (eds) Handbook of Computational Social Science for Policy. Springer, Cham. <https://doi.org/10.1007/978-3-031-16624-2_4>Nature2021 Editorial (2021). The powers and perils of using digital data to understand human behaviour. Nature 595, 149-150. <https://doi.org/10.1038/d41586-021-01736-y>Sadowski2021 Sadowski, J., Viljoen, S., & Whittaker, M. (2021). Everyone should decide how their digital data are used—Not just tech companies. Nature, 595(7866), 169-171. <https://doi.org/10.1038/d41586-021-01812-3>Schade2021 Schade, S., Pelacho, M., van Noordwijk, T., Vohland, K., Hecker, S., Manzoni, M. (2021). Citizen Science and Policy. In: Vohland K. et al. (Eds.). 2021. The Science of Citizen Science. Springer, Cham. <https://doi.org/10.1007/978-3-030-58278-4_18> pp: 351-371.Roche2020 Roche, J., Bell, L., Galvão, C., Golumbic, Y. N., Kloetzer, L., Knoben, N., ... & Winter, S. (2020). Citizen science, education, and learning: challenges and opportunities. Frontiers in Sociology, 5, 613814. <https://doi.org/10.3389/fsoc.2020.613814>Pearse2020 Pearse, H. (2020). Deliberation, citizen science and COVID-19. The Political Quarterly, 91(3), 571-577. <https://doi.org/10.1111/1467-923X.12869>Paleco2021 Paleco, C., García Peter, S., Salas Seoane, N., Kaufmann, J., Argyri, P. (2021). Inclusiveness and Diversity in Citizen Science. In: Vohland et al. The Science of Citizen Science. Springer, Cham. <https://doi.org/10.1007/978-3-030-58278-4_14>Senabre2021 Senabre Hidalgo, E., Perelló, J., Becker, F., Bonhoure, I., Legris, M., & Cigarini, A. (2021). Participation and co-creation in citizen science. In: Vohland K. et al. (Eds). 2021. The Science of Citizen Science. Springer. <https://doi.org/10.1007/978-3-030-58278-4> pp: 199-218.Criscuolo2022 Criscuolo, L., L’Astorina, A., van der Wal, R., & Gray, L. C. (2022). Recent contributions of Citizen Science on sustainability policies: a critical review. Current Opinion in Environmental Science & Health, 100423. <https://doi.org/10.1016/j.coesh.2022.100423>Sauermann2020 Sauermann, H., Vohland, K., Antoniou, V., Balázs, B., Göbel, C., Karatzas, K., ... & Winter, S. (2020). Citizen science and sustainability transitions. Research Policy, 49(5), 103978. <https://doi.org/10.1016/j.respol.2020.103978>Fritz2021 Fritz, S., See, L., Carlson, T. et al. Citizen science and the United Nations Sustainable Development Goals. Nat Sustain 2, 922–930 (2019). <https://doi.org/10.1038/s41893-019-0390-3>
{ "authors": [ "J. Perelló", "F. Larroya", "I. Bonhoure", "F. Peter" ], "categories": [ "physics.soc-ph", "cond-mat.stat-mech", "cs.HC" ], "primary_category": "physics.soc-ph", "published": "20231227133355", "title": "Citizen science for social physics: Digital tools and participation" }
FairCompass: Operationalising Fairness in Machine Learning
Jessica Liu, Huaming Chen (0000-0001-5678-472X), Member, IEEE, Jun Shen (0000-0002-9403-7140), Senior Member, IEEE, and Kim-Kwang Raymond Choo (0000-0001-9208-5336)
Jessica Liu and Huaming Chen are with the School of Electrical and Computer Engineering, University of Sydney, Australia (corresponding author e-mail: huaming.chen@sydney.edu.au). Jun Shen is with the University of Wollongong, Australia (e-mail: jshen@uow.edu.au). Kim-Kwang Raymond Choo is with The University of Texas at San Antonio, San Antonio, TX 78249, USA (e-mail: raymond.choo@fulbrightmail.org).
January 14, 2024
As artificial intelligence (AI) increasingly becomes an integral part of our societal and individual activities, there is a growing imperative to develop responsible AI solutions. Although a diverse assortment of machine learning fairness solutions has been proposed in the literature, there is reportedly a lack of practical implementation of these tools in real-world applications. Industry experts have engaged in thorough discussions on the challenges of operationalising fairness in the development of machine learning-empowered solutions, in which a shift toward human-centred approaches is advocated to mitigate the limitations of existing techniques. In this work, we propose a human-in-the-loop approach to fairness auditing, presenting a mixed visual analytics system (hereafter referred to as `FairCompass') that integrates both a subgroup discovery technique and a decision tree-based schema for end users. Moreover, we integrate an Exploration, Guidance and Informed Analysis loop to facilitate the use of the Knowledge Generation Model for Visual Analytics in FairCompass. We evaluate the effectiveness of FairCompass for fairness auditing in a real-world scenario, and the findings demonstrate the system's potential for real-world deployability. We anticipate this work will address current gaps in fairness research and facilitate the operationalisation of fairness in machine learning systems. To address the drawbacks of existing fairness solutions, FairCompass combines technical, non-technical and visual analytics approaches to fairness, which we demonstrate by benchmarking against FairVis and the Fairness Compass. The problem space of this paper assumes the scope of these tools, which is to assist in fairness auditing for machine learning classifiers.
Ultimately, we anticipate this work as a first step towards operationalising fairness, in the direction of establishing formal fairness processes within teams, organisations, and institutions, by better meeting the needs of practitioners tasked with the responsibility of fairness.
AI Fairness, Human-in-the-loop, Visual analytics
§ INTRODUCTION
The use of artificial intelligence in decision making has become increasingly popular across a range of industries, such as healthcare, commerce, marketing, and education <cit.>. As AI takes on the automation of crucial decision-making processes traditionally overseen by humans, we observe an increasing prevalence of unfair outcomes attributed to AI systems, coupled with a growing focus on fairness research and calls for responsible AI. Many case studies of unfair machine learning models underscore the fact that biases within the machine learning life cycle can adversely affect individuals, an overwhelming number of whom belong to marginalised groups <cit.>.
However, they either overemphasise technical solutions or present difficulties in operationalising fairness for practitioners. To address the drawbacks of existing fairness solutions and assist in operationalising fairness, we propose `FairCompass', which combines technical, non-technical and visual analytics solutions to fairness. We demonstrate this by combining a subgroup discovery technique with a decision tree-based schema for end users, in a solution driven by a human-in-the-loop approach. The problem space assumes the scope of the tools, which is to assist in fairness auditing for machine learning classifiers. To the best of our knowledge, this work is a first step towards operationalising fairness by better meeting the needs of practitioners tasked with the responsibility of fairness. The main contributions of this paper are:
* We thoroughly review existing AI fairness solutions, providing the context to investigate the common issues presented in the literature.
* We develop `FairCompass', a novel mixed visual analytics system with a subgroup discovery technique and a decision tree-based schema for fairness auditing.
* The Exploration, Guidance and Informed Analysis Loop is leveraged to facilitate the use of the Knowledge Generation Model for Visual Analytics in FairCompass.
* We evaluate FairCompass with a real-world scenario for fairness auditing to demonstrate the effectiveness of the system.
The replication package is released publicly[https://github.com/Huaming-Chen/FairCompass]. The rest of this paper starts with an overview of the state-of-the-art research in AI fairness in Section 2. Section 3 discusses the design challenges and our overall design process. Section 4 presents the design and implementation of FairCompass, together with our conceptual framework that supports fairness auditing. We evaluate FairCompass in Section 5 with a real-world case analysis. Section 6 covers the limitations and future work, and we conclude the paper in Section 7.
§ BACKGROUND
§.§ Unfairness in AI Systems
The neutrality fallacy is a concept introduced to describe the misconception that machine learning systems are impartial to human biases <cit.>. This fallacy can be found in many AI systems over the past few years. One example is COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), a machine learning software used by U.S. courts to measure the likelihood of recidivism of defendants. An investigation found that COMPAS was biased against African American offenders, disproportionately predicting them as having a higher risk of recidivism, with a false positive rate nearly double that of their Caucasian counterparts <cit.>. It was also found that the COMPAS model did not outperform non-expert human judgement in terms of accuracy or fairness <cit.>. Similarly, <cit.> suggested that the perceived fairness of realistic machine learning models is overestimated, as their study on COMPAS did not demonstrate that a machine learning model was significantly more accurate than human judgement. Online job ad recommendation is another popular class of machine learning-empowered systems. <cit.> presents an analysis of data from a field test of a STEM (Science, Technology, Engineering and Math) job ad. The ad was designed to promote job opportunities and training in STEM, with an explicit intent to deliver the ad in a fair, gender-neutral manner. However, the ad was shown empirically to over 20% more men than women.
The disparity was not due to the common perception that women are less likely to click on STEM job ads; rather, suggestive evidence pointed to women being more expensive to show ads to than men. These works collectively suggest a misplaced over-reliance on machine learning systems, even when the real-world problems in these scenarios have proved too large and complex to be fully automated without human intervention. These case studies point out the need for practitioners to attempt to understand why discriminatory outcomes are produced, rather than viewing machine learning models as a black box. Hence, fairness in responsible AI is a socio-technical issue that must be brought to the attention of the developers of machine learning systems, as well as the other actors in each stage of the machine learning lifecycle.
§.§ Fairness Definition
A popular definition of fairness is presented by Mehrabi et al. <cit.>, concerning `the absence of any prejudice or favouritism toward an individual or group based on their inherent or acquired characteristics'. Thus, most fairness literature approaches the problem of defining fairness with three primary ideas: individual, group and subgroup fairness.
Individual fairness requires a machine learning model to give similar predictive outcomes to similar individuals. Dwork et al. formulate the framework of fairness through awareness, which captures fairness through the principle of classifying similar individuals (with respect to certain attributes) similarly <cit.>. For instance, in a loan allocation scenario, individuals with similar repayment rates should receive similar loans. Joseph et al. develop an approach that distinguishes `high quality' candidates and promotes meritocracy, which is commonly used in decisions related to opportunity <cit.>.
Group fairness requires a machine learning model to treat different groups equally <cit.>. This definition of fairness is the most used in the development of technical fairness solutions and bias mitigation methods in the literature, as its metrics can be obtained without making any assumptions about the setting, which leads to immediately actionable algorithmic solutions <cit.>. In the context of group fairness, a population is partitioned into privileged and unprivileged groups based on sensitive attributes. In an unfair AI system, privileged groups are usually given more favourable outcomes, while underprivileged groups are often disadvantaged due to a range of pre-existing biases <cit.>. Group fairness attempts to correct these biases by treating all groups equally.
Subgroup fairness is a relatively new form of fairness that attempts to address the shortcomings of individual and group fairness by taking into consideration the intersectionality of bias. Problems with intersectional bias arise in real-world applications when populations are defined by multiple features. In the Gender Shades study <cit.>, Buolamwini & Gebru investigate a facial recognition software and evaluate the classification accuracy of subgroups based on sex and skin colour. When observing subgroups by sex and skin colour individually, they found that classifiers performed better on male faces than on female faces, with a difference of up to 20% in error rate, and better on lighter faces than on darker faces, with a difference of up to 19% in error rate. While these differences in performance were already significant, there were more drastic disparities between intersectional subgroups defined by both attributes, with darker-skinned females having an accuracy as low as 65%, and lighter-skinned males an accuracy of almost 100%.
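To make the group and subgroup notions concrete before surveying existing tools, the following is a minimal sketch (ours, not drawn from any of the cited toolkits; the column names and toy data are illustrative assumptions) that computes the favourable-outcome rate per group and per intersectional subgroup with pandas:

import pandas as pd

# Toy predictions; 1 denotes the favourable outcome. Column names and
# values are illustrative assumptions, not data from the cited studies.
df = pd.DataFrame({
    "sex":        ["male", "male", "male", "female", "female", "female"],
    "skin_tone":  ["light", "dark", "light", "light", "dark", "dark"],
    "prediction": [1, 0, 1, 1, 0, 0],
})

# Group fairness (demographic parity): compare favourable-outcome
# rates across a single sensitive attribute.
group_rates = df.groupby("sex")["prediction"].mean()

# Subgroup fairness: the same comparison over intersections, where
# larger disparities can hide (cf. the Gender Shades findings).
subgroup_rates = df.groupby(["sex", "skin_tone"])["prediction"].mean()

print(group_rates)
print(subgroup_rates)
print("intersectional parity gap:", subgroup_rates.max() - subgroup_rates.min())

Even this toy example exhibits the pattern discussed above: rates that look balanced across one attribute can diverge sharply once intersections are inspected.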
§.§ Existing Fairness Solutions
A large amount of fairness research has focused on tools to help mitigate data and algorithmic bias, and a variety of toolkits have been developed with the intention of helping developers and practitioners across a large range of projects and domains. Some tools, such as Google's What-If Tool <cit.> and FairSight <cit.>, provide features that facilitate a better understanding of possible biases and how they can impact the predictive output. Doing so lays the necessary groundwork for users to identify biases and address them. Most toolkits include bias detection and auditing features, such as Aequitas <cit.> and Google's What-If Tool <cit.>, while a few provide libraries of bias mitigation algorithms, e.g., Microsoft's Fairlearn <cit.>, AI Fairness 360 <cit.> and FairSight <cit.>. At the same time, non-technical solutions, such as Microsoft's AI fairness checklists <cit.>, have been proposed to guide practitioners in carrying out appropriate fairness practices throughout their projects. In recent years, many experts have taken a human-centred approach to fairness and bias mitigation by introducing toolkits in the form of visual analytics systems. The solutions covered in this section have been proposed by organisations as well as academia. Table <ref> summarises their features and fairness notions.
While diverse in application and techniques, these solutions are often criticised for various shortcomings, with common issues in the literature including:
1) Overemphasis on Technical Solutions. An overemphasis on statistical and algorithmic approaches to fairness can have a negative effect on fairness in AI systems. These solutions often fail to address the socio-technical aspect of AI fairness and the fact that social systems are complex, dynamic, and adaptive <cit.>. This may even result in technical bias <cit.>. Therefore, it is crucial to adopt frameworks that facilitate the proper application of these tools and focus on how bias manifests throughout the entire machine learning lifecycle <cit.>.
2) Subjectivity of Fairness. 23 types of biases and 10 different fairness metrics are defined in <cit.>, which concludes that the synthesis of a unified definition of fairness is a major challenge. Due to the vast range of use cases covered by proposed fairness definitions and metrics, it is inevitable that there are disparities between them that make them incompatible with each other. It is also evident that certain fairness definitions cannot coexist, and prioritising certain metrics over others can lead to misconceptions of fairness <cit.>.
3) Difficulties in Operationalising Fairness. Recent studies suggest that the tools often fail to address the needs of practitioners <cit.>, highlighting that this is an issue at the intersection of technical and design expertise.
4) Lack of Support for Practitioners. While AI fairness is a socio-technical problem, it often leaves practitioners feeling overwhelmed and unsupported.
Some feel unqualified to make decisions because they lack knowledge of fairness research <cit.>, and others fear missing potential biases but lack the know-how to assess their systems for unfairness <cit.>.
§ METHODOLOGY
This section presents the proposed approach of `FairCompass'. The overarching goal of the design is to take a first step in operationalising fairness in industry, promote the transition of fairness procedures into a formal part of the machine learning development workflow, and encourage organisations to prioritise fairness and offer better support for practitioners. The design process is informed by practitioner feedback gathered in prior work <cit.>, with an emphasis on supplementing statistical definitions with social practices <cit.>. Feedback from industry practitioners points to a lack of application of these tools, despite the large assortment of technical and non-technical solutions <cit.>. A viable response is to design a human-in-the-loop solution combining the strengths of automated or algorithmic methods with a non-technical solution that guides human decision-making. Although a tool bundle for AI fairness has been presented combining the Fairness Compass and a fairness library <cit.>, it is insufficient for operationalising fairness. An effective solution must provide more structure in the fairness application process so that it can be adopted by practitioners, teams, and organisations. Thus, we assert that an interactive visual analytics application can capture the human-in-the-loop approach to fairness while also providing support for operationalising fairness. This addresses the need for practitioners to gain a deeper understanding of their data by learning through interactions with visualisations, and allows organisations and teams to streamline the fairness process. Furthermore, it can enable organisations to establish formal fairness-assurance processes in the machine learning development life cycle. We therefore design the system as a mixed visual analytics system that allows users to view the data and apply fairness metrics, together with a decision tree-based schema that acts as a guide for selecting the most appropriate fairness metric. Moreover, the system supports subgroup discovery as its technical component, demonstrating the potential to be polished with visual analytics principles and models such as Sacha et al.'s Knowledge Generation Model for Visual Analytics <cit.>, as shown in Fig. <ref>. We refrain from using checklists, since they may be too tedious, overwhelming, and time-consuming for practitioner workflows, following Cramer et al.'s user study <cit.>. Thus, as the non-technical component of the system, we leverage the decision tree-based schema, as it provides a large array of fairness definitions in a clear visual representation. As the technical and non-technical components are designed independently, we apply a conceptual framework to mesh the two solutions together. To allow for seamless integration and to apply the Knowledge Generation Model for Visual Analytics, we design a new conceptual framework: the Exploration, Guidance, and Informed Analysis Loop. With these models and this framework, we anticipate that the proposed system properly addresses the design challenges, including intersectional fairness, incomplete visual representation and the need for a human-centred design.
In particular, we have presented the system without specific user expectations, designing it without assumptions about the user's expertise in machine learning fairness. This has largely reduced the occurrence of the `gulfs' from Norman's Seven Stages of Action cycle <cit.> that arise when humans interact with digital interfaces.
§ FAIRCOMPASS OVERVIEW
§.§ Implementation
In FairCompass, we have specified two primary views: the Subgroup Exploration Tab and the Fairness Compass Tab. The Feature Distribution View is designed for individual interaction while simultaneously showing either the Subgroup Exploration Tab or the Fairness Compass Tab. The state of generated groups is persistent across the two tabs, allowing users to switch between them to perform analysis on the same subgroups. The tab system allows any new information learned from the Fairness Compass Tab to be immediately explored or actioned through the Subgroup Exploration Tab. To better support practitioners, split panes with movable dividers are used so that users can manipulate the size of each component in the tabs. This allows users to customise their viewing experience and prevent GUI clutter. Tooltips on hover for each button and tab are added to give users guidance and explain the functionalities.
Feature Distribution View is implemented as a collapsible sidebar in FairCompass, as illustrated in Fig. <ref>. We explicitly design the view to be collapsible to prevent clutter: as it is the panel the user will spend the least time on, the option is given to completely collapse it and provide more space for the tab contents. The functionality of this component is viewing feature distributions and generating subgroups. Additionally, an active group saving functionality is implemented. To help users organise their thought processes, they can save a set of subgroups of interest for future revisiting. Upon clicking on a saved set of subgroups in the saved subgroups list, the set is re-added to the currently active groups in the Subgroup Exploration and Fairness Compass tabs. This takes the cognitive stress off practitioners with ad-hoc analysis needs when there are many branches of information to explore in gathering evidence for a hypothesis.
Subgroup Exploration Tab in Fig. <ref> includes the Subgroup Overview, the Suggested and Similar Subgroups View and the Detailed Comparison View.
Fairness Compass Tab consists of two panels, as shown in Fig. <ref>. The Decision Tree View on the left shows a Fairness Compass decision tree <cit.>. Missing explanations and formulas of the Fairness Compass are added using the information from <cit.>, with minor adjustments to fit the context of the FairCompass application. Upon clicking on a node in the decision tree, a description of that node is shown in the Fairness Description View in the right panel. If the selected node is a fairness metric, visualisations are shown for the active subgroups to assist users in assessing the current set of subgroups with the fairness metric. We have implemented bar plots and scatter plots, providing a more multifaceted presentation of the data. The visualisations take the binary subgroup approach from the Fairness Compass and convert it into intersectional comparisons between multiple subgroups.
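Both tabs ultimately rest on per-subgroup metrics of this kind. As a rough sketch of how such metrics could be computed (an illustrative approximation, not the actual FairCompass implementation; the column names are assumptions), consider:

import pandas as pd

def subgroup_metrics(df, features, y_true="label", y_pred="prediction"):
    """Per-subgroup size, accuracy, FPR and FNR for every subgroup
    induced by the selected features (column names are assumptions)."""
    rows = []
    for values, g in df.groupby(features):
        tp = ((g[y_pred] == 1) & (g[y_true] == 1)).sum()
        tn = ((g[y_pred] == 0) & (g[y_true] == 0)).sum()
        fp = ((g[y_pred] == 1) & (g[y_true] == 0)).sum()
        fn = ((g[y_pred] == 0) & (g[y_true] == 1)).sum()
        rows.append({
            "subgroup": values,
            "size": len(g),
            "accuracy": (tp + tn) / len(g),
            "fpr": fp / (fp + tn) if (fp + tn) else float("nan"),
            "fnr": fn / (fn + tp) if (fn + tp) else float("nan"),
        })
    return pd.DataFrame(rows)

# e.g. subgroup_metrics(adult_df, ["sex", "occupation"]) would yield the
# kind of table behind the Subgroup Overview and Detailed Comparison View.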
With certain fairness definitions, users can customise the visualisations to align with their problem and application context. Consider a scenario where a user is assessing the active subgroups with the Demographic Parity definition. Demographic Parity is achieved when there is an equal proportion of favourable outcomes across all subgroups. The visualisation component for this definition has a dropdown that allows users to select the class in their dataset that represents the favourable outcome. If the user's dataset is about providing opportunities, such as scholarship grant decisions for students, the user would choose the positive class as the favourable outcome. Conversely, if the user's dataset is about recidivism prediction, such as the COMPAS dataset, the user would select the negative class (no recidivism) as the favourable outcome.
§.§ Exploration, Guidance, and Informed Analysis Loop
We propose the Exploration, Guidance, and Informed Analysis conceptual framework to provide a smoother integration between FairCompass and the Knowledge Generation Model in Fig. <ref>. The design is made to support the implementation of this framework. While this section outlines how the framework supports the analysis process within the new design, the framework is intended to be generic across similar work and can be adjusted accordingly for other domains.
Exploration stage of the framework consists of the Feature Distribution View and the Subgroup Exploration Tab of the new design. This stage is used to explore a high-level overview of subgroups. The user builds a hypothesis in this stage and sets a general objective for the rest of the analytical process.
Guidance stage of the framework consists of the Feature Distribution View and the Fairness Compass Tab of the new design. This stage lets users interact with the Fairness Compass and the different definitions available. Moving through the decision tree presents opportunities for users to discover more about the data and the contextual factors surrounding it. Users can generate subgroups they are interested in, test out different fairness definitions, and gather insight through the visualisations provided. In this stage, the user gathers more information to be able to action their objectives from the Exploration stage and produces insights as they move through each node in the decision tree. This gives practitioners with limited fairness knowledge adequate guidance to navigate the task of selecting an appropriate fairness definition specific to the application context and the dataset. More "trustworthy" insights can be gathered from this process through the practitioner's questioning, understanding, and reasoning using the Fairness Compass.
Informed Analysis stage of the framework consists of both the Subgroup Exploration Tab and the Fairness Compass Tab. This stage occurs after a user has gained a good overall understanding of their dataset and enough fairness-related knowledge to analyse their hypothesis and findings from the Exploration stage. Through this, users will either be able to solidify their insights into new knowledge and begin a new iteration of the loop, or branch off into the investigation of related areas to support their central hypothesis.
§ EVALUATION ON AN INCOME PREDICTION SYSTEM
This section provides a use case and a walkthrough of how FairCompass can be used in fairness auditing. In this scenario, a machine learning practitioner is assigned by a financial institution to determine if an income prediction model is fair.
The institution is planning to use this model in the development of an application that determines the loan size it is willing to grant an individual. The model classifies instances as positive (1), where the individual makes less than or equal to $50,000 a year, or negative (0), where the individual makes more than $50,000 a year. In this work, we use an effective income prediction model, a simple two-layer neural network trained on the Adult Income dataset from the UCI Machine Learning Repository <cit.>. The practitioner wants to find out if the model perpetuates bias against disadvantaged individuals.
Iteration 1: Exploration She first looks at the feature distribution of the data and finds that the sample size of males is twice that of females. She decides to generate subgroups to explore based on gender. Upon generation of the subgroups, she sees that the female subgroup has higher accuracy, and the male subgroup has higher precision and recall. She wonders if there are more complex relationships present in the prediction results of these subgroups and would like to explore this further, but does not know where to begin. This is shown in Fig. <ref>.
Iteration 1: Guidance She clicks on the Fairness Compass Tab and navigates through the decision tree:
* Policy: The practitioner confirms with her team and the policymakers in her organisation that there are no anti-discrimination policies in place for the purpose of home loans based on an individual's income, and the organisation is not planning to implement affirmative action at this stage. However, there are laws that prohibit lending bias against individuals based on race, sex, marital status, and age. Although these policies do not apply to granting home loans on the basis of income, the practitioner thinks it is important to investigate the sensitive attributes in lending bias for the purpose of her task. She decides to explore the sex attribute first but notes down the others for further exploration. She continues down the No option.
* Equal base rates: From the high-level overview of the data that the practitioner has performed, she knows that 2/3 of all instances in the data belong to the male group. She questions whether this dataset is appropriate for the application context, due to the disparity in sample size. According to the explanation provided in the Fairness Description View, assuming equal base rates can benefit historically discriminated groups in decisions related to opportunity. She continues down the No, but should be option.
* Explaining variables: The practitioner looks through the Feature Distribution View again to see all attributes of the data. She finds that there are several explaining variables for the outcome in this dataset. Features such as occupation and working hours are factors that can explain disparities in salary. She continues down the Yes option, as presented in Fig. <ref>.
The decision tree leads her to the Conditional statistical parity definition. This definition is an extension of Demographic parity that takes into consideration a set of legitimate attributes (explaining variables) that affect income. The practitioner wants to ensure that there are fair outcomes between male and female individuals; applying this fairness definition means that the model should assign equal proportions of favourable outcomes to male and female candidates of the same working hours, occupation, etc. The practitioner decides to explore the relationship between sex and each explaining variable separately.
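A conditional statistical parity check of this kind could be sketched as follows. This is an illustrative approximation rather than the actual FairCompass code; the column names and the encoding of the favourable class follow the scenario above.

import pandas as pd

def conditional_parity_gaps(df, sensitive="sex", legitimate="occupation",
                            y_pred="prediction", favourable=0):
    """Gap in favourable-outcome rate between sensitive groups within
    each stratum of the legitimate (explaining) attribute. The negative
    class 0 (income > 50k) is the favourable outcome in this scenario."""
    rates = (df.assign(fav=df[y_pred] == favourable)
               .groupby([legitimate, sensitive])["fav"].mean()
               .unstack(sensitive))
    # Conditional statistical parity holds when every row's gap is ~0.
    return (rates.max(axis=1) - rates.min(axis=1)).sort_values(ascending=False)

# e.g. conditional_parity_gaps(adult_df) would rank occupations such as
# Exec-managerial by the sex gap in predicted >50k rates.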
<ref>, she first generates a new set of subgroups based on sex and occupation and clicks on the Conditional statistical parity node. Two dropdowns appear, the first asking her to select the favourable outcome, and the second asking her to select the sensitive attribute. She selects the negative class (> 50k) as the favourable outcome, since larger loans are granted to those of higher income in this context, and selects sex as the sensitive attribute. According to the bar chart, she finds that for all occupations, female candidates have a lower negative rate, meaning that for each occupation, the model predicted proportionally fewer female candidates to have incomes higher than 50k, in comparison to male candidates. Upon this discovery, the practitioner decides to take a closer look at the Male Exec-managerial subgroup and Female Exec-managerial subgroup, as these subgroups have a proportionally large sample size and a large discrepancy of 34% in negative rate according to the bar chart visualisation.

Iteration 1: Informed Analysis Informed by her new insight into the negative rate discrepancies, she returns to the Subgroup Exploration Tab and selects the false negative rate and false positive rate metrics for further investigation. She pins the Male Exec-managerial subgroup, and hovers over the Female Exec-managerial subgroup, to view a comparison of the metrics in the Detailed Comparison View. In the figure below, the Detailed Comparison View shows the comparison of these metrics for the Exec-managerial Male and Exec-managerial Female subgroups. She finds that the Female Exec-managerial subgroup has an almost 25% lower false negative rate and an almost 20% higher false positive rate. This means that instances in the Male Exec-managerial subgroup were more likely to be falsely classified as having an income higher than 50k, and less likely to be falsely classified as having an income less than or equal to 50k, compared to their female counterparts. The practitioner has now gained more “trustworthy” evidence of unfairness between the female and male subgroups; however, she wants to gather more evidence surrounding the unfair treatment of female individuals in this model before drawing a conclusion. This is shown in Fig. <ref>.

Iteration 2: Exploration As she has previously identified hours of work as an explaining variable assisted by the Fairness Compass, she decides to compare the prediction results between male and female individuals that work the same number of hours. Upon viewing the feature distribution for hours of work in Fig. <ref>, she finds that almost half of all instances are of 40 hours. Due to the drastic difference in sample size, the practitioner decides to select the top three values with the largest sample size, to avoid analysing data with representation bias. The practitioner selects 40, 45, and 50 for the hours of work, along with the sex attribute, and generates subgroups. She finds that the Male 50 and Male 45 subgroups have the lowest accuracy, precision and recall of all groups, and the Female 40 subgroup has the highest accuracy, precision and recall.
Although these values suggest that the female subgroups have better performance on these metrics, the practitioner would like to take a closer look at the visualisation on the Fairness Compass Tab.

Iteration 2: Informed Analysis Upon checking whether the subgroups achieve the conditional statistical parity definition in the Fairness Compass Tab, she finds that the negative rate for male subgroups is higher than for female subgroups by more than 15% for all three working hours. To further explore this phenomenon, she goes back to the Subgroup Exploration Tab to view the false negative rate and false positive rate of these groups. She finds that all female subgroups have a below-average false negative rate, and all male subgroups have an above-average false negative rate. The converse is true for the false positive rate metric, further solidifying the bias against female subgroups. After checking with two explaining variables, the practitioner is now more confident that this model has a tendency to classify female individuals as having a lower income compared to their male counterparts. The practitioner has concluded that this model is unfair and should not be used in the application of granting home loans. She decides to further explore the other sensitive attributes mentioned in the lending bias policies she discovered earlier, to try to find other sources of discrimination, before submitting a formal report of her findings to her boss.

This use case presents a practical usage of the system, where the practitioner completes two iterations of the Exploration, Guidance and Informed Analysis loop. Through this analysis process, she also engages the exploration loop, verification loop, and knowledge generation loop of the Knowledge Generation Model for Visual Analytics. The overall evaluation of the use case represents an effective outcome of the system. However, we would like to highlight that the analysis process can vary and diverge from our example depending on the problem context, the dataset, and the practitioner's personal preferences in performing analysis.

§ LIMITATIONS AND FUTURE WORK

In this section, we highlight the limitations of FairCompass and discuss recommendations for future work in this direction.

§.§.§ Polishing the Guidance stage and Extending the Fairness Compass

Ruf & Detyniecki <cit.> conclude that there is no silver bullet to overcoming AI bias and that the Fairness Compass is not the last word on the subject, as fairness research is constantly advancing. The Guidance stage offered in our design is rather limited in its coverage of the complex landscape of fairness research. Therefore, more work can be done to develop a comprehensive taxonomy of biases and harms, as well as to implement individual and subgroup notions of fairness in the Fairness Compass, in a way that is easily digestible by practitioners who are unfamiliar with the topic.

§.§.§ Biases in Human-in-the-loop

Biases may be introduced in the use of the application and during the analytical process. Holstein et al. <cit.> note that several participants in their study emphasised the importance of taking into consideration the human biases embedded at each stage of the machine learning life cycle. An example is presentation bias, which is a form of bias that stems from how information is presented to users, leading them to come to biased conclusions <cit.>. With our suggestion of human-in-the-loop fairness auditing, there needs to be an increased awareness of mitigating human biases.
While the Fairness Compass helps to relieve the burden of fairness placed on practitioners, an overreliance on it, or the belief that conclusions made from the tree are always the "correct" solution, can be detrimental to practitioners’ pursuit of fairness. This presents a case of Sandvig’s neutrality fallacy <cit.>, where users believe that the system is correct even in situations where it is obvious that it is not. The aim of the Guidance stage of our proposed framework is not to teach users everything they need to know about fairness, but to introduce them to concepts that they may be unfamiliar with and encourage them to seek out further resources if needed. Future work for mitigating use bias can include the creation of guidelines for preventing use bias and the enforcement of implicit bias training programs specific to practitioners who are using interactive systems for fairness auditing.

§.§.§ Developing Domain Specific Tools

Both FairVis and the Fairness Compass are designed for very general use in fairness auditing. This means that they may be insufficient for solving real-world problems that require more domain-specific tools. FairVis currently only supports binary classifiers, which means that users are unable to audit the fairness of other output types, such as those of ranking algorithms, recommendation systems, and voice and facial recognition systems. The k-means clustering method of generating suggested subgroups may also not be able to cover domain-specific situations where features have different weightings of importance and complex relationships with one another. In these instances, FairVis’s subgroup recommendation functionality may obscure information essential to decision making. The Fairness Compass decision tree may also be inapplicable to specific application contexts. For example, we have found that when using the tree in the context of utilising the COMPAS dataset for recidivism prediction, it is reasonable to end on the demographic parity definition. Dwork et al. <cit.> argued that demographic parity is a flawed definition in practice by presenting three scenarios where the definition is maintained yet produces a blatantly unfair result from an individual fairness perspective. Minority communities are controlled and policed more frequently <cit.>, resulting in a self-fulfilling loop of higher arrests and higher base rates. Using the demographic parity definition may further perpetuate pre-existing biases against these individuals in the data, as the definition requires groups to be assessed the same when they have historically not been treated the same. For future work, the general approach proposed in this paper can be adjusted for specific domains, taking into account real-life problems where fairness is heavily context-dependent.

§.§.§ Enforcing Fairness at a Higher Level

This project aims to give practitioners better support in making fairness-related decisions, because the responsibility of ensuring the fairness of a system is often disproportionately assigned to them when this should not be the case. It is ultimately the organisation’s responsibility to ensure that there are fairness measures in place, or that practitioners receive an adequate amount of support to be able to carry out these procedures. The responsibility of AI fairness within an organisation extends to adjacent or higher-level actors such as data scientists, domain experts, policymakers and stakeholders.
While our project lays the groundwork for operationalising fairness, there is a need for organisations and institutions to prioritise fairness and institute formal processes that enforce it, eventually cementing these practices as standard procedures in the industry.

§ CONCLUSION

In this paper, we first review the existing fairness tools and identify the areas of challenge suggested by fairness experts and practitioners. The major issues brought forward include an overemphasis on technical solutions and difficulties in operationalising fairness. To address these concerns, we suggest a novel approach that leverages both technical and non-technical methods integrated within a visual analytics system, to streamline the bias auditing process with a human-centred design. We demonstrate our proposed system, FairCompass. We also propose the Exploration, Guidance, and Informed Analysis loop, in order to apply the Knowledge Generation Model for Visual Analytics to FairCompass, and take a first step towards structuring this approach for future work. We anticipate that FairCompass will steer more research at the intersection between visual analytics and fairness in machine learning, encouraging experts to prioritise operationalising fairness in practice.
http://arxiv.org/abs/2312.16726v1
{ "authors": [ "Jessica Liu", "Huaming Chen", "Jun Shen", "Kim-Kwang Raymond Choo" ], "categories": [ "cs.LG", "cs.AI", "cs.CY", "cs.SE" ], "primary_category": "cs.LG", "published": "20231227212953", "title": "FairCompass: Operationalising Fairness in Machine Learning" }
^1 Research Institute for Science and Engineering, Waseda University, 3-4-1, Okubo, Shinjuku, Tokyo 169-8555, Japan
^2 RIKEN Center for Emergent Matter Science, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan
^3 Research Center for Materials Nanoarchitectonics (MANA) and Center for Green Research on Energy and Environmental Materials (GREEN), National Institute for Materials Science (NIMS), Namiki, Tsukuba-shi, Ibaraki, 305-0044, Japan
^4 Physics Division, Sophia University, Kioi-cho, Chiyoda-ku, Tokyo 102-8554, Japan

The superconducting (SC) cuprate HgBa_2Ca_2Cu_3O_8 (Hg1223) has the highest SC transition temperature T_c among cuprates at ambient pressure P_ amb, namely, T_c^ opt≃ 138 K experimentally at the optimal hole doping concentration. T_c^ opt further increases under pressure P and reaches 164 K at the optimal pressure P_ opt≃ 30 GPa; T_c^ opt then decreases with increasing P > P_ opt, generating a dome structure [Gao et al., Phys. Rev. B 50, 4260(R) (1994)]. This nontrivial and nonmonotonic P dependence of T_c^ opt calls for a theoretical understanding of its mechanism. To answer this open question, we consider the ab initio low-energy effective Hamiltonian (LEH) for the antibonding (AB) Cu3d_x^2-y^2/O2p_σ band derived generally for the cuprates. In the AB LEH for cuprates with N_ℓ≤ 2 laminated CuO_2 planes between block layers, it was proposed that T_c^ opt is determined by a universal scaling T_c^ opt≃ 0.16|t_1|F_ SC [Schmid et al., Phys. Rev. X 13, 041036 (2023)], where t_1 is the nearest-neighbor hopping, and the SC order parameter at optimal hole doping F_ SC mainly depends on the ratio u=U/|t_1|, where U is the onsite effective Coulomb repulsion: The u dependence of F_ SC has a peak at u_ opt≃ 8.5 and a steep decrease with decreasing u in the region u < u_ opt, irrespective of the other, materials-dependent ab initio parameters. In this paper, we show that (I) |t_1| increases with P, whereas (II) u decreases with P in the ab initio Hamiltonian of Hg1223. Based on these facts, we show that the dome-like P dependence of T_c^ opt can emerge at least qualitatively if we assume (A) that Hg1223 with N_ℓ = 3 follows the same universal scaling for T_c^ opt, and (B) that Hg1223 is located in the slightly strong-coupling region, with u ≳ u_ opt at P_ amb and u ≃ u_ opt at P_ opt, taking account of expected corrections to our ab initio calculation. The consequence of (A) and (B) is the following: With increasing P within the range P < P_ opt, the increase in T_c^ opt is accounted for by the increase in |t_1|, whereas F_ SC is insensitive to the decrease in u around u_ opt and hence to P as well. At P > P_ opt, the decrease in T_c^ opt is accounted for by the decrease in u below u_ opt, which causes a rapid decrease in F_ SC that dominates over the increase in |t_1|. We further argue for the appropriateness of assumptions (A) and (B) based on insights from studies of other cuprate compounds in the literature. In addition, we discuss the dependencies of u and |t_1| on each crystal parameter (CP), which provides hints for the design of materials with even higher T_c^ opt.
Dome structure in pressure dependence of superconducting transition temperature for HgBa_2Ca_2Cu_3O_8 — Studies by ab initio low-energy effective Hamiltonian

Jean-Baptiste Morée^1,2 0000-0002-0710-9880, Youhei Yamaji^3 0000-0002-4055-8792, and Masatoshi Imada^1,4 0000-0002-5511-2056

=============================================================================================================================================================

§ INTRODUCTION

Unconventional SC occurs in cuprates <cit.> with a diverse distribution of T_c^ opt. At P_ amb, known values of T_c^ opt range from T_c^ opt≃ 6 K in Bi_2Sr_2CuO_6 (Bi2201) <cit.> to T_c^ opt≃ 138 K in HgBa_2Ca_2Cu_3O_8 (Hg1223) <cit.>. T_c^ opt further increases under pressure. In the case of Hg1223 and other Hg-based cuprates, T_c^ opt has a dome-like structure as a function of P <cit.>. An example is shown in Fig. <ref>(b) for Hg1223: T_c^ opt increases with pressure and reaches its maximum of 164 K at P_ opt≃ 30 GPa <cit.>, which is the highest known value of T_c^ opt in the cuprates. The wide range T_c^ opt≃ 6-164 K in the cuprates has inspired studies on chemical substitution and pressure application, to gain insights into the microscopic mechanism of the diversity in T_c^ opt. For example, for Y-based <cit.> and Hg-based <cit.> high-T_c cuprates, the uniaxial pressures P_a and P_c were applied. (In this paper, P_a refers to the simultaneous compression along axes a and b in Fig. <ref>, while keeping | a|=| b|, and P_c refers to the compression along axis c. The axes are represented in Fig. <ref> for the tetragonal cell of Hg1223.) This decomposition of pressure revealed, in the case of HgBa_2CuO_4 (Hg1201, T_c^ opt≃ 94 K <cit.>), that T_c^ opt decreases with out-of-CuO_2-plane contraction caused by P_c (∂ T_c^ opt / ∂ P_c ≃ -3 K/GPa) but increases with in-plane contraction caused by P_a (∂ T_c^ opt / ∂ P_a ≃ 5 K/GPa) <cit.>.

However, the microscopic mechanisms behind the P dependence of T_c^ opt are not yet well understood, although understanding them would certainly help future materials design. Since it is difficult to isolate these hidden mechanisms by experiments only, further theoretical studies of cuprates under P are desirable. In this paper, we propose a microscopic mechanism for the P dependence of T_c^ opt for the carrier-doped Hg1223, based on an ab initio study.

For ab initio studies, density functional theory (DFT) has been widely applied historically <cit.>. However, its insufficiency for strongly correlated electron systems is also well known. Instead, we apply the multiscale ab initio scheme for correlated electrons (MACE) <cit.>, which has succeeded in correctly reproducing the SC properties of the cuprates <cit.> at ambient pressure and has motivated further studies on hypothetical Ag-based compounds <cit.>. MACE consists of a three-step procedure that determines the LEH parameters for the single-band AB Hamiltonian; this procedure has several different accuracy levels, which are defined below and whose details are given in Appendix <ref>. At the earliest stage of the MACE, the simplest level, denoted as LDA+cRPA <cit.> or GGA+cRPA, was employed; at this level, we start from the electronic structure at the Local Density Approximation (LDA) or Generalized Gradient Approximation (GGA) level, and the effective interaction parameters are calculated at the level of the constrained random phase approximation (cRPA) <cit.>.
The next level is denoted as cGW-SIC <cit.>, in which the starting electronic structure is preprocessed from the LDA or GGA level to the one-shot GW level, and the one-particle part is improved by using the constrained GW (cGW) <cit.> and the self-interaction correction (SIC) <cit.>. The most recent and accurate level is denoted as cGW-SIC+LRFB <cit.>, which is essentially the same as the cGW-SIC, except that the GW electronic structure is further improved: The level renormalization feedback (LRFB) <cit.> is used to correct the onsite Cu3d_x^2-y^2 and O2p_σ energy levels. Although the cGW-SIC+LRFB level is the most accurate and was used to reproduce the SC properties of the cuprates <cit.>, we mainly employ the simplest GGA+cRPA version for the purpose of the present paper, because the qualitative trend of the parameters can be captured by this simplest framework. (See Appendix <ref> for a more detailed discussion.) We also reinforce the analysis by deducing the more refined cGW-SIC+LRFB level in a limited case from explicit cGW-SIC level calculations, to remove the known drawback of GGA+cRPA, as we detail later.

We derive and analyze the pressure dependence of the AB LEH parameters, including various intersite hoppings and interactions; however, we restrict the main discussion to |t_1| and u, since they are the principal parameters that control T_c in the proposal <cit.>. Other LEH parameters are given in the Supplemental Material (S1) <cit.>. In the following, we mainly discuss |t_1^ avg| and u^ avg, which are the ab initio values of |t_1| and u at the GGA+cRPA level, averaged over the inner and outer CuO_2 planes. (See Fig. <ref> for a representation of the CuO_2 planes.)

This paper is organized as follows. In Sec. <ref>, the central results of the present paper are outlined to capture their essence before the detailed presentation. In Sec. <ref>, we give the crystal structure of Hg1223, the hole concentration, and a reminder of the GGA+cRPA scheme. In Sec. <ref>, we give the DFT electronic structure at the GGA level as a function of P. In Sec. <ref>, we show the pressure dependence of the AB LEH parameters at the GGA+cRPA level. In Sec. <ref>, we discuss the adequacy of the assumptions made in Sec. <ref>. We also discuss the consistency of our results with the experimental P dependence of T_c^ opt in Fig. <ref>. Summary and Conclusion are given in Sec. <ref>. In Appendix <ref>, methodological details of the MACE scheme are summarized. In Appendix <ref>, computational details used in this paper are described. In Appendices <ref> and <ref>, we detail the corrections used in Secs. <ref> and <ref>. In Appendix <ref>, we discuss in detail the P dependence at the intermediate stage of the present procedure. In Appendix <ref>, we detail the dependence of the AB LEH parameters on the crystal parameters (CP) around optimal pressure.

§ OVERVIEW

The main results obtained in this paper are summarized as (I) and (II) below.

(I) |t_1^ avg| increases with P. This increase in |t_1| is caused specifically by the uniaxial pressure P_a, in agreement with previous experimental studies on, e.g., Hg1201 <cit.>.

(II) u^ avg decreases with P. The decrease in u is caused mainly by (I), namely by the increase in |t_1|, but is slowed down by the increase in U at P<P_ opt. The increase in U is also caused by P_a.

The nontrivial pressure dependence of T_c^ opt can be understood from (I,II), which are derived from our ab initio Hamiltonian even at the preliminary GGA+cRPA level, if we assume the following (A) and (B).
[The reality of (A) and (B) will be discussed later in Sec. <ref>.]

(A) The universal scaling for T_c^ opt, given theoretically as T_c^ est≃ 0.16|t_1|F_ SC and recently proposed for the cuprates with N_ℓ = 1, 2 and ∞ <cit.>, is also valid for Hg1223 with N_ℓ = 3.

(B) F_ SC follows a universal u dependence revealed in Ref. <cit.>, where F_ SC has a peak at u=u_ opt≃ 8.0-8.5. In addition, at P_ amb, Hg1223 is located on the slightly strong-coupling side u ≳ u_ opt, while the highest pressure P=60 GPa applied so far is on the weak-coupling side u < u_ opt. In fact, we justify later that u ≃ u_ opt at the optimal pressure P_ opt≃ 30 GPa for Hg1223.

To understand the consequences of the assumptions (A) and (B) appropriately and to complement the consequences quantitatively, we correct the errors anticipated in our ab initio GGA+cRPA calculation by using the following (C) and (D). [Details of (C) and (D) are given in Appendices <ref> and <ref>, respectively.]

(C) We correct the values of u^ avg and |t_1^ avg| obtained at the GGA+cRPA level by deducing the most sophisticated cGW-SIC+LRFB level. Since GGA+cRPA is known to underestimate u in Bi2201 and Bi_2Sr_2CaCu_2O_8 (Bi2212), it is desirable to improve the AB LEH to the more accurate cGW-SIC+LRFB level. However, the explicit calculation at the cGW-SIC+LRFB level is computationally demanding, while the corrections from the explicitly calculated cGW-SIC to the cGW-SIC+LRFB levels are known to be small and relatively materials insensitive. Thus we represent the correction by a universal constant with admitted uncertainty. The estimates of u and |t_1| improved in such ways are denoted as u_ cGW-SIC+“LRFB" and |t_1|_ cGW-SIC+“LRFB". The procedure consists of the two steps (C1) and (C2):

(C1) cGW-SIC calculation: Starting from the whole and detailed pressure dependence of u^ avg and |t_1^ avg| for Hg1223 calculated at the GGA+cRPA level, we calculate explicitly the cGW-SIC level, denoted as u_ cGW-SIC and |t_1|_ cGW-SIC, in limited cases of pressure choices of Hg1223 to reduce the computational cost.

(C2) Estimate at the cGW-SIC+LRFB level: We use u_ cGW-SIC+“LRFB" = x_ LRFB u_ cGW-SIC and |t_1|_ cGW-SIC+“LRFB" = y_ LRFB|t_1|_ cGW-SIC, and estimate the constants x_ LRFB and y_ LRFB from the already explicitly calculated results for other compounds (Hg1201, CaCuO_2, Bi2201 and Bi2212). The estimated values are x_ LRFB = 0.95 (with the range of uncertainty 0.91-0.97) and y_ LRFB = 1.0. See Appendix <ref> for the detailed procedure to estimate x_ LRFB and y_ LRFB for the case of Hg1223. The concrete effect of (C) for Hg1223 is to increase u from the cRPA level by the ratio u_ cGW-SIC+“LRFB"/u^ avg≃ 1.29 at P_ amb, ≃ 1.15 at 30 GPa, and ≃ 1.08 at 60 GPa; also, the ≃ 13-14% increase in |t_1^ avg| from P_ amb to 30 GPa becomes ≃ 17% by this correction.

(D) After applying (C), we further correct the value of |t_1|_ cGW-SIC+“LRFB" by considering the plausible error in the crystal parameters at high pressure. Structural optimization by ab initio calculation is known to show quantitative errors, and it is preferable to correct them if the experimental value is known.
We compare our structural optimization and the experimental cell parameter a if it is available (this is the case at P<8.5 GPa) and assume that this trend of the deviation continues for P > 8.5 GPa, where experimental data are missing. Namely, at P > 8.5 GPa, we assume that our calculation overestimates the experimental a by ≃ 0.05 Å, and we correct a by Δ a = -0.05 Å accordingly. The concrete effect of (D) is that the increase in |t_1|_ cGW-SIC+“LRFB" from P_ amb to 30 GPa is now ≃ 22%.

The final estimates of u_ cGW-SIC+“LRFB" and |t_1|_ cGW-SIC+“LRFB" are shown in Fig. <ref>(a). Since (C1) is computationally demanding, we perform (C) and (D) only at P_ amb, 30 GPa and 60 GPa, and infer the correction at other pressures by linear interpolation of the pressure dependence.

Even by considering only (A) and (B) above, the present mechanism qualitatively accounts for the microscopic trend of the dome structure: At P<P_ opt, (I), namely the increase in |t_1|, plays the role of increasing T_c, whereas the decrease in u does not appreciably affect F_ SC and thus T_c, because F_ SC passes through the broad peak region in the u dependence. At P>P_ opt, (II), namely the decrease in u, drives the decrease in F_ SC and thus in T_c, surpassing the increase in |t_1|, which generates a dome structure. If we take into account (C) and (D) in addition to (A) and (B), the dome structure in the P dependence of the experimental T_c^ opt is more quantitatively reproduced (see Fig. <ref>). In addition to the above results, we discuss the dependence of the AB LEH parameters on each CP, which provides us with hints for the future design of materials with even higher T_c^ opt.

§ FRAMEWORK OF METHOD

We start from the crystal structure of Hg1223 and the pressure dependence of the CP values in Fig. <ref>. We abbreviate the inner and outer CuO_2 planes shown in Fig. <ref> as IP and OP, respectively. The crystal structure is entirely determined by the seven CPs defined in Table <ref>, which consist of the two cell parameters a and c and the five characteristic distances d^z_l. The CP values considered in this paper are listed in Fig. <ref> as a function of P. In the main analyses of this paper, we consider (i) CP values obtained by a structural optimization, which are denoted as optimized CP values. For comparison, we also consider (ii) the theoretical calculation of the CP values by Zhang et al. <cit.> for the region between P_ amb and 20 GPa, whose pressure dependence we extrapolate up to 60 GPa. Details about (i,ii) are given in Appendix <ref>. We also consider (iii) the experimental CP values from Armstrong et al. <cit.> between P_ amb and 8.5 GPa. (The values at P_ amb correspond to the SC phase with the experimental SC transition temperature T_c^ exp≃ 135 K, close to T_c^ opt≃ 138 K.) It is known that the optimized CP values slightly deviate from the experimental values, and this is indeed seen in Fig. <ref>. From the comparison of the optimized and experimental CPs, we take into account the correction (D) addressed in Sec. <ref>.

We simulate at the experimental optimal hole concentration p, which allows a reliable comparison with the P dependence of T_c^ opt <cit.>. We use the same procedure as that employed in Ref. Moree2022 for Hg1201: We partially substitute Hg by Au. We consider the chemical formula Hg_1-x_ sAu_x_ sBa_2Ca_2Cu_3O_8 with x_ s=0.6 in order to realize the average hole concentration per CuO_2 plane p_ av = 0.2 <cit.>. This choice is discussed and justified in Appendix <ref>.
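The hole-count bookkeeping behind this choice is simple enough to be made explicit. The following minimal sketch (our own illustration, using the rough estimate p_ tot = 2δ + x_ s detailed in Appendix <ref>) checks that the Au substitution alone reproduces the target doping:

# Sketch of the hole-count estimate for Hg(1-x_s)Au(x_s)Ba2Ca2Cu3O(8+delta):
# each excess O atom adds ~2 holes and each Au-for-Hg substitution ~1 hole,
# shared among the three CuO2 planes (rough estimate; see Appendix B).
def average_hole_concentration(x_s, delta, n_planes=3):
    p_tot = 2.0 * delta + x_s      # total doped holes per formula unit
    return p_tot / n_planes        # average holes per CuO2 plane, p_av

# Our setting: no excess oxygen (delta = 0), Au substitution x_s = 0.6
p_av = average_hole_concentration(x_s=0.6, delta=0.0)
assert abs(p_av - 0.2) < 1e-12     # target optimal doping p_av = 0.2
print(f"p_av = {p_av:.2f} (experimental optimal range ~ 0.14-0.20)")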
In addition, we examine the distinct effects of the uniaxial pressures along axis a and axis c, whose definitions are given in Table <ref> and discussed below. The nontrivial point is: Experimentally, what are the variations in CP values when the crystal structure is compressed along a (c)? First, the compression along a obviously modifies the cell parameter a as well as the amplitude |d^z_ buck| of the Cu-O-Cu bond buckling in the OP, but it should not affect the other CPs d^z_ Cu, d^z_ Ca, d^z_ Ba, and d^z_ O(ap). Thus, we define the uniaxial pressure P_a^ buck along a as follows: The compression along a modifies the values of a and d^z_ buck, and all other CP values are those at P_ amb. We also consider a simplified definition, denoted as P_a: The compression along a modifies only the value of a, and all other CP values are those at P_ amb. As we will see, P_a is sufficient to describe the main effect of the compression along a. Second, the compression along c modifies the values of d^z_l, that is, all CP values except that of a. This uniaxial pressure is denoted as P_c. For completeness, we also consider a second definition, denoted as P_c^ buck: The compression along c modifies all CP values except those of a and d^z_ buck. This allows us to discuss the effect of the relatively large value of |d^z_ buck| at P > P_ opt. In the main analyses of this paper, we consider P_a (P_c) to simulate the compression along a (c). We also give complementary results by considering P_a^ buck and P_c^ buck.

We first compute the electronic structure at the DFT level. The P dependence of the GGA band structure is demonstrated in Fig. <ref>, from which we derive the LEH spanned by the Cu3d_x^2-y^2/O2p_σ AB bands by employing the GGA+cRPA scheme sketched in Appendix <ref>. Computational details of the DFT and GGA+cRPA schemes are described in Appendix <ref>.

Then, we define the AB LEH as follows. In the AB LEH for multi-layer cuprates <cit.>, there is only one AB orbital centered on each Cu atom. Then the AB LEH reads

ℋ = ∑_l,l' ℋ^l,l' = ∑_l,l' [ ℋ_ hop^l,l' + ℋ_ int^l,l' ],

where l,l' ∈ {i,o,o'}, with i being an IP site and o,o' belonging to the two equivalent OPs, and where we distinguish the hopping and interaction parts between planes l and l' as, respectively,

ℋ_ hop^l,l' = ∑_(σ R),(σ' R') t^l,l'_σ,σ'(R'-R) ĉ^†_lσ R ĉ_l'σ'R',

ℋ_ int^l,l' = ∑_(σ R),(σ' R') U^l,l'_σ,σ'(R'-R) n̂_lσ R n̂_l'σ'R',

where σ,σ' are the spin indices. By using these notations, (l σ R) is the AB spin-orbital in the plane l and in the unit cell at R, with spin σ; ĉ^†_lσ R, ĉ_lσ R and n̂_lσ R are respectively the creation, annihilation and number operators in (l σ R); and t^l,l'_σ,σ'(R'-R) and U^l,l'_σ,σ'(R'-R) are respectively the hopping and direct interaction parameters between (l σ R) and (l' σ' R'). The translational symmetry allows us to restrict the calculation of the LEH parameters to t^l,l'_σ,σ'(R) and U^l,l'_σ,σ'(R) between (l σ 0) and (l' σ' R). In this paper, we focus on the intraplane LEH ℋ^l = ℋ^l,l within the plane l and analyze only the first nearest-neighbor hopping t_1^l=t^l,l([100]) and the onsite effective interaction U^l=U^l,l([000]), because these two parameters were proposed to essentially determine T_c^ opt at least for single- and two-layer cuprates <cit.>. (Other LEH parameters are given in the Supplemental Material (S1) <cit.>.)
Then within this restricted range, ℋ^l is rewritten as

ℋ^l = |t_1^l| [ ℋ̃_ hop^l + u^l ℋ̃_ int^l ] = |t_1^l| ℋ̃^l,

in which ℋ̃_ hop^l = ℋ_ hop^l / |t_1^l| and ℋ̃_ int^l = ℋ_ int^l / U^l are the dimensionless hopping and interaction parts, expressed in units of their respective characteristic energies |t_1^l| and U^l. The full dimensionless intraplane LEH is ℋ̃^l = ℋ^l / |t_1^l|, and the dimensionless ratio u^l=U^l/|t_1^l| encodes the correlation strength. As mentioned in Sec. <ref>, we also discuss the values of |t_1^ avg|=(|t_1^i|+|t_1^o|)/2 and u^ avg=(u^i+u^o)/2. Average values of other quantities with the superscript l are defined similarly.

We compute the above LEH parameters |t_1^l| and U^l as follows. We use the code of Ref. <cit.>. The standard calculation procedure is presented in detail elsewhere <cit.>. First, we compute t_1^l as

t_1^l = ∫_Ω dr w^*_l 0(r) h(r) w_l R_1(r),

in which w_l R is the Wannier function of the AB orbital (l R), R_1=[100], Ω is the unit cell, and h is the one-particle part at the GGA level. Then, we compute U^l as follows. We compute the cRPA effective interaction W_ H, whose expression is found in Appendix <ref>, Eq. (<ref>). We use a plane-wave cutoff energy of 8 Ry. We deduce the onsite effective Coulomb interaction as

U^l = ∫_Ω dr ∫_Ω dr' w_l 0^*(r) w_l 0^*(r') W_ H(r,r') w_l 0(r) w_l 0(r').

We also deduce the onsite bare Coulomb interaction v^l by replacing W_ H by the bare Coulomb interaction v in Eq. (<ref>), and the cRPA screening ratio R^l=U^l/v^l. The obtained values of |t_1^l|, U^l, v^l and R^l are plotted in Fig. <ref>.

§ PRESSURE DEPENDENCE OF ELECTRONIC STRUCTURE AT DFT LEVEL

Now, we show the results of the GGA calculation as a function of P and clarify what can already be learned at the DFT level. The band dispersion is shown in Fig. <ref>(a-l). We also show in Fig. <ref>(o-q) the onsite energies of the Cu3d_x^2-y^2 and in-plane O2p_σ atomic-like Wannier orbitals (ALWOs). (As explained in Appendix <ref>, we denote these ALWOs as M-ALWOs because they are in the M space.) We also show the Cu3d_x^2-y^2/O2p_σ hopping amplitude |t_xp^l|=|t^ Cu(l),O(l)_x^2-y^2,p_σ| in the unit cell.

We first elucidate the main mechanisms of the following items [MW], [Mϵ] and [Mt] when P increases:

[MW] Broadening of the M band dispersion in Fig. <ref>(a-g)

[Mϵ] Decrease in the onsite energies of M-ALWOs relative to the Fermi level in Fig. <ref>(o,p)

[Mt] Increase in the hoppings between M-ALWOs in Fig. <ref>(q).

A simple interpretation of [MW], [Mϵ] and [Mt] is that P works to reduce the interatomic distances. This causes two distinct effects: First, the electrons in the CuO_2 plane feel a stronger Madelung potential from the ions in the crystal. Indeed, the amplitude of the Madelung potential scales as 1/d, where d is the interatomic distance between the ion and the Cu or O atom in the CuO_2 plane. The variation in the Madelung potential modifies the M-ALWO onsite energies and causes [Mϵ] (for details, see Appendix <ref>). Second, the overlap and hybridization between M-ALWOs increase, which causes [Mt]. Both [Mϵ] and [Mt] increase the splitting of the B/NB (bonding/nonbonding) and AB bands, which causes [MW]: The bandwidth W of the M bands increases from W ≃ 9 eV at P_ amb to W ≃ 12 eV at P=60 GPa [see Fig. <ref>(a-g)]. Simultaneously, the bandwidth W_ AB of the AB band increases from W_ AB≃ 4 eV at P_ amb to W_ AB≃ 5.5 eV at P=60 GPa, which is caused by [Mt]. Indeed, the increase in |t_1^l| and thus in W_ AB≃ 8|t_1| originates from the increase in |t_xp^l|, as discussed later in Sec.
<ref>.

Effects of the uniaxial pressures P_a and P_c on [MW], [Mϵ] and [Mt] can also be simply accounted for when we consider the anisotropy of the overlap of the two M-ALWOs and the direction of the pressure. For instance, [MW] is caused by P_a rather than P_c [see Fig. <ref>(h-n)], because the AB bandwidth W_ AB and W are mainly determined by the overlap between Cu3d_x^2-y^2 and O2p_σ ALWOs in a CuO_2 plane. This increase in the bandwidth with P_a was also mentioned in Ref. Sakakibara2012prboct in the case of Hg1201. On the other hand, the application of P_c shifts a few specific bands: Hg5d-like bands are shifted from -4/-5 eV at P_ amb to -7 eV at P_c=30 GPa. However, P_c does not modify W_ AB. Effects of uniaxial pressure on [Mϵ] and [Mt] are also intuitively understood in a similar fashion: We clearly see in Fig. <ref>(o-q) that [Mϵ] and [Mt] are caused by P_a rather than P_c. For more details of the pressure effects, see Appendix <ref>.

§ PRESSURE DEPENDENCE OF AB EFFECTIVE HAMILTONIAN

Now, we discuss the P dependence of the AB LEH parameters in Fig. <ref>(a,b), in which the two main mechanisms (I,II) are visible: (I) |t_1^l| increases, whereas (II) u^l decreases. In this section, we discuss the mechanisms of (I,II), which are summarized in Table <ref>, and demonstrate that (I,II) are indeed physical and robust. We discuss mainly |t_1^ avg| and u^ avg, and discuss briefly the difference between the values in the IP and OP. A comparison with experiments will be made separately in Sec. <ref>.

§.§ Increase in |t_1^ avg| with P

The increase (I) in the P dependence of |t_1^ avg| [see Fig. <ref>(a)] is purely caused by the reduction of the cell parameter a when the crystal is compressed along axis a. Indeed, (I) is purely caused by the application of P_a [see Fig. <ref>(a)], whose only effect is to reduce a. The underlying origin is simply the increase in overlap between AB orbitals on neighboring Cu atoms due to the decrease in the cell parameter a when increasing P_a, as already discussed in Sec. <ref> at the DFT level. We note that |t_1^l| has a P dependence similar to that of |t_xp^l| [see Fig. <ref>(a,g)]. This is obvious because the AB orbital is formed by the hybridization of the Cu3d_x^2-y^2 and O2p_σ M-ALWOs.

Note that, at P > P_ opt, |t_1^o| is reduced with respect to |t_1^i|; this is because of the buckling of the Cu-O-Cu bonds in the OP. Indeed, the decrease in |t_1^o|-|t_1^i| and also in |t_xp^o|-|t_xp^i| occurs in the P_a^ buck dependence [see Fig. <ref>(h)] but not in the P_a dependence [see Fig. <ref>(a)], and the value of d^z_ buck is modified by the application of P_a^ buck but not by the application of P_a. Furthermore, the P dependence of |t_1^o|-|t_1^i| is consistent with that of |d^z_ buck|: The decrease in |t_1^o|-|t_1^i| starts at P_ opt and is amplified at larger pressures [see Fig. <ref>(a,g)], which is consistent with the increase in |d^z_ buck| from 0.05 Å to 0.20 Å between P_ opt and 60 GPa (see Fig. <ref>). The origin of the decrease in |t_1^o|-|t_1^i| can be understood as follows: When |d^z_ buck| increases, the overlap between the Cu3d_x^2-y^2 and O2p_σ M-ALWOs in the OP is reduced. Note that the buckling-induced decrease in |t_1| has also been observed in the two-layer cuprate Bi2212 <cit.>.

Comparison of results obtained from different CP values shows that (I) is physical and robust. If we consider both (i) the optimized CP values and (ii) the CP values from Zhang et al., the P dependencies of |t_1^l| and |t_xp^l| are very similar for (i) and (ii) [see Fig. <ref>(a,g,h,n)].
This is intuitive since the P dependence of a is similar for (i) and (ii), and the P dependence of d^z_ buck at P>P_ opt is also similar (see Fig. <ref>). If we consider (iii) the experimental CP values from Armstrong et al. <cit.> at P < 8.5 GPa, the increase in |t_1^ avg| and |t_xp^ avg| is faster. This is in accordance with the faster decrease in a for (iii) with respect to (i,ii) (see Fig. <ref>), and implies some uncertainty in the estimate of |t_1^ avg| at P_ opt, as discussed later in Section <ref>.

§.§ Decrease in u^ avg with P < P_ opt

At P< P_ opt, the decrease (II) in u^ avg is largely induced by the increase (I) in |t_1^ avg|; however, the increase in U^ avg [see Fig. <ref>(c)] partially cancels the decrease in u^ avg. Thus, we discuss the P dependence of U^ avg below. The increase in U^ avg is caused by two cooperative factors (i,ii) whose main origin is the reduction in a. These are (i) the increase in the onsite bare interaction v^ avg [see Fig. <ref>(c)], and (ii) the reduction in cRPA screening, represented by the increase in the average value R^ avg of the cRPA screening ratio R^l=U^l/v^l [see Fig. <ref>(d)]. In the following, we discuss the microscopic origins of (i,ii).

On (i), the increase in v^l mainly originates from the increase in the charge transfer energy Δ E^l_xp between Cu3d_x^2-y^2 and O2p_σ M-ALWOs. This is because the increase in Δ E^l_xp reduces the importance of the Cu3d_x^2-y^2/O2p_σ hybridization. (The latter is roughly encoded in the ratio O_xp^l=|t_xp^l|/Δ E_xp^l.) The reduction in hybridization increases the Cu3d_x^2-y^2 atomic character and thus the localization of the AB orbital. This is discussed and justified in item (a) of Appendix <ref>. This simple view is consistent with the systematic correlation between v^l and Δ E_xp^l in this paper [see Fig. <ref>(e,f,l,m) and also Appendix <ref>], and also in the literature <cit.>. Still, note that the correlation between v^l and Δ E_xp^l is slightly reduced at P > P_ opt [see Fig. <ref>(e,f,l,m) at P > P_ opt]. This is because |t_xp^o| is reduced with respect to |t_xp^i| at P > P_ opt due to the nonzero d^z_ buck, which contributes to reduce O_xp^o [see also item (c) of Appendix <ref>].

The increase in Δ E^l_xp mainly originates from the reduction in a. Indeed, the increase is mainly caused by P_a [see Fig. <ref>(f)]. This is because the reduction in a increases the energy of the Cu3d_x^2-y^2 electrons with respect to that of the O2p_σ electrons (see Appendix <ref>). Although the reduction in a is the main origin of the increase in the P dependence of Δ E^ avg_xp, note that Δ E^l_xp depends not only on a but also on other CPs (see Appendix <ref>).

The concomitant increases in v^l and |t_1^l| seem counterintuitive, but can be explained as follows. The counterintuitive point is that the increase in v^l suggests a more localized AB orbital, whereas the increase in |t_1^l| would be more consistent with a delocalization of the AB orbital. Although the AB orbital is more localized, the increase in |t_1^l| is explained by the increase in |t^l_xp| with P_a in Fig. <ref>(g). This is discussed in detail in item (b) of Appendix <ref>, which is summarized below. We apply P_a and examine the a dependencies of |t_1^ avg|, |t_xp^ avg| and Δ E_xp^ avg, and the average values O_xp^ avg and T_xp^ avg of O_xp^l and T_xp^l=|t_xp^l|^2/Δ E_xp^l. The increase in Δ E_xp^ avg with decreasing a is faster than the increase in |t_xp^ avg|, but slower than the increase in |t_xp^ avg|^2. As a result, when a decreases, |t_1^ avg| ∝ T_xp^ avg∝ 1/a^3 increases.
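As a short consistency check of these proportionalities (a worked example of ours, which assumes effective power laws over the fitted range of a rather than being an independent ab initio result): writing |t_xp^ avg| ∝ a^-p and Δ E_xp^ avg∝ a^-q, the statements above translate into p < q < 2p, with T_xp^ avg∝ a^-(2p-q) and O_xp^ avg∝ a^q-p. Matching T_xp^ avg∝ 1/a^3 and O_xp^ avg∝ a then gives 2p-q = 3 and q-p = 1, i.e., p = 4 and q = 5, which indeed satisfies p < q < 2p. We note that p = 4 is close to the d-p hopping scaling |t_xp| ∝ d^-7/2 of Harrison's rule, which supports the consistency of these trends.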
On the other hand, O_xp^ avg∝ a decreases, hence the increase in v^l.

On (ii), the decrease in cRPA screening [the increase in R^l in Fig. <ref>(d)] is due to the broadening [MW] of the GGA band dispersion (whose origin is the reduction in a, as discussed in Sec. <ref>). Indeed, [MW] causes the increase in the charge transfer energies between occupied bands and empty bands, which reduces the amplitude of the cRPA polarization (see Appendix <ref> for details). The increase in R^l is monotonic, except for the small dip in the P dependence of R^o at P ≃ 24 GPa in Fig. <ref>(d). The dip may originate from the change in the sign of d^z_ buck at P ≃ 24 GPa (see the next paragraph).

Comparison of results obtained from different CP values shows that (i,ii) are essentially correct, independently of the uncertainty on CP values. Let us consider the results obtained from the CP values from Zhang et al. in Fig. <ref>(h-n) and compare them with the results obtained from the optimized CP values in Fig. <ref>(a-g). The increase in v^ avg is well reproduced [see Fig. <ref>(e,l)]. The increase in R^ avg with P is qualitatively reproduced [see Fig. <ref>(d,k)]; however, the P dependence of R^l is not exactly the same, and we discuss the difference below. First, there is a small dip in the P dependence of R^o at P ≃ 24 GPa in Fig. <ref>(d) (optimized CP values). This dip is not observed in Fig. <ref>(k) (CP values from Zhang et al.). This may be because the sign of d^z_ buck does not change at P ≃ 24 GPa if we consider the CP values from Zhang et al., contrary to the optimized CP values (see the P dependence of d^z_ buck in Fig. <ref>). Second, at P_ opt = 30 GPa, the value of R^i is similar, but the value of R^o is larger in Fig. <ref>(k) than in Fig. <ref>(d). This is because the values of both d^z_ Ca and d^z_ Cu are larger in Zhang et al. than the optimized CP values (the difference is 0.1 Å, as seen in Fig. <ref>). As shown in Appendix <ref>, the larger value of d^z_ Ca increases R^o. At the same time, the larger value of d^z_ Ca (d^z_ Cu) decreases (increases) R^i. (Both effects cancel each other.) Finally, if we consider the experimental CP values from Armstrong et al., the increases (i,ii) are faster [see Fig. <ref>(d,e)]. This is consistent with the faster decrease in a in Armstrong et al. with respect to the optimized CP values and also those from Zhang et al. (see Fig. <ref>).

§.§ Decrease in u^ avg with P > P_ opt

At P > P_ opt, the decrease in u^ avg is faster because R^ avg decreases. Let us start from the P dependence of U^ avg: At P > P_ opt, U^ avg ceases to increase [see Fig. <ref>(c)] and may even decrease if we consider the CP values from Zhang et al. [see Fig. <ref>(i)]. The origin is not the P dependence of v^ avg, which increases monotonically [see Fig. <ref>(e,l)], but rather that of R^ avg, which shows a dome structure with a maximum at P_ scr≃ 30-40 GPa and a decrease at P > P_ scr [see Fig. <ref>(d,k)]. The decrease in R^ avg dominates the increase in v^ avg.

The decrease in R^ avg appears physical, and robust with respect to the uncertainty on CP values. It is still observed if we consider the CP values from Zhang et al. instead of the optimized CP values [see Fig. <ref>(k)], even though the P dependence of R^l is modified. The decrease in R^ avg is the result of a competition between P_a and P_c. (The effect of P_c is dominant at P > P_ opt.) As seen in Fig.
<ref>(d), applying only P_a causes (i) the non-linear increase in R^ avg, which dominates at P < P_ opt but saturates at P > P_ opt. On the other hand, applying only P_c causes (ii) the decrease in R^ avg, which becomes dominant at P > P_ opt. [(i,ii) are interpreted in terms of the cRPA polarization in Appendix <ref>.] The microscopic origin of (ii) is the decrease in both d^z_ Cu and d^z_ O(ap) when P_c is applied (see Appendix <ref>).

Note that, in the OP, the destructive effect of P_c on R^o and thus on u^o is cancelled by the buckling-induced decrease in |t_1^o|. Indeed, the P_c dependence of u^o in Fig. <ref>(b) shows a 6% increase from P_ opt to 60 GPa. This increase originates from the buckling of the Cu-O-Cu bonds in the OP, because it does not appear in the P_c^ buck dependence of u^o in Fig. <ref>(i), and the value of d^z_ buck is modified by applying P_c but not by applying P_c^ buck. The buckling reduces |t_1^o|, as discussed in Sec. <ref>, which is the main origin of the increase in u^o from P_ opt to 60 GPa.

§ DISCUSSION

Here, we discuss in detail how the experimental P dependence of T_c^ opt is predicted by considering (I,II) together with the assumptions (A,B) and the corrections (C,D) of Sec. <ref>. We also discuss that (A) through (D) are all physically sound.

First, we emphasize that, only by considering (A) and (B), the dome structure in the P dependence of T_c^ opt is already qualitatively understood: Since (B) implies that F_ SC stays in a plateau region around its peak between P_ amb and P_ opt, as is seen in Fig. <ref>(a), the dominant P dependence of T_c^ est in Eq. (<ref>) arises from t_1, which causes the increase in T_c^ est. On the other hand, F_ SC rather rapidly decreases with increasing P above P_ opt, which dominates over the effect of the increase in |t_1|.

The location of Hg1223 assumed in (B) is justified from (C). Without (C), we would have u^ avg≃ 7.2 at P_ amb and ≃ 6.8 at P_ opt: Both values are below u_ opt≃ 8.0-8.5, so that F_ SC would quickly decrease with P, and (B) would not be valid. On the other hand, if we apply (C), we have u ≃ 9.3 ≳ u_ opt at P_ amb and ≃ 7.8 ≃ u_ opt at P_ opt [see Fig. <ref>(a)], so that (B) becomes valid.

Let us discuss more quantitative aspects. Although F_ SC does not vary substantially with increasing P below P_ opt, there is a small (≃ 5%) decrease in F_ SC from P_ amb to P_ opt even after applying (C) [see Fig. <ref>(a)]. If we apply (C) without (D), the ≃ 13-14% increase in |t_1^ avg| from P_ amb to P_ opt becomes a ≃ 17% increase in |t_1|. However, the increase in T_c^ est estimated from Eq. (<ref>) is only ≃ 10%, due to the ≃ 5% decrease in F_ SC. If we apply (D) after (C), the increase in |t_1| becomes ≃ 22%, so that the increase in T_c^ est becomes ≃ 17% and reproduces that in T_c^ opt. Note that the quantitative agreement between the increases in T_c^ est and T_c^ opt is very good at x_ LRFB^ est=0.95, at least for small P [see Fig. <ref>(b)]. For completeness, note that (D) has a limitation: It relies on the a dependence of |t_1| at the GGA+cRPA level. [For more details, see the last paragraph of Appendix <ref>.]
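To make the arithmetic of this mechanism explicit, the following is a minimal numerical sketch of the scaling T_c^ est≃ 0.16|t_1|F_ SC. It is our own illustration, not the many-body calculation behind F_ SC: the asymmetric-Gaussian shape of F_ SC(u) (broad on the strong-coupling side, steep on the weak-coupling side), the reference value |t_1| = 0.50 eV, the linear P dependence of |t_1|, and the value u ≃ 6.3 at 60 GPa are all assumed for illustration; only the corrected values u ≃ 9.3 at P_ amb and u ≃ 7.8 at 30 GPa, and the ≃ 22% increase in |t_1| up to 30 GPa, are taken from the text.

# Schematic sketch (ours) of the dome mechanism T_c^est = 0.16 |t_1| F_SC(u).
import numpy as np

def f_sc(u, u_opt=8.25, width_weak=1.5, width_strong=3.0, peak=0.15):
    """Schematic F_SC(u): asymmetric Gaussian peaked at u_opt, with a steep
    weak-coupling side (u < u_opt) and a broad strong-coupling side."""
    u = np.asarray(u, dtype=float)
    width = np.where(u < u_opt, width_weak, width_strong)
    return peak * np.exp(-((u - u_opt) / width) ** 2)

P = np.linspace(0.0, 60.0, 121)                        # pressure (GPa)
t1 = 0.50 * (1.0 + 0.22 * P / 30.0)                    # eV; +22% to 30 GPa (assumed linear)
u = np.interp(P, [0.0, 30.0, 60.0], [9.3, 7.8, 6.3])   # u at 60 GPa is our extrapolation

T_c = 0.16 * t1 * f_sc(u) / 8.617e-5                   # eV -> K via k_B = 8.617e-5 eV/K
print(f"maximum T_c ~ {T_c.max():.0f} K at P ~ {P[np.argmax(T_c)]:.0f} GPa")

For these schematic inputs, T_c rises while u slides through the plateau, and the dome peaks in the 20-30 GPa range; the rapid collapse beyond the peak echoes the faster-than-experiment decrease of T_c^ est noted below.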
Now, we argue that (A,B,C,D) are adequate from the physical point of view.

On (A), it was shown that the scaling Eq. (<ref>) is equally satisfied for N_ℓ=1, 2 and ∞ <cit.>. This is because, on the one hand, the interlayer coupling is small in all these cases and, on the other hand, within a CuO_2 layer the superconductivity mainly depends on t_1 and U only, the dependence on the other parameters being weak within the realistic range. In the present case of Hg1223 with N_ℓ = 3, the interlayer coupling is again small. For instance, the ratio between the interlayer offsite Coulomb repulsion V^i,o and U^ avg is V^i,o/U^ avg = 0.13 at P_ amb, and the superconducting strength is expected to be governed by the single-layer physics, the same as in the cases of N_ℓ=1, 2 and ∞.

On (B), the statement that u_ cGW-SIC+“LRFB" at P_ amb is above u_ opt≃ 8.0-8.5 is indeed satisfied in the ab initio estimate by considering the correction (C). As mentioned earlier, the GGA+cRPA estimate is u^ avg≃ 7.2 < u_ opt at P_ amb. However, (C) yields u_ cGW-SIC+“LRFB"≃ 9.3 ≳ u_ opt at P_ amb, and u_ cGW-SIC+“LRFB"≃ 7.8 ≃ u_ opt at P_ opt. [See Fig. <ref>(a).]

On (C), the calculation of u_ cGW-SIC+“LRFB" and |t_1|_ cGW-SIC+“LRFB" is detailed in Appendix <ref>.

On (D), it is plausible that our calculation overestimates a by ≃ 0.05 Å at P > 8.5 GPa, because the same overestimation is already observed at P=8.5 GPa in Fig. <ref>. The derivation of the improved |t_1|_ cGW-SIC+“LRFB" is detailed in Appendix <ref>.

In Fig. <ref>(b), although the pressure dependence of T_c^ opt is nicely reproduced for P<P_ opt, the estimated T_c^ est decreases more rapidly than the experimental T_c^ opt at P>P_ opt. The origin of this discrepancy is not clear at the moment. One possible origin is of course the uncertainty of the crystal parameters at high pressure, because no experimental data exist there. Another would be the limitation of inferring the LRFB correction simply through the constants x_ LRFB and y_ LRFB. A third possibility is an inhomogeneity of the pressure in the experiments. The complete understanding of the origin of the discrepancy is an intriguing future issue.

§ SUMMARY AND CONCLUSION

We have proposed the microscopic mechanism for the dome-like P dependence of T_c^ opt in Hg1223 as the consequence of (I) and (II) obtained in this paper, together with the assumptions (A,B) and the corrections (C,D) mentioned in Sec. <ref> and supported in Sec. <ref>. We have also elucidated the microscopic origins of (I,II), which are summarized below.

(I) The increase in |t_1| is caused by the reduction in the cell parameter a when the crystal is compressed along axis a.
(II) The decrease in u is induced by (I), but is partially cancelled by the increase in U at P < P_ opt. The increase in U is caused by two cooperative factors: (i) the increase in the onsite bare interaction v, whose main origin is the reduction in the Cu3d_x^2-y^2/O2p_σ hybridization, and (ii) the reduction in cRPA screening at P < P_ opt. Both (i) and (ii) originate from the reduction in a. At P > P_ opt, U ceases to increase with increasing P, because the cRPA screening increases due to the compression along axis c, more precisely the reduction in the distance d^z_ Cu between the IP and OP [d^z_ O(ap) between the OP and the apical O], which screens the AB electrons in the IP (OP).

The elucidation of the above mechanisms offers a platform for future studies on cuprates under P and for the design of new compounds with even higher T_c^ opt: For instance, T_c may be controlled by controlling |t_1| via the cell parameter a. However, the increase in |t_1| is a double-edged sword for the increase in T_c: On one hand, it is the direct origin of the increase in T_c^ opt∝ |t_1| at P < P_ opt in Hg1223. On the other hand, it is a prominent cause of the decrease in u and thus in F_ SC and T_c^ opt at P > P_ opt. Conversely, in the OP, the buckling of the Cu-O-Cu bonds reduces |t_1|: This reduces T_c^ opt∝ |t_1|, but may also increase F_ SC and thus T_c^ opt if the value of u is in the weak-coupling region [u < 7.5 in Fig. <ref>(b)]. For instance, the buckling may be identified as the main origin of the higher T_c^ opt in Bi2212 (T_c^ opt≃ 84 K <cit.>) compared to Bi2201 (T_c^ opt≃ 6 K <cit.>): The buckling reduces |t_1| and thus increases u in Bi2212 with respect to Bi2201 <cit.>, so that Bi2212 is near the optimal region whereas Bi2201 is in the weak-coupling region <cit.>. This explains the larger |t_1|F_ SC in Bi2212 <cit.> despite the smaller |t_1|.

§ ACKNOWLEDGEMENTS

We thank Michael Thobias Schmid for useful discussions. This work was supported by MEXT as Program for Promoting Researches on the Supercomputer Fugaku (Basic Science for Emergence and Functionality in Quantum Matter - Innovative Strongly-Correlated Electron Science by Integration of Fugaku and Frontier Experiments -, JPMXP1020200104 and JPMXP1020230411) and used computational resources of the supercomputer Fugaku provided by the RIKEN Center for Computational Science (Project ID: hp200132, hp210163, hp220166 and hp230169). We also acknowledge the financial support of JSPS Kakenhi Grant-in-Aid for Transformative Research Areas, Grants Nos. JP22H05111 and JP22H05114 (“Foundation of Machine Learning Physics"). Part of the results were obtained under the Special Postdoctoral Researcher Program at RIKEN. The left panel of Fig. <ref> was drawn by using the software of Ref. <cit.>.

§ METHOD OF MACE

Here, as a complement to Sec. <ref>, we summarize and comment on the method of deriving the effective Hamiltonian, which consists of three steps.

(i) First, starting from the crystal structure, the electronic structure of the material is calculated at the simplified Density Functional Theory (DFT) <cit.> level. This framework uses the LDA or GGA exchange-correlation functionals and a single-determinant wavefunction. The electronic structure is either left at the LDA(GGA) level [in case the LDA(GGA)+cRPA is employed], or preprocessed to the GW level (if cGW-SIC is employed), supplemented with LRFB (if cGW-SIC+LRFB is employed), as explained in Sec. <ref>.
(ii) The description of the L space is improved by deriving a low-energy effective Hamiltonian (LEH) restricted to the L space. In this LEH, the two-particle part is calculated within the constrained random phase approximation (cRPA) <cit.> at the GGA+cRPA level. At the cGW-SIC and cGW-SIC+LRFB levels, the one-particle part of the LEH is also improved by removing the exchange-correlation double-counting term <cit.> and the self-interaction term <cit.> (see also Sec. <ref>). This properly describes the high-energy (H) states, such as core and semicore bands from closed shells, but fails to describe many-body effects and strong electronic correlation in the low-energy (L) subspace near the Fermi level, even with the above preprocessing. In the case of cuprates, this L space is composed of the AB orbital centered on each Cu atom in the CuO_2 plane. The correlation strength is quantified by the ratio u, whose value is typically above 7 for the high-T_c cuprates <cit.>.

(iii) The LEH is solved by a many-body solver, e.g. the many-variable Variational Monte Carlo (mVMC) <cit.>.

This three-step MACE procedure allows a correct description of the Mott physics in the mother compound and of the SC phase in the carrier-doped compound <cit.>. In the mVMC solution, F_ SC rapidly increases with u in the range 7 ≲ u ≲ 8.5 <cit.>, which suggests an increase in T_c with u <cit.>, in agreement with the positive correlation between u and T_c^ opt <cit.> in the same range of values of u. This range corresponds to the weak-coupling and plateau regions [7 ≲ u ≲ 9 in the u dependence of F_ SC in Fig. <ref>(b)]. These results led to the identification of the possibly universal scaling T_c ≃ 0.16 |t_1| F_ SC in the solution of the AB LEH at the cGW-SIC+LRFB level <cit.>.

To predict the SC character of a material with the above MACE procedure, insights may be obtained even prior to the computationally expensive solution (iii), by examining intermediate quantities within the hierarchical structure of MACE. Notably, the scaling T_c^ opt≃ 0.16 |t_1| F_ SC proposed in Ref. Schmid2023 and the u dependence of F_ SC in Fig. <ref>(b) suggest that it is possible to anticipate the crystal structure dependence of T_c by studying the crystal structure dependence of the LEH parameters (ii), particularly |t_1| and u. Following this idea, we tackle in this paper the derivation of the AB LEH (ii) for Hg1223 as a function of pressure, without performing explicitly the solution (iii), which is left for future studies. Of course, the explicit many-body solution of the LEH (iii) is necessary to reach the final conclusion.

Furthermore, qualitative insights into the SC may be obtained by deriving the LEH parameters at the simple GGA+cRPA level up to the process (ii), whereas cGW-SIC+LRFB brings a mostly quantitative correction to the LEH parameters <cit.>. Note that this quantitative correction by cGW-SIC+LRFB is still important to stabilize the SC state with mVMC (iii) in practice: The improvement by cGW-SIC+LRFB increases U and thus u by 10-15% in Bi2201 and Bi2212 <cit.>, which allows a quantitative estimate of the SC order in the mVMC solution. On the other hand, at the simple GGA+cRPA level, u may be underestimated. Nonetheless, GGA+cRPA still systematically reproduces the dependence of the LEH parameters, including u, on the materials and on the CPs (including pressure effects), in accordance with cGW-SIC+LRFB <cit.>, which allows qualitatively correct trends in the LEH parameters to be extracted while avoiding the large computational cost <cit.> of cGW-SIC+LRFB.
For instance, in the comparison between Bi2201 (T_c^opt ≃ 6 K <cit.>) and Bi2212 (T_c^opt ≃ 84 K <cit.>), u is larger for Bi2212 at the cGW-SIC+LRFB level, and this qualitative result is also reproduced at the GGA+cRPA level in Ref. Moree2022, Appendix C. Following the above idea, we mainly employ the GGA+cRPA scheme to derive the AB LEH (ii) for Hg1223. We also employ the cGW-SIC+LRFB scheme in a limited case in Appendix <ref>, as explained in Sec. <ref>.

§ COMPUTATIONAL DETAILS

§.§ Choice of crystal parameter values

The CP values obtained from powder neutron diffraction and energy-dispersive synchrotron x-ray diffraction experiments <cit.> are summarized in Fig. <ref>. There is an uncertainty in the CP values, especially above P ≃ 9.2 GPa. Indeed, the experimental P dependence of the CP values varies between different works. In addition, to our knowledge, the CP values at P > 9.2 GPa have not been completely determined in experiment. Refs. Hunter1994,Armstrong1995 provide all CP values, but only up to P ≃ 8.5-9.2 GPa. Ref. Eggert1994 provides the values of a and c up to P ≃ 26 GPa, but not the values of d^z_l. Thus, the complete set of CP values is not available throughout the range P_amb < P < 45 GPa that corresponds to the dome-like P dependence of T_c^opt in Ref. Gao1994.

To verify the robustness of our results with respect to the uncertainty in the CP values, we consider CP values obtained by two different theoretical calculations (i) and (ii), up to 60 GPa: (i) CP values obtained by a structural optimization (denoted as optimized CP values), and (ii) CP values obtained by Zhang et al. <cit.>. We determine first (ii), then (i), as explained below.

On (ii), the values of Zhang et al. have been obtained from a theoretical calculation by means of interatomic potentials. At P < 9.2 GPa, these values are in reasonable agreement with the different experimental values from Refs. Hunter1994,Armstrong1995. Although the values of a are overestimated with respect to Refs. Eggert1994,Hunter1994,Armstrong1995, they are in good agreement with Ref. Eggert1994 at 24 GPa. However, the CP values from Zhang et al. are available only up to 20 GPa; thus, we extrapolate their P dependence up to 60 GPa, as follows. We fit the P dependence of a by considering the Murnaghan equation of state

a(P)/a(P_amb) = [1 + (κ'/κ) P]^(-1/κ'),

as done in Ref. Eggert1994. We deduce the values of the two parameters κ and κ', which are respectively the bulk modulus and its pressure derivative. The same procedure is applied to c, d^z_O(ap), d^z_Cu, d^z_Ba,O(o) = d^z_Ba + d^z_buck, and d^z_Ca,Ba = d^z_Cu - d^z_Ca + d^z_Ba, whose values are extracted from Ref. Zhang1997. In the case of d^z_buck, we fit the Cu(o)-O(o)-Cu(o) bond angle as a function of P in Ref. Zhang1997, Fig. 5 with Eq. (<ref>); we then deduce d^z_Ba from d^z_Ba,O(o) and d^z_buck, and d^z_Ca from d^z_Ca,Ba, d^z_Cu and d^z_Ba. We checked that the values of κ for these CPs from Ref. Zhang1997 are reproduced with a difference lower than 0.5%. These values of κ are 1.81×10^-3 GPa^-1 for a, 4.61×10^-3 GPa^-1 for c, 7.01×10^-3 GPa^-1 for d^z_O(ap), 2.94×10^-3 GPa^-1 for d^z_Cu, 0.64×10^-3 GPa^-1 for d^z_Ba-O(o), and 1.535×10^-3 GPa^-1 for d^z_Ca-Ba. We obtain the CP values in Fig. <ref>.

On (i), the optimized CP values are obtained by starting from (ii) and performing a structural optimization. We impose the following constraint: the volume V = a^2 c of the unit cell remains constant. This avoids the relaxation of the volume to its value at P_amb.
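As a minimal illustration of this fitting step, the sketch below fits the Murnaghan form to a synthetic compression curve with scipy. The data points are placeholders rather than the actual Zhang et al. values, and κ is taken here with units of pressure, as the written form of the equation implies, so the 10^-3 GPa^-1 numbers quoted above correspond to its inverse.

```python
import numpy as np
from scipy.optimize import curve_fit

# Murnaghan-type fit a(P)/a(P_amb) = [1 + (kappa'/kappa) P]^(-1/kappa').
# The "data" below are synthetic placeholders standing in for tabulated CPs.
P = np.array([0.0, 5.0, 10.0, 15.0, 20.0])               # pressure (GPa)
a_ratio = np.array([1.000, 0.991, 0.983, 0.975, 0.968])  # a(P)/a(P_amb)

def murnaghan(P, kappa, kappa_p):
    return (1.0 + (kappa_p / kappa) * P) ** (-1.0 / kappa_p)

(kappa, kappa_p), _ = curve_fit(murnaghan, P, a_ratio, p0=(550.0, 4.0))
print(f"kappa = {kappa:.0f} GPa (1/kappa = {1e3 / kappa:.2f} x 10^-3 GPa^-1), "
      f"kappa' = {kappa_p:.1f}")

a60 = murnaghan(60.0, kappa, kappa_p)   # extrapolation to 60 GPa
print(f"a(60 GPa)/a(P_amb) = {a60:.3f}")
```

The same fit-and-extrapolate step applies to c and to the d^z_l quantities listed above.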
The other computational details of the structural optimization are the same as those for the self-consistent calculation (see Appendix <ref>). The results are shown in Fig. <ref>. We deem (i) more reliable than (ii), because the structural optimization allows a rigorous minimization of the free energy of the crystal; thus, we consider (i) in the main analyses of this paper, and (ii) as a complement. Still, (ii) is useful to check the robustness of the results obtained from (i): we show that both (i) and (ii) yield the same qualitative P dependence of the AB LEH parameters (see Sec. <ref>). Of course, it would be desirable to determine accurately all CP values from P_amb to 60 GPa in future experimental works.

Note that, at P = 60 GPa, the negative value d^z_buck ≃ -0.2 Å obtained for both (i) and (ii) is physical, as discussed below. First, the P dependence of d^z_buck at P > P_opt looks robust, because it is similar for (i) and (ii) (see Fig. <ref>). Second, the negative value of d^z_buck has a physical origin: the "collision" between the in-plane O in the OP and the Ca cation. Indeed, when P increases, the distance d^z_Cu - d^z_Ca between the OP and the Ca cation is reduced (see Fig. <ref>). If we picture the ions as rigid spheres, the Ca cation "collides" with the in-plane O in the OP, so that the in-plane O is pushed out of the OP. This explains why d^z_buck becomes negative and |d^z_buck| increases. In addition, the rigidity of the Cu-O-Cu bonds may play a role in the increase in |d^z_buck|: when a is decreased, |d^z_buck| is also increased to prevent the reduction in the distance d_Cu-O = √((a/2)^2 + (d^z_buck)^2) between Cu and the in-plane O.

§.§ Hole concentration

Next, we take into account the experimental optimal value p_opt of the hole concentration p, which realizes T_c^exp (the experimental value of T_c) close to T_c^opt ≃ 138 K at P_amb. Experimentally, hole doping in the CuO_2 planes is realized by the introduction of excess oxygen atoms and/or the partial substitution of atoms, e.g., Hg by Au, so that the chemical formula of Hg1223 becomes Hg_1-x_s Au_x_s Ba_2Ca_2Cu_3O_8+δ. In that case, a rough estimate of the total hole concentration is p_tot = 2δ + x_s, which corresponds to the average hole concentration per CuO_2 plane p_av = p_tot/3 = (2δ + x_s)/3. At P_amb, previous studies <cit.> suggest that the optimal value of p_av is p_opt ≃ 0.14-0.20. In Ref. Bordet1996, the x_s dependence of T_c^exp is explicitly studied: for δ = 0.3, we have T_c^exp ≃ 133 K at x_s = 0, and T_c^exp decreases with x_s, so that the maximum value T_c^exp ≃ 133 K is reached at p_av = 2δ/3 ≃ 0.2. This value of T_c^exp corresponds to T_c^opt ≃ 138 K <cit.>. Also, the value p_opt ≃ 0.2 is consistent with Ref. Kotegawa2001, in which T_c^exp ≃ 115-133 K at p_av ≃ 0.19-0.24, and with p_opt ≃ 0.19 in Ref. Yamamoto2015. However, Ref. Gao1994 reports p_opt ≃ 0.14, which corresponds to T_c^opt = 138 K. Thus, the maximal value T_c^exp ≃ 133-138 K is realized for an experimental p_opt ≃ 0.14-0.20 <cit.>.

We checked that the LEH parameters are insensitive to the variation in p_av within the range ≃ 0.14-0.20, as discussed below. Thus, in our calculations, we realize p_av = 0.2 by realizing p_tot = 0.6. We do not consider excess oxygen, so that δ = 0.0; instead, we consider x_s = 0.6 to compensate for the absence of excess oxygen and realize p_tot = 0.6. Also, we checked that our calculations correspond to optimal hole doping p_opt ≃ 0.14-0.20 not only at P_amb but also under pressure, which allows a reliable comparison with the P dependence of T_c^opt <cit.>. According to Ref.
Yamamoto2015, p_opt is reduced under pressure: we have p_opt ≃ 0.19 (T_c^opt ≃ 134 K) at P_amb but p_opt ≃ 0.163 (T_c^opt ≃ 150 K) at P = 12 GPa. Linear extrapolation of the above pressure dependence of p_opt yields p_opt ≃ 0.12 at P_opt = 30 GPa. However, we have checked that this reduction in p_opt does not substantially affect the AB LEH parameters. We consider x_s = 0.4 to realize p_av = 0.133, and compare with the results obtained at p_av = 0.2. The values of |t_1^l| and u^l at P_opt change by only 1%-2% (see Table <ref>). For completeness, we have also considered p_av = 0.133 at P_amb: in that case, the values of |t_1^l| change by only 1% and the values of u^l increase by only 3%-6% with respect to p_av = 0.2. Thus, the p_av dependence of the AB LEH parameters is weak, and considering the same value p_av = 0.2 at all pressures is acceptable.

§.§ DFT calculation

We perform the conventional DFT calculation as follows. We use <cit.> and optimized norm-conserving Vanderbilt pseudopotentials (PPs) <cit.>, employing the GGA-PBE functional <cit.> together with the corresponding PPs for Hg, Au, Ba, Ca, Cu and O. The substitution of Hg by Au is done by using the virtual crystal approximation (VCA) <cit.>. The Hg_1-x_s Au_x_s fictitious atom is abbreviated as “Hg” from now on. We consider nonmagnetic calculations, a plane-wave cutoff energy of 100 Ry for wavefunctions, a Fermi-Dirac smearing of 0.0272 eV, a 12 × 12 × 12 k-point grid for the Brillouin zone sampling in the self-consistent calculation, and an 8 × 8 × 3 k-point grid and 430 bands for the subsequent non-self-consistent calculation. We obtain the GGA band dispersion in Fig. <ref>. In this band dispersion, the medium-energy (M) space near the Fermi level is spanned by the 44 Cu3d-, O2p- and Hg5d-like bands from -10 eV to +3 eV, defining the origin at the Fermi level.

First, we separate the M space from the other bands as follows. We compute the 44 atomic-like Wannier orbitals (ALWOs) spanning the M space (denoted as M-ALWOs), as maximally localized Wannier orbitals <cit.>, by using the RESPACK code <cit.>. The initial guesses are d, p and d atomic orbitals centered respectively at the Cu(l), O(l) (with l = i, o representing the inner and outer planes, respectively) and Hg atoms. The 44 ALWOs are constructed from the GGA bands #41 to #87, numbered from the energy bottom of the GGA cutoff. We preserve the GGA band dispersion by using the inner energy window from the bottom of the lowest band in the M space [the band in black between -7 eV and -10 eV in Fig. <ref>(a-l)] to the bottom of the lowest empty band outside the M space [the dashed band in black between the Fermi level and +2 eV in Fig. <ref>(a-l)]. Then, the three bands above the 44 M bands are disentangled <cit.> from the latter. We obtain the M-ALWOs. They are denoted as (lj R), where R is the coordinate of the unit cell in the space [xyz] expanded in the (a, b, c) frame in Fig. <ref>, j is the orbital index, and l is the index (defined in Table <ref>) giving the atom, located in the cell at R, on which (lj R) is centered. We then express the GGA one-particle part h(r) in the M-ALWO basis as

h^l,l'_j,j'(R) = ∫_Ω dr w^*_lj0(r) h(r) w_l'j'R(r),

in which w_ljR is the one-particle wavefunction of (lj R). From Eq. (<ref>), we deduce the onsite energy ϵ^l_j = h^l,l_j,j(0) of the M-ALWO (lj), and the hopping t^l,l'_j,j'(R) = h^l,l'_j,j'(R) between the M-ALWO (lj 0) and the M-ALWO (l'j' R).
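A minimal sketch of how ϵ^l_j and t^l,l'_j,j'(R) are read off from the Wannier-basis Hamiltonian is given below. The dictionary h is a toy stand-in for the matrix elements h^l,l'_j,j'(R) produced by RESPACK; the numerical values are illustrative, not physical.

```python
import numpy as np

# Toy Wannier-basis one-particle Hamiltonian: R -> matrix h_{j,j'}(R).
# Values are placeholders; in practice these blocks come from RESPACK output.
h = {
    (0, 0, 0): np.array([[-1.20,  0.10],
                         [ 0.10, -3.00]]),  # onsite block (R = 0)
    (1, 0, 0): np.array([[-0.45,  0.02],
                         [ 0.02, -0.05]]),  # neighboring cell along a
}

def onsite_energy(h, j):
    """Onsite energy eps_j = h_{j,j}(R = 0)."""
    return h[(0, 0, 0)][j, j]

def hopping(h, R, j, jp):
    """Hopping t_{j,j'}(R) = h_{j,j'}(R) from orbital j in the home cell
    to orbital j' in the cell displaced by R."""
    return h[R][j, jp]

eps0 = onsite_energy(h, 0)               # an AB-like orbital level
t1 = abs(hopping(h, (1, 0, 0), 0, 0))    # |t_1|-like nearest-neighbor hopping
print(f"eps_0 = {eps0:.2f} eV, |t_1| = {t1:.2f} eV")
```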
In this paper, we discuss in particular the Cu3d_x^2-y^2 and in-plane O2p_σ onsite energies, and the Cu3d_x^2-y^2/O2p_σ hopping in the unit cell, t_xp^l = t^Cu(l),O(l)_x^2-y^2,p_σ. These quantities are given in Fig. <ref>(o-q).

§.§ Low-energy subspace

Then, we focus on the L space, which is spanned by the Cu3d_x^2-y^2/O2p_σ AB band shown in red in Fig. <ref>(a-g). To construct the AB maximally localized Wannier orbitals, the initial guesses are the d_x^2-y^2 atomic orbitals centered on each of the three Cu(l) atoms in the unit cell. The band window is essentially the M space, but we exclude the N_excl = 14 lowest bands from it to avoid catching the B/NB Cu3d_x^2-y^2/O2p_σ character. Then, within the band window, we disentangle the 29 other bands from the AB band.

§.§ Constrained polarization and effective interaction

Then, we compute the cRPA polarization at zero frequency. It is expressed as <cit.>

[χ_H]_GG'(q) = -(4/N_k) ∑_k ∑_{n_u}^{empty} ∑_{n_o}^{occupied} (1 - T_{n_o k} T_{n_u k+q}) M^G_{n_o,n_u}(k+q,k) [M^G'_{n_o,n_u}(k+q,k)]^* / [Δ_{n_o,n_u}(k,q) - iη],

in which q is a wavevector in the Brillouin zone, G and G' are reciprocal lattice vectors, nk is the Kohn-Sham one-particle state with energy ϵ_nk and wavefunction ψ_nk, and T_nk = 1 if nk belongs to the L space and T_nk = 0 otherwise. The charge transfer energy Δ_{n_o,n_u}(k,q) = ϵ_{n_u k+q} - ϵ_{n_o k} encodes the difference between the one-particle energies of n_u k+q and n_o k, and the interstate matrix element M^G_{n_o,n_u}(k+q,k) = ∫_Ω dr ψ^*_{n_u k+q}(r) e^{i(q+G)r} ψ_{n_o k}(r) encodes the wavefunctions ψ_nk; it also encodes the overlap between ALWOs, since the latter are constructed from the ψ_nk. We deduce the cRPA effective interaction

W_H = (1 - vχ_H)^{-1} v,

in which v is the bare Coulomb interaction. We deduce the onsite Coulomb repulsion in Eq. (<ref>).

§ CORRECTION OF U AND |T_1|: IMPROVEMENT FROM THE GGA+CRPA LEVEL TO THE CGW-SIC+LRFB LEVEL

Here, we give details on the calculation of x_LRFB and y_LRFB in Eqs. (<ref>) and (<ref>), which allows us to deduce u_cGW-SIC+“LRFB” and |t_1|_cGW-SIC+“LRFB” in Hg1223. [This corresponds to correction (C) mentioned in Sec. <ref>.] First, we address again the computational load of the direct cGW-SIC+LRFB calculation for Hg1223. This calculation requires the LRFB preprocessing, whose extension to the cuprates with N_ℓ = 3 is computationally demanding, because one needs to solve the three-orbital Hamiltonian consisting of three CuO_2 planes in total with an accurate quantum many-body solver (see Ref. Moree2022 for details), taking into account the inter-CuO_2-plane hopping and interaction parameters. We leave such an extension for future studies. Instead, we employ the procedures (C1) and (C2) mentioned in Sec. <ref>, because they already allow us to reach a physically transparent understanding.

In the procedure (C1), we improve the AB LEH from the GGA+cRPA level to the cGW-SIC level. Since the ratios u_cGW-SIC/u^avg and |t_1|_cGW-SIC/|t_1^avg| may have a strong materials dependence and also a pressure dependence, due to the diversity of the global band structure outside of the AB band, we need to perform this procedure for each material and pressure separately. For instance, in Hg1223, we have u_cGW-SIC/u^avg ≃ 1.36 at P_amb and ≃ 1.21 at 30 GPa. The parameters calculated at the cGW-SIC level are shown in Table <ref>; computational details of the cGW-SIC calculation are given at the end of this Appendix.
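The arithmetic of the two-step correction can be summarized compactly. In the sketch below, the (C1) rescaling uses the Hg1223 ratios quoted above, while the (C2) factors x_LRFB and y_LRFB anticipate the next paragraphs; it is assumed here, for illustration only, that x_LRFB acts multiplicatively on u and y_LRFB on |t_1| (the precise definitions are Eqs. (<ref>) and (<ref>) of the text), and the GGA+cRPA input values are placeholders.

```python
# Sketch of corrections (C1) and (C2) applied to GGA+cRPA values for Hg1223.
# Assumption (illustrative): x_LRFB rescales u and y_LRFB rescales |t_1|.
ratio_u_C1 = {"P_amb": 1.36, "30GPa": 1.21}  # u_cGW-SIC / u_avg (from the text)
x_LRFB = 0.95                                # (C2) constant (from the text)
y_LRFB = 1.0                                 # taken as 1.0: |t_1| unchanged

u_avg = {"P_amb": 7.0, "30GPa": 7.6}         # GGA+cRPA u (placeholders)

for P in ("P_amb", "30GPa"):
    u_cgwsic = ratio_u_C1[P] * u_avg[P]  # step (C1); the analogous |t_1|
                                         # ratio is omitted for brevity
    u_final = x_LRFB * u_cgwsic          # step (C2), assumed multiplicative
    print(f"{P}: u_cGW-SIC+'LRFB' ~ {u_final:.2f}")
```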
To perform (C2), we employ the material-independent constants x_LRFB and y_LRFB to correct the cGW-SIC results obtained in (C1): this procedure mainly readjusts the onsite Coulomb interaction U, and the correction is materials insensitive. The readjustment arises from the correction of the relative chemical potential between the AB and B/NB bands, which preserves the electron fillings of the Cu3d and O2p orbitals; this chemical potential shift is indeed material insensitive in the four known compounds <cit.>, because the AB and B/NB band structures are similar among the cuprates in general. In fact, our explicit calculations of x_LRFB and y_LRFB for several other cuprates (Hg1201, CaCuO_2, Bi2201, and Bi2212) show that, near optimal hole doping, x_LRFB ≃ 0.91-0.97 and y_LRFB ≃ 0.99-1.06 are rather universal and almost independent of the material. Thus, it may be reasonable to assume that Hg1223 near optimal hole doping has similar values of x_LRFB and y_LRFB, and the narrow range of uncertainty allows an accurate estimation of the Hamiltonian parameters.

Still, the small uncertainty on x_LRFB ≃ 0.91-0.97 causes a possible quantitative error in the P dependence of T_c^est [see Fig. <ref>(b)], even though the qualitative dome structure is robust. We thus narrow down the estimate of x_LRFB as follows. In Fig. <ref>, we see a small but systematic linear dependence of x_LRFB on 1/N_ℓ. Linear interpolation of the 1/N_ℓ dependence of x_LRFB yields x_LRFB^est = 0.951 ≃ 0.95 at N_ℓ = 3. Thus, we assume x_LRFB = 0.95 in Hg1223; for completeness, we also admit the range of uncertainty x_LRFB ≃ 0.91-0.97. As for y_LRFB, there is no clear 1/N_ℓ dependence, so we simply assume y_LRFB = 1.0. (Note that the results shown in Fig. <ref> and Fig. <ref> do not depend on the value of y_LRFB.) We deduce the values of u_cGW-SIC+“LRFB” and |t_1|_cGW-SIC+“LRFB” shown in Table <ref>.

The universality of the calculated x_LRFB and y_LRFB may be understood as follows. The LRFB corrects the value of Δ E_xp by an amount Δμ whose value is similar for all optimally doped compounds (we obtain Δμ ≃ 1.1-1.4 eV in Ref. Moree2022, Table IV). This universality in Δμ is consistent with the universality in x_LRFB and y_LRFB. Note that u_cGW-SIC+“LRFB” and |t_1|_cGW-SIC+“LRFB” are rough estimates of the actual cGW-SIC+LRFB result. In the actual cGW-SIC+LRFB calculation, more complex factors, such as the self-doping of the IP and OP <cit.> and the Coulomb interaction between the IP and OP, may affect the result of the LRFB calculation. (Clarification of these factors is left for future studies.) Nonetheless, the simple estimate above supports assumption (B) in Sec. <ref>.

*Computational details of the cGW-SIC scheme: We apply the cGW-SIC scheme to Hg1201, Bi2201, Bi2212, CaCuO_2 and Hg1223 as follows. For Hg1201, Bi2201, Bi2212 and CaCuO_2, we consider the same computational conditions and hole concentration as in Ref. Moree2022. For Hg1223, we first preprocess the 44 bands within the M space from the GGA level to the GW level. (The GW preprocessing is presented in detail in Ref. Moree2022, Appendix A2.) The random phase approximation (RPA) polarization is calculated by using 100 real frequencies and 30 imaginary frequencies; the maximum modulus of the frequency is 19.8 Ha. The exchange-correlation potential is sampled in real space by using a 120 × 120 × 540 grid for the unit cell.
In the calculation of the GW self-energy, we reduce the computational cost by employing the scheme sketched in Ref. Moree2022, Appendix E, with the cutoff energy ϵ = 0.01 eV. The other computational details are the same as those in Appendix <ref>. We obtain the GW electronic structure, in which the M bands are preprocessed at the GW level and the other bands are left at the GGA level. Then, we derive the AB LEH. We start from the GW electronic structure and construct the AB MLWO. The band window is the M space, but we exclude the N_excl lowest bands from it. (We use N_excl = 9 at P_amb, and N_excl = 10 at 30 GPa and 60 GPa.) Then, we use the cRPA to calculate the two-particle part and U, and the cGW to calculate the one-particle part and |t_1|. (Details about the cGW scheme can be found in Ref. Moree2022, Appendix A5.)

§ CORRECTION OF |T_1| BY CORRECTING THE CELL PARAMETER A

Here, we give details about correction (D) mentioned in Sec. <ref>. To correct the P dependence of |t_1|, we correct (i) the P dependence of a in Fig. <ref>, then combine the corrected (i) with (ii) the a dependence of |t_1| estimated in Appendix <ref>, Eq. (<ref>).

On (i), the P dependence of a is shown in Fig. <ref>. The experimental values of a are available at P_amb <cit.> and P = 8.5 GPa, but not at P > 8.5 GPa. At P_amb, the experimental a and optimized a are in very good agreement (the difference is ≃ 0.004 Å). However, at P = 8.5 GPa, the optimized a overestimates the experimental a by ≃ 0.05 Å. We assume that such an overestimation also occurs at P > 8.5 GPa, and we correct the P dependence of the optimized a accordingly. The P-dependent correction is Δa(P) = 0 Å if P = P_amb and Δa(P) = Δa = -0.05 Å if P > P_amb, and the corrected a is denoted as ã(P) = a(P) + Δa(P).

On (ii), Eq. (<ref>) gives |t_1|(a) ∝ 1/a^3. Combination with Eq. (<ref>) yields

|t_1|[ã(P)] = |t_1|[a(P)] / {1 + 3Δa(P)/a(P) + 3[Δa(P)/a(P)]^2 + [Δa(P)/a(P)]^3},

which determines |t_1|[ã(P)] as a function of |t_1|[a(P)]. The last two terms in the denominator of Eq. (<ref>) are negligible because |Δa(P)/a(P)| ≃ 0.014 ≪ 1, so that we have

|t_1|[ã(P)] = |t_1|[a(P)] / [1 + 3Δa(P)/a(P)].

[Note that |t_1|[ã(P)] ≥ |t_1|[a(P)] because Δa(P) ≤ 0.] We use Eq. (<ref>) to correct the P dependence of |t_1|_cGW-SIC+“LRFB”. The values of |t_1|_cGW-SIC+“LRFB” obtained after applying (D) are shown in Table <ref> at P_amb, 30 GPa and 60 GPa.

For completeness, we mention a limitation of correction (D): it relies on the dependencies (i) and (ii) mentioned above, and (ii) is determined at the GGA+cRPA level. The only way to slightly improve the approximation in (D) and Eq. (<ref>) would be to take the optimized CPs at P = 30 GPa and reduce the cell parameter a by 0.05 Å (the estimated difference between the optimized and experimental a), then perform explicitly the cGW-SIC calculation from the CP with the reduced a, and finally deduce |t_1|_cGW-SIC+“LRFB” and u_cGW-SIC+“LRFB”. However, this improvement is computationally expensive, and we do not expect it to change the results significantly. Thus, we do not consider it here.

§ PRESSURE DEPENDENCE OF INTERMEDIATE QUANTITIES

§.§ Pressure dependence of the DFT band structure and Madelung potential

Here, as a complement to Sec. <ref>, we show that [MW] is robust with respect to the definition of uniaxial pressure and with respect to the uncertainty in the CP values. First, [MW] is caused by P_a^buck rather than P_c^buck (see Fig. <ref>), which is consistent with Fig.
<ref>, in which [MW] is caused by P_a rather than P_c: the main origin of [MW] is indeed the reduction in a, and the variation in d^z_buck with P_a^buck does not affect this result. Also, if we use the CP values from Zhang et al. instead of the optimized CP values, [MW] is well reproduced (see Fig. <ref>).

In addition, we discuss the mechanisms of [Mϵ] and [MW] in terms of the Madelung potential created by the ions in the crystal. As shown in Sec. <ref>, [Mϵ] and [MW] are mainly caused by the reduction in a. This may be understood as follows. The main contribution to the Madelung potential felt by electrons in the CuO_2 plane comes from the positive Cu and negative O ions within the plane. The energy of an electron in the Cu3d orbital then rises when the surrounding O anions come closer to the Cu site, namely, when a is reduced. On the contrary, an electron in the O2p_σ orbital feels the opposite effect for reduced a. This makes the difference between the electronic levels of Cu3d_x^2-y^2 and O2p_σ larger. A more precise calculation including the long-range Coulomb potential by DFT supports that this simple view is essentially correct.

The P_a-induced increase in the energy of the Cu3d bands is illustrated in Fig. <ref>(a,b,c): the application of P_a increases the absolute energy of the Cu3d bands. (The absolute energy is defined as the energy without renormalization with respect to the Fermi level.) Note that examining the pressure dependence of absolute energies does make sense, because the chemical composition of the crystal is not modified by the application of pressure. The application of P_c increases the energy of not only the Cu3d bands but also the O2p bands in Fig. <ref>(a,d,e), so that [MW] does not occur. This can also be understood in terms of the Madelung potential from the in-plane O anions. When P_c is applied, the distance d^z_Cu between the IP and OP is reduced. This reduces not only (i) the interatomic distance between the O anion in the OP (IP) and the Cu in the IP (OP), but also (ii) the interatomic distance between the O anion in the OP (IP) and the O in the IP (OP). The concomitant reduction in (i) and (ii) causes the concomitant increase in the Cu3d and O2p electronic levels.

§.§ Pressure dependence of the onsite bare interaction and Cu3d_x^2-y^2/O2p_σ charge transfer energy

Here, as a complement to Sec. <ref>, we discuss the following points: (a) the increase in the onsite bare interaction v is caused by the reduction in Cu3d_x^2-y^2/O2p_σ hybridization when Δ E_xp increases; (b) the concomitant increases in |t_1| and v when a decreases can be understood by further analysis of the a dependence of these quantities; (c) the reduction in the correlation between v^l and Δ E^l_xp at P > P_opt originates from the non-equivalence of the IP and OP, especially the buckling of the Cu-O-Cu bonds in the OP.

On (a), a first remark is that the Cu3d_x^2-y^2/O2p_σ hybridization reduces v by reducing the atomic Cu3d_x^2-y^2 character of the AB orbital. In the AB orbital, the onsite bare interaction is v^avg ≃ 14.5-15.5 eV, but in the Cu3d_x^2-y^2 M-ALWO, the onsite bare interaction v^avg_x ≃ 25.5 eV is larger [see Fig. <ref>(a)]. This is because the Cu3d_x^2-y^2 M-ALWO has atomic character and is more localized than the AB orbital. In the limit of zero hybridization, the AB orbital is equivalent to the Cu3d_x^2-y^2 M-ALWO if we neglect the effect of other orbitals: in that case, v^avg = v^avg_x.
However, the hybridization is always nonzero in the realistic cuprate, so that the atomic Cu3d_x^2-y^2 character of the AB orbital is reduced. Second, the importance of the Cu3d_x^2-y^2/O2p_σ hybridization decreases with P. The importance of the hybridization is roughly encoded in the ratio O_xp = |t_xp|/Δ E_xp between the Cu3d_x^2-y^2/O2p_σ hopping amplitude and the Cu3d_x^2-y^2/O2p_σ charge transfer energy. O_xp decreases when the hybridization is reduced, and becomes zero when the hybridization is negligible. And O_xp^avg decreases with P [see Fig. <ref>(b)]. Thus, the atomic Cu3d_x^2-y^2 character of the AB orbital increases with P: we interpret this as the origin of the increase in v.

To confirm this, we show explicitly that v^avg ≃ v^avg_x in the limit of zero hybridization. Let us consider the O_xp^avg dependence of v^avg in Fig. <ref>(c), which is obtained by combining the P dependencies of v^avg and O_xp^avg in Fig. <ref>(a,b). In the ab initio calculation, we have O_xp^avg ≃ 0.7-0.8, so that the O_xp dependence of v^avg can be explicitly obtained only within this range. However, the value of v^avg in the limit of zero hybridization may be estimated by a linear extrapolation of the ab initio O_xp^avg dependence of v^avg. The extrapolation yields v^avg ≃ 24 eV at O_xp^avg = 0 [see Fig. <ref>(c)], which is similar to v^avg_x ≃ 25.5 eV. This suggests that the AB orbital becomes the Cu3d_x^2-y^2 M-ALWO in the limit of zero hybridization, as mentioned above.

On (b), we analyze the a dependence of |t_1^avg|, Δ E_xp^avg and |t_xp^avg|, as well as the average values O_xp^avg and T_xp^avg of O_xp^l = |t_xp^l|/Δ E_xp^l and T_xp^l = |t_xp^l|^2/Δ E_xp^l; the key point is that, when a decreases, O_xp^avg decreases whereas |t_1^avg| ∝ T_xp^avg increases. To obtain the a dependence of the above quantities, we take the values of a as a function of pressure in Fig. <ref> and combine them with the P_a dependence of |t_1^l|, Δ E_xp^l and |t_xp^l| in Fig. <ref>(a,f,g). Note that we consider the P_a dependence instead of the P dependence, because the application of P_a modifies only the value of a: this allows us to extract the a dependence accurately while avoiding the d^z_l dependence of the quantities. We interpolate the a dependencies of the above quantities by using the fitting function f(a) = C a^β, where β and C are the fitting parameters. We examine the values of β, which encode the speed of variation of the quantities with a. The obtained values of β are shown in Fig. <ref>.

First, we have |t_1^avg| ∝ T_xp^avg ∝ 1/a^3. Indeed, the value of β for |t_1^avg| is β(|t_1^avg|) ≃ -2.88, which is very close to -3. This is consistent with the 1/r^3 decay of the density-density correlation function <cit.>. Also, β(T_xp^avg) ≃ -2.89 is almost identical to β(|t_1^avg|). Second, we have O_xp^avg ∝ a. Indeed, the a dependence of O_xp^avg is almost linear: β(O_xp^avg) = 0.98 is very close to 1. The above equations (<ref>) and (<ref>) show that both |t_1^i| and v^i increase when a decreases: in item (a), we have clarified that v^avg increases when O_xp^avg decreases, and Eq. (<ref>) shows that O_xp^avg decreases when a decreases. Note that β(Δ E_xp^avg) < β(|t_xp^avg|) < 0: this is the origin of the positive value of β(O_xp^avg). On the other hand, 2β(|t_xp^avg|) < β(Δ E_xp^avg) < 0: this is the origin of the negative value of β(|t_1^avg|) ≃ β(T_xp^avg).

On (c), the non-equivalence of the IP and OP causes a slight difference in the Δ E^l_xp dependence of v^l for l = i and l = o [see Fig.
<ref>(e,f,l,m) at P > P_opt]. This is simply because |t_xp^o| is reduced at P > P_opt due to the increase in |d^z_buck| [see Fig. <ref>(g)]. This contributes to reducing O_xp^o = |t_xp^o|/Δ E_xp^o, which increases v^o [see item (a)]. This explains why, at P > P_opt, v^o ≃ v^i even though Δ E_xp^o < Δ E_xp^i [see Fig. <ref>(e,f)]. On the other hand, if we apply only P_a (which modifies only a without modifying d^z_buck), |t_xp^o| is not reduced with respect to |t_xp^i| [see Fig. <ref>(g)], and the Δ E_xp^l dependence of v^l is very similar for l = i and l = o [see Fig. <ref>(e,f)].

The non-equivalence of the IP and OP also causes a slight difference in the P dependence of Δ E^l_xp for l = i and l = o in Fig. <ref>(f,m). This is because Δ E^l_xp depends not only on a, but also on d^z_l [see Fig. <ref>(f) in Appendix <ref>]. For instance, Δ E^i_xp (Δ E^o_xp) increases (decreases) when d^z_Ca decreases. And d^z_Ca ≃ 1.48 Å in the optimized CP values is smaller than d^z_Ca ≃ 1.59 Å in the CP values from Zhang et al. This is why Δ E^i_xp > Δ E^o_xp for the optimized CP values, but Δ E^i_xp < Δ E^o_xp for the values from Zhang et al. [see Fig. <ref>(f,m)].

§.§ Pressure dependence of the screening

Here, as a complement to Sec. <ref>, we discuss the uniaxial pressure dependence of R^avg, from which the dome structure in the uniform pressure dependence of R^avg originates. First, we discuss the P_a dependence of R^avg in Fig. <ref>(d). (i) At P_a < P_opt, the increase in R^avg is explained by the broadening [MW] of the GGA band dispersion. More precisely, the origin is the increase in the charge transfer energies (<ref>) (schematically denoted as Δ in this Appendix) between occupied and empty bands, due to [MW] discussed in Sec. <ref>. The increase in Δ participates in the decrease of the amplitude of the cRPA polarization (<ref>), schematically denoted as |χ| ∝ 1/Δ. This reduces the cRPA screening and thus increases R^avg. (ii) At P_a > P_opt, the increase in R^avg ceases. This is because the effect of [MW] is progressively reduced: we have ∂|χ|/∂Δ ∝ -1/Δ^2, so that the larger P_a and thus Δ, the smaller the decrease in |χ| when Δ is further increased, and the less important the effect of [MW]. In addition, when P_a increases, the charge transfer energy Δ_M-empty between the M bands and the empty bands outside the M space is reduced, because the energy of the Cu3d bands increases [see Fig. <ref>(a,b,c)]. This may contribute to increasing |χ| ∝ 1/Δ_M-empty and cancel the effect of [MW] at high pressure.

Second, we discuss the decrease in the P_c dependence of R^avg in Fig. <ref>(d). This occurs because [MW] does not arise when P_c is applied, contrary to P_a; thus, Δ does not increase. On the other hand, Δ_M-empty is reduced because the energy of the Cu3d bands increases [see Fig. <ref>(a,d,e)]. As a result, |χ| ∝ 1/Δ_M-empty increases.

§ CRYSTAL PARAMETER DEPENDENCE OF EFFECTIVE HAMILTONIAN PARAMETERS AT OPTIMAL PRESSURE

Here, as a complement to Sec. <ref>, we analyze the CP dependencies of the AB LEH parameters around P_opt. We start from the optimized CP values at P_opt and modify the values of each CP separately. The modified values are given in Fig. <ref> (open squares). The CP dependencies of the AB LEH parameters are shown in Fig. <ref>. The scan protocol is sketched below.
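The following sketch phrases the one-at-a-time scan of this appendix compactly (the itemized results follow after the sketch). Here leh_parameters is a hypothetical stand-in for the full GGA+cRPA pipeline, while the baseline values and delta ranges are those quoted in the text.

```python
# One-at-a-time CP scan around the optimized values at P_opt (~30 GPa).
# leh_parameters() is a hypothetical placeholder for the full GGA+cRPA
# pipeline (DFT -> Wannier construction -> cRPA).
baseline = {"a": 3.69, "dz_Ca": 1.48, "dz_Cu": 2.82,
            "dz_Ba": 1.96, "dz_Oap": 2.22}                # Angstrom (text)
deltas = {"a": (-0.05, 0.0), "dz_Ca": (0.0, +0.2), "dz_Cu": (0.0, +0.2),
          "dz_Ba": (-0.2, 0.0), "dz_Oap": (-0.2, +0.2)}   # scan ranges (text)

def leh_parameters(cp):
    raise NotImplementedError("stand-in for the DFT+cRPA pipeline")

for name, (lo, hi) in deltas.items():
    for delta in (lo, hi):
        cp = dict(baseline)
        cp[name] += delta   # modify a single CP, keep all the others fixed
        # params = leh_parameters(cp)   # -> |t_1^l|, u^l, v^l, R^l, ...
```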
We summarize the main results below: (i) for |t_1^l|, the a dependence is the strongest; (ii) for u^i, the d^z_Ca and d^z_Cu dependencies are the strongest; (iii) for u^o, the d^z_Ca and d^z_O(ap) dependencies are the strongest. Also, (ii) and (iii) suggest the origin of the decrease in R^l at P > P_scr in Sec. <ref>, Fig. <ref>(d,k): the decreases in R^i and R^o are caused respectively by the decreases in d^z_Cu and d^z_O(ap).

*a dependence of AB LEH parameters: At P_opt, the optimized value a ≃ 3.69 Å is the same as that from Zhang et al. Still, this value might be overestimated. Indeed, the P dependence of the experimental values <cit.> shows a faster decrease at lower pressures (see Fig. <ref>). Thus, we consider the modification Δa of a at P_opt, such that -0.05 Å ≤ Δa ≤ 0 Å. The a dependence of |t_1| is strong [see Fig. <ref>(a)], as discussed in Sec. <ref>. We note that the 15% increase in |t_1| from P_amb to P_opt becomes 18-19% if Δa = -0.05 Å. Thus, the 3% difference between the increase in |t_1| and that in T_c^opt may be understood by admitting the above uncertainty on a at P_opt (see the discussion in Sec. <ref>).

*d^z_Ca dependence of AB LEH parameters: The optimized value d^z_Ca ≃ 1.48 Å is lower than that from Zhang et al. (d^z_Ca ≃ 1.59 Å). Thus, we consider 0.0 Å ≤ Δd^z_Ca ≤ +0.2 Å to examine the d^z_Ca dependence of the AB LEH parameters. Increasing d^z_Ca causes a rapid decrease in u^i and increase in u^o [see Fig. <ref>(b)], due to the decrease in Δ E_xp^i and increase in Δ E_xp^o [see Fig. <ref>(f)]. Indeed, v^l and R^l are correlated with Δ E_xp^l. The correlation between v^l and Δ E_xp^l has been discussed in Appendix <ref>, and the increase in Δ E_xp^l also contributes to increasing R^l by reducing the cRPA screening between the Cu3d_x^2-y^2/O2p_σ B/NB and AB bands. The increase (decrease) of Δ E_xp^l originates from the positive Madelung potential created by the Ca cation, which stabilizes electrons in the vicinity of the Ca cation. When d^z_Ca increases, the Ca cation becomes closer to (farther from) the O atoms in the OP (IP). Thus, the O2p_σ orbitals in the IP (OP) are destabilized (stabilized) [see Fig. <ref>(b)]. The Cu3d_x^2-y^2 orbitals are also destabilized, but less than the O2p_σ orbitals, because the Cu atoms are farther from Ca than the in-plane O. This simple view is supported by the fact that the variation in ϵ_p_σ^l with d^z_Ca, and also the variation in the LEH parameters with d^z_Ca, are twice as fast in the IP as in the OP [see Fig. <ref>(b) and Fig. <ref>]. This is because the IP is surrounded by twice as many Ca cations as the OP (see Fig. <ref>). However, note that the average values of the LEH parameters do not vary substantially, because the Δd^z_Ca dependencies of the LEH parameters in the IP and OP compensate each other. This explains why the increase in Δ E_xp^avg from P_amb to P_opt originates from P_a rather than P_c (see Sec. <ref>).

*d^z_Cu dependence of AB LEH parameters: The optimized value d^z_Cu ≃ 2.82 Å is lower than that from Zhang et al. (d^z_Cu ≃ 2.91 Å). Thus, we consider 0.0 Å ≤ Δd^z_Cu ≤ +0.2 Å to examine the d^z_Cu dependence of the AB LEH parameters. Increasing d^z_Cu causes a rapid increase in u^i [see Fig. <ref>(b)], due to the decrease in both v^i and R^i [see Fig. <ref>(d,e)]. The decrease in v^i is caused by the decrease in Δ E_xp^i [see Fig. <ref>(e,f)]. Δ E_xp^i decreases because the in-plane O anions in the OP become farther from those in the IP.
As a result, the O2p_σ electrons in the IP are stabilized [see Fig. <ref>(c)], because the Madelung potential from the O anions in the OP is weaker. However, the decrease in Δ E_xp^i may not be sufficient to explain the decrease in R^l. We see that, from Δd^z_Cu = 0.0 Å to Δd^z_Cu = -0.2 Å, R^o slightly decreases and R^i sharply decreases [see Fig. <ref>(d)]. The decrease in R^o is not consistent with the increase in Δ E_xp^o, which contributes to increasing R^o; also, the decrease in R^i is very sharp compared to the smooth decrease in Δ E_xp^i. Instead, the decrease in R^l may be caused by an increase in the cRPA screening between adjacent CuO_2 planes. This is intuitive because Δd^z_Cu = -0.2 Å reduces the distance between the CuO_2 planes in real space. This increases the overlap and hybridization between the M-ALWOs in the IP and OP, which may increase the cRPA screening [see also the discussion on the d^z_O(ap) dependence of the screening below]. The interplane cRPA screening particularly affects the IP, because the IP is adjacent to two OPs whereas the OP is adjacent to only the IP; this explains the sharp decrease in R^i.

*d^z_Ba dependence of AB LEH parameters: The optimized value d^z_Ba ≃ 1.96 Å is similar to that from Zhang et al. (d^z_Ba ≃ 1.98 Å). Still, for completeness, we consider -0.2 Å ≤ Δd^z_Ba ≤ 0.0 Å to examine the d^z_Ba dependence of the AB LEH parameters. Decreasing d^z_Ba does not cause a significant variation in u^l [see Fig. <ref>(b)]. Still, we note that v^o and Δ E_xp^o slightly increase [see Fig. <ref>(e,f)]. This is because the positive Madelung potential from the Ba cation felt by the OP is stronger (see the above discussion on the d^z_Ca dependence of Δ E_xp^l). Note that the positive Madelung potential from the Ba cation does not affect the IP, because the IP is separated from the Ba cation by the OP (see Fig. <ref>).

*d^z_O(ap) dependence of AB LEH parameters: The optimized value d^z_O(ap) ≃ 2.22 Å is slightly lower than that from Zhang et al. (d^z_O(ap) ≃ 2.32 Å). Thus, we consider 0.0 Å ≤ Δd^z_O(ap) ≤ +0.2 Å to examine the d^z_O(ap) dependence of the AB LEH parameters. In addition, we consider -0.2 Å ≤ Δd^z_O(ap) ≤ 0.0 Å to probe the effect of the apical O displacement at higher pressures. In the d^z_O(ap) dependence of u^o [see Fig. <ref>(b)], there is a sharp decrease in u^o when d^z_O(ap) decreases. This decrease has also been observed in the case of Bi2201 and Bi2212 <cit.>. It has two origins: (i) the decrease in v^o due to the decrease in Δ E_xp^o [see Fig. <ref>(e,f)], and, more prominently, (ii) the decrease in R^o [see Fig. <ref>(d)]. (ii) is due to the cRPA screening of the AB electrons by the apical O, and this screening increases when d^z_O(ap) decreases, as in Bi2201 and Bi2212 <cit.>. Note that, contrary to R^o, R^i does not decrease significantly when d^z_O(ap) decreases: this is because the IP is protected from the cRPA screening from the apical O by the OP, which separates the IP from the apical O (see Fig. <ref>).

A possible origin of (ii) is the increase in hybridization between the apical O2p_z orbital and the AB orbital in the OP. We show in Fig. <ref>(a) the partial density of states of the apical O2p_z M-ALWO. We see that the bands at the Fermi level have a slight apical O2p_z character, in addition to the dominant AB character. This originates from the hybridization between the AB orbital and the apical O2p_z orbital.
The apical O2p_z partial density of states at the Fermi level increases when d^z_O(ap) decreases, which suggests an increase in the hybridization between the apical O2p_z and AB orbitals. This is further supported by the increase in the amplitude |t^O(o),O(ap)(o)_p_σ,p_z| of the apical O2p_z/in-plane O2p_σ hopping when d^z_O(ap) decreases [see Fig. <ref>(b)], because the AB orbital is partly constructed from the in-plane O2p_σ orbital.

§ REFERENCES

[1] J. G. Bednorz and K. A. Müller, Z. Phys. B 64, 189 (1986).
[2] J. Torrance, Y. Tokura, S. LaPlaca, T. Huang, R. Savoy, and A. Nazzal, Solid State Commun. 66, 703 (1988).
[3] L. Gao, Y. Y. Xue, F. Chen, Q. Xiong, R. L. Meng, D. Ramirez, C. W. Chu, J. H. Eggert, and H. K. Mao, Phys. Rev. B 50, 4260 (1994).
[4] P. Dai, B. C. Chakoumakos, G. F. Sun, K. Wong, Y. Xin, and D. F. Lu, Physica C 243, 201 (1995).
[5] A. Yamamoto, N. Takeshita, C. Terakura, and Y. Tokura, Nat. Commun. 6, 8990 (2015).
[6] M. Nuñez-Regueiro, J. L. Tholence, E. V. Antipov, J. J. Capponi, and M. Marezio, Science 262, 97 (1993).
[7] M. T. Schmid, J.-B. Morée, R. Kaneko, Y. Yamaji, and M. Imada, Phys. Rev. X 13, 041036 (2023).
[8] M. F. Crommie, A. Y. Liu, A. Zettl, M. L. Cohen, P. Parilla, M. F. Hundley, W. N. Creager, S. Hoen, and M. S. Sherwin, Phys. Rev. B 39, 4231 (1989).
[9] C. Meingast, O. Kraut, T. Wolf, H. Wühl, A. Erb, and G. Müller-Vogt, Phys. Rev. Lett. 67, 1634 (1991).
[10] G. L. Belenky, S. M. Green, A. Roytburd, C. J. Lobb, S. J. Hagen, R. L. Greene, M. G. Forrester, and J. Talvacchio, Phys. Rev. B 44, 10117 (1991).
[11] U. Welp, M. Grimsditch, S. Fleshler, W. Nessler, J. Downey, G. W. Crabtree, and J. Guimpel, Phys. Rev. Lett. 69, 2130 (1992).
[12] C. Meingast, J. Karpinski, E. Jilek, and E. Kaldis, Physica C 209, 591 (1993).
[13] M. Mito, T. Imakyurei, H. Deguchi, K. Matsumoto, T. Tajiri, H. Hara, T. Ozaki, H. Takeya, and Y. Takano, J. Phys. Soc. Jpn. 81, 113709 (2012).
[14] M. Mito, H. Matsui, T. Imakyurei, H. Deguchi, T. Horide, K. Matsumoto, A. Ichinose, and Y. Yoshida, Appl. Phys. Lett. 104, 102601 (2014).
[15] M. Mito, T. Imakyurei, H. Deguchi, K. Matsumoto, H. Hara, T. Ozaki, H. Takeya, and Y. Takano, J. Phys. Soc. Jpn. 83, 023705 (2014).
[16] M. Mito, H. Goto, H. Matsui, H. Deguchi, K. Matsumoto, H. Hara, T. Ozaki, H. Takeya, and Y. Takano, J. Phys. Soc. Jpn. 85, 024711 (2016).
[17] F. Hardy, N. J. Hillier, C. Meingast, D. Colson, Y. Li, N. Barišić, G. Yu, X. Zhao, M. Greven, and J. S. Schilling, Phys. Rev. Lett. 105, 167002 (2010).
[18] M. Mito, K. Ogata, H. Goto, K. Tsuruta, K. Nakamura, H. Deguchi, T. Horide, K. Matsumoto, T. Tajiri, H. Hara, T. Ozaki, H. Takeya, and Y. Takano, Phys. Rev. B 95, 064503 (2017).
[19] S. N. Putilin, E. V. Antipov, O. Chmaissem, and M. Marezio, Nature 362, 226 (1993).
[20] P. Hohenberg and W. Kohn, Phys. Rev. 136, B864 (1964).
[21] W. Kohn and L. J. Sham, Phys. Rev. 140, A1133 (1965).
[22] F. Aryasetiawan, M. Imada, A. Georges, G. Kotliar, S. Biermann, and A. I. Lichtenstein, Phys. Rev. B 70, 195104 (2004).
[23] F. Aryasetiawan, K. Karlsson, O. Jepsen, and U. Schönberger, Phys. Rev. B 74, 125106 (2006).
[24] M. Imada and T. Miyake, J. Phys. Soc. Jpn. 79, 112001 (2010).
[25] M. Hirayama, T. Miyake, and M. Imada, Phys. Rev. B 87, 195144 (2013).
[26] M. Hirayama, T. Misawa, T. Miyake, and M. Imada, J. Phys. Soc. Jpn. 84, 093703 (2015).
[27] M. Hirayama, Y. Yamaji, T. Misawa, and M. Imada, Phys. Rev. B 98, 134501 (2018).
[28] M. Hirayama, T. Misawa, T. Ohgoe, Y. Yamaji, and M. Imada, Phys. Rev. B 99, 245155 (2019).
[29] T. Ohgoe, M. Hirayama, T. Misawa, K. Ido, Y. Yamaji, and M. Imada, Phys. Rev. B 101, 045124 (2020).
[30] J.-B. Morée, M. Hirayama, M. T. Schmid, Y. Yamaji, and M. Imada, Phys. Rev. B 106, 235150 (2022).
[31] M. Hirayama, M. T. Schmid, T. Tadano, T. Misawa, and M. Imada, arXiv:2207.12595 (2022).
[32] Supplemental material.
[33] X. Zhang, W. Lu, and C. Ong, Physica C 289, 99 (1997).
[34] A. R. Armstrong, W. I. F. David, I. Gameson, P. P. Edwards, J. J. Capponi, P. Bordet, and M. Marezio, Phys. Rev. B 52, 15551 (1995).
[35] B. Hunter, J. Jorgensen, J. Wagner, P. Radaelli, D. Hinks, H. Shaked, R. Hitterman, and R. Von Dreele, Physica C 221, 1 (1994).
[36] J. H. Eggert, J. Z. Hu, H. K. Mao, L. Beauvais, R. L. Meng, and C. W. Chu, Phys. Rev. B 49, 15299 (1994).
[37] P. Bordet, S. Le Floch, J. Capponi, C. Chaillout, M. Gorius, M. Marezio, J. Tholence, and P. Radaelli, Physica C 262, 151 (1996).
[38] H. Kotegawa, Y. Tokunaga, K. Ishida, G.-q. Zheng, Y. Kitaoka, H. Kito, A. Iyo, K. Tokiwa, T. Watanabe, and H. Ihara, Phys. Rev. B 64, 064515 (2001).
[39] K. Nakamura, Y. Yoshimoto, Y. Nomura, T. Tadano, M. Kawamura, T. Kosugi, K. Yoshimi, T. Misawa, and Y. Motoyama, arXiv:2001.02351; doi:10.1016/j.cpc.2020.107781.
[40] H. Sakakibara, K. Suzuki, H. Usui, K. Kuroki, R. Arita, D. J. Scalapino, and H. Aoki, Phys. Rev. B 86, 134520 (2012).
[41] K. Momma and F. Izumi, J. Appl. Crystallogr. 44, 1272 (2011).
[42] T. Misawa, S. Morita, K. Yoshimi, M. Kawamura, Y. Motoyama, K. Ido, T. Ohgoe, M. Imada, and T. Kato, Comput. Phys. Commun. 235, 447 (2019).
[43] P. Giannozzi et al., J. Phys.: Condens. Matter 21, 395502 (2009).
[44] P. Giannozzi et al., J. Phys.: Condens. Matter 29, 465901 (2017).
[45] M. Schlipf and F. Gygi, Comput. Phys. Commun. 196, 36 (2015).
[46] J. P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. 77, 3865 (1996).
[47] L. Nordheim, Ann. Phys. 9, 607 (1931).
[48] N. Marzari and D. Vanderbilt, Phys. Rev. B 56, 12847 (1997).
[49] I. Souza, N. Marzari, and D. Vanderbilt, Phys. Rev. B 65, 035109 (2001).
[50] T. Miyake, F. Aryasetiawan, and M. Imada, Phys. Rev. B 80, 155134 (2009).
[51] T. Misawa and M. Imada, Phys. Rev. B 75, 115121 (2007).
http://arxiv.org/abs/2312.16402v1
{ "authors": [ "Jean-Baptiste Morée", "Youhei Yamaji", "Masatoshi Imada" ], "categories": [ "cond-mat.supr-con", "cond-mat.str-el" ], "primary_category": "cond-mat.supr-con", "published": "20231227042006", "title": "Dome structure in pressure dependence of superconducting transition temperature for HgBa$_2$Ca$_2$Cu$_3$O$_8$ -- Studies by $ab$ $initio$ low-energy effective Hamiltonian" }
Evolutionary Swarm Robotics: Dynamic Subgoal-Based Path Formation and Task Allocation for Exploration and Navigation in Unknown Environments

Lavanya Ratnabala (1), Robinroy Peter (2), E. Y. A. Charles (3)
(1) Department of Computer Science, University of Jaffna, Jaffna, Sri Lanka. lavanyaratnabala@gmail.com
(2) Department of Computer Science, University of Jaffna, Jaffna, Sri Lanka. robinroy.peter@gmail.com
(3) Department of Computer Science, University of Jaffna, Jaffna, Sri Lanka. charles.ey@univ.jfn.ac.lk

===========================================================================

This research paper addresses the challenges of exploration and navigation in unknown environments from an evolutionary swarm robotics perspective. Path formation plays a crucial role in enabling cooperative swarm robots to accomplish these tasks. The paper presents a method called sub-goal-based path formation, which establishes a path between two different locations by exploiting visually connected sub-goals. Simulation experiments conducted in the Argos simulator demonstrate the successful formation of paths in the majority of trials. Furthermore, the paper tackles the problem of inter-robot collisions (traffic) among a large number of robots engaged in path formation, which negatively impacts the performance of the sub-goal-based method. To mitigate this issue, a task allocation strategy is proposed, leveraging local communication protocols and light-signal-based communication. The strategy evaluates the distance between points and determines the required number of robots for the path formation task, reducing unwanted exploration and traffic congestion. The performance of the sub-goal-based path formation and task allocation strategy is evaluated by comparing path length, time, and resource reduction against the A* algorithm. The simulation experiments demonstrate promising results, showcasing the scalability, robustness, and fault-tolerance characteristics of the proposed approach.

Keywords: Swarm, Path formation, Task allocation, Argos, Exploration, Navigation, Sub-goal

§ INTRODUCTION

[GitHub Repository: Dynamic Subgoal-Based Path Formation — https://github.com/Robinroy-peter/Dynamic-Subgoal-Based-Path-Formation-and-Task-Allocation-Exploration-Navigation-Unknown-Environments.git]

Robotics has emerged as a captivating field of research, experiencing continuous growth over the past few decades. The development of highly sophisticated robots capable of handling the demanding challenges of the real world efficiently is no easy feat. Instead of relying on a single advanced robot, the use of multiple robots becomes necessary to tackle vast and complex tasks effectively. These systems, known as multi-robot systems (MRS), involve the coordination and cooperation of multiple entities, primarily robots, working together to achieve common goals. In some cases, human beings and centralized systems may also be part of these systems. However, effectively coordinating different agents in an MRS poses several challenges, particularly in terms of autonomy and human factors.
The deployment and operation of these systems in real-world scenarios require a broad sense of autonomy, wherein robots possess enhanced capabilities and intelligence to operate under adverse conditions for extended periods. Within the realm of MRS, swarm robotics focuses on the utilization of a large number of simple robots. This branch of robotics draws inspiration from swarm intelligence observed in social animals such as ants, bees, and fish. These creatures demonstrate remarkable abilities to handle difficult tasks collectively, surpassing what individuals can achieve alone. Swarm robotics embraces this concept as its foundation: global behavior emerges from the coordination of simple individual behaviors or rules, akin to self-organization in natural swarm systems. Swarm robots possess limited sensing and actuating capabilities, with communication primarily limited to local interactions, such as those with neighboring robots and the environment. In scenarios requiring extensive area coverage, a multitude of robots, ranging from hundreds to thousands or more, can be deployed. In such cases, centralized organization can lead to system failures due to information overflow. However, swarm robotic systems do not rely on a centralized agent or controller, as macroscopic or global behavior arises from decentralized, local interactions. Swarm robotic systems exhibit scalability, ensuring their functioning and efficiency remain unaffected by changes in the number of robots. With a foundation in local sensing, swarm systems are adaptable and flexible, able to respond to disturbances within the working environment. The use of simple robots in swarm robotics leads to reduced production costs compared to the development of a single complex robot, thanks to their small size, simple shape, and low computational complexity. Key factors in swarm robot design are their miniature form factor and cost-effectiveness. Each member of a swarm team must be resource-efficient and energy-conscious. Efficient cooperative operation in a swarm robot system calls for a method of cooperative subgoal-based path formation that enables obstacle avoidance even in the absence of, or with limited, inter-robot communication. However, employing a large number of robots can lead to decreased performance due to inter-robot collisions and traffic congestion. Task allocation methods can enhance efficiency by forming the shortest and quickest paths. The objective is to assign tasks to the robots in a manner that optimizes cooperation and achieves the global objective more efficiently. In our case, the effective assignment of robots to the path formation task is crucial, and we propose to employ task allocation techniques. In the context of this paper, the two tasks at hand are resting and path formation.

§ PROBLEM DESCRIPTION

In the realm of cooperative motions and applications involving a group of robots, the generation of a navigable path presents a crucial and complex challenge. Existing approaches for generating paths between two unknown targets have not been specifically designed for other cooperative tasks, such as cooperative navigation or pushing. Furthermore, the quality and applicability of these approaches in other cooperative swarm-robot tasks have not been thoroughly addressed.
Since complex tasks with swarms of robots often require a combination of various cooperative tasks, it becomes essential to have a cooperative path generation algorithm that can generate paths applicable to multiple cooperative robot swarm tasks.However, a significant issue arises when a large number of robots work together to form a path within a confined area: traffic congestion. The presence of traffic congestion can severely impact the effectiveness and efficiency of the overall system. As robots move randomly with limited visual and communication range, they may become lost or venture into unwanted areas, hindering the path formation process.To overcome these challenges, the problem at hand calls for the utilization of task allocation techniques to ensure that only the necessary robots are involved in the path formation task. By dynamically allocating tasks to specific robots, the system can prevent unnecessary exploration and minimize the occurrence of traffic congestion. § RELATED WORKSCollective navigation involves a robot reaching a destination by traversing an unknown environment with the assistance of other robots. This task is typically accomplished using communication techniques and finite-state machines. In one study, researchers proposed a strategy for transporting a large object to a goal using a substantial number of mobile robots, which are considerably smaller than the object itself. The robots only push the object at positions where the direct line of sight to the goal is obstructed by the object <cit.>. In their future work, they discussed the use of sub-goal-based path formation to push the object through complex environments with numerous obstacles and larger scales. The proposed method involved setting intermediate goals between the starting point and the goal point. Robots would follow the path discovered and push the object toward each sub-goal consecutively until reaching the final goal.Inspired by natural phenomena, such as ant foraging, researchers have proposed several methods for creating efficient paths between unknown targets <cit.>, utilizing artificial pheromones. These methods employ various techniques to generate artificial pheromones, including the release of alcohol, heat, odor, visual marks, or RFID tags. While these methods have shown efficacy in creating efficient paths, artificial pheromone systems may not be reliable in more realistic scenarios. As an alternative, a novel approach involving local IR (infrared) range bearing has been proposed <cit.>.Another efficient approach to path formation involves creating a chain using physically connected or non-connected field-based methods. Two different controllers, vector-field and chain, have been proposed to form paths. The process begins with robots starting to form a path from the prey once it has been detected. The direction of the vector field depends on the LED light direction of the robots already part of the path. Each robot probabilistically decides whether to join the path or not. Additionally, an evolutionary-based approach has been suggested for generating paths between two targets, as evolutionary robotics has shown promising results in solving cooperative tasks in swarm robotics.Various strategies exist to address task allocation in swarm robots. In a multi-foraging scenario, researchers proposed a task allocation model using the distributed bees algorithm (DBA) <cit.>, inspired by the foraging behavior of natural bee colonies. 
Robots were designed to use broadcast communication to inform other robots within range about the estimated location and quality of discovered targets in a decentralized manner. Another method <cit.> eliminates the need for global knowledge, communication between robots, and centralized components, relying solely on local interactions and individual robot perceptions. In another approach <cit.>, each robot has a probability of employing task partitioning, defined by sigmoid functions. Two novel approaches have been proposed to assign robots to announced tasks <cit.>. One approach relies on simple reactive mechanisms based on light signal interactions, while the other employs a more advanced gossip-based communication scheme to announce task requirements among the robots. A self-organizing method has also been proposed for allocating a swarm of robots to perform a foraging task with sequentially dependent sub-tasks <cit.>, based on the response threshold model. In this method, each robot updates its response threshold based on the task demand and the number of neighboring robots performing the task. The Argos simulator <cit.> is a notable multi-robot simulator capable of simulating up to 10,000 robots simultaneously. One key feature of the Argos simulator is its ability to apply different physics engines to different regions of the arena and run them in parallel. Argos has been specifically designed and developed for swarm robotics research.

§ METHODOLOGY

In this study, we propose a sub-goal-based path formation method using Finite State Machines (FSMs). Building upon previous approaches that utilize robotic chains, our method introduces dynamic robots within a sub-goal without requiring local intercommunication among them. We adapt foraging concepts to form a path between a nest and a goal, allowing for increased flexibility in various environments. Unlike previous methods, our approach addresses path formation using swarm robots in obstacle-ridden and complex environments. To improve path efficiency, we introduce Recovery Behavior/Hidden Location Identifier robots that indicate and complete the path effectively. Two types of alignment processes are employed: one from the starting point to the goal, and another from the goal back to the starting position. A state model of the path formation process is depicted in Figure <ref>. The model consists of various states, each representing a specific behavior. Table 1 provides descriptions of the different states: Resting, Exploring, Subgoal, Return to Nest, Recovery Behavior, Subgoal Optimization 1, and Subgoal Optimization 2.

§.§ Experimental setup

The experimental setup for this study revolves around the s-bot, which has been developed as part of the SWARM-BOTS project. While an individual s-bot may have limited capabilities, a swarm of s-bots is designed to overcome these limitations and operate efficiently. Since the physical s-bots are still under construction, all experiments were conducted in simulation. Prototype s-bots were constructed, and their specifications were used to develop the simulation software Argos. This software provides a 2D/3D simulation environment that takes into account the dynamics and collisions of rigid bodies.
By utilizing Argos, we were able to simulate the behavior of the s-bots and test our sub-goal-based path formation method effectively. Table 2 presents the state transitions within the model, explaining the conditions and events that trigger a transition from one state to another. Each transition is assigned a label (a to i) and corresponds to a specific situation: the successful completion of path formation, unsuccessful exploration, discovery of a goal or sub-goal, return to the nest, encounter with another swarm robot in certain states, loss of visibility of the goal or sub-goal, successful formation of a sub-goal, completion of the first optimization, and completion of the resting time period. The finite state machine model depicted in Figure <ref>, along with the detailed information provided in Tables 1 and 2, serves as the foundation for our sub-goal-based path formation method.

§.§ Subgoal Formation

In the subgoal formation phase, the robots begin by exploring the environment in order to find a goal. During the exploration state, each robot engages in random exploration, increasing its distance from the nest. If the minimum unsuccessful exploration time is exceeded without finding the goal, the robot returns to the starting position to initiate the next exploration. Once a robot detects the goal within a range of 30 (in Argos length units), it enters the subgoal state-changing range. At this point, the robot emits a color signal, specifically white, and moves towards the starting point using a potential field approach. It positions itself as a static subgoal at a range of 70. This process is illustrated in Figure <ref>. A subgoal robot can also serve as a robot beacon, allowing other robots to explore and detect it as their goal. This robot beacon, referred to as a subgoal-member, communicates with other robots by emitting a color signal through its LED ring. When a robot becomes a subgoal, it emits the color red. This distributed process results in the formation of one or more subgoal robots. The process continues until the robots reach the starting point, ultimately forming a complete path with intermediate subgoals from the goal to the starting point. This process is depicted in Figures <ref> and <ref>. One unique aspect of our research is the introduction of recovery robot behavior. Although robots can become subgoals within a range of 70, there may be cases where the goal/subgoal is not visible within that range. In such situations, the robot continues moving until it reaches the maximum visible range of 100. If the robot loses visibility of the goal/subgoal within its visibility range, it assumes that there is an obstacle between the robot and the goal/subgoal. Consequently, the robot switches to the recovery robot state, as illustrated in Figure <ref>. Recovery robots inform other robots to avoid entering the invisibility area. If a robot detects a recovery robot within a range of 20, the recovery robot repels it, ensuring that the robot avoids entering the blind spot. In certain cases, the robot may enter the repulsion range, triggering a state change and eventually becoming a subgoal.

§.§ Path formation strategies

The path formation strategies in our approach involve two heuristic optimization processes: one from the starting point to the goal, and another from the goal to the starting point. The optimization process begins when a subgoal robot detects the nest (represented by the color blue).
The first subgoal robot from the nest initiates the first alignment process with the second subgoal robot. Once this process is successfully completed, the first subgoal robot starts emitting the color blue, indicating that it is acting as a sub-nest. This process continues with subsequent subgoal robots until the last subgoal robot is reached. Four parameters are utilized in the first optimization strategy, as depicted in Figure <ref>: θ_1 represents the goal/subgoal angle, θ_2 represents the nest/sub-nest angle, while x and y denote the distances between the goal/subgoal and the processing robot, and between the nest/sub-nest and the processing robot, respectively. The first optimization process involves adjusting the robot's position to minimize the error angle. Once the last subgoal robot completes the first alignment process, it proceeds to perform the second alignment process from the goal to the nest. This second process continues until it reaches the first subgoal robot from the starting position, as illustrated in Figure <ref>. In cases where the alignment robot loses visibility of the subgoal or sub-nest within a certain visibility range during the first or second alignment process, it transitions to the recovery robot state. The role of the recovery robot is to inform other robots to avoid entering the invisibility area while they are in the subgoal formation process. Our control system has been designed to ensure the desired behavior described above is achieved.

§.§ Task allocation model

Our task allocation model is designed to ensure effective path formation while utilizing resources efficiently and aiming to create the shortest path. Instead of deploying all robots for the path formation task, which could lead to decreased performance due to traffic congestion, our method allocates only the required number of robots for this task. The remaining robots are assigned to the resting task. To determine the number of robots needed to form the path from the start to the end point, we employ a finite state machine (FSM) as depicted in Figure <ref>. Local interactions among robots are facilitated through light-signal-based communication. During the exploration phase, robots can detect the goal within their range of vision. If a robot detects the goal, it changes its color to indicate to other robots that it has found the goal. If the goal is not found, the robot moves towards the starting point based on the potential field. The length of the path is calculated by each robot from its speed and exploration time:

l = s t,    (1)

where s is the robot speed and t the exploration time. This information is used by the robots to determine when their energy level has reached its optimum, indicating that they should return to the nest. Based on the calculated path length l and the robot's visual range v, the number of robots needed to form the path is

n_0 = l / v.    (2)

However, n_0 is not directly applied to perform the path formation task due to the complexity of the environment and other factors involved in subgoal-based path formation. In such cases, some recovery robots are required for the subgoal-based path formation task, while other robots assume the role of subgoals without participating in the path. To account for these behaviors, a fixed factor δ, set by the complexity of the environment, is added:

n = l / v + δ.    (3)

The value of δ depends on the type of environment, allowing for flexibility in adapting the task allocation model to different scenarios (a short sketch of this computation is given below).
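To make the allocation arithmetic of Equations (1)-(3) concrete, the following minimal Python sketch computes the number of robots to assign to path formation. The function name, the ceiling rounding, and the example numbers are illustrative assumptions on our part; the paper expresses l and v in Argos default units.

```python
import math

def required_robots(exploration_time: float, robot_speed: float,
                    visual_range: float, delta: int = 0) -> int:
    """Number of robots to allocate to path formation, per Eqs. (1)-(3)."""
    path_length = robot_speed * exploration_time   # Eq. (1): l = s * t
    n0 = path_length / visual_range                # Eq. (2): n0 = l / v
    return math.ceil(n0) + delta                   # Eq. (3): n = l / v + delta

# Example: 120 s of exploration at 0.05 units/s with a 0.6-unit visual
# range, plus two extra robots for an obstacle-rich arena (delta = 2).
print(required_robots(120.0, 0.05, 0.6, delta=2))  # -> 12
```

In practice, δ would be chosen per environment class (open, obstacle, or complex), as discussed above.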
§.§ Local Communication Protocol

Once the goal is found and a robot becomes the goal founder, the other robots enter the decision-making state through light-signal-based interactions. They all proceed to the start point and initiate the process of determining which robots will be assigned to the path formation task and which ones will perform the resting task. This is where the local communication protocol comes into play. The task allocation process utilizes a local communication protocol that includes both broadcast and unicast signals. As illustrated in Figure <ref>, the goal founder robot broadcasts a signal to instruct other robots to engage in the path formation task. Upon receiving this request, the other robots respond with unicast signals containing their robot IDs, indicating their readiness to participate in the path formation task. The goal founder robot acknowledges the receipt of these signals and decrements the count of required robots. Once the desired number of robots has been allocated, the remaining robots transition to the resting task, and the communication process is terminated. To prevent collisions between the path formation robots and the resting robots, the resting robots remain at the deployment point. To achieve this, a potential field is created towards the initial deployment point using the Euclidean distance, as described in Figure <ref>. The initial deployment point (x_1, y_1) and the current position (x_2, y_2) are obtained from the positioning sensors. The Euclidean distance between the start and current positions is

d = √((x_1 - x_2)^2 + (y_1 - y_2)^2).    (4)

Furthermore, the angle α between the start position and the current position with respect to the y-axis is

α = sin^-1(|y_1 - y_2| / d).    (5)

With this angle α and the robot's heading angle, a potential field is generated to guide the robot towards the initial deployment point. Consequently, the robots utilize this potential field to move towards their respective initial points for resting, where they ultimately come to a rest.
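The homing behavior of Equations (4)-(5) reduces to a few lines of code. Note that Eq. (5) uses |y_1 - y_2| and therefore yields an angle in [0, π/2] measured from the y-axis; a full controller would recover the quadrant from the signs of the coordinate differences. The helper below is a sketch under these assumptions, with hypothetical names:

```python
import math

def homing_vector(start_xy, current_xy):
    """Distance (Eq. 4) and bearing angle w.r.t. the y-axis (Eq. 5)
    from the current position back to the initial deployment point."""
    x1, y1 = start_xy
    x2, y2 = current_xy
    d = math.hypot(x1 - x2, y1 - y2)          # Eq. (4)
    if d == 0.0:
        return 0.0, 0.0                       # already at the deployment point
    alpha = math.asin(abs(y1 - y2) / d)       # Eq. (5)
    return d, alpha

# A robot deployed at (0, 0) and currently at (1.0, 2.0):
d, alpha = homing_vector((0.0, 0.0), (1.0, 2.0))
# d = 2.24, alpha = 1.11 rad; combined with the heading sensor this
# defines the potential field that drives the robot back to rest.
```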
§ EVALUATION AND COMPARISONS

The evaluation and comparisons of the testing model were conducted in eight different environments: three open environments, three obstacle environments, and two complex environments. Each environment was tested with robot counts ranging from 60 to 100. Figure <ref> provides a visual comparison of the subgoal state path (blue), optimization 1 path (purple), optimization 2 path (green), and the A* algorithm path (red) across the 40 test cases. In Figures <ref>, <ref>, and <ref>, the A* algorithm path is represented in red, the sub-goal formation final path in blue, and the result obtained from improving the path using task allocation strategies in green. The plotted dots in each graph indicate the locations of the robots serving as intermediate sub-goals. The success rate of sub-goal-based path formation was evaluated for the different environment types and robot counts. The final path was compared with the sub-goal path, the optimization 1 path, the optimization 2 path, and the A* algorithm path. The performance of the task allocation model was also tested against the A* algorithm and against the path obtained before the task allocation strategies were applied. Table 5 presents the time taken to form the path (in Argos default time steps), the path length (in Argos default length units), and the percentage of resource reduction achieved in the eight environment types. The "Without" column refers to the path formation model without improvement through task allocation, while the "With" column represents the path after incorporating the task allocation model. In the evaluation, the model without task allocation demonstrated that 25% of the paths formed using sub-goal-based path formation were shorter than those generated by the A* algorithm. The success rate of forming paths without task allocation across the 40 test cases was 80%. Regarding the task allocation model, resource efficiency was calculated from the deployed and allocated robot counts. On average, across the 40 test cases, the model achieved a 61.93% reduction in resource utilization. Path efficiency was evaluated based on path length, with 40% of the test cases forming paths shorter than those generated by the A* algorithm. In all 40 test cases, the path formed using the model with task allocation was shorter than the path formed without task allocation. Furthermore, 87.5% of the cases with task allocation formed paths more quickly than those without task allocation.

§ CONCLUSIONS AND FUTURE WORK

In this work, we have tackled the challenge of collective exploration and navigation using a swarm of robots. Our approach involved the development of a behavior-based controller inspired by foraging behavior, which relied solely on local information. We implemented and analyzed three different control strategies: the subgoal strategy, the aligning strategy 1, and the aligning strategy 2. The subgoal strategy served as the foundation, resulting in the formation of static subgoals. The aligning strategy 1 extended this approach by introducing adjustments to the position of subgoal members, aiming to achieve specific distances and angles with respect to their neighbors. This led to the alignment of subgoals from the start to the goal. The aligning strategy 2 further expanded on aligning strategy 1, incorporating recovery robots to maintain a certain distance from obstacles and walls. Our algorithm dynamically adapted the path in any type of environment, ensuring robustness. Furthermore, we proposed task allocation mechanisms for swarm subgoal-based path formation. Through light-signal-based interactions, the robots initially explored the environment to locate a goal. Subsequently, communication protocols were employed during the decision-making phase to effectively allocate tasks. The task allocation model successfully utilized robot resources by allocating only the necessary number of robots for path formation tasks. This approach reduced resource requirements and deployment costs, allowing for the parallel utilization of excess robot resources in other tasks. Comparisons with the A* algorithm and the model without task allocation demonstrated that our proposed model consistently formed the shortest paths. Future work could involve the implementation of more advanced communication protocols and testing the model with real robots in real-world environments.
This would provide further validation and insights into the practical applicability of our approach.

[b1] J. Chen, M. Gauci, W. Li, A. Kolling and R. Groß, "Occlusion-Based Cooperative Transport with a Swarm of Miniature Mobile Robots", IEEE Transactions on Robotics, 2015.
[b2] R. Groß and M. Dorigo, "Towards group transport by swarms of robots", Bio-Inspired Computation, Vol. 1, Nos. 1/2, 2009.
[b3] S. Nouyan, A. Campo, and M. Dorigo, "Path formation in a robot swarm: Self-organized strategies to find your way home", IRIDIA, CoDE, Université Libre de Bruxelles, Brussels, Belgium, 2004.
[b4] J. Werfel, "Collective construction with robot swarms", in Morphogenetic Engineering, Springer, 2012.
[b5] K. Lerman and A. Galstyan, "Two paradigms for the design of artificial collectives", in Proceedings of the First Annual Workshop on Collectives and Design of Complex Systems, NASA-Ames, CA, 2004.
[b6] L. Panait and S. Luke, "Cooperative multi-agent learning: the state of the art", Autonomous Agents and Multi-Agent Systems, 11(3):387-434, 2005.
[b7] O. Trullier, S. Wiener, A. Berthoz, and J. Meyer, "Biologically-based artificial navigation systems: Review and prospects", Progress in Neurobiology, 51:483-544, 1997.
[b8] C. W. Reynolds, "Flocks, herds and schools: A distributed behavioral model", ACM SIGGRAPH Computer Graphics, 1987.
[b9] M. Bonani, V. Longchamp and S. Magnenat, "The MarXbot, a Miniature Mobile Robot Opening New Perspectives for the Collective-Robotic Research", Int. Conf. on Intelligent Robots and Systems (IROS), Taiwan, pp. 4187-4193, 2010.
[b10] A. Jevtic, A. Gutiérrez, D. Andina and M. Jamshidi, "Distributed Bees Algorithm for Task Allocation in Swarm of Robots", IEEE Systems Journal, June 2012.
[b11] A. Brutschy, G. Pini, C. Pinciroli, M. Birattari, and M. Dorigo, "Self-organized Task Allocation to Sequentially Interdependent Tasks in Swarm Robotics", IRIDIA Technical Report Series, May 2012.
[b12] G. Pini, A. Brutschy, M. Frison, A. Roli, M. Dorigo, and M. Birattari, "Task partitioning in swarms of robots: An adaptive method for strategy selection", IRIDIA Technical Report Series, May 2011.
[b13] F. Ducatelle, A. Förster, G. A. Di Caro and L. M. Gambardella, "New task allocation methods for robotic swarms", in Proceedings of the 9th IEEE/RAS Conference on Autonomous Robot Systems and Competitions, May 2009.
[b14] Y. Yang, X. Chen, Q. Li and Y. Tian, "Swarm Robots Task Allocation Based on Local Communication", in 2010 International Conference on Computer, Mechatronics, Control and Electronic Engineering (CMCE), 2010.
[b15] J. Chen, M. Gauci, W. Li and A. Kolling, "Occlusion-Based Cooperative Transport with a Swarm of Miniature Mobile Robots", IEEE Transactions on Robotics, vol. 31, April 2015.
[b16] C. Pinciroli, V. Trianni, R. O'Grady, G. Pini, A. Brutschy, M. Brambilla, N. Mathews, E. Ferrante, G. Di Caro, F. Ducatelle, M. Birattari, L. M. Gambardella and M. Dorigo, "ARGoS: a modular, parallel, multi-engine simulator for multi-robot systems", Springer Science+Business Media, New York, 2012.
[b17] A. Campo, S. Nouyan, M. Birattari, R. Groß and M. Dorigo, "Negotiation of goal direction for cooperative transport", IRIDIA Technical Report Series, April 2006.
[b18] W. Lee, N. Vaughan and D. Kim, "Task Allocation into a Foraging Task with a Series of Subtasks in Swarm Robotic System", IEEE Transactions and Journals, June 2020.
[b19] B. Pang, Y. Song, C. Zhang, H. Wang, and R. Yang, "Autonomous Task Allocation in a Swarm of Foraging Robots: An Approach Based on Response Threshold Sigmoid Model", International Journal of Control, Automation and Systems, 17 (2019).
[b20] D. Jha, "Algorithms for Task Allocation in Homogeneous Swarm of Robots", in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2018.
[b21] J. Zhou, D. Mu, F. Yang and G. Dai, "Labor division for swarm robotic systems with arbitrary finite number of task types", in Proceedings of the IEEE International Conference on Information and Automation, Hailar, China, July 2014.
[b22] S. Goss, S. Aron, J.-L. Deneubourg and J. M. Pasteels, "Self-organized shortcuts in the Argentine ant", Springer, December 1989.
[b23] D. Payton, M. Daily, R. Estkowski, M. Howard and C. Lee, "Pheromone Robotics", Springer, November 2001.
http://arxiv.org/abs/2312.16606v1
{ "authors": [ "Lavanya Ratnabala", "Robinroy Peter", "E. Y. A. Charles" ], "categories": [ "cs.RO", "cs.MA", "cs.NE" ], "primary_category": "cs.RO", "published": "20231227151356", "title": "Evolutionary Swarm Robotics: Dynamic Subgoal-Based Path Formation and Task Allocation for Exploration and Navigation in Unknown Environments" }
http://arxiv.org/abs/2312.15992v1
{ "authors": [ "Joshua N. Benabou", "Adriano Testa", "Chen Heinrich", "Henry S. Grasshorn Gebhardt", "Olivier Doré" ], "categories": [ "astro-ph.CO" ], "primary_category": "astro-ph.CO", "published": "20231226105123", "title": "The Galaxy Bispectrum in the Spherical Fourier-Bessel Basis" }
huangyj@illinois.edu
Department of Physics, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
Stanford PULSE Institute, SLAC National Accelerator Laboratory, Menlo Park, CA 94025, USA
Department of Applied Physics, Stanford University, Stanford, CA 94305, USA
Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory, Menlo Park, CA 94025, USA
peihao.sun@unipd.it
Stanford PULSE Institute, SLAC National Accelerator Laboratory, Menlo Park, CA 94025, USA
Dipartimento di Fisica e Astronomia "Galileo Galilei", Università degli Studi di Padova, Padova 35131, Italy
Stanford PULSE Institute, SLAC National Accelerator Laboratory, Menlo Park, CA 94025, USA
Department of Physics, Arizona State University, Tempe, AZ 85287, USA
Linear Coherent Light Source, SLAC National Accelerator Laboratory, Menlo Park, CA 94025, USA
RIKEN SPring-8 Center, 1-1-1 Kouto, Sayo-cho, Sayo-gun, Hyogo 679-5148, Japan
Stanford PULSE Institute, SLAC National Accelerator Laboratory, Menlo Park, CA 94025, USA
Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory, Menlo Park, CA 94025, USA
Department of Chemistry, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
Stanford PULSE Institute, SLAC National Accelerator Laboratory, Menlo Park, CA 94025, USA
Department of Applied Physics and Materials Science, California Institute of Technology, Pasadena, CA 91125, USA
Department of Physics, Southern University of Science and Technology (SUSTech), Shenzhen, Guangdong 518055, China
Department of Materials Science and Engineering, Northwestern University, Evanston, IL 60208-3108, USA
Stanford PULSE Institute, SLAC National Accelerator Laboratory, Menlo Park, CA 94025, USA
Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory, Menlo Park, CA 94025, USA
RIKEN SPring-8 Center, 1-1-1 Kouto, Sayo-cho, Sayo-gun, Hyogo 679-5148, Japan
Department of Chemistry, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
Linear Coherent Light Source, SLAC National Accelerator Laboratory, Menlo Park, CA 94025, USA
dreis@stanford.edu
Stanford PULSE Institute, SLAC National Accelerator Laboratory, Menlo Park, CA 94025, USA
Department of Applied Physics, Stanford University, Stanford, CA 94305, USA
Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory, Menlo Park, CA 94025, USA

We demonstrate that the absorption of femtosecond hard x-ray pulses can excite quasi-spherical, high-amplitude and high-wavevector coherent acoustic phonon wavepackets using an all hard-x-ray pump-probe scattering experiment. The time- and momentum-resolved diffuse scattering signal is consistent with strain pulses induced by the rapid electron cascade dynamics following photoionization at uncorrelated excitation centers. We quantify key parameters of this process, including the localization size of the strain wavepacket and the photon energy conversion efficiency into elastic energy.
The parameters are determined by the photoelectron and Auger electron cascade dynamics, as well as the electron-phonon interaction. In particular, we obtain the localization size of the observed strain wave packet to be 1.5 and 2.5 nm for bulk SrTiO_3 and KTaO_3 single crystals, respectively, even though there are no nanoscale structures or light-intensity patterns that would ordinarily be required to generate acoustic waves of wavelengths much shorter than the penetration depth. In GaAs and GaP, by contrast, we do not observe a signal above background. The results provide crucial information on the mechanism of x-ray energy deposition into matter and shed light on the shortest collective length scales accessible to coherent acoustic phonon generation using x-ray excitation, facilitating future x-ray study of high-wavevector acoustic phonons and thermal transport at the nanoscale.

Hard X-ray Generation and Detection of Nanometer-Scale Localized Coherent Acoustic Wave Packets in SrTiO_3 and KTaO_3
David A. Reis
January 14, 2024
=====================================================================================================================

§ INTRODUCTION

Fundamental x-ray-matter interactions are typically dominated by photoionization of core electrons, creating highly excited states that initially decay on the femtosecond time scale through Auger-Meitner decay and characteristic fluorescence <cit.>. The subsequent cascade of secondary excited states involves the inelastic scattering of high-energy electrons and, to a lesser extent, photons. This creates additional core excited states, and a plethora of both single-particle and collective excitations including electron-hole pairs, plasmons, polarons, and phonons <cit.> in hard condensed matter systems. This exponentially complex process of secondary interactions serves to either induce or avoid radiation damage depending on how effectively it dissipates the high energy density associated with localized x-ray excitation. Thus, it is important to understand experimentally the energy relaxation processes and subsequent structural dynamics following x-ray ionization on the relevant length and time scales. This is particularly critical for experiments that utilize the high flux and short pulse duration of x-ray free electron lasers (XFELs) to create and/or probe atomic-scale dynamics.
Even the most robust materials are not immune to single-shot radiation damage in the focused beam of an XFEL, where intensities can be high enough to saturate the photoionization cross-section <cit.> as well as induce multi-photon K-shell absorption <cit.> and Compton scattering <cit.>. In recent x-ray pump, x-ray probe experiments on diamond excited beyond the single-shot damage threshold, the atomic motion appeared frozen for the first 20 fs <cit.>, while in proteins dense-environment effects have been found to strongly affect local radiation-damage-induced structural dynamics <cit.>. It is equally important to understand the structural dynamics induced by x-ray absorption below the single-shot damage threshold. Here we present the results of x-ray pump, x-ray probe structural dynamics experiments on the oxide perovskites SrTiO_3 and KTaO_3, excited at high densities but below the multi-shot damage threshold. We find that the photoionization leads to the sudden excitation of 3-dimensional (3D) coherent acoustic phonon wavepackets with characteristic wavelengths on the single-nanometer scale, through analysis of the evolution of the diffuse scattering in time and momentum (the signal changes by over 100% at moderate pulse fluences). We model the strain generation and propagation as due to the in-phase addition of coherent acoustic wavepackets originating from a large collection of nanometer stress centers following localized photoionization events at random uncorrelated sites. We do not observe signatures of acoustic phonon generation above noise in semiconducting GaAs or GaP, indicating that there are significant differences in the cascade process and in particular the dissipation of electronic energy to the lattice. The results have fundamental implications for our understanding of x-ray matter interactions at modest intensities below the damage threshold. In particular, the structural dynamics initiated by the electron cascade process has practical implications for developing a microscopic understanding of condensed matter dynamics, for example, using high-wavevector x-ray transient grating spectroscopy <cit.> to study nanoscale thermal transport <cit.>.

§ METHODS

The experiment is carried out at the x-ray correlation spectroscopy (XCS) endstation at the Linac Coherent Light Source (LCLS) <cit.>. The photon energy is set to 9.828 keV, slightly below the Ta L3 edge. A schematic diagram of the split-delay setup is shown in Fig. <ref>A. The hard x-ray split-delay (HXRSD) unit <cit.> is inserted into the x-ray beam path, splitting each x-ray pulse into two branches, a fixed-delay branch (red lines) and a variable-delay branch (blue lines). The relative delay between the pulses from the two branches is adjusted by changing the path length in the variable-delay branch, as indicated by the blue double-headed arrows; in this work, the delay is changed between -2 ps and 10 ps in 0.1 ps steps. After the crystal C_4, the pulses from the two branches, each approximately 30 fs in duration, become nearly collinear and are focused by a beryllium (Be) lens stack of focal length 3.5 m to approximately 20 μm × 20 μm at the sample position.
The spatial overlap between the two pulses is optimized with the help of a beam profile monitor consisting of a Ce:YAG scintillator screen positioned in the same plane as the sample and a microscope objective. Due to imperfections in the translation stages, the angles of crystals C_2 and C_3 vary slightly as the delay is scanned. While the magnitude of the angular deviation is small compared to the ∼16 μrad Darwin width (for the p-polarized x rays), this "wobble" nonetheless results in slight variation of the pointing between the two pulses. Since the wobbling motion is correlated with the motor positions (which correspond to different delay times), the variations in the pointing are repeatable and thus are partially corrected by changing the angles of crystals C_2 and C_3 as a function of delay. The remaining variations are well characterized, and the effect on the signal is accounted for using an overlap correction factor as a function of the delay; more details are provided in Appendix <ref>. The pulse energies are measured shot-to-shot at the 120 Hz repetition rate of the FEL by intensity monitors shown as green dots in Fig. <ref>A. Specifically, the pulse energies in the individual branches are measured by the x-ray diodes d_03 and d_34 placed right before the recombination of the branches, while the overall pulse intensity is measured by the intensity monitor i_5 placed between the Be lens stack and the sample. The conversion from diode reading to pulse energy is calibrated, as detailed in Appendix <ref>. The experimental geometry is shown in Fig. <ref>B. The samples are placed in reflection geometry at room temperature, with the beam incident angle on the sample fixed to 5° grazing incidence. The incident x-ray fluence is kept below the multiple-pulse damage threshold of the sample. The x rays scattered by the sample are collected by an area detector (Jungfrau-1M, pixel size 75 μm × 75 μm) <cit.> placed around 130 mm away from the sample. In the elastic scattering limit, each pixel on the detector maps to a scattering vector 𝐐 = 𝐤_out - 𝐤_in, where 𝐤_in and 𝐤_out are the incoming and outgoing wave vectors, respectively, with amplitudes |𝐤_in| = |𝐤_out| = 2π/λ, where λ is the x-ray wavelength of 1.26 Å. The sample is rotated around its surface normal 𝐧̂ until the Bragg condition for a low-order Bragg peak is found, and then rotated by at most 1° to tune off the Bragg peak to access the diffuse scattering about the peak. For the cubic perovskite samples SrTiO_3 and KTaO_3 with surface normal (001), the targeted Bragg peak was (1̅1̅ 2).
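For reference, the elastic pixel-to-𝐐 mapping described above can be sketched as below. The flat, beam-normal detector geometry and the beam-center pixel are illustrative assumptions; the actual analysis must also account for the 5° grazing incidence and the detector position near the (1̅1̅ 2) Bragg condition.

```python
import numpy as np

WAVELENGTH = 1.26     # x-ray wavelength, angstrom (9.828 keV)
DET_DIST = 130.0      # sample-detector distance, mm
PIXEL = 0.075         # Jungfrau-1M pixel pitch, mm

def pixel_to_Q(ix, iy, center=(512, 512)):
    """Map pixel indices to Q = k_out - k_in (units of 1/angstrom),
    assuming k_in along +z and a detector plane normal to the beam."""
    k = 2.0 * np.pi / WAVELENGTH
    x = (np.asarray(ix) - center[0]) * PIXEL
    y = (np.asarray(iy) - center[1]) * PIXEL
    r = np.sqrt(x**2 + y**2 + DET_DIST**2)
    k_out = k * np.stack([x / r, y / r, DET_DIST / r], axis=-1)
    return k_out - np.array([0.0, 0.0, k])
```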
§ RESULTS

§.§ Extraction of the pump-probe signal

We begin by examining the general features of the pump-probe signal, taking SrTiO_3 as an example. The detector measures x rays from both pulses, such that the scattered intensity detected is

I(𝐐, t; ℰ_1, ℰ_2) = ℰ_1 S_0(𝐐) + ℰ_2 S_0(𝐐) + Δ I(𝐐, t; ℰ_1, ℰ_2),    (1)

where t is the time delay. ℰ_1 and ℰ_2 denote the pulse energies in the variable-delay and fixed-delay branches, respectively, which are measured separately as shown in Fig. <ref>A. Here and in the rest of the text, 𝐐 denotes the scattering wavevector, 𝐆 the nearest reciprocal lattice vector (i.e., the Bragg peak), and 𝐪 ≡ 𝐐 - 𝐆 the reduced wave vector (i.e., the deviation from the Bragg peak). The first two terms on the right-hand side of Eq. (<ref>) represent the intensities of diffuse scattering in thermal equilibrium, which are proportional to the pulse energies, and S_0(𝐐) is the diffuse scattering structure factor independent of the pulse energies. The last term, Δ I(𝐐, t; ℰ_1, ℰ_2), represents the pump-probe signal, which depends on both the pump and probe pulse energies and the relative delay between the two pulses. To extract the pump-probe signal Δ I(𝐐, t; ℰ_1, ℰ_2), we first note that the x-ray pulse intensity delivered onto the sample varies shot-to-shot due to the fluctuating overlap between the x-ray spectrum coming into the split-and-delay system and the band-pass of the crystals in the system <cit.>. The ratio between the intensities in the two branches, ℰ_1/ℰ_2, also fluctuates due to jitter in the beam position at the splitting crystal C_1. Therefore, throughout the measurement, we collect a large set of images with a wide distribution of pulse energies ℰ_1 and ℰ_2. As an example, a histogram of the distribution of (ℰ_1, ℰ_2) at delay t=4.0 ps is shown in Fig. <ref>A. The distributions at other time delays are similar. This wide distribution of (ℰ_1, ℰ_2) helps isolate the pump-probe signal Δ I(𝐐, t; ℰ_1, ℰ_2): from all shots at time delay t, we select "low intensity" ones (0.1 μJ < ℰ_1, ℰ_2 < 0.25 μJ) where the pump-probe signal is expected to be small, and "high intensity" ones (0.85 μJ < ℰ_1, ℰ_2 < 1.6 μJ) where the pump-probe signal should be large. These ranges are indicated by the solid and dashed boxes in the histogram in Fig. <ref>A. We then calculate the normalized image for each category by dividing the summed image by the summed pulse intensities. The normalized low- and high-intensity images for SrTiO_3 at t=4.0 ps are shown in Fig. <ref>B-C. Note that the long white streaks are due to scattering from the tails of the Bragg peak (from the surface truncation rod). Comparing these two images, one can see modulations away from the central region appear in the high-intensity image, which become clearer when dividing the high-intensity image by the low-intensity one, as shown in Fig. <ref>D. These modulations appear like ripples emanating from the center, which corresponds to the closest point to the (1̅1̅2) Bragg peak on the detector (i.e., on the Ewald sphere), reflecting the acoustic phonon excitation in the sample. Note that the relative signal level is rather high: the modulations reach more than 100% of the diffuse scattering background approximated by the low-intensity image in Fig. <ref>B. In comparison, the pump-probe signal appears negligible around zero time delay: Fig. <ref>E shows the results for delay t=0.0 ps, which does not contain any modulation like in Fig. <ref>D. Therefore, we use the data at time zero as the background diffuse scattering, as will be further detailed below. Having observed the general features of the pump-probe signal, we next demonstrate that it is bi-linear in the pump and probe pulse energies. Because the pump-probe signal, Δ I(𝐐, t; ℰ_1, ℰ_2), should be proportional to both the probe pulse energy and the amount of lattice distortion created by the pump pulse, the bi-linearity is expected if the latter is proportional to the number of photons in the pump. In this case, we may write Δ I(𝐐, t; ℰ_1, ℰ_2) = C(𝐐,t) ℰ_1 ℰ_2, where C(𝐐,t) is the pump-probe response coefficient independent of the pulse energies. The normalized scattered intensity then becomes

I(𝐐, t; ℰ_1, ℰ_2)/(ℰ_1+ℰ_2) = S_0(𝐐) + C(𝐐,t) ℰ_1 ℰ_2/(ℰ_1+ℰ_2).    (2)
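As an illustration of the normalization just described, the sketch below assumes per-shot detector frames and calibrated branch pulse energies are available as NumPy arrays (array names are hypothetical; the energy bands repeat the values quoted above, and the overlap correction and bad-shot filtering of the real analysis are omitted):

```python
import numpy as np

def normalized_ratio(frames, E1, E2, lo=(0.10, 0.25), hi=(0.85, 1.60)):
    """High/low normalized-image ratio behind Fig. 2B-D.

    frames : (n_shots, ny, nx) detector images at a single delay
    E1, E2 : (n_shots,) branch pulse energies in micro-joules
    """
    E1, E2 = np.asarray(E1), np.asarray(E2)
    band = lambda E, b: (E > b[0]) & (E < b[1])
    lo_sel = band(E1, lo) & band(E2, lo)
    hi_sel = band(E1, hi) & band(E2, hi)
    norm_lo = frames[lo_sel].sum(axis=0) / (E1[lo_sel] + E2[lo_sel]).sum()
    norm_hi = frames[hi_sel].sum(axis=0) / (E1[hi_sel] + E2[hi_sel]).sum()
    return norm_hi / norm_lo   # ripples above 1 reveal the pump-probe signal
```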
We now test the validity of Eq. (<ref>). Using the extracted pump-probe signal in Fig. <ref>D, we select a region of interest (ROI) with a clear signal, as indicated by the red dashed line. Fig. <ref>F shows the summed intensity within this region, I_ROI, normalized by the total pulse energy ℰ_1+ℰ_2, plotted as a function of ℰ_1 ℰ_2/(ℰ_1+ℰ_2). The results for delays t=4.0 ps and 0.0 ps are shown as blue and green circles, respectively, where each circle corresponds to the average over a bin in the histogram in Fig. <ref>A with at least 5 shots. These data are consistent with a linear trend with the same intercept at ℰ_1ℰ_2=0, which supports the validity of Eq. (<ref>) and hence the bi-linearity of the pump-probe signal. Therefore, the results verify our expectation that the total lattice distortion is proportional to the pump pulse energy. Moreover, while the data for t=4.0 ps show a clear slope, the data for t=0.0 ps appear independent of the pulse energies, confirming the absence of a pump-probe signal at zero time delay. Since we have demonstrated that the pump-probe signal is negligible around zero delay, to increase the signal-to-noise ratio, we use the normalized intensity including all valid shots at t=0.0 ps, I^norm(𝐐, t=0), as the equilibrium diffuse scattering structure factor S_0(𝐐), in the absence of the effect of the pump. Using Eq. (<ref>), the pump-probe coefficient at delay t is thus obtained from the experimental data set as

C(𝐐,t)/S_0(𝐐) = [ I^norm(𝐐, t)/I^norm(𝐐, t=0) - 1 ] ∑_s (ℰ_1^(s) + ℰ_2^(s))/∑_s ℰ_1^(s) ℰ_2^(s) [𝒪(t)]^-1,    (3)

where the sum is over all shots s at delay t. Here, 𝒪(t) denotes the correction factor of order unity which accounts for changes in the overlap between the two beams on the sample during the delay scan due to the aforementioned wobbling motion of the delay scan stages (see Appendix <ref>). An example of the pump-probe signal, obtained using Eq. (<ref>) for t=7.0 ps, is shown in Fig. <ref>A. The green line shows the direction 𝐪 ∥ 𝐆, which coincides with the direction of the largest intensity modulation. Along this line, we take several 𝐪 points (indicated by the colored dots) and plot the time dependence of the pump-probe signal in Fig. <ref>B, where the labels indicate the magnitude q ≡ |𝐪| for each trace. These curves exhibit damped oscillations, whose frequency increases with increasing q. The curves do not resemble a perfect sinusoidal function but feature flat minima, suggesting the existence of even-order frequency overtones. With a Fourier transformation, we obtain the spectral weights along this 𝐪 direction, which are shown in Fig. <ref>C. The results indicate that the excited modes are predominantly LA phonons. Firstly, the direction of the strongest modulation (green line in Fig. <ref>A) coincides with the direction 𝐪 ∥ 𝐆, while the modulation vanishes in the perpendicular direction, consistent with the |𝐐·ϵ|^2 dependence in the scattering intensity, where ϵ is the phonon polarization vector. Secondly, we overlay the spectral weights in Fig. <ref>C with the LA phonon dispersion in the direction 𝐪 ∥ 𝐆 [using v=8.2 km/s, which is calculated by density-functional theory (DFT) along the selected 𝐪 direction] and its second-harmonic overtone, showing good agreement with the data.
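The spectral weights of Fig. <ref>C amount to a discrete Fourier transform of each time trace. A minimal sketch follows, here applied for illustration to the model time dependence [1 - cos(qvt)]^2 e^(-2t/τ) introduced in the next section rather than to measured data; the grid and constants repeat values quoted in the text:

```python
import numpy as np

def spectral_weight(trace, dt=0.1):
    """Power spectrum of a pump-probe time trace sampled every dt ps;
    frequencies come out in THz."""
    trace = trace - trace.mean()          # suppress the DC component
    power = np.abs(np.fft.rfft(trace))**2
    freqs = np.fft.rfftfreq(len(trace), d=dt)
    return freqs, power

q = 4e-3 * 2 * np.pi        # reduced wavevector, 1/angstrom
v = 82.0                    # LA velocity: 8.2 km/s = 82 angstrom/ps
t = np.arange(0.0, 10.0, 0.1)
trace = (1 - np.cos(q * v * t))**2 * np.exp(-2 * t / 12.0)
freqs, power = spectral_weight(trace)
print(freqs[np.argmax(power)])  # ~ q*v/(2*pi) = 0.33 THz fundamental, with a
                                # weaker second harmonic near 0.66 THz
```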
§.§ Model

We present a model that is consistent with our observations and describes quantitatively the time evolution of the pump-probe signal. The model is based on the following physical picture. First, the stochastic absorption of x-ray photons from the pump pulse causes the creation of a large number of uncorrelated photoelectrons and core holes, each of which relaxes generating a cascade of lower-energy electrons. This process is mostly complete within 100 fs <cit.>, much faster than the period of the acoustic phonons that we detect. After this process, a large number of electron clouds are formed within the sample. These clouds are expected to have a core region, on the order of several nanometers, with a high electron density <cit.>, which serves as a random collection of excitation centers. The high concentration of secondary photoelectrons about each center leads to a sudden local stress that produces a propagating strain pulse in the form of a coherent longitudinal acoustic phonon wavepacket, with a typical phonon period given by the time it takes for sound to propagate across the core region of the cascade. The probability of absorption about any given atomic site is much less than one and is given by the product of the photon fluence and the photoionization cross-section. For SrTiO_3 at 9.828 keV, it is dominated by absorption on the Sr sites, with a mean distance between absorption events on the order of 30 nm for 0.5 μJ in a 20 μm × 20 μm spot. This is an order of magnitude larger than the inverse of the maximum q in Fig. <ref>A with an observable "ripple" feature, which is around 1/(5 × 10^-3 2π Å^-1) ≈ 3 nm. Therefore, we assume that the interference between strain waves from the individual random photoabsorption events largely averages out. Furthermore, since x-ray photoabsorption is a stochastic process, we assume that the spatial distribution of these excitation centers across the different unit cells is given by a binomial probability distribution. Since we measure the incoherent sum of their scattering amplitudes (more details below and in Appendix <ref>), it is justified to take the ensemble average limit when describing the strain generation and propagation. Although individual photoelectrons may create anisotropic distributions of secondary electrons <cit.>, this anisotropy is expected to become small by 100 fs <cit.>, and the distribution is assumed to be isotropic in the ensemble average limit <cit.>. Taking into account the arguments above, we build a model assuming that:

1) The pump pulse creates a number of excitation centers that are randomly and sparsely distributed within the illuminated volume, and the number of these centers is proportional to the pump fluence.
2) Around each excitation center, a step-function-like (in time) stress field causes a sudden change in the equilibrium lattice constant and therefore a sudden strain. We assume the excitation is instantaneous compared to the phonon periods, which are on the order of picoseconds (see Fig. <ref>C), so at t=0 the atomic displacements are zero.
3) The strain field is isotropic and assumes a Gaussian spatial profile in the ensemble average limit.
4) The strain field can be treated in the continuum limit, since the smallest length scales considered (several nanometers, corresponding to the inverse of the maximum q range of visible ripples) are still significantly larger than the size of the unit cell.

Furthermore, for simplicity, we approximate the material as elastically isotropic.
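The incoherent-addition argument behind assumption 1) can be checked numerically: for sparse, random centers the phase factors are uncorrelated, so the ensemble-averaged intensity is just the sum of the single-center intensities. The box size, center count, and wavevector below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200                                        # number of excitation centers
q = np.array([0.02, 0.0, 0.0])                 # reduced wavevector, 1/angstrom
intensities = []
for _ in range(500):
    r = rng.uniform(0.0, 3.0e4, size=(N, 3))   # random positions in a 3-um box
    amplitude = np.exp(1j * (r @ q)).sum()     # coherent sum of unit scatterers
    intensities.append(abs(amplitude)**2)
print(np.mean(intensities) / N)                # -> ~1: intensities add incoherently
```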
Under these assumptions, the Fourier transform of the average displacement field for a single excitation center is (see Appendix <ref> for detailed derivations):

𝐮(𝐪, t) = i π^3/2 A σ^2/(V q) e^(-σ^2 q^2/4) [1 - cos(q v t)] e^(-t/τ) 𝐪̂,    (4)

where A describes the amplitude of the displacement field, σ is the rms extent of the distortion field, v = 8.2 km/s is the velocity of the LA wave obtained from the data in Fig. <ref>C, e^(-t/τ) is a phenomenological decay factor added to account for the observed decay of the oscillations (see Fig. <ref>B), and 𝐪̂ is the unit vector in the direction of 𝐪. Here, 𝐮(𝐪,t) has the unit of length. The [1 - cos(q v t)] term is typical of displacive-like excitation, where the equilibrium position of the lattice suddenly shifts and atoms oscillate around the new equilibrium <cit.>. We take a common decay time τ for both the decay of the new equilibrium back to the original equilibrium and the oscillation amplitude. Since we observe that the modulations of the diffuse scattering (see Fig. <ref>) occur in the regime of relatively small q ≡ |𝐪| ≪ |𝐆|, and we assume that the spatial distribution of excitation centers is sparse and random, the change in diffuse scattering intensity due to the distortions is derived as for the Huang diffuse scattering due to static defects <cit.>. Hence the intensity modulation is

Δ I(𝐐,t) ∝ c |𝐆·𝐮(𝐪,t)|^2 ℰ_1,    (5)

where ℰ_1 is the probe pulse energy and c ≪ 1 is the concentration (number per unit cell) of excitation centers, which is expected to be proportional to the pump pulse energy ℰ_2. Thus, Δ I(𝐐,t) is proportional to ℰ_1 ℰ_2, as expected. The full expression for Δ I(𝐐,t) considering all geometric factors is provided in the SI. Note that in Eq. (<ref>), the term |𝐆·𝐮(𝐪,t)|^2 gives rise to the angular dependence Δ I(𝐐,t) ∝ |𝐆·𝐪̂|^2, in agreement with the experimental observation in Fig. <ref>, even for an isotropic 𝐮(𝐪,t) = 𝐮(|q|,t). The thermal-equilibrium diffuse scattering I_0(𝐐, t), on the other hand, is presumed to be dominated by thermal phonons, for simplicity. The expression for thermal diffuse scattering is given in Eq. (<ref>). Based on this model, the pump-probe signal is (see Appendix <ref> for detailed derivations)

C(𝐐,t)/S_0(𝐐) = ℱ σ^3 (U_p/U_d) e^(-σ^2 q^2/2) [1 - cos(q v t)]^2 e^(-2t/τ) |𝐆·𝐪̂|^2,    (6)

where the pre-factor ℱ takes into account (see Eq. (<ref>) for the full expression): geometric factors (e.g., the beam size), the x-ray linear absorption coefficient, the thermal diffuse scattering background assuming phonon frequencies and eigenvectors as obtained from DFT, as well as other known constants (e.g., x-ray atomic scattering form factors at the given q and photon energy), all of which are independent of the model parameters. Thus, the pre-factor ℱ can be calculated for any given 𝐐. We only explicitly write out in Eq. (<ref>) the following terms: the time dependence [1 - cos(q v t)]^2 e^(-2t/τ), the angular dependence |𝐆·𝐪̂|^2 (which determines the intensity anisotropy of the "ripples" in Figure <ref>A), the size of the distortion field σ, and the energy conversion coefficient U_p/U_d. Here U_d is the absorbed energy density and U_p is the energy density of the launched acoustic phonons, both defined in the bulk average limit.
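Anticipating the fitting procedure described next, Eq. (6) can be implemented and fit with a standard least-squares routine. In this sketch the prefactor ℱ, the |𝐆·𝐪̂|² factor, and U_p/U_d are lumped into a single fitted amplitude, which is a simplification of the actual analysis (where ℱ is computed per 𝐐 from DFT phonons and known constants):

```python
import numpy as np
from scipy.optimize import curve_fit

V_LA = 82.0   # LA velocity along q || G, angstrom/ps (8.2 km/s)
TAU = 12.0    # decay time, ps, estimated first from the time traces

def model(qt, sigma, amp):
    """Eq. (6) with F, |G.qhat|^2 and U_p/U_d folded into 'amp'."""
    q, t = qt
    return (amp * np.exp(-(sigma * q)**2 / 2)
            * (1 - np.cos(q * V_LA * t))**2 * np.exp(-2 * t / TAU))

# Fit grid matching the text: q in [2, 7]x10^-3 (2*pi/angstrom), t in [0, 10] ps
q = np.linspace(2e-3, 7e-3, 26) * 2 * np.pi
t = np.arange(0.0, 10.0, 0.1)
Q, T = (a.ravel() for a in np.meshgrid(q, t))
# With `data` holding the measured C(Q,t)/S0(Q) on the same flattened grid:
# (sigma, amp), _ = curve_fit(model, (Q, T), data, p0=(15.0, 1.0))
# sigma would come out in angstrom (~15 angstrom = 1.5 nm for SrTiO3).
```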
Using Eq. (<ref>), we fit our model to the experimentally measured C(𝐐, t)/S_0(𝐐) to extract the main physical quantities of interest: the size of the distortion field, σ, and the energy conversion efficiency, U_p/U_d. The fit is done in the following way: first, we estimate the decay constant τ from the time-dependent C(𝐐, t)/S_0(𝐐), the colored traces in Fig. <ref>B, assuming that τ is independent of 𝐪 (i.e., a global estimate for all traces in Fig. <ref>B). Then, we vary the parameters σ and U_p/U_d to best fit the model to the data in the q-range from 2 to 7× 10^-3 2πÅ^-1 along the direction 𝐪∥𝐆 (i.e., the green line cut in Fig. <ref>A) and in the available delay range from 0 to 10 ps. Data at q>7× 10^-3 2πÅ^-1 are excluded because of low signal levels, while data at q<2 × 10^-3 2πÅ^-1 are excluded because of their sensitivity to inaccuracies in the 𝐪-space calibration and in the modeling of the diffuse scattering, which may contain a background from static disorder besides the thermal diffuse scattering considered above. The results are presented in Fig. <ref>, which shows the measured pump-probe signal C(𝐐,t)/S_0(𝐐) (colored lines) and fit results (black lines) as a function of q at different delays. The extracted fit parameters are σ=1.5 nm, U_p/U_d = 7 × 10^-3, and τ=12 ps for SrTiO_3. The fits are shown as black lines in Fig. <ref>A. Fig. <ref>B shows C(𝐐, t)/S_0(𝐐) data on the selected area of the detector (top row) and model predictions using the fit parameters (bottom row) at delays of 4, 7, and 10 ps. Based on the general agreement between the model predictions and the experimental data, we consider our few-parameter model to be robust. Note, however, that σ is model-dependent, and it may change if one assumes a form of the source profile other than a Gaussian one (e.g., an exponential decay profile in real space). In KTaO_3, the extracted fit parameters are σ=2.5 nm and U_p/U_d = 2× 10^-3; more detailed results are shown in Appendix <ref>.
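A minimal sketch of the second step of this fit, on synthetic data, is given below; the pre-factor ℱ is set to unity (an assumption of the sketch; in the analysis it is computed as described above), τ is held at its globally estimated value, and only σ and U_p/U_d are varied.

```python
import numpy as np
from scipy.optimize import curve_fit

v, tau, t0 = 8.2, 12.0, 4.0          # nm/ps, ps, ps (tau fixed from step one)

def model(q, sigma, eta, F=1.0):     # F = 1 stands in for the pre-factor
    return (F * sigma**3 * eta * np.exp(-sigma**2 * q**2 / 2)
            * (1 - np.cos(q * v * t0))**2 * np.exp(-2 * t0 / tau))

# fitted q-range: 2e-3 to 7e-3 (2 pi / angstrom) = 0.126 to 0.440 nm^-1
q = np.linspace(2e-3, 7e-3, 60) * 2 * np.pi * 10.0
rng = np.random.default_rng(1)
data = model(q, 1.5, 7e-3) * (1 + 0.05 * rng.standard_normal(q.size))

popt, _ = curve_fit(model, q, data, p0=[1.0, 1e-2])
print(popt)                          # recovers approximately [1.5, 7e-3]
```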
§ DISCUSSION
It is remarkable that our simple model, which only assumes that spherical strain waves are launched from random, uncorrelated, and three-dimensionally localized sources of electrons, reproduces our experimental data and allows for the quantification of key parameters of this process, including the localization size σ of the strain wave packets and the photon energy conversion efficiency U_p/U_d into the elastic waves. The model is in stark contrast to ultrafast optical excitation in opaque materials, where the absorbed energy is distributed uniformly over the illuminated area and exponentially along a distance (the absorption length) much shorter than the wavelength and the beam size, which typically leads to an effective 1D strain wave propagating into the bulk with a characteristic wavelength given by the penetration depth. In the all-x-ray experiment reported here, coherent acoustic phonons propagate in 3D. Even though the x-ray absorption on average leads to an exponentially decaying density profile into the bulk, the typical wavelength of the coherent acoustic phonons is many orders of magnitude shorter than both the x-ray spot size and the penetration depth, pointing to the fact that x-ray excitation induces a much more localized electron distribution than an optical pump, and potentially to a dramatically different electron-phonon coupling mechanism. The generation and detection of coherent high-wavevector acoustic waves as reported here do not involve engineered interfaces or inhomogeneities, such as a transducer layer <cit.> or a superlattice structure <cit.>, which would normally be required for the generation and detection of high-wavevector acoustic waves using optical pulses. In the case of SrTiO_3, we find σ = 1.5 nm and U_p/U_d = 7× 10^-3. As pointed out in the Model section, from U_p/U_d one can obtain the product of the strain amplitude and the concentration of localized excitation centers, cA^2. If we assume the concentration of excitation sites is equal to the initial density of photoexcited atoms (∼ 10^17 cm^-3), the amplitude is 0.15 nm, corresponding to a dilation at the excitation center of ∼10%. Notably, while we find similar results for the oxide perovskites SrTiO_3 and KTaO_3, we do not detect an observable signal for the tetrahedral semiconductors GaAs and GaP over a similar q-range. We expect the effective source sizes to be similar for these materials if they were solely determined by the dependence of the cascade-electron distributions on the atomic constituents <cit.>. Moreover, the concentrations of initial ionization sites should also be similar based on the photoelectron cross-sections. Thus, we estimate an upper limit for the strain amplitude in the semiconductors to be about 30 times smaller than for the oxides. The dramatic difference in the response between these materials depends on the microscopic details of the strain generation, on the complex dynamics of the relaxation of the highly excited states, and on how they couple to the lattice. In the optical regime, ultrafast excitation of low-energy electrons (and holes) in opaque materials leads to coherent strain generation through both thermoelastic and deformation potential mechanisms. If a similar process were to dominate the x-ray case, the differences in the material properties would not be sufficient to explain the differences in response. However, the detailed spatio-temporal profile of the stress and resultant strain fields depends not just on the thermal expansion coefficient and deformation potentials, but also on the electron cooling rate and on whether there is significant transport across the initial excitation region during the sound propagation time <cit.>. Even in the optical regime, this can reshape the coherent acoustic phonon pulse, enhancing the lower frequency components and suppressing the higher ones, as seen for example in x-ray diffraction experiments on photoexcited Ge <cit.>. In the x-ray regime, the length scales are much smaller and the electron energies are initially much higher, such that the details of the energy deposition rates, and in particular the coupling to plasmons and polarons, could become important given the high polarizability of the oxides. In particular, polarons feature local electron-lattice interactions that may explain the high-wavevector excitation of coherent strain waves <cit.>. The average size of the strain field around each excitation center is linked with the distribution of secondary electrons and their coupling to the lattice. The spatial distribution is determined by the energy and momentum relaxation channels of the photoelectrons and Auger electrons <cit.>. For reference, in SrTiO_3, where Sr dominates the photoabsorption, a photoelectron ionized from Sr 2s carries ∼ 7.6 keV, while the subsequent Auger electron carries ∼ 1.6 keV <cit.>.
The secondary electron cascade initiated by a multi-keV electron is expected on average to have a size on the order of hundreds of nanometers. Lower energy electrons, i.e., <1 keV, are expected to initiate cascades that end up with a localized secondary electron distribution with a characteristic size on the order of nanometers and a high peak density near the excitation center <cit.>. The former, while possessing an extended overall dimension, features more localized centers of a few nanometers in size <cit.>. The latter has a general length scale consistent with our experimentally measured σ. Given that the inelastic mean free path of multi-keV electrons is on the order of tens of nanometers <cit.>, much longer than the experimentally measured σ, the concentration c of localized centers exceeds the initial excitation density, and thus our estimate for A is an upper bound. The spherical wave packet center concentration c can indeed be higher than the value determined solely by the material photoabsorption cross-section, due to Auger electrons from multiple elements (e.g., both Ti and Sr atoms in SrTiO_3), re-absorption of fluorescence photons, and ionization by secondary electrons. We note that coherent phonons can be selectively generated with light by spatial patterning of the radiation. One such case is the transient grating (TG) technique, where two crossed laser pulses create a standing wave interference pattern that excites phonons with the same spatial period. The TG technique has recently been extended from optical to extreme ultraviolet (EUV) wavelengths <cit.> and has been able to selectively excite phonons with wavelengths as small as 24 nm <cit.>. With hard x-ray laser pulses, the period of the standing waves can be reduced to sub-nanometer scales owing to the short x-ray photon wavelength <cit.>. It has been suggested that the fundamental limit on the wavelength of coherent acoustic phonons generated by such gratings is determined by the inelastic mean free path of electrons <cit.>, leading to significant signal degradation at sub-10 nm length scales. The results here show that during the electronic cascade process, significant phonon generation can occur at nanometer length scales before the electronic and thermal excitation homogenizes. § CONCLUSION In summary, we report hard x-ray generation and detection of high-wavevector, large amplitude coherent acoustic strain pulses in oxide insulators. We anticipate future experiments with higher signal sensitivity and q resolution to definitively clarify the speculations above. The key is to directly extract σ and A in x-ray-pumped semiconductors. If σ is confirmed to be of similar magnitude as in SrTiO_3, it will support the mechanism of direct local electron-phonon coupling. On the other hand, if σ turns out to be much larger than in SrTiO_3, it will prompt a more detailed look at the electronic cascade and diffusion process.
Though the electron cascade upon hard x-ray photoabsorption is relatively well understood based on simulations <cit.>, additional simulations of the electron-phonon coupling together with the electron cascade process after photoabsorption of a hard x-ray photon would greatly help in understanding the full process of high-q coherent phonon generation. The spectral content of the coherent acoustic phonons that make up the strain wave is consistent with a large collection of localized sources of sudden stress with sizes on the order of a few nanometers. The size is expected to be determined by the complex dynamics of the high-energy electron cascade and is significantly shorter than the x-ray penetration depth. The observed excitation site dimension of 1.5 nm (in SrTiO_3) is significantly shorter than the low-energy electron inelastic mean free path <cit.>. While a more systematic study is required to determine the excitation mechanism of phonons from the x-ray-induced charge distribution, the generation of high amplitude coherent phonon wavepackets with nm-scale characteristic extent substantiates that high amplitude monochromatic acoustic phonons can be generated with sub-10 nm scale wavelengths using x-ray transient grating methods, addressing an important length scale for thermal transport in modern integrated circuits and its power management. The fraction of x-ray energy deposited in acoustic waves, U_p/U_d, on the order of ∼ 10^-3 (we obtain an energy conversion efficiency U_p/U_d ∼ 7 × 10^-3 for SrTiO_3 and ∼ 2 × 10^-3 for KTaO_3), as obtained from our model, may help quantify an energy transfer channel relevant to radiation damage processes in all FEL-based pump-probe measurements for condensed matter physics. Besides crystalline materials, the reported methods will also be beneficial for studying x-ray-induced structural changes in amorphous materials on short time scales <cit.>. The authors thank J. B. Hastings for useful discussions. This work was supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences through the Division of Materials Sciences and Engineering under Contract No. DE-AC02-76SF00515. Measurements were carried out at the Linac Coherent Light Source, a national user facility operated by Stanford University on behalf of the U.S. Department of Energy, Office of Basic Energy Sciences under Contract No. DE-AC02-76SF00515. Preliminary experiments were performed at SACLA with the approval of the Japan Synchrotron Radiation Research Institute (JASRI) (Proposal No. 2017B8046). P. Sun acknowledges funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 101023787. The participants from MIT were supported by the Department of Energy, Office of Science, Office of Basic Energy Sciences, under Award Number DE-SC0019126. M.G. and J.M.R. were supported by the U.S. Department of Energy (DOE) under Grant No. DE-SC0012375. Y.H., P.S., and S.W.T. contributed equally to this work. § DERIVATIONS FOR THE MODEL §.§ Spherical wave solution In this section, we present the derivation of the spherical strain wave model used in the main text. Two main assumptions are made. Firstly, we take the continuum limit, which is appropriate given that we are considering length scales of tens of nanometers and above, large compared with the size of the unit cell.
Secondly, we assume that the material is isotropic, which greatly simplifies the mathematical form of the results. The second assumption is not strictly true in reality, but the analysis and final results are not significantly influenced by the anisotropy of the materials, so we keep this assumption. Furthermore, we start the derivation without considering dissipation, to demonstrate the main features of the propagating spherical waves (the oscillating patterns in reciprocal space that are observed in our data). At the end of the section, we take into account the effects that lead to decay over time. In our model, an x-ray photon excitation event leads to a distortion in the equilibrium position at time t=0. This distortion is assumed to be spherically symmetric, and it launches longitudinal spherical waves for t > 0. Since the material is assumed to be isotropic, the spherical symmetry is preserved during the wave propagation; in other words, the displacement field in the material after the excitation, 𝐮(𝐫, t), should be curl-free. Therefore, we may write 𝐮(𝐫, t)=∇ϕ(𝐫, t), where ϕ(𝐫, t) is a scalar field which satisfies the wave equation <cit.>: ∇^2 ϕ(𝐫, t) - 1/v^2∂^2/∂ t^2ϕ(𝐫, t) = s(𝐫)H(t), where v is the longitudinal sound speed and H(t) is the Heaviside step function. s(𝐫) represents the distortion field of the new equilibrium, whose form is not specified at this point. In reciprocal space, Eq. (<ref>) becomes - ( q^2 + 1/v^2∂^2/∂ t^2) ϕ(𝐪, t) = s(𝐪)H(t). Eq. (<ref>) is a standard wave equation whose general solution for t>0 is ϕ(q, t) = ϕ_0(q) + F(q) e^i q v t + G(q) e^-i q v t, where ϕ_0(q)=-s(q)/q^2 is the equilibrium solution, and F(q) and G(q) are arbitrary functions of q. Note that we have now dropped the dependence on the direction of 𝐪 because of spherical symmetry. Now we impose the initial conditions that, at t=0, there is no displacement or movement of the atoms: ϕ(q, t=0) = 0, ∂ϕ(q, t)/∂ t|_t=0 = 0. This leads to F(q)=G(q)=-ϕ_0(q)/2. Hence, the solution, Eq. (<ref>), becomes ϕ(q, t) = ϕ_0(q) [1 - cos(q v t)]. The displacement field in reciprocal space is thus 𝐮(𝐪, t) = -i ϕ_0(q) [1 - cos(q v t)] 𝐪. The functional form of ϕ_0 is given by the physical mechanism that leads to the distortion. In our model, it is assumed that the dilatation field (i.e., the divergence of the displacement) of the new equilibrium after the excitation is proportional to the concentration of electron-hole pairs <cit.>. The latter is assumed to follow a spherical Gaussian distribution. Therefore, we may write ∇·𝐮_0(𝐫) = ∇^2 ϕ_0(𝐫) = σ^-1 A e^- r^2/σ^2, where A is the amplitude of the distortion with units of length and σ is the localization size of the electron-hole distribution. With a spherical Fourier transform (see more details in the next section), we obtain, in reciprocal space, -q^2 ϕ_0(q) = π^3/2 A σ^2/Ve^-σ^2 q^2/4, where V is a normalization volume which we take to be the volume of the unit cell. Therefore, for t>0, ϕ(q, t) = -π^3/2 A σ^2 /V q^2 e^-σ^2 q^2/4 [1 - cos(q v t)], 𝐮(𝐪, t) = iπ^3/2 A σ^2/V q e^-σ^2 q^2/4 [1 - cos(q v t)] 𝐪̂, where 𝐪̂ denotes the unit vector in the direction of 𝐪. If one is interested in the distortion in real space, an inverse spherical Fourier transform can be applied to the results above to obtain: ϕ(r, t) = -√(π) A σ^2/8r[ 2 erf( r/σ) - erf(r-vt/σ) - erf( r+vt/σ)], 𝐮(𝐫, t) = √(π) A σ^2 /8 r^2[ 2 erf( r/σ) - erf(r-vt/σ) - erf( r+vt/σ)] 𝐫̂ - A σ/4 r[ 2 e^-r^2 / σ^2 - e^-(r-vt)^2/σ^2 - e^- (r+vt)^2/σ^2] 𝐫̂, where erf(z) ≡2/√(π)∫_0^z e^-x^2 dx is the error function. The terms containing (r-vt)/σ and (r+vt)/σ represent outgoing and incoming spherical waves, respectively.
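As a quick consistency check, the undamped solution above can be verified symbolically: for t > 0 it satisfies the reciprocal-space wave equation with s(q) = -q^2 ϕ_0(q), and it fulfills both initial conditions. A minimal sketch:

```python
import sympy as sp

q, v, t, phi0 = sp.symbols("q v t phi0", positive=True)
phi = phi0 * (1 - sp.cos(q * v * t))           # proposed solution for t > 0
source = -q**2 * phi0                          # s(q) = -q^2 phi0(q)
lhs = -(q**2 * phi + sp.diff(phi, t, 2) / v**2)
print(sp.simplify(lhs - source))               # 0: wave equation satisfied
print(phi.subs(t, 0), sp.simplify(sp.diff(phi, t).subs(t, 0)))  # 0, 0: ICs hold
```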
The derivations above have not considered dissipation. In reality, the equilibrium distortion field s(𝐫) decays together with the excited electron cloud, and the phonon modes are damped as well. The time scales of these two processes are not necessarily the same, but in this work the data are consistent with the two time constants being close to each other. For example, in Fig. 3B in the main text, the experimental data can be described with an overall exponential decay in time. Therefore, we assume that both processes have the same decay time constant, τ. Thus, we may modify the wave equation, Eq. (<ref>), into the following form: - ( q^2 + 2/v^2 τ∂/∂ t + 1/v^2∂^2/∂ t^2) ϕ(𝐪, t) = π^3/2 A σ^2/Ve^-σ^2 q^2/4 e^-t/τ H(t), where the term -(2 v^-2τ^-1)(∂ϕ(𝐪, t)/∂ t) accounts for phonon damping, and the term e^-t/τ accounts for the decay of the distortion field. The solution of this equation, with the initial conditions (Eqs. [<ref>, <ref>]), is ϕ(q, t) = -π^3/2 A σ^2/V(q^2-v^-2τ^-2) e^-σ^2 q^2/4×[1 - cos(q t √(v^2 - 1/q^2 τ^2)) ]e^-t/τ, 𝐮(𝐪, t) = iπ^3/2 A q σ^2 /V (q^2-v^-2τ^-2) e^-σ^2 q^2/4×[1 - cos(q t √(v^2 - 1/q^2 τ^2)) ]e^-t/τ𝐪̂. These results may be simplified under the condition that q^2 v^2 τ^2 ≫ 1, which holds true in our study: for example, with q=0.004 × 2π Å^-1, v = 8970 m/s (for SrTiO_3), and τ≈ 10 ps, we have q^2 v^2 τ^2 ≈ 500. Therefore, we may approximate the results above with ϕ(q, t) = -π^3/2 A σ^2 /V q^2 e^-σ^2 q^2/4[1 - cos(q v t ) ]e^-t/τ, 𝐮(𝐪, t) = iπ^3/2 A σ^2 /V q e^-σ^2 q^2/4[1 - cos(q v t ) ]e^-t/τ𝐪̂, which are simply the solutions in the undamped case, Eqs. [<ref>, <ref>], multiplied by the exponential decay term e^-t/τ. Similarly, the solution in real space is given by ϕ(r, t) = -π^1/2 A σ^2/8r e^-t/τ×[ 2 erf( r/σ) - erf(r-vt/σ) - erf( r+vt/σ)], 𝐮(𝐫, t) = π^1/2 A σ^2 /8 r^2 e^-t/τ𝐫̂×[ 2 erf( r/σ) - erf(r-vt/σ) - erf( r+vt/σ)] - A σ/4 r e^-t/τ𝐫̂×[ 2 e^-r^2 / σ^2 - e^-(r-vt)^2/σ^2 - e^- (r+vt)^2/σ^2], where 𝐫̂ denotes the unit vector in the direction of 𝐫. §.§ Spherical Fourier transforms In this section we show the formulae for Fourier transform pairs in spherical coordinates. For scalars ϕ(r) and ϕ(q): ϕ(q) = 1/V∫_0^∞ r^2 dr ∫_0^πsinθ dθ∫_0^2π dφϕ(r) e^i q r cosθ = 4π/qV∫_0^∞ϕ(r) r sin (qr) dr, ϕ(r) = V/2π^2 r∫_0^∞ϕ(q) q sin (qr) dq. Again, here V is a normalization volume so that ϕ(q) and ϕ(r) have the same units. In general, the value of V is arbitrary. For simplicity, in our derivations it is taken to be the unit cell volume. For vectors 𝐮(𝐫) = u(r)𝐫̂ and 𝐮(𝐪) = u(q)𝐪̂, note the extra factor of cosθ when projecting onto the direction of 𝐫̂ or 𝐪̂: u(q) = 1/V∫_0^∞ r^2 dr ∫_0^πsinθ dθ∫_0^2π dφ u(r) cosθ e^i q r cosθ = 4π i/Vq^2∫_0^∞ u(r) [ sin (qr) - qr cos(qr) ] dr, u(r) = -i V/2π^2 r^2∫_0^∞ u(q) [ sin (qr) - qr cos(qr) ] dq.
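The scalar transform above can be checked numerically against the Gaussian source used in the model; a sketch in dimensionless units (A = σ = V = 1):

```python
import numpy as np
from scipy.integrate import quad

A = sigma = V = 1.0

def s_q(qv):
    # forward scalar transform: s(q) = 4 pi / (q V) * Int_0^inf s(r) r sin(q r) dr
    f = lambda r: (A / sigma) * np.exp(-r**2 / sigma**2) * r * np.sin(qv * r)
    return 4 * np.pi / (qv * V) * quad(f, 0.0, 50.0)[0]

for qv in (0.5, 1.0, 2.0, 4.0):
    exact = np.pi**1.5 * A * sigma**2 / V * np.exp(-sigma**2 * qv**2 / 4)
    print(f"q = {qv}: numeric {s_q(qv):.6f}, analytic {exact:.6f}")  # they agree
```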
§.§ Energy in the excited strain field The total energy deposited into the LA phonon field can be calculated from the momentum-resolved LA displacements by integrating over all modes. The energy per mode is W(q) = 1/2 cN m ω^2(q)|ũ(q)|^2 = 1/2 cN m v^2 q^2 |ũ(q)|^2, where v is the speed of sound, q is the magnitude of the wavevector, m is the total mass of atoms in the unit cell, and |ũ(q)| = π^3/2 A σ^2 /V q e^-σ^2 q^2/4 is the maximum mode displacement at a given wavevector; see Eq. (<ref>). Since we take into account an exponential decay in time, W(q) thus represents the phonon energy at t=0. Integrating over all wavevectors and dividing by the total volume NV, we obtain the energy density: U_p = 1/NVV/(2π)^3∫_0^∞ 4π q^2 dq W(q) = 1/8 π^3 N∫_0^∞ 4π q^2 dq ·1/2 cN m v^2 q^2 ( π^3/2 A σ^2 /V q)^2 e^-σ^2 q^2/2 = π c m v^2 A^2 σ^4/4 V^2∫_0^∞ q^2 e^-σ^2 q^2/2 dq = π^3/2 c m v^2 A^2 σ/4 √(2) V^2. The same result can be obtained via calculations in real space. Because there is no shear, the elastic energy density of the distortion field is <cit.>: W = 1/2 (λ_L+2μ_L) (ε_11+ε_22+ε_33)^2 - 2μ_L (ε_11ε_22 + ε_22ε_33 + ε_33ε_11) = 1/2 (λ_L+2μ_L)(∇·𝐮)^2 - 2μ_L (ε_11ε_22 + ε_22ε_33 + ε_33ε_11), where ε_11,22,33 are the diagonal elements of the strain tensor; λ_L and μ_L are the material's Lamé parameters, which are related to the longitudinal sound speed by λ_L + 2μ_L = m/V v^2. The amplitude profile in q corresponds to the amplitude of the time-independent term in Eq. (<ref>): 𝐮_0(𝐫) = [ π^1/2 A σ^2 /4 r^2 erf (r/σ) - A σ/2r e^-r^2/σ^2] 𝐫̂. Thus we can easily obtain the strain tensor due to a single defect: ε_11 = ε_rr = d u_0/dr, ε_22 = ε_θθ = u_0/r, ε_33 = ε_ϕϕ = u_0/r. Therefore, the energy density of the excitations in the system is U_p = cN/NV∫_0^∞ 4π r^2 W dr = 4π c/V∫_0^∞[ 1/2 (λ_L+2μ_L) r^2 (∇·𝐮_0)^2 - 2μ_L ( 2 r u_0 du_0/dr + u_0^2 ) ] dr = 4π c/V[∫_0^∞m v^2/2 V σ^2 r^2 A^2 e^-2r^2 / σ^2 dr - (2μ_L u_0^2 r) |_r=0^∞] = π^3/2 c m v^2 A^2 σ/4 √(2) V^2.
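The Gaussian q-integral in the first derivation can be confirmed symbolically; a one-line check with sympy:

```python
import sympy as sp

q, sigma, c, m, v, A, V = sp.symbols("q sigma c m v A V", positive=True)
Up = sp.integrate(sp.pi * c * m * v**2 * A**2 * sigma**4 / (4 * V**2)
                  * q**2 * sp.exp(-sigma**2 * q**2 / 2), (q, 0, sp.oo))
closed = sp.pi**sp.Rational(3, 2) * c * m * v**2 * A**2 * sigma / (4 * sp.sqrt(2) * V**2)
print(sp.simplify(Up - closed))   # 0: matches the closed form above
```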
§.§ Obtaining the energy conversion efficiency In this section, we present detailed derivations of how the conversion efficiency (from deposited x-ray energy to phonon energy) can be obtained by fitting the data with the model described above. The only additional assumption is that the term cA^2, which describes the concentration and amplitude of the excitations, is proportional to the fluence of the pump pulse at any point in the sample. As will be shown, this is expected given the bi-linearity of the pump-probe signal demonstrated in the main text. We begin by considering the scattering from a single pair of pump-probe pulses. Assume that, at the sample position, the probe and pump beams have transverse fluence profiles Φ_1(x,y) and Φ_2(x,y). Let μ denote the x-ray linear attenuation coefficient. Since we work in grazing geometry, let α denote the grazing angle and β the angle of the outgoing wave (see Fig. <ref>). As in the main text, we use 𝐐 to denote the scattering wavevector, 𝐆 the reciprocal lattice vector, and 𝐪≡𝐐-𝐆 the deviation from the Bragg peak, which corresponds to the phonon wavevector. Then, the thermal diffuse scattering to first order can be written as <cit.>: I_0(𝐐) = ħ/2∑_i1/ω_𝐪,i( ħω_𝐪,i/2 k_B T) |∑_j F_j(𝐆) ( 𝐐·ϵ_i,𝐪,j/√(m_j)) |^2 ×∫_-∞^∞∫_-∞^∞dxdy/sinα∫_0^∞ dz exp(- μ z/sinβ) I_e n, where n is the number density of the unit cell, k_B is the Boltzmann constant, and T=300 K is the sample temperature. The exp (- μ z/sinβ ) term accounts for the attenuation of the outgoing beam. The sum ∑_j is over all atoms in a unit cell; m_j is the mass of atom j, and the structure factor of atom j is defined as F_j(𝐆) ≡ f_j e^-M_j e^-i 𝐆·τ_j, where f_j is the form factor, e^-M_j the Debye-Waller factor, and τ_j the position of the atom in the unit cell. The sum ∑_i is over all phonon modes; ω_𝐪,i is the angular frequency and ϵ_i,𝐪,j the eigenvector of phonon mode i. z is the penetration depth into the sample (see Fig. <ref>). I_e is the scattering from a single electron; it can be re-written as I_e=Φ_inc(x,y,z) 𝒮_e, where Φ_inc(x,y,z) is the total incident fluence at coordinate (x,y,z), and 𝒮_e is a scattering strength taking into account the x-ray polarization and detector solid angle; see Ref. <cit.>. Taking into account the spatial profile of the x-ray beams as well as their attenuation in the sample, giving rise to a factor of exp(- μ z/sinα ), the equation above becomes: I_0(𝐐) = ħ n 𝒮_e/2∑_i1/ω_𝐪,i( ħω_𝐪,i/2 k_B T) | ∑_j F_j(𝐆) (𝐐·ϵ_i,𝐪,j/√(m_j)) |^2 ×∫_-∞^∞∫_-∞^∞dxdy/sinα∫_0^∞ dz [Φ_1(x,y)+Φ_2(x,y)] exp(-μ z/sinα - μ z/sinβ). Noting that the integral of the fluence is the pulse energy, ∫_-∞^∞∫_-∞^∞ dxdy Φ_1,2(x,y) = ℰ_1,2, we may re-write Eq. (<ref>) as I_0(𝐐) = ħ n 𝒮_e/2 μ (ℰ_1 + ℰ_2) ∑_i1/ω_𝐪,i( ħω_𝐪,i/2 k_B T) ×| ∑_j F_j(𝐆) ( 𝐐·ϵ_i,𝐪,j/√(m_j)) |^2( 1 + sinα/sinβ)^-1. As expected, I_0(𝐐) is proportional to the summed pulse energy ℰ_1+ℰ_2. As in Eq. (<ref>), we may write I_0(𝐐) = S_0(𝐐) (ℰ_1 + ℰ_2), where S_0(𝐐) is independent of ℰ_1,2. The change in diffuse scattering intensity due to the distortions can be derived in a similar way as the Huang diffuse scattering due to static defects <cit.>. The result can be written as: Δ I(𝐐,t) ≈∫_-∞^∞∫_-∞^∞n dxdy/sinα∫_0^∞ dz c I_e exp(- μ z/sinβ) ×| ∑_j F_j(𝐆) |^2 | 𝐆·𝐮(𝐪,t) |^2 = ∫_-∞^∞∫_-∞^∞ n dxdy/sinα∫_0^∞ dz c 𝒮_e Φ_1(x, y) ×exp(-μ z/sinα - μ z/sinβ) | ∑_j F_j(𝐆) (𝐆·𝐪̂) |^2 ×π^3 A^2 σ^4 /V^2 q^2 e^-σ^2 q^2/2[1 - cos(q v t ) ]^2 e^-2 t/τ, where in the second step we have used the results from the model, Eq. (<ref>). Note that here the incident fluence Φ_inc includes only the probe beam, Φ_1. As mentioned above, the effect of the pump pulse on the sample is reflected in the term cA^2, which varies with the spatial coordinates (x,y,z) and is assumed to be proportional to the pump fluence: cA^2 = κΦ_pump = κΦ_2(x,y) exp(- μ z/sinα), where κ is a conversion coefficient. Thus, Δ I(𝐐,t) = π^3 κσ^4 n 𝒮_e/V^2 q^2 e^-σ^2 q^2/2[1 - cos(q v t ) ]^2 e^-2 t/τ×∫_-∞^∞∫_-∞^∞dxdy/sinα∫_0^∞ dz Φ_1(x,y) Φ_2(x,y) exp(-2 μ z/sinα - μ z/sinβ) | ∑_j F_j(𝐆) (𝐆·𝐪̂) |^2. As will be discussed in the section “Overlap correction” below, the beam profiles may change during a delay scan due to motor movements. However, at a given delay, we may assume that the spatial profiles of the beams remain the same for all shots. In other words, we may write Φ_1,2(x,y) = ℰ_1,2ϕ_1,2(x,y), where ϕ_1,2(x,y) do not vary between shots and ∬ϕ_1,2(x,y) dx dy = 1. Then, we define the overlap factor: 𝒪(t) ≡ 4πσ_b^2 ∫_-∞^∞∫_-∞^∞ϕ_1(x,y) ϕ_2(x,y) dxdy. The prefactor 4πσ_b^2 represents the area of the beam and makes 𝒪(t) a unitless quantity. σ_b represents the size of the beam and, in the case of a Gaussian beam, it is taken to be the standard deviation of the Gaussian (see the section “Overlap correction” below). Now, we can rewrite Eq. (<ref>) as: Δ I(𝐐,t) = π^2 κσ^4 n 𝒮_e/4 σ_b^2 V^2 q^2 μ e^-σ^2 q^2/2[1 - cos(q v t ) ]^2 e^-2 t/τℰ_1 ℰ_2 ×| ∑_j F_j(𝐆) (𝐆·𝐪̂) |^2 𝒪(t) ( 2+ sinα/sinβ)^-1. As expected, this pump-probe signal is bi-linear in the pump and probe pulse energies. As in the main text, we may write Δ I(𝐐, t) = C(𝐐, t) 𝒪(t) ℰ_1 ℰ_2, where C(𝐐, t) is independent of ℰ_1,2. We have also isolated the overlap correction factor 𝒪(t), a purely geometrical effect due to experimental conditions, from the physically relevant quantity C(𝐐, t). Experimentally, we measure the total intensity I(𝐐, t) = I_0(𝐐) + Δ I (𝐐, t) together with the pulse energies ℰ_1, ℰ_2 for each shot.
Let s be the index of a shot; then the summed intensity is ∑_s^all shots I(𝐐, t) = S_0(𝐐) ∑_s^all shots (ℰ_1^(s) + ℰ_2^(s)) + C(𝐐, t) 𝒪(t) ∑_s^all shotsℰ_1^(s)ℰ_2^(s). Then, we normalize it by the summed pulse energies: I^norm (𝐐, t) ≡∑_s I(𝐐, t) / ∑_s (ℰ_1^(s) + ℰ_2^(s)) = S_0(𝐐) + C(𝐐, t) 𝒪(t) ∑_s ℰ_1^(s)ℰ_2^(s)/∑_s (ℰ_1^(s) + ℰ_2^(s)). As shown in the main text, there is no pump-probe signal at t=0, so the term S_0(𝐐) may be replaced by I^norm (𝐐, t=0). Hence, [ I^norm (𝐐, t)/I^norm (𝐐, t=0) - 1 ] [ ∑_s (ℰ_1^(s) + ℰ_2^(s)) / ∑_s ℰ_1^(s)ℰ_2^(s)] [𝒪(t)]^-1 = C(𝐐, t) / S_0(𝐐) = [ π^2 κσ^4 n 𝒮_e/4σ_b^2 V^2 q^2 μ e^-σ^2 q^2/2[1 - cos(q v t ) ]^2 e^-2 t/τ| ∑_j F_j(𝐆) (𝐆·𝐪̂) |^2( 2+ sinα/sinβ)^-1] / [ ħ n 𝒮_e/2 μ∑_iω_𝐪,i^-1( ħω_𝐪,i/2 k_B T) | ∑_j F_j(𝐆) (𝐐·ϵ_i,𝐪,j/√(m_j)) |^2( 1 + sinα/sinβ)^-1] = π^2 κσ^4/2ħσ_b^2 V^2 q^2 e^-σ^2 q^2/2[1 - cos(q v t ) ]^2 e^-2 t/τ| ∑_j F_j(𝐆) (𝐆·𝐪̂) |^2/∑_iω_𝐪,i^-1( ħω_𝐪,i/2 k_B T) | ∑_j F_j(𝐆) (𝐐·ϵ_i,𝐪,j/√(m_j)) |^2 ( 1 + sinα/sinβ/2+ sinα/sinβ) ≈π^2 κσ^4/4 ħσ_b^2 V^2 q^2 e^-σ^2 q^2/2[1 - cos(q v t ) ]^2 e^-2 t/τ| ∑_j F_j(𝐆) (𝐆·𝐪̂) |^2/∑_iω_𝐪,i^-1( ħω_𝐪,i/2 k_B T) | ∑_j F_j(𝐆) (𝐐·ϵ_i,𝐪,j/√(m_j)) |^2, where in the last step we have used sinα / sinβ≪ 1. The physical quantity of interest is the ratio between the deposited energy density, U_d, and the phonon energy density, U_p. The former is simply U_d = μ_peΦ_pump, where μ_pe is the x-ray photoelectric absorption coefficient <cit.>. Thus, combining Eqs. [<ref>, <ref>, <ref>], we obtain [ I^norm (𝐐, t)/I^norm (𝐐, t=0) - 1 ] [ ∑_s (ℰ_1^(s) + ℰ_2^(s)) / ∑_s ℰ_1^(s)ℰ_2^(s)] [𝒪(t)]^-1 = (2π)^1/2μ_peσ^3 /ħσ_b^2 m v^2 q^2( U_p/U_d) e^-σ^2 q^2/2[1 - cos(q v t ) ]^2 e^-2 t/τ| ∑_j F_j(𝐆) (𝐆·𝐪̂) |^2/∑_iω_𝐪,i^-1( ħω_𝐪,i/2 k_B T) | ∑_j F_j(𝐆) (𝐐·ϵ_i,𝐪,j/√(m_j)) |^2. Therefore, by calculating the pump-probe signal in Eq. (<ref>) from experimental data and fitting it with Eq. (<ref>), we can extract the localization size σ and the conversion coefficient U_p/U_d. Specifically, we may define a pre-factor ℱ: ℱ≡(2π)^1/2μ_pe/ħσ_b^2 m v^2 q^2| ∑_j F_j(𝐆) |^2/∑_iω_𝐪,i^-1( ħω_𝐪,i/2 k_B T) | ∑_j F_j(𝐆) (𝐐·ϵ_i,𝐪,j/√(m_j)) |^2, which includes geometric factors (e.g., the beam size and the pump-probe overlap factor), known constants (e.g., the x-ray linear absorption coefficient), and DFT results (e.g., phonon mode frequencies), all of which are independent of the parameters of the model. Thus, the pre-factor ℱ can be calculated for any given 𝐪 and t, and is fixed during the data fitting. We can then re-write the equation above as [ I^norm (𝐐, t)/I^norm (𝐐, t=0) - 1 ] [ ∑_s (ℰ_1^(s) + ℰ_2^(s))/∑_s ℰ_1^(s)ℰ_2^(s)] [𝒪(t)]^-1 = C(𝐐, t)/S_0(𝐐) = ℱσ^3 ( U_p/U_d) e^-σ^2 q^2/2[1 - cos(q v t ) ]^2 e^-2 t/τ|𝐆·𝐪̂|^2, and fit the data by tuning the parameters τ, σ, and U_p/U_d. § ADDITIONAL EXPERIMENTAL METHODS §.§ Overlap correction The time delay between the two pulses is adjusted via two symmetric linear motions in the delay branch which change the distance between the inner crystals (i.e., C_1 and C_4 in Fig. 1A in the main text) and the outer crystals (C_2 and C_3). In order to perform a continuous scan of the delay, the straightness of the linear stages needs to meet two requirements: * The orientation errors of the outer crystals caused by this linear motion should be well below the Darwin width of the Bragg reflection, 17 μrad in this case, to maintain the photon throughput. * The angular errors of the exit beam from the delay branch should be sufficiently small so that the two output beams remain focused and overlapped at the sample location.
Note that in this experiment, with a focal size of 20 μm and a focal length of 3.3 m, angular errors on the order of 6 μrad would lead to the complete loss of overlap between the two beams. Although the planar air-bearing-based mechanism used for the linear motion <cit.> meets the first requirement, it is still difficult to achieve the sub-μrad level straightness required by the second one. On the other hand, the angular errors of these air-bearing-based linear motions are repeatable at the sub-μrad level. Therefore, the following calibration routine has been implemented to partially correct for the angular errors: * First, we measure changes in the horizontal and vertical position of the beam due to the angular movements during the delay scan. This is done using the high-resolution beam profile monitor at the sample location. * Then, we calibrate the relation between the θ and χ motion of crystal C_4 and the horizontal and vertical movement of the beam at the sample location using the profile monitor. * Next, we build a lookup table of θ and χ values to compensate for the angular motion measured in step 1. * Finally, we perform the delay scan, applying at each time point the values of θ and χ from the lookup table. Shown in Fig. <ref>A is the movement of the focused beam from the delay branch measured at the sample position using the beam profile monitor while the delay is scanned from -2 to 10 ps after the angular error correction. Limited by the resolution of the θ and χ motions, we can only correct the angular errors to some extent. During the experiment, the overlap is optimized at the most negative delay, t=-2 ps, so the change in the centroid position is calculated with respect to its position at -2 ps. To calculate the overlap correction factor, 𝒪(t), defined in Eq. (<ref>), we assume that the two beams are both Gaussian in shape. Since the full-width at half-maximum (FWHM) of the beams is measured to be 20 μm, the standard deviation of the Gaussian is thus σ_b = 8.49 μm. Let D denote the distance between the centroids of the two beams. We may choose the coordinate system so that the centroids are located at (x,y) = (± D/2, 0). Hence, the overlap factor is 𝒪(t) = 4πσ_b^2 ∫_-∞^∞∫_-∞^∞ dxdy 1/2πσ_b^2exp[ -(x+D/2)^2 + y^2/2σ_b^2] ×1/2πσ_b^2exp[ -(x-D/2)^2 + y^2/2σ_b^2] = exp( -D^2/4σ_b^2). This factor is calculated and plotted in Fig. <ref>B and is taken into account in further analyses of the time-resolved signal. §.§ Diode calibration Figure <ref> shows the calibration of the diode readings. The intensity monitor i_5, which is placed before the sample, is calibrated separately, and its readings are in units of μJ. We use this reading to calibrate the diodes d_03 and d_34 in the following way: In one scan, we block the fixed-delay branch and obtain the coefficient c_1 that converts the d_03 reading (i.e., intensity from the variable-delay branch) into the i_5 reading, as shown in the top panel of Fig. <ref>. In another scan, we leave both branches open and, knowing the coefficient c_1, obtain the coefficient c_2 that converts the d_34 reading (i.e., intensity from the fixed-delay branch) into the i_5 reading, as shown in the bottom panel of Fig. <ref>. In this way, we can obtain the energy delivered onto the sample from each branch in units of μJ, ℰ_1=c_1 d_03 and ℰ_2=c_2 d_34.
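The closed form for 𝒪(t) is easy to confirm numerically; a sketch using the 20 μm FWHM quoted above and an example centroid separation D:

```python
import numpy as np
from scipy.integrate import dblquad

sig_b = 20.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # 8.49 um from 20 um FWHM
D = 10.0                                            # um, example separation

def g(x, y, x0):   # normalized 2D Gaussian centered at (x0, 0)
    return np.exp(-((x - x0)**2 + y**2) / (2 * sig_b**2)) / (2 * np.pi * sig_b**2)

val, _ = dblquad(lambda y, x: g(x, y, -D / 2) * g(x, y, D / 2),
                 -60, 60, -60, 60)
print(4 * np.pi * sig_b**2 * val, np.exp(-D**2 / (4 * sig_b**2)))  # the two agree
```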
§ ADDITIONAL DATA Figures <ref> and <ref> present data on KTaO_3, in the same format as Figs. <ref> and <ref> in the main text for SrTiO_3. § ESTIMATED AMPLITUDE OF STRAIN WAVES If we assume that the coupling is only through the deformation potential and that one photon creates one spherical wave, then each photon will lead to a uniform electron band shift of the amount Δ E in the excited volume Ω=NV=4π/3σ_e^3, where V is the unit cell volume, N is the number of unit cells excited, and σ_e is the radius of the electron cloud, which is allowed to be different from σ. A photon of energy hν injects a number of carriers Δ N=hν/E_gap and leads to a change in the chemical potential Δ E=Δ N E_gap/N, where N, the total number of unit cells in the excited volume Ω (equivalently, the number of electrons within one single band), is defined through Ω=NV=4π/3σ_e^3. We then have Δ E=hν/N. The induced strain is determined by Δ E=Ξε, where Ξ is the deformation potential. Therefore, the uniform strain ε incurred by the incident photon satisfies the relation ε/hν=V/ΞΩ=3V/4πΞσ_e^3. The signal ratio [C(𝐐,t)/S_0(𝐐)]_STO/[C(𝐐,t)/S_0(𝐐)]_GaAs≈ε^2_STO/ε^2_GaAs under a similar scattering geometry, and ε^2_STO/ε^2_GaAs≈Ξ_GaAs^2/Ξ_STO^2 if σ_e is assumed to be similar in the two materials. Such an assumption is not unreasonable because the heaviest elements in these materials are not far apart in atomic number and we are not hitting an x-ray resonance between their edges. Now we consider thermoelastic coupling as the electron-lattice coupling mechanism. The temperature rise caused by the absorption of one x-ray photon is Δ T=hν/(C N/N_A), where C is the heat capacity in J/(K· mol) and N_A is the Avogadro number. Due to thermal expansion, the strain caused by the temperature rise is ε=αΔ T, where α is the thermal expansion coefficient. Therefore ε/hν=α N_A V/(CΩ)=3α N_A V/(4π Cσ_e^3). To compare ε in the two materials we only need to compare their α/C: [C(𝐐,t)/S_0(𝐐)]_STO/[C(𝐐,t)/S_0(𝐐)]_GaAs≈ε^2_STO/ε^2_GaAs=(α/C)^2_STO/(α/C)^2_GaAs. For SrTiO_3, α=3.23× 10^-5 K^-1 and C=98 J/(K· mol) <cit.>. For GaAs, α=6× 10^-6 K^-1 and C=45 J/(K· mol) <cit.>. This results in only a factor of 6 larger signal in SrTiO_3.
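A quick numeric check of this last estimate, using the material constants quoted above:

```python
alpha_sto, C_sto = 3.23e-5, 98.0     # K^-1, J/(K mol), SrTiO3
alpha_gaas, C_gaas = 6.0e-6, 45.0    # K^-1, J/(K mol), GaAs
ratio = (alpha_sto / C_sto)**2 / (alpha_gaas / C_gaas)**2
print(f"thermoelastic signal ratio ~ {ratio:.1f}")   # ~6, as stated
```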
http://arxiv.org/abs/2312.16453v2
{ "authors": [ "Yijing Huang", "Peihao Sun", "Samuel W. Teitelbaum", "Haoyuan Li", "Yanwen Sun", "Nan Wang", "Sanghoon Song", "Takahiro Sato", "Matthieu Chollet", "Taito Osaka", "Ichiro Inoue", "Ryan A. Duncan", "Hyun D. Shin", "Johann Haber", "Jinjian Zhou", "Marco Bernardi", "Mingqiang Gu", "James M. Rondinelli", "Mariano Trigo", "Makina Yabashi", "Alexei A. Maznev", "Keith A. Nelson", "Diling Zhu", "David A. Reis" ], "categories": [ "cond-mat.mtrl-sci" ], "primary_category": "cond-mat.mtrl-sci", "published": "20231227074038", "title": "Hard X-ray Generation and Detection of Nanometer-Scale Localized Coherent Acoustic Wave Packets in SrTiO$_3$ and KTaO$_3$" }
Structural Diagnosability Analysis of Switched and Modular Battery Packs *This work is financed by the Swedish Electromobility Center and the Swedish Energy Agency. 1st Fatemeh Hashemniya Department of Electrical Engineering Linköping University, Sweden e-mail: fatemeh.hashemniya@liu.se 2nd Arvind Balachandran Department of Electrical Engineering Linköping University, Sweden e-mail: arvind.balachandran@liu.se 3rd Erik Frisk Department of Electrical Engineering Linköping University, Sweden e-mail: erik.frisk@liu.se 4th Mattias Krysander Department of Electrical Engineering Linköping University, Sweden e-mail: mattias.krysander@liu.se
Safety, reliability, and durability are targets of all engineering systems, including Li-ion batteries in electric vehicles. This paper focuses on sensor setup exploration for a battery-integrated modular multilevel converter (BI-MMC) that can be part of a solution to sustainable electrification of vehicles. A BI-MMC contains switches to convert DC to AC to drive an electric machine. The various configurations of the switches result in different operation modes, which, in turn, pose great challenges for diagnostics. The study explores diverse sensor arrangements and system configurations for detecting and isolating faults in modular battery packs. Configurations involving a minimum of two modules integrated into the pack are essential to successfully isolate all faults. The findings indicate that the default sensor setup is insufficient for achieving complete fault isolability. Additionally, the investigation demonstrates that current sensors in the submodules do not contribute significantly to fault isolability. Further, the results on switch positions show that the system configuration has a significant impact on fault isolability. A combination of appropriate sensor data and system configuration is important in achieving optimal diagnosability, which is a paramount objective in ensuring system safety. Diagnostics, Fault detection, Lithium-ion battery, Modular Multilevel Converters (MMC). § INTRODUCTION Switched systems are commonly found in many applications, e.g., power electronics, automotive, and aerospace. This work will explore how to perform diagnosability analysis in a switched battery system for electric vehicles. Electric vehicle (EV) powertrains employ two-level inverters to convert DC from a large battery pack to AC that powers an electric machine <cit.>. The large battery pack typically contains several series- and parallel-connected low-voltage battery cells (2-4 V) to provide a high voltage (300-800 V) <cit.>.
Due to differences in leakage currents and cell inhomogeneities, the individual cell voltages and the state-of-charge (SOC) distribution among the cells are non-homogeneous <cit.>. As a result, over time, some cells tend to discharge faster than others, thus limiting the total energy the pack can deliver. One approach to mitigate this problem is to introduce modular battery packs, a technique that has gained popularity in the research and development of EV powertrains due to its high efficiency and cell-level control <cit.>. Modular battery packs can improve battery balancing and provide better battery fault tolerance due to their highly modular structure <cit.>. However, the increased flexibility makes fault monitoring more demanding than for traditional battery packs: modular battery packs include more components, leading to more types of faults, and can be operated in diverse operation modes, which makes diagnostics more challenging. The purpose of fault diagnosis is to detect and mitigate faults, i.e., small deviations from nominal behavior, such that failures can be avoided by taking proper actions <cit.>. Cell faults include unexpected rates of aging, such as increased internal resistance or capacity fade, and increased connector resistance. Internal/external short circuits and thermal runaways in cells are failures, since these imply a permanent interruption of the battery's ability to perform, and are thus not the focus of this work. Hu et al. <cit.> comprehensively reviewed the mechanisms, features, and diagnosis of various faults in Li-ion batteries. In addition, battery management systems (BMS) include a wide range of sensors that are essential for battery monitoring and control <cit.>. These sensors include voltage, current, and temperature sensors, and the detection and isolation of sensor faults is also important for diagnostics <cit.>. Offset and scaling faults are common faults in voltage and current sensors <cit.>. Therefore, establishing a proper sensor setup to monitor battery conditions is essential. § PROBLEM FORMULATION The main problem studied in this paper is how to perform fault diagnosis analysis for switched systems and to apply it to a modular battery pack model, in particular discussing questions concerning computational complexity. Model-based techniques based on structural analysis <cit.> are a suitable choice of methods for addressing this problem; however, extensions to standard analysis techniques are required to also include switched models. The first contribution of this paper is a method to determine the influence of the total number and type of sensors required to detect and isolate faults in a modular battery pack, utilizing methods for switched systems. The diagnostic performance is given not only by the operational mode of an individual submodule (SM) but also by the configuration of all SMs in the battery system. Therefore, a method for investigating how fault detectability and isolability depend on switch configuration and sensor setup is demonstrated. A second contribution is two techniques for reducing the combinatorial complexity introduced in the analysis by the switches. A closely related work is <cit.>, where two sensor setups were analyzed, among other things, to reach the best isolability for a reconfigurable battery system. The approach is based on an equivalent circuit model (ECM) with a half-bridge converter and a two-state thermal model.
One difference from <cit.> is that here the effects of switching between system configurations, leading to different model structures and isolability possibilities, are analyzed. § DIAGNOSIS BACKGROUND To perform diagnostics using structural analysis, a structural model <cit.> in the form of an incidence matrix is introduced based on a mathematical model of the system. The rows in the incidence matrix represent the set M of system equations and the columns represent the unknown variables. The presence of a variable in an equation is denoted by '1', and its absence is denoted by '0'. The key analysis tool, the Dulmage-Mendelsohn (DM) decomposition <cit.>, partitions the incidence matrix into three parts: an underdetermined part, M^-, where the number of equations is less than the number of variables; a just-determined part, M^0, where the number of equations and variables are equal; and an overdetermined part, M^+, where the number of equations is greater than the number of variables. The overdetermined part contains the analytical redundancy needed for diagnosis. Without loss of generality, assume that each fault only affects one equation and let e_f denote the equation that is affected by a fault f. Then structural fault detectability and isolability are defined as follows. A fault f is structurally detectable in a model M if e_f ∈ M^+. ⧫ Fig. <ref>(a) shows an example of an extended Dulmage-Mendelsohn decomposition <cit.> of an incidence matrix. The just-determined part consists of the first two rows and columns, and the remaining rows and columns form the overdetermined part, i.e., M^+ = {e_1, e_2, e_3, e_4, e_6, e_8}. The faults f_2 to f_4 are structurally detectable since they are in M^+, but f_1 is not. The next definition characterizes the structural model properties required for being able to point out which fault has occurred. A fault f_i is structurally isolable from f_j in a model M if e_f_i∈ (M \{e_f_j})^+. ⧫ A fault is uniquely isolable in a model if it is structurally isolable from all other faults in the model. A model has full isolability if all faults in the model are uniquely isolable. Structural isolability can be efficiently determined using the Dulmage-Mendelsohn decomposition <cit.>, efficiently implemented in, e.g., the dmperm command in MATLAB and also available in <cit.>. The isolability of a model can be visualized by partitioning the equations in M^+ such that faults are isolable if and only if they are in different equation sets of the partition <cit.>. In Fig. <ref>(a), M^+ = {e_1, e_2, e_4, e_8} ∪ {e_3} ∪ {e_6}, where gray boxes indicate sets with cardinality greater than one; in this case, there is only one such set. Thus, the figure shows that f_4 is isolable from all other faults, i.e., f_4 is uniquely isolable, and f_2 and f_3 are isolable from f_4 but not isolable from each other. Fault isolability can be condensed into an isolability matrix. Fig. <ref>(b) shows the isolability matrix corresponding to the model presented in Fig. <ref>(a). A dot in position (f_i, f_j) implies that f_i is not isolable from f_j, and full isolability corresponds to an identity matrix.
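The two definitions above can be implemented compactly. The sketch below is a stand-in for the toolbox routines, not the toolbox API; it relies on the standard characterization that an equation belongs to M^+ exactly when deleting it leaves the structural rank of the incidence matrix unchanged, and it is exercised on a small toy model (an assumption of the sketch, not the model of Fig. 1).

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import structural_rank

def M_plus(M):
    """Equation indices in the overdetermined part M^+ of incidence matrix M."""
    r = structural_rank(csr_matrix(M))
    return {e for e in range(M.shape[0])
            if structural_rank(csr_matrix(np.delete(M, e, axis=0))) == r}

def detectable(M, e_f):
    return e_f in M_plus(M)

def isolable(M, e_fi, e_fj):
    """f_i is isolable from f_j iff e_fi lies in (M minus {e_fj})^+."""
    Mred = np.delete(M, e_fj, axis=0)
    shifted = e_fi if e_fi < e_fj else e_fi - 1   # index shift after deletion
    return shifted in M_plus(Mred)

# Toy model: two equations in unknown x, one equation in unknown y.
M = np.array([[1, 0],    # e0, affected by fault f0
              [1, 0],    # e1, affected by fault f1
              [0, 1]])   # e2, affected by fault f2
print(detectable(M, 0), detectable(M, 2))  # True, False (e2 is just-determined)
print(isolable(M, 0, 1))                   # False: f0 and f1 share the redundancy
```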
§ MODULAR BATTERY PACK: AN OVERVIEW The battery pack considered works as a battery-integrated modular multilevel converter (BI-MMC) with several cascaded stages of DC-AC converters called submodules (SMs). A BI-MMC SM has a few series- and parallel-connected cells, the number of which is defined by the output voltage (v_out) and the total energy stored in the battery pack <cit.>. The SMs are controlled in such a way that a sinusoidal output voltage is achieved, which is used to drive an electric machine <cit.>. To control the speed and torque of the electric machine, the output current (i_out) is required <cit.>, and this is measured using a current sensor. fig:BIMMC_FBSM(a) shows the schematic of a BI-MMC phase-leg with n full-bridge (FB) SMs, and the schematic of the FB-SM is shown in fig:BIMMC_FBSM(b). The cells in the SM are modeled as a lumped RC equivalent circuit model with one RC link. This model can capture the degradation of Li-ion batteries <cit.>. Cell models with constant phase elements can be used to effectively capture and distinguish the various degradation phenomena, at the cost of increased complexity and computational time <cit.>. To estimate the SOC, crucial for any BMS, information about the cell current (i_k) and voltage (v_cell,k) is necessary <cit.>, where k corresponds to the k:th SM. The voltage v_cell,k is measured using a voltage sensor, and i_k is either measured, using a current sensor, or estimated from i_out. The FB-SM, presented in fig:BIMMC_FBSM(b), has two complementary switch pairs, and by controlling the 'on'- and 'off'-durations of these switch pairs, called modulation, an AC output voltage can be achieved. Typically, in MMCs (also applicable to BI-MMCs) with a large number of SMs, nearest-level modulation is chosen because of its low switching frequency, easy implementation, and low total harmonic distortion <cit.>. fig:NLCM shows an illustration of the nearest-level modulation of a BI-MMC phase arm with 6 FB-SMs. From the figure, the SM operation can be classified into three modes, based on the output voltage and the states of the SM switches. The different modes of the FB-SM based on the 'on' and 'off' states of the switches are summarized in tab:SMswmodes. § MODELLING FOR FAULT DIAGNOSIS This section presents the cell and BI-MMC models for four different sensor setups and their corresponding fault equations. Each SM is modeled as a set of differential-algebraic equations, and the total number of equations depends on the sensor setup. The nomenclature e_x,k refers to equation number x in the k:th SM. Faults are modeled as fault signals f_i, where f_i ≠ 0 if the fault is present and f_i = 0 otherwise. It is important to note that the model presented in this section is a highly idealized and simplistic model of battery cell behavior <cit.>. However, since the analysis is based on structural properties, the method is directly applicable to more detailed models. §.§ Model with sensor setup I Sensor setup I has voltage sensors in every SM to measure v_cell,k, where k=1, …, n and n is the number of SMs, and a current sensor measuring i_out. In this sensor setup, there are 10 equations per SM. The cells are modeled by e_1,k: v̇_p,k = i_k/C_p,k - v_p,k/(R_p,k C_p,k), e_2,k: v_cell,k = v_ocv,k + R_o,k i_k + v_p,k, and e_3,k: v̇_p,k = d v_p,k/dt, where v_p,k is the voltage across the double-layer capacitance, R_p,k is the charge transfer resistance, C_p,k is the double-layer capacitance, R_o,k is the ohmic resistance, and v_ocv,k is the open circuit voltage of the cell in the k:th SM. These parameters depend on the cells' SOC, state of health (SOH), and temperature, which are here assumed to be known entities. During the assumed diagnosis period, a couple of output fundamental AC periods, the parameters can with good accuracy be assumed constant. The SOC, SOH, and temperature dependencies of the parameters R_p,k, C_p,k, R_o,k, and v_ocv,k are, e.g., modeled using look-up tables.
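To make the cell model concrete, a minimal forward-Euler sketch of e_1,k-e_3,k for one SM is given below; the nominal parameter values are those of the numerical example later in the paper, while the 50 Hz, 10 A cell current is an assumption for illustration.

```python
import numpy as np

# Simulation sketch of the RC cell model e_1,k-e_3,k for one SM.
Rp, Cp, Ro, vocv = 692e-6, 1.52, 1.2e-3, 4.07   # ohm, F, ohm, V
dt = 1e-4
t = np.arange(0.0, 0.1, dt)
i_k = 10.0 * np.sin(2 * np.pi * 50.0 * t)       # assumed cell current (A)

vp = np.zeros_like(t)
for j in range(len(t) - 1):                      # e_1,k discretized
    vp[j + 1] = vp[j] + dt * (i_k[j] / Cp - vp[j] / (Rp * Cp))
v_cell = vocv + Ro * i_k + vp                    # e_2,k
print(f"v_cell range: {v_cell.min():.4f} .. {v_cell.max():.4f} V")
```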
The equations describing faults in the behavior of the cells are e_4,k: R_p,k = R̅_p,k + f_R_p,k, e_5,k: C_p,k = C̅_p,k + f_C_p,k, e_6,k: R_o,k = R̅_o,k + f_R_o,k, and e_7,k: v_ocv,k = v̅_ocv,k + f_v_ocv,k, where R̅_p,k, C̅_p,k, R̅_o,k, and v̅_ocv,k are the values of the corresponding nominal parameters for the present state of charge and temperature, and the corresponding fault signals f_R_p,k, f_C_p,k, f_R_o,k, and f_v_ocv,k model deviations from the corresponding nominal values. If |f_c| ≥ L, where c is a component and L is a predetermined limit, e.g., a state of health limit, then c is considered faulty. The four inner cell faults change gradually, representing aging phenomena <cit.>. A general cell fault f_c,k will be considered instead of considering the internal faults {f_R_p,k, f_C_p,k, f_R_o,k, f_v_ocv,k} of the cell individually. The voltage sensor in every SM measuring v_cell,k is modeled by e_8,k: y_v,k = v_cell,k + f_v,k, where y_v,k is the sensor signal, v_cell,k the measured voltage, and f_v,k a possible sensor fault. The different modes of the SM, m_k ∈ {fwd, bwd, byp1, byp2}, make the BI-MMC a multi-mode system. The mode-dependent equations are e_9,k: v_out,k = v_cell,k if m_k ∈{fwd}, -v_cell,k if m_k ∈{bwd}, 0 otherwise, and e_10,k: i_k = i_out if m_k ∈{fwd}, -i_out if m_k ∈{bwd}, 0 otherwise, where v_out,k is the output voltage of the k:th SM. The equations describing the BI-MMC output voltage and current are e_1,0: v_out = ∑_k=1^n v_out,k, e_2,0: y_i_out = i_out + f_i_out, where y_i_out is the current measurement and f_i_out models the current sensor fault. For n SMs, there are 10n equations ({e_1,k, …, e_10,k}, where k = 1, …, n) describing the faults and operation of the SMs, and two more (e_1,0 and e_2,0) describing v_out and i_out; i.e., this sensor setup has 10n + 2 equations and the fault set is {f_c,k, f_v,k}_k=1^n ∪{f_i_out}. §.§ Model with sensor setup II Sensor setup II, in addition to the sensors used in setup I, has a sensor measuring the output voltage, i.e., v_out. In this sensor setup, there are 10 equations per SM, {e_i,k}_i=1^10, the output voltage and current equations {e_i,0}_i = 1^2, and the sensor equation e_3,0: y_v_out = v_out + f_v_out, where y_v_out is the voltage measurement and f_v_out models a voltage sensor fault. This setup has 10n + 3 equations, and the fault set is {f_c,k, f_v,k}_k=1^n ∪{f_i_out, f_v_out}. §.§ Model with sensor setup III Sensor setup III has voltage and current sensors in each SM, i.e., {v_cell,k, i_k}_k = 1^n are measured. The output current is also measured. This sensor setup has 2 output equations {e_i,0}_i = 1^2 and 11 equations per SM, i.e., {e_i,k}_i = 1^11, where e_11,k: y_i,k = i_k + f_i,k is a current sensor equation, y_i,k the current measurement, and f_i,k the current sensor fault. This setup has 11n + 2 equations and the fault set is {f_c,k, f_v,k, f_i,k}_k=1^n ∪{f_i_out}. §.§ Model with setup IV Sensor setup IV includes both current and voltage sensors in each SM and output current and voltage sensors, i.e., the set of measured variables is {v_cell,k, i_k}_k = 1^n ∪{i_out, v_out}. This sensor setup has 11 equations per SM, {e_i,k}_i = 1^11, and 3 output equations, {e_i,0}_i=1^3, thus bringing the total number of equations to 11n + 3. The fault set for this sensor setup is {f_c,k, f_v,k, f_i,k}_k=1^n ∪{f_i_out, f_v_out}. § ISOLABILITY ANALYSIS METHOD This section introduces techniques to reduce the combinatorial complexity introduced by the switches. First, configuration reduction to lower the computational complexity is discussed, followed by a method for computing the fault detectability and isolability properties of a BI-MMC for different sensor setups and reduced system configurations. An efficient representation of the fault isolability properties of the modular battery pack is also presented.
§.§ SM modes and system configurations reduction In the structural analysis of the BI-MMC, two possible reduction techniques are explored to reduce the computational complexity: SM mode reduction and system configuration reduction. Table <ref> shows the four modes for each SM; hence, in a system with n submodules, there are 4^n different system switch configurations. Different switch configurations have different diagnosis properties, so the analysis must be performed for all switch configurations, and since the number of system configurations, and therefore also the analysis complexity, grows exponentially with the number of submodules, it is important to reduce the number of system modes to be able to scale the analysis. First, consider the SM mode reduction. Note that the two insertion modes, fwd and bwd, have identical structures, as seen in e_9,k and e_10,k. The only difference in the model equations between the fwd and bwd modes is a sign change, i.e., v_out,k = ± v_cell,k, which does not change the structure. Also, the two bypass modes have the same structure. Therefore, the modes {fwd, bwd} will hereafter be represented by a single insertion mode (ins), and the modes {byp1, byp2} by a single bypass mode (byp). Second, consider the system configuration reduction. The number of system configurations to analyze can be reduced further, since the isolability properties are determined only by the number of SMs in ins or byp mode and not by the specific modes of the individual SMs. Consider for example n = 3, where system configurations are represented as a triple (m_1,m_2,m_3)∈{ins, byp}^3, where m_k denotes the mode of the k:th SM. Then the configurations (ins, ins, byp), (ins, byp, ins), and (byp, ins, ins) have similar fault isolability properties. This reduces the number of system configurations to be analyzed in a system with n SMs to n+1 system configurations, representing 0, 1, …, n inserted cells, respectively. §.§ Compact isolability representation Assume S is the set of the 4 considered sensor setups described in Section <ref>. For a given sensor setup s ∈ S, there are n+1 different system configurations to be analyzed, i.e., for k∈{0, 1, …, n} inserted cells. Let the structural model for sensor setup s and k inserted cells be denoted M_k(s). Similarly, let the detectable faults and the fault isolability for the case with k inserted cells be denoted by D_s,k and I_s,k, respectively. These sets are computed with the fault diagnosis toolbox in <cit.> as D_s, k = Detectability(M_k(s)), I_s, k = Isolability(M_k(s)), and can be represented as shown in Table <ref>, where the subscripts s and k correspond to rows and columns, respectively. The diagnosability, i.e., detectability and isolability, of the BI-MMC is computed by going through all sensor setups and system configurations in two nested loops. Each row of Table <ref> corresponds to a sensor setup that is specified by the first three columns. Sensors that are included in all SMs are listed in the SM column, and sensors measuring the output of the battery pack are listed in the pack column. The column labeled non-D under 0 inserted cells specifies the non-detectable faults for each sensor setup; i.e., f_i_out is not detectable for any sensor setup with 0 inserted cells. All faults are detectable for all sensor setups if at least one cell is inserted. The fault isolability for a model (without taking causality <cit.> into account) can be specified with a partition of the detectable faults, where each fault set in the partition indicates the faults that are not isolable from each other. Thus, a compact representation of the isolability used here is to list the sets of non-isolable faults.
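The two nested loops can be organized as in the sketch below, which also constructs the setup-I incidence matrix under the notational reconstruction of Section IV; this is an assumption-laden stand-in for the toolbox-based analysis, not the code used in the paper (the general cell fault is attached to e_4,k as a representative, and M_plus is the structural-rank test from the sketch in the Diagnosis Background section).

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import structural_rank

def M_plus(M):
    r = structural_rank(csr_matrix(M))
    return {e for e in range(M.shape[0])
            if structural_rank(csr_matrix(np.delete(M, e, axis=0))) == r}

def build_setup1(n, n_inserted):
    """Incidence matrix of sensor setup I with the first n_inserted SMs in ins mode."""
    per_sm = ["vp", "vdp", "i", "Rp", "Cp", "Ro", "vocv", "vcell", "vout"]
    names = [f"{v}{k}" for k in range(n) for v in per_sm] + ["iout", "vout"]
    col = {v: j for j, v in enumerate(names)}
    rows, fault_eq = [], {}
    def eq(*vars_):
        r = np.zeros(len(names)); r[[col[v] for v in vars_]] = 1.0
        rows.append(r); return len(rows) - 1
    for k in range(n):
        ins = k < n_inserted
        eq(f"vdp{k}", f"i{k}", f"Cp{k}", f"vp{k}", f"Rp{k}")        # e1k
        eq(f"vcell{k}", f"vocv{k}", f"Ro{k}", f"i{k}", f"vp{k}")    # e2k
        eq(f"vdp{k}", f"vp{k}")                                     # e3k
        fault_eq[f"fc{k+1}"] = eq(f"Rp{k}")                         # e4k: cell fault
        eq(f"Cp{k}"); eq(f"Ro{k}"); eq(f"vocv{k}")                  # e5k-e7k
        fault_eq[f"fv{k+1}"] = eq(f"vcell{k}")                      # e8k: sensor fault
        eq(*([f"vout{k}", f"vcell{k}"] if ins else [f"vout{k}"]))   # e9k
        eq(*([f"i{k}", "iout"] if ins else [f"i{k}"]))              # e10k
    eq("vout", *[f"vout{k}" for k in range(n)])                     # e_{1,0}
    fault_eq["f_iout"] = eq("iout")                                 # e_{2,0}
    return np.array(rows), fault_eq

n = 2
for k_ins in range(n + 1):                 # inner loop over configurations
    M, f_eq = build_setup1(n, k_ins)       # outer loop would range over setups
    det = sorted(f for f, e in f_eq.items() if e in M_plus(M))
    print(k_ins, "detectable:", det)       # f_iout drops out at 0 inserted cells
```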
To show an example of the isolability of a system and how it is compactly represented in the table, let us consider a switched battery pack with n = 3 SMs in mode (in, in, by), i.e., with two inserted cells, and with sensor setup II, i.e., with voltage sensors in the SMs and voltage and current output sensors. The extended DM decomposition and the corresponding fault isolability matrix are shown in Fig. <ref> and Fig. <ref>, respectively. Here, the different faults {f_{R_p,k}, f_{C_p,k}, f_{R_o,k}, f_{v_ocv,k}} related to the cell parameters are used to fulfill the assumption that faults only affect one equation in the structural analysis. However, when analyzing the result, the general cell fault f_k, introduced in Section <ref>, is considered to represent any type of faulty behavior of the cell. The non-isolable fault sets correspond to the diagonal blocks in the fault isolability matrix, i.e.,

I_{II,2} = { {f_1}, {f_{v,1}}, {f_2}, {f_{v,2}}, {f_3, f_{v,3}}, {f_{i_out}}, {f_{v_out}} },

where the first two singleton sets belong to SM 1, the next two to SM 2, and {f_3, f_{v,3}} to SM 3. This example shows some isolability properties that hold for any number of SMs in the system, for any of the considered sensor setups, and for any switch configuration, and that can be utilized to compactly describe the isolability in all these different cases. Faults in different SMs are isolable from each other, implying that the non-isolable fault sets include faults from at most one SM. Furthermore, SMs in insertion mode, here SM 1 and 2, have similar isolability. SMs in bypass mode also have similar isolability, even though this is not shown in this example. Hence, the isolability can be specified by listing the non-isolable fault sets with cardinality strictly greater than 1 for an SM in insertion mode, given in the columns labeled non-I, in, in Table <ref>, and for an SM in bypass mode, given in the columns labeled non-I, by. This implies that faults not present in any non-isolable set are uniquely isolable, and full isolability is given by no non-isolable fault sets. For the example in (<ref>), there are no non-isolable faults in insertion mode, and {f_3, f_{v,3}} is the non-isolable fault set in bypass mode. This result can be seen in Table <ref> for setup II and more than 1 inserted cell.

§ FAULT ISOLABILITY RESULT AND DISCUSSION

This section analyzes the isolability results of modular battery systems summarized in Table <ref> for the different sensor setups and concludes with a discussion of how the structural results relate to analytical properties.

§.§ Diagnosability for sensor setup I

The fault detectability and isolability for sensor setup I are given in the first row of Table <ref>. All faults are detectable in all configurations, except for the case when all SMs are in bypass mode (i.e., 0 inserted cells), where the output current sensor fault f_{i_out} is non-detectable. If one cell is inserted, then f_{i_out} is detectable but not isolable from the faults in the inserted cell. If at least two cells are inserted, f_{i_out} is uniquely isolable. The maximum isolability is achieved when the number of inserted cells is equal to or greater than 2. Then, a fault can be isolated to a specific SM or identified as an output current sensor fault, but the faults within an SM are not isolable from each other.

§.§ Diagnosability for sensor setup II

The fault detectability and isolability result of sensor setup II can be seen in the second row of Table <ref>. The difference between setup I and II is an output voltage sensor with fault f_{v_out}. The output voltage sensor fault f_{v_out} is detectable and uniquely isolable in all system configurations, and is thus not visible in the table.
By adding the output voltage sensor, the faults f_k and f_{v,k} are now isolable from each other if and only if SM k is in insertion mode. A more detailed illustration of this property can be seen in Fig. <ref>, which shows the extended DM decomposition for sensor setup II with the system operated in configuration (in, in, by). The faults f_1 and f_2 in the SMs in insertion mode are uniquely isolable, but not f_3 in SM 3, which is in bypass mode.

§.§ Diagnosability for sensor setup III and IV

The isolability results of sensor setups III and IV are given in the third and fourth rows of Table <ref>, respectively. The sensor faults of the added current sensors, f_{i,k} for k ∈ {1, …, n}, are detectable and uniquely isolable in both cases. A pairwise comparison of rows 1 with 3 and 2 with 4 shows that the only gain in isolability from adding a current sensor to each SM is that the output current sensor fault f_{i_out} becomes uniquely isolable in the case of one inserted cell. Comparing rows 2 and 3 makes clear that, except for the possibility of isolating an output current sensor fault in the case of one inserted cell, it is better to include one output voltage sensor, because that enables isolating the cell and voltage sensor faults within each SM in inserted mode.

§.§ Structural vs analytical isolability

Even though the structural detectability or isolability does not improve when sensors are added, it can happen that faults can more easily be detected and isolated due to the numerical properties of the model, as the next example shows, focusing on the detectability of the output current sensor fault f_{i_out}. Consider an example system with only one SM where the nominal parameters are R̅_p = 692 μΩ, C̅_p = 1.52 F, R̅_o = 1.2 mΩ, and v̅_ocv = 4.07 V. For sensor setup I, the following residual (r), sensitive to f_{i_out}, is valid in the forward and backward mode:

v̇_p = ± y_{i_out}/C̅_p − v_p/(R̅_p C̅_p), r = y_{v_cell} − v_p ∓ R̅_o y_{i_out} − v̅_ocv.

The upper sign is used in forward mode and the lower one in backward mode. In fault-free operation, the residual is 0, but in case of a fault f_{i_out} the residual value can be expressed as a function of the fault signal as

v̇_p = ± f_{i_out}/C̅_p − v_p/(R̅_p C̅_p), r = −v_p ∓ R̅_o f_{i_out}.

For a stationary fault, the gain from fault to residual is ∓R̅_o, which is a small number, making detection difficult with noisy measurements and model inaccuracies. In bypass mode, the fault f_{i_out} is not detectable at all, as shown previously in Table <ref>. Now, consider sensor setup III with an additional sensor y_{i_cell}. Then, in addition to residual (<ref>), the following residual can be used for detecting f_{i_out}:

r = y_{i_out} ∓ y_{i_cell} = f_{i_out} ∓ f_{i_cell}.

The coefficient of f_{i_out} is 1, i.e., the fault sensitivity to f_{i_out} is much better using this residual compared to residual (<ref>). This is an example of quantitative improvements that are not captured in structural analysis. Although the addition of current sensors to each cell improves the fault sensitivity to f_{i_out}, there are less expensive solutions that reach the same result. For example, adding a redundant sensor measuring the output current, y^extra_{i_out} = i_out + f^extra_{i_out}, gives an even better result, since the corresponding residual is

r = y_{i_out} − y^extra_{i_out} = f_{i_out} − f^extra_{i_out}.

The fault sensitivity is the same as in (<ref>), but this residual is valid in all system configurations, not only in the insertion modes.
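The difference between the two fault-to-residual gains can be illustrated numerically. The following rough Python sketch (ours; forward mode, i.e., the upper signs, and a stationary 1 A output-current sensor fault are assumed) integrates the residual generator with the nominal parameters given above and compares the milliohm-level gain of the first residual with the unit gain of the setup III residual.

```python
# Rough numerical comparison of fault-to-residual gains (illustrative only).
R_p, C_p, R_o = 692e-6, 1.52, 1.2e-3   # nominal R_p [Ohm], C_p [F], R_o [Ohm]
f_iout = 1.0                            # stationary 1 A output-current sensor fault

# First residual: v_p is driven by the faulty current measurement.
dt, T = 1e-4, 0.1                       # time constant R_p*C_p ~ 1 ms, so T >> tau
v_p = 0.0
for _ in range(int(T / dt)):            # forward-Euler integration of v_p
    v_p += dt * (f_iout / C_p - v_p / (R_p * C_p))
r1 = -v_p - R_o * f_iout                # forward mode (upper signs)

# Setup III residual: r = y_i_out - y_i_cell = f_i_out, i.e., gain 1.
r2 = f_iout

print(f"r1 = {r1:.2e}  (milliohm-level gain)")
print(f"r2 = {r2:.2e}  (gain 1)")
```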
§ CONCLUSION

Sensor setups and system configurations of BI-MMCs for diagnosis purposes were investigated in this paper. A main challenge is the combinatorial complexity due to the large number of system switch states. A general method for evaluating fault detectability and isolability in different system configurations was demonstrated. In systems with more than one SM, it is sufficient to use voltage sensors in all SMs together with an output current and an output voltage sensor to make all faults uniquely isolable. Full isolability is obtained if all SMs are in insertion mode. The investigation showed that SM current sensors do not significantly improve the structural diagnosability, but an additional redundant output current sensor can be used to improve fault sensitivity.
http://arxiv.org/abs/2312.16520v1
{ "authors": [ "Fatemeh Hashemniya", "Arvind Balachandran", "Erik Frisk", "Mattias Krysander" ], "categories": [ "eess.SY", "cs.SY" ], "primary_category": "eess.SY", "published": "20231227105744", "title": "Structural Diagnosability Analysis of Switched and Modular Battery Packs" }
This paper was not presented at any IFAC meeting. Corresponding author: Tessina H. Scholl (tessina.scholl@kit.edu), Karlsruhe Institute of Technology, 76021 Karlsruhe, Germany.

Keywords: time-delay systems, Lyapunov-Krasovskii functional of complete type, robustness, absolute stability.

Inspired by the widespread theory of complete-type Lyapunov-Krasovskii functionals, the article considers an alternative class of Lyapunov-Krasovskii functionals that intends to achieve less conservative robustness bounds. These functionals share the same structure as the functionals of complete type, and they also share being defined via their derivative along solutions of the nominal system. The defining equation for the derivative, however, is chosen differently: the Lyapunov equation, which forms the template for the defining equation of complete-type Lyapunov-Krasovskii functionals, is replaced by the template of an algebraic Riccati equation. Properties of the proposed Lyapunov-Krasovskii functionals of robust type are proven in the present article. Moreover, existence conditions are derived from the infinite-dimensional Kalman-Yakubovich-Popov lemma, combined with a splitting approach. The concept is tailored to sector-based absolute stability problems, and the obtainable robustness bounds are strongly related to the small-gain theorem, the complex stability radius, passivity theorems, the circle criterion, and integral quadratic constraints with constant multipliers, where, however, the nominal system itself has a time delay.

§ INTRODUCTION

Complete-type <cit.> and related <cit.> Lyapunov-Krasovskii (LK) functionals are a recent field of research <cit.>. Amongst the various applications, an important one, and, in fact, the original purpose of complete-type LK functionals when introduced in <cit.>, is to tackle the question of robustness. Being constructed for a nominal linear time-delay system ẋ(t) = A_0 x(t) + A_1 x(t−h), the LK functional shall prove stability in perturbed versions of that system with an added uncertain or nonlinear term g̃(x(t), x(t−h)) (leaving the delay h of the nominal system unaltered). Frequency-domain robustness methods <cit.> are available but, in view of possibly non-global stability results for nonlinear perturbations <cit.>, an LK-functional-based approach is seen to be preferable. Complete-type and related LK functionals are predestined for that task since (like a Lyapunov function that is derived from a Lyapunov equation <cit.>) such kinds of LK functional can always be found, provided exponential stability holds in the nominal linear system <cit.>. The resulting robustness bounds characterize admissible terms g̃(x(t), x(t−h)) that do not compromise this stability. Unfortunately, the robustness bounds from complete-type LK functionals in <cit.> turn out to be rather conservative. Despite improvements, the same holds for <cit.>, which do not rely on complete-type but on related LK functionals. The present paper aims to introduce a class of quadratic LK functionals that can be used to derive less conservative robustness results. It intends to overcome some open questions concerning the concept of complete-type LK functionals, which are raised in the next section.
Important properties as well as existence conditions on the proposed LK functionals of robust type will be discussed. Being able to handle the more adapted defining equation of the functional numerically is mainly due to a recent approach, proposed in <cit.> for complete-type and related LK functionals. These numerical results, however, are beyond the scope of the present paper. To tackle the open questions identified in the next section, we resort to methods from the field of absolute stability <cit.>. Lyapunov-function-free approaches in that field have already in the very beginning been extended to time-delay systems <cit.>, <cit.>. See also <cit.> and <cit.>. However, concerning Lyapunov-function-based considerations, there is no satisfactory counterpart for time-delay systems in terms of a computable LK functional without adding conservativity. Actually, generalizations to abstract ordinary differential equations on Hilbert spaces have been developed in <cit.>, including time-delay systems when considering L_2×ℝ^n as state space. However, these results only provide an abstract existence statement for a functional on the given Hilbert space. Computable results have been derived in form of various linear matrix inequality (LMI) criteria for absolute stability problems <cit.>, where an explicit LK functional is obtained via semidefinite programming. However, the prescribed restricted ansatz of such LK functionals with a finite number of degrees of freedom is accompanied by added conservativity. Absolute stability can also be seen as a foundation for the modern framework of integral quadratic constraints <cit.>. However, in this framework, time delays are usually understood as being part of the perturbation <cit.>. Infinite-dimensional nominal systems have not been considered until very recently in <cit.>, where, however, the used sum of squares approach is still not free of conservativity. Like complete-type LK functionals, the approach in the present paper does not rely on semidefinite programming.

Structure. The paper is organized as follows. Sec. <ref> discusses some open problems which motivate in Sec. <ref> the definition of LK functionals of robust type. Sec. <ref> and Sec. <ref> are devoted to their monotonicity properties along solutions of the perturbed equation and their partial positive definiteness properties. A splitting approach from Sec. <ref> and an operator-based point of view from Sec. <ref> lay the foundation for the existence proof of the functionals in Sec. <ref>. Resulting robustness bounds are presented in Sec. <ref> before Sec. <ref> concludes the paper.

Notation. Continuous ℝ^n-valued functions on [a,b] are denoted by ϕ ∈ C([a,b],ℝ^n) or ϕ ∈ C, with ‖ϕ‖_C = max_{θ∈[a,b]} ‖ϕ(θ)‖; square integrable ℂ^n-valued functions by L_2([a,b],ℂ^n) or L_2; and AC stands for absolutely continuous. For x ∈ ℝ^n, ‖x‖ is an arbitrary norm on ℝ^n and ‖x‖_2 the Euclidean norm. Further notations are the zero vector 0_n ∈ ℝ^n, the zero function 0_n_[a,b] ∈ C([a,b],ℝ^n), the zero matrix 0_{n×m} ∈ ℝ^{n×m} (in short 0), and the identity matrix I_n ∈ ℝ^{n×n} (in short I). A matrix is said to be Hurwitz if all eigenvalues have negative real parts. The symmetric part of a matrix is sym(A) = ½(A + A^⊤). If A ∈ ℂ^{n×n}, then He(A) = ½(A + A^H). Moreover, μ_2(A) = λ_max(½(A^H + A)) describes the logarithmic norm w.r.t.
the spectral norm, which is ‖A‖_2 = √(λ_max(A^H A)). If Q = Q^H, λ_min(Q) and λ_max(Q) denote the smallest and largest eigenvalue. Positive definiteness is addressed by Q ≻ 0_{n×n} (Q ≽ 0_{n×n}), which implicitly requires Q = Q^H. See <cit.> for the formal definition of the derivative D_f^+V: C → ℝ of V: C → ℝ along solutions of ẋ(t) = f(x_t). The set of class-K functions is 𝒦 = {κ ∈ C([0,∞), ℝ_{≥0}): κ(0) = 0, strictly increasing}.

§ PROBLEM STATEMENT

In the following, some open questions are identified.

§.§ System class

We consider a retarded functional differential equation (RFDE) with a discrete delay h > 0,

ẋ(t) = A_0 x(t) + A_1 x(t−h) + g̃(x(t), x(t−h)),

decomposed into a linear part, ẋ(t) = f(x_t) with f(x_t) = A_0 x(t) + A_1 x(t−h), A_0, A_1 ∈ ℝ^{n×n}, and a possibly nonlinear term g(x_t) = g̃(x(t), x(t−h)). The RFDE state x_t ∈ C([−h,0],ℝ^n) at time t > 0,

x_t(θ) = x(t+θ), θ ∈ [−h,0],

describes the solution segment over the preceding delay interval. For notational compactness, the perturbation g: C([−h,0],ℝ^n) → ℝ^n; ϕ ↦ g(ϕ) is assumed to be time-invariant. Still, the results straightforwardly extend to a time-varying (t,ϕ) ↦ g(t,ϕ), see Rem. <ref>. For simplicity, g is assumed to be locally Lipschitz continuous, ensuring well-posedness of (<ref>), cf. <cit.>. Moreover, g(0_n_[−h,0]) = 0_n. Various scenarios give rise to (<ref>).

* The nonlinearity g(x_t) might represent the rest term from a linearization ẋ(t) = f(x_t), provided the RFDE right-hand side f + g is Fréchet differentiable (which is otherwise not presumed in (<ref>)).
* The nonlinearity g(x_t) might involve a saturation, which is frequently encountered if a delayed control law operates on a constrained input.
* Uncertainties Δ_0, Δ_1 ∈ ℝ^{n×n} added to A_0, A_1 in (<ref>) can be addressed by g(x_t) = Δ_0 x(t) + Δ_1 x(t−h).

The LK functionals V: C([−h,0],ℝ^n) → ℝ; ϕ ↦ V(ϕ) the present paper deals with are quadratic, time-invariant, and have the form

V(ϕ) = ϕ^⊤(0) P_xx ϕ(0) + 2 ∫_{−h}^0 ϕ^⊤(0) P_xz(η) ϕ(η) dη + ∫_{−h}^0 ∫_{−h}^0 ϕ^⊤(ξ) P_zz(ξ,η) ϕ(η) dη dξ + ∫_{−h}^0 ϕ^⊤(η) P_zz,diag(η) ϕ(η) dη,

with P_xx ∈ ℝ^{n×n}, P_xz ∈ L_2([−h,0],ℝ^{n×n}), P_zz ∈ L_2([−h,0]×[−h,0],ℝ^{n×n}), and P_zz,diag(η) ≡ P_zz,diag ∈ ℝ^{n×n}. The argument ϕ, to which the functional is applied, usually represents an RFDE state ϕ = x_t, cf. (<ref>). Therefore it is convenient to describe rather V(x_t), where ϕ(0) = x_t(0) = x(t) and ϕ(−h) = x(t−h), see (<ref>).

§.§ LK functionals of complete type

The structure (<ref>) (but with P_zz,diag(η) not being constant) is the one known from complete-type Lyapunov-Krasovskii functionals. These are motivated by the following template from delay-free ordinary differential equations (ODEs): For ẋ(t) = A x(t), with A ∈ ℝ^{n×n}, a Lyapunov function V(x) = x^⊤ P x can be found by specifying a desired Lyapunov function derivative D^+_{(ẋ=Ax)} V(x) = −x^⊤ Q x, with some freely chosen Q ≻ 0_{n×n}, and solving the Lyapunov equation PA + A^⊤P = −Q for P. Analogously, for time-delay systems, LK functionals of complete type arise from prescribing the desired LK functional derivative along solutions of the nominal linear part of (<ref>), ẋ(t) = A_0 x(t) + A_1 x(t−h) = f(x_t). This desired derivative is a priori set as

D_f^+ V(x_t) = −( x^⊤(t) Q_xx x(t) + x^⊤(t−h) Q_x̅x̅ x(t−h) + ∫_{−h}^0 x^⊤(t+η) Q_zz,diag x(t+η) dη ),

with freely chosen matrices Q_xx, Q_x̅x̅, Q_zz,diag ≻ 0_{n×n} (commonly named W_0, W_1, W_2). The LK functional that solves this task is (<ref>) with P_xz(η), P_zz(ξ,η), P_zz,diag(η), P_xx being expressible in terms of the so-called delay Lyapunov matrix function [denoted by U in <cit.>] Ψ: [−h,h] → ℝ^{n×n}, which depends on (<ref>) and Q_xx, Q_x̅x̅, Q_zz,diag.
See the monograph <cit.> for an in-depth treatment of complete-type LK functionals.

Consider again the delay-free template. It is well known that also for the perturbed ODE ẋ(t) = A x(t) + g(x(t)), the above derived Lyapunov function still gives rise to a negative definite derivative D^+_{(ẋ=Ax+g(x))} V(x) if the perturbation g(x) is only small enough. To be more precise, if it remains below the linear norm bound ‖g(x)‖ ≤ γ‖x‖ with γ < λ_min(Q)/(2λ_max(P)) <cit.>. Similarly, for complete-type LK functionals, it can be shown that the derivative D^+_{(f+g)} V(x_t) along solutions of the perturbed RFDE (<ref>) still satisfies the requirements of the classical LK theorem whenever g(x_t) satisfies the linear norm bound <cit.>

‖g(x_t)‖_2 ≤ γ ‖[x(t); x(t−h)]‖_2 with γ < min{ λ_min(Q_0)/(2+‖A_1‖_2), λ_min(Q_1)/(1+‖A_1‖_2), λ_min(Q_2)/‖A_1‖_2 } / λ_max(Ψ(0)).

§.§ Open questions

The approach described above raises several questions.

* How to choose the three matrices in (<ref>)? The question can even be widened: Why should the desirable LK-functional derivative be restricted to the structure in (<ref>)? Why not choose for instance

D_f^+ V(x_t) = −( x^⊤(t) Q_xx x(t) + x^⊤(t−h) Q_x̅x̅ x(t−h) + 2 x^⊤(t−h) Q_x̅x x(t) + 2 ∫_{−h}^0 x^⊤(t) Q_xz(η) x(t+η) dη + 2 ∫_{−h}^0 x^⊤(t−h) Q_x̅z(η) x(t+η) dη + ∫_{−h}^0 ∫_{−h}^0 x^⊤(t+ξ) Q_zz(ξ,η) x(t+η) dη dξ ),

for which as well a unique functional V with the structure (<ref>) exists, according to <cit.> (regarding all terms but Q_x̅x̅) and <cit.> (regarding Q_x̅x̅). Of course, then we are even more spoiled for choice as to how to specify the kernel functions in (<ref>).

* The derivation of the linear norm bound (<ref>) relies on many inequality estimations. Can the conservativity of γ be reduced?

* Can we incorporate some information on the structure of g from (<ref>) in the construction of V? It might be highly relevant for the achievable linear norm bound γ in (<ref>) whether the perturbation affects only certain components of ẋ, or whether it depends only on certain parts of x(t) and x(t−h).

[Motivation, structure] Consider the delay-free example ẋ = [[−0.1, 0; 0, −10]] x + g(x). Compared to g(x) = [[g_1(x_1); 0]], g(x) = [[0; g_2(x_2)]] is far less critical. The decisive structure information can be made visible in a fictive feedback law notation g(x) = −B a(Cx) = −[[0; 1]] a([[0, 1]] x) with a(ζ) = −g_2(ζ). Once the structure describing matrices B = [[0; 1]] and C = [[0, 1]] are fixed, only the specific restriction on a(ζ) is of interest.

* Why strive exclusively for a linear norm bound?

[Motivation, asymmetric bound] Consider the delay-free scalar ODE ẋ = −x − x^3. Actually, g(x) = −x^3 is even a helpful perturbation of ẋ = −x, not at all hampering the global asymptotic stability of the origin. However, a linear norm bound |g(x)| ≤ γ|x| cannot distinguish between g(x) = −x^3 and the globally harmful g(x) = x^3. Hence, only a conservative estimation of the domain of attraction becomes possible. However, from the one-dimensional phase portrait, it is obvious that stability is preserved for any g(x) = −a(x) with a(x) ≥ 0 for x > 0, and a(x) ≤ 0 for x < 0.

§ LK FUNCTIONALS OF ROBUST TYPE

The four questions raised above are tackled as follows.

* Prescribed nominal derivative: The LK-functional derivative D_f^+ V in the present paper in fact has the general structure (<ref>). However, no matrices and kernel functions must be chosen a priori. Instead, the desired derivative D_f^+ V implicitly depends on the solution V itself.
To be more precise, based on the first two terms in the LK functional (<ref>), the expression

v(ϕ) := P_xx ϕ(0) + ∫_{−h}^0 P_xz(η) ϕ(η) dη

will be encountered in the defining equation (<ref>) (respectively (<ref>)). That is why the latter might seem rather involved. However, in terms of the numerical approach from <cit.>, the Lyapunov equation required for complete-type LK functionals is simply replaced by an algebraic Riccati equation.

* Conservativity: The restrictive linear norm bound (<ref>) from complete-type LK functionals is required in order to guarantee nonpositivity of D^+_{(f+g)} V. Therefore, the defining equation (<ref>) proposed below is only constructed having the outcome of this condition in mind (see Sec. <ref>).

* Perturbation structure: To take the structure of the perturbation g(x_t) into account, we decompose

g(x_t) = −B a(𝒞 x_t)

into three mappings: Firstly, a linear operator 𝒞: C([−h,0],ℝ^n) → ℝ^p, to confine what the perturbation is based upon,

𝒞ϕ = [C_1 ϕ(−h); C_0 ϕ(0)], i.e., 𝒞 x_t = [C_1 x(t−h); C_0 x(t)],

with C_1 ∈ ℝ^{p_1×n}, C_0 ∈ ℝ^{p_0×n}, p_0 + p_1 = p (where C_0 or C_1 vanish if p_0 = 0 or p_1 = 0); secondly, a possibly nonlinear continuous map a: ℝ^p → ℝ^m with a(0_p) = 0_m; and, thirdly, a matrix B ∈ ℝ^{n×m} that indicates which components of ẋ(t) in (<ref>) are affected by the perturbation. The negative sign in (<ref>) intends to resemble a negative feedback. To obtain an unstructured restriction, e.g., a linear norm bound ‖g(x_t)‖_2 ≤ γ‖[x(t); x(t−h)]‖_2 on g like (<ref>), identity matrices B = C_0 = C_1 = I_n are still always a possible choice. Otherwise, the restriction only refers to the mapping a in the structured perturbation (<ref>), and thus (<ref>) is replaced by ‖a(𝒞 x_t)‖_2 ≤ γ‖𝒞 x_t‖_2.

* Perturbation restriction: There might be more appropriate sector bounds than (<ref>) on the image of the possibly nonlinear map ζ ↦ a(ζ) in (<ref>). Note that the linear norm bound (<ref>) can equivalently be written as a^⊤(𝒞x_t) a(𝒞x_t) ≤ γ^2 (𝒞x_t)^⊤(𝒞x_t), or w(𝒞x_t, a(𝒞x_t)) ≥ 0 with w(ζ,α) = γ^2 ζ^⊤ζ − α^⊤α. Besides of (<ref>), we also allow for more general indefinite quadratic forms. That is, we describe the family of perturbations via a restriction

w(𝒞x_t, a(𝒞x_t)) ≥ 0 with w(ζ,α) = ζ^⊤ Π_ζζ ζ + 2 ζ^⊤ Π_ζa α + α^⊤ Π_aa α,

where Π_ζζ = Π_ζζ^⊤ ∈ ℝ^{p×p}, Π_ζa ∈ ℝ^{p×m}, Π_aa = Π_aa^⊤ ∈ ℝ^{m×m}. These matrices can depend on a parameter γ like Π_ζζ = γ^2 I_p in (<ref>). The third matrix is henceforth required to be negative definite, Π_aa ≺ 0_{m×m}. The restriction w(ζ, a(ζ)) ≥ 0 given by (<ref>) should at least locally, i.e., for sufficiently small ζ = 𝒞x_t ∈ ℝ^p, be satisfied by the function ζ ↦ a(ζ) from the perturbation g(x_t) = −B a(𝒞x_t). Table <ref> provides an overview of possible choices for (Π_ζζ, Π_ζa, Π_aa) in (<ref>) and the associated permitted sector for a(ζ).

Consider the delay-free Example <ref>, where g(x) = −B a(Cx) with B = C = 1 and a(ζ) = ζ^3. The requirement (<ref>) forces us to use, instead of the pure passivity restriction w(ζ, a(ζ)) = ζ a(ζ) ≥ 0, rather w(ζ, a(ζ)) = ζ a(ζ) − ρ a^2(ζ) ≥ 0, cf. row (II|a) in Table <ref>.
Nevertheless, as ρ > 0 can be chosen arbitrarily small, still an arbitrarily large domain of validity can be obtained for the given perturbation a(ζ) = ζ^3 depicted in the right column of row (II|a).

A functional V: C([−h,0],ℝ^n) → ℝ_{≥0} that has the structure (<ref>) is called a Lyapunov-Krasovskii functional of robust type w.r.t.
* the nominal linear system ẋ(t) = f(x_t),
* the perturbation structure (B, 𝒞), and
* the perturbation restriction (Π_ζζ, Π_ζa, Π_aa)
if for all ϕ ∈ C([−h,0],ℝ^n) it holds that

D_f^+ V(ϕ) = −(𝒞ϕ)^⊤ Π_ζζ (𝒞ϕ) − [v^⊤(ϕ) B − (𝒞ϕ)^⊤ Π_ζa] (−Π_aa)^{−1} [B^⊤ v(ϕ) − Π_ζa^⊤ (𝒞ϕ)] − e(ϕ),

with v: C([−h,0],ℝ^n) → ℝ^n given by (<ref>) and e(ϕ) ≡ 0. Moreover, if e(ϕ) ≥ 0 is some arbitrary non-negative discrepancy between the left- and the remaining right-hand side in (<ref>), V is called an inequality-based Lyapunov-Krasovskii functional of robust type.

Note that the nominal linear system ẋ(t) = f(x_t) determines the left-hand side of (<ref>). The existence of a Lyapunov-Krasovskii functional of robust type (or of its generalization with e ≥ 0) is only ensured if the chosen perturbation restriction (Π_ζζ, Π_ζa, Π_aa) fits with the robustness of this nominal system. That is why it is reasonable to incorporate a parameter in Π_ζζ, Π_ζa, Π_aa that can be optimized. For instance, in case of the linear norm bound depicted in row (I|a) of Table <ref>, we are interested in a large parameter γ. Permissible coefficients γ are characterized by the fact that a solution V of (<ref>) exists, whereas for too large values of γ, no solution V of (<ref>) exists (thus, in case of a linear norm bound, solvability of (<ref>) below is required since (<ref>) simplifies to (<ref>)). Corresponding existence conditions will be derived from the Kalman-Yakubovich-Popov lemma in Sec. <ref>, leading, e.g., to an explicit bound on γ in Sec. <ref>. First, however, we are going to discuss properties of the functional that are decisive for its usability: monotonicity along solutions (Sec. <ref>) and partial positive definiteness (Sec. <ref>). The results will show that, if C_0 and C_1 in (<ref>) are chosen as full rank matrices, then (provided a fundamental requirement described in Sec. <ref> holds) an LK functional of robust type satisfies the conditions imposed by the classical LK theorem <cit.>. Choosing full-rank matrices for C_0 and C_1 in (<ref>) is always possible and thus gives a situation comparable to complete-type LK functionals. Nevertheless, a vanishing C_0 or C_1 is tempting if x(t) or x(t−h) do not occur in the perturbation, and less restrictive choices for C_0 and C_1 are natural if only few components of x(t) or x(t−h) affect g(x_t). These choices for C_0 and C_1 and the resulting weaker properties of the LK functional can also be expedient to prove stability, e.g., relying on LaSalle's invariance principle <cit.> or on other methods <cit.>.

§ THE LK-FUNCTIONAL DERIVATIVE ALONG SOLUTIONS OF THE PERTURBED RFDE

Along solutions of the unperturbed RFDE, the LK-functional derivative D_f^+ V(x_t) is explicitly given by the right-hand side of the defining equation (<ref>). Hence, D_f^+ V(x_t) is exactly known once V and thus v have been determined. However, rather of interest is D^+_{(f+g)} V(x_t). The following lemma is valid for any functional having the structure (<ref>), independently from the defining equation. When applied to complete-type functionals, it leads to a result known from <cit.>.

For a functional given by (<ref>), it holds that

D^+_{(f+g)} V(ϕ) = D_f^+ V(ϕ) + 2 v^⊤(ϕ) g(ϕ),

with v being defined in (<ref>).
For ϕ = x_t, the LK functional (<ref>) becomes

V(x_t) = x^⊤(t) P_xx x(t) + 2 x^⊤(t) ∫_{t−h}^t P_xz(η−t) x(η) dη + ∫_{t−h}^t ∫_{t−h}^t x^⊤(ξ) P_zz(ξ−t, η−t) x(η) dη dξ + ∫_{t−h}^t x^⊤(η) P_zz,diag(η−t) x(η) dη.

We compare D_f^+ V(x_t) with D^+_{(f+g)} V(x_t), i.e., the derivative along trajectories of ẋ(t) = f(x_t) with the derivative along trajectories of ẋ(t) = f(x_t) + g(x_t). A difference can only occur in terms that involve ẋ(t) = f(x_t) + g(x_t) in

D^+_{(f+g)} V(x_t) = 2 ẋ^⊤(t) P_xx x(t) + 2 ( ẋ^⊤(t) ∫_{t−h}^t P_xz(η−t) x(η) dη + x^⊤(t) d/dt ∫_{t−h}^t (…) dη ) + d/dt ∫_{t−h}^t ∫_{t−h}^t (…) dη dξ + d/dt ∫_{t−h}^t (…) dη.

Thus, the scalar difference is given by

2 g^⊤(x_t) ( P_xx x(t) + ∫_{t−h}^t P_xz(η−t) x(η) dη ) = 2 g^⊤(x_t) v(x_t).

The defining equation (<ref>) is tailored to the objective that the above derivative (<ref>) shall easily be proven to be non-positive. In fact, a desired result D^+_{(f+g)} V(ϕ) ≤ −ℓ(𝒞ϕ) with a chosen offset function ℓ can be prescribed. To this end, the perturbation restriction w(ζ, a(ζ)) ≥ 0 introduced in (<ref>) is strengthened to w(ζ, a(ζ)) ≥ ℓ(ζ) in (<ref>) below. Being chosen as some, not necessarily quadratic, nonnegative function with ℓ(0) = 0, a small offset ℓ comes along with a slight reduction of the permissible region for the graph of a. The latter is indicated by the turquoise shading in the last column of Table <ref>. Of course, a vanishing offset ℓ(ζ) ≡ 0 suffices if only D^+_{(f+g)} V(ϕ) ≤ 0 is desired, in which case (<ref>) is (<ref>).

Let V be an LK functional of robust type described by Def. <ref>. Then for any ϕ ∈ C for which the perturbation restriction (<ref>) is exceeded by a given offset function ℓ: ℝ^p → ℝ in the sense of

w(𝒞ϕ, a(𝒞ϕ)) ≥ ℓ(𝒞ϕ),

the LK-functional derivative along solutions of the perturbed equation satisfies

D^+_{(f+g)} V(ϕ) ≤ −ℓ(𝒞ϕ).

If e(ϕ) ≢ 0 in (<ref>), then D^+_{(f+g)} V(ϕ) ≤ −ℓ(𝒞ϕ) − e(ϕ).

We consider (<ref>) with (<ref>),

D^+_{(f+g)} V(ϕ) = D_f^+ V(ϕ) − 2 v^⊤(ϕ) B a(𝒞ϕ).

Note that the defining equation (<ref>) for D_f^+ V(ϕ) involves the term b̂^⊤ b̂ when abbreviating

b̂^⊤ := [v^⊤(ϕ) B − (𝒞ϕ)^⊤ Π_ζa] (−Π_aa)^{−1/2},

and thus (<ref>) (assuming e(ϕ) ≡ 0) can be written as

D^+_{(f+g)} V(ϕ) = −(𝒞ϕ)^⊤ Π_ζζ (𝒞ϕ) − b̂^⊤ b̂ − 2 v^⊤(ϕ) B a(𝒞ϕ).

Adding 0 = −‖b̂ + â‖_2^2 + b̂^⊤ b̂ + 2 b̂^⊤ â + â^⊤ â with â := (−Π_aa)^{1/2} a(𝒞ϕ), and noting that a part of the mixed term

2 b̂^⊤ â = 2 [v^⊤(ϕ) B − (𝒞ϕ)^⊤ Π_ζa] a(𝒞ϕ)

eliminates the perturbation term from (<ref>), we obtain

D^+_{(f+g)} V(ϕ) = −(𝒞ϕ)^⊤ Π_ζζ (𝒞ϕ) − ‖b̂ + â‖_2^2 − 2 (𝒞ϕ)^⊤ Π_ζa a(𝒞ϕ) + â^⊤ â.

Due to â^⊤ â = a^⊤(𝒞ϕ) (−Π_aa) a(𝒞ϕ), the resulting

D^+_{(f+g)} V(ϕ) = −(𝒞ϕ)^⊤ Π_ζζ (𝒞ϕ) − ‖b̂ + â‖_2^2 − 2 (𝒞ϕ)^⊤ Π_ζa a(𝒞ϕ) − a^⊤(𝒞ϕ) Π_aa a(𝒞ϕ)

explicitly involves the perturbation restriction (<ref>) in

D^+_{(f+g)} V(ϕ) = −w(𝒞ϕ, a(𝒞ϕ)) − ‖b̂ + â‖_2^2.

Hence, (<ref>) immediately leads to the estimation (<ref>). If e(ϕ) ≢ 0 in (<ref>), this term also occurs in (<ref>).

The results can straightforwardly be extended to a time-varying a(ζ,t). Requiring w(𝒞ϕ, a(𝒞ϕ, t)) ≥ ℓ(𝒞ϕ) leads as well to D^+_{(f+g)} V(ϕ) ≤ −ℓ(𝒞ϕ).

The following corollary focuses on a perturbation restriction in form of a linear norm bound, comparable to (<ref>). Assume γ > 0 is chosen such that an LK functional V(ϕ) having the form (<ref>) exists that solves

D_f^+ V(ϕ) = −γ^2 (𝒞ϕ)^⊤ 𝒞ϕ − v^⊤(ϕ) B B^⊤ v(ϕ),

where v is given by (<ref>) (see Cor. <ref> in Sec. <ref> for a respective range of γ). If

‖a(𝒞ϕ)‖_2 ≤ √(γ^2 − k_3) ‖𝒞ϕ‖_2,

with some k_3 ∈ [0, γ^2], then the derivative of V(ϕ) along solutions of the perturbed RFDE (<ref>) is non-positive with D^+_{(f+g)} V(ϕ) ≤ −k_3 ‖𝒞ϕ‖_2^2.

Consider (Π_ζζ, Π_ζa, Π_aa) from Tab. <ref>, row (I|a). The defining equation (<ref>) becomes (<ref>).
Choosing ℓ(𝒞x_t) = k_3 ‖𝒞x_t‖_2^2, the strengthened perturbation restriction (<ref>) becomes (<ref>), and (<ref>) is (<ref>).

A quadratic offset ℓ, as it has been chosen in the above corollary for simplicity, results in tightened sector slopes like (<ref>). However, in the example sketched in row (II|a) of Table <ref>, a(ζ) at ζ = 0 is already tangent to the original sector bound with ℓ(ζ) ≡ 0. Thus, tightened sector slopes are inappropriate in this case. Choosing rather ℓ(ζ) = κ(‖ζ‖) with a not specified class-K function κ ∈ 𝒦 is less demanding in terms of the perturbation restriction, and simultaneously amounts to what is usually the desired estimation for D^+_{(f+g)} V(ϕ). Provided ζ is considered on a bounded set, a tightening via (<ref>) only results in an open rather than a closed sector condition.

Let Ω ⊂ ℝ^p be a bounded set. Then the existence of a class-K function κ ∈ 𝒦 such that ∀ζ ∈ Ω: w(ζ, a(ζ)) ≥ κ(‖ζ‖) is equivalent to the open sector restriction w(ζ, a(ζ)) > 0 for all ζ ∈ Ω∖{0_p}.

Note that ζ ↦ β(ζ) = w(ζ, a(ζ)) is a continuous function β: ℝ^p → ℝ with β(0_p) = 0.

Finally, in terms of the classical LK theorem, we have the following conclusion: Choosing ℓ according to (<ref>),

D^+_{(f+g)} V(ϕ) ≤ −ℓ(𝒞ϕ) = −κ(‖[C_1 ϕ(−h); C_0 ϕ(0)]‖)

meets the well-known monotonicity condition ∃κ_3 ∈ 𝒦, ∀ϕ ∈ C: D^+_{(f+g)} V(ϕ) ≤ −κ_3(‖ϕ(0)‖), cf. <cit.>, whenever C_0 in (<ref>) is chosen as a full rank matrix.

§ POSITIVE-DEFINITENESS PROPERTIES

Like a Lyapunov function from a Lyapunov equation in the delay-free case, complete-type and related LK functionals satisfy the (partial) positive definiteness condition from the classical LK theorem if and only if the equilibrium of the nominal system is exponentially stable <cit.>. This nominal stability is proven beforehand by other means, e.g., relying on the characteristic equation. Similar holds for LK functionals of robust type. The nominal exponential stability is again already sufficient for the nonnegativity of V(ϕ), provided the perturbation sector described by (Π_ζζ, Π_ζa, Π_aa) contains the zero perturbation a(ζ) ≡ 0_m in its inner. The latter is the case for the linear norm bound, and, in fact, for any perturbation restriction with Π_ζζ ≻ 0 (choosing K = 0_{m×p} in (<ref>) below). However, the subsequent theorem can also be applied in more general cases (the equilibrium of the nominal system might even be unstable if a(ζ) ≡ 0_m does not belong to the sector). The only condition to be imposed is that a stabilizing linear control law a(ζ) = Kζ, K ∈ ℝ^{m×p}, can be found in the inner of the sector of allowed perturbations (see, e.g., Rem. <ref>). The latter is anyways a necessary condition for the simultaneous exponential stability under all considered perturbations since the linear control law is itself part of that perturbation family.

If there exists a K ∈ ℝ^{m×p} such that the linear control law a(𝒞x_t) = K𝒞x_t
(a) belongs to the interior of the considered perturbation family (<ref>), i.e., K satisfies Π_ζζ + Π_ζa K + K^⊤Π_ζa^⊤ + K^⊤Π_aa K ≻ 0_{p×p}, and
(b) renders the zero equilibrium of ẋ(t) = f(x_t) − BK𝒞x_t exponentially stable,
then

∃ k_{1,0}, k_{1,1} > 0, ∀ϕ ∈ C: k_{1,0} ‖C_0ϕ(0)‖^3/‖ϕ‖_C + k_{1,1} ‖C_1ϕ(0)‖^3/‖ϕ‖_C ≤ V(ϕ).

The argument in V(ϕ) is an arbitrary function ϕ ∈ C([−h,0],ℝ^n). We take the latter as an initial condition x_0 = ϕ for the stabilized problem (<ref>). Knowing that the resulting state x_t decays with increasing time t exponentially to 0_n_[−h,0], where V(0_n_[−h,0]) = 0, and knowing that V is quadratic, we can write V(x_0) = −(lim_{t_1→∞} V(x_{t_1}) − V(x_0)), where the limit vanishes due to the exponential decay. Hence,
V(x_0) = −∫_0^∞ D^+_{(f−BK𝒞)} V(x_t) dt ≥ ∫_0^∞ ℓ(𝒞x_t) dt by (<ref>), given (<ref>) holds for the involved a(𝒞x_t) = K𝒞x_t. Using α = Kζ in (<ref>) shows that ℓ in (<ref>) can be chosen as ℓ(ζ) = k‖ζ‖_2^2 with k = λ_min(Π_ζζ + Π_ζa K + K^⊤Π_ζa^⊤ + K^⊤Π_aa K) > 0 by (<ref>). Hence, (<ref>), where 𝒞 is defined in (<ref>), becomes

V(x_0) ≥ ∫_0^∞ k ‖𝒞x_t‖_2^2 dt = ∫_0^∞ k ‖C_0 x(t)‖_2^2 dt + ∫_{−h}^∞ k ‖C_1 x(t)‖_2^2 dt.

To make the dependency on x(0) visible (similar to <cit.>), we restrict for each term the integration to a small time interval where ‖C_j x(t)‖_2, j ∈ {0,1}, deviates less than half from its value at t = 0, and thus ‖C_j x(t)‖ ≥ ½‖C_j x(0)‖. Lemma <ref> expresses a time bound δ(α) that guarantees for t ∈ [0, δ(α)] an arbitrarily small deviation ‖x(t) − x(0)‖ ≤ α‖x_0‖_C relative to the initial function. Thus, ‖C_j x(t) − C_j x(0)‖ ≤ α‖C_j x_0‖_C, and by the reverse triangle inequality ‖C_j x(t)‖ ≥ ‖C_j x(0)‖ − α‖C_j x_0‖_C, if t ∈ [0, δ(α)]. Hence, by considering only t ∈ [0, δ(α_j)] with α_j = ½ ‖C_j x(0)‖/‖C_j x_0‖_C, we achieve ‖C_j x(t)‖ ≥ ½‖C_j x(0)‖, and (<ref>) becomes

V(x_0) ≥ ∫_0^{δ(α_0)} (k/4) ‖C_0 x(0)‖_2^2 dt + ∫_0^{δ(α_1)} (k/4) ‖C_1 x(0)‖_2^2 dt = δ(α_0) (k/4) ‖C_0 x(0)‖_2^2 + δ(α_1) (k/4) ‖C_1 x(0)‖_2^2.

According to Lemma <ref>, δ can be chosen as a linear function, δ(α) = mα, m > 0, yielding

V(x_0) ≥ (m‖C_0 x(0)‖)/(2‖C_0 x_0‖_C) · (k/4) ‖C_0 x(0)‖_2^2 + (m‖C_1 x(0)‖)/(2‖C_1 x_0‖_C) · (k/4) ‖C_1 x(0)‖_2^2,

where x(0) = x_0(0) = ϕ(0), since the initial function x_0 represents the used argument ϕ in V.

As a consequence, if C_0 is chosen as a full rank matrix or, more generally, if the combination [C_1; C_0] ∈ ℝ^{(p_0+p_1)×n} has rank n, then V(ϕ) shares the same partial positive definiteness properties as the LK functionals in <cit.>.

Let rk([C_0; C_1]) = n. If the conditions in Thm. <ref> hold, then (a) (local cubic bound) for any r > 0, there exists a k_1 > 0 such that for all ϕ ∈ C with ‖ϕ‖_C < r it holds that k_1‖ϕ(0)‖^3 ≤ V(ϕ); (b) (global quadratic bound on a Razumikhin-like set) there exists a k_1 > 0 such that for all ϕ ∈ C with ‖ϕ‖_C = ‖ϕ(0)‖ it holds that k_1‖ϕ(0)‖^2 ≤ V(ϕ).

Thm. <ref>, using that (<ref>) simplifies to ∃k_1 > 0: k_1‖ϕ(0)‖^3/‖ϕ‖_C ≤ V(ϕ) if [C_0; C_1] has full rank.

If C_1 has full rank (even if p_0 = 0 in (<ref>)), V(ϕ) shares the same partial positive definiteness properties as the LK functionals of complete type in <cit.>.

Let rk(C_1) = n. If the conditions in Thm. <ref> hold, then ∃k_1 > 0, ∀ϕ ∈ C: k_1‖ϕ(0)‖^2 ≤ V(ϕ).

The starting point is (<ref>). If also rk(C_0) = n, the proof proceeds analogously to <cit.>. Otherwise, note that ∫_{−h}^∞ k‖C_1 x(t)‖_2^2 dt ≥ ∫_0^∞ k‖C_1 x(t)‖_2^2 dt is a lower bound on (<ref>). Hence, we can use a convex combination of both (<ref>) and this lower bound on (<ref>) as a starting point, and the same arguments apply.

Concerning the classical LK theorem, we have the following conclusion: With a full-rank choice for C_1, an LK functional of robust type meets the well-known partial positive definiteness requirement ∃κ_1 ∈ 𝒦: κ_1(‖ϕ(0)‖) ≤ V(ϕ) globally. With [C_1; C_0] having full column rank, e.g., as rk(C_0) = n, it still meets this requirement on any arbitrarily large bounded set.

Note that, in (<ref>) and (<ref>), observability Gramians <cit.> can be recognized. Therefore, if C_0, C_1 do not have full rank but, by chance, they render (<ref>) observable in a certain sense (see <cit.>), then the above discussed lower bound in terms of ‖ϕ(0)‖ still exists.

§ A SPLITTING APPROACH

If C_1 is nonzero, then (𝒞ϕ)^⊤ Π_ζζ (𝒞ϕ) on the right-hand side of the defining equation (<ref>) explicitly depends on ϕ(−h), respectively x_t(−h) = x(t−h).
However, for the operator-theoretic treatment in the next two sections, as well as for the analysis of the numerical approach, a problem without such a dependency is more convenient. That is why we split the LK functional V(ϕ) into, firstly, a part V_0(ϕ) that results from a defining equation without a quadratic delayed term, and, secondly, a remaining part V_1(ϕ). Due to the following transformation, the derivation does not have to cope with mixed-term matrices Π_ζa, even if the original perturbation restriction belongs to row (II) or (III) in Table <ref>.

V(ϕ) is an LK functional of robust type w.r.t. ẋ(t) = A_0 x(t) + A_1 x(t−h) = f(x_t), (B,𝒞), and Π = (Π_ζζ, Π_ζa, Π_aa) if and only if V(ϕ) = V^I(ϕ) is an LK functional of robust type w.r.t. the transformed system

ẋ(t) = A_0^I x(t) + A_1^I x(t−h) = f^I(x_t),
A_0^I = A_0 − B(−Π_aa)^{−1} Π_ζa^⊤ [0_{p_1×n}; C_0],
A_1^I = A_1 − B(−Π_aa)^{−1} Π_ζa^⊤ [C_1; 0_{p_0×n}],

the original perturbation structure (B,𝒞), and the transformed perturbation restriction Π^I = (Π_ζζ^I, Π_ζa^I, Π_aa^I),

Π_ζζ^I = Π/Π_aa = Π_ζζ + Π_ζa(−Π_aa)^{−1}Π_ζa^⊤, Π_ζa^I = 0, and Π_aa^I = Π_aa.

Consider g(x_t) = −B(−Π_aa)^{−1}Π_ζa^⊤ 𝒞x_t. The defining equation (<ref>) is not altered if D_f^+V(ϕ) and (Π_ζζ, Π_ζa, Π_aa) are replaced by D^+_{(f+g)}V(ϕ) from Lemma <ref> and (Π_ζζ^I, Π_ζa^I, Π_aa^I).

Henceforth, we assume that (<ref>) has the block diagonal structure

Π_ζζ^I = [Π_ζζ^{I,11}, 0_{p_1×p_0}; 0_{p_0×p_1}, Π_ζζ^{I,00}].

Due to the block diagonal structure (<ref>), the first term in the defining equation (<ref>) for V^I(ϕ) = V(ϕ) is

(𝒞ϕ)^⊤ Π_ζζ^I (𝒞ϕ) = ϕ^⊤(−h) C_1^⊤ Π_ζζ^{I,11} C_1 ϕ(−h) + ϕ^⊤(0) C_0^⊤ Π_ζζ^{I,00} C_0 ϕ(0) =: ϕ^⊤(−h) Q_1 ϕ(−h) + ϕ^⊤(0) Q_0 ϕ(0).

For notational compactness, we set e(ϕ) ≡ 0 in (<ref>) (an extension to e(ϕ) ≢ 0 is straightforward).

Assume Π_ζζ^I in (<ref>) has the block diagonal structure (<ref>), giving rise to Q_0, Q_1 from (<ref>). Then any solution V of (<ref>) can be split into

V(x_t) = V_0(x_t) + V_1(x_t), V_1(x_t) = ∫_{t−h}^t x^⊤(η) Q_1 x(η) dη,

where V_0 satisfies the modified defining equation

D^+_{f^I} V_0(x_t) = −x^⊤(t)(Q_0 + Q_1)x(t) − v_0^⊤(x_t) B(−Π_aa)^{−1} B^⊤ v_0(x_t)

without a term x^⊤(t−h) Q_1 x(t−h), Q_1 ∈ ℝ^{n×n}.

According to Lemma <ref>, and with (<ref>), the defining equation (<ref>) for the overall functional V is

D^+_{f^I} V(x_t) = −x^⊤(t) Q_0 x(t) − v^⊤(x_t) B(−Π_aa)^{−1} B^⊤ v(x_t) − x^⊤(t−h) Q_1 x(t−h).

We intend to split the latter in a sum D^+_{f^I} V(x_t) = D^+_{f^I} V_0(x_t) + D^+_{f^I} V_1(x_t). Note that (<ref>) gives rise to

D^+_{f^I} V_1(x_t) = x^⊤(t) Q_1 x(t) − x^⊤(t−h) Q_1 x(t−h).

Thus, the remaining unknown V_0 must satisfy

D^+_{f^I} V_0(x_t) = −x^⊤(t) Q_0 x(t) − x^⊤(t) Q_1 x(t) − (v_0(x_t) + v_1(x_t))^⊤ B(−Π_aa)^{−1} B^⊤ (v_0(x_t) + v_1(x_t)),

where v(x_t) = v_0(x_t) + v_1(x_t) are the corresponding subfunctionals according to (<ref>). In V_1 from (<ref>), the kernel functions in terms of (<ref>) are P_xx = 0, P_xz(η) ≡ 0, P_zz(ξ,η) ≡ 0, P_zz,diag(η) ≡ Q_1, and thus (<ref>) yields v_1(x_t) ≡ 0. Consequently, (<ref>) becomes (<ref>).

§ OPERATOR-BASED DESCRIPTION

LK functionals of complete type can be written as a quadratic form in L_2×ℝ^n with an operator from an operator-valued Lyapunov equation <cit.>. Having the same structure, LK functionals of robust type can analogously be described. As will be shown below, only the Lyapunov equation that determines the involved operator is replaced by an algebraic Riccati equation. In view of the next section, we consider the Hilbert space M_2 = L_2([−h,0],ℂ^n) × ℂ^n over the field of complex numbers, with the inner product (we follow the convention to define inner products conjugate linear in the second argument; for instance, ⟨r_1, r_2⟩_{ℝ^n} = r_1^⊤ r_2 = r_2^H r_1)
⟨[ϕ_1; r_1], [ϕ_2; r_2]⟩_{M_2} = ∫_{−h}^0 (ϕ_2(θ))^H ϕ_1(θ) dθ + r_2^H r_1, ϕ_1, ϕ_2 ∈ L_2, r_1, r_2 ∈ ℂ^n.

For any ϕ ∈ C, [ϕ; ϕ(0)] ∈ C([−h,0],ℝ^n) × ℝ^n ⊂ L_2([−h,0],ℂ^n) × ℂ^n is an element of M_2. We focus on V_0 defined in the previous section. Compared to (<ref>), it does not show the term P_zz,diag, which is only due to V_1(ϕ), cf. <cit.>. Based on (<ref>), we can write V_0 as a quadratic form in M_2,

V_0(ϕ) = ⟨𝒫_0 [ϕ; ϕ(0)], [ϕ; ϕ(0)]⟩_{M_2},

with a self-adjoint operator 𝒫_0: M_2 → M_2 relying on suboperators 𝒫_zz: L_2 → L_2 and 𝒫_zx: ℂ^n → L_2,

𝒫_0 [ϕ; r] = [𝒫_zz ϕ + 𝒫_zx r; 𝒫_zx^* ϕ + P_xx r] = [ϕ̃; r̃], with ϕ̃(θ) = ∫_{−h}^0 P_zz(θ,η) ϕ(η) dη + (P_xz(θ))^H r, r̃ = ∫_{−h}^0 P_xz(η) ϕ(η) dη + P_xx r,

that incorporate the kernel functions from (<ref>). We are going to use the quadratic form (<ref>) in the defining equation (<ref>). Consider ẋ(t) = A_0^I x(t) + A_1^I x(t−h) = f^I(x_t) from (<ref>). The evolution of [x_t; x_t(0)] ∈ M_2 obeys the abstract ODE

d/dt [x_t; x_t(0)] = 𝒜 [x_t; x_t(0)],

where (denoting ϕ'(θ) = dϕ(θ)/dθ below) the operator 𝒜: D(𝒜) → M_2,

𝒜 [ϕ; r] = [ϕ'; A_0^I r + A_1^I ϕ(−h)], D(𝒜) = {[ϕ; r] ∈ M_2: r = ϕ(0), ϕ' ∈ L_2, ϕ ∈ AC},

is the infinitesimal generator of a C_0-semigroup <cit.>. Using that abstract ODE, the left-hand side of (<ref>) is

D^+_{f^I} V_0(ϕ) = ⟨𝒫_0 𝒜 [ϕ; ϕ(0)], [ϕ; ϕ(0)]⟩_{M_2} + ⟨𝒜^* 𝒫_0 [ϕ; ϕ(0)], [ϕ; ϕ(0)]⟩_{M_2}.

The right-hand side of the defining equation (<ref>) can also be expressed in terms of ψ = [ϕ; ϕ(0)] ∈ M_2. Altogether, if V_0(ϕ) solves (<ref>), then 𝒫_0 = 𝒫_0^* from (<ref>) solves the operator-valued algebraic Riccati equation (ARE)

⟨𝒫_0 𝒜ψ, ψ⟩_{M_2} + ⟨𝒜^* 𝒫_0 ψ, ψ⟩_{M_2} = −⟨𝒬ψ, ψ⟩_{M_2} − ⟨(−Π_aa)^{−1} ℬ^* 𝒫_0 ψ, ℬ^* 𝒫_0 ψ⟩_{ℂ^m} ∀ψ ∈ D(𝒜),

with 𝒬: M_2 → M_2 and ℬ: ℂ^m → M_2,

𝒬 [ϕ; r] = [0_{L_2}; (Q_0+Q_1)r], ℬu = [0_{L_2}; Bu]

(only due to the splitting from Sec. <ref> is 𝒬 a bounded operator). Conversely, the following lemma ensures that a solution 𝒫_0 of (<ref>) has the form given in (<ref>). The result is well known for the stabilizing solution of classical LQR algebraic Riccati equations <cit.> (in contrast to classical LQR problems with nonnegative costs, the present algebraic Riccati equation would arise in an indefinite LQR problem with input weight R_LQR = −Π_aa ≻ 0 but state weight Q_LQR = −(Q_0+Q_1) ≼ 0) and is analogously provable in the present case. As a consequence, V(ϕ) = V_0(ϕ) + V_1(ϕ) has the desired structure (<ref>).

Let a bounded self-adjoint operator 𝒫_0 be a solution of (<ref>) and assume 𝒜 generates an exponentially stable C_0-semigroup. (If not 𝒜 but only 𝒜^s := 𝒜 − ℬ𝒦 with 𝒦 = (−Π_aa)^{−1}ℬ^*(−𝒫_0), cf. Rem. <ref>, generates an exponentially stable C_0-semigroup, the statement still holds. In the proof, (<ref>) is first rewritten with 𝒜^s on the left-hand side, yielding the right-hand side −⟨Γ_1ψ, Γ_1ψ⟩_{ℂ^n} + ⟨Γ_2𝒫_0ψ, Γ_2𝒫_0ψ⟩_{ℂ^m}.) Then 𝒫_0 is described by (<ref>), with 𝒫_zz: L_2 → L_2 being an integral operator.

The right-hand side of (<ref>) can be written as

−⟨𝒬_lyap ψ, ψ⟩_{M_2} := −⟨Γ_1ψ, Γ_1ψ⟩_{ℂ^n} − ⟨Γ_2𝒫_0ψ, Γ_2𝒫_0ψ⟩_{ℂ^m},

where both Γ_1: M_2 → ℂ^n; Γ_1[ϕ; r] = (Q_0+Q_1)^{1/2} r and Γ_2: M_2 → ℂ^m; Γ_2[ϕ; r] = (−Π_aa)^{−1/2} B^⊤ r are finite rank operators. Therefore, the arguments from <cit.> apply.

Finally, the boundedness condition on V(ϕ) in C that is imposed by the classical LK theorem <cit.> is also ensured.

If V_0 is described by (<ref>) with a bounded operator 𝒫_0, then V = V_0 + V_1 with V_1 from (<ref>) satisfies ∃k_2 > 0, ∀ϕ ∈ C: V(ϕ) ≤ k_2 ‖ϕ‖_C^2.
By (<ref>), V_0(ϕ) ≤ ‖𝒫_0‖ ‖[ϕ; ϕ(0)]‖_{M_2}^2 = ‖𝒫_0‖ (∫_{−h}^0 ‖ϕ(θ)‖_2^2 dθ + ‖ϕ(0)‖_2^2) ≤ ‖𝒫_0‖ (h+1) ‖ϕ‖_{C,2}^2, where ‖ϕ‖_{C,2} = max_{θ∈[−h,0]} ‖ϕ(θ)‖_2. Moreover, in (<ref>), V_1(ϕ) ≤ h‖Q_1‖ ‖ϕ‖_{C,2}^2.

§ SOLVABILITY OF THE DEFINING EQUATION

We are going to analyze the solvability of the ARE (<ref>) and thus the existence of an LK functional of robust type. To this end, we consider a Kalman-Yakubovich-Popov (KYP) lemma for C_0-semigroups on infinite-dimensional Hilbert spaces that is found in <cit.>.

Let X, U be complex Hilbert spaces, 𝒜: D(𝒜) → X be the infinitesimal generator of a C_0-semigroup on X, let ℬ: U → X be a bounded linear operator, and

ℱ(x,u) = ⟨F_xx x, x⟩_X + 2 Re⟨F_ux x, u⟩_U + ⟨F_uu u, u⟩_U

be a continuous quadratic form in X × U. Assuming that 𝒜 does not have a spectrum in the neighborhood of the imaginary axis, define

α_3 = inf_{ω∈ℝ} inf_{u∈U} (1/‖u‖_U^2) ℱ((iω I_X − 𝒜)^{−1} ℬu, u).

Let (𝒜, ℬ) be stabilizable, i.e., there exists a bounded linear operator 𝒦_s: X → U such that 𝒜 − ℬ𝒦_s generates an exponentially stable C_0-semigroup. If α_3 > 0, then there exist bounded linear operators ℋ = ℋ^*: X → X and 𝒦: X → U such that

∀x ∈ D(𝒜), u ∈ U: 2 Re⟨𝒜x + ℬu, ℋx⟩_X + ℱ(x,u) = ‖F_uu^{1/2}(𝒦x + u)‖_U^2.

If α_3 < 0, then no such operators exist.

The existence of an LK functional of robust type can be deduced from the given statement due to the following equivalence. Let ψ = x ∈ X = M_2, u ∈ U = ℂ^m, and

ℱ(ψ,u) = −⟨𝒬ψ, ψ⟩_{M_2} + ⟨(−Π_aa)u, u⟩_{ℂ^m}.

Then the Luré equation (<ref>) with ℋ = −𝒫_0 is equivalent to the ARE (<ref>), and 𝒦 = −(−Π_aa)^{−1}ℬ^*𝒫_0.

With ℋ = −𝒫_0, x = ψ, and with ℱ from (<ref>), where F_uu = F_uu^H = −Π_aa, equation (<ref>) becomes

−2(Re⟨𝒫_0𝒜ψ, ψ⟩_{M_2} + Re⟨u, ℬ^*𝒫_0ψ⟩_{ℂ^m}) − ⟨𝒬ψ, ψ⟩_{M_2} + ⟨(−Π_aa)u, u⟩_{ℂ^m} = ⟨𝒦ψ, (−Π_aa)𝒦ψ⟩_{ℂ^m} + 2 Re⟨u, (−Π_aa)𝒦ψ⟩_{ℂ^m} + ⟨(−Π_aa)u, u⟩_{ℂ^m}.

Comparing the mixed terms in u and ψ gives (−Π_aa)𝒦 = −ℬ^*𝒫_0. The terms quadratic in ψ result in (<ref>).

The decisive element in Lemma <ref> is (<ref>). Since 𝒜 refers to (<ref>), the first argument of ℱ in (<ref>) refers to

(sI_{M_2} − 𝒜)^{−1}ℬ = [Φ; Φ(0)], with Φ(θ) = e^{sθ} H^I(s) and H^I(s) = (sI − A_0^I − e^{−sh} A_1^I)^{−1} B,

cf. <cit.>. Using (<ref>) and (<ref>), (<ref>) relies on

ℱ((iω I_{M_2} − 𝒜)^{−1}ℬu, u) = −u^H (H^I(iω))^H (Q_0+Q_1) H^I(iω) u − u^H Π_aa u.

However, rather than expressing the result in terms of H^I(s) from (<ref>), we intend to state the existence criterion in terms of the transfer function G(s) of the untransformed RFDE. Firstly, we incorporate how Q_0, Q_1 from (<ref>) depend on C_0, C_1 to make

G^I(s) = [C_1 e^{−sh}; C_0] (sI − A_0^I − e^{−sh} A_1^I)^{−1} B

in (<ref>) visible. The following equivalence holds:

(H^I(iω))^H (Q_0+Q_1) H^I(iω) = (G^I(iω))^H Π_ζζ^I G^I(iω).

Indeed,

(G^I(iω))^H Π_ζζ^I G^I(iω) = (H^I(iω))^H [C_1^H e^{iωh}, C_0^H] [Π_ζζ^{I,11}, 0; 0, Π_ζζ^{I,00}] [C_1 e^{−iωh}; C_0] H^I(iω) = (H^I(iω))^H (C_1^H Π_ζζ^{I,11} C_1 + C_0^H Π_ζζ^{I,00} C_0) H^I(iω).

Secondly, we undo the transformation from Lemma <ref>, to express (<ref>) in terms of the original transfer function

G(s) = C(s) Δ^{−1}(s) B, with C(s) = [C_1 e^{−sh}; C_0] and Δ(s) = sI − A_0 − e^{−sh} A_1.

Note that, due to Lemma <ref>, the negative of the overall right-hand side in (<ref>) is described by the right-hand side in Lem. <ref> below, where Π^I = [Π_ζζ^I, 0; 0, Π_aa], cf. (<ref>). In contrast, Π = [Π_ζζ, Π_ζa; Π_ζa^⊤, Π_aa].
Let Z = (−Π_aa)^{−1}Π_ζa^⊤ and assume det(I_m + Z G(iω)) ≠ 0 for all ω ∈ ℝ. Then, ∀v ∈ ℂ^m,

v^H [G(iω); −I]^H Π [G(iω); −I] v = u^H [G^I(iω); −I]^H Π^I [G^I(iω); −I] u,

where u = (I + Z G(iω)) v.

Consider the Aitken block diagonalization

v^H [G(iω); −I]^H [Π_ζζ, Π_ζa; Π_ζa^⊤, Π_aa] [G(iω); −I] v = v^H [G(iω); −I]^H T^H [Π/Π_aa, 0; 0, Π_aa] T [G(iω); −I] v,

where Π/Π_aa is the Schur complement (<ref>), relying on T = [I, 0; −Z, I] with Z = (−Π_aa)^{−1}Π_ζa^⊤. In

T [G(iω); −I] v = [G(iω)(I + Z G(iω))^{−1}; −I] (I + Z G(iω)) v,

with u = (I + Z G(iω)) v, the upper term (cf. a closed-loop transfer function with Z in the feedback path) simplifies to

G(iω)(I + Z G(iω))^{−1} = C(iω)Δ^{−1}(iω)B (I + Z C(iω)Δ^{−1}(iω)B)^{−1} = C(iω)Δ^{−1}(iω)(I + BZC(iω)Δ^{−1}(iω))^{−1} B = C(iω)(Δ(iω) + BZC(iω))^{−1} B = G^I(iω)

by (<ref>), (<ref>), and (<ref>) (using the push-through identity B(I+QB)^{−1} = (I+BQ)^{−1}B in the second step).

In the following theorem, we assume that the equilibrium of the transformed system ẋ(t) = f^I(x_t) from (<ref>) is exponentially stable. (Note that, in the case of a linear norm bound, the latter coincides with the nominal system, f^I = f.) In view of Lem. <ref>, weaker conditions (stabilizability, hyperbolicity) suffice, but, in view of Sec. <ref>, the simpler assumption is anyways desirable: Since f^I(x_t) = f(x_t) − BK𝒞x_t in (<ref>), it implies that K = (−Π_aa)^{−1}Π_ζa^⊤ stabilizes the nominal system. With K fulfilling (<ref>), this ensures that V satisfies the partial definiteness properties from Sec. <ref>, which would otherwise not have to be the case. Altogether, we obtain the following existence criterion for an LK functional of robust type w.r.t. the nominal system ẋ(t) = f(x_t) = A_0 x(t) + A_1 x(t−h), the perturbation structure (B,𝒞), and the perturbation restriction (Π_ζζ, Π_ζa, Π_aa).

Assume that ẋ(t) = f^I(x_t) defined in (<ref>) has an exponentially stable equilibrium and ẋ(t) = f(x_t) does not have characteristic roots on the imaginary axis. Moreover, let Π_ζζ^I from (<ref>) have the block diagonal structure (<ref>). Based on the transfer function (<ref>), consider

W(iω) = −[G(iω); −I_m]^H [Π_ζζ, Π_ζa; Π_ζa^⊤, Π_aa] [G(iω); −I_m].

If W(iω) ≻ 0_{m×m} for all ω ∈ ℝ, then an LK functional of robust type exists. If W(iω) ≺ 0_{m×m} for some ω ∈ ℝ, then no LK functional of robust type exists.

Since V_1 in (<ref>) always exists, only the existence of V_0 must be proven, which amounts to solvability of (<ref>) (by assumption, the stability condition on 𝒜 in Lemma <ref> applies). Thus, due to Lemma <ref>, the existence question is tackled by Lemma <ref>. Concerning the characteristic roots {λ_k}_k, we do not have to distinguish between ∀k: |Re(λ_k)| ≠ 0 required above and ∃ε > 0, ∀k: |Re(λ_k)| > ε required in Lemma <ref>, since eigenvalue chains in RFDEs satisfy Re(λ_k) → −∞ if |λ_k| → ∞, cf. <cit.> and <cit.>. By Lemma <ref> and <ref>, (<ref>) depends on (<ref>) according to

ℱ((iω I_{M_2} − 𝒜)^{−1}ℬu, u) = u^H (I + ZG(iω))^{−H} W(iω) (I + ZG(iω))^{−1} u.

Thus, the existence statement of Lemma <ref> relies on

α_3 = inf_ω λ_min((I + ZG(iω))^{−H} W(iω) (I + ZG(iω))^{−1}).

By (<ref>) and the assumptions, det((I + ZG(iω))^{−1}) ≠ 0 holds for all ω ∈ ℝ, and thus Sylvester's law of inertia applies. Since G is strictly proper, and due to (<ref>), lim_{|ω|→∞} λ_min(W(iω)) = λ_min(−Π_aa) > 0. Hence, positive definiteness of W(iω) is equivalent to α_3 > 0.

If uniqueness of V is desired, the considerations can be restricted to the unique so-called stabilizing solution 𝒫_0 of the ARE (<ref>), which according to <cit.> exists if and only if α_3 > 0, i.e., if and only if W(iω) ≻ 0_{m×m} for all ω ∈ ℝ. For the limit case α_3 = 0, which is not covered by Lem.
<ref>, results from finite dimensional AREs, cf. <cit.>, suggest that a corresponding almost stabilizing solution might also be unique.

§ ADMISSIBLE PERTURBATION RESTRICTIONS

The limiting factor on the admissible bounds in Table <ref> appears to be Thm. <ref>, which is decisive for the existence of an LK functional of robust type. Subsequently derived bounds are summarized in Table <ref>. The first one addresses the linear norm bound γ from Tab. <ref>, row (I|a).

Assume the nominal system ẋ(t) = A_0 x(t) + A_1 x(t−h) = f(x_t) has an exponentially stable zero equilibrium. Let G(s) be its transfer function (<ref>) incorporating the perturbation structure (B,𝒞). If γ < γ_max,

γ_max := 1/max_ω ‖G(iω)‖_2 = 1/‖G‖_∞,

then a solution V of (<ref>) exists.

Due to f^I(x_t) = f(x_t), the assumptions in Thm. <ref> are satisfied. With (Π_ζζ, Π_ζa, Π_aa) from row (I|a) in Tab. <ref>, (<ref>) becomes

W(iω) = −(γ^2 (G(iω))^H G(iω) − I_m), λ_min(W(iω)) = −γ^2 λ_max((G(iω))^H G(iω)) + 1 > 0.

Moreover, the peak gain max_ω √(λ_max((G(iω))^H G(iω))) = max_ω ‖G(iω)‖_2 coincides with the H_∞-norm since G ∈ H_∞ by the assumed exponential stability.

Note that (<ref>) coincides with the complex stability radius, cf. <cit.>. Another way to read the above corollary is that the product of L_2-gains γ‖G‖_∞ shall be smaller than one, mirroring the small-gain theorem, cf. <cit.>. Similarly (both refer to interconnected dissipative elements <cit.>, with Π_ζζ, Π_ζa, Π_aa describing the QSR-dissipativity of ζ ↦ a(ζ)), for row (II|a) in Tab. <ref>, the following corollary mirrors a passivity theorem: the excess ρ of output passivity in the perturbation shall be larger than the shortage of input passivity in the nominal system. The latter is measured by the input passivity index ν(G) ≤ 0, cf. <cit.>.

Let G(s) be the transfer function (<ref>) of the nominal system ẋ(t) = A_0 x(t) + A_1 x(t−h) with (B,𝒞). Consider (Π_ζζ, Π_ζa, Π_aa) from row (II|a) in Tab. <ref> with ρ > ρ_min,

ρ_min := max_ω μ_2(−G(iω)) = −ν(G),

where μ_2(M) = λ_max(½(M^H + M)) describes the logarithmic norm of a given matrix M ∈ ℂ^{p×p}. Moreover, assume that

ẋ(t) = A_0 x(t) + A_1 x(t−h) − (1/(2ρ)) B [C_1 x(t−h); C_0 x(t)] = f^I(x_t)

has an exponentially stable zero equilibrium and the nominal system has no characteristic roots on the imaginary axis. Then a solution V of (<ref>) exists.

Consider (<ref>) with Tab. <ref>, row (II|a),

W(iω) = −(−½((G(iω))^H + G(iω)) − ρ I), λ_min(W(iω)) = −λ_max(He(−G(iω))) + ρ > 0.

If p = m = 1, we can use a Nyquist plot of G(iω). In terms of the thus relevant real and imaginary parts of G(iω), we rewrite (<ref>) as

W(iω) = −[Re(G(iω)); −I_m; Im(G(iω))]^⊤ [Π_ζζ, Π_ζa, 0_{p×m}; Π_ζa^⊤, Π_aa, 0_{m×m}; 0_{m×p}, 0_{p×m}, Π_ζζ] [Re(G(iω)); −I_m; Im(G(iω))].

The general sector perturbation restriction from row (III) in Tab. <ref> results in the Nyquist plot restriction that is known from the circle criterion, cf. <cit.>. We do not recap the latter. Still, to see the circle in (<ref>), note that given some radius r > 0 and some shift x_δ ∈ ℝ,

∓((x − x_δ)^2 + y^2 − r^2) = ∓[x; −1; y]^⊤ [1, x_δ, 0; x_δ, x_δ^2 − r^2, 0; 0, 0, 1] [x; −1; y] ≥ 0

describes a disc (−) or the complement of a disc (+) in the (x,y) plane. Analogously, if p = m = 1, an open disc or the interior of its complement in the (Re(G(iω)), Im(G(iω))) plane is described by W(iω) > 0 from (<ref>) with Tab. <ref>, row (III|c). See the plots in Tab. <ref>. Rather than plotting the Nyquist curve, we preferably use transformations that eliminate either Π_ζa (transformation I, Lemma <ref>) or Π_ζζ (e.g., transformation II below) to obtain numerically traceable results in the manner of Cor.
<ref> or <ref>, not restricted to p = m = 1.

V(ϕ) is an LK functional of robust type w.r.t. ẋ(t) = A_0 x(t) + A_1 x(t−h), (B,𝒞), and (Π_ζζ, Π_ζa, Π_aa) from row (III) in Tab. <ref> with K_1 = k_1 I_m and K_2 = k_2 I_m, k_2 > k_1 ∈ ℝ, i.e.,

Π_ζζ = −k_1 k_2 I_m, 2Π_ζa = (k_1 + k_2) I_m, Π_aa = −I_m,

if and only if V(ϕ) = V^II(ϕ) is an LK functional of robust type w.r.t.

ẋ(t) = A_0^II x(t) + A_1^II x(t−h) =: f^II(x_t), A_0^II = A_0 − k_2 B [0_{p_1×n}; C_0], A_1^II = A_1 − k_2 B [C_1; 0_{p_0×n}],

the original (B,𝒞), and the transformed

Π_ζζ^II = 0_{m×m}, 2Π_ζa^II = −(k_2 − k_1) I_m, Π_aa^II = −I_m.

The defining equation (<ref>) is not altered if, instead of D_f^+ V(ϕ) and (Π_ζζ, Π_ζa, Π_aa), the result for D^+_{(f+g)} V(ϕ) from Lemma <ref> with g(x_t) = −k_2 B𝒞x_t and (Π_ζζ^II, Π_ζa^II, Π_aa^II) are used.

We derive a corollary expedient for saturation nonlinearities, where the upper sector bound is usually fixed and the best possible lower sector bound is of interest.

Consider Tab. <ref> (III) with K_2 = k_2 I_m. Let K_1 = k_1 I_m and k_1 > k_{1,min},

k_{1,min} := k_2 − 1/max_ω μ_2(G^II(iω)) = k_2 + 1/ν(−G^II),

where G^II(s) = [C_1 e^{−sh}; C_0] (sI − A_0^II − e^{−sh} A_1^II)^{−1} B and where A_0^II and A_1^II are defined in Lemma <ref>. Assume

ẋ(t) = A_0 x(t) + A_1 x(t−h) − ((k_1+k_2)/2) B [C_1 x(t−h); C_0 x(t)] = f^I(x_t)

has an exponentially stable zero equilibrium and ẋ(t) = A_0^II x(t) + A_1^II x(t−h) = f^II(x_t) has no characteristic roots on the imaginary axis. Then a solution V of (<ref>) exists.

With (Π_ζζ^II, Π_ζa^II, Π_aa^II) from Lem. <ref>, (<ref>) is

W(iω) = −(½(k_2 − k_1)((G^II(iω))^H + G^II(iω)) − I_m), λ_min(W(iω)) = −(k_2 − k_1)λ_max(He(G^II(iω))) + 1 > 0.

§ EXAMPLE

The following example system (<ref>) supplements in <cit.> the introduction of complete-type LK functionals. With a vanishing perturbation g(x_t) ≡ 0_n,

ẋ(t) = [[0, 1; −1, −2]] x(t) + [[0, 0; −1, 1]] x(t−h) + g(x_t)

can be shown to have an exponentially stable equilibrium for any h > 0, cf. <cit.>. Henceforth, let h = 1.

(i) Table <ref> gives a linear norm bound on unstructured perturbations g(x_t) = −a([x(t−1); x(t)]) and on perturbations g(x_t) = [0; g_2(x_t)] = −[0; 1] a([x(t−1); x(t)]) that only affect the second component, cf. (<ref>). The latter are quite plausible if (<ref>) represents the state space description of a second order system for x_1. According to (<ref>), γ_max = 1/‖G‖_∞. See, e.g., <cit.>, <cit.> for a numerical implementation of

‖G‖_∞ = max_{ω∈ℝ} ‖[C_1 e^{−iω}; C_0] (iω I_n − A_0 − A_1 e^{−iω})^{−1} B‖_2.

Cor. <ref> ensures that a solution V(ϕ) of (<ref>) exists if γ < γ_max. Due to the full-rank choice C_0 = C_1 = I, such an LK functional of robust type V(ϕ) satisfies the conditions of the classical LK theorem <cit.>:
* monotonicity by Cor. <ref> (k_3 = γ^2 − γ̃^2 from Tab. <ref> with γ̃ < γ < γ_max, where γ̃, γ are arbitrarily close to γ_max),
* partial positive definiteness by Thm. <ref> (with K = 0_{n×n} in (<ref>), (<ref>)), and boundedness by Lem. <ref>.
Thus, for the family of accordingly perturbed systems, V(ϕ) is a common LK functional that proves global asymptotic stability of the zero equilibrium.

(ii) Uncertainties Δ_0, Δ_1 ∈ ℝ^{n×n} in the coefficient matrices of (<ref>) amount to g(x_t) = Δ_0 x(t) + Δ_1 x(t−1), or

g(x_t) = [(1/c_1)Δ_1, (1/c_0)Δ_0] [c_1 x(t−1); c_0 x(t)] =: −a([c_1 x(t−1); c_0 x(t)]),

for any c_0, c_1 > 0. Since ‖a(ζ)‖_2 ≤ r(Δ_0, Δ_1) ‖ζ‖_2,

r(Δ_0, Δ_1) := √((1/c_1^2)‖Δ_1‖_2^2 + (1/c_0^2)‖Δ_0‖_2^2),

the linear norm bound is satisfied if r(Δ_0, Δ_1) < γ_max with γ_max from (<ref>), choosing B = I, C_0 = c_0 I, C_1 = c_1 I (equivalent to B = C_0 = C_1 = I_n combined with Table <ref>, row (I|b), L = [[c_1 I_n, 0; 0, c_0 I_n]], W = I_n). See Table <ref>.
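For reference, ‖G‖_∞ in the example can be estimated by plain frequency gridding. The sketch below (ours, not the implementation of the cited references; the grid range and resolution are ad hoc, so the result is only an approximation of the true peak) evaluates γ_max = 1/‖G‖_∞ for the unstructured choice B = C_0 = C_1 = I and for the structured choice B = [0; 1].

```python
# Rough grid-based estimate of gamma_max = 1 / max_w ||G(iw)||_2 (illustrative).
import numpy as np

A0 = np.array([[0.0, 1.0], [-1.0, -2.0]])
A1 = np.array([[0.0, 0.0], [-1.0, 1.0]])   # delay h = 1
I2 = np.eye(2)

def G(w, B, C0, C1):
    """[C1*e^{-iw}; C0] (iw I - A0 - A1 e^{-iw})^{-1} B for h = 1."""
    Delta = 1j * w * I2 - A0 - A1 * np.exp(-1j * w)
    C = np.vstack([C1 * np.exp(-1j * w), C0])
    return C @ np.linalg.solve(Delta, B)

def gamma_max(B, C0=I2, C1=I2, w_grid=np.linspace(0.0, 50.0, 20001)):
    # ||G(-iw)|| = ||G(iw)|| for real system matrices, so w >= 0 suffices.
    peak = max(np.linalg.norm(G(w, B, C0, C1), 2) for w in w_grid)
    return 1.0 / peak

print(gamma_max(I2))                          # unstructured case
print(gamma_max(np.array([[0.0], [1.0]])))    # perturbation on x2 only
```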
§ CONCLUSION Regarding an unstructured problem (i.e., (<ref>)-(<ref>) with C_0=C_1=B=I_n) and regarding a linear norm bound as perturbation restriction, the proposed concept has many similarities to the concept of complete-type LK functionals. In particular, it shares what is commonly considered as the main advantage of the latter: If the equilibrium is exponentially stable, there always exists an LK functional of robust type that satisfies the positive definiteness and monotonicity condition imposed by the classical LK theorem. Thus, the approach is expedient even if the delay is arbitrarily close to a delay value at which the exponential stability is lost. At the same time, the resulting linear norm bound on admissible perturbations is significantly less conservative compared to the one derived from a complete-type LK functional. Furthermore, the concept offers additional freedom in both incorporating the structure of the perturbation and imposing a perturbation restriction in form of an arbitrary sector, see Table <ref>. Being thus more adaptable to the problem at hand is usually rewarded by an additional reduction of conservativity. Frequency-domain-based robustness methods commonly only address global stability – not being applicable if the perturbation does not satisfy the sector condition globally. In contrast, having an LK functional at hand can in further steps render regional stability results with an estimation of the domain of attraction possible. To this end, the explicit evaluability of the functional, respectively a numerical solution of the defining equation (<ref>) or (<ref>), and non-conservative bounds on the functional are required. Both will be obtained via the numerical approach from <cit.>, preferably relying on the Legendre-tau method, which is already known for convincing results on Riccati equations from classical LQR problems <cit.>. Incorporating these additional steps, the presented approach can be expedient even if a nonlinear perturbation resides only locally within the considered perturbation restriction. § APPENDIX Solutions x of (<ref>) satisfy ∀α≥ 0, ∃δ(α)≥ 0, ∀ t_1≥ t_0≥ 0: t_1-t_0≤δ(α) ⟹ ‖x(t_1)-x(t_0)‖≤α‖x_t_0‖_C, and δ:ℝ_≥ 0→ℝ_≥ 0; α↦δ(α) can be chosen linear. Let (<ref>) be ẋ(t)=Ã_0 x(t)+Ã_1 x(t-h). Then ‖x(t_1)-x(t_0)‖ = ‖∫_t_0^t_1(Ã_0 x(t) + Ã_1 x(t-h)) d t‖ ≤ (t_1-t_0)(‖Ã_0‖+‖Ã_1‖) max_t∈[t_0-h,t_1] ‖x(t)‖. Due to the uniform stability, ∀ε_s> 0, ∃δ_s(ε_s)>0, ∀ t_0≥ 0: ‖ϕ‖_C≤δ_s(ε_s) ⟹ ∀ t≥ t_0-h: ‖x(t;t_0,ϕ)‖≤ε_s. Thus, ψ=(1/b)ϕ with b:=‖ϕ‖_C/δ_s(1), i.e., ‖ψ‖_C=δ_s(1), implies ‖x(t;t_0,ψ)‖≤ 1. By linearity, x(t;t_0,ψ)=(1/b) x(t;t_0,ϕ). Hence, ∀ t≥ t_0-h: ‖x(t;t_0,ϕ)‖≤ b=‖ϕ‖_C/δ_s(1) with ϕ=x_t_0. Consequently, we achieve (<ref>) by choosing t_1-t_0≤δ(α):=δ_s(1)α/(‖Ã_0‖+‖Ã_1‖) in (<ref>).
http://arxiv.org/abs/2312.16738v1
{ "authors": [ "Tessina H. Scholl" ], "categories": [ "eess.SY", "cs.SY" ], "primary_category": "eess.SY", "published": "20231227225241", "title": "Lyapunov-Krasovskii Functionals of Robust Type for the Stability Analysis in Time-Delay Systems" }
Periodically driven four-dimensional topological insulator with tunable second Chern number Bin Zhou January 14, 2024 =========================================================================================== Two k-ary Fibonacci recurrences are a_k(n) = a_k(n-1) + k · a_k(n-2) and b_k(n) = k · b_k(n-1) + b_k(n-2). We provide a simple proof that a_k(n) is the number of k-regular words over [n] = {1,2,…,n} that avoid patterns {121, 123, 132, 213} when using base cases a_k(0) = a_k(1) = 1 for any k ≥ 1. This was previously proven by Kuba and Panholzer in the context of Wilf-equivalence for restricted Stirling permutations, and it creates Simion and Schmidt's classic result on the Fibonacci sequence when k=1, and the Jacobsthal sequence when k=2. We complement this theorem by proving that b_k(n) is the number of k-regular words over [n] that avoid {122, 213} with b_k(0) = b_k(1) = 1 for any k ≥ 2. Finally, we conjecture that |[2]n121, 123, 132, 213| = a_1(n)^2 for n ≥ 0. That is, vincularizing the Stirling pattern in Kuba and Panholzer's Jacobsthal result gives the Fibonacci-squared numbers. § INTRODUCTION The Fibonacci sequence is arguably the most famous integer sequence in mathematics, and the term generalized Fibonacci sequence has been used to describe an increasingly wide variety of related sequences. Here we consider two families of generalizations involving a second parameter k. The Fibonacci-k numbers are a_k(n) = a_k(n-1) + k · a_k(n-2) with a_k(0) = a_k(1) = 1. The k-Fibonacci numbers are b_k(n) = k · b_k(n-1) + b_k(n-2) with b_k(0) = b_k(1) = 1. We provide pattern avoidance results for the sequences {a_k(n)}_n ≥ 0 and {b_k(n)}_n ≥ 0. The objects are k-regular words meaning that each symbol in [n] = {1,2,…,n} appears k times, and subwords must avoid relative orders equal to one of the patterns. We let [k]n be the set of k-regular words over [n] and [k]nπ_1, π_2, …, π_m⊆[k]n be the subset that avoids all π_i patterns. We focus on two families of words. The Fibonacci-k words of length kn are the members of [k]n121, 123, 132, 213⊆[k]n. The k-Fibonacci words of length kn are the members of [k]n122, 213⊆[k]n. We provide a simple proof that the Fibonacci-k words are counted by the Fibonacci-k numbers: a_k(n) = |[k]n121,123,132,213| for all k ≥ 1 and n ≥ 0. For example, the Fibonacci-2 numbers a_2(n) create Oeis[See the Online Encyclopedia of Integer Sequences (Oeis) <cit.> for all sequence references.] A001045: 1, 1, 3, 5, 11, 21, 43, 85, 171, …, also known as the Jacobsthal sequence. Its first four terms count the Fibonacci-2 words in (<ref>)–(<ref>). 1 = |[2]0121,123,132,213| = |{ϵ}|, 1 = |[2]1121,123,132,213| = |{11}|, 3 = |[2]2121,123,132,213| = |{1122, 2112, 2211}|, 5 = |[2]3121,123,132,213| = |{223311, 322311, 331122, 332112, 332211}|. Note that 233211 ∈[2]3 since it is a 2-regular word over [3]. However, 233211 ∉[2]3121,123,132,213 since its underlined subword 232 is order isomorphic to the pattern 121. Hence, it does not appear in (<ref>). Kuba and Panholzer previously proved Theorem <ref> in a broader context (see Section <ref>). We complement their result for Fibonacci-k words with a new pattern avoidance result for k-Fibonacci words. More specifically, we prove that the k-Fibonacci words are counted by the k-Fibonacci numbers for k ≥ 2: b_k(n) = |[k]n122, 213| for all k ≥ 2 and n ≥ 0. For example, the 2-Fibonacci number sequence {b_2(n)}_n ≥ 0 is 1, 1, 3, 7, 17, 41, 99, 239, 577, … (A001333).
Its first four terms count the 2-Fibonacci words over [n] for n = 0,1,2,3 as shown in (<ref>)–(<ref>). 1 = |[2]0122,213| = |{ϵ}|, 1 = |[2]1122,213| = |{11}|, 3 = |[2]2122,213| = |{1221, 2121, 2211}|, 7 = |[2]3122,213| = |{233121, 233211, 323121, 323211, 331221, 332121, 332211}|. Figure <ref> provides Fibonacci words of both types for k=3, and illustrates why Theorems <ref>–<ref> hold. Readers who are ready to delve into the proofs of our two main results can safely skip ahead to Section <ref>. The remainder of this introductory section further contextualizes Theorems <ref>–<ref> and adds a conjecture. §.§ Classic Pattern Avoidance Results: Fibonacci and Catalan Theorem <ref> provides a k-ary generalization of the classic pattern avoidance result by Simion and Schmidt involving permutations and the Fibonacci numbers. Their statement of the result is provided below. For every n ≥ 1, |n123, 132, 213| = F_n+1, where {F_n}_n ≥ 0 is the Fibonacci sequence, initialized by F_0 = 0, F_1 = 1. When comparing Theorems <ref> and <ref>, note that permutations are 1-regular words, and they all avoid 121. Thus, n123, 132, 213 = [1]n123, 132, 213 = [1]n121, 123, 132, 213. In other words, the 121 pattern in Theorem <ref> is hidden in the special case of k=1 in Theorem <ref>. Also note that (<ref>) has off-by-one indexing (i.e., subscript n versus n+1) and it holds for n=0 despite the stated n ≥ 1 bound. An even earlier result on pattern avoiding permutations is stated below. For every n ≥ 0, |n123| = C_n and |n213| = C_n, where {C_n}_n ≥ 0 is the Catalan sequence starting with C_0 = 1 and C_1 = 1. Note that permutations also avoid 122, so n213 = [1]n122,213. For this reason, when k=1 the patterns avoided in Theorem <ref> are equivalent to those in Theorem <ref>. However, it is important to note that Theorem <ref> is not a special case of Theorem <ref>, as our new result only applies when k ≥ 2. This makes sense as the Catalan numbers do not follow a simple two term recurrence like a_k(n) or b_k(n). Likewise, the 1-Fibonacci words [1]n122, 213 = n213 are more accurately described as (213)-Catalan words. §.§ Base Cases: (0,1)-Based or (1,1)-Based Simion and Schmidt's result uses the customary base cases of F_0 = 0 and F_1 = 1 for Fibonacci numbers. In contrast, we use base cases of a_k(0) = b_k(0) = 1 and a_k(1) = b_k(1) = 1 in our k-ary generalizations. The distinction between (0,1)-based and (1,1)-based sequences can be dismissed as cosmetic for the Fibonacci-k recurrence a_k(n) = a_k(n-1) + k · a_k(n-2) since the resulting sequences 0,1,1,1+k,… and 1,1,1+k,… coincide from the first 1. However, the two sequences are shifted by one index relative to each other, which is important if we want to be sensitive to the off-by-one indexing issue found in (<ref>). The n=0 term is critical to the k-Fibonacci recurrence b_k(n) = k · b_k(n-1) + b_k(n-2) if k ≥ 2. This is because the (0,1)-based sequence starts 0,1,k,k^2+1 and the (1,1)-based sequence starts 1,1,k+1,k^2+k+1. In particular, the Pell numbers P(n) = 2 · P(n-1) + P(n-2) follow the b_2(n) recurrence, but the Pell sequence is not covered by Theorem <ref> as it is (0,1)-based with P(0) = 0 and P(1) = 1. With apologies to the Pell numbers, we suggest that the (1,1)-based generalizations are more natural, at least in the context of pattern avoidance. This is due to the unique word of length n=0 that avoids all patterns, namely the empty word ϵ.
Thus, an n=0 term of 0 mistakenly treats {ϵ} as the empty set ∅.§.§ Four Parameter Generalizations beyond k-Fibonacci and Fibonacci-k As mentioned earlier, there are many different notions of generalized Fibonacci numbers. To better discuss similar sequences, it is helpful to define the (b_0, b_1)-based k_1-Fibonacci-k_2 recurrence as follows: f(n) = k_1 · f(n-1) + k_2 · f(n-2) with f(0) = b_0 and f(1) = b_1. The resulting sequence {f(n)}_n ≥ 0 is the (b_0, b_1)-based k_1-Fibonacci-k_2 sequence. When using these terms, we omit k_1 and/or k_2 when they are equal to 1. Thus, our Fibonacci-k numbers can be described as the (1,1)-based Fibonacci-k numbers, or as shifted (0,1)-based Fibonacci-k numbers (as per Section <ref>). Likewise, our k-Fibonacci numbers can be described as (1,1)-based k-Fibonacci numbers outside of this paper, and they are not equivalent to (0,1)-based k-Fibonacci numbers when k ≥ 2. Table <ref> collects previously studied (b_0, b_1)-based k_1-Fibonacci-k_2 sequences that are not covered by our work. For example, the aforementioned Pell sequence is the (0,1)-based 2-Fibonacci sequence using our terminology. More generally, the kth-Fibonacci sequences[Somewhat confusingly, the title of this well-cited paper is On the Fibonacci k-numbers and it contains a section titled k-Fibonacci numbers, but the term used throughout the paper is kth Fibonacci sequence. We use the latter in Table <ref>.] in <cit.> are the (0,1)-based k-Fibonacci sequences from the previous paragraph. When preparing Table <ref> we found the summary in <cit.> to be very helpful. Also note that the same four parameters are used in <cit.> with their generalized Fibonacci sequences being (a,b)-based p-Fibonacci-q sequences.§.§ Pattern Avoidance with Regular Words including Stirling Words A Stirling permutation is typically defined as a word with two copies of each value in [n] and the property that for each i ∈ [n], the values between the two copies of i are larger than i. In other words, it is a 2-regular word over [n] that avoids 212. The Stirling permutations with n=3 are given in (<ref>). [2]3212 = {112233, 112332, 113322, 122133, 122331, 123321, 133122, 133221, 221133, 221331, 223311, 233211, 331122, 331221, 332211} Famously, <cit.> proved that |[2]n212| = (2n-1)!!. For example, (<ref>) verifies that there are 5!! = 5 · 3 · 1 = 15 such words for n=3. Generalized k-Stirling words are the members of [k]n212 using our notation, and were introduced under the name r-multipermutations in <cit.>. <cit.> investigated Wilf-equivalence for k-Stirling words that avoid a subset of patterns in 3. They prove that there are five ℕ-Wilf classes[The term ℕ-Wilf-equivalence means Wilf-equivalence for all k ≥ 1.] that avoid three such patterns. So there are five counting functions parameterized by k for |[k]n212, α, β, γ| with distinct α, β, γ∈3. Their class C_2 avoids λ = {312, 231, 321} and is identical to [k]n121,123,132,213. This is because k-regular word classes respect the symmetries of the square, and complementing our patterns results in 212 and λ. While <cit.> proves a stronger version of Theorem <ref> (and over a dozen other theorems), its sheer scope obfuscates the simplicity of this important special case. For example, the proof of their three-pattern result uses their two-pattern result, so readers must process intermediate results. Their representative patterns for C_2 also lead to an inductive process that requires scaling up subwords before prefixing 1s (and 2s) (cf. Figure <ref>).
Some details are also rushed, and the authors admit that “we will be more brief than in the preceding sections”. A deeper distinction is that <cit.> uses k-Stirling words as a starting point. Broadening the basic objects to be k-regular words allowed us to discover Theorem <ref> — which uses the non-Stirling pattern 122 — as a natural companion to Theorem <ref>. Previous pattern avoidance results for non-Stirling k-regular words include the k-Catalan numbers C^k_n, which enumerate k-ary trees with n nodes. While Theorem <ref> shows that there are two distinct patterns for C_n = C^2_n (up to symmetries of the square), there are three distinct pairs of patterns for C^k_n when k > 2. * <cit.> proved that |[k]n212,312| = C^k_n. * <cit.> proved that |[k]n221, 231| = C^k_n. * <cit.> proved that |[k]n112, 123| = C^k_n.Note that the first two cases collapse into one case when k=1 as complementing and reversing 312 gives 231, while the other pattern is hidden. Williams also observed that the three results are collectively characterized by choosing a single pattern from 3 and a pattern of 1s and 2s that is consistent with it[The author was not initially aware of the Kuba and Panholzer result. The omission will be corrected in its extended version.]. We mention that k-regular words arise elsewhere in mathematics under a variety of names, including uniform permutations, and fixed-frequency multiset permutations. For example, the well-known middle levels theorem <cit.> is conjectured to have a generalization involving these words <cit.> with significant partial results existing in a broader context <cit.>. In Section <ref> we suggest that k-regular words can be combined with other pattern avoidance concepts. We do this by conjecturing that [2]n121, 123, 132, 213 = a_1(n)^2 for all n ≥ 0. That is, vincularizing the Stirling pattern in Theorem <ref> changes the Jacobsthal sequence into the Fibonacci-squared sequence.§.§ Outline Sections <ref> and <ref> prove Theorems <ref> and <ref>, respectively. The proofs involve simple bijections which could be suitable exercises for students and researchers interested in regular word pattern avoidance. Section <ref> gives our vincular conjecture on Fibonacci-squared numbers. Section <ref> closes with final remarks. As previously mentioned, Theorem <ref> was proven in a broader context in <cit.>. We encourage readers to further investigate that paper along with related results including <cit.> and <cit.>. For background on pattern avoidance we suggest <cit.>.§ PATTERN AVOIDANCE FOR FIBONACCI-K WORDS The Fibonacci-k sequences up to k=9 are illustrated in Table <ref>. Suppose α∈[k]n121, 123, 132, 213 and y ∈{2,3,...n}. Then the smallest symbol that can occur before y in α is y-1.Let α∈[k]n121, 123, 132, 213 and suppose for contradiction that the symbol x occurs before the symbol y in α, where x<y-1. Consider each of the relative positions of the symbol y-1: * (y-1)...x...y contains the pattern 213.* x...(y-1)...y contains the pattern 123.* x...y...(y-1) contains the pattern 132.Since each of these patterns must be avoided, we see that x cannot occur before y in α. Let n ≥ 2. Then α∈[k]n121, 123, 132, 213 if and only if one of the following is true:(1) α=n^kγ where γ∈[k]n-1121, 123, 132, 213;(2) α=n^b(n-1)^kn^k-bγ where 0≤ b < k and γ∈[k]n-2121, 123, 132, 213. 
Observe first that α=n^kγ∈[k]n121, 123, 132, 213 if γ∈[k]n-1121, 123, 132, 213, since none of the patterns we wish to avoid start with a largest symbol. Suppose α=a_1a_2...a_m ∈[k]n121, 123, 132, 213, but the first k symbols are not all n. Suppose a_i+1 is the first symbol of α that is not n, so that α begins with exactly i copies of n, where 0≤ i<k. Since at least one copy of n must follow a_i+1, a_i+1 can only be equal to n-1 by Lemma <ref>. We claim that all k-1 remaining copies of n-1 must follow immediately. Suppose for contradiction that a different symbol, t, appears before one of the copies of n-1. Then t cannot be equal to n, because more copies of n-1 must follow, creating the sub-permutation (n-1)n(n-1), which is the forbidden pattern 121. Thus all copies of n-1 must precede the remaining copies of n. The symbol t cannot be less than n-1, because there are remaining copies of n to the right, and by Lemma <ref>, the smallest symbol that can occur before n is n-1. Therefore, if α does not begin with n^k, then α must begin with b copies of n, followed by k copies of n-1, followed by k-b copies of n, where 0≤ b ≤ k-1. Suppose that α is a word of the form n^b(n-1)^kn^k-bγ, where γ∈[k]n-2121, 123, 132, 213 and 0≤ b < k. Observe that n^b(n-1)^kn^k-b does not contain the pattern 121, and given that γ also did not contain this pattern, α also cannot, since the symbols in γ are all smaller than n and n-1. Similarly, the patterns 123, 132, and 213 also cannot occur in α, since the prefix contains the largest values n and n-1, and the forbidden patterns did not occur in γ. The set [k]0121, 123, 132, 213={ϵ}; the set [k]1121, 123, 132, 213={1^k}. For any k≥ 1 and n≥ 2, |[k]n121, 123, 132, 213| = |[k]n-1121, 123, 132, 213| + k · |[k]n-2121, 123, 132, 213|, where |[k]0121, 123, 132, 213|=1 and |[k]1121, 123, 132, 213|=1. By Theorem <ref>, the elements of [k]n121, 123, 132, 213 can be constructed in the following way: * Create |[k]n-1121, 123, 132, 213| elements of [k]n121, 123, 132, 213 by inserting n^k in front of each element of [k]n-1121, 123, 132, 213; * Create k· |[k]n-2121, 123, 132, 213| elements of [k]n121, 123, 132, 213 by inserting a prefix of the form n^b(n-1)^kn^k-b in front of each element of [k]n-2121, 123, 132, 213 where 0 ≤ b < k. There are k such prefixes. Therefore, with Remark <ref> as a base case, we have our result. Note that in the second step of constructing [k]2121, 123, 132, 213={2^b1^k2^k-b:0≤ b ≤ k}, we create k elements by placing 2^b1^k2^k-b in front of the empty word, to obtain 2^b1^k2^k-b·ϵ=2^b1^k2^k-b, for each 0 ≤ b < k. This completes our proof of Theorem <ref> for Fibonacci-k words. § PATTERN AVOIDANCE FOR K-FIBONACCI WORDS Suppose β∈[k]n122, 213 with k,n ≥ 2. For any x,y ∈{1,2,...,n} with y>x, at least k-1 copies of y must occur to the left of x in β. Suppose to the contrary that β∈[k]n122, 213, and in β, some symbol x<y occurs with fewer than k-1 copies of y to the left of x. Then at least two copies of y must follow x. Thus, β contains the pattern 122, which contradicts that β∈[k]n122, 213. Suppose β∈[k]n122, 213 with k,n ≥ 2 and y ∈{2,3,...,n}. Then the smallest symbol that can occur before y in β is y-1. Let β∈[k]n122, 213 and suppose that x occurs before y in β, where x<y-1. By Lemma <ref>, at least k-1 copies of y-1 occur to the left of x. But then β contains (y-1)...x...y, which is 213. Let α=a_1a_2...a_n be a word. We define α'= insert(α,x,i) to be the result of inserting the symbol x into position i of α. That is, insert(α,x,i)=a_1a_2...a_i-1xa_i...a_n. Let k,n≥ 2.
Then β∈[k]n122, 213 if and only if one of the following is true: (1) β = n^k-1α' for some α'= insert(α,n,i+1) with 0 ≤ i ≤ k-1 and α∈[k]n-1122, 213. (2) β=n^k-1(n-1)^knγ where γ∈[k]n-2122, 213. We start by proving that any element of [k]n122, 213 is of form (1) or (2) above. Let β∈[k]n122, 213 where k≥ 2. By Lemma <ref>, for any x<n, at least k-1 copies of n must occur to the left of x. Therefore, β must begin with n^k-1. Consider the position of the kth copy of n. The smallest symbol that can be to its left is n-1 by Lemma <ref>. If all k copies of n-1 occur before the kth copy of n, then β=n^k-1(n-1)^knγ where γ∈[k]n-2122, 213. Otherwise, suppose i<k copies of n-1 occur before the kth copy of n. Then β must begin with n^k-1(n-1)^in(n-1)^k-i-1, since at least k-1 copies of n-1 must precede all smaller symbols by Lemma <ref>. Note that any element α of [k]n-1122, 213 begins with (n-1)^k-1, again by Lemma <ref>. Therefore, β is of the form n^k-1α' for some α'= insert(α,n,i+1) with 0 ≤ i ≤ k-1 and α∈[k]n-1122, 213. We now show that any element described by (1) or (2) is a member of [k]n122, 213. Consider a word of the form n^k-1(n-1)^knγ where γ∈[k]n-2122, 213. The prefix n^k-1(n-1)^kn does not contain 122 or 213, and since only smaller symbols occur in γ, n^k-1(n-1)^knγ also avoids these patterns. Therefore, n^k-1(n-1)^knγ∈[k]n122, 213. Similarly, suppose α is in [k]n-1122, 213, and α'= insert(α,n,i+1), where 0 ≤ i ≤ k-1. Consider n^k-1α'. Since α∈[k]n-1122, 213, α avoids the patterns 122 and 213. Inserting a copy of n within (n-1)^k-1 to create α' will not create either of these patterns, nor will inserting n^k-1 in front of α'. Thus, n^k-1α'∈[k]n122, 213. The set [k]0122, 213={ϵ}, where ϵ is the empty word. The set [k]1122, 213={1^k}. For any k,n≥ 2, |[k]n122, 213| = k ·|[k]n-1122, 213| + |[k]n-2122, 213|, where |[k]0122, 213|=1 and |[k]1122, 213|=1. By Theorem <ref>, the elements of [k]n122, 213 can be constructed in the following way: * Create k ·|[k]n-1122, 213| elements of [k]n122, 213 by placing n^k-1 in front of each α'= insert(α,n,i+1), for each α∈[k]n-1122, 213 and each 0 ≤ i ≤ k-1; * Create |[k]n-2122, 213| elements of [k]n122, 213 by placing a prefix of the form n^k-1(n-1)^kn in front of each element of [k]n-2122, 213. Therefore, with Remark <ref> as a base case, we have our result. Note that in the second step of constructing [k]2122, 213={2^k-11^b21^k-b: 0 ≤ b ≤ k}, we create |[k]0122, 213|=1 element by placing 2^k-11^k2 in front of the empty word, to obtain 2^k-11^k2·ϵ=2^k-11^k2. This completes the proof of Theorem <ref> for k-Fibonacci words. § PATTERN AVOIDANCE FOR THE FIBONACCI-SQUARED SEQUENCE One motivation of this paper is to help popularize the use of k-regular words in the pattern avoidance community. To further this goal, we wish to demonstrate that k-regular words can be combined with non-classical patterns to produce interesting results. A vincular pattern allows pairs of adjacent symbols in the pattern to be specified as being attached, meaning that a subword only contains the pattern if its corresponding pair of symbols are adjacent in the word. The following conjecture modifies the set of regular words examined in Theorem <ref> by forcing the 121 pattern in [2]n121,123,132,213 to be fully attached (i.e. both the 12 and 21 pairs are attached). The formal statement below uses both common notations for vincularity (i.e., underlines and dashes).
The content of the conjecture involves Fibonacci-squared numbers, c(n) = a_1(n)^2, whose sequence {c(n)}_n ≥ 0 is Oeis A007598 with the initial 0 omitted: 1, 1, 4, 9, 25, 64, 169, 441, 1156, 3025, 7921, …. c(n) = |[2]n121,1-2-3,1-3-2,2-1-3| = |[2]n121,123,132,213| for all n ≥ 0. The authors are actively working on this conjecture using the known recurrence c(n) = 2 · c(n-1) + 2 · c(n-2) - c(n-3) with c(0)= 0 and c(1) = c(2) = 1. While suitable bijections appear more complicated than found in our proofs of Theorems <ref>–<ref>, the authors hope to verify Conjecture <ref> in an upcoming report. § FINAL REMARKS We considered three pattern avoiding results involving k-regular words and Fibonacci sequences, including a simplified proof of a known result for Fibonacci-k words, a new result for k-Fibonacci words, and a conjecture for the Fibonacci-squared sequence. We hope that these results, together with those for the k-Catalan sequences (see Section <ref>), will serve as inspiration for further study of k-regular words beyond k-Stirling words. We conclude this investigation with some additional comments and thoughts. §.§ Efficient Generation of Multiset Permutations and Fibonacci Words There are many high-quality computational tools for investigating pattern avoidance including <cit.>, the Database of Permutation Pattern Avoidance <cit.>, and the Combinatorial Object Server <cit.>, to name a few. However, these tools don't (yet) support k-regular words, or more broadly, multiset permutations. (The k-regular words [k]n are permutations of a multiset in which the symbols in [n] each have frequency k.) During the initial steps of our investigation we found the cool-lex algorithm for generating multiset permutations to be particularly handy <cit.>. It generates a prefix-shift Gray code meaning that the next word is obtained by shifting a symbol into the first position. It also has loopless implementations meaning that words are created in worst-case O(1)-time[Combinatorial generation algorithms store one `current' object (e.g., one k-regular word) that is shared with the application, and which is modified to create the `next' object. In other words, `new' objects are not created. Hence, the algorithm might be able to create the next object in constant time if it generates a Gray code order (i.e., consecutive objects differ by a constant amount).]. A Python implementation appears in Appendix <ref> and could help readers perform their own investigations. Efficient Python code for generating k-Fibonacci and Fibonacci-k words is provided in Appendix <ref>. In this case, the words are generated directly using the proofs of Theorems <ref> and <ref>, respectively. §.§ Fibonacci Sequences with Parameter k Pattern avoidance with k-regular words is particularly promising for sequence families with one parameter. Other Fibonacci sequences with this property include those using base cases (1,k) or (k,1) or (k,k). Another natural target is the (1,1)-based k-Fibonacci-k sequence, which satisfies the following formula: d_k(n) = k · d_k(n-1) + k · d_k(n-2) with d_k(1) = 1 and d_k(2) = 1. Interestingly, a pattern avoidance result for d_2(n) was established using d-permutations <cit.>. More specifically, d_2(n) is the number of 3-permutations of [n] avoiding 231 and 312. See <cit.> for further information on d-permutations (which are not d-regular permutations). Prior to this paper, the authors' only experience with pattern avoidance was in the context of Gray codes and combinatorial generation <cit.>.
For this reason, our compute-and-check approach relied entirely on the existence of the Oeis. More bluntly, this paper would not exist without this invaluable resource. On the other hand, at the time of writing this paper, there was no mention of <cit.> in the Oeis pages for Fibonacci-k words, even for the k=2 Jacobsthal sequence A001045. This unfortunate omission significantly complicated and delayed this paper's completion, and we plan to rectify it prior to its publication. § FIBONACCI WORD GENERATION IN PYTHON Python code for generating Fibonacci-k and k-Fibonacci words over [n] is in Figure <ref>. For example, run the two scripts in a terminal to generate the k=3 and n=4 words in Figure <ref>. The files are available <cit.>. § GENERATION OF MULTISET PERMUTATIONS IN PYTHON The efficient generation of k-regular words is just beyond the standard packages in Python. In particular, the itertools package only supports k=1 with its permutations iterator. For this reason, the following program for generating k-regular words in Figure <ref> may be of value for some readers. The file is also available online <cit.>. For example, running the program in a terminal will generate the superset of words [3]4 found in Figure <ref>. The command-line arguments are tailored for k-regular words, but the underlying cool-lex algorithm <cit.> generates multiset permutations (including k-regular words) and has been ported to other languages such as <cit.>.
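Since the code figures referenced in these appendices do not survive the text extraction, the following self-contained sketch illustrates the compute-and-check approach by verifying Theorems <ref> and <ref> against the recurrences for small cases. It is a brute-force illustration (itertools plus deduplication, not the loopless cool-lex algorithm), so it is only practical when kn is small:

from itertools import combinations, permutations

def contains(word, pattern):
    """True if some subsequence of `word` is order-isomorphic to `pattern`."""
    m = len(pattern)
    return any(
        all((sub[a] < sub[b]) == (pattern[a] < pattern[b]) and
            (sub[a] == sub[b]) == (pattern[a] == pattern[b])
            for a in range(m) for b in range(m))
        for sub in ([word[i] for i in idx]
                    for idx in combinations(range(len(word)), m)))

def count_avoiders(k, n, patterns):
    """Count the k-regular words over [n] avoiding all patterns (brute force)."""
    base = tuple(v for v in range(1, n + 1) for _ in range(k))
    return sum(all(not contains(w, p) for p in patterns)
               for w in set(permutations(base)))

def a(k, n):  # Fibonacci-k numbers, (1,1)-based
    return 1 if n < 2 else a(k, n - 1) + k * a(k, n - 2)

def b(k, n):  # k-Fibonacci numbers, (1,1)-based
    return 1 if n < 2 else k * b(k, n - 1) + b(k, n - 2)

for n in range(4):
    assert count_avoiders(2, n, [(1,2,1), (1,2,3), (1,3,2), (2,1,3)]) == a(2, n)
    assert count_avoiders(2, n, [(1,2,2), (2,1,3)]) == b(2, n)
print("Theorems verified for k = 2 and n <= 3.")

Swapping the base tuple enumeration for the cool-lex generator of Appendix <ref> removes the factorial blow-up of the permutations-plus-set approach.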
http://arxiv.org/abs/2312.16052v1
{ "authors": [ "Emily Downing", "Elizabeth Hartung", "Aaron Williams" ], "categories": [ "math.CO", "cs.DM", "05 (Primary) 68 (Secondary)", "G.2.1; G.4" ], "primary_category": "math.CO", "published": "20231226133634", "title": "Pattern Avoidance for Fibonacci Sequences using $k$-Regular Words" }
Integrated Access and Backhaul via LEO Satellites with Inter-Satellite Links Zaid Abdullah^†, Eva Lagunas^†, Steven Kisseleff^⋆, Frank Zeppenfeldt^, and Symeon Chatzinotas^† ^† Interdisciplinary Centre for Security, Reliability and Trust, University of Luxembourg, Luxembourg. Emails: {zaid.abdullah, eva.lagunas, symeon.chatzinotas}@uni.lu ^⋆ Fraunhofer Institute for Integrated Circuits IIS, Erlangen, Germany. Email: steven.kisseleff@iis.fraunhofer.de ^ European Space Agency (ESA), Noordwijk ZH, The Netherlands. Email: frank.zeppenfeldt@esa.int ========================================== The third generation partnership project (3GPP) has recently defined two frequency bands for direct access with satellites, which is a concrete step toward realizing the anticipated space-air-ground integrated networks. In addition, given the rapid increase in the number of satellites orbiting the Earth and emerging satellite applications, non-terrestrial networks (NTNs) might soon need to operate with integrated access and backhaul (IAB), which has been standardized for terrestrial networks to enable low-cost, flexible and scalable network densification. Therefore, this work investigates the performance of satellite IAB, where the same spectrum resources at a low earth orbit (LEO) satellite are utilized to provide access to a handheld user (UE) and backhaul via inter-satellite links. The UE is assumed to operate with frequency division duplex (FDD) as specified by the 3GPP, while both FDD and time division duplex (TDD) are investigated for backhauling. Our analysis demonstrates that the interference between access and backhaul links can significantly affect the performance under TDD backhauling, especially when the access link comes with a high quality-of-service demand. Integrated access and backhaul (IAB), non-terrestrial networks (NTNs), inter-satellite links (ISLs), satellite IAB, shared spectrum, new radio (NR). § INTRODUCTION §.§ Definition and Standardization Integrated access and backhaul (IAB) allows spectrum sharing between access links to end users (UEs) and backhaul links between the radio access network (RAN) nodes. The third generation partnership project (3GPP) has standardized the IAB as part of the fifth-generation new radio (5G-NR) to enable cost- and time-efficient network densification <cit.>, leading to a better quality-of-service (QoS) with reduced latency <cit.>. The RAN nodes that provide direct access to UEs and backhaul to other (parent or child) RAN nodes are called IAB nodes, while the gNodeB (gNB) that connects directly to the 5G core network via a non-IAB link (such as optical fiber) is called the IAB donor. Two types of IAB operations have been defined by the 3GPP in 5G-NR: In-band (IB) and out-of-band (OB). For the OB-IAB, the access and backhaul links are assigned to different frequency bands and thus, they do not interfere with each other. In contrast, the IB-IAB should have at least a partial frequency overlap between the two links (i.e.
access and backhaul), and therefore, one should account for the resultant interference to ensure that the required QoS constraints are satisfied. Regarding the operational frequencies for IAB, different bands in the first and second frequency ranges (FR1: below 7.125 GHz and FR2: 24.25 - 52.6 GHz) with time division duplex (TDD) have been adopted by the 3GPP specifications, which can be found in release 17 <cit.>. §.§ From Terrestrial to Non-Terrestrial Networks (NTNs) Thus far, the IAB operation has been part of the 3GPP standardization only for terrestrial networks. Nonetheless, for direct access to handheld UEs through NTNs, two FR1 frequency bands (n256 and n255) with frequency division duplex (FDD) have been allocated by the 3GPP in release 18 <cit.>, which is a key step toward the anticipated integration between the terrestrial and non-terrestrial domains. In this context, a recent preliminary investigation on utilizing the n256 and n255 bands for 5G access through NTNs was carried out in <cit.>. The results showed that these two bands might lead to significant interference on various existing satellite infrastructure, prompting the exploration of additional or alternative frequencies for direct access through NTNs in 5G and beyond. Whether or not the currently adopted frequency bands will be modified in the future to avoid such interference, one can be certain that NTNs will play a pivotal role in the near future <cit.>, not only in providing connectivity to rural or isolated areas, but also to hotspot areas where terrestrial networks on their own cannot cope with the ongoing data traffic. From all the above, it becomes quite clear that, similar to the evolution of terrestrial networks, NTNs might also soon need to adopt an IAB operation that is compatible with terrestrial network operators, to provide global connectivity with enhanced spectrum utilization. This was the motivation behind our preliminary work in <cit.>, where we analyzed the main challenges associated with an NTN-based IAB operation. In addition, a case study was provided where a LEO satellite provides access to a handheld UE and backhaul to a terrestrial base station (BS) with shared spectrum. §.§ Contribution In this work, we investigate a different aspect of satellite IAB operation, where a LEO satellite provides direct access to a handheld UE, and backhaul to a second LEO satellite with the same spectrum resources (i.e. IB-IAB) over the S-band in FR1. For the access link, we adopt the FDD transmission according to the 3GPP specification <cit.>, while both TDD and FDD are explored for backhauling through inter-satellite links (ISLs). We study the effects of the number of LEO satellites per orbital plane, the altitude of the LEO satellites, the required QoS at the UE, and the ISL transmission mode on the network throughput under the IAB operation. It should be mentioned that in our previous work on satellite IAB in <cit.>, the backhauling was between a LEO satellite and a terrestrial BS, which is substantially different from the system and analysis presented here. To the best of our knowledge, the work presented here and our previous work in <cit.> are the only available works on satellite IAB so far. The rest of this paper is organized as follows. Section <ref> presents the system and channel models. Section <ref> and Section <ref> deal with the received signals and resource optimization under FDD and TDD backhauling, respectively. Numerical results are presented and discussed in Section <ref>.
Finally, conclusions are drawn in Section <ref>.§ SYSTEM AND CHANNEL MODELS §.§ System ModelWe consider a system with two LEO satellites orbiting the same plane, denoted by S_1 and S_2, with IAB capabilities (i.e. S_1 and S_2 are IAB nodes). Both S_1 and S_2 are assumed to communicate with the satellite gateway (or IAB donor) via an ideal feeder link with a dedicated frequency band. Our main focus will be on the IAB operation at S_1, which utilizes the same spectrum resources for data backhauling through the ISL with S_2, and also for data access to the UE via the service link (see Fig. <ref>). We assume that the UE always operates with FDD mode to communicate with S_1 according to the 3GPP standardization on UE access with LEO satellites <cit.>. On the other hand, both TDD and FDD transmission modes will be investigated for data backhauling through the ISL. The ground-based handheld UE is assumed to be located below S_1, as indicated in Fig. <ref>. Therefore, and due to the fact that the terrestrial UE and S_2 are widely separated in space, the two are served by different beams (and possibly by different antenna sets) at S_1 that are almost orthogonal to each other. As a result of the access and backhaul beams orthogonality, each of the two nodes (UE and S_2) can be assigned the full bandwidth when served by S_1 without causing any notable inter-beam interference, even when adopting an IB-IAB operation with shared spectrum. On the other hand, the signal transmitted from S_2 to S_1 can potentially cause interference at the UE through undesired side-lobes. Similarly, the transmitted signal from the UE (which is intended for S_1) can also affect the received signal at S_2 due to the omni-directional transmission property at the handheld UE. Although, the interference at S_2 will have a small effect on the performance for the considered single UE case, given the restriction on the transmit power level at the handheld device.[According to the 3GPP release 18, the maximum UE transmit power within the channel bandwidth of a 5G-NR carrier cannot be higher than 23 dBm, which is almost equivalent to 0.2 Watts <cit.>.]^,[The effect of the transmission of a large number of UEs on satellites under IAB operation will be the topic of a future research.] The interference between the UE and S_2 appears when the TDD is utilized for backhauling. Further details in this regard will be provided in Section <ref>. §.§ Channel Model and Channels GainAll channels are assumed to be line-of-sight (LoS), and the free-space path loss (PL) model is utilized to characterize their gain.§.§.§ UE channel gainThe channel gain between the UE and jth satellite (j∈{1,2}) can be given as:β_UEj = G_S_jG_UE/(4π d_j f_c/c)^2, where f_c is the carrier frequency, c is the speed of light, G_S_j is the antenna gain of S_j, G_UE is the UE antenna gain, and d_j is the distance between the UE and S_j. Given the fact that S_1 is assumed to be located precisely above the UE, d_1 is nothing but the satellite's altitude, which we denote by l_s, while d_2 can be evaluated using a simple trigonometric formula as shown in the Appendix at the end of this paper.§.§.§ ISL path-loss and channel gainThe free-space PL for the ISL (PL_I) can be expressed as <cit.>:PL_I = (4π d_I f_c/c)^2, ifd_I ≤ d_I_max ∞, otherwise where d_I is the ISL distance between S_1 and S_2, and d_I_max is the maximum slant range between S_1 and S_2 to maintain an LoS connection (details are provided below). 
It follows that the corresponding ISL channel gain is: β_I = G_S_1 G_S_2/PL_I. §.§.§ Slant range and number of satellites per planeFor two satellite nodes orbiting the same plane (and thus have the same altitudes), the maximum slant range can be evaluated according to the following formula <cit.>:d_I_max = 2√(l_s(l_s + 2R_E)), where R_E is the Earth's radius. In general, the slant range between two neighbouring satellites in the same orbital plane depends on the satellites' altitude as well as the number of satellites in that plane. Assuming evenly distributed satellites, the slant range between two neighbouring satellites is <cit.>:d_I = 2(R_E + l_s)sin(π/N_p) with N_p being the number of satellites in the plane. Therefore, from (<ref>) and (<ref>), the required minimum number of satellites (N_p_min) in the plane to maintain an LoS connection is: ⌈ N_p_min⌉ = π/sin^-1(d_I_max/2(R_E + l_s)), where ⌈ x ⌉ is the ceiling function defined as the smallest integer that is not smaller than x. In the following two sections, we will elaborate on the received signals and signal-to-noise ratios (SNRs) at both the UE and S_2, and formulate the corresponding achievable network throughput for two different scenarios. The first scenario assumes FDD-based transmission for the ISL, while the second scenario deals with the case where TDD is utilized for backhauling between S_1 and S_2. § SCENARIO 1: FDD FOR ISL In this scenario, both satellites operate according to the FDD mode (i.e. FDD is utilized for both access to the UE and backhaul through the ISL). Our focus here will be on the data transmission from S_1. It is worth mentioning that in this scenario there is no interference between S_2 and the UE. The reason is that the UE and S_2 operate in FDD, and they can adopt the same uplink frequency to transmit to S_1, and also the same downlink frequency to receive from S_1. Therefore, the transmission from the UE (S_2) to S_1 will not cause any interference at S_2 (UE).§.§ Received Signals and SNRsThe received signals at the UE and S_2 can be expressed, respectively, as follows: y_UE^F = √(P_A) h_UE1 x_UE + z_UE, y_S_2^F = √(P_I_1) h_I x_I_2 + z_S_2,where the superscripts in y_UE^F and y_S_2^F indicate that FDD transmission is adopted for the ISL, P_A and P_I_1 are, respectively, the allocated powers at S_1 for the UE and S_2, h_UE1 is the channel coefficient between S_1 and the UE, while h_I is the ISL channel coefficient. Also, x_UE and x_I_2 are the information symbols intended for the UE and S_2, respectively, satisfying 𝔼{|x_UE|^2} = 𝔼{|x_I_2|^2} = 1, while z_UE and z_S_2 account for the additive white Gaussian noise (AWGN) at the UE and S_2, respectively, with noise power densities (PSDs) of N_0 per Hz.Then, the received SNRs at the UE and S_2 are, respectively, given as: γ_UE^F = P_A/N_0W_Fβ_UE1, γ_S_2^F = P_I_1/N_0W_Fβ_I,where W_F reflects the amount of bandwidth assigned for the FDD transmission in one direction. 
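Before moving to the rate expressions, the geometry and link-budget formulas above can be made concrete with a short Python sketch. The parameter values of Table <ref> are not reproduced in this extraction, so every number below (carrier frequency, bandwidth, antenna gains, noise PSD, altitude, satellite count, and the equal power split) is an illustrative placeholder rather than the paper's configuration:

import math

# Illustrative placeholders (the paper's Table values are not reproduced here).
R_E  = 6371e3               # Earth radius [m]
l_s  = 600e3                # LEO altitude [m]
f_c  = 2e9                  # S-band carrier [Hz]
c    = 3e8                  # speed of light [m/s]
N_p  = 10                   # satellites per orbital plane
G_S  = 10**(32/10)          # satellite antenna gain (32 dBi)
G_UE = 10**(0/10)           # handheld UE gain (0 dBi)
W_F  = 10e6                 # FDD bandwidth per direction [Hz]
N0   = 10**((-174-30)/10)   # noise PSD [W/Hz]
P_A = P_I1 = 5.0            # equal power split at S_1 [W]

# Geometry: max slant range, actual ISL range, minimum N_p, cf. Eqs. (4)-(6).
d_I_max = 2*math.sqrt(l_s*(l_s + 2*R_E))
d_I     = 2*(R_E + l_s)*math.sin(math.pi/N_p)
N_p_min = math.ceil(math.pi/math.asin(d_I_max/(2*(R_E + l_s))))
assert d_I <= d_I_max, "no LoS between neighbouring satellites"

fspl = lambda d: (4*math.pi*d*f_c/c)**2      # free-space path loss
beta_UE1 = G_S*G_UE/fspl(l_s)                # access channel gain
beta_I   = G_S*G_S/fspl(d_I)                 # ISL channel gain

snr_db = lambda x: 10*math.log10(x)          # FDD SNRs at UE and S_2
print(f"N_p_min = {N_p_min}, d_I = {d_I/1e3:.0f} km")
print(f"access SNR = {snr_db(P_A*beta_UE1/(N0*W_F)):.1f} dB, "
      f"ISL SNR = {snr_db(P_I1*beta_I/(N0*W_F)):.1f} dB")

For a 600 km orbit this sketch reproduces the minimum of 8 satellites per plane quoted in the numerical-evaluation footnote later in the paper, and it exhibits the much stronger ISL quality discussed there.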
§.§ Achievable Rates and Power Control The total throughput of the network (in bits/sec) is: ℛ^F = ℛ_A^F + ℛ_ISL^F = W_F log_2(1 + γ_UE^F) + W_F log_2(1 + γ_S_2^F), with ℛ_A^F and ℛ_ISL^F being the achievable rates for access and backhaul, respectively, under FDD backhauling. To maximize the throughput, optimal power control can be performed subject to (i) total satellite transmit power and (ii) minimum access rate constraints, as follows: maximize_P_A, P_I_1 W_F log_2(1 + P_A β_UE1/(N_0 W_F)) + W_F log_2(1 + P_I_1 β_I/(N_0 W_F)) subject to P_A + P_I_1 ≤ P_S_1, W_F log_2(1 + P_A β_UE1/(N_0 W_F)) ≥ ℛ, where P_S_1 is the total available transmit power at S_1, and ℛ is a minimum access rate threshold for the UE. Problem (<ref>) and constraints (<ref>) and (<ref>) are convex, and the optimal solution can be obtained using software tools such as CVX. § SCENARIO 2: TDD FOR ISL For this scenario, the TDD mode is utilized for backhauling between S_1 and S_2, while FDD is adopted for the communication between S_1 and the UE for data access. It is worth highlighting that in this case, the transmission from S_2 to S_1 would likely cause interference to the UE due to the undesired side-lobes of the main beam that is intended for S_1. In particular, and unlike the previous scenario with FDD, when S_2 operates in TDD mode, the entire available bandwidth will be utilized for both transmission and reception (at different time slots). Similarly, the omni-directional transmission at the UE can also affect the received signal at S_2 when the latter operates in a TDD mode, although such interference at S_2 would be small in case of a single handheld UE transmission. §.§ Received Signals and SNRs We start with the received signal at the UE, which can be expressed as: y_UE^T = √(P_A) h_UE1 x_UE + ϖ√(P_I_2) h_UE2 x_I_1 + z_UE, where the superscript in y_UE^T indicates that the TDD mode is utilized for backhauling, h_UE2 is the channel between S_2 and the UE, P_I_2 is the transmit power from S_2,[In this work we assume that P_I_2 is equivalent to the total transmit power at S_1, i.e. P_I_2 = P_S_1.] x_I_1 is the information symbol intended for S_1 from S_2 (via the ISL) with 𝔼{|x_I_1|^2}=1, and the parameter ϖ accounts for the fact that S_2 operates in the TDD mode and thus only transmits (to S_1) for half of the time. Therefore, ϖ = 0 when S_2 is in receiving mode from S_1, and ϖ = 1 otherwise. In addition, the received signal at S_2 (when operating in the receiving mode) is: y_S_2^T = √(P_I_1) h_I x_I_2 + √(P_UE) h_UE2 x_S_1 + z_S_2, where P_UE is the UE uplink transmit power, and x_S_1 is the transmitted data symbol from the UE intended for S_1. The SNR/SINR (signal-to-interference-plus-noise ratio) at the UE and S_2 are, respectively, given as: γ_UE^T = P_A β_UE1/(N_0 W_F) for ϖ = 0, and γ_UE^T = P_A β_UE1/(P_I_2 β_UE2 W_F/W_T + N_0 W_F) for ϖ = 1, while γ_S_2^T = P_I_1 β_I/(P_UE β_UE2 + N_0 W_T), where W_T is the total available bandwidth under the TDD mode. §.§ Achievable Rates and Power Control In this case, there are two different achievable rates at the UE, which can be expressed as: ℛ_A_1^T = W_F log_2(1 + P_A β_UE1/(N_0 W_F)), ℛ_A_2^T = W_F log_2(1 + P_A β_UE1/(P_I_2 β_UE2 W_F/W_T + N_0 W_F)), where ℛ_A_1^T reflects the achievable access rate under no interference from S_2 (i.e.
S_2 is receiving from S_1), while ℛ_A_2^T is the achievable access rate at the UE when S_1 is transmitting to the UE but receiving from S_2. In addition, the backhaul achievable rate under TDD transmission is: ℛ_ISL^T = (1/2) W_T log_2(1 + P_I_1 β_I/(P_UE β_UE2 + N_0 W_T)), where the 1/2 factor in (<ref>) is due to the TDD operation. The goal is to maximize the total throughput via power allocation at S_1. However, such power control is only required when S_1 is transmitting to both the UE and S_2, while for the second case where S_1 is receiving from S_2, both S_1 and S_2 can transmit with their maximum available powers to serve the UE and S_1, respectively. Moreover, the optimization, which is performed to split the power during only one of the TDD time instances, should take into account the minimum average user rate as follows: maximize_P_A, P_I_1 {W_F log_2(1 + P_A β_UE1/(N_0 W_F)) + (1/2) W_T log_2(1 + P_I_1 β_I/(P_UE β_UE2 + N_0 W_T))} subject to P_A + P_I_1 ≤ P_S_1, W_F log_2(1 + P_A β_UE1/(W_F N_0)) + ℛ_A_2^T ≥ 2ℛ. Problem (<ref>) is convex and can be solved optimally using software tools. Once ℛ_A_1^T and ℛ_ISL^T are maximized via power control, the total network throughput can be expressed as: ℛ^T = ℛ_ISL^T + (1/2)(ℛ_A_1^T + ℛ_A_2^T), where the division over two is to find the average access rate with and without interference from S_2. § NUMERICAL EVALUATIONS In this section, we present and discuss the numerical results for the adopted satellite IAB network. The different simulation parameters utilized in our work are shown in Table <ref>. First, we show in Fig. <ref> the ISL and access channel gains by varying the number of LEO satellites in the considered orbital plane and the satellites' altitude.[By applying the formula in (<ref>), the minimum number of LEO satellites to maintain an ISL connection was found to be 6 for orbital planes with an altitude of 1200 Km, and 8 for orbital planes with an altitude of 600 Km.] Clearly, the number of satellites in the orbital plane has a direct effect on the ISL link quality, as a larger number of satellite nodes means shorter slant range between any two neighbouring satellites, and thus, better channel conditions. In this context, it is worth highlighting that the number of satellites per plane in the Starlink constellation (SpaceX) ranges between 20 and 58 satellites according to the Federal Communications Commission (FCC) report in <cit.>. In addition, the results show that while the quality of the access link is highly affected by the LEO altitude, the effect on the quality of the ISL is marginal. The reason is that for LEO satellites, having twice the altitude does not correspond to having twice the slant range. In fact, in the considered scenario, the slant range at 1200 Km of LEO altitude is only about 9% larger than that at 600 Km, assuming the same number of satellites at both altitudes. Furthermore, the ISL quality is shown to be much higher than that for the access link. This is a direct consequence of the fact that the handheld UE is assumed to have an omni-directional antenna with 0 dBi antenna gain compared to S_2 which has 32 dBi antenna gain. Fig. <ref> shows the network throughput as a function of the total transmit power at the satellite nodes, and for both TDD and FDD transmission modes under a minimum access rate of ℛ = 10 Mbits/sec. FDD demonstrates superior performance compared to its TDD counterpart. Moreover, higher levels of satellite transmit powers lead to a bigger gap between the TDD and the FDD modes.
The reason is that under the TDD transmission, one cannot escape the resultant interference between S_2 (which transmits with the same power level as S_1 of P Watts) and the ground-based UE, and thus, higher transmit power levels mean larger interference, and hence, bigger gap between the two transmission modes. Fig. <ref> demonstrates the network throughput (which comprises both access and backhaul achievable rates) under different access QoS requirements (ℛ) at the UE. The results indicate a significant degradation in the total throughput is experienced as the minimum access rate increases, especially for the TDD scenario due to the interference between S_2 and the UE that makes it particularly challenging to guarantee a high QoS at the UE. For instance, in order for the UE to enjoy a minimum rate of 28 Mbits/sec, the total throughput will be degraded by 75 Mbits/sec compared to that which guarantees the UE with a QoS of only 10 Mbits/sec. The impact of higher QoS at the UE can be further seen in Fig. <ref>, which shows the distribution of the normalized allocated power for the access (i.e. allocated power for access divided by the total transmit power at S_1) as a function of the minimum required access rate, and for both TDD and FDD of ISL backhauling. The blue bar shows the normalized access power under the FDD scenario, while the red bar shows the amount of power for access during only one time instance of the TDD transmission, particularly the one where S_1 transmits to both the UE and S_2. Further, the yellow bar illustrates the average distribution of access power under the TDD scenario over both TDD time instances.[During the second time instant of the TDD scenario where S_2 is in transmission mode, it is assumed that S_1 deploys all of its available transmission power to serve the UE.] Focusing on the comparison between the red and blue bars where the total power at S_1 is split between the UE and S_2, 96% of the total power is allocated for access to achieve a QoS of 28 Mbit/sec under TDD, compared to 83% under the FDD mode. This explains the sharp decrease in total network throughput under TDD for ISL when the UE QoS is relatively high. Finally, it is also observed from Fig. <ref> and Fig. <ref> that the throughput and power allocation do not experience any change under the TDD when the required QoS is below 20 Mbit/sec. This can be explained in Fig. <ref>, where the average access rate (that also happens to maximize the network throughput) under the TDD transmission is above 18 Mbit/sec. As a result, it is only when the minimum requirement is above this threshold that the network would have to compromise the total throughput by allocating higher power levels to the UE in order to achieve its required QoS. In contrast, under the FDD transmission, the optimal access rate that maximizes the total throughput is just above the 11 Mbit/sec mark, therefore, any QoS requirement that is higher than this threshold will lead to compromising the total network throughput by modifying the power distribution between the access and backhaul links. § CONCLUDING REMARKSThe performance of satellite IB-IAB with inter-satellite backhauling over the S-band frequency spectrum was investigated in this work. Following the 3GPP specifications, the FDD mode was adopted for the direct access between a handheld UE with omni-directional antenna and the LEO satellite, while both TDD and FDD transmission modes were investigated for backhauling through ISLs. 
Our results demonstrated the large superiority of FDD compared to TDD, and they showed that under a high QoS for the access link, the total network throughput will suffer dramatically if the backhauling is performed according to the TDD mode, due to the interference between the access and backhaul links. § ACKNOWLEDGMENT This work has been supported by the European Space Agency (ESA) funded activity Sat-IAB: Satellite and Integrated Access Backhaul - An Architectural Trade-Off (contract number 4000137968/22/UK/AL). The views of the authors of this paper do not necessarily reflect the views of ESA. § APPENDIX From Fig. <ref>, the distance between S_2 and the UE can be evaluated using the following trigonometric formula: d_2 = √((R_E + l_s)^2 + R_E^2 - 2(R_E + l_s)R_E cos x), where R_E is the Earth's radius. Assuming that there are N_p evenly-distributed satellites within the considered orbital plane, d_2 can be obtained after substituting x = 2π/N_p in (<ref>).
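As a complement to the remark that the power-control problem (<ref>) of Section <ref> is convex and solvable with standard software, the following sketch solves a problem of the same form numerically with SciPy rather than CVX. The effective gains a = β_UE1/(N_0 W_F) and b = β_I/(N_0 W_F), the power budget, and the rate floor below are invented placeholders, not values from Table <ref>:

import numpy as np
from scipy.optimize import minimize

# Placeholder constants (not the paper's Table values).
W_F, a, b = 10e6, 0.5, 50.0          # bandwidth [Hz], effective gains
P_total, R_min = 10.0, 10e6          # power budget [W], access floor [bit/s]

rate_A   = lambda p: W_F*np.log2(1 + a*p[0])   # access rate
rate_ISL = lambda p: W_F*np.log2(1 + b*p[1])   # backhaul rate

res = minimize(
    lambda p: -(rate_A(p) + rate_ISL(p)),      # maximize total throughput
    x0=[P_total/2, P_total/2],
    bounds=[(0, P_total), (0, P_total)],
    constraints=[
        {"type": "ineq", "fun": lambda p: P_total - p[0] - p[1]},  # power budget
        {"type": "ineq", "fun": lambda p: rate_A(p) - R_min},      # access QoS
    ],
    method="SLSQP",
)
P_A, P_I1 = res.x
print(f"P_A = {P_A:.2f} W, P_I1 = {P_I1:.2f} W, "
      f"throughput = {(rate_A(res.x) + rate_ISL(res.x))/1e6:.1f} Mbit/s")

Because the backhaul gain dominates in this toy setting, the solver pushes the access power down to the value that just meets the QoS floor, mirroring the power-split behaviour discussed in the numerical evaluations.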
http://arxiv.org/abs/2312.16592v1
{ "authors": [ "Zaid Abdullah", "Eva Lagunas", "Steven Kisseleff", "Frank Zeppenfeldt", "Symeon Chatzinotas" ], "categories": [ "cs.IT", "math.IT" ], "primary_category": "cs.IT", "published": "20231227143603", "title": "Integrated Access and Backhaul via LEO Satellites with Inter-Satellite Links" }
http://arxiv.org/abs/2312.16577v1
{ "authors": [ "G. N. Koutsokostas", "S. Sypsas", "O. Evnin", "T. P. Horikis", "D. J. Frantzeskakis" ], "categories": [ "nlin.PS", "math-ph", "math.MP", "nlin.SI" ], "primary_category": "nlin.PS", "published": "20231227140023", "title": "Nonlinear instability and solitons in a self-gravitating fluid" }
FairCompass: Operationalising Fairness in Machine Learning Jessica Liu, Huaming Chen, Member, IEEE, Jun Shen, Senior Member, IEEE, and Kim-Kwang Raymond Choo. Jessica Liu and Huaming Chen are with the School of Electrical and Computer Engineering, University of Sydney, Australia (corresponding author e-mail: huaming.chen@sydney.edu.au). Jun Shen is with the University of Wollongong, Australia (e-mail: jshen@uow.edu.au). Kim-Kwang Raymond Choo is with The University of Texas at San Antonio, San Antonio, TX 78249, USA (e-mail: raymond.choo@fulbrightmail.org). January 14, 2024 ========================================== Traditionally, sweet orange crop forecasting has involved manually counting fruits from numerous trees, which is a labor-intensive process. Automatic systems for fruit counting, based on proximal imaging, computer vision, and machine learning, have been considered a promising alternative or complement to manual counting. These systems require data association components that prevent multiple counting of the same fruit observed in different images. However, there is a lack of work evaluating the accuracy of multiple fruit counting, especially considering (i) occluded and re-entering green fruits on leafy trees, and (ii) counting ground-truth data measured in the crop field. Here, we propose a non-invasive alternative that utilizes fruit counting from videos, implemented as a pipeline. Firstly, we employ convolutional neural networks for the detection of visible fruits. Inter-frame association techniques are then applied to track the fruits across frames. To handle occluded and re-appeared fruit, we introduce a relocalization component that employs 3-D estimation of fruit locations. Finally, a neural network regressor is utilized to estimate the total number of fruit, integrating image-based fruit counting with other tree data such as crop variety and tree size. The results demonstrate that the performance of our approach is closely tied to the quality of the field-collected videos. By ensuring that at least 30% of the fruit is accurately detected, tracked, and counted, our yield regressor achieves an impressive coefficient of determination of 0.85. To the best of our knowledge, this study represents one of the few endeavors in fruit estimation that incorporates manual fruit counting as a reference point for evaluation. We also introduce annotated datasets for multiple orange tracking (MOrangeT) and orange detection (OranDet), which are publicly available and aim to foster the development of novel methods for image-based fruit counting. § INTRODUCTION Accurate fruit yield estimation is important for making informed decisions about harvesting, storage, and marketing, but such estimation can be a challenging task.
Consider the major citrus belt in Brazil, which covers areas in two states (São Paulo and Minas Gerais) and totals 461,921 hectares, 86% of which are reserved for sweet orange production, according to the Fund for Citrus Protection (Fundecitrus) <cit.>. The most used annual orange crop forecast process, performed by Fundecitrus, is a laborious operation that involves manual fruit stripping (the advanced harvest of all fruit on the tree) in samples of around 1,500 trees spread across this citrus belt. Such sampling and counting procedures are a usual way to obtain pre-harvest fruit yield data <cit.>.

Computer vision-based methods have been considered a prominent alternative for automatic, non-invasive fruit counting, becoming parts of larger yield estimation systems. Earlier works employed feature engineering <cit.> to create fruit detectors, exploring low-level image features derived from color and texture. An example for citrus was presented by <cit.>, who employed image processing techniques and support vector machines to detect green oranges in field images. With the popularization of deep learning, convolutional neural networks (CNNs) were adopted for automatic feature learning (representation learning) <cit.>. The work by <cit.> is a turning point in the adoption of CNNs in fruit detection, and this passage from feature engineering to representation learning is reviewed by <cit.>.

Despite significant progress in automated fruit detection using deep learning techniques, the counting stage of the processing pipeline remains a challenging area that requires further development. Single images cannot provide reliable fruit counting estimates because of occluded fruit: a single standpoint keeps a significant part of the fruit out of sight <cit.>. A natural alternative is to employ multiple images (or, equivalently, video sequences) to get different views able to reach most of the targets and perform counting of the detected fruits. However, while single-view fruit counting presents underestimation issues, a major limitation of multiple-view fruit counting is overestimation of yield due to overlapping images, which must be properly managed to ensure accurate counting <cit.>. An approach explored recently is to scan the orchards using cameras and to assign a unique identifier to each fruit along the image sequence <cit.>. Such an approach, Multiple Object Tracking (MOT), involves detecting multiple objects in a sequence of images and linking them over time to identify individual objects <cit.>. This is a challenging task that requires precise detection, localization, and association of the same object in different images. Essentially, the goal is to identify each object's location, movement, and identity throughout the entire sequence, avoiding multiple counting of the same object (a fruit).

Fruit detection and tracking can also be considered a component of semantic scene understanding systems[Or Spatial AI systems, as proposed by <cit.>.] for agricultural settings, allowing artificial agents to extract semantic information about the objects in an orchard and interact usefully with its environment. Examples of such interactions are crop monitoring, spraying and harvesting <cit.>.

Our objective is to contribute to crop forecasting by providing estimates of the actual number of fruits on a tree. We start with the challenging task of counting visible oranges on individual trees under real citrus farm conditions using video inputs.
Yield prediction is performed at earlier maturation stages of sweet orange, when the fruit is green (non-mature), lacking the color contrast against green leaves observed in mature fruit <cit.>. The imaging is performed at ground level, between tree rows, employing smartphone cameras. In such a setting, there is an issue: tall trees (up to 5 meters in height) cannot be fully imaged in a single frame. In other words, considering the available inter-row space, the trees' height and the absence of wide-angle optics, it is not possible to keep the entire tree in the camera's field of view (FOV). Even if wide-angle optics were available, a single view of a tree could not provide a full assessment of the entire plant's fruit set because of occlusions. Therefore, we need a multiple-view counting system to solve this issue. After fruit counting, we proceed to the yield estimation for each tree. This value differs from the fruit counted by any computer vision system, since there are oranges located in the inner part of the plant canopy, not visible in the images collected in the field even considering multiple views. We implement a neural net-based regressor to estimate the actual number of fruit from the fruit counting, integrating other data such as plant height, variety, and age. Although we use a different set of methods, our work is similar to the one by <cit.> in that they also propose a pipeline comprising fruit detection, 3-D projection, fruit counting and yield estimation in apple orchards. However, our study faces a more challenging scenario considering occlusions and fruit tracking.

The main contributions of this work are:

* a simple but effective annotation methodology for multiple object tracking in citrus, exploiting camera pose data from structure from motion to estimate the fruits' 3-D locations and automate the generation of 2-D track ground-truth;
* the introduction of MOrangeT <cit.>, a public MOT dataset for citrus composed of image sequences (frames) recorded on citrus farms plus ground-truth tracking data in the MOT16 format <cit.>, whose annotation effort was performed by our team;
* a multiple fruit detection and tracking method based on convolutional networks for detection, on the Hungarian algorithm <cit.> for inter-frame association, and on a relocalization algorithm to treat occlusions and fruits exiting/entering the camera's field of view;
* an evaluation of the proposed method employing the multiple object tracking accuracy (MOTA) <cit.> and the higher order tracking accuracy (HOTA) <cit.>, and
* the proposition of a regressor based on artificial neural networks to estimate the number of fruit in a tree given the number of fruit counted using MOT, validated using a set of 1,139 trees with real yield manually collected.

A useful property of the proposed tracking method is that the fruit's relative position in 3-D is also estimated, allowing assessment of the spatial distribution of fruits in the tree.

This work is organized as follows. In Section <ref> we provide a comprehensive overview of related works in the field of fruit counting and yield estimation. We discuss the various methods employed for fruit detection and inter-frame fruit association. Section <ref> presents our proposed methodology, which encompasses the MOrangeT dataset, a novel annotation technique for MOT ground-truth, the methods employed for orange tracking, and tree yield estimation. The results of our experiments are presented in Section <ref>, followed by a detailed discussion of the findings in Section <ref>.
Finally, in Section <ref>, we present our concluding remarks, summarizing the main contributions of this work and outlining potential paths for future research. For a visual overview, a video showing our fruit counting results is available[<https://youtu.be/hOq42KMskLQ>].

§ RELATED WORK

After the rise of CNN-based methods for fruit detection, initiated by <cit.>, researchers have explored the problem of fruit counting in multiple images, considering important challenges such as fruit recounting and occlusions <cit.>. Most of these works add a data association procedure to link detected oranges in different images, avoiding multiple counting, and commonly employing a tracking algorithm. Some researchers extended their fruit counting systems to yield estimation <cit.>, generally using (linear) regression.

<cit.> proposed a fruit counting system tested on data from mango orchards. A dataset of 1,500 images, with a resolution of 500 × 500 pixels, was employed to train a Faster R-CNN detector <cit.>. The mangoes' tracking was performed by a method employing the Kanade-Lucas-Tomasi (KLT) optical flow algorithm, Kalman filters, and the Hungarian algorithm <cit.>. To deal with long-term occlusions and reappearing fruits, they employed a relocalization procedure based on the fruit's estimated 3-D position and its reprojection on the current frame. They used structure from motion (SfM) <cit.>, using the mangoes as 3-D landmarks and their tracked positions in images as measurements. In SfM, the three-dimensional positions of landmarks and the camera are estimated from the measurements (2-D positions in each frame) using non-linear least squares <cit.>. Liu et al. employed COLMAP <cit.> as the SfM implementation, getting estimations for the mangoes' locations in 3-D and the camera position in space at each frame. To avoid multiple counting, mangoes/landmarks and Faster R-CNN detected bounding boxes are reassociated using the Hungarian algorithm and a specially designed association cost function. Liu et al. achieved a coefficient of determination (R^2) of 0.78 in a linear regression model relating fruit count to yield after considering views from two different vantage points of two rows facing opposite sides of the trees.

Concurrently, <cit.> proposed another image-based counting system for mangoes. Using 10 fps video sequences scaled to 1024 × 1024 pixel frames, they employed their MangoYOLO model, a 33-layer adaptation of YOLOv3 <cit.>, to detect fruits. A Kalman filter was employed for tracking, estimating the fruit positions in the next frame. The Hungarian algorithm was used to assign predicted fruit positions to detections in the current frame, using the distances between the fruits' centroids in the cost function. Wang et al. adopted a threshold for lost trackers: after 15 frames without associations to MangoYOLO detections, the Kalman-based tracker is ended; if the fruit re-appears, it is considered a new fruit, incrementing the count. In an experiment involving a row containing 21 trees, the authors reported a root-mean-square error (RMSE) of 18 fruits/tree. However, the reported results indicate a cancellation in the mean error between overestimation from repeated counting and underestimation from missed fruits.

<cit.> proposed a novel approach where a vehicle carrying nozzles applied a water mist to citrus trees. The mist induced a temperature contrast between fruits and leaves, and a thermal camera, also embedded in the vehicle, recorded video streams of thermal frames.
Careful experimentation was conducted to find optimal parameters for the amount of water and the kind of nozzle, and to assess the impact of ambient temperature and humidity, with the goal of maximizing temperature contrast. A Faster R-CNN model was trained for orange detection in the thermal images, reaching 87.2% average precision (AP). KLT tracking was employed to avoid multiple counting, but the data association used by the authors for multiple fruit tracking is unclear. In a row containing 25 trees, their method was able to count 96% of the 747 manually counted fruits in the field. However, the thermal camera's FOV was unable to cover the entire canopy, so the manually collected ground-truth was restricted to the viewed area, marked using nylon strings.

<cit.> have explored the modern fruiting-wall growing system for apples, where the apple tree canopy is thin, making the fruits and trunks visible. Fruit detection and tree trunk detection were performed using the YOLOv4-tiny architecture <cit.>, a fast neural network aimed at real-time applications. They assume a linear movement of the camera (embedded in a vehicle), so their tracking is adapted to deal with horizontal movements only. The CSR-DCF algorithm <cit.>, a single-target tracker, was employed for trunk tracking, and the estimated horizontal movement guided the apple tracking. Inter-frame fruit assignment is guided by the minimum Euclidean distance between fruits' centroids, but the employed assignment algorithm is not completely clear. The counting system, evaluated using annotated videos (not the true yield or manual counting in the field), reached R^2 = 0.98. However, the reported results show that detection errors in a single frame (mainly false negatives) produce immediate tracking errors.

The APPLE MOTS dataset was introduced by <cit.>. Two UAVs and a wearable device, where a camera is embedded in a helmet, were employed to get video sequences in orchards presenting three different apple varieties. The CVAT annotation tool was employed to produce almost 86,000 manually annotated masks associated with 2,304 unique apple instances across 1,673 frames. The authors further evaluated two different MOT algorithms, TrackR-CNN <cit.> and PointTrack <cit.>. The best results were produced by PointTrack, which reached 52.9 in the multi-object tracking and segmentation accuracy (MOTSA) metric, a variation of the MOTA metric <cit.> for segmented objects[In the segmentation case, masks are provided as object segmentation ground-truth, not just rectangular bounding boxes.]. The authors call attention to the fact that multiple fruit tracking involves homogeneous objects, i.e., objects very similar to each other. This differs from people tracking, a popular task in computer vision because of research on surveillance and autonomous vehicles. De Jong et al. consider whether this homogeneity in object appearance could impact, in different ways, methods originally developed for pedestrian tracking.

Another work employing a MOT formulation was presented by <cit.>, again following a tracking-by-detection framework: a YOLOv3 network <cit.> was employed to find camellia fruit and apples, and Kalman filters were used for fruit tracking. The Mahalanobis distance and the Kalman-based predictions are employed to define eligible associations between tracks and new detections in the current frame. Then a similarity function based on appearance is employed for association by a nearest-neighbor algorithm. Such a combination of spatial filtering and appearance similarity was named Cascade-SORT by the authors.
<cit.> have reached MOTA values beyond 0.70 for most of their tests. However, their results comprise just four short video sequences (one for camellia and three for apples).

<cit.> proposed a multiple object tracking method for citrus in the field, composed of a detection component, OrangeYOLO, and a tracking component, OrangeSORT, which are modified versions of the YOLOv3 network <cit.> and the SORT algorithm <cit.>, respectively. A wide-angle action camera[A DJI Osmo Action camera, presenting a 145° FOV.] was mounted on a field rover, which performed a linear movement across the orchard's rows at 2 m/s. Zhang et al. carefully selected three scales from YOLOv3's backbone (Darknet53) after analyzing the receptive fields and the oranges' sizes, getting significant improvements in detection. OrangeYOLO also presents a channel-spatial attention mechanism, but the ablation experiment results showed a marginal gain in detection from this extension to the architecture. OrangeSORT computes the average motion displacement observed in the tracked fruits to update the state of lost trackers (Kalman filters), i.e., trackers not associated with any OrangeYOLO detection at the current frame. In the six videos collected in the field for testing, the counting error varied from 0.61% (best case) to 24.29% (worst case).

<cit.> evaluated five different MOT algorithms for tracking apples in the field: (i) multiple Kalman filters combined with the Hungarian algorithm; (ii) kernelized correlation filter; (iii) multiple hypothesis tracking (MHT) <cit.>; (iv) SORT <cit.>; and (v) DeepSORT <cit.>. They trained the employed apple detectors, Faster R-CNN and YOLOv5, using public datasets such as MinneApple <cit.> and Fuji-SfM <cit.>. Nine videos recorded in the field using smartphones[Smartphones iPhone 6S Plus and iPhone 13, Apple Inc.] were employed in the apple tracking evaluation. Annotations in the MOT format <cit.> for the videos were produced using the open-source tool CVAT[CVAT – <http://cvat.ai>.]. The authors conducted an interesting sensitivity analysis, employing the ground-truth bounding boxes as input for the trackers, but degrading the detection probability from 100% (all boxes in the ground-truth) to 20%. MHT and DeepSORT showed the best tracking performances, even at a 60% detection rate. When considering detections from the neural nets, YOLOv5 produced the best detections for tracking, while DeepSORT produced the best results: 20.1% error on average for fruit counting.

What is the proper way to evaluate MOT trackers? According to <cit.>, evaluation metrics serve two primary purposes. Firstly, they allow for straightforward comparison between various methods to determine which ones perform better than others. To achieve this objective, it is recommended to have a single metric that can be used to rank and compare these methods. Secondly, evaluation metrics are essential for analyzing and comprehending the various types of errors that algorithms make, identifying where they are likely to fall short when applied. The HOTA metric <cit.> was proposed with these two objectives in mind for the complex task of multiple object tracking. As stated by its proponents: "HOTA measures how well the trajectories of matching detections align, and averages this over all matching detections, while also penalizing detections that don't match." Few works have employed proper MOT metrics for multiple fruit tracking <cit.>, most of them employing the multiple object tracking accuracy (MOTA) metric proposed by <cit.>.
However, MOTA has been criticized because it tends to prioritize detection over association <cit.>, while HOTA offers a balanced combination of detection and association scores.

Lost detections, caused by occlusions and the harsh light conditions in orchards, lead to fruit track breaks and yield overestimation, as noted by <cit.> and observed in the results by <cit.>. We argue that a long-term process, such as relocalization, is needed for fruit counting, providing robustness to the short-term detection association between neighboring video frames. The same was speculated by <cit.>, while <cit.> developed their landmark-based re-association method to tackle this issue. The present work is closely related to <cit.>: the 3-D information is employed to properly re-assign lost fruits using geometrical restrictions, and both works use SfM to get 3-D information. However, <cit.> use the fruits as landmarks, an interesting approach to reduce SfM complexity, but one that raises questions about SfM performance when only a few fruits (or none) are seen by the camera. In such cases, the SfM framework would lack landmarks for proper camera motion estimation. In our method, SfM is performed to get the camera's relative motion (ego-motion), and fruit localization in 3-D is performed as an independent process. The tracking method presented in our work, however, is more similar to SORT-based methods, in the sense that the assignments are performed online if ego-motion data is available, differently from the offline SfM processing in <cit.>. Our approach is complementary to the method presented by <cit.>, which assumed an unknown camera position (no ego-motion) and employed Kalman filter-based tracking to predict the trajectories of occluded or misdetected fruits. Conversely, we explore relocalization for long-term tracking, based not on appearance, but on 3-D localization estimated using ego-motion data.

The methods for yield estimation vary depending on the type of data collected or the crop. <cit.> sample trees from an orchard and select branches and segments of branches using a multi-stage systematic sampling approach. They then apply statistical methods to propose a yield estimator. On the other hand, <cit.> used machine learning and tracking to provide a direct estimation of fruits. In their case, the structure of the plants makes the fruit visible on the surface of the canopy, allowing direct counting. Direct estimation of fruits based on images was also performed by <cit.>. In our problem, direct estimation is not viable because we identified a large gap between the visible fruits and the total number of fruits obtained through manual harvesting. This gap may result from the fact that some fruit is located inside the canopy. To address this issue, we propose a neural network regressor for the yield that considers the number of identified fruits and other relevant variables, as detailed in Section <ref>. A similar approach was used in estimating mango fruit load, where <cit.> applied a correction factor to estimate the fruit load per orchard. However, unlike the regressor we are proposing, this correction factor is calculated per orchard, based on the average ratio of the number of fruits identified in images to the manual harvest count per tree in the field.

§ MATERIALS AND METHODS

Our pipeline takes video files that record both faces of a tree, one for each side of the row, in the field. It then generates an estimate of the total yield for that tree, measured in number of fruits. Figure <ref> shows a pipeline overview.
Videos are recorded by field staff (Section <ref>), and numeric ID plates are displayed in the first frames to identify each tree. The first step of the pipeline is to identify the plate and extract the tree ID t. The entire sequence of video frames is extracted from the video file, and a frame sampling procedure (Section <ref>) selects a subset. As the recording is performed with handheld cameras (smartphones) and the human camera operator slowly films the tree, several frames can be very similar because of too slow camera motion: the frame selection employs the well-known framework of feature detection and matching <cit.> to select frames presenting distinct changes, while avoiding large movements that could harm ego-motion estimation and fruit tracking. The selected frames feed a structure from motion procedure that estimates the camera parameters, including position and orientation, for each frame (Section <ref>). The selected frames are also input to a fruit detection neural network, producing a set of rectangular bounding boxes that mark the observed oranges in each frame (Section <ref>). Our fruit tracking procedure (Section <ref>) takes the camera parameters and the detected boxes in each frame to track the oranges in the frame sequence. A 3-D relocalization component deals with occlusions and fruits exiting and entering the camera FOV. The bounding boxes are grouped in tracks, defining the path that every single orange performed in the frame sequence. These tracks are not necessarily contiguous, properly dealing with long-term occlusions and fruit reappearance. The number of tracks corresponds to the number of observed fruits in the video. Finally, this number is combined with the value found for the other face of the tree and with additional data, such as tree variety, height, and plant age. Such data is input to a yield regressor (Section <ref>), which produces a final estimate of the number of fruits y_t in the tree t.

§.§ Dataset

The pipeline shown in Figure <ref> was executed for a set of more than 1,500 sweet orange trees. However, the computer vision-based components need annotated datasets for training and evaluation. Such an annotation effort, although facilitated by the new annotation tool presented in Section <ref>, could not be performed for the entire video set. A subset of 12 videos was annotated for multiple object tracking evaluation, forming the MOrangeT dataset. Part of these annotations were combined with previously annotated images to form another dataset, OranDet, built for orange detection. Such data allowed the training and evaluation of the fruit detection and tracking modules in the pipeline. Although it was not feasible to evaluate multiple object tracking on all 1,543 trees, the whole set was processed through the pipeline and forwarded to the yield regressor. This last step, however, processed only the 1,139 trees that satisfied the requirement of having at least a fruit count for one of their sides, as well as other information considered relevant, as one can see in Section <ref>. The following sections provide a more in-depth description of such data.

§.§.§ Orange Crop Forecast data

The Orange Crop Forecast, conducted by <cit.>, utilized a stratified random sampling technique to select a representative sample of 1,560 orange trees. The initial drawing involved 1,200 trees that were proportionally distributed across the citrus belt, stratified based on their region, variety, and age.
An additional drawing included 360 resets of younger ages to replace trees lost due to diseases and other causes. To obtain the sample, a fruit stripping method was employed, involving the advanced harvest of all fruits from each tree. During the stripping process in the field, fruits from different flowering periods, and consequently in different phenological stages, were sampled: green fruit close to the final size (classified as fruit from the 1st flowering period, F1); green fruit with a table tennis ball size (∼ 4.5 cm diameter, classified as fruit from the 2nd flowering period, F2); and green fruit with a marble ball size (up to ∼ 3.0 cm diameter, classified as from the 3rd period, F3) <cit.>. The fruit-stripping operation took place between March 28 and May 11, 2022, and the harvested fruits were transported to a laboratory in Araraquara, SP, Brazil. At the laboratory, the fruits were sorted into different bloom categories and quantified using automatic counting equipment before being weighed. Fundecitrus provided the ground-truth for fruit counting, tree height, age, and fruit variety for 1,543 trees.

§.§.§ Video recording

The crop forecast staff was instructed to record video sequences for each tree before the manual fruit stripping. Smartphones were employed to record videos with a resolution of 1940 × 1080 pixels. The staff was instructed to record the videos in portrait orientation, maximizing the field of view in the vertical direction. Considering the trees' density in commercial farms, it is not feasible to cover an entire tree canopy in a single pass, since the employed smartphones do not have wide-angle lenses. So, for each tree in the sample, the forecast staff recorded a video for each side of that tree, i.e., each facade facing a row of the orchard. For each side, the staff member performed a trajectory, recording first the bottom of the tree, then moving to the central part and, finally, recording the top of the canopy, all contained in a single video take, as shown in Figure <ref>. The staff works on a rigid schedule determined by the forecasting deadline, so the video sequences must be recorded under the light conditions available at that moment. If the light conditions are not ideal, for example when the sun is behind the tree of interest, the staff member has to record the video anyway. This commonly produces frames presenting direct sunlight over the camera for the higher parts of the canopy. For the same reason, the video set presents diverse exposures, caused by different sunlight and shadow patterns in the trees.

§.§.§ Frame sampling

For each video (single tree side), all frames are initially extracted using the FFmpeg library[FFmpeg – <https://ffmpeg.org>.], producing a sequence of frames ℱ = ⟨ f_1, f_2, … , f_N⟩ (see Figure <ref>). However, not all the N frames are needed for fruit detection and tracking: similar neighboring frames can be removed to reduce processing time. The method employed for frame sampling is divided into two steps. Firstly, the ORB keypoint detector and descriptor <cit.> is employed to extract features for each frame. After the first frame f_1 is added to the sample, the following procedure is iteratively repeated. Let f_i be the last frame in ℱ included in the sample. Descriptors of the frame f_i are compared to those of a frame f_j, j ∈ [i + 1, N], using the FLANN algorithm <cit.>, which produces a set of matching features between the two frames. If the number of corresponding descriptors is below a threshold, that indicates significant differences between the frames, and f_j is included in the sample. Otherwise, the spatial coordinates are used to calculate the average distance between matched keypoints. If this average distance exceeds a defined threshold (10 pixels), f_j is added to the sample. This process is repeated between the last selected frame and the subsequent frames until all the original N frames are analyzed.
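The selection loop just described can be sketched as follows. This is a minimal illustration, not our production code: a brute-force Hamming matcher stands in for the FLANN matcher, and the `min_matches` value is an illustrative placeholder (only the 10-pixel average-displacement threshold is specified above).

```python
import math
import cv2

def sample_frames(frames, min_matches=200, min_shift_px=10):
    """Select frames presenting distinct but moderate inter-frame changes."""
    orb = cv2.ORB_create(nfeatures=1000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    selected = [0]  # f_1 always enters the sample
    kp_i, des_i = orb.detectAndCompute(frames[0], None)
    for j in range(1, len(frames)):
        kp_j, des_j = orb.detectAndCompute(frames[j], None)
        if des_j is None:
            continue
        matches = matcher.match(des_i, des_j)
        if len(matches) < min_matches:
            keep = True  # few correspondences: frames differ significantly
        else:
            # average displacement (pixels) of the matched keypoints
            avg = sum(math.dist(kp_i[m.queryIdx].pt, kp_j[m.trainIdx].pt)
                      for m in matches) / len(matches)
            keep = avg > min_shift_px
        if keep:
            selected.append(j)
            kp_i, des_i = kp_j, des_j  # f_j becomes the new reference frame
    return selected
```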
After sampling, we have a sequence ℱ' = ⟨ f_1, f_2, … , f_M⟩, M < N (the remaining frames are sequentially re-indexed). On average, this sampling procedure reduces the number of frames by 50 to 70 percent.

§.§.§ Camera matrix estimation

Both our annotation tool and our fruit relocalization procedure rely on knowledge of the camera projective matrix. This matrix encodes all the geometrical information needed to model the projective process, including the camera's intrinsic parameters (focal distance, for example) and the extrinsic parameters: the camera position and orientation in three-dimensional space (ego-motion) <cit.>. For each frame f_i ∈ℱ', we need a 3 × 4 projective matrix 𝙿_i such that:

𝐱_i = 𝙿_i 𝐗,

where 𝐗 is a three-dimensional point in the scene and 𝐱_i its projection on frame f_i. The 𝙿_i matrices can be estimated from image sequences by pose graph optimization (PGO), simultaneous localization and mapping (SLAM), structure from motion (SfM) <cit.> or visual-inertial navigation (VIN) <cit.>. In this work, we have employed structure from motion, using the COLMAP tool proposed by <cit.>. Figure <ref> and Figure <ref> show an example of the camera position and orientation recovered by COLMAP for a frame sequence in the dataset, and include some tree structure: several points 𝐗 in the scene that acted as landmarks in the SfM process <cit.>. The SfM results from COLMAP are also included in the public dataset.

§.§.§ Annotation and the MOrangeT dataset for fruit tracking

As pointed out by <cit.>, multiple fruit tracking involves homogeneous objects. Identifying fruits that have been hidden from view for extended periods, either due to occlusions or being out of sight, is a challenging, error-prone task, even for humans. Even when the fruit is visible, registering a bounding box for an orange in each frame is slow and tedious work. Considering these issues, we have developed a new annotation tool that exploits the spherical shape of oranges and employs camera position information estimated by structure from motion.

The developed tool lets the users in charge of annotation draw square bounding boxes for an orange in a few frames. Using the camera projective matrices 𝙿_i for each frame, the orange's center and radius are estimated in 3-D space. The fruit is reprojected on all frames (using Equation <ref>), and the tool is able to automatically check if the fruit is in the FOV of each frame and, in the positive cases, draw a proper bounding box. The user can adjust every bounding box in every frame and, if needed, re-estimate the spherical orange in 3-D and reproject it again, refining the fruit localization. Nonetheless, the user is required to search for instances of occlusion where the orange is obstructed by leaves, branches, or other fruits, even though it is within the camera's view. This method has made the process of annotating orange tracks faster and more reliable, especially when faced with challenging relocalization instances.
The capability to modify the orange's radius in 3-D and then project it onto frames results in well-adjusted bounding boxes, which would otherwise have required laborious efforts from annotators. Figure <ref> shows the mosaic window from the custom annotation tool. In this example, the tool is displaying the views of a single orange in a 544-frame-long sequence. Each square in the mosaic corresponds to the view of that orange in the current frame, including part of its neighborhood. The orange's center and bounding box are displayed in blue. Note that the same orange enters and exits the field of view three times, entering at frames 53, 218 and 446. Furthermore, three occluded segments are observed: between frames 106–119, 218–227 (occlusion by another orange) and 508–528. We have adopted two conventions: if more than 50% of the fruit is visible, it is considered non-occluded (marked in red in Figure <ref>). Otherwise, the orange is considered occluded (marked in white). Moreover, if any part of the bounding box is out of the FOV, the orange is considered non-visible, to avoid unfairly penalizing trackers in borderline cases.

The MOrangeT <cit.> dataset is composed of 12 sequences, as shown in Table <ref>. Nine of them were recorded by smartphones connected to a gimbal for stabilization (Zhiyun Smooth 4, Guilin Zhishen Information Technology Co.). The set included trees of four orange varieties, Valencia, Natal, Pera, and Hamlin, presenting different ages and heights, from four different regions in the São Paulo citrus belt. Although there are some mature fruits, most of the oranges are green: the crop forecast program needs to perform the assessments at an early stage of the fruits' development. Note that the MOrangeT dataset is employed to evaluate multiple fruit tracking and the counting of visible fruits in videos of single trees (single side), while the Orange Crop Forecast data is used for the evaluation of yield prediction for the set of 1,543 trees.

§.§.§ OranDet dataset for fruit detection

Utilizing the camera's pose information enables the annotation of an extensive array of bounding boxes within the frames of videos in the MOrangeT dataset. We have selected four sequences, V04, V05, V07 and V11 (Table <ref>), to compose a training set for orange detection. From each frame, a set of tiles, each measuring 416 × 416 pixels, was extracted and incorporated into the training set. Given the relatively small size of fruits compared to the frame dimensions, tiling is a straightforward yet effective strategy to mitigate the risk of overlooking small objects. This process resulted in a compilation of 21,031 tiles extracted from these frame sequences, with an additional 3,065 tiles incorporated from a previous study on orange detection <cit.>. Subsequently, the dataset was partitioned into training and validation subsets. The eight remaining frame sequences were exclusively designated for a test set, comprising 33,716 tiles. Table <ref> outlines the final composition of the orange detection dataset, named OranDet, with illustrative examples depicted in Figure 4.

Within the context of the pipeline illustrated in Figure <ref>, the OranDet dataset <cit.> was instrumental in both training and evaluating CNN-based fruit detection. Our orange tracker, depicted in Figure <ref> (f) and described in Section 3.2.2, does not use any learning-based component. Consequently, the MOrangeT dataset was exclusively employed to evaluate the tracking performance.
Finally, the yield regression employs the Orange Crop Forecast data and the fruit counting from tracking to train and evaluate the yield predictions, as seen in Figure <ref> (g).

§.§ Orange detection

We trained and evaluated six different architectures for orange detection: YOLOv3 <cit.>, YOLOv6 <cit.>, YOLOv7 <cit.>, EfficientDet <cit.>, and the YOLOv5 and YOLOv8 models proposed by Ultralytics[See <https://docs.ultralytics.com/models>.]. Nowadays, there are several neural network architectures for object detection, based on CNNs <cit.> and transformers <cit.>, and the selection of architectures is not straightforward. Considering the extensive use of YOLO-based networks in fruit detection <cit.>, especially in citrus detection <cit.>, we have selected YOLOv3, the last architecture proposed by <cit.> (the authors that introduced YOLO), and a few recent YOLO-based approaches. To provide a comparative analysis, we have opted for EfficientDet as an alternative approach. We used the implementations available in MMYOLO[<https://github.com/open-mmlab/mmyolo>] and MMDetection <cit.>. All models were trained using the stochastic gradient descent (SGD) optimizer with momentum. Random flip augmentation was applied in the training pipeline for all models, and the Mosaic augmentation, introduced after YOLOv3, was used during the training of YOLOv5, YOLOv6, YOLOv7, and YOLOv8.

§.§ Multiple orange tracking

Our multiple fruit tracking method gets as input the bounding box sets ℬ_i from the orange detector and the projective matrices 𝙿_i for each frame f_i in the frame sequence ℱ_t' (see Figure <ref>). The method is based on a few components, described as follows.

§.§.§ Bounding box assignments

An essential component of the method is data association: bounding box assignment is performed between two box sets, matching boxes in a set ℬ_i to boxes in another set ℬ_j. Such association is performed by a variant of the Hungarian algorithm <cit.>, using a cost function based on intersection over union (IoU):

cost(𝐛_i^(p), 𝐛_j^(q)) = 1 - IoU(𝐛_i^(p), 𝐛_j^(q)),

where 𝐛_i^(p)∈ℬ_i, 𝐛_j^(q)∈ℬ_j, and IoU is the Jaccard index defined over the intersection area and union area between 𝐛_i^(p) and 𝐛_j^(q):

IoU(𝐛_i^(p), 𝐛_j^(q)) = J(𝐛_i^(p), 𝐛_j^(q)) = |𝐛_i^(p)∩𝐛_j^(q)| / |𝐛_i^(p)∪𝐛_j^(q)|.

Considering P bounding boxes in ℬ_i and Q boxes in ℬ_j, the assignment matrix 𝙰 is a P × Q matrix where 𝙰[p,q] = 1 iff 𝐛_i^(p) and 𝐛_j^(q) are associated, otherwise 𝙰[p,q] = 0. Assume, without loss of generality, that P ≤ Q. The association problem is a 2-D rectangular assignment minimization problem <cit.>:

𝙰^* = min_𝙰 ∑_p = 1^P ∑_q = 1^Q cost(𝐛_i^(p), 𝐛_j^(q)) 𝙰[p, q],

subject to:

∑_q=1^Q 𝙰[p,q] = 1, ∀ p
∑_p=1^P 𝙰[p,q] ≤ 1, ∀ q.

Note that 𝐛_i^(p) refers to the p-th bounding box in ℬ_i and 𝐛_j^(q) refers to the q-th bounding box in ℬ_j. For this minimization, we have employed a modified Jonker-Volgenant algorithm with no initialization <cit.>, an O(n^3) variant of the Hungarian algorithm. This implementation supports unbalanced assignments, thus working in cases where P ≠ Q: boxes in ℬ_i not assigned (lost boxes) or boxes in ℬ_j not assigned (novel boxes). As a post-processing step, we remove assignments where IoU(𝐛_i^(p), 𝐛_j^(q)) = 0 that are eventually produced by the minimization algorithm.
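A minimal sketch of this assignment step follows, assuming boxes in the [x, y, w, h] format used in this section; `linear_sum_assignment` is SciPy's implementation of the modified Jonker-Volgenant algorithm mentioned above, and the helper names are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """IoU (Jaccard index) of two boxes in [x, y, w, h] format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2])
    y2 = min(a[1] + a[3], b[1] + b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def assign(boxes_i, boxes_j):
    """Match two box sets, minimizing the total (1 - IoU) cost."""
    cost = np.array([[1.0 - iou(p, q) for q in boxes_j] for p in boxes_i])
    rows, cols = linear_sum_assignment(cost)  # rectangular assignments allowed
    # post-processing: discard pairs presenting zero overlap (cost == 1)
    return [(p, q) for p, q in zip(rows, cols) if cost[p, q] < 1.0]
```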
The assignment component is repeatedly employed in this work:

* to perform data association between box sets ℬ_i and ℬ_i+1, corresponding to detected fruits in two neighboring frames f_i and f_i+1;
* to perform relocalization, associating reprojected bounding boxes of previously seen oranges to boxes in ℬ_i, detected in the current frame f_i, and
* to evaluate results, matching detected boxes to the ground-truth annotation set ℬ̂_i for frame f_i.

§.§.§ Orange 3-D localization

The association procedure, when applied to bounding box sets from neighboring frames f_i and f_i+1, produces contiguous tracks in the form 𝒯 = ⟨𝐛_i, 𝐛_i+1, ... , 𝐛_i+n⟩ (here the superscript indexes are omitted to simplify the notation). When a contiguous track is sufficiently long (we have adopted n ≥ 5), we employ the camera matrices 𝙿_i, 𝙿_i+1, ..., 𝙿_i+n to estimate the three-dimensional position of the corresponding orange.

Algorithm <ref> takes as input a track 𝒯 and the corresponding sequence of camera matrices 𝒫 to estimate the center of the orange in 3-D space, 𝐗. The algorithm is based on two concepts: reprojection error and random sample consensus. The reprojection error is the distance between the bounding box center (line 4) and 𝐱_i, the reprojection of 𝐗 on the i-th frame (Equation <ref>). A bounding box is considered an inlier iff this distance is within a threshold, the maximum reprojection error allowed (lines 15–16). RANdom SAmple Consensus (RANSAC) is a general framework for robust estimation algorithms proposed by <cit.>: (i) a sample is randomly selected to estimate the target; (ii) the points in the full set within a distance threshold are defined as inliers (consensus); (iii) the target is re-estimated using all inliers, and (iv) the process is iterated if the number of inliers is insufficient. In Algorithm <ref>, our sample is a combination of three bounding boxes in the track (line 6) and a target 𝐗_𝒮 is estimated using the direct linear triangulation (DLT) procedure <cit.> (line 7). The reprojection is employed to define the set of inliers (lines 12–19) and, if the number of inliers is sufficient (line 20), 𝐗 is estimated from the consensus set, again using DLT. However, considering that consensus may not be achieved and that the number of combinations of three boxes (line 6) can be massive, a maximum number of iterations is defined (lines 24–27). If the maximum number of iterations is reached, or if all samples were considered with no consensus, the algorithm returns 𝐗 as Nil and an empty inliers set (no success). In Algorithm <ref>, Indexes(·) returns the frame indexes i for the bounding boxes in the set (in a valid track, there is only one bounding box per frame).

Algorithm <ref> models the orange as a sphere and estimates the fruit's center 𝐗 and radius r_3D. The three-dimensional point 𝐗 and the inlier bounding box set ℐ are computed using Algorithm <ref>. Then the radius is estimated using the dimensions of the boxes in ℐ and the similar triangles' ratio:

r_3D ≈ r_i d / f_focal,

where d is the distance between the camera center position 𝐂_i at frame f_i and the orange center 𝐗 (line 10), f_focal is the camera focal distance, and r_i the 2-D radius estimated from the bounding box dimensions (line 8). Each r_i provides an estimate of r_3D (line 11), and we use the median of the estimate set ℛ as our final estimate of the orange's three-dimensional radius. We have noted that the bounding boxes usually exceed the fruit boundaries[Note also that Equation <ref> is an approximation: even in the absence of noise, the equality would only hold if the sphere center was perfectly projected on the image's principal point <cit.>.], overestimating the radius, so we multiply the median by a constant c for a better fit (line 13; we have adopted c = 0.9). The camera center 𝐂_i is easily computed from 𝙿_i (line 9) using:

𝐂_i = -𝙼^-1𝐩_4,

where 𝙿_i = [𝙼 | 𝐩_4], i.e., the 𝙼 matrix is composed of the first three columns of 𝙿_i and 𝐩_4 is the last column of 𝙿_i.
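A minimal sketch of the DLT triangulation used in line 7 of Algorithm <ref> follows; the surrounding RANSAC loop and the radius estimation are omitted, and the function names are illustrative.

```python
import numpy as np

def triangulate_dlt(P_list, centers):
    """Direct linear triangulation of a 3-D point from 2-D observations.

    P_list: 3x4 camera matrices; centers: matching (u, v) box centers.
    For each view, x_i = P_i X yields two linear constraints on X.
    """
    A = []
    for P, (u, v) in zip(P_list, centers):
        A.append(u * P[2] - P[0])
        A.append(v * P[2] - P[1])
    # the solution is the right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

def reprojection_error(P, X, center):
    """Pixel distance between a box center and the reprojection P X."""
    x = P @ np.append(X, 1.0)
    return float(np.hypot(x[0] / x[2] - center[0], x[1] / x[2] - center[1]))
```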
In our MOT tracking for oranges, tracks can be discontinuous, meaning that the fruit is not visible in some frames because it is occluded or out of view. Note that Algorithm <ref> and Algorithm <ref> have no need for continuous tracks: they can operate on discontinuous ones as new bounding boxes are added to the tracks. Tracks can present one of two possible states in our tracking system: Lost, if the track has no bounding box in the current frame f_i, and Active, if the track presents an observed box 𝐛_i. When processing the current frame f_i, our system starts by performing relocalization, using the Hungarian association to match Lost tracks to bounding boxes in ℬ_i not yet associated with any track.

§.§.§ Relocalization

The tracks have their fruits' positions and radii estimated by Algorithm <ref>: a center 𝐗 and a radius r_3D are available for each track. Using Equation <ref>, we find the reprojection of the orange's center, 𝐱̃_i = 𝙿_i 𝐗, at frame f_i. The radius r_3D and Equation <ref> are employed to find the radius r̃_i for the orange on f_i. Finally, the reprojected bounding box is defined as:

𝐛̃_i = [x̃_i - r̃_i, ỹ_i - r̃_i, 2r̃_i, 2r̃_i],

where 𝐱̃_i = (x̃_i, ỹ_i, 1)^⊺ (homogeneous coordinates) and 𝐛̃_i is the reprojected bounding box for the track in frame f_i (the ·̃ notation is used to indicate that the values came from the orange estimation, not from orange detection). Such boxes are computed for all tracks, producing a set of boxes ℬ̃_i. The Hungarian association (Equation <ref>) is employed to match boxes from tracks in ℬ̃_i to boxes in ℬ_i not yet matched to any track. In the case a track is successfully associated with a box, its state is changed to Active.

§.§.§ Next frame association

After relocalization, the tracking system employs the Hungarian association to match boxes 𝐛_i^(p)∈ℬ_i to boxes 𝐛_i+1^(q)∈ℬ_i+1. If the association algorithm matches 𝐛_i^(p)→𝐛_i+1^(q), and 𝐛_i^(p) belongs to an Active track 𝒯, the box 𝐛_i+1^(q) is appended to 𝒯. Otherwise, a novel track 𝒯' = ⟨𝐛_i^(p), 𝐛_i+1^(q)⟩ is created. After association, tracks not matched to any box 𝐛_i+1^(q) change to the Lost state, while all the remaining Active tracks presenting more than 5 bounding boxes have their oranges re-estimated using Algorithm <ref>. The entire procedure (relocalization followed by next frame association) is repeated for each frame until f_M-1.

In summary, contiguous tracks presenting at least 5 boxes initialize 3-D spherical models for oranges, parameterized by a central point and a radius. After this initialization, the fruit is eligible for relocalization: if the bounding box association fails for any reason, producing Lost tracks, the reprojection of the spherical model on the current frame can be employed to relocalize the fruit, associating the reprojected box with a detected one. After relocalization, the track is Active again and can be updated with newly detected boxes. Such a track is now discontinuous, registering where (bounding box) and when (frame) the fruit is visible. Furthermore, the orange model (central point and radius) is continuously refined as new detections are added to the track. Only the tracks with successfully estimated 3-D orange models are considered for fruit counting.
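The reprojection geometry used in relocalization can be sketched as follows, assuming P is the 3 × 4 matrix 𝙿_i as a NumPy array and f_focal is the focal distance in pixels recovered by SfM; the function name is illustrative.

```python
import numpy as np

def reproject_box(P, X, r3d, f_focal):
    """Reproject a spherical orange (center X, radius r3d) onto a frame.

    Inverts the similar-triangles relation r3d ~ r_i d / f_focal to get
    the 2-D radius; the box format is [x, y, w, h], as in the text.
    """
    M, p4 = P[:, :3], P[:, 3]
    C = -np.linalg.inv(M) @ p4        # camera center: C = -M^{-1} p_4
    d = np.linalg.norm(X - C)         # camera-to-fruit distance
    r = r3d * f_focal / d             # projected 2-D radius
    x = P @ np.append(X, 1.0)         # homogeneous projection of the center
    u, v = x[0] / x[2], x[1] / x[2]
    return [u - r, v - r, 2 * r, 2 * r]
```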
§.§.§ Implementation

The tracker was implemented in Python 3. The 2-D rectangular assignment implementation employed is from SciPy's optimization module <cit.>. The DLT algorithm was implemented using the singular value decomposition <cit.> available in SciPy's linear algebra module.

§.§.§ Tracking evaluation

<cit.> presented a sensitivity analysis aimed at evaluating the performance of a fruit tracker under varying probabilities of fruit detection. The analysis begins with perfect detection from the ground-truth data, where all fruits are identified accurately, and then progressively eliminates certain bounding boxes. Each bounding box in the ground-truth of frame f_i draws a random value from a uniform distribution within the range [0, 1]. Based on this value, each bounding box is kept or removed from the detection set ℬ_i. Specifically, if the random value of a fruit is below or equal to a pre-defined threshold, it is considered detected; otherwise, it is discarded. Similarly to Villacrés et al., in the present work we evaluated different values of the probability of detection, using 0.4, 0.6, 0.8, and 1 as thresholds (a threshold of 1 equals "perfect detection", a copy of the ground-truth bounding box data). The tracker is also evaluated using the detections from the CNN-based orange detection module presented in Section <ref>. The evaluation presents values for HOTA <cit.> and MOTA <cit.>. We also include in the evaluation two components of HOTA: DetA, the percentage of aligning detections, and AssA, the average alignment between matched trajectories. Appendix <ref> presents mathematical definitions for HOTA, DetA and AssA, adapting the original <cit.> formulation to the notation adopted in this work.
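A minimal sketch of this detection-degradation procedure follows; the fixed seed is an illustrative choice for reproducibility.

```python
import random

def degrade_detections(gt_boxes_per_frame, rate, seed=42):
    """Emulate an imperfect detector by randomly dropping ground-truth boxes.

    Each box draws a uniform value in [0, 1] and is kept as "detected"
    iff the value is <= rate (rate = 1 reproduces the ground-truth).
    """
    rng = random.Random(seed)
    degraded = []
    for boxes in gt_boxes_per_frame:
        degraded.append([b for b in boxes if rng.random() <= rate])
    return degraded

# detection-probability thresholds evaluated in the sensitivity analysis
rates = [0.4, 0.6, 0.8, 1.0]
```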
§.§ Yield regressor

After the fruit detection and tracking stages, we have all the visible fruits in a tree identified and mapped in 3-D. However, some fruits may not have been identified or tracked due to problems in video acquisition or because they are located inside the canopy. To address this problem, we propose a regressor that takes the number of fruits on each side of the plant as input and estimates the actual number of fruits on that plant. This estimator can be based on statistics, as proposed in <cit.>, in which the authors consider statistical differences in each orchard to propose a correction factor. Alternatively, it can be implemented as a machine learning model. In our case, we use a machine learning model that learns an implicit representation accounting for the impact of other variables, such as variety, group of variety, dimensions of the plant, region, and sector, to propose an estimated number of fruits.

§.§.§ Data used for regression

Although we received raw data for 1,543 trees, only 1,197 trees could be successfully processed in the previous stages of the pipeline. Trees with ill-recorded videos fail in the SfM part of the pipeline and do not follow through the tracking and yield estimation steps. Table <ref> shows the first four lines of a regression data table. Tracking generated the last two columns, CbyT-A and CbyT-B (counting by tracking), which correspond respectively to the fruits automatically counted on sides A and B of the tree. These counting columns can change according to the tracker employed, for example, a tracking using YOLOv3-based detections or counting results from a YOLOv5-based one. Such counting values were aggregated with the original data to enrich the information provided to the regressor. However, not all 1,197 records could be used for training the regressor, as we had to exclude plants presenting missing data and those without fruit counting from tracking for either side.

§.§.§ Data preprocessing

Before submitting data to the model, it is necessary to perform some operations to adapt these data to the processing dynamics of the artificial neural network. The following operations were executed over the dataset:

Transformation of categorical variables: neural networks only accept numerical values as inputs to their neurons, which means that categorical variables must be transformed into numerical values. For example, the value Hamlin, which is one of the possible values for the variable Variety, has to be converted to a number. One of the possible methods is One Hot Bit Encoding <cit.>, which creates a new column for each possible value of the variable. The column Hamlin, for instance, receives the value 1 whenever a sample is classified with that variety. This procedure is executed for all categorical variables.

Standardization of numerical values: to facilitate convergence to the optimal training point, numerical values should be standardized. Variables with very discrepant scales, such as one ranging over [0–100,000] and another over [-1.0–1.0], can cause problems because the gradients calculated during backpropagation will have very different effects when applied to different values. One solution for this problem is to apply standardization using the formula 𝐮_i = (𝐯_i - 𝐯̄)/σ_𝐯, where the vector 𝐮_i contains the standardized values of 𝐯_i, 𝐯̄ is the mean value of the vectors 𝐯_i and σ_𝐯 its standard deviation.
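The two operations can be sketched as follows, with placeholder column names; `get_dummies` is one possible stand-in for the one-hot encoding, and, in practice, the standardization statistics should be computed on the training split only.

```python
import pandas as pd

def preprocess(df, categorical, numerical):
    """One-hot encode categorical variables and standardize numerical ones."""
    # one column per category value, e.g. Variety -> Variety_Hamlin, ...
    df = pd.get_dummies(df, columns=categorical)
    # standardize: u = (v - mean) / std
    for col in numerical:
        df[col] = (df[col] - df[col].mean()) / df[col].std()
    return df

# illustrative usage with hypothetical column names
# data = preprocess(data, categorical=["Variety", "Region"],
#                   numerical=["H", "W", "D", "CbyT-A", "CbyT-B"])
```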
§.§.§ Machine learning regression model

We tried several machine learning algorithms for the regression problem, such as Support Vector Machines <cit.>, Bagging <cit.> and Gradient Boosting <cit.>. None of them, however, performed better than a multilayer feed-forward neural network <cit.>.

Neural networks are highly flexible in their assembly. For instance, one can create a neural network with just a single layer, and it is still able to solve a problem. However, tasks such as image classification usually require dozens or even hundreds of layers. This flexibility is also a weakness, as it necessitates searching through a wide range of configurations to find the most suitable one for a specific problem <cit.>. A common heuristic approach is to gradually increase the number of layers and neurons and evaluate how the results converge towards the desired outcome. Once the parameter range with the best results is identified, it can be further refined. In the case of the yield regressor, the neural network should not be excessively complex due to the amount of data available. If a neural network has too many parameters, it may suffer from overfitting, where it learns details of the training dataset and performs poorly on the test data as compared to its training performance.

In our path to find the best network configuration, we relied on cross-validation, a widely used technique in machine learning, to statistically assess whether one computational experiment performs better than another <cit.>. Cross-validation involves running the same algorithm multiple times, allowing for more reliable performance statistics and a certain level of confidence. In our research, we divided the training dataset into ten parts, also known as folds. The process entails training the algorithm on nine folds while reserving the tenth fold for evaluation. This process repeats until all ten folds have been used as the test set. To determine the level of certainty regarding the superiority of a model, we employed a statistical hypothesis test. Since we had only ten folds, the assumption that the data follow a normal distribution does not hold: the normal distribution is typically reserved for datasets larger than 30 elements with known variance <cit.>. Instead, we used Student's t-distribution for our set of ten performance measurements. We set a p-value threshold of 0.05 to ensure that we select a model with 95% confidence.

We conducted experiments with neural networks comprising one to six layers, varying the number of neurons per layer from seven to 28. The last layer of the regressor consists of a single neuron, which is responsible for the calculation of the yield estimate. Among the tested architectures, the one presented in Table <ref> demonstrated the best results, as will be discussed in the next section. The model was implemented in Python, using the Keras API[<https://keras.io/about>.], which is built on top of the TensorFlow™ platform.
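A minimal sketch of such a regressor follows. The hidden layer sizes, optimizer and loss below are illustrative placeholders within the searched range, not the exact configuration of Table <ref>.

```python
from tensorflow import keras

def build_regressor(n_features, hidden=(28, 14, 7)):
    """Feed-forward yield regressor with a single output neuron."""
    model = keras.Sequential()
    model.add(keras.layers.InputLayer(input_shape=(n_features,)))
    for n in hidden:  # placeholder depths within the 7-28 neuron range
        model.add(keras.layers.Dense(n, activation="relu"))
    model.add(keras.layers.Dense(1))  # single neuron: estimated fruit count
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

# illustrative usage, assuming preprocessed arrays X_train and y_train
# model = build_regressor(n_features=X_train.shape[1])
# model.fit(X_train, y_train, epochs=200, validation_split=0.1)
```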
§ RESULTS

§.§ Tracking sensitivity to detection

In the sensitivity analysis, tracking is performed by the algorithms presented in Section <ref>, with no component based on machine learning: all boxes come from the ground-truth, with random removal of some boxes to emulate detection components presenting different detection rates. Table <ref> shows HOTA, DetA[Note that, for a 100% detection rate, seeing DetA values below 1.0 sounds counterintuitive: readers should consider that the assignments between detections and ground-truth are defined by the association matrices 𝙰̂^(α) that maximize HOTA <cit.>, so association errors could produce some false positives and false negatives in Equation <ref>, see Appendix <ref>.] and AssA results for four different rates. The table also displays the MOTA metric for comparison, considering that a few works <cit.> employed that metric. The CbyT (counting by tracking) value is the fruit count, corresponding exactly to the number of (possibly discontinuous) tracks found by our MOT system, while CbyT-GT is the ground-truth value. The last column displays the relative error: the ratio of the absolute error to the ground-truth value.

Perfect fruit detection does not imply perfect tracking and counting: association errors induced by occlusions, as seen in Figure <ref>, cause tracking errors. However, the tracking performance is high (HOTA and MOTA above 0.9), resulting in accurate counting: a 2.34% relative error in the counting of visible fruits for a 100% detection rate (considering individual trees, the median of the relative error is 1.23%). But accurate counting can be reached even under imperfect detections: an 80% detection rate could reach a 3.01% relative error (median 2.20%). As expected, low detection rates severely damage counting: a 60% detection rate implied a 14.44% error in counting (median 12.35%), while a 40% rate reached an error of 55.93% (median 55.77%).

Figure <ref> shows tracking examples for two videos, V07 and V12. The tracking was performed assuming an 80% detection rate. When the orange position and radius have been estimated by Algorithm <ref>, the numeric ID of the track (fruit) is displayed; tracks not yet presenting a successful estimation are marked with a Nil label. Lost tracks are displayed as white, dashed boxes. Note that, at each frame, there is a significant number of Lost tracks, but the system keeps projecting them on each frame at their expected locations. A relocalization example can be seen for track 94 in V07: the track is lost at frames f_260 and f_265 because of occlusion, but the track became Active at f_270. Examples of new track creation and orange estimation can be seen for fruits 95, 96 and 97, again in video V07. The three-dimensional positions estimated for each orange in V07 are shown in Figure <ref>.

The importance of relocalization is highlighted in Table <ref>, which presents sensitivity results for both 100% and 80% detection rates, but without incorporating relocalization. In the case of multiple orange tracking, effectively managing occlusions or re-entering fruits is crucial. Without proper handling of these scenarios, the generated counting becomes unstable and unusable, as observed in the relative error column. A comparison between the results in Table <ref> and Table <ref> emphasizes the necessity of long-term tracking for accurate counting. Even a more advanced tracker that incorporates motion estimation would struggle to handle occlusions and re-entering objects without a dedicated long-term association component <cit.>.

§.§ Detection, tracking and counting

Table <ref> shows the detection results for seven different orange detection models. The YOLOv5, YOLOv6, YOLOv7, and YOLOv8 networks can use different backbone sizes. We have tested the small and large versions of these networks, the latter producing the best results, so only the large (l) models are shown. Two different backbone sizes are also tested for EfficientDet: the B0 (baseline) and B3 backbones <cit.>. The results in Table <ref> consider a minimum score of 0.5 for the orange class and an IoU of 0.5 between predicted boxes and ground-truth. The values are not an average over images in the test set: they were computed considering the entire set of 121,685 bounding boxes (oranges) in the OranDet test set, being an assessment of all fruit misses and false positives.

As shown in Table <ref>, YOLOv5l, YOLOv8l and YOLOv7 presented close results considering the F_1-score, followed by YOLOv3. However, there is a slight difference in their behavior: the YOLOv3 and YOLOv8l networks present better precision, a lower number of false positives, while the YOLOv5l and YOLOv7l models show better recall, missing fewer fruits. Note that the sensitivity test presented in Section <ref> does not consider any level of false positives in the detections, nor any misalignment between boxes in the ground-truth and in the predictions. The different false positive rates of the detection models and their localization accuracy can affect the tracking.

Table <ref> shows tracking results for different orange detection models. For each frame, tiling with overlap is employed: the frame is split into 24 tiles of 416 × 416 pixels, with overlaps of 82 pixels. The tiles are stacked in a single batch and submitted to the fruit detection CNN. The detection results are merged ("untiling") and non-maximum suppression (NMS) is applied (we have adopted 0.2 as the IoU threshold for NMS). We have also filtered detections by score, testing three different score thresholds (0.5, 0.6 and 0.7).
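The tiling step can be sketched as follows; the exact tile grid (24 tiles in our setting) depends on the frame dimensions and on the stride policy adopted, so the helpers below illustrate one simple choice with hypothetical names.

```python
def tile_origins(size, tile=416, overlap=82):
    """Top-left offsets so that `tile`-px tiles with `overlap`-px overlaps
    cover one image dimension of length `size`."""
    stride = tile - overlap
    xs = list(range(0, max(size - tile, 0) + 1, stride))
    if size > tile and xs[-1] + tile < size:
        xs.append(size - tile)  # clamp a final tile to the border
    return xs

def split_into_tiles(frame, tile=416, overlap=82):
    """Return (tile, offset) pairs; detections made on a tile must be
    shifted back by its offset before the global NMS ("untiling") step."""
    h, w = frame.shape[:2]
    return [
        (frame[y:y + tile, x:x + tile], (x, y))
        for y in tile_origins(h, tile, overlap)
        for x in tile_origins(w, tile, overlap)
    ]
```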
Table <ref> shows tracking results for the different orange detection models. For each frame, tiling with overlap is employed: the frame is split into 24 tiles of 416 × 416 pixels, with overlaps of 82 pixels. The tiles are stacked in a single batch and submitted to the fruit detection CNN. The detection results are merged (“untiling”) and non-maximum suppression (NMS) is applied (we have adopted 0.2 as the IoU threshold for NMS). We have also filtered detections by score, testing three different score thresholds (0.5, 0.6 and 0.7). Table <ref> displays the five best tracking results, considering the score threshold that produced the best result for each detection model. The results for each frame sequence in MOrangeT for the two best models (YOLOv5l and YOLOv3) are shown in Table <ref>. It is noteworthy that when the tracking results are integrated, i.e., when the 1,198 fruits/tracks are considered, the counting error rates are low. Figure <ref> shows tracking examples for two videos, V07 and V12, the same videos previously seen in Figure <ref>, for comparison. The tracking was performed using detections from the YOLOv5l model. Again, when the orange position and radius were estimated by Algorithm 2, the numeric ID for the track (fruit) is displayed, and tracks not yet presenting a successful estimation are marked with a Nil label. Lost tracks are displayed as white, dashed boxes. A relocalization example can be seen for track 111 in V07: the track is lost at frames f_260 and f_265, but it became Active again at f_270. A compilation of the results in a video demonstration is available online[<https://youtu.be/hOq42KMskLQ>]. Perfect tracking implies exact counting and a HOTA value equal to 1. As seen in Table <ref>, high HOTA (or MOTA) values are associated with low errors in counting. However, we can observe low errors in counting even for mediocre values of HOTA (or MOTA), around 0.5, as seen in Table <ref>. Actually, multiple fruit tracking is a harder problem than fruit counting. If the pipeline can track a fruit in part of its trajectory and does not merge tracks of different fruits, it can reach accurate counting values. But losing parts of the fruits' trajectories will severely penalize the tracking quality measures. If the tracker is losing part of the trajectories, it is missing the orange appearances in some frames. Figure <ref> shows histograms for four frame sequences in MOrangeT, considering the number of appearances of each fruit in the ground-truth (orange-colored histogram) and in the tracker results (blue-colored histogram). More “mass” in bins on the right side means the fruits are observed non-occluded in more frames. Comparing the ground-truth against the tracking results (using YOLOv5l detections in this example), we can see that our pipeline missed parts of the tracks, even while achieving accurate counting: the relative counting errors are 0.91%, 3.81%, 3.65% and 0.87% for V01, V04, V08 and V12, respectively, but the missed appearances degrade the HOTA and MOTA values. Our pipeline can keep fruits in the Lost state, eligible for relocalization, so the oranges, even if missed in some frames, are properly counted. Figure <ref> shows an example where motion blur jeopardized detection: the fruits are kept in the Lost state, but HOTA and MOTA values decrease because the ground-truth considers them visible oranges (note how part of the fruits are properly relocalized in the next frame).
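For reference, the tiled inference described at the beginning of this subsection can be sketched as below, reusing the `iou` helper from the evaluation sketch; the frame representation and the `detector` and `shift` calls are placeholders, not our exact implementation:

TILE, OVERLAP = 416, 82
STRIDE = TILE - OVERLAP  # 334 pixels between tile origins

def tile_frame(frame):
    """Split a frame into overlapping TILE x TILE crops, keeping each
    crop's top-left offset so boxes can be shifted back to frame
    coordinates ("untiling"). Edge tiles may be smaller and would be
    padded in practice."""
    h, w = frame.shape[:2]
    return [(frame[y:y + TILE, x:x + TILE], (x, y))
            for y in range(0, h - OVERLAP, STRIDE)
            for x in range(0, w - OVERLAP, STRIDE)]

def merge_and_nms(dets, iou_thresh=0.2):
    """Greedy NMS on merged detections. `dets` is a list of (box, score)
    pairs with boxes already shifted to frame coordinates; a box is
    discarded if it overlaps an already-kept, higher-scored box with
    IoU >= iou_thresh."""
    kept = []
    for box, score in sorted(dets, key=lambda d: -d[1]):
        if all(iou(box, k) < iou_thresh for k, _ in kept):
            kept.append((box, score))
    return kept

# tiles = tile_frame(frame)
# dets = [(shift(box, off), s) for crop, off in tiles
#         for box, s in detector(crop)]   # detector() is a placeholder
# final = merge_and_nms(dets, iou_thresh=0.2)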
§.§ Yield regressor results

The sample analyzed consisted of the 1,197 plants that resulted from the previous steps of the pipeline. However, some plants had to be excluded due to missing plant dimensions (W, H or D in Table <ref>) or lack of automatic fruit counting, resulting in a reduced set of 1,139 plants. The regressor was specifically designed to estimate the sum of fruits from the first to the third flowering, namely F1 + F2 + F3, as the fruits from the fourth flowering are too small to be detected in the previous stages of the pipeline. Consider the counting from our tracking system fed by YOLOv3 orange detections. Out of the 1,139 plants, 911 were used for training the neural network regressor, and the remaining 228 were reserved for testing. The results are presented in Figure <ref>, where we can observe a significant dispersion of points in the graph. This dispersion affected the overall performance, resulting in an R^2 value of 0.61. Notably, there is an isolated point in the lower part of the graph where the regressor estimated around 50 fruits, while the ground truth falls within the range of 1,250 to 1,500. Conversely, there are two isolated points at the top of the graph where the regressor overestimated the values. For these points, the regressor estimated values between 1,500 and 1,750, while the actual values are around 700 for the left point and around 1,200 for the right point. During our analysis of the full dataset, which includes the fruit counting from videos captured on each side of the plant (CbyT-A + CbyT-B), the ground truth (F1 + F2 + F3), and the yield estimated by the regressor, we observed that a significant number of samples had a much lower number of counted visible fruits (CbyT-A + CbyT-B) compared to the sum F1 + F2 + F3. In fact, most of these samples lie below the 40% detection rate mentioned in Table <ref>. This analysis led us to develop the hypothesis that implementing an acceptance threshold for the videos, based on the ratio between detected and ground-truth fruits, could potentially improve the results of the regressor. To test this hypothesis, we established thresholds for the acquisition process. Initially, we considered only videos with a minimum identification rate of 20% of the ground-truth fruits. This new experiment reduced the dataset to 741 plants, with 148 plants reserved for testing purposes. The results of this experiment showed a significant improvement, with the R^2 value increasing to 0.79, as illustrated in Figure <ref>. We can also observe a reduced dispersion around the diagonal line, indicating better performance. This improved R^2 value suggests that our neural network regressor was able to explain a greater portion of the variance in yield estimation when considering this reduced dataset. Furthermore, when we raised the threshold to require the detection of at least 30% of the ground truth (still below the 40% in Table <ref>), the R^2 value further improved to 0.85, as shown in Figure <ref>. This new experiment reduced the dataset to 558 trees, with 98 plants reserved for testing purposes, as seen in Figure <ref>. These results clearly indicate that the performance of the regressor is closely tied to the quality of the machine learning identification and fruit counting results. It is important to acknowledge that it is not possible to identify 100% of the fruits through an automatic counting process, as some fruits are located inside the canopies, not visible to the camera from any pose.
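The acceptance-threshold experiment above can be summarized in a short sketch. The column names, hidden layer widths, split sizes, and training epochs below are illustrative assumptions, not the exact configuration of Table <ref>:

import pandas as pd
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from tensorflow import keras

def filter_by_acceptance(df, min_ratio):
    """Acceptance threshold: keep plants whose counted visible fruits
    (both sides) reach `min_ratio` of the true yield F1 + F2 + F3.
    Column names are hypothetical."""
    true_yield = df["F1"] + df["F2"] + df["F3"]
    counted = df["CbyT_A"] + df["CbyT_B"]
    return df[counted / true_yield >= min_ratio]

def build_regressor(n_features, hidden=(14, 14)):
    """MLP regressor ending in a single neuron for the yield estimate;
    the hidden widths here are placeholders."""
    layers = [keras.Input(shape=(n_features,))]
    layers += [keras.layers.Dense(n, activation="relu") for n in hidden]
    layers += [keras.layers.Dense(1)]
    model = keras.Sequential(layers)
    model.compile(optimizer="adam", loss="mae")
    return model

# kept = filter_by_acceptance(df, 0.30)                 # the 30% threshold
# y = kept["F1"] + kept["F2"] + kept["F3"]
# train_X, test_X, train_y, test_y = train_test_split(
#     kept[feature_cols], y, test_size=0.2, random_state=0)
# model = build_regressor(len(feature_cols))
# model.fit(train_X, train_y, epochs=200, verbose=0)
# print(r2_score(test_y, model.predict(test_X)))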
We observe the same behavior when considering the counting produced using tracking from YOLOv5l detections. In this case, requiring the detection of at least 30% of the ground truth, we obtained a set of 668 trees. Of these, 557 were chosen for training the regressor, resulting in an R^2 value of 0.82 for the test set, which comprised the remaining 111 trees. A scatter plot of this result is seen in Figure <ref>.

§ DISCUSSION

The complex nature of the natural environment, coupled with the variability in fruit farming practices, continues to pose significant challenges for automating both monitoring and harvesting. As noted by <cit.>, achieving automated operations in complex agricultural fields requires a synergistic approach that combines engineering and standardization of agricultural management practices. Such integration is crucial to enable future automated operations and management. It is essential that breeders and agronomists adapt the plants for automated operation. For example, the results by <cit.> show the clear advantages of systems like fruiting walls for automated counting and yield estimation. Our pipeline reached significant results across a diverse set of orchard management practices, using hand-held video streams recorded by smartphones. We expect even better results in more controlled settings, such as fruiting walls and artificial lighting in night operations, and by employing improved hardware like wide-angle lenses and visual-inertial cameras. The image capture procedure employed in this work is coupled with the distributed nature of the Fundecitrus' Orange Crop Forecast, performed by 30 field agents covering a large territory. Video recording by smartphones was a practical, cheap solution, adopted after other designs involving UAVs and camera arrays were considered and preliminary tests were performed in the field. The Crop Forecast employs a well-established methodology <cit.>, relying on a large set of individual trees rather than the scanning of entire rows or plots. One challenge arising from this tree-based approach is the difficulty in visually defining the boundaries of neighboring trees, leading to branches from one tree entering the canopy of another. While the Crop Forecast team can manage this issue through manual fruit stripping, enabling accurate attribution of oranges to the correct tree (albeit a laborious task for the forecast staff), a vision-based system would find error-free fruit-to-tree assignment impractical. Other approaches, such as those integrating row-level or plot-level data and utilizing vehicle-mounted cameras, as demonstrated by <cit.>, could be considered. In row-level counting, exact fruit-to-tree attribution would not pose an issue. Despite the large number of works on fruit detection and the impressive results reached by neural network-based systems, fruit detection is still challenging. Even considering large datasets containing a representative set of images, with diversity in light conditions, sensors, noise, crop varieties and phenological stages, detection performance can still be significantly improved. However, accurate detection is necessary but not sufficient for accurate fruit counting, mainly because of occlusion issues. Better detection models depend on large annotated datasets for training and evaluation, but the scarcity of such datasets remains a key bottleneck in developing the next generation of intelligent systems for precision agriculture, as pointed out by <cit.>.
We hope that MOrangeT and OranDet will be significant contributions in this regard. Kalman filter-based trackers <cit.> can be considered an alternative to handle short-term fruit loss, but they are not as efficient as relocalization for long occlusion periods and changes in the movement direction. Our relocalization module can be considered an alternative to the motion displacement estimation proposed by <cit.>, especially useful for non-linear movements and non-planar fruit spatial distributions. <cit.> argues that SORT-like tracking <cit.> cannot handle dense fruit spatial distributions and heavy overlap. We have shown that relocalization based on 3-D positions is a viable alternative to complement inter-frame fruit tracking by data association algorithms. Full 3-D reconstruction is not needed, just ego-motion, which can be obtained by other methods like visual-inertial odometry, benefiting from new odometry-capable camera hardware that has recently become commercially available. As seen in Table <ref>, some sequences present large relative errors. Consider sequences V09 and V10, which show two faces of the same small orange tree under a challenging setting: wind is shaking the branches, occlusion by leaves is severe, and the oranges are at a more mature stage than the examples in the training set. Considering that fewer than 20 oranges are visible, a few missing oranges in these examples represent a large relative error. In this challenging setting, both the short-term tracking component, inter-frame association, and the long-term tracking component, 3-D relocalization, are jeopardized. It is noteworthy that when the tracking results are integrated, i.e., when the 1,198 fruits/tracks are considered, the counting error rates are low. This is especially important considering that a yield prediction system must integrate several trees for accuracy. Despite large errors regarding single trees, the results point to tracker stability when considering a larger number of oranges. Multiple fruit tracking is a harder problem than fruit counting. Seeking fruits in images, accurately identifying them, and evaluating when they are visible/reachable and when they are not is a challenge. Our reported values for HOTA and MOTA, and the results reported by <cit.> and <cit.>, indicate that MOT research in orchards still needs significant improvements to achieve high tracking rates, above 0.9. However, accurate fruit counting can be achieved before highly accurate MOT. It is noteworthy that fruit tracking in the field is a challenging task even for humans. Our yield regression confronts a significant challenge: understanding the relationship between observable and unobservable fruits. <cit.> delved into a similar aspect concerning mango trees. Initially, our yield regression displayed a modest prediction capability, with R^2 = 0.61. However, ensuring that a minimum of 30% of the fruits were accurately identified by the computer vision-based counting system elevated the R^2 values beyond 0.80. Yet, the validity of assuming that at least 30% of the yield is visible in the canopy, amenable to vision-based counting, warrants scrutiny.
Is this assumption realistically grounded? We anticipate that sweet orange trees predominantly bear fruit in the outer or upper regions of the canopy due to enhanced sunlight exposure. Upon scrutinizing the MOrangeT dataset, compiled from high-quality video recordings, it became evident that in most sequences the oranges identified by annotators corresponded to 30 to 50% of the true yield (F1 + F2 + F3) obtained through fruit stripping. An exception was noted in sequence V03, where only 12.5% of the fruits were discerned by annotators. This serves as evidence that low-quality video input significantly contributed to errors, and that accurate regression from visible fruits holds promise. Moreover, this underscores the need for advancements in image acquisition systems, potentially incorporating solutions like artificial lighting during night operations and employing high dynamic range image sensors.

§ CONCLUSION

This work proposes a complete pipeline for yield estimation in citrus trees, comprising an imaging methodology for fruit detection, multiple orange tracking, and yield regression integrating fruit counting with other tree data such as size and age. Exploiting ego-motion data, i.e., camera pose information, we were able to create (i) a practical process for ground-truth annotation, and (ii) a relocalization module for multiple fruit tracking, able to deal with long-term occlusions and fruit entering and exiting the camera's field of view. Framed as a multiple object tracking (MOT) problem, fruit counting was developed and evaluated utilizing established MOT metrics such as MOTA and HOTA. A useful intermediary result is the 3-D fruit localization, which could be employed in other tasks and analyses, such as robotic harvesting. Orange trees under standard crop management practices will present non-visible fruit, hidden in the deeper parts of the canopy. A yield regressor that takes into account tree size, age, and crop variety, in addition to image-based fruit counting data, was developed and validated considering a large set of trees (> 1,000) from a real crop forecast effort in one of the largest sweet orange production sites in the world. In this challenging scenario, our pipeline was able to reach an R^2 of up to 0.85 against the true yield from fruit stripping. Even higher values of R^2 could be reached with better quality video input. Future prospects for this research involve adapting the pipeline for deployment in vehicles and exploring contemporary deep learning-based models tailored for object tracking in image sequences.

§ CREDIT AUTHORSHIP CONTRIBUTION STATEMENT

Thiago T. Santos: Methodology, Software, Validation, Formal Analysis, Investigation, Data Curation, Writing – Original Draft, Visualization. Kleber X. S. de Souza: Methodology, Software, Validation, Formal Analysis, Investigation, Data Curation, Writing – Original Draft, Visualization. João Camargo Neto: Methodology, Software, Formal Analysis, Investigation, Data Curation, Writing – Review. Luciano V. Koenigkan: Methodology, Software, Formal Analysis, Investigation, Data Curation, Writing – Review. Alécio S. Moreira: Data Curation, Writing – Review and Editing. Sônia Ternes: Project administration, Methodology, Investigation, Validation, Data Curation, Writing – Review and Editing.

§ ACKNOWLEDGMENTS

This work was supported by the Brazilian Agricultural Research Corporation (Embrapa) under grant 10.18.03.016.00.00. T. T. Santos is partially funded by FAPESP (grants 2017/19282-7 and 2022/09319-9).
We thank the PES/Fundecitrus team for providing video, counting ground-truth and other plant data.

§ HOTA

Consider an assignment matrix 𝙰̂_i^(α)[p,q] that matches bounding boxes 𝐛_i^(p) ∈ ℬ_i to ground-truth boxes 𝐛̂_i^(q) ∈ ℬ̂_i, the set of annotated bounding boxes for frame f_i, ensuring that IoU(𝐛_i^(p), 𝐛̂_i^(q)) ≥ α. The assignments are one-to-one: a box 𝐛 can be assigned to at most one box 𝐛̂ in the ground-truth and vice versa. As seen in Section <ref>, 𝙰̂_i^(α)[p,q] = 1 in the case of assignment and zero otherwise. Let c = ⟨𝐛_i^(p), 𝐛̂_i^(q)⟩ be a compact representation for an assignment. The set of true positives TP_i for frame f_i is defined as:

TP_i^(α) = { c = ⟨𝐛_i^(p), 𝐛̂_i^(q)⟩ | 𝙰̂_i^(α)[p,q] = 1 }.

Integrating the true positives for all M frames, we have the set

TP = ⋃_i=1^M TP_i.

Similarly, the false negatives set FN_i for f_i and the set of all false negatives FN are defined as:

FN_i = { 𝐛̂_i^(q) | ∑_p 𝙰̂_i^(α)[p,q] = 0 }

(bounding boxes in the ground-truth not matched to any detected box) and

FN = ⋃_i=1^M FN_i.

Finally, the false positive set FP_i for f_i and the set of all false positives FP are defined as:

FP_i = { 𝐛_i^(p) | ∑_q 𝙰̂_i^(α)[p,q] = 0 }

(detected boxes not assigned to any box in the ground-truth) and

FP = ⋃_i=1^M FP_i.

Assume gtID(·): ⋃_i ℬ̂_i → ℕ is a function that maps boxes in the ground-truth to numeric identifiers, and prID(·): ⋃_i ℬ_i → ℕ a function that does the same for detected boxes. To evaluate tracking in HOTA, <cit.> proposed three novel concepts. For a matching c, the true positive associations set TPA(c) is composed of all true positives c' that present the same IDs for prediction and ground-truth as c:

TPA(c) = { c' ∈ TP | prID(c') = prID(c) ∧ gtID(c') = gtID(c) }.

The false negative associations set FNA(c) is composed of matches that present the same ground-truth identifier gtID(c) as c, but detections attributed to different IDs, plus the set of false negatives also identified by gtID(c):

FNA(c) = { c' ∈ TP | prID(c') ≠ prID(c) ∧ gtID(c') = gtID(c) } ∪ { 𝐛̂ ∈ FN | gtID(𝐛̂) = gtID(c) }.

The last concept is the false positive associations set FPA(c), composed of matches that present the same detected identifier prID(c), but whose ground-truth presents different identifiers, plus all false positives also identified by prID(c):

FPA(c) = { c' ∈ TP | prID(c') = prID(c) ∧ gtID(c') ≠ gtID(c) } ∪ { 𝐛 ∈ FP | prID(𝐛) = prID(c) }.

Figure <ref> illustrates the three kinds of association sets. Finally, HOTA_α can be defined as

HOTA_α = √(∑_c ∈ TP 𝒜(c) / (|TP| + |FN| + |FP|)),

where 𝒜(c) measures the alignment between predicted and ground-truth tracks:

𝒜(c) = |TPA(c)| / (|TPA(c)| + |FNA(c)| + |FPA(c)|).

Luiten et al. call this a double Jaccard formulation: the Jaccard metric is employed in the evaluation of detection, with the matches c ∈ TP being weighted by the association score 𝒜(c), which is also a Jaccard metric. The proponents also argue that the metric is the geometric mean of a detection score and an association score, considering that

DetA_α = |TP| / (|TP| + |FN| + |FP|),

AssA_α = (1/|TP|) ∑_c ∈ TP 𝒜(c),

HOTA_α = √(∑_c ∈ TP 𝒜(c) / (|TP| + |FN| + |FP|)) = √(DetA_α · AssA_α).

HOTA_α evaluates detection and association, but not the localization accuracy of the matches, i.e., the spatial fit between boxes' areas. Localization is evaluated by the final HOTA score, which integrates HOTA_α for different values of α:

HOTA = ∫_0^1 HOTA_α dα ≈ (1/19) ∑_α ∈ {0.05, 0.1, …, 0.9, 0.95} HOTA_α.

Equations <ref> to <ref> correspond to the <cit.> formulations, rewritten to employ the notation adopted in the present work. It is important to note that, like in MOTA <cit.>, the employed assignments 𝙰̂_i^(α)[p,q] are the ones that maximize the final HOTA score (see <cit.> for details about the matching optimization procedure).
Here, we will employ HOTA for the evaluation of our multiple orange tracking, and the DetA and AssA scores to decompose the tracking assessment into its detection and association components (note that the DetA and AssA metrics are computed from DetA_α and AssA_α by integrating over different α thresholds, as seen for HOTA in Equation <ref>). We have employed the original code from the HOTA authors[<https://github.com/JonathonLuiten/TrackEval.git>.].
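For readers implementing the metric, the per-α double Jaccard above can be written compactly, assuming the HOTA-optimal assignments are already available as (gtID, prID) pairs over all frames, with the identities of unmatched ground-truth boxes and unmatched detections listed separately. This is an illustrative sketch of the formulas, not the TrackEval code:

from collections import Counter
from math import sqrt

def hota_alpha(matches, fn_gt_ids, fp_pr_ids):
    """HOTA_alpha from the optimal matches at a given alpha.
    `matches`: one (gtID, prID) pair per true positive, over all frames;
    `fn_gt_ids`: gtIDs of unmatched ground-truth boxes (the FN set);
    `fp_pr_ids`: prIDs of unmatched detections (the FP set)."""
    tp = len(matches)
    pair = Counter(matches)                  # |TPA(c)| per identity pair
    gt = Counter(g for g, _ in matches)
    pr = Counter(p for _, p in matches)
    fn = Counter(fn_gt_ids)
    fp = Counter(fp_pr_ids)
    acc = 0.0
    for (g, p), tpa in pair.items():
        fna = gt[g] - tpa + fn[g]            # |FNA(c)|
        fpa = pr[p] - tpa + fp[p]            # |FPA(c)|
        acc += tpa * tpa / (tpa + fna + fpa) # tpa matches, each scoring A(c)
    denom = tp + len(fn_gt_ids) + len(fp_pr_ids)
    return sqrt(acc / denom) if denom else 0.0

# The final HOTA averages hota_alpha over alpha in {0.05, 0.10, ..., 0.95}:
# hota = sum(hota_alpha(m[a], fns[a], fps[a]) for a in alphas) / len(alphas)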
http://arxiv.org/abs/2312.16724v1
{ "authors": [ "Thiago T. Santos", "Kleber X. S. de Souza", "João Camargo Neto", "Luciano V. Koenigkan", "Alécio S. Moreira", "Sônia Ternes" ], "categories": [ "cs.CV", "I.4.9; I.5.4" ], "primary_category": "cs.CV", "published": "20231227212243", "title": "A pipeline for multiple orange detection and tracking with 3-D fruit relocalization and neural-net based yield regression in commercial citrus orchards" }
Observation of χ_cJ→ 3(K^+K^-)M. Ablikim^1, M. N. Achasov^4,c, P. Adlarson^75, O. Afedulidis^3, X. C. Ai^80, R. Aliberti^35, A. Amoroso^74A,74C, Q. An^71,58,a, Y. Bai^57, O. Bakina^36, I. Balossino^29A, Y. Ban^46,h, H.-R. Bao^63, V. Batozskaya^1,44, K. Begzsuren^32, N. Berger^35, M. Berlowski^44, M. Bertani^28A, D. Bettoni^29A, F. Bianchi^74A,74C, E. Bianco^74A,74C, A. Bortone^74A,74C, I. Boyko^36, R. A. Briere^5, A. Brueggemann^68, H. Cai^76, X. Cai^1,58, A. Calcaterra^28A, G. F. Cao^1,63, N. Cao^1,63, S. A. Cetin^62A, J. F. Chang^1,58, G. R. Che^43, G. Chelkov^36,b, C. Chen^43, C. H. Chen^9, Chao Chen^55, G. Chen^1, H. S. Chen^1,63, H. Y. Chen^20, M. L. Chen^1,58,63, S. J. Chen^42, S. L. Chen^45, S. M. Chen^61, T. Chen^1,63, X. R. Chen^31,63, X. T. Chen^1,63, Y. B. Chen^1,58, Y. Q. Chen^34, Z. J. Chen^25,i, Z. Y. Chen^1,63, S. K. Choi^10A, G. Cibinetto^29A, F. Cossio^74C, J. J. Cui^50, H. L. Dai^1,58, J. P. Dai^78, A. Dbeyssi^18, R.  E. de Boer^3, D. Dedovich^36, C. Q. Deng^72, Z. Y. Deng^1, A. Denig^35, I. Denysenko^36, M. Destefanis^74A,74C, F. De Mori^74A,74C, B. Ding^66,1, X. X. Ding^46,h, Y. Ding^34, Y. Ding^40, J. Dong^1,58, L. Y. Dong^1,63, M. Y. Dong^1,58,63, X. Dong^76, M. C. Du^1, S. X. Du^80, Z. H. Duan^42, P. Egorov^36,b, Y. H. Fan^45, J. Fang^59, J. Fang^1,58, S. S. Fang^1,63, W. X. Fang^1, Y. Fang^1, Y. Q. Fang^1,58, R. Farinelli^29A, L. Fava^74B,74C, F. Feldbauer^3, G. Felici^28A, C. Q. Feng^71,58, J. H. Feng^59, Y. T. Feng^71,58, M. Fritsch^3, C. D. Fu^1, J. L. Fu^63, Y. W. Fu^1,63, H. Gao^63, X. B. Gao^41, Y. N. Gao^46,h, Yang Gao^71,58, S. Garbolino^74C, I. Garzia^29A,29B, L. Ge^80, P. T. Ge^76, Z. W. Ge^42, C. Geng^59, E. M. Gersabeck^67, A. Gilman^69, K. Goetzen^13, L. Gong^40, W. X. Gong^1,58, W. Gradl^35, S. Gramigna^29A,29B, M. Greco^74A,74C, M. H. Gu^1,58, Y. T. Gu^15, C. Y. Guan^1,63, Z. L. Guan^22, A. Q. Guo^31,63, L. B. Guo^41, M. J. Guo^50, R. P. Guo^49, Y. P. Guo^12,g, A. Guskov^36,b, J. Gutierrez^27, K. L. Han^63, T. T. Han^1, X. Q. Hao^19, F. A. Harris^65, K. K. He^55, K. L. He^1,63, F. H. Heinsius^3, C. H. Heinz^35, Y. K. Heng^1,58,63, C. Herold^60, T. Holtmann^3, P. C. Hong^34, G. Y. Hou^1,63, X. T. Hou^1,63, Y. R. Hou^63, Z. L. Hou^1, B. Y. Hu^59, H. M. Hu^1,63, J. F. Hu^56,j, S. L. Hu^12,g, T. Hu^1,58,63, Y. Hu^1, G. S. Huang^71,58, K. X. Huang^59, L. Q. Huang^31,63, X. T. Huang^50, Y. P. Huang^1, T. Hussain^73, F. Hölzken^3, N Hüsken^27,35, N. in der Wiesche^68, J. Jackson^27, S. Janchiv^32, J. H. Jeong^10A, Q. Ji^1, Q. P. Ji^19, W. Ji^1,63, X. B. Ji^1,63, X. L. Ji^1,58, Y. Y. Ji^50, X. Q. Jia^50, Z. K. Jia^71,58, D. Jiang^1,63, H. B. Jiang^76, P. C. Jiang^46,h, S. S. Jiang^39, T. J. Jiang^16, X. S. Jiang^1,58,63, Y. Jiang^63, J. B. Jiao^50, J. K. Jiao^34, Z. Jiao^23, S. Jin^42, Y. Jin^66, M. Q. Jing^1,63, X. M. Jing^63, T. Johansson^75, S. Kabana^33, N. Kalantar-Nayestanaki^64, X. L. Kang^9, X. S. Kang^40, M. Kavatsyuk^64, B. C. Ke^80, V. Khachatryan^27, A. Khoukaz^68, R. Kiuchi^1, O. B. Kolcu^62A, B. Kopf^3, M. Kuessner^3, X. Kui^1,63, N.  Kumar^26, A. Kupsc^44,75, W. Kühn^37, J. J. Lane^67, P.  Larin^18, L. Lavezzi^74A,74C, T. T. Lei^71,58, Z. H. Lei^71,58, M. Lellmann^35, T. Lenz^35, C. Li^43, C. Li^47, C. H. Li^39, Cheng Li^71,58, D. M. Li^80, F. Li^1,58, G. Li^1, H. B. Li^1,63, H. J. Li^19, H. N. Li^56,j, Hui Li^43, J. R. Li^61, J. S. Li^59, Ke Li^1, L. J Li^1,63, L. K. Li^1, Lei Li^48, M. H. Li^43, P. R. Li^38,l, Q. M. Li^1,63, Q. X. Li^50, R. Li^17,31, S. X. Li^12, T.  Li^50, W. D. Li^1,63, W. G. Li^1,a, X. Li^1,63, X. H. Li^71,58, X. L. 
Li^50, X. Z. Li^59, Xiaoyu Li^1,63, Y. G. Li^46,h, Z. J. Li^59, Z. X. Li^15, C. Liang^42, H. Liang^71,58, H. Liang^1,63, Y. F. Liang^54, Y. T. Liang^31,63, G. R. Liao^14, L. Z. Liao^50, J. Libby^26, A.  Limphirat^60, C. C. Lin^55, D. X. Lin^31,63, T. Lin^1, B. J. Liu^1, B. X. Liu^76, C. Liu^34, C. X. Liu^1, F. H. Liu^53, Fang Liu^1, Feng Liu^6, G. M. Liu^56,j, H. Liu^38,k,l, H. B. Liu^15, H. M. Liu^1,63, Huanhuan Liu^1, Huihui Liu^21, J. B. Liu^71,58, J. Y. Liu^1,63, K. Liu^38,k,l, K. Y. Liu^40, Ke Liu^22, L. Liu^71,58, L. C. Liu^43, Lu Liu^43, M. H. Liu^12,g, P. L. Liu^1, Q. Liu^63, S. B. Liu^71,58, T. Liu^12,g, W. K. Liu^43, W. M. Liu^71,58, X. Liu^38,k,l, X. Liu^39, Y. Liu^80, Y. Liu^38,k,l, Y. B. Liu^43, Z. A. Liu^1,58,63, Z. D. Liu^9, Z. Q. Liu^50, X. C. Lou^1,58,63, F. X. Lu^59, H. J. Lu^23, J. G. Lu^1,58, X. L. Lu^1, Y. Lu^7, Y. P. Lu^1,58, Z. H. Lu^1,63, C. L. Luo^41, M. X. Luo^79, T. Luo^12,g, X. L. Luo^1,58, X. R. Lyu^63, Y. F. Lyu^43, F. C. Ma^40, H. Ma^78, H. L. Ma^1, J. L. Ma^1,63, L. L. Ma^50, M. M. Ma^1,63, Q. M. Ma^1, R. Q. Ma^1,63, X. T. Ma^1,63, X. Y. Ma^1,58, Y. Ma^46,h, Y. M. Ma^31, F. E. Maas^18, M. Maggiora^74A,74C, S. Malde^69, Y. J. Mao^46,h, Z. P. Mao^1, S. Marcello^74A,74C, Z. X. Meng^66, J. G. Messchendorp^13,64, G. Mezzadri^29A, H. Miao^1,63, T. J. Min^42, R. E. Mitchell^27, X. H. Mo^1,58,63, B. Moses^27, N. Yu. Muchnoi^4,c, J. Muskalla^35, Y. Nefedov^36, F. Nerling^18,e, L. S. Nie^20, I. B. Nikolaev^4,c, Z. Ning^1,58, S. Nisar^11,m, Q. L. Niu^38,k,l, W. D. Niu^55, Y. Niu ^50, S. L. Olsen^63, Q. Ouyang^1,58,63, S. Pacetti^28B,28C, X. Pan^55, Y. Pan^57, A.  Pathak^34, P. Patteri^28A, Y. P. Pei^71,58, M. Pelizaeus^3, H. P. Peng^71,58, Y. Y. Peng^38,k,l, K. Peters^13,e, J. L. Ping^41, R. G. Ping^1,63, S. Plura^35, V. Prasad^33, F. Z. Qi^1, H. Qi^71,58, H. R. Qi^61, M. Qi^42, T. Y. Qi^12,g, S. Qian^1,58, W. B. Qian^63, C. F. Qiao^63, X. K. Qiao^80, J. J. Qin^72, L. Q. Qin^14, L. Y. Qin^71,58, X. S. Qin^50, Z. H. Qin^1,58, J. F. Qiu^1, Z. H. Qu^72, C. F. Redmer^35, K. J. Ren^39, A. Rivetti^74C, M. Rolo^74C, G. Rong^1,63, Ch. Rosner^18, S. N. Ruan^43, N. Salone^44, A. Sarantsev^36,d, Y. Schelhaas^35, K. Schoenning^75, M. Scodeggio^29A, K. Y. Shan^12,g, W. Shan^24, X. Y. Shan^71,58, Z. J Shang^38,k,l, J. F. Shangguan^55, L. G. Shao^1,63, M. Shao^71,58, C. P. Shen^12,g, H. F. Shen^1,8, W. H. Shen^63, X. Y. Shen^1,63, B. A. Shi^63, H. Shi^71,58, H. C. Shi^71,58, J. L. Shi^12,g, J. Y. Shi^1, Q. Q. Shi^55, S. Y. Shi^72, X. Shi^1,58, J. J. Song^19, T. Z. Song^59, W. M. Song^34,1, Y.  J. Song^12,g, Y. X. Song^46,h,n, S. Sosio^74A,74C, S. Spataro^74A,74C, F. Stieler^35, Y. J. Su^63, G. B. Sun^76, G. X. Sun^1, H. Sun^63, H. K. Sun^1, J. F. Sun^19, K. Sun^61, L. Sun^76, S. S. Sun^1,63, T. Sun^51,f, W. Y. Sun^34, Y. Sun^9, Y. J. Sun^71,58, Y. Z. Sun^1, Z. Q. Sun^1,63, Z. T. Sun^50, C. J. Tang^54, G. Y. Tang^1, J. Tang^59, Y. A. Tang^76, L. Y. Tao^72, Q. T. Tao^25,i, M. Tat^69, J. X. Teng^71,58, V. Thoren^75, W. H. Tian^59, Y. Tian^31,63, Z. F. Tian^76, I. Uman^62B, Y. Wan^55,S. J. Wang ^50, B. Wang^1, B. L. Wang^63, Bo Wang^71,58, D. Y. Wang^46,h, F. Wang^72, H. J. Wang^38,k,l, J. J. Wang^76, J. P. Wang ^50, K. Wang^1,58, L. L. Wang^1, M. Wang^50, Meng Wang^1,63, N. Y. Wang^63, S. Wang^12,g, S. Wang^38,k,l, T.  Wang^12,g, T. J. Wang^43, W.  Wang^72, W. Wang^59, W. P. Wang^35,71,o, X. Wang^46,h, X. F. Wang^38,k,l, X. J. Wang^39, X. L. Wang^12,g, X. N. Wang^1, Y. Wang^61, Y. D. Wang^45, Y. F. Wang^1,58,63, Y. L. Wang^19, Y. N. Wang^45, Y. Q. Wang^1, Yaqian Wang^17, Yi Wang^61, Z. 
Wang^1,58, Z. L.  Wang^72, Z. Y. Wang^1,63, Ziyi Wang^63, D. H. Wei^14, F. Weidner^68, S. P. Wen^1, Y. R. Wen^39, U. Wiedner^3, G. Wilkinson^69, M. Wolke^75, L. Wollenberg^3, C. Wu^39, J. F. Wu^1,8, L. H. Wu^1, L. J. Wu^1,63, X. Wu^12,g, X. H. Wu^34, Y. Wu^71,58, Y. H. Wu^55, Y. J. Wu^31, Z. Wu^1,58, L. Xia^71,58, X. M. Xian^39, B. H. Xiang^1,63, T. Xiang^46,h, D. Xiao^38,k,l, G. Y. Xiao^42, S. Y. Xiao^1, Y.  L. Xiao^12,g, Z. J. Xiao^41, C. Xie^42, X. H. Xie^46,h, Y. Xie^50, Y. G. Xie^1,58, Y. H. Xie^6, Z. P. Xie^71,58, T. Y. Xing^1,63, C. F. Xu^1,63, C. J. Xu^59, G. F. Xu^1, H. Y. Xu^66, M. Xu^71,58, Q. J. Xu^16, Q. N. Xu^30, W. Xu^1, W. L. Xu^66, X. P. Xu^55, Y. C. Xu^77, Z. P. Xu^42, Z. S. Xu^63, F. Yan^12,g, L. Yan^12,g, W. B. Yan^71,58, W. C. Yan^80, X. Q. Yan^1, H. J. Yang^51,f, H. L. Yang^34, H. X. Yang^1, Tao Yang^1, Y. Yang^12,g, Y. F. Yang^43, Y. X. Yang^1,63, Yifan Yang^1,63, Z. W. Yang^38,k,l, Z. P. Yao^50, M. Ye^1,58, M. H. Ye^8, J. H. Yin^1, Z. Y. You^59, B. X. Yu^1,58,63, C. X. Yu^43, G. Yu^1,63, J. S. Yu^25,i, T. Yu^72, X. D. Yu^46,h, Y. C. Yu^80, C. Z. Yuan^1,63, J. Yuan^34, L. Yuan^2, S. C. Yuan^1, Y. Yuan^1,63, Y. J. Yuan^45, Z. Y. Yuan^59, C. X. Yue^39, A. A. Zafar^73, F. R. Zeng^50, S. H.  Zeng^72, X. Zeng^12,g, Y. Zeng^25,i, Y. J. Zeng^59, X. Y. Zhai^34, Y. C. Zhai^50, Y. H. Zhan^59, A. Q. Zhang^1,63, B. L. Zhang^1,63, B. X. Zhang^1, D. H. Zhang^43, G. Y. Zhang^19, H. Zhang^80, H. Zhang^71,58, H. C. Zhang^1,58,63, H. H. Zhang^34, H. H. Zhang^59, H. Q. Zhang^1,58,63, H. R. Zhang^71,58, H. Y. Zhang^1,58, J. Zhang^80, J. Zhang^59, J. J. Zhang^52, J. L. Zhang^20, J. Q. Zhang^41, J. S. Zhang^12,g, J. W. Zhang^1,58,63, J. X. Zhang^38,k,l, J. Y. Zhang^1, J. Z. Zhang^1,63, Jianyu Zhang^63, L. M. Zhang^61, Lei Zhang^42, P. Zhang^1,63, Q. Y. Zhang^34, R. Y Zhang^38,k,l, Shuihan Zhang^1,63, Shulei Zhang^25,i, X. D. Zhang^45, X. M. Zhang^1, X. Y. Zhang^50, Y.  Zhang^72, Y.  T. Zhang^80, Y. H. Zhang^1,58, Y. M. Zhang^39, Yan Zhang^71,58, Yao Zhang^1, Z. D. Zhang^1, Z. H. Zhang^1, Z. L. Zhang^34, Z. Y. Zhang^76, Z. Y. Zhang^43, Z. Z.  Zhang^45, G. Zhao^1, J. Y. Zhao^1,63, J. Z. Zhao^1,58, Lei Zhao^71,58, Ling Zhao^1, M. G. Zhao^43, N. Zhao^78, R. P. Zhao^63, S. J. Zhao^80, Y. B. Zhao^1,58, Y. X. Zhao^31,63, Z. G. Zhao^71,58, A. Zhemchugov^36,b, B. Zheng^72, B. M. Zheng^34, J. P. Zheng^1,58, W. J. Zheng^1,63, Y. H. Zheng^63, B. Zhong^41, X. Zhong^59, H.  Zhou^50, J. Y. Zhou^34, L. P. Zhou^1,63, S.  Zhou^6, X. Zhou^76, X. K. Zhou^6, X. R. Zhou^71,58, X. Y. Zhou^39, Y. Z. Zhou^12,g, J. Zhu^43, K. Zhu^1, K. J. Zhu^1,58,63, K. S. Zhu^12,g, L. Zhu^34, L. X. Zhu^63, S. H. Zhu^70, S. Q. Zhu^42, T. J. Zhu^12,g, W. D. Zhu^41, Y. C. Zhu^71,58, Z. A. Zhu^1,63, J. H. Zou^1, J. 
Zu^71,58 (BESIII Collaboration)^1 Institute of High Energy Physics, Beijing 100049, People's Republic of China^2 Beihang University, Beijing 100191, People's Republic of China^3 BochumRuhr-University, D-44780 Bochum, Germany^4 Budker Institute of Nuclear Physics SB RAS (BINP), Novosibirsk 630090, Russia^5 Carnegie Mellon University, Pittsburgh, Pennsylvania 15213, USA^6 Central China Normal University, Wuhan 430079, People's Republic of China^7 Central South University, Changsha 410083, People's Republic of China^8 China Center of Advanced Science and Technology, Beijing 100190, People's Republic of China^9 China University of Geosciences, Wuhan 430074, People's Republic of China^10 Chung-Ang University, Seoul, 06974, Republic of Korea^11 COMSATS University Islamabad, Lahore Campus, Defence Road, Off Raiwind Road, 54000 Lahore, Pakistan^12 Fudan University, Shanghai 200433, People's Republic of China^13 GSI Helmholtzcentre for Heavy Ion Research GmbH, D-64291 Darmstadt, Germany^14 Guangxi Normal University, Guilin 541004, People's Republic of China^15 Guangxi University, Nanning 530004, People's Republic of China^16 Hangzhou Normal University, Hangzhou 310036, People's Republic of China^17 Hebei University, Baoding 071002, People's Republic of China^18 Helmholtz Institute Mainz, Staudinger Weg 18, D-55099 Mainz, Germany^19 Henan Normal University, Xinxiang 453007, People's Republic of China^20 Henan University, Kaifeng 475004, People's Republic of China^21 Henan University of Science and Technology, Luoyang 471003, People's Republic of China^22 Henan University of Technology, Zhengzhou 450001, People's Republic of China^23 Huangshan College, Huangshan245000, People's Republic of China^24 Hunan Normal University, Changsha 410081, People's Republic of China^25 Hunan University, Changsha 410082, People's Republic of China^26 Indian Institute of Technology Madras, Chennai 600036, India^27 Indiana University, Bloomington, Indiana 47405, USA^28 INFN Laboratori Nazionali di Frascati , (A)INFN Laboratori Nazionali di Frascati, I-00044, Frascati, Italy; (B)INFN Sezione diPerugia, I-06100, Perugia, Italy; (C)University of Perugia, I-06100, Perugia, Italy^29 INFN Sezione di Ferrara, (A)INFN Sezione di Ferrara, I-44122, Ferrara, Italy; (B)University of Ferrara,I-44122, Ferrara, Italy^30 Inner Mongolia University, Hohhot 010021, People's Republic of China^31 Institute of Modern Physics, Lanzhou 730000, People's Republic of China^32 Institute of Physics and Technology, Peace Avenue 54B, Ulaanbaatar 13330, Mongolia^33 Instituto de Alta Investigación, Universidad de Tarapacá, Casilla 7D, Arica 1000000, Chile^34 Jilin University, Changchun 130012, People's Republic of China^35 Johannes Gutenberg University of Mainz, Johann-Joachim-Becher-Weg 45, D-55099 Mainz, Germany^36 Joint Institute for Nuclear Research, 141980 Dubna, Moscow region, Russia^37 Justus-Liebig-Universitaet Giessen, II. 
Physikalisches Institut, Heinrich-Buff-Ring 16, D-35392 Giessen, Germany^38 Lanzhou University, Lanzhou 730000, People's Republic of China^39 Liaoning Normal University, Dalian 116029, People's Republic of China^40 Liaoning University, Shenyang 110036, People's Republic of China^41 Nanjing Normal University, Nanjing 210023, People's Republic of China^42 Nanjing University, Nanjing 210093, People's Republic of China^43 Nankai University, Tianjin 300071, People's Republic of China^44 National Centre for Nuclear Research, Warsaw 02-093, Poland^45 North China Electric Power University, Beijing 102206, People's Republic of China^46 Peking University, Beijing 100871, People's Republic of China^47 Qufu Normal University, Qufu 273165, People's Republic of China^48 Renmin University of China, Beijing 100872, People's Republic of China^49 Shandong Normal University, Jinan 250014, People's Republic of China^50 Shandong University, Jinan 250100, People's Republic of China^51 Shanghai Jiao Tong University, Shanghai 200240,People's Republic of China^52 Shanxi Normal University, Linfen 041004, People's Republic of China^53 Shanxi University, Taiyuan 030006, People's Republic of China^54 Sichuan University, Chengdu 610064, People's Republic of China^55 Soochow University, Suzhou 215006, People's Republic of China^56 South China Normal University, Guangzhou 510006, People's Republic of China^57 Southeast University, Nanjing 211100, People's Republic of China^58 State Key Laboratory of Particle Detection and Electronics, Beijing 100049, Hefei 230026, People's Republic of China^59 Sun Yat-Sen University, Guangzhou 510275, People's Republic of China^60 Suranaree University of Technology, University Avenue 111, Nakhon Ratchasima 30000, Thailand^61 Tsinghua University, Beijing 100084, People's Republic of China^62 Turkish Accelerator Center Particle Factory Group, (A)Istinye University, 34010, Istanbul, Turkey; (B)Near East University, Nicosia, North Cyprus, 99138, Mersin 10, Turkey^63 University of Chinese Academy of Sciences, Beijing 100049, People's Republic of China^64 University of Groningen, NL-9747 AA Groningen, The Netherlands^65 University of Hawaii, Honolulu, Hawaii 96822, USA^66 University of Jinan, Jinan 250022, People's Republic of China^67 University of Manchester, Oxford Road, Manchester, M13 9PL, United Kingdom^68 University of Muenster, Wilhelm-Klemm-Strasse 9, 48149 Muenster, Germany^69 University of Oxford, Keble Road, Oxford OX13RH, United Kingdom^70 University of Science and Technology Liaoning, Anshan 114051, People's Republic of China^71 University of Science and Technology of China, Hefei 230026, People's Republic of China^72 University of South China, Hengyang 421001, People's Republic of China^73 University of the Punjab, Lahore-54590, Pakistan^74 University of Turin and INFN, (A)University of Turin, I-10125, Turin, Italy; (B)University of Eastern Piedmont, I-15121, Alessandria, Italy; (C)INFN, I-10125, Turin, Italy^75 Uppsala University, Box 516, SE-75120 Uppsala, Sweden^76 Wuhan University, Wuhan 430072, People's Republic of China^77 Yantai University, Yantai 264005, People's Republic of China^78 Yunnan University, Kunming 650500, People's Republic of China^79 Zhejiang University, Hangzhou 310027, People's Republic of China^80 Zhengzhou University, Zhengzhou 450001, People's Republic of China^a Deceased^b Also at the Moscow Institute of Physics and Technology, Moscow 141700, Russia^c Also at the Novosibirsk State University, Novosibirsk, 630090, Russia^d Also at the NRC "Kurchatov 
Institute", PNPI, 188300, Gatchina, Russia^e Also at Goethe University Frankfurt, 60323 Frankfurt am Main, Germany^f Also at Key Laboratory for Particle Physics, Astrophysics and Cosmology, Ministry of Education; Shanghai Key Laboratory for Particle Physics and Cosmology; Institute of Nuclear and Particle Physics, Shanghai 200240, People's Republic of China^g Also at Key Laboratory of Nuclear Physics and Ion-beam Application (MOE) and Institute of Modern Physics, Fudan University, Shanghai 200443, People's Republic of China^h Also at State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing 100871, People's Republic of China^i Also at School of Physics and Electronics, Hunan University, Changsha 410082, China^j Also at Guangdong Provincial Key Laboratory of Nuclear Science, Institute of Quantum Matter, South China Normal University, Guangzhou 510006, China^k Also at MOE Frontiers Science Center for Rare Isotopes, Lanzhou University, Lanzhou 730000, People's Republic of China^l Also at Lanzhou Center for Theoretical Physics, Lanzhou University, Lanzhou 730000, People's Republic of China^m Also at the Department of Mathematical Sciences, IBA, Karachi 75270, Pakistan^n Also at Ecole Polytechnique Federale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland^o Also at Helmholtz Institute Mainz, Staudinger Weg 18, D-55099 Mainz, Germany January 14, 2024
The multimodal recommendation has gradually become the infrastructure of online media platforms, enabling them to provide personalized services to users through the joint modeling of user historical behaviors (e.g., purchases, clicks) and items' various modalities (e.g., visual and textual). The majority of existing studies typically focus on utilizing modal features or modality-related graph structures to learn user local interests. Nevertheless, these approaches encounter two limitations: (1) shared updates of user ID embeddings result in the consequential coupling between collaborative and multimodal signals; (2) a lack of exploration of robust global user interests to alleviate the sparse interaction problems faced by local interest modeling. To address these issues, we propose a novel Local and Global Graph Learning-guided Multimodal Recommender (LGMRec), which jointly models local and global user interests. Specifically, we present a local graph embedding module to independently learn collaborative-related and modality-related embeddings of users and items with local topological relations. Moreover, a global hypergraph embedding module is designed to capture global user and item embeddings by modeling insightful global dependency relations. The global embeddings acquired within the hypergraph embedding space can then be combined with two decoupled local embeddings to improve the accuracy and robustness of recommendations. Extensive experiments conducted on three benchmark datasets demonstrate the superiority of our LGMRec over various state-of-the-art recommendation baselines, showcasing its effectiveness in modeling both local and global user interests.

§ INTRODUCTION

With the explosive growth of massive multimedia information (e.g., images, texts, and videos) on online media platforms, such as YouTube and TikTok, a lot of effort has been devoted to multimodal recommender systems (MRSs) to assist these platforms in providing personalized services to users.
Nowadays, the primary task of MRSs is to design an effective way to integrate item multimodal information into traditional user-item interaction modeling frameworks to capture comprehensive user interests. Some early studies on MRSs adopt either the linear fusion between item modal features and their ID embeddings <cit.> or the attention mechanism on item modalities <cit.> to model representations of users and items. However, the efficacy of these models is somewhat constrained as they only model low-order user-item interactions. The surge of research on graph-based recommendations <cit.> has sparked a wave of explorations in using graph neural networks (GNN) to enhance multimodal recommendations. These works typically capture higher-order user interests from the user-item graph that integrates multimodal contents <cit.>, or construct modality-aware auxiliary graph structures to transfer multimodal knowledge into item and user embeddings <cit.>. Though achieving remarkable progress, existing studies on MRSs still suffer from the following two limitations in modeling user interests.

(1) Coupling. Firstly, collaboration and multimodal information provide different avenues for exploring user interests. In general, collaborative signals emphasize similar user behavior patterns, while modal knowledge is reflected through content similarity. However, prior works <cit.> often overlook this matter and share user ID embeddings in both collaborative and multimodal modeling modules (red line in Figure <ref> (a)) to learn user interests that couple collaborative and multimodal signals. Experimentally, we randomly select two users from the Baby dataset and exhibit the gradient comparison of their ID embeddings (with 64 dimensions) from the collaborative and multimodal modeling modules in Figure <ref> (b). In the early stages of training, the ratio of gradients with opposite directions (orange bar) from the two modules in all dimensions exceeds 50% for each user, which demonstrates that collaborative and multimodal signals generally provide different guidance for user embedding learning[In fact, approximately 94.26% of users in the Baby dataset present such a situation, that is, more than 50% of the embedding dimensions have opposite gradient directions during the training process.]. Though this ratio slightly decreases as the training continues, the coupling design still restricts stable updates of user embeddings.

(2) Locality. Secondly, most existing methods <cit.> only learn local user interests from the interaction graph (Figure <ref> (c)), lacking the exploration of user global interests. Sparse user-item interactions limit their modeling of robust user interests. As shown in Figure <ref> (d), user global (general) interests are usually related to item attribute labels that do not rely on the local interactions. Specifically, items usually share multiple common attributes in the visual space, such as color, style, and shape, and users have different interests in various attributes. For example, u_1 may like clothes with bright colors, while u_2 prefers a simple style. A method that models only local interests may recommend the shirt i_1 to u_2 based on similar behaviors, i.e., the same purchases (i_2, i_3, i_4) shared by u_1 and u_2. But the global interests of u_2 can provide additional guidance, making it more likely to recommend the outerwear i_5 with a simple style that matches u_2's true interests.
To address the aforementioned issues, we propose a novel Local and Global Graph Learning-guided Multimodal Recommender (LGMRec), which explores capturing and exploiting both local and global representations of users and items to facilitate multimodal recommendation. Specifically, to address the first limitation, we present the local graph embedding module to independently capture collaborative-related and modality-related local user interests by performing message propagation on user-item interaction graphs with ID embeddings and modal features, respectively. Since the many-to-many dependency relationship between attributes and items is similar to that between hyperedges and nodes in hypergraphs, we further consider each implicit attribute as a hyperedge, and present a global hypergraph embedding module to model hypergraph structure dependencies, so as to address the second limitation. Extensive experimental results on three real-world datasets demonstrate that LGMRec surpasses various recommendation baselines significantly, and verify its effectiveness and robustness in modeling local and global user interests.

§ RELATED WORK

Graph-based Recommendation The powerful ability of graph neural networks <cit.> in modeling high-order connectivity has greatly promoted the development of recommender systems. Specifically, graph-based recommendation methods model user and item representations by naturally converting the user history interactions into a user-item bipartite graph. Early studies directly inherit the message propagation mechanism of the vanilla graph neural network to aggregate high-order neighbor information to represent users and items <cit.>. Later, by simplifying the message propagation process, some graph-based recommendation methods further improve recommendation performance <cit.>. Additionally, some other methods explore more node dependencies to enhance the representations of users and items <cit.>. Contrastive learning is also adopted to construct contrastive views and enhance graph-based recommendations <cit.>. However, since no modality features are considered, the modeling abilities of these methods are limited by sparse interactions.

Hypergraph Learning for Recommendation By constructing the hyperedge structure containing more than two nodes, hypergraph learning <cit.> can enhance the generalization ability of the model by capturing complex node dependencies. Some recommendation methods <cit.> try to build hypergraph structures and node-hyperedge connections to capture high-order interaction patterns and achieve substantial performance improvements. To further improve performance, several recently developed methods <cit.> combine self-supervised learning and hypergraph learning to model robust user and item representations. For example, HCCF <cit.> enhances collaborative filtering with the hypergraph-guided self-supervised learning paradigm. Different from these works that generate hypergraph dependencies via only collaborative embeddings, our work achieves hypergraph structure learning with the modeling of modality-aware global relations.

Multi-modal Recommendation The multi-modal recommendation has become a basic application on online media platforms to provide personalized services to users by analyzing the massive multi-modal information (e.g., images and textual descriptions) and user historical behaviors (e.g., reviews, clicks).
Early studies on MRSs usually incorporate multi-modal contents as side information to extend the vanilla CF framework <cit.> or utilize deep autoencoders to model modal features <cit.>. Inspired by the great success of graph-based recommendation methods <cit.>, some studies directly model user high-order interests on modality-specific interaction graphs <cit.>. For instance, MMGCN <cit.> incorporates modality information into the graph message passing to infer modality-related user preferences. Another line utilizes auxiliary semantic graph structures learned from multimodal features to enhance user or item representations <cit.>. For example, LATTICE <cit.> is a representative method that exploits modal content similarity to generate auxiliary latent item semantic relations to promote recommendation. Recently, some works <cit.> introduce contrastive learning into MRSs to model robust user and item representations. However, these methods usually perform message passing along the edges of user-item interactions to obtain local user interests, failing to explore modality-aware comprehensive user interests.

§ METHODOLOGY

In this section, we first formulate the problem of multimodal recommendation and present the overall framework of our LGMRec, and then introduce each component in detail.

§.§ Problem Statement and Overview

We set the user set as 𝒰={u} and the item set as ℐ={i}. The ID embeddings of each user u∈𝒰 and item i∈ℐ are denoted as 𝐞_u ∈ℝ^d and 𝐞_i ∈ℝ^d, respectively, where d is the embedding dimension. The user-item interactions can be represented as a matrix 𝐑∈ℝ^|𝒰| × |ℐ|, in which the element r_u,i=1 if user u interacts with item i, and r_u,i=0 otherwise. Based on the interaction matrix 𝐑, we can construct the user-item interaction graph 𝒢={𝒰∪ℐ, ℰ}, where ℰ is the edge set built on observed interactions, i.e., a nonzero r_u,i corresponds to an edge between user u and item i on the graph 𝒢. Further, we incorporate item multimodal contents and denote the original modality feature of item i generated from pre-trained models as 𝐞^m_i ∈ℝ^d_m under modality m ∈ℳ, where ℳ is the set of modalities and d_m denotes the dimension of modal features. In this work, we consider two mainstream modalities, vision v and text t, i.e., ℳ={v,t}. Given the above settings, the multimodal recommendation aims to learn a prediction function to forecast the score r̂_u,i of an item i adopted by a user u via jointly modeling user behaviors and multimodal contents. Formally, r̂_u,i = Prediction(𝐑, 𝐄^id, {𝐄_i^m}_m∈ℳ), where Prediction(·) is the prediction function, 𝐄^id=[𝐞_u_1, …, 𝐞_u_|𝒰|, 𝐞_i_1, …, 𝐞_i_|ℐ|] ∈ℝ^(|𝒰|+|ℐ|) × d denotes the ID embedding matrix by stacking all the ID embeddings of users and items, and 𝐄_i^m = [𝐞^m_i_1, …, 𝐞^m_i_|ℐ|] ∈ℝ^|ℐ| × d_m is the item modal feature matrix under modality m.
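To fix the notation before the module descriptions, the following minimal numpy sketch (ours, not part of the original paper; the toy interaction list and all variable names are illustrative) sets up 𝐑, the stacked ID embeddings, and stand-ins for the pre-trained modal features:

```python
import numpy as np

# Toy setup: 3 users, 4 items, embedding size d = 8 (illustrative values).
n_users, n_items, d = 3, 4, 8
interactions = [(0, 1), (0, 2), (1, 2), (2, 0), (2, 3)]  # (u, i) pairs

R = np.zeros((n_users, n_items))          # R[u, i] = 1 iff u interacted with i
for u, i in interactions:
    R[u, i] = 1.0

E_id = np.random.randn(n_users + n_items, d)   # stacked user/item ID embeddings
d_v, d_t = 4096, 384                           # raw visual/textual feature dims
E_item_v = np.random.randn(n_items, d_v)       # stands in for pre-trained features
E_item_t = np.random.randn(n_items, d_t)

# The prediction function ultimately scores r_hat[u, i] = <e*_u, e*_i>
# for the final fused embeddings e*, computed in the following sections.
```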
Overview. As illustrated in Figure <ref>, the framework of LGMRec consists of three major components: (i) Local graph embedding (LGE) module, which adopts GNN to capture collaborative-related and modality-related user local interests on the user-item interaction graph with ID embeddings and modal features, respectively; (ii) Global hypergraph embedding (GHE) module, which learns the global user and item representations by capturing the global hypergraph structure dependencies from different item modal feature spaces; and (iii) Fusion and prediction module, which fuses both local and global embeddings to predict final user preference scores for items.

§.§ Local Graph Embedding (LGE) Module

The LGE module is designed to independently learn the collaborative-related and modality-related user and item representations with local topology structure for avoiding unstable updates of user embeddings and promoting decoupled user interest learning.

§.§.§ Collaborative Graph Embedding (CGE)

We first capture the high-order connectivity via the message propagation on the user-item interaction graph with ID embeddings. In particular, the collaborative graph propagation function CGProg(·) in the (l+1)-th layer can be formulated as, 𝐄^l+1 = CGProg(𝐄^l) = (𝐃^-1/2𝐀𝐃^-1/2) 𝐄^l, where CGProg(·) inherits the lightweight form of the simplified graph convolutional network <cit.>, 𝐀∈ℝ^(|𝒰|+|ℐ|) × (|𝒰|+|ℐ|) is the adjacency matrix constructed from the interaction matrix 𝐑, and 𝐃 is the diagonal degree matrix of 𝐀. Each diagonal element 𝐃_j,j in 𝐃 denotes the number of nonzero entries in the j-th row vector of matrix 𝐀. The initial embedding matrix is set as 𝐄^0 = 𝐄^id. Then, we adopt the layer combination <cit.> to integrate all embeddings from hidden layers, 𝐄^id_lge = Layercomb(𝐄^0, 𝐄^1, 𝐄^2, …, 𝐄^L), where 𝐄^id_lge∈ℝ^(|𝒰|+|ℐ|) × d is the collaborative-related embedding matrix of users and items with local neighborhood information. We use the mean function to realize Layercomb(·) for embedding integration.

§.§.§ Modality Graph Embedding (MGE)

Considering the semantic differences between modalities, we further independently infer the modality-related embeddings of users and items on the interaction graphs with modal features. The original modal features of items are usually generated from different pre-trained models, e.g., ResNet <cit.> and BERT <cit.>, and thus have different dimensions in different feature spaces. We therefore project the high-dimensional modal feature 𝐞_i^m of each item into a unified embedding space ℝ^d as, 𝐞̃_i^m = Transform(𝐞_i^m) = 𝐞_i^m ·𝐖_m, where 𝐞̃_i^m is item i's transformed modal feature and Transform(·) is a projection function parameterized by a transformation matrix 𝐖_m ∈ℝ^d_m × d. Due to the difficulty in obtaining user modal information, existing methods often reuse user ID embeddings as input for modality-specific graphs, resulting in the coupling of collaborative and modal signals. Different from them, we initialize the user modal features by aggregating item modal features, 𝐞̃_u^m = 1/|𝒩_u|∑_i ∈𝒩_u𝐞̃_i^m, where 𝒩_u denotes the neighbor set of user u ∈𝒰 on the user-item interaction graph 𝒢. This operation ensures the separate updates of ID embeddings and modal features. Thereafter, we can construct the modal feature matrix 𝐄^m = [𝐞̃^m_u_1, …, 𝐞̃^m_u_|𝒰|, 𝐞̃^m_i_1, …, 𝐞̃^m_i_|ℐ|] ∈ℝ^(|𝒰|+|ℐ|) × d as the initial input 𝐄^m,0 to learn modality-related embeddings via a light graph propagation function MGProg(·), 𝐄^m,k+1 = MGProg(𝐄^m,k) = (𝐃^-1/2𝐀𝐃^-1/2) 𝐄^m,k. Here, we choose the high-order modal embeddings 𝐄^m,K in the last K-th layer as the modality-related embeddings (i.e., 𝐄^m_lge = 𝐄^m,K) with local modal information.
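Both propagation functions share the same normalized-adjacency form and differ only in their inputs and layer combination. A minimal numpy sketch of CGE and MGE, under the shapes introduced above (function names are ours, not the authors'):

```python
import numpy as np

def normalized_adj(R):
    """Build D^{-1/2} A D^{-1/2} from the user-item matrix R."""
    n_u, n_i = R.shape
    A = np.zeros((n_u + n_i, n_u + n_i))
    A[:n_u, n_u:] = R
    A[n_u:, :n_u] = R.T
    deg = np.maximum(A.sum(axis=1), 1.0)
    return np.diag(deg ** -0.5) @ A @ np.diag(deg ** -0.5)

def propagate(A_hat, E0, n_layers, layer_comb="mean"):
    """CGProg/MGProg: light propagation without feature transforms.

    layer_comb="mean" reproduces Layercomb for CGE; layer_comb="last"
    keeps only the final K-th layer, as MGE does.
    """
    layers = [E0]
    for _ in range(n_layers):
        layers.append(A_hat @ layers[-1])
    return np.mean(layers, axis=0) if layer_comb == "mean" else layers[-1]

def init_user_modal(R, E_item_m, W_m):
    """Transform raw item features, then average them over each user's items."""
    E_item = E_item_m @ W_m                        # Transform(e_i^m)
    deg = np.maximum(R.sum(axis=1, keepdims=True), 1.0)
    return (R @ E_item) / deg                      # e_u^m averaged over N_u
```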
§.§ Global Hypergraph Embedding (GHE) Module

The GHE module is designed to capture the modality-aware global representations of users and items against sparse and noisy user behaviors.

§.§.§ Hypergraph Dependency Constructing

Explicit attribute information of item modalities is often unavailable, especially for visual modalities. Hence, we define learnable implicit attribute vectors {𝐯^m_a}_a=1^A (𝐯^m_a ∈ℝ^d_m) as hyperedge embeddings under modality m to adaptively learn the dependencies between implicit attributes and items/users, where A is the number of hyperedges. Specifically, we obtain the hypergraph dependency matrices in the low-dimensional embedding space by, 𝐇_i^m = 𝐄_i^m ·𝐕^m⊤, 𝐇_u^m = 𝐀_u ·𝐇_i^m, where 𝐇_i^m ∈ℝ^|ℐ| × A and 𝐇_u^m ∈ℝ^|𝒰| × A are the item-hyperedge and user-hyperedge dependency matrices, respectively, 𝐄_i^m is the raw item modal feature matrix, 𝐕^m = [𝐯^m_1, …, 𝐯^m_A] ∈ℝ^A × d_m is the hyperedge vector matrix, and 𝐀_u∈ℝ^|𝒰| × |ℐ| is the user-related adjacency matrix extracted from 𝐀. Intuitively, items with similar modal features are more likely to be connected to the same hyperedge. The user-hyperedge dependencies are indirectly derived through the user-item interactions, which implies the user behavior intention, i.e., the more frequently a user interacts with items under a certain attribute, the more they may prefer that attribute. To further avoid the negative impact of meaningless relationships, we employ the Gumbel-Softmax reparameterization <cit.> to ensure that an item is attached to only one hyperedge as much as possible, 𝐡̃_i,*^m = Softmax((logδ - log(1-δ) + 𝐡_i,*^m)/τ), where 𝐡_i,*^m ∈ℝ^A is the i-th row vector of 𝐇_i^m that reflects the relations between item i and all hyperedges, δ∈ℝ^A is a noise vector with each value δ_j ∼ Uniform(0,1), and τ is a temperature hyperparameter. Afterwards, we can get the augmented item-attribute hypergraph dependency matrix 𝐇̃_i^m. By performing similar operations on 𝐇_u^m, we can obtain the augmented user-attribute relation matrix 𝐇̃_u^m.

§.§.§ Hypergraph Message Passing

By taking the attribute hyperedge as an intermediate hub, we achieve hypergraph message passing to deliver global information to users and items without being limited by hop distances. Formally, 𝐄^m,h+1_i = Drop(𝐇̃_i^m) · Drop(𝐇̃_i^m⊤) ·𝐄^m,h_i, where 𝐄^m,h_i is the global embedding matrix of items in the h-th hypergraph layer, and Drop(·) denotes a dropout function. We take the collaborative embedding matrix 𝐄^id_i,lge of items as the initial global embedding matrix when h=0. Further, we can calculate the global user embedding matrix as, 𝐄^m,h+1_u = Drop(𝐇̃_u^m) · Drop(𝐇̃_i^m⊤) ·𝐄^m,h_i. Apparently, the hypergraph message passing explicitly enables global information transfer by taking the item collaborative embeddings and modality-aware hypergraph dependencies as input. Then, we can obtain the global embedding matrix 𝐄_ghe by aggregating the global embeddings from all modalities, 𝐄_ghe = ∑_m ∈ℳ𝐄^m,H, 𝐄^m,H = [𝐄^m,H_u, 𝐄^m,H_i], where 𝐄^m,H_u ∈ℝ^|𝒰|× d and 𝐄^m,H_i ∈ℝ^|ℐ|× d are the global embedding matrices of users and items obtained in the H-th hypergraph layer under modality m, respectively.
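A compact numpy sketch of the dependency construction and one round of hypergraph message passing; the noise follows the logistic form of the equation above, while the inverted-dropout style and the single noise draw per call are simplifying assumptions of ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def dependencies(E_i_raw, V_m, A_u):
    """H_i = E_i V^T (item-hyperedge), H_u = A_u H_i (user-hyperedge)."""
    H_i = E_i_raw @ V_m.T
    return H_i, A_u @ H_i

def gumbel_sharpen(H, tau=0.2):
    """Row-wise sharpening of dependency scores with logistic noise."""
    delta = rng.uniform(1e-9, 1.0 - 1e-9, size=H.shape)
    logits = (np.log(delta) - np.log(1.0 - delta) + H) / tau
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

def drop(H, p_drop=0.2):
    """Inverted dropout on a dependency matrix."""
    mask = (rng.uniform(size=H.shape) > p_drop) / (1.0 - p_drop)
    return H * mask

def hypergraph_layer(H_i, H_u, E_i):
    """One layer: items -> hyperedges -> items/users, with no hop limit."""
    msg = drop(H_i).T @ E_i                    # aggregate items into hyperedges
    return drop(H_i) @ msg, drop(H_u) @ msg    # new item / user global embeddings
```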
To further achieve the robust fusion of global embeddings among different modalities, we develop cross-modal hypergraph contrastive learning to distill the self-supervision signals for global interest consistency. Specifically, we take the global embeddings of users acquired in different modalities as positive pairs and those of different users as negative pairs, and then employ InfoNCE <cit.> to formally define the user-side hypergraph contrastive loss as, ℒ^u_HCL = ∑_u ∈𝒰 - log( exp(s(𝐄^v,H_u, 𝐄^t,H_u)/τ) / ∑_u' ∈𝒰 exp(s(𝐄^v,H_u, 𝐄^t,H_u')/τ) ), where s(·) is the cosine similarity function, and τ is the temperature factor, generally set to 0.2. Note that here we only consider the visual and textual modalities, i.e., m∈{v,t}. Similarly, we can define the item-side cross-modal contrastive loss ℒ^i_HCL.

§.§ Fusion and Prediction

We acquire the final representations 𝐄^* of users and items by fusing their two types of local embeddings 𝐄_lge^id, 𝐄_lge^m and the global embeddings 𝐄_ghe, 𝐄^* = 𝐄_lge^id + ∑_m ∈ℳ Norm(𝐄_lge^m) + α· Norm(𝐄_ghe), where Norm(·) is a normalization function to alleviate the value scale difference among embeddings, and α is an adjustable factor to control the integration of global embeddings. We then use the inner product to calculate the preference score r̂_u,i of user u towards item i, i.e., r̂_u,i = 𝐞^*_u ·𝐞^*_i^⊤. The Bayesian personalized ranking (BPR) loss <cit.> is employed to optimize the model parameters, ℒ_BPR = -∑_(u,i^+,i^-) ∈ℛ ln σ(r̂_u,i^+ - r̂_u,i^-) + λ_1 ‖Θ‖_2^2, where ℛ = {(u, i^+, i^-) | (u, i^+) ∈𝒢, (u, i^-) ∉𝒢} is a set of triples for training, σ(·) is the sigmoid function, and λ_1 and Θ represent the regularization coefficient and the model parameters, respectively. Finally, we integrate the hypergraph contrastive loss with the BPR <cit.> loss into a unified objective as, ℒ = ℒ_BPR + λ_2 · (ℒ^u_HCL + ℒ^i_HCL), where λ_2 is a hyperparameter for loss term weighting. We minimize the joint objective ℒ by using the Adam optimizer <cit.>. The weight-decay regularization term is applied over the model parameters Θ.
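The three training-time quantities above (the user-side HCL loss, the fused embeddings, and the BPR loss) can be sketched as follows; this is an illustrative numpy transcription by us, not the authors' released implementation:

```python
import numpy as np

def l2norm(X):
    return X / np.linalg.norm(X, axis=1, keepdims=True)

def user_hcl_loss(Eu_v, Eu_t, tau=0.2):
    """User-side cross-modal InfoNCE: same user across modalities = positive."""
    sim = l2norm(Eu_v) @ l2norm(Eu_t).T / tau   # pairwise cosine / temperature
    pos = np.diag(sim)
    return float(np.mean(np.log(np.exp(sim).sum(axis=1)) - pos))

def fuse(E_id_lge, E_mod_lge_list, E_ghe, alpha):
    """Final embeddings E* = E_id + sum_m Norm(E_m) + alpha * Norm(E_ghe)."""
    return E_id_lge + sum(l2norm(E) for E in E_mod_lge_list) + alpha * l2norm(E_ghe)

def bpr_loss(scores_pos, scores_neg):
    """BPR over (u, i+, i-) triples; the weight-decay term is omitted here."""
    sigm = 1.0 / (1.0 + np.exp(-(scores_pos - scores_neg)))
    return float(-np.mean(np.log(sigm + 1e-12)))
```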
§ EXPERIMENT

§.§ Experimental Settings

§.§.§ Datasets To evaluate our proposed model, we conduct comprehensive experiments on three widely used Amazon datasets <cit.>: Baby, Sports and Outdoors, and Clothing Shoes and Jewelry. We refer to them as Baby, Sports, and Clothing for brevity. We adopt the 5-core setting to filter users and items for each dataset. The three datasets include both visual and textual modal features. In this work, we use the 4096-dimensional original visual features and 384-dimensional original textual features that have been extracted and published in prior work <cit.>. The statistics of the three datasets are summarized in Table <ref>.

§.§.§ Evaluation Protocols For each dataset, we randomly split the historical interactions into training, validation, and testing sets with an 8:1:1 ratio. Two widely used protocols are adopted to evaluate the performance of top-n recommendation: Recall (R@n) and Normalized Discounted Cumulative Gain <cit.> (N@n). We tune n in {10, 20} and report the average results over all users in the testing set.

§.§.§ Parameter Settings For a fair comparison, we optimize all models with the default batch size 2048, learning rate 0.001, and embedding size d=64. For all graph-based methods, the number L of collaborative graph propagation layers is set to 2. In addition, we initialize the model parameters with the Xavier method <cit.>. For our model, the optimal hyper-parameters are determined via grid search on the validation set. Specifically, the numbers of modal graph embedding layers and hypergraph embedding layers (K and H) are tuned in {1,2,3,4}. The number A of hyperedges is searched in {1,2,4,8,16,32,64,128,256}. The dropout ratio ρ and the adjustable factor α are tuned in {0.1, 0.2, …, 1.0}. We search both the adjustable weight λ_2 of the contrastive loss and the regularization coefficient λ_1 in {1e^-6, 1e^-5, …, 0.1}. The early stop mechanism is adopted, i.e., the training will stop when R@20 on the validation set does not increase for 20 successive epochs. We implement LGMRec[https://github.com/georgeguo-cn/LGMRec] with MMRec <cit.>.

§.§.§ Baselines We compare our proposed LGMRec with the following four groups of recommendation baselines, including (1) General CF Models: BPR <cit.>; (2) Graph-based Recommenders: LightGCN <cit.>, SGL <cit.>, NCL <cit.>; (3) Hypergraph-based Recommenders: HCCF <cit.>, SHT <cit.>; and (4) Multi-Modal Recommenders: VBPR <cit.>, MMGCN <cit.>, GRCN <cit.>, LATTICE <cit.>, MMGCL <cit.>, MICRO <cit.>, SLMRec <cit.>, BM3 <cit.>.

§.§ Performance Comparison

The performance comparison for all methods on the three datasets is summarized in Table <ref>, from which we have the following key observations: (1) The superiority of LGMRec. LGMRec substantially outperforms all other baselines and achieves promising performance across different datasets. We attribute such significant improvements to: i) the modeling of separated local embeddings that extracts decoupled user interests; ii) the hypergraph learning that injects modality-related global dependencies into local graph embeddings to mitigate interaction sparsity. (2) The effectiveness of modal features. Introducing knowledge-rich modality information is beneficial for boosting performance. Experimentally, though only linearly fusing the ID embeddings and modal features of items, VBPR still outperforms its counterpart (i.e., BPR). By effectively modeling the modal information, the multimodal recommenders (e.g., MMGCN, LATTICE, SLMRec, BM3) with LightGCN as the backbone network basically achieve better results than LightGCN. (3) The effectiveness of hypergraph learning. Hypergraph-based recommenders (i.e., HCCF and SHT) outperform the graph-based CF model LightGCN, suggesting the effectiveness of modeling global dependencies under the hypergraph architecture. Besides, the significant improvement of LGMRec over competitive baselines further demonstrates the potential of hypergraph networks in modeling modality-aware global dependencies.

§.§ Ablation Study

We conduct ablation studies to explore the compositional effects of LGMRec. From the results reported in Table <ref>, we can find: (1) The variant w/o MM without multimodal contents degenerates into LightGCN and achieves the worst performance, indicating that introducing modality features can greatly improve accuracy. (2) Removing either LGE or GHE causes performance drops for LGMRec, demonstrating the benefits of modeling both local and global user interests. Notably, the variant w/o LGE performs worse than w/o GHE, which indicates that local interests directly related to user behavior are more important, and global interests can serve as a supplement. (3) In local graph embeddings, the variant w/o CGE (with MGE only) achieves better performance than w/o MGE (with CGE only) on all datasets, which reveals the importance of integrating multimodal features into user-item interaction modeling. (4) The variant w/o HCL removes hypergraph contrastive learning and only linearly adds all global embeddings. Its performance indicates that the contrastive fusion of global embeddings of different modalities can improve performance by modeling the inter-modal global semantic consistency. (5) The variant w/ SUID that still shares user ID embeddings in both the MGE and CGE modules performs worse than LGMRec, verifying the benefits of independently modeling decoupled user interests.
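For completeness, the two evaluation protocols used throughout the comparisons above can be computed per user as in the following sketch (helper names are ours):

```python
import numpy as np

def recall_at_n(ranked_items, test_items, n):
    """R@n: fraction of a user's held-out items appearing in the top-n list."""
    hits = len(set(ranked_items[:n]) & set(test_items))
    return hits / max(len(test_items), 1)

def ndcg_at_n(ranked_items, test_items, n):
    """N@n with binary relevance and the standard log2 position discount."""
    test = set(test_items)
    dcg = sum(1.0 / np.log2(k + 2)
              for k, it in enumerate(ranked_items[:n]) if it in test)
    idcg = sum(1.0 / np.log2(k + 2) for k in range(min(len(test), n)))
    return dcg / idcg if idcg > 0 else 0.0
```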
§.§ In-Depth Analysis

§.§.§ Performance with Different Data Sparsity We further study the influence of sparse user interactions by comparing LGMRec with five representative multimodal recommendation baselines: MMGCN, LATTICE, MMGCL, SLMRec, and BM3, on the Baby and Sports datasets. Multiple user groups are constructed according to the number of their interactions. For example, the first user group contains users interacting with 0-5 items. From the results in Figure <ref>, we can observe that: (1) The superior performance of LGMRec is consistent across user groups with different sparsity degrees, revealing the effectiveness of LGMRec in alleviating interaction sparsity by modeling local and global representations. (2) LGMRec achieves more performance gains on sparser user groups. Specifically, LGMRec realizes 19.95% and 10.83% improvements over the best baseline for the sparsest and densest groups on Baby, respectively, verifying the robustness of LGMRec in dealing with sparser user interactions.

§.§.§ Hyperparameter Analysis Figure <ref> reports the impact of two key hyperparameters of LGMRec on the Clothing dataset: Hyperedge number A. From the left figure in Figure <ref>, we can observe that LGMRec presents performance promotion as the number of hyperedges increases, demonstrating the effectiveness of capturing multi-hyperedge global structures, especially for the sparser Clothing dataset. Adjustable weight α. The impact of the weight α for fusing global embeddings is also investigated in Figure <ref>. We can find that the performance first rises to an optimal value (α = 0.2) and then declines, which suggests that an appropriate α can improve accuracy by properly supplementing global embeddings, but a too large α may negatively affect performance.

§.§ Case Study

We qualitatively study the global hypergraph dependencies. Specifically, we randomly select two users u_1344, u_4351 with similar global embeddings learned on the Baby dataset. The hypergraph dependencies under the visual and textual modalities for the two users and the items they interact with are presented in Figure <ref>. The four hyperedges (squares) are shaded depending on the user-hyperedge dependency score. Moreover, the interacted items (circles) are arranged below the corresponding hyperedges in order, according to the maximum item-hyperedge dependency score. From Figure <ref>, we can observe that: (1) The user-hyperedge dependencies differ in different modalities. For example, the global interests of user u_1344 in the visual modality are mainly related to the 4-th attribute hyperedge. Under the textual modality, user u_1344 has larger dependency scores with the 3-rd hyperedge. Thus, we guess that the four items (i_51, i_906, i_1167, and i_2131) closely related to head hyperedges can reflect user u_1344's true preferences, while item i_4663 attached to the 1-st hyperedge may be a noisy interaction. (2) Although the interacted items are largely non-overlapping, user u_4351 and user u_1344 still have similar hyperedge dependencies, demonstrating why their global embeddings are similar. The results further reveal that LGMRec can exploit global hypergraph learning to distill similar knowledge of item modal features for performance improvement.
§ CONCLUSION

In this work, we proposed a novel model LGMRec for MRSs, which captures and utilizes local embeddings with local topological information and global embeddings with hypergraph dependencies. Specifically, we adopted a local graph embedding module to independently learn collaborative-related and modality-related local user interests. A global hypergraph embedding module is further designed to mine global user interests. Extensive experiments on three datasets demonstrated the superiority of our model over various baselines. For future work, we intend to seek better means of modeling the differences and commonalities among modalities for further performance improvement.

§ ACKNOWLEDGEMENTS

We would like to thank all anonymous reviewers for their valuable comments. The work was partially supported by the National Key R&D Program of China under Grant No. 2022YFC3802101 and the National Natural Science Foundation of China under Grant No. 62272176.

§ APPENDIX

§.§ Complexity Analysis of LGMRec

We conduct a complexity analysis of LGMRec. The computational cost of LGMRec mainly comes from two parts. In the local graph embedding module, the collaborative graph embedding has an 𝒪(L × |ℰ| × d) complexity, where L is the number of graph message passing layers and |ℰ| is the number of edges in the user-item interaction graph 𝒢. For modality graph embedding, the computational cost of modal feature initialization is 𝒪(|ℳ| × d_m × d), where |ℳ| is the number of modalities. The modality graph propagation has the same computational complexity as the collaborative graph embedding. Thus, the overall time complexity of the local graph embedding module is 𝒪(((L+K) × |ℰ| + |ℳ| × d_m) × d). For the global hypergraph embedding module, the time complexity of hypergraph dependency constructing is 𝒪(|ℳ| × A × |ℐ| × (|𝒰| + d_m)), where A is the number of hyperedges. The hypergraph message passing schema takes 𝒪(|ℳ| × (|ℐ| × H + |𝒰|) × A × d) complexity with the global information propagation, where H is the number of hypergraph layers. The cost of the hypergraph contrastive learning is 𝒪(B × (|𝒰| + |ℐ|) × d) with only two modalities v and t, where B is the batch size. The overall time complexity of the global hypergraph embedding module is 𝒪(|ℳ| × A × (|ℐ| × ((|𝒰| + d_m) + H × d)) + |𝒰|). In practice, our two modules can be executed in parallel, which makes LGMRec quite efficient in actual execution. During the training process, its actual running time is comparable to existing methods, such as MMGCL <cit.> and SLMRec <cit.>. Compared to existing graph-based multimodal recommenders, LGMRec only involves 𝒪(|ℳ| × A × d_m) extra parameters for the memory cost.

§.§ Baselines

(i) General CF Models
* BPR <cit.> maps the users and items into a low-dimensional latent embedding space and utilizes the Bayesian pairwise ranking loss to optimize model parameters.
(ii) Graph-based Recommendations
* LightGCN <cit.> is a typical graph-based CF method that utilizes light graph convolutional networks to learn the high-order connectivity of users and items.
* SGL <cit.> introduces contrastive learning to enhance graph collaborative filtering. We implement this method by data augmentation with random edge dropout.
* NCL <cit.> enhances the graph-based CF model by identifying structural and semantic neighboring nodes as positive samples to construct contrastive views.
(iii) Hypergraph-based Recommendations
* HCCF <cit.> leverages the hypergraph neural network to inject the global collaborative relations into the graph-based recommendation.
* SHT <cit.> captures the global collaborative embeddings for contrastive learning by jointly utilizing a hypergraph encoder and a multi-head attention mechanism.
(iv) Multi-Modal Recommendations
* VBPR <cit.> integrates modal features with ID embeddings to extend the traditional CF paradigm.
* MMGCN <cit.> learns fine-grained modality-specific user preferences by performing message passing on the user-item bipartite graph of each modality.
* GRCN <cit.> is a structure-refined graph multimedia recommender, in which modality contents are used to adjust the structure of the interaction graph by identifying the noisy edges.
* LATTICE <cit.> exploits multi-modal features to mine the latent semantic structure between items to improve multi-modal recommendation.
* MMGCL <cit.> incorporates contrastive learning into multimodal recommendation via graph augmentation with modality-related edge dropout and masking.
* MICRO <cit.> extends LATTICE <cit.> to fuse multimodal features by introducing contrastive learning to capture modality-shared and modality-specific information.
* SLMRec <cit.> devises three types of data augmentation at different granularities to achieve multi-modal self-supervised tasks.
* BM3 <cit.> utilizes a simple latent embedding dropout mechanism to generate contrastive views in self-supervised learning for multimodal recommendation.

§.§ Parameter Setting

For a fair comparison, we optimize all models with the default batch size 2048, learning rate 0.001, and embedding size d=64. Table <ref> presents the other optimal parameter settings for the three datasets. In addition, all experiments in this paper are performed in the same experimental environment with an Intel(R) Xeon(R) Silver 4210R CPU @ 2.40GHz and a GeForce RTX 3090.

§.§ Hyperparameter Analysis

We explore the impacts of all hyperparameters on performance and report them in Figure <ref>.
* Collaborative graph layers L. From Figure <ref> (a), the results of the single-layer model are slightly inferior to those of the multi-layer model. The outcomes suggest that combining ID embeddings with sufficient multi-layer local structure information can obtain higher-quality user and item representations. In addition, sparser datasets may require a deeper and larger local structure receptive field to facilitate recommendation, e.g., L=4 on Sports and L=3 on Clothing.
* Modality graph layers K. The results in Figure <ref> (b) demonstrate that the two-layer model achieves better performance and increasing the number of layers does not bring a performance improvement, which indicates that the discrimination of nodes decreases as the layer number increases. The reason may be that aggregating deeper neighbors may lead to knowledge redundancy of node modality features.
* Hypergraph layers H. The impact of the hypergraph layer number H is shown in Figure <ref> (c). From the results, we can see that a shallow global embedding performs better than multiple layers, possibly because multi-layer hypergraph propagation can lead to excessive smoothness of node representations and reduce performance. In practice, we can take H=1 for all datasets.
* Hyperedge number A. Figure <ref> (d) shows the performance of LGMRec with different settings of the hyperedge number A. As mentioned in our experiments, the performance improves as the number of hyperedges increases on the sparser Clothing dataset. For the Baby and Sports datasets, the performance usually reaches its optimum at A=4.
The results demonstrate the effectiveness of capturing multi-hyperedge global structures, especially for sparser datasets.
* Adjustable factor α. The performance of LGMRec with different settings of the weight α is reported in Figure <ref> (e). The results on the three datasets show a consistent trend, that is, the performance first increases to the optimal value and then decreases. The results suggest that properly supplementing global embeddings is suitable for modeling robust user interests. So, we can set α=0.3, 0.6, 0.2 for the Baby, Sports, and Clothing datasets, respectively.
* Drop ratio ρ. We tune the dropout ratio ρ from {0.1, 0.2, …, 1.0} to control the retention of hypergraph structure dependencies. The results in Figure <ref> (f) indicate that proper dropout (compared to no dropout, i.e., ρ=1.0) is suitable to suppress the global noise and improve the robustness of the representation.
* Coefficient λ_2. The coefficient λ_2 determines the influence of the hypergraph contrastive loss, and the performance of LGMRec under different λ_2 is shown in Figure <ref> (g). Similar to the adjustable factor α, the performance also first improves to reach the optimum and then declines as λ_2 increases. The results illustrate that an appropriate λ_2 can mitigate value scale differences between the HCL loss and the BPR loss. In practice, we can uniformly set λ_2=1e^-4 on the three datasets.
* Regularization coefficient λ_1. We perform a grid search for the parameter λ_1 to verify the effect of regularization. From Figure <ref> (h), we can see that the effect of different coefficients λ_1 is negligible on the three datasets. Therefore, a small λ_1=1e^-6 is desirable.
{ "authors": [ "Zhiqiang Guo", "Jianjun Li", "Guohui Li", "Chaoyang Wang", "Si Shi", "Bin Ruan" ], "categories": [ "cs.IR" ], "primary_category": "cs.IR", "published": "20231227040706", "title": "LGMRec: Local and Global Graph Learning for Multimodal Recommendation" }
We point out an error in the paper “Linear Time Encoding of LDPC Codes” (by Jin Lu and José M. F. Moura, IEEE Trans). The paper claims to present a linear time encoding algorithm for every LDPC code. We present a family of counterexamples, and point out where the analysis fails. The algorithm in the aforementioned paper fails to encode our counterexample, let alone in linear time.

§ INTRODUCTION

A Low Density Parity Check (LDPC) code is defined as the null-space of a low density m×n (m<n) matrix over 𝔽_2. In this context, a matrix is called low density if each row has O(1) ones. Gallager was the first to study random ensembles of such codes <cit.> and proved that they can be decoded in linear time by a simple message-passing algorithm. Sipser and Spielman <cit.> showed that this works whenever the parity-check graph is a good enough expander. Although decoding is optimal (linear time), the straightforward encoding procedure is quadratic, because the generator matrix of these codes is dense. Are there more efficient algorithms? There are several specific families of codes for which the encoding complexity has been analyzed. For example:
* Spielman <cit.> constructs a family of linear time encodable and decodable codes.
* Richardson et al. <cit.> present a number of encoding schemes (distinguished by their preprocessing algorithms) and analyze their expected performance on various LDPC distributions. They find that certain distributions can be encoded in expected linear time using these algorithms. However, Di et al. <cit.> show that these distributions have expected sub-linear distance.
As previously mentioned, Richardson et al. <cit.> introduce multiple preprocessing algorithms. These algorithms, notably Algorithms C and D, exhibit similarities to those proposed by Lu et al. <cit.>. Both papers aim to triangularize the input LDPC matrix, or bring it as close as possible to triangular form, through greedy row and column permutations. The primary distinction between the algorithms suggested by <cit.> and <cit.> lies in the fact that the former allows a restricted number of row additions along with row and column permutations, while the latter does not. <cit.> calculate an expected quadratic encoding complexity for certain LDPC distributions, whereas <cit.> claim that their algorithm assures linear time encoding for all LDPC codes. Excluding Lu and Moura <cit.>, there have been no claims of a sub-quadratic encoding algorithm for general LDPC codes. In this note we address the algorithm presented in <cit.> and show that it contains a critical flaw. Specifically, their algorithm is indeed linear-time, but fails to encode the code given by the input LDPC matrix. There exists a family of LDPC codes given by matrices M_n on which the algorithm presented in <cit.> fails to encode the code. In particular, the algorithm of <cit.> fails on the 9×18 matrix M_18 depicted in the following figure, as we illustrate in Section <ref>.

§ DEFINITIONS

If A is an m×n matrix and B is a p×q matrix, then the Kronecker product A ⊗ B is the pm×qn block matrix A ⊗ B = [ [ a_1,1B ⋯ a_1,nB ; ⋮ ⋱ ⋮ ; a_m,1B ⋯ a_m,nB ] ]. A linear code 𝒞 specified by a parity check matrix M∈{0,1}^m×n is the linear subspace 𝒞 = { x ∈𝔽_2^n : Mx = 0 }. This space is also referred to as the kernel of M, denoted as Ker(M).
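For concreteness, the Kronecker product is available directly in numpy; the block below is of the type used in the counterexample of Section 4 (illustrative snippet, not from the original note):

```python
import numpy as np

# The Kronecker product from the definition; numpy provides it directly.
# I_2 (x) 1_{3x2} is the kind of block used later in the counterexample M_n.
block = np.kron(np.eye(2, dtype=int), np.ones((3, 2), dtype=int))
print(block.shape)   # (6, 4): two disjoint 3x2 all-ones blocks on the diagonal
```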
We also like to view M as the adjacency matrix of a bipartite graph with left nodes [n] called variables and right nodes [m] called constraints, such that M(i,j)=1 iff the j-th variable is connected to the i-th constraint. Lu and Moura mainly use the graph representation, while we prefer the matrix view. In Section <ref> we will present the algorithms from <cit.> in both languages. We will freely alternate between the terms variable and column, constraint and row. Given a matrix M∈{0,1}^m×n, a set of row indices C⊂[m] and column indices V⊂[n], we use the notation M(C,V) to denote the sub-matrix of M induced by these sets. This corresponds to the subgraph of the Tanner graph induced by the vertices V⊂[n] and C⊂[m]. We will use the following definition of an (algebraic) circuit over 𝔽_2. A circuit Φ = (V,E) is a directed acyclic graph such that every vertex has in-degree at most 2 (i.e. fan-in 2). The input of the circuit, I(Φ) ⊆ V, is the set of all vertices with in-degree 0. The output of the circuit, O(Φ), is the set of all vertices with out-degree 0. The size of the circuit is |Φ| = |V|. Vertices of a circuit are sometimes called gates. We note that while formally the fan-in in this model is 2, the results in this paper remain the same if the fan-in bound is relaxed to any constant d. Let Φ be a circuit with input vertices I = {v_1,v_2,…,v_k} and output vertices O = {u_1,u_2,…,u_m}. A circuit naturally calculates a linear function T:𝔽_2^k →𝔽_2^m as follows. Given (x_1,x_2,…,x_k) ∈𝔽_2^k, every input vertex v_i is labeled with x_i. Then the label of every other vertex is set to be the sum of the labels of its incoming neighbors (mod 2). The fact that Φ is acyclic ensures that such a labeling is possible. Finally, the value of T(x_1,x_2,…,x_k) = (ℓ(u_1),ℓ(u_2),…,ℓ(u_m)), where ℓ(u_j) is the labeling of u_j. An infinite family of matrices M_n ∈{0,1}^m_n×n_n is linear time encodable if there exists a constant c > 0 and circuits Φ_n of size at most cn such that Φ_n calculates a linear isomorphism T_n:𝔽_2^k_n→ Ker(M_n). Lu and Moura <cit.> first present a construction of linear sized circuits and characterize the codes that are encodable by this construction. These codes are those whose code graph does not contain certain subgraphs called Encoding Stopping Sets (ESS) or Pseudo Encoding Stopping Sets (PESS). Graphs without such subgraphs are called Pseudo-Trees. We believe this part of their paper is correct, see <cit.>, which refers to connected graphs but easily generalizes to unions of such. Then <cit.> introduces an algorithm that takes as input an LDPC matrix M and outputs a linear sized circuit Φ that is supposed to encode M. They do so by decomposing the matrix M into submatrices that are encodable via their initial construction (or a slight modification of it). We will show that this decomposition fails. Doing so requires a few more definitions. Let V ⊆ [n] and C ⊆ [m] be subsets of the columns and rows of a matrix M. An Encoding Stopping Set (ESS) is a submatrix M(C,V) such that:
* For all c ∈ C, {v : M(c,v) = 1} ⊆ V. That is, all variables participating in this constraint are in V.
* For all v∈ V, the Hamming weight of the v-th column is at least 2 in M(C,V). That is, every variable v∈ V participates in at least two constraints in C.
* The set of rows corresponding to C is linearly independent (in other words, M(C,[n]) has rank |C| over 𝔽_2).
If items 1 and 2 hold but item 3 does not, we call the submatrix a Pseudo Encoding Stopping Set (or PESS).
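The three conditions can be checked mechanically. The following Python sketch (ours; classify_submatrix and gf2_rank are hypothetical helper names) classifies a candidate submatrix M(C,V) given M as a 0/1 numpy array:

```python
import numpy as np

def gf2_rank(M):
    """Rank of a 0/1 matrix over F_2 via Gaussian elimination."""
    M = (np.array(M, dtype=np.uint8) % 2).copy()
    rows, cols = M.shape
    rank, col = 0, 0
    while rank < rows and col < cols:
        pivot = np.nonzero(M[rank:, col])[0]
        if pivot.size:
            p = rank + pivot[0]
            M[[rank, p]] = M[[p, rank]]
            for r in range(rows):
                if r != rank and M[r, col]:
                    M[r] ^= M[rank]
            rank += 1
        col += 1
    return rank

def classify_submatrix(M, C, V):
    """Return 'ESS', 'PESS', or None for the submatrix M(C, V)."""
    C, V = sorted(C), sorted(V)
    Vset = set(V)
    # condition 1: every constraint in C has all its variables inside V
    if any(set(np.nonzero(M[c])[0]) - Vset for c in C):
        return None
    sub = M[np.ix_(C, V)]
    # condition 2: every column of M(C, V) has Hamming weight >= 2
    if (sub.sum(axis=0) < 2).any():
        return None
    # condition 3: the rows of C must be linearly independent over F_2
    return "ESS" if gf2_rank(M[C, :]) == len(C) else "PESS"
```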
A matrix is a Pseudo-Tree if it does not contain an ESS or a PESS. Lu and Moura also require that the Pseudo-Tree is connected (as a graph), but this requirement is not necessary for our purposes. Additionally, they define Pseudo-Trees in different terms, but they show the equivalence to our definition (see <cit.>). As mentioned, Pseudo-Trees are linear-time encodable via a greedy algorithm, see <cit.>. Lu and Moura observe that if an ESS is “almost” a Pseudo-Tree, then it is linear-time encodable. By “almost” we mean that there is a constant number of constraints whose removal yields a Pseudo-Tree. Thus, they also give the following definition. Let M(C,V) be a (P)ESS. We say that M(C,V) is a k-fold-constraint Encoding Stopping Set if the following two conditions hold.
* There exist k constraints c_1,...,c_k ∈ C s.t. M(C∖{c_1,...,c_k}, V) does not contain any PESS or ESS.
* For any k-1 constraints c_1,...,c_{k-1}, M(C∖{c_1,...,c_{k-1}}, V) contains a PESS or ESS.
Lu and Moura provide a linear time encoding algorithm for any 1- or 2-fold-constraint (P)ESS <cit.>.

§.§ Lu and Moura's algorithms

We now present the two main algorithms used in Lu and Moura's paper. We will present a graph version as well as a matrix version of these algorithms. Algorithm <ref> finds a PESS or a 1- or 2-fold-constraint ESS in a given bipartite graph (see <cit.> for Lu and Moura's version). The second algorithm, Algorithm <ref>, utilizes Algorithm <ref> to decompose the given graph into linear-time encodable components (see <cit.>). We give a more streamlined description of their algorithms. In particular, we added Algorithm <ref> as a sub-procedure of Algorithm <ref> for easier reference, and we omit the analysis from the description of the algorithms.

(P)ESS-FINDER <cit.>, Find a 1- or 2-fold-constraint (P)ESS — graph language
Input: A Tanner graph G=(V∪ C,E) with maximal variable degree 3.
* Initialize H=(V_H∪ C_H,E_H) ← G, S ←∅.
* While H ≠∅:
* Choose a lightest constraint c ∈ C_H. Add c and its neighbours to S and remove them from H.
* If c doesn't have any neighbours in H and <ref>(S) ≠∅, then output <ref>(S).
* Return S.

STRIP Remove degree-1 variables
Input: Graph S.
* While there exists a degree-1 bit node x ∈ V_S:
* Remove x and its neighbours from S.
* Return S.

(P)ESS-FINDER <cit.>, Find a 1- or 2-fold-constraint (P)ESS — matrix language
Input: An LDPC matrix M ∈{0,1}^m×n with maximal column weight 3.
* Initialize H←M, C←∅, V←∅.
* While C ≠ [m] and V ≠ [n]:
* Choose a lightest row c. Let V_c be the indices of the corresponding variables. Add c to C, add V_c to V, and zero the columns V_c in H.
* If V_c=∅ and STRIP(M(C,V)) ≠∅, then output STRIP(M(C,V)).
* Return M(C,V).

The STRIP procedure used in the matricial version of Algorithm <ref> is equivalent to the one defined for graphs, namely we remove from V any column of weight one (and the corresponding row from C) and repeat.

DECOMPOSE <cit.>, Decompose a PC matrix into ESS's, PESS's and encodable components
Input: An LDPC matrix M(C,V).
* Initialize i← 1, Components←∅, M(C_1,V_1) ← <ref>(M(C,V)).
* While M(C_i,V_i) ≠ M(C,V):
* If M(C_i,V_i) is a PESS:
* Find C' ⊆ C_i such that ∑_c ∈ C' c|_V_i = 0 (modulo 2).
* Choose a constraint c∈ C' and remove it from C_i.
* If i>1:
* Remove M(C_{i-1},V_{i-1}) from Components.
* Add the constraint c^* = ∑_c∈ C' c|_V_{i-1} to C_{i-1}.
* Add <ref>(M(C_{i-1},V_{i-1})) to Components.
* Add M(C_i,V_i) to Components.
* C ← C∖ C_i.
* V ← V∖ V_i.
* i← i+1.
* M(C_i,V_i) ← <ref>(M(C,V)).
* Add M(C,V) to Components.
* Output Components.
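As an illustration of the sub-procedure, here is a direct Python transcription of the matricial STRIP (our sketch; it returns the surviving submatrix, which is empty iff STRIP(M(C,V)) = ∅ in the algorithm above):

```python
import numpy as np

def strip(S):
    """STRIP, matrix language: repeatedly delete a weight-1 column together
    with the single row supporting it; return the surviving submatrix."""
    S = np.asarray(S)
    rows = list(range(S.shape[0]))
    cols = list(range(S.shape[1]))
    changed = True
    while changed:
        changed = False
        for j in list(cols):
            support = [i for i in rows if S[i, j]]
            if len(support) == 1:
                rows.remove(support[0])
                cols.remove(j)
                changed = True
                break          # weights changed after a removal; re-scan
    return S[np.ix_(rows, cols)]
```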
For the graph version of Algorithm <ref> we use the notation G(U') to denote the subgraph of G=(U,E) induced by U' ⊆ U.

DECOMPOSE <cit.>, Decompose a graph into ESS's, PESS's and encodable components
Input: A Tanner graph G = (V ∪ C,E).
* Initialize i← 1, Components←∅, G_1 = G(V_1 ∪ C_1) ← Algorithm <ref>(G).
* While G_i ≠ G(V∪ C):
* If G_i is a PESS:
* Find C' ⊆ C_i such that ∑_c ∈ C' c|_V_i = 0 (modulo 2).
* Choose a constraint vertex c∈ C' and remove it from C_i.
* If i>1:
* Remove G_{i-1} from Components.
* Add the vertex c^* = ∑_c∈ C' c|_V_{i-1} to C_{i-1}.
* Add <ref>(G_{i-1}) to Components.
* Add G_i to Components.
* C ← C∖ C_i.
* V ← V∖ V_i.
* i← i+1.
* G_i ← <ref>(G(V∪ C)).
* Add G to Components.
* Output Components.

§ THE FLAW IN THE PAPER

Before pointing to the error in <cit.> we provide an outline of their intended encoding strategy.
* Input: An LDPC matrix M with kernel dimension k. It is assumed that each column in M has weight at most 3, by adding variables if needed.
* Output: A linear-sized circuit Φ that implements w↦ Gw for all w∈𝔽_2^k, for some matrix G such that Im G = Ker M.
The paper's approach for constructing Φ from M is based on a decomposition algorithm, Algorithm <ref> (see <cit.>), which generates a list of “components” that are pseudo-trees and 1- or 2-fold-constraint Encoding Stopping Sets, using Algorithm <ref>. As mentioned earlier, <cit.> observe that each component, being a pseudo-tree or a 1- or 2-fold-constraint ESS, admits linear time encoding via the label-and-decide or label-and-decide-recompute algorithm in <cit.>. This algorithm shows how to partition the bits into message bits and output bits so that one can propagate the values from message bits to output bits, using the constraints, in linear time. The decompose algorithm outputs components together with an implicit labeling of their input and output bits. Every component corresponds to a matrix in the output of Algorithm <ref>. However, these can be described as a collection of circuits and connections between them, as portrayed in Figure <ref>. More precisely, the components can be described as a collection of circuits Φ_1,Φ_2,…,Φ_i that can be composed into a circuit Φ that encodes the code, such that in this decomposition some of the input bits of every Φ_i are connected to some of the output bits of Φ_j for 1 ≤ j < i. Every circuit Φ_i is supposed to encode a pseudo-tree or a 1- or 2-fold (P)ESS; thus the matrices that are output by Algorithm <ref> should be pseudo-trees or 1- or 2-fold (P)ESSs.

§.§ The Flaw

Unfortunately, it is not true that Algorithm <ref> returns 1- or 2-fold-constraint (P)ESS's and Pseudo-Trees. In Step <ref>, parallel to the third line of the decomposition algorithm in <cit.>, the authors claim that if Algorithm <ref> outputs a PESS, then removing one constraint transforms it into a Pseudo-Tree. In other words, <cit.> falsely assume that the only linear dependency is the sum of all constraints, and thus that the removal of any single constraint resolves the linear dependency. However, there could be multiple linear dependencies on this same set of variables. The algorithm fails to take these into consideration: the constraints that we failed to add at step <ref> are never taken into consideration later, thereby resulting in a code that has too many codewords. In the next section we provide an example where Algorithm <ref> encounters a PESS which has a linear number of constraints that are ignored. The first component output by the algorithm thus has a kernel that is much larger and contains many non-codewords.
§ A COUNTER EXAMPLE

There exists a sequence of matrices M_n∈{0,1}^m×n with a sequence of submatrices A_n that satisfy:
* There exists a valid set of choices for Algorithm <ref> such that <ref>(M_n) outputs A_n.
* dim(Ker(M_n)) ≤ n/2+2.
* dim(Ker(A_n)) ≥ n/2+Ω(n).
Let us assume that the encoding scheme suggested by <cit.> works. This implies that the message bits of the code are a union of the message bits of the components output by Algorithm <ref>. That is, denoting by k_i the number of new input bits that enter the i-th component, dim(Ker(M)) = ∑_i∈[r] k_i. As the following corollary shows, this is not always true. There exists a sequence of matrices M_n∈{0,1}^m×n and a valid set of choices for Algorithm <ref>(M_n) s.t. ∑_i∈[r] k_i > dim(Ker(M_n)). Assume towards contradiction that Algorithm <ref> is correct on any input matrix M. We assume that Algorithms <ref> and <ref> go through the constraints in order. By inspection, we can see that the first component output is A_n, which is an ESS. After removing it from M_n, the algorithm continues to decompose the remainder graph. In the next step, due to the structure of M_n, the algorithm will find a PESS, and therefore add a constraint to A_n, resulting in A'_n, and run Algorithm <ref> on A'_n. The output of this step is a list of components M'_1,…,M'_t, such that, assuming that Algorithm <ref> is correct and (<ref>) holds, ∑_j=1^t k_j = dim(Ker(A'_n)) ≥ dim(Ker(A_n)) - 1. However, by <ref> of <ref>, n/2+2 ≥ dim(Ker(M_n)), while <ref> of <ref> assures that dim(Ker(A_n)) ≥ n/2+Ω(n). Combining the above and invoking (<ref>) once more, n/2+2 ≥ dim(Ker(M_n)) ≥ dim(Ker(A'_n)) ≥ dim(Ker(A_n)) - 1 ≥ n/2+Ω(n), and this leads to a contradiction to the correctness of Algorithm <ref>.

The proof of Theorem <ref> will follow from the description of the counterexample, along with some necessary properties. Our counterexample has column weight 3 and row weight 6. There are n columns and m rows (m=n/2), where n=11N+7 for any odd integer N. The counterexample M_n∈{0,1}^m×n is M_n = [ [ A_n 0 ; B_n I_{(N+1)/2}⊗1_{3×2} ] ], where ⊗ stands for the Kronecker product (see Definition <ref>), and we now detail the matrices A_n, B_n.

The matrix A_n. We will now describe A_n ∈{0,1}^{(4N+2)×(10N+6)}. Towards this we shall decompose A_n = S_n + D_n (S stands for “stairs” and D for “diagonals”) and describe these two components separately. S_n is the matrix described by blocks in Figure <ref>, where:
* For all d∈[4], T_d = I_N ⊗1_d, where 1_d = (1,1,…,1) is the all-ones row vector of length d.
* The first row of S_n is all zero except for six 1's at columns 1 to 6, i.e. (1,1,1,1,1,1,0,0,…).

The matrix D_n. The matrix D_n∈{0,1}^{(4N+2)×(10N+6)} is described by blocks in Figure <ref>. The first column has a single 1 at the second entry; then there is a square block of height and width 4N+1, with 1's on its main diagonal and on the lower sub-diagonal. Afterwards come three identity matrices of dimensions 3N+1, 2N+1 and N+1. The last column has a single 1 at the last entry (which could be thought of as an identity matrix of dimension 1). All other entries of D_n are 0.

We show a formula for A_n. A_n(i,j) = 1 exactly in the following cases:
* i=1 and j∈[6];
* i∈[2,N+1] and j∈{i-1,i}∪{4i-1,…,4i+2};
* i∈[N+2,2N+1] and j∈{i-1,i}∪{i+3N+1}∪{3i+N+1,…,3i+N+3};
* i∈[2N+2,3N+1] and j∈{i-1,i}∪{i+3N+1}∪{i+5N+2}∪{2i+3N+3,2i+3N+4};
* i∈[3N+2,4N+1] and j∈{i-1,i}∪{i+3N+1}∪{i+5N+2}∪{i+6N+3}∪{i+6N+5};
* i=4N+2 and j∈{i-1,i}∪{i+3N+1}∪{i+5N+2}∪{i+6N+3}∪{10N+6}.
Observe that T_d(i,j) = 1 iff i = ⌈j/d⌉, i.e. j ∈{(i-1)d+1,…,id}.
The T_d block in S_n begins at a column offset of 6 + N·∑_{k=d+1}^{4} k and a row offset of 1+(4-d)N. For example, T_4 is shifted by six columns and one row, and therefore, in the rows corresponding to T_4, S_n(i,j)=1 iff j-6 ∈{(i-2)·4+1,…,(i-1)·4}. By calculating the shifts of T_3, T_2 and T_1 and rearranging, we get the algebraic definition of S_n: S_n(i,j)=1 exactly when:
* i=1 and j∈[6];
* i∈[2,N+1] and j∈{4i-1,…,4i+2};
* i∈[N+2,2N+1] and j∈{3i+N+1,…,3i+N+3};
* i∈[2N+2,3N+1] and j∈{2i+3N+3,2i+3N+4};
* i∈[3N+2,4N+1] and j = i+6N+5.
The matrix D_n consists of six diagonals (considering the bottom right entry as a length-1 diagonal).
* The first diagonal consists of the entries (i,j) satisfying j=i-1.
* The second diagonal has entries (i,j) where j=i and i≥ 2.
* The third diagonal starts at the (N+2, 4N+3)-entry and includes all indices (N+2+t, 4N+3+t) (for t∈{0,…,3N}). In other words, it contains all (i,j) s.t. j=4N+3+t=i+3N+1 (and i∈{N+2,…,4N+2}).
* The other diagonals are similarly calculated from Figure <ref>.
Putting together these six diagonals with the formula for S_n yields the full formula for A_n (Equation <ref>). We denote by j_t the leading entry index of the t-th row in S_n. One may verify the following formula: j_t = d_t·t + ((5-d_t)(4-d_t)/2)·N + 7-2d_t, for d_t = 5-⌈(t-1)/N⌉ (i.e. d_t is the step “width” of row t, and row t belongs to the rows corresponding to T_{d_t} in S_n).

The matrix B_n. The matrix B_n∈{0,1}^{3(N+1)/2×(10N+6)} is depicted in Figure <ref>. Note that the first identity matrix in B_n is placed starting at the second row, while the rest of the identity matrices begin at the first row. It is easy to observe that B_n(i,j)=1 exactly in the following cases:
* i=1 and j∈{1, (11N+5)/2, 7N+4, (17N+11)/2};
* i∈[2, 3(N+1)/2] and j∈{4N+i, (11N+3)/2+i, 7N+3+i, (17N+9)/2+i}.
The bottom right block of M_n was defined as I_{(N+1)/2}⊗1_{3×2} but could actually be any 3(N+1)/2 × (N+1) matrix, as long as it has row regularity 2 and column regularity 3. M_18 is small enough to sketch (see Figure <ref>). Before analyzing the counterexample we only need to confirm that all columns of M_n have weight 3 and all rows have weight 6, as claimed. Every row in M_n has weight 6, and every column has weight 3. Clearly every row in A_n has 6 ones (this can be verified by looking at Equation <ref>). Every row in B_n has 4 ones (as indicated in Equation <ref>), and the selection of the bottom right matrix is made to ensure that the weight of these rows is completed to 6. Let us verify that every column has 3 ones. The N+1 rightmost columns (corresponding to the bottom right block) clearly have 3 ones, so we only need to show that the columns of [ A_n; B_n ] each have 3 ones. The diagonal i starts at column 2 and continues to column 4N+2, and the diagonal i+3N+1 starts at column 4N+3 and ends a column before the diagonal i+5N+2 starts. Together with the diagonal i+6N+3 and the entry A_n(4N+2, 10N+6), the columns 2 to 10N+6 get another “layer” of 1's. The diagonal i-1 “covers” columns 1 to 4N+1, so in total, using Claim <ref>, the columns [2, 4N+1] have weight 3, while the columns {1}∪[4N+2,10N+6] have weight 2 within A_n. The weight of these columns is completed to 3 by the columns of B_n, which can be easily verified from its definition.
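The construction is easy to transcribe into code. The following numpy sketch (build_Mn is our name, not from the original note) builds M_n directly from the formulas for A_n and B_n above and checks the row/column regularity asserted by the lemma:

```python
import numpy as np

def build_Mn(N):
    """Construct the counterexample M_n for odd N (n = 11N + 7, m = n / 2),
    following the formulas for A_n and B_n given above."""
    n = 11 * N + 7
    # --- A_n: (4N+2) x (10N+6), A_n = S_n + D_n ---
    A = np.zeros((4 * N + 2, 10 * N + 6), dtype=np.uint8)
    for i in range(1, 4 * N + 3):            # 1-based row index
        if i == 1:
            js = set(range(1, 7))
        elif i <= N + 1:
            js = {i - 1, i} | set(range(4 * i - 1, 4 * i + 3))
        elif i <= 2 * N + 1:
            js = {i - 1, i, i + 3 * N + 1} | set(range(3 * i + N + 1, 3 * i + N + 4))
        elif i <= 3 * N + 1:
            js = {i - 1, i, i + 3 * N + 1, i + 5 * N + 2,
                  2 * i + 3 * N + 3, 2 * i + 3 * N + 4}
        elif i <= 4 * N + 1:
            js = {i - 1, i, i + 3 * N + 1, i + 5 * N + 2,
                  i + 6 * N + 3, i + 6 * N + 5}
        else:                                 # i = 4N + 2
            js = {i - 1, i, i + 3 * N + 1, i + 5 * N + 2,
                  i + 6 * N + 3, 10 * N + 6}
        for j in js:
            A[i - 1, j - 1] = 1
    # --- B_n: 3(N+1)/2 x (10N+6) ---
    rB = 3 * (N + 1) // 2
    B = np.zeros((rB, 10 * N + 6), dtype=np.uint8)
    B[0, np.array([1, (11 * N + 5) // 2, 7 * N + 4, (17 * N + 11) // 2]) - 1] = 1
    for i in range(2, rB + 1):
        for j in (4 * N + i, (11 * N + 3) // 2 + i, 7 * N + 3 + i,
                  (17 * N + 9) // 2 + i):
            B[i - 1, j - 1] = 1
    # --- assemble M_n = [A_n 0 ; B_n I_{(N+1)/2} (x) 1_{3x2}] ---
    bottom_right = np.kron(np.eye((N + 1) // 2, dtype=np.uint8),
                           np.ones((3, 2), dtype=np.uint8))
    top = np.hstack([A, np.zeros((A.shape[0], N + 1), dtype=np.uint8)])
    M = np.vstack([top, np.hstack([B, bottom_right])])
    assert M.shape == (n // 2, n)
    # the lemma: every row has weight 6, every column has weight 3
    assert (M.sum(axis=1) == 6).all() and (M.sum(axis=0) == 3).all()
    return M
```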
We move on to prove Theorem <ref>.

§.§ Proof of Theorem 4.1

For the reader's convenience we restate the theorem. [Restated]
* For item <ref> we use the following lemma (proved at the end of this section). For t=1,2,…,4N+1, choosing row t at iteration t is a valid choice for Algorithm <ref>(M_n). Assuming Algorithm <ref> makes these choices, then after iteration 4N+1 all the columns with 1's in those rows are zeroed out, yielding the residual matrix M̃_n = [ [ 0_{(4N+2)×(10N+6)} 0_{(4N+2)×(N+1)} ; 0_{3(N+1)/2×(10N+6)} I_{(N+1)/2}⊗1_{3×2} ] ]. So at iteration 4N+2, row 4N+2 (the last row of A_n) will certainly be chosen since it has weight 0, while all rows > 4N+2 (the rows corresponding to B_n) have weight 2. According to step <ref> of Algorithm <ref>, this leads to the call STRIP(A_n), which returns A_n since all columns of A_n have weight at least 2. Hence, Algorithm <ref> halts and outputs A_n.
* The removal of rows 1 and 4N+3 from M_n (corresponding to the first row of A_n=D_n+S_n and the first row of B_n) yields an (m-2)-row matrix that has a “staircase form”, so Rank(M_n) ≥ m-2. This can be seen by looking at D_n (Figure <ref>) and at B_n (Figure <ref>): the first diagonal in D_n continues into B_n's first identity matrix. We conclude that dim(Ker(M_n)) = n - Rank(M_n) ≤ n-(m-2) = n/2+2.
* A_n has 10N+6 columns and 4N+2 rows, therefore dim(Ker(A_n)) = 10N+6 - Rank(A_n) ≥ 6N+4 = n/2 + Ω(n) ≫ n/2 (recalling that n=11N+7).
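Items 2 and 3 can also be checked numerically for small N. The harness below is illustrative and assumes the build_Mn and gf2_rank functions from the earlier sketches are in scope:

```python
def check_counterexample(N):
    """Check items 2 and 3 of the theorem for one odd N (illustrative;
    assumes build_Mn and gf2_rank from the earlier sketches)."""
    M = build_Mn(N)
    n = 11 * N + 7
    A = M[: 4 * N + 2, : 10 * N + 6]      # the first component A_n
    dim_ker_M = n - gf2_rank(M)
    dim_ker_A = (10 * N + 6) - gf2_rank(A)
    assert dim_ker_M <= n // 2 + 2        # item 2: dim Ker(M_n) <= n/2 + 2
    assert dim_ker_A >= 6 * N + 4         # item 3: dim Ker(A_n) >= 6N + 4
    return dim_ker_M, dim_ker_A

# check_counterexample(1) should give (9, 10), matching the discussion
# of M_18 at the end of this note.
```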
Thus far we showed that at iterations t∈ [4N+1], row t was not heavier than any of the rows i ∈ (t, 4N+2] (the rows below t in A_n). Now we prove that the first row in B_n is the lightest among all rows of B_n, that is, wt(4N+3,j_t)≤ wt(i, j_t) for any i ≥ 4N+3. We denote by i'=i-4N-2 the index of row i relative to B_n. The value wt(4N+3,j) decreases at columns j∈{1, (11N+3)/2+1, 7N+4, (17N+9)/2+1}, while the value wt(i, j) decreases at j∈{4N+i', (11N+3)/2+i', 7N+3+i', (17N+9)/2+i'} for all i>4N+3. We thus conclude that wt(4N+3,j_t) ≤ wt(i,j_t) for all i>4N+3 and all t.At last we show that at iterations t∈[4N+1], row t of A_n is no heavier than the first row of B_n (i.e. wt(t,j_t)≤ wt(4N+3,j_t)). This is also a case analysis: * At row t=1, wt(1,1)=wt(4N+3,1)=6.* For t∈[2, N+1], we prove that wt(4N+3,j_t)≥4=wt(t,j_t): The rows t∈[2, N+1] correspond to the block T_4 in S_n, therefore wt(t,j_t)=4. By the definition of S_n (Equation <ref>), when t∈[2, N+1], then j_t=4t-1, i.e. j_t∈[7, 4N+3]. By the definition of B_n (Equation <ref>), the first row has weight at least 2 for all j≤ 7N+4. Adding weight 2 from the block I_(N+1)/2 ⊗ 1_3×2, we get that wt(4N+3,j)≥ 4 for all j ∈ [7, 4N+3]. * Similarly we can show that in rows t∈[N+2, 2N+1], wt(4N+3, j_t)≥3=wt(t,j_t). These rows correspond to the block T_3 in S_n, therefore wt(t,j_t)=3 and j_t∈ [4N+7,7N+4], while for all j≤(17N+11)/2, wt(4N+3,j)≥3.* In rows t∈[2N+2, 4N+1], wt(4N+3, j_t)≥2≥ wt(t, j_t). For all columns corresponding to A_n (j∈[10N+6]), wt(4N+3, j)≥2 since the first row of the block I_(N+1)/2 ⊗ 1_3×2 contributes 2 to the weight of row 4N+3. Rows t∈[2N+2, 4N+1] correspond to the blocks T_2 and T_1 in S_n and therefore wt(t, j_t)∈{1,2}. § THE TYPICAL CASEWe would like to emphasize that although our example may seem like a carefully constructed counterexample, it seems that a random low density matrix will fail as well. In <cit.> the authors analyze the expected behavior of similar algorithms on random matrices, and conclude that none of them yield sub-quadratic encoding complexity. The nature of their analysis is heuristic and therefore cannot serve as a formal proof. Nevertheless, their results are backed by experimentation, so they likely hold in practice. Our experiments show that a random matrix will have a first component with too many message bits, with very high probability. (The probability depends on the row and column weight distributions. The observation applies, for example, to (3,6)-regular matrices that are large enough, say n>200.) § RUNNING ALGORITHM <REF> ON M_18 As a warm up, let us run Algorithm <ref> on M_18, depicted in Figure <ref>. There are many choices made by this algorithm, so we will show a sequence of choices that result in an output of components that do not describe the code Ker(M_18).* Let M=M_18. The first time Algorithm <ref> is called as a subroutine, it is called on M and returns A_18=M([6],[16]), the first six rows in M: * This is because for iterations i=1,2,…,6 of Algorithm <ref>, the i-th row of M is a lightest row.* After selecting these rows, in the sixth iteration V_c = ∅ so after running step 2(b), the STRIP procedure returns A_18, which Algorithm <ref> outputs. This matrix has full rank so it is not a PESS.* The second time Algorithm <ref> is called as a subroutine, it is called on M({7,8,9},{17,18}) (the grey part of Figure <ref>).
One observes that M({7,9},{17,18}) is a valid output in this step.* M({7,9},{17,18}) is a PESS, so we remove one row (say, the 7-th row) and recursively run Algorithm <ref> on the matrix whose rows are those of A_18 plus the row c_7 + c_9 (the sum of the seventh and ninth rows of M_18). Assuming Algorithm <ref> works properly, the components it returns when called on A_18 with the new row will have at least 9 input bits, since there are 16 variables and 7 constraints. However, the residual matrix M({8,9},{17,18}) has rank 1, so it also has an input bit. In total, the output components will have at least 10 input bits, while M_18 has only 9 (it has full rank). We conclude that the output components of Algorithm <ref>(M_18) do not describe the code Ker(M_18).
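As a sanity check on the algebraic definitions, the following short Python sketch (our addition, not an artifact of the paper) builds S_n directly from the case formula above and verifies the leading-entry formula j_t for the staircase rows t = 2,...,4N+1 (row 1, the flat top row, has its leading entry at column 1 and is excluded).

import math

def build_S(N):
    # S_n has 4N+1 non-empty rows and 10N+6 columns; stored 1-indexed.
    rows, cols = 4 * N + 1, 10 * N + 6
    S = [[0] * (cols + 1) for _ in range(rows + 1)]
    for j in range(1, 7):                              # i = 1: first six columns
        S[1][j] = 1
    for i in range(2, N + 2):                          # T_4: i in [2, N+1]
        for j in range(4 * i - 1, 4 * i + 3):
            S[i][j] = 1
    for i in range(N + 2, 2 * N + 2):                  # T_3: i in [N+2, 2N+1]
        for j in range(3 * i + N + 1, 3 * i + N + 4):
            S[i][j] = 1
    for i in range(2 * N + 2, 3 * N + 2):              # T_2: i in [2N+2, 3N+1]
        S[i][2 * i + 3 * N + 3] = 1
        S[i][2 * i + 3 * N + 4] = 1
    for i in range(3 * N + 2, 4 * N + 2):              # T_1: i in [3N+2, 4N+1]
        S[i][i + 6 * N + 5] = 1
    return S

def j_lead(t, N):
    # j_t = d_t*t + (5-d_t)(4-d_t)/2 * N + 7 - 2*d_t, with d_t = 5 - ceil((t-1)/N).
    d = 5 - math.ceil((t - 1) / N)
    return d * t + (5 - d) * (4 - d) // 2 * N + 7 - 2 * d

N = 5
S = build_S(N)
for t in range(2, 4 * N + 2):
    lead = next(j for j, v in enumerate(S[t]) if v == 1)
    assert lead == j_lead(t, N), (t, lead, j_lead(t, N))
print("leading-entry formula verified for all rows t = 2, ...,", 4 * N + 1)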
http://arxiv.org/abs/2312.16125v1
{ "authors": [ "Yotam Dikstein", "Irit Dinur", "Shiri Sivan" ], "categories": [ "cs.CC" ], "primary_category": "cs.CC", "published": "20231226171833", "title": "The linear time encoding scheme fails to encode" }
City-on-Web: Real-time Neural Rendering of Large-scale Scenes on the Web
====================

NeRF has significantly advanced 3D scene reconstruction, capturing intricate details across various environments. Existing methods have successfully leveraged radiance field baking to facilitate real-time rendering of small scenes. However, when applied to large-scale scenes, these techniques encounter significant challenges, struggling to provide a seamless real-time experience due to limited resources in computation, memory, and bandwidth. In this paper, we propose City-on-Web, which represents the whole scene by partitioning it into manageable blocks, each with its own Level-of-Detail, ensuring high fidelity, efficient memory management, and fast rendering. Meanwhile, we carefully design the training and inference process such that the final rendering result on the web is consistent with training. Thanks to our novel representation and carefully designed training/inference process, we are the first to achieve real-time rendering of large-scale scenes in resource-constrained environments. Extensive experimental results demonstrate that our method facilitates real-time rendering of large-scale scenes on a web platform, achieving 32 FPS at 1080p resolution with an RTX 3060 GPU, while simultaneously achieving a quality that closely rivals that of state-of-the-art methods. Project page: https://ustc3dv.github.io/City-on-Web/. § INTRODUCTION NeRF has significantly advanced the field of scene reconstruction, showing an unparalleled ability to capture complex details across diverse environments. Existing works have demonstrated its ability to render small scenes with exceptional quality and performance in real-time <cit.>. NeRF has also been successfully applied to the rendering of large scenes in offline settings, achieving exceptional visual fidelity and generating intricately detailed results <cit.>.Despite these successes, real-time neural rendering of large scenes is profoundly challenging due to inherent computational power, memory, and bandwidth limitations across various devices. The challenges mainly include the following aspects. Firstly, traditional NeRF and its variants are resource-intensive, requiring substantial computational power that exceeds what is typically available in such constrained environments. Secondly, the video memory capacity on client devices is frequently limited, imposing significant restrictions on the capability to process and render substantial assets in real-time simultaneously. This substantial resource demand becomes a critical issue in the real-time rendering of large scenes, which necessitates the quick loading and processing of extensive data sets. Lastly, the dependency on data retrieval from remote servers introduces latency, particularly under network bandwidth limitations, further complicating the real-time rendering process. These hurdles collectively form a significant barrier to delivering an uninterrupted and instantaneous visual experience for large-scale scenes.To address these challenges in real-time rendering of large-scale scenes, we propose City-on-Web. Drawing inspiration from traditional graphics techniques for rendering large-scale scenes <cit.>, we partition the scene into manageable blocks and represent the scene with varying Levels-of-Detail (LOD). We utilize radiance field baking techniques <cit.>, which precompute and store rendering primitives into 3D atlas textures organized in a sparse grid within each block for real-time rendering.
However, due to the unavoidable texture resource limitations of shaders, we cannot load all the atlas textures into a single shader. Hence, we represent the scene as a hierarchy of segmented blocks, each rendered by a dedicated shader during rendering.Our block partition and LOD for scene representation bring major benefits for real-time rendering in environments with limited computing resources, memory, and bandwidth. (1) High-Fidelity Reconstruction. With the divide and conquer strategy, we ensure that each block possesses sufficient representation ability to faithfully reconstruct fine details within the scene. Additionally, to ensure high fidelity in the rendered output during training, we simulate the blending of multiple shaders in a way that is aligned with the rendering pipeline. (2) Efficient Resource Management. The block and LOD-based representation facilitates dynamic resource management. It simplifies the loading and unloading process, adapting to the viewer's position and field of view in real-time. This dynamic loading strategy greatly mitigates the bandwidth and memory demands typically associated with large-scale scene rendering, paving the way for smoother user experiences even on less capable devices. (3) Fast Rendering. Despite the abundance of resources required for rendering large scenes, we ensure real-time rendering efficiency by dividing the scene into non-overlapping blocks, with each shader granted access only to resources within its designated block. This block-rendering approach guarantees that performance does not degrade linearly with increased resources, even if we divide the scene into dozens of blocks. Our experiments demonstrate that City-on-Web can render photo-realistic large-scale scenes at 32 FPS at 1080p resolution with an RTX 3060 GPU while using only 18% of the VRAM and 16% of the payload size of current mesh-based methods <cit.>. As our model maintains consistency between training and rendering, we achieve reconstruction quality comparable to state-of-the-art methods. To our knowledge, we are the first to achieve real-time neural rendering of large-scale scenes on the web.§ RELATED WORK Large-scale Scene Reconstruction. For radiance field reconstruction of large-scale scenes, a key issue lies in enhancing the model's representational capacity to adequately capture and render extensive scenes. Block-NeRF <cit.> and Mega-NeRF <cit.> address this by adopting a divide-and-conquer strategy, segmenting expansive scenes into smaller blocks, and applying localized NeRF processing to each. This approach significantly improves both the reconstruction quality and the model's scalability to larger scenes. Switch-NeRF <cit.> employs a gating network to dispatch 3D points to different NeRF sub-networks. Grid-NeRF <cit.> utilizes a compact multiresolution feature plane and combines the strengths of smoothness from vanilla NeRF with the local detail capturing ability of feature grid-based methods <cit.>, efficiently reconstructing large scenes with fine details. NeRF++ <cit.> enhances the reconstruction of unbounded scenes through its innovative multi-spherical representation. On the other hand, Mip-NeRF 360 <cit.> introduces a scene contraction function to effectively represent scenes that extend to infinity, addressing the challenge of vast spatial extents. F2-NeRF <cit.> takes this a step further by implementing a warping function for local spaces, ensuring a balance of computational resources and training data across different parts of the scene.
Real-time Rendering. Early works mainly focus on the real-time rendering of simple single objects. NSVF <cit.> improves NeRF by introducing a more efficient sparse voxel field, significantly accelerating rendering speed while maintaining high-quality output. KiloNeRF <cit.> utilizes thousands of small MLPs, each responsible for a tiny scene region, significantly reducing network evaluation time. In contrast, SNeRG <cit.> leverages pre-computed sparse grids, allowing for direct retrieval of radiance field information without needing network evaluation. Termi-NeRF <cit.> terminates ray marching in less impactful scene regions, slashing computation time. DONeRF <cit.> focuses on one sample using a depth oracle network, speeding up rendering while preserving scene quality. Recently, there have been developments that enable real-time rendering of neural radiance fields in small scenes. MERF <cit.> improves upon SNeRG by utilizing a voxel and triplane hybrid representation to reduce memory usage. MobileNeRF <cit.> introduces a polygon rasterization rendering pipeline, running NeRF-based novel view synthesis in real-time on mobile devices. BakedSDF <cit.> bakes the volumetric representation into meshes and utilizes spherical harmonics to represent view-dependent color, while NeRF2Mesh <cit.> iteratively refines both the geometry and appearance of the mesh. Level of Detail. Substantial work has been devoted to integrating LOD methods into the fabric of traditional computer graphics <cit.>, aiming to streamline rendering processes, reduce memory footprint, and bolster interactive responsiveness. Recently, some works have begun to apply LOD to neural implicit reconstruction. NGLoD <cit.> represents LOD through a sparse voxel octree, where each level of the octree corresponds to a different LOD, allowing for a finer discretization of the surface and more detailed reconstruction as the tree depth increases. Takikawa et al. <cit.> efficiently encode 3D signals into a compact, hierarchical representation using a vector-quantized auto-decoder method. BungeeNeRF <cit.> employs a hierarchical network structure, where the base network focuses on learning a coarse representation of the scene, and subsequent residual blocks are tasked with progressively refining this representation. TrimipRF <cit.> and LoD-NeuS <cit.> leverage multi-scale triplane and voxel representations to capture scene details at different scales, effectively implementing anti-aliasing to enhance the rendering and reconstruction quality. § BACKGROUND AND MOTIVATION Our exploration begins with an in-depth analysis of two influential works, SNeRG <cit.> and MERF <cit.>, which have both set benchmarks for real-time rendering of radiance fields. SNeRG precomputes and stores a Neural Radiance Fields model in a sparse 3D voxel grid. Each active voxel in SNeRG contains several attributes: density, diffuse color, and a specular feature vector that captures view-dependent effects. Additionally, an indirection grid is used to enhance rendering by either indicating empty macroblocks or pointing to detailed texels in a 3D texture atlas. This representation allows real-time rendering on standard laptop GPUs. The indirection grid assists in raymarching through the sparse 3D grid by skipping empty regions and selectively accessing non-zero densities σ_i, diffuse colors c_i, and feature vectors f_i during rendering.
Integrating along each ray r(t) = o + td, we compute the sum of the weights, which can be considered as the pixel's opacity: α(r) = ∑_i w_i, w_i=∏_j=1^i-1(1-α_j)α_i, α_i=1-e^-σ_iδ_i.The color C_d(r) and specular feature F_s(r) along the ray are accumulated using the same weights to compute the final diffuse color and specular feature of the ray: C_d(r) = ∑_i w_i c_i, F_s(r) = ∑_i w_i f_i.The step size δ_i during ray marching is equal to the voxel width for an occupied voxel. Subsequently, the accumulated diffuse color and specular feature vector, along with the positional encoding PE(·) of the ray's view direction, are concatenated and passed through a lightweight deferred MLP Φ to produce a view-dependent residual color:C(r)=C_d + Φ(C_d , F_s, PE(d)). While SNeRG achieves impressive real-time rendering results, its voxel representation demands substantial memory, which poses limitations for further applications. MERF presents a significant reduction in memory requirements in comparison to extant radiance field methods like SNeRG. By leveraging a hybrid of a low-resolution sparse grid and 2D high-resolution triplanes, MERF optimizes the balance between performance and memory efficiency. Moreover, it incorporates two pivotal strategies to bridge the gap between training and rendering performance. First, MERF simulates the finite grid during training, querying MLPs at virtual grid corners and applying interpolation to mimic the rendering process closely. Second, MERF simulates quantization during training and employs the straight-through estimator <cit.>, which preserves differentiability through the quantization step: the model learns and optimizes with quantized values without introducing non-differentiable operations into the backward pass, ensuring a smooth training process.These innovative methods for scene reconstruction offer promising results, but their direct applicability to large scenes remains challenging. MERF's hybrid voxel-triplane representation, despite being memory-efficient, cannot capture large scenes with intricate details due to its fixed resolution constraint. In our efforts to reconstruct large scenes with high fidelity, dividing them into smaller blocks is a practical solution that keeps the reconstruction detailed and accurate.However, dividing the scene in this way means we end up with more assets to handle during rendering. Web browsers limit how much memory they can use, which makes it hard to display the many pieces of large and detailed models. Another challenge is data transmission during web rendering. It often requires pulling data from servers, which can be slow due to network delays. As a result, users might experience long wait times when trying to load all the detailed pieces of a large scene at once for rendering. Drawing inspiration from traditional mesh dynamic resource loading and LOD <cit.>, we generate LOD from our segmented reconstruction results, which minimizes the loading cost of distant resources. Additionally, by employing a dynamic loading strategy for blocks, we significantly reduce VRAM usage and decrease the wait time for resource transmission.§ METHOD In this section, we present a method for representing and rendering large scenes on the web. Our approach uses hierarchical spatial partitioning and LOD to manage large-scale scenes dynamically (<ref>).
We align the training and rendering stages to ensure consistency (<ref>), employing multiple shaders and alpha blending for seamless integration of scene blocks. Additionally, the framework includes optimization strategies (<ref>) and a process for generating LODs (<ref>) and baking the model (<ref>) for real-time rendering.§.§ Large-scale Radiance FieldIn the realm of scene reconstruction and rendering, NeRF has made significant strides, achieving compelling results. However, it faces inherent challenges when tasked with representing large scenes on the web. Using a single model to represent such vast scenes proves challenging due to its limited expressiveness, particularly in achieving a detailed and accurate reconstruction. Representing scenes with multiple models at a single resolution increases the overhead during rendering, leading to the loading of numerous resources, which is not conducive to efficient rendering.To efficiently represent large scenes captured using the fly-through method, we employ hierarchical spatial partitioning combined with LOD to represent the scene. Specifically, we uniformly partition the area into varying blocks within our region of interest on the xy plane (i.e., the ground plane). Each set of partitioned blocks corresponds to a unique LOD level, allowing for dynamic and efficient representation. Within each block, we use a low-resolution voxel grid with a high-resolution triplane, storing density, diffuse color, and specular features, to represent the radiance field for web rendering. Additionally, for blocks along the periphery, we utilize the scene contraction function from MERF <cit.> to account for the data on the boundaries. For internal blocks, we simply adopt the bounded-scene setting. During the training stage, our scene representation aligns with web rendering. We consistently utilize spatial partitioning to structure the scene into distinct blocks without overlap. However, we train the scene only with the finest level of LOD. Within block k, the following trainable components are introduced: (1) f^k: an attribute query function, which adopts a hash encoding and an MLP decoder, that outputs attributes of points such as density, diffuse color, and specular features; (2) Φ^k: a deferred MLP that accounts for view-dependent effects; (3) ψ^k: a proposal MLP for sampling.§.§ Consistent Training and Rendering It is essential to ensure consistency between the training and rendering stages to achieve the same high-fidelity rendering results on the web as obtained during training. Due to the limited number of texture units within the web rendering environment, we are compelled to create multiple shaders to render distinct blocks. Specifically, one shader is allocated for storing the texture of an individual block. Each block subsequently renders an image respective to the current camera view. However, a simplistic averaging of these resultant rendering outputs can lead to discernible seams and does not ensure 3D consistency at the inter-block boundaries.To address this problem, we simulate the process of multiple shaders rendering images and then linearly weighting them together using the volume rendering weights of the blocks. For ray r(t), we uniformly sample between the near and far boundaries based on the scene's bounding box. Then, according to the sample coordinates, we query the corresponding block's proposal MLP to obtain the sample density, which is transformed into probability distributions along the rays.
These probabilities guide a resampling strategy, ensuring a concentration on near-surface features with a few samples. Assuming that the proposal MLP yields samples passing through M blocks with a total of N samples, where each block k has n_k samples, we simulate web rendering by performing volume rendering within each block to obtain its individual diffuse color C_d^k, specular feature F^k, and opacity α^k according to <ref>. Then we get block k's final rendered color C^k according to <ref>. Consequently, for the sake of 3D consistency in rendering, we depth-sort the blocks and apply volume rendering across the blocks in sequence, using opacity to generate the volume rendering weights:
C(r) = ∑_k=1^M ∏_j=1^k-1(1-α^j) C^k.
Under the Lambertian surface setting where the specular color is zero, the diffuse color and feature vector obtained from volume rendering on the total of N ray samples from <ref> are equal to the results produced by our approach of conducting volume rendering within each block followed by inter-block volume rendering <ref>. The proof is given in the supplementary. Thus, our rendering approach maintains 3D consistency and simulates multiple-shader rendering on the web, as shown in <ref>.During rendering, for a sample point p_i, we need to access the voxel grid corners and triplane grid corners where the point is located and use interpolation to obtain the sample point's attributes. During training, we also simulate the voxel and triplane grid points to maintain consistency with the rendering process. By using the grid corners' positions to query the attribute query function f^k, we obtain the attributes of the grid corners. Through interpolation, we acquire the attributes of the sample points in a manner similar to the rendering pipeline. This simulation strategy ensures that the values used for volume rendering within each block during training match as closely as possible the values queried from the baked textures. §.§ OptimizationWe use the Charbonnier loss <cit.> for reconstruction and the S3IM loss <cit.> to help the blocks' models capture high-frequency details. Additionally, we use the interlevel loss to provide a supervision signal for the proposal MLP and the distortion loss to reduce floaters, as in Mip-NeRF 360 <cit.>. Moreover, we randomly and uniformly sample a point set 𝒫 within the bounding box of the scene and apply L_1 regularization on the alpha values to encourage the blocks' models to predict sparse occupied space:
ℒ_sparse = 1/|𝒫| ∑_p_i∈𝒫 |α_i| = 1/|𝒫| ∑_p_i∈𝒫 |1 - e^-σ_i v|,
where v is the step size used in real-time rendering. Additionally, we introduce a regularization term for the opacity of each block. This regularization encourages the opacity of a block to be as close to 0 or 1 as possible, implying either full transparency or full opaqueness:
ℒ_opacity = -∑_k(α^k log(α^k) + (1-α^k)log(1-α^k)).
In summary, the overall loss function is:
ℒ_train = ℒ_charbonnier + λ_1ℒ_S3IM + λ_2ℒ_interlevel + λ_3ℒ_distortion + λ_4ℒ_sparse + λ_5ℒ_opacity.
§.§ LOD GenerationTo guarantee superior rendering quality from elevated perspectives and simultaneously diminish the resource demand for distant scene elements, our method involves generating multiple LODs for the scene. The finest LOD of the scene is already obtained in the training stage, ensuring maximal visual fidelity. To downsample and bake multiple block models into a unified model, we initially freeze the training of the hash encoding and decoder MLP components within these models.
Subsequently, we retrain a new tiny deferred MLP. Following the deferred MLP's successful retraining, we simulate lower-resolution virtual voxel and triplane grid corners within the scenes of these multiple blocks. Lastly, this retrained deferred MLP is refined jointly with the network responsible for generating grid corner attributes, thereby optimizing the entire rendering process. §.§ BakingAfter the training stage, we conduct block-based evaluation and store the MLP's outputs onto discrete grids, which generates segmented rendering resources. This approach facilitates efficient resource management for real-time rendering, as each block's resources are handled independently. Initially, we render all training rays to collect ray samples. Samples with alpha and weight values above a certain threshold are retained, and samples below the threshold are discarded. The preserved samples are used to mark the adjacent eight grid points as occupied in the binary grids. After generating binary grids to identify occupied voxels, we follow MERF by baking high-resolution 2D planes and a low-resolution 3D voxel grid in each block. Only the non-empty 3D voxels are stored, using a block-sparse format. We downsample the occupancy grid with max-pooling for efficient rendering and empty-space skipping. To further save storage, we compress textures into the PNG format. § EXPERIMENTS §.§ Experimental Setup Dataset and Metric. Our experiments span various scales and environments. We have incorporated a real-world urban scene dataset (Campus) and public datasets consisting of real-world rural scenes (Rubble, Building) <cit.> and synthetic city-scale data (BlockA and BlockE in MatrixCity) <cit.>. Our datasets were recorded under uniform, cloudy lighting conditions to minimize variation. To obtain precise pose information, we employed an annular capturing approach, which has a higher overlap rate compared to grid-based capturing methods. <ref> presents an overview of our dataset. To assess the quality and fidelity of our reconstructions, we employ various evaluation metrics, including PSNR, SSIM, and LPIPS <cit.>. Implementations and Baselines. Our method takes posed multi-view images captured using a fly-through camera as input. The training code is built on the nerfstudio framework <cit.> with the tiny-cuda-nn <cit.> extension, and our real-time viewer is a JavaScript web application whose rendering is implemented in GLSL. We set a 512^3 resolution for the voxel grid and a 2048^2 resolution for the triplane within each block. We use a 4-layer MLP with 64 hidden dimensions as a decoder after the multi-resolution hash encoding to output density, color, and specular features. Moreover, a tiny 3-layer deferred MLP with 16 hidden dimensions predicts the residual view-dependent color. We sample 16384 rays per batch and use the Adam optimizer with an initial learning rate of 1×10^-2 decaying exponentially to 1×10^-3. Our model is trained for 50k iterations on one NVIDIA A100 GPU. We split the scene into 24 non-overlapping blocks for the Campus scene and split the other scenes into four blocks. Moreover, we benchmark current real-time rendering methods using three critical parameters: Payload (PL), GPU Memory (VRAM), and Frames Per Second (FPS). Payload refers to the essential data transmitted during the rendering process. We perform qualitative comparisons between our method and existing SOTA methods for large-scale reconstruction.
The Campus dataset is partitioned into six sections based on the reconstruction content. NeRFacto, Instant-NGP, and Grid-NeRF were applied to one of these sections, while on the other datasets they are applied to the entire scene. NeRFacto and Instant-NGP are utilized with the highest hash encoding resolution of 8192^3. Similarly, Mega-NeRF divides the Campus dataset into 24 blocks and the other datasets into four blocks. Our experiments focus on a single campus section for comparative analysis with existing real-time rendering methods. §.§ Results Analysis We systematically evaluate the performance of both baseline models and our method through qualitative and quantitative comparisons in <ref> and <ref>. Notably, our method demonstrates a remarkable enhancement in visual fidelity as reflected by the SSIM and LPIPS metrics, which indicate the extent of detail restoration. The reduction in PSNR relative to the SOTA methods is attributable to the fact that LPIPS and SSIM are more sensitive to the recovery of fine details, whereas PSNR mainly measures pixel-wise color accuracy. Our approach achieves higher-fidelity reconstructions, revealing finer details due to our partitioned reconstruction strategy. In our method, each ray is evaluated by the deferred MLP only once, as opposed to other methods that evaluate the MLP at every sample point. Consequently, while our method recovers more intricate geometric detail, it frequently produces color discrepancies with the ground truth image due to unstable lighting conditions and variable exposure, as shown in <ref>.In our evaluation, detailed in <ref>, we compare our method with current real-time rendering methods, using one segment of the Campus dataset for testing. These tests are performed on an NVIDIA RTX 3060 Laptop GPU at a 1920×1080 resolution. The results demonstrate that our method excels in reconstruction quality. We represent each scene block using voxels and triplanes, and store the baked grid attributes as images. This strategy significantly reduces the payload, which notably accelerates resource transmission for web-based rendering applications. However, our frame rate during rendering is lower than that of other methods; this is attributed to their rendering pipelines based on mesh rasterization, in contrast to our method, which utilizes volume rendering.§.§ LOD Results<ref> presents the quantitative rendering results at various LODs, along with the corresponding payload and VRAM usage. With increasing LOD, the resources required for rendering significantly decrease. Notably, our method's lowest LOD level still maintains high-fidelity rendering results, as demonstrated in <ref>. Our LOD strategy significantly streamlines the management of resource loading on web platforms, which is particularly advantageous in rendering distant blocks, as it requires less VRAM. It is worth noting that the VRAM usage presented in <ref> represents the cumulative memory consumption of all blocks. Our dynamic loading strategy adaptively selects resources to load based on the camera's field of view and the distance to each block, effectively keeping the peak VRAM usage around 1100 MB. §.§ Ablation Study In <ref>, we conduct an ablation study of our method on one section of the Campus dataset. Our model is trained with four blocks at low resolution (voxel resolution of 512^3 and triplane resolution of 2048^2).
We also train a single model for the entire scene at high resolution (voxel resolution of 1024^3 and triplane resolution of 4096^2). Due to our non-overlapping scene partitioning strategy, these two representations have the same resolution across the entire scene. However, our reconstruction quality is higher, achieving better FPS, lower GPU memory usage, and reduced payload. We also ablate our consistent training, removing the virtual grids and alpha blending during training. As shown in the table, our consistent training significantly improves reconstruction quality. § CONCLUSION AND DISCUSSIONIn this work, we introduced City-on-Web, which to our knowledge is the first system that enables real-time neural rendering of large-scale scenes over the web on laptop GPUs. Our integration of block partitioning with LOD has significantly reduced the payload on the web platform and improved resource management efficiency. We ensured high-fidelity rendering quality by maintaining consistency between training and rendering. Extensive experiments have also demonstrated the effectiveness of City-on-Web.Limitations & Future Work. As shown in <ref>, our method still has some limitations. Since we derive alpha blending across shaders based on the Lambertian surface assumption, visible seams may occur at the boundaries between blocks on non-Lambertian surfaces, such as water. Combining physically-based rendering with multiple-shader blending may alleviate this problem. The deferred MLP in City-on-Web has limited representation ability for view-dependent color, which might cause numerous near-camera floaters. Adopting a strategy similar to <cit.>, utilizing priors to pre-trim the scene or preprocessing the data, is a possible solution.
§ PROOF OF 3D CONSISTENCYFor a given sampling point i, suppose it is located within the region 𝒦 of block k, which contains a total of N_k sampling points. The diffuse color, feature, and opacity output by the shader of block k are denoted as c^k_i, F^k_i, and α^k_i respectively. Then, for this ray, the output diffuse color c^k_d and specular feature F^k of block k are calculated as follows, where h^k represents either the diffuse color or the specular feature:
h^k(r) = ∑_i=1^N_k ∏_j=1^i-1(1-α_j^k)·α_i^k h_i^k,
α^k = ∑_i=1^N_k ∏_j=1^i-1(1-α_j^k)·α_i^k.
By integrating the rendering results of each block's shader through the volume rendering between blocks, the final diffuse color c_d and specular feature F of the ray can be obtained:
h(r) = ∑_k ∏_j=1^k-1(1-α^j)·h^k.
Note that
1-α^k = 1 - ∑_i=1^N_k ∏_j=1^i-1(1-α_j^k)·α_i^k
      = 1 - α_1^k - (1-α_1^k)α_2^k - (1-α_1^k)(1-α_2^k)α_3^k - ⋯
      = (1-α_1^k)(1 - α_2^k - (1-α_2^k)α_3^k - ⋯)
      = (1-α_1^k)(1-α_2^k)(1-α_3^k - ⋯)
      = ⋯
      = ∏_i=1^N_k(1-α_i^k).
Assume the blocks are already depth-sorted, so that the ray carries N = N_1+⋯+N_M sampling points in total, with each block k containing N_k of them. Let α_i denote the i-th sampling point along the ray, and α_i^k the i-th sampling point within block k. Then
h(r) = ∑_k ∏_j=1^k-1 ∏_i=1^N_j(1-α_i^j)·h^k
     = ∑_k ∏_i=1^N_1+⋯+N_k-1(1-α_i)·(∑_i=1^N_k ∏_j=1^i-1(1-α_j^k)·α_i^k h_i^k)
     = ∑_i=1^N ∏_j=1^i-1(1-α_j) α_i h_i.
<ref> shows that the diffuse color and specular feature we finally obtain by depth-sorting blocks and alpha blending along blocks are consistent with the results of volume rendering integration along the entire ray. Therefore, our method ensures the three-dimensional consistency of rendering.
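The algebra above can also be checked numerically. The following small Python sketch (our addition, not part of the paper or its released code) composites random per-sample opacities and colors within each block and verifies that per-block volume rendering followed by front-to-back blending of the blocks reproduces direct volume rendering along the entire ray.

import numpy as np

rng = np.random.default_rng(0)

def composite(alphas, values):
    # Front-to-back alpha compositing: returns (accumulated value, opacity).
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = trans * alphas
    return (weights[:, None] * values).sum(axis=0), weights.sum()

# Random samples along one ray, split into M = 4 depth-sorted blocks.
sizes = [int(rng.integers(2, 6)) for _ in range(4)]        # N_k samples per block
alphas = [rng.uniform(0.0, 0.9, n) for n in sizes]         # per-sample alpha_i^k
colors = [rng.uniform(0.0, 1.0, (n, 3)) for n in sizes]    # per-sample h_i^k

# (1) Direct volume rendering along the whole ray.
h_direct, _ = composite(np.concatenate(alphas), np.concatenate(colors))

# (2) Per-block rendering, then compositing blocks with their opacities.
h_blockwise = np.zeros(3)
transmittance = 1.0
for a, c in zip(alphas, colors):
    h_k, alpha_k = composite(a, c)        # block color h^k and opacity alpha^k
    h_blockwise += transmittance * h_k    # sum_k prod_{j<k}(1 - alpha^j) h^k
    transmittance *= 1.0 - alpha_k

assert np.allclose(h_direct, h_blockwise)
print("blockwise compositing matches full-ray volume rendering")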
§ DATASET To demonstrate the effectiveness of our method, experiments are conducted on a variety of large scenes. The main experiments reported in this paper involve three types of environmental scene datasets of various scales. The Campus dataset is captured at an altitude of about 180 meters, covering an area of approximately 960,000 m^2. The MatrixCity dataset, captured at an altitude of 200 meters, is sparser than the Campus dataset and thus covers a larger area. The Mill 19 dataset covers a total of about 200,000 m^2. We adopt a circular data capture method for photographing, as shown in <ref>. We find that this method often results in a higher overlap rate, allowing for a more accurate estimation of camera poses. Our dataset was captured over 8 hours on a cloudy day, with a fixed exposure setting to ensure an almost identical appearance for photos taken at different times. We used COLMAP to estimate camera poses. Feature matching was done using a vocabulary tree, followed by a hierarchical mapper and a few iterations of triangulation and bundle adjustment to estimate camera poses.§ EFFICIENCY OF NON-OVERLAPPING PARTITION STRATEGYIn the stage of segmenting the scene into distinct blocks, we initially rotate the scene to align it parallel with the xy-plane, then proceed to segment the entire space based on the xy coordinates of spatial points. The merit of this strategy is that each segmented block is a bounded area. This is in contrast to methods like the segmentation strategies of Block-NeRF <cit.> and <cit.>, which cannot assure boundedness in the reconstructed area, potentially leading to substantial memory resource wastage, as illustrated in <ref>. Our approach allows us to represent an area of the same size with a bounded region of [-1,1]^3, maintaining the same representation resolution, instead of using an unbounded region that contracts to [-2,2]^3 as in MERF <cit.> and Mip-NeRF 360 <cit.>. Consequently, this enables reducing the resolution of the xy-plane from 4096^2 to 2048^2 without loss in performance. Thus we can effectively reduce the usage of VRAM, especially across the three planes. § IMPLEMENTATION DETAILS For blocks at the boundaries of the entire scene, an unbounded scene representation is required to represent areas outside the block boundaries. We follow the same approach as MERF to compute ray-AABB intersections trivially. To be specific, we employ the scene contraction function to project the scene exterior to the unit sphere into a cube of radius 2. The j-th coordinate of a contracted point is defined as follows (a small sketch of this function appears after the discussion below):
contract(𝐱)_j =
  x_j                        if ‖𝐱‖_∞ ≤ 1,
  x_j/‖𝐱‖_∞                  if |x_j| ≠ ‖𝐱‖_∞ > 1,
  (2 - 1/|x_j|) x_j/|x_j|    if |x_j| = ‖𝐱‖_∞ > 1.
§ MORE RESULTSWe provide additional qualitative results on the MatrixCity, Mill 19, and Campus datasets, as shown in <ref>. § DISCUSSION Recently, some research has also enabled real-time rendering of large scenes. UE4-NeRF <cit.>, building on the MobileNeRF framework, divides large scenes into smaller segments for reconstruction and then renders the large-scale scene using the mesh rasterization pipeline. Like MobileNeRF, UE4-NeRF begins with a 128^3 grid, which assumes an even distribution of scene details in all directions. However, data captured through oblique photography often appears 'flat', meaning there is dense information when projected onto the xy plane, but sparser information in the vertical direction due to mostly empty areas.
Therefore, UE4-NeRF needs more segments for large-scale reconstruction to ensure an even distribution of details in all directions within a block. For example, their Construction Site scene required about 40 blocks to reconstruct a 420 m × 240 m area, leading to significant memory and VRAM usage of approximately 25 GB. Even with UE4's dynamic mesh-based loading, VRAM usage is around 11 GB. Such high payload and VRAM demands make it challenging to extend this technology to web platforms and consumer-grade GPUs.Additionally, NeuRas <cit.> has also made progress in real-time rendering of large scenes. It uses texture-less geometry from already reconstructed large scenes. NeuRas applies feature-map textures and combines mesh rasterization results with the view direction to query a small MLP for view-dependent colors. The feature texture and view-dependent MLP are optimized to enhance rendering results. However, this method also suffers from high memory usage. Our experiments show that obtaining texture-less geometry with ContextCapture[https://www.bentley.com/software/itwin-capture-modeler/] for the Campus scene requires about 6 GB, and surface reconstruction via neural rendering methods needs approximately 10 GB without mesh simplification.Both of these methods have achieved excellent results in real-time rendering of large scenes, but their significant memory and VRAM requirements limit their extension to web platforms. Our method achieves real-time rendering of large scenes with a payload and VRAM consumption acceptable for web platforms and consumer-grade graphics cards.
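As referenced in the implementation details above, the piecewise contraction function admits a direct implementation. The minimal Python sketch below (our illustration; the variable names are ours) maps a 3D point into the radius-2 cube, leaving the unit L∞ ball unchanged.

import numpy as np

def contract(x):
    # MERF-style scene contraction: points inside the unit L-infinity ball
    # are unchanged; outside, the dominant coordinate is mapped to
    # (2 - 1/|x_j|)*sign(x_j) and the others are divided by the norm.
    x = np.asarray(x, dtype=float)
    m = np.abs(x).max()                   # L-infinity norm of x
    if m <= 1.0:
        return x.copy()
    j = int(np.abs(x).argmax())           # dominant coordinate (first on ties)
    out = x / m
    out[j] = (2.0 - 1.0 / np.abs(x[j])) * np.sign(x[j])
    return out

print(contract([0.3, -0.2, 0.5]))   # inside the unit ball: unchanged
print(contract([4.0, 1.0, -2.0]))   # outside: mapped into the radius-2 cube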
http://arxiv.org/abs/2312.16457v1
{ "authors": [ "Kaiwen Song", "Juyong Zhang" ], "categories": [ "cs.CV", "cs.GR" ], "primary_category": "cs.CV", "published": "20231227080047", "title": "City-on-Web: Real-time Neural Rendering of Large-scale Scenes on the Web" }
The Helicity Barrier in Black Hole Accretion
George N. Wong (gnwong@ias.edu), School of Natural Sciences, Institute for Advanced Study, 1 Einstein Drive, Princeton, NJ 08540, USA; Princeton Gravity Initiative, Princeton University, Princeton, New Jersey 08544, USA
Lev Arzamasskiy, School of Natural Sciences, Institute for Advanced Study, 1 Einstein Drive, Princeton, NJ 08540, USA

Horizon-scale observations from the Event Horizon Telescope (EHT) have enabled precision study of supermassive black hole accretion. Contemporary accretion modeling often treats the inflowing plasma as a single, thermal fluid, but microphysical kinetic effects can lead to significant deviations from this idealized picture. We investigate how the helicity barrier influences EHT-accessible electromagnetic observables by employing a simple model for electron heating based on kinetic physics and the cascade of energy and helicity in unbalanced turbulence. Although the helicity barrier plays only a minor role in regions with high plasma-β, like in SANE disks, it may have a substantial impact in regions with more ordered magnetic fields, such as the jet and its surrounding wind in SANE flows as well as throughout the entire domain in MAD flows. In SANE flows, emission shifts from the funnel wall towards the lower-magnetization disk region; in MAD flows the emission morphology remains largely unchanged. Including the helicity barrier leads to characteristically lower electron temperatures, and neglecting it can lead to underestimated accretion rates and inferred jet powers. The corresponding higher plasma densities result in increased depolarization and Faraday depths, thereby decreasing the amplitude of the β_2 coefficient while leaving its angle unchanged. Both the increased jet power and lower |β_2| may help alleviate outstanding tensions between modeling and EHT observations. We also find that the estimated ring diameter may be underestimated when the helicity barrier is neglected. Our results underscore the significance of the helicity barrier in shaping black hole observables and inferred accretion system parameters.§ INTRODUCTION Low luminosity active galactic nuclei (LLAGN) are usually modeled as radiatively inefficient accretion flows (RIAFs) onto supermassive black holes <cit.>. In contrast to the radiatively efficient thin disk model, RIAFs comprise geometrically thick, optically thin disks of hot plasma that circle the hole at sub-Keplerian speeds; in RIAFs, the gravitational binding energy of the inflowing plasma is converted into heat that cannot be radiated away before the plasma accretes down to the horizon. The excess heat provides a thermal pressure that supports a puffy disk. Two of the LLAGN with the largest known sizes on the sky lie at the centers of our galaxy (Sgr A*) and the nearby elliptical galaxy Messier 87 (M87*). These two presumed RIAF sources are large enough to be directly observed by the Event Horizon Telescope (EHT), and the EHT's very long baseline interferometric experiment has produced radio images of the horizon-scale emission in spectacular detail <cit.>. The observations can be used to probe plasma physics in these extreme environments while also providing constraints on key physical parameters of the accretion system, like the black hole mass and spin, the system accretion rate, and the amount of magnetic flux trapped on the horizon <cit.>.
Ongoing measurements and the next generation of these experiments will help inform further parameter constraints and enable precision tests of our understanding of the physics governing these systems. The first observational results have already revealed tension between models and data in quantities like the resolved linear polarization fraction, which magnetized models often overproduce, and the predicted jet powers, which lie almost categorically at the lower end of the observational bounds.Parameter estimation infers that M87* and Sgr A* are likely Coulomb collisionless, since the path length to a Coulomb interaction greatly exceeds the size of the system. The electrons and ions that make up the infalling plasma therefore do not have time to equilibrate and relax to a thermal Maxwell–Jüttner distribution <cit.>. Nevertheless, it may be that the ion- and electron-distribution functions are independently thermal, since intraspecies interactions due to kinetic plasma instabilities can drive particle-wave interactions that enable relaxation (see <cit.> and discussion therein). The mechanisms that govern the heating and cooling of the ions and electrons are the subject of detailed study. Since the radio emission observed from Sgr A* and M87* is produced by the synchrotron process, the distribution function of the electrons plays a crucial role in determining the observational features of the sources. Accurately modeling particle acceleration is thus essential, since turbulent heating, reconnection, and shocks all yield different heating profiles, and it is likely that different combinations of all heating mechanisms operate in different parts of the accretion flow. Interpreting the nonthermal features of the observations will also require a detailed understanding of the processes that determine the local particle distribution functions.RIAFs have long been modeled with semianalytic <cit.> and numerical <cit.> methods. The latter models are usually produced through general relativistic magnetohydrodynamics (GRMHD) simulation and are often favored over semianalytic models because they naturally incorporate properties of the turbulent dynamics, produce variability, and effectuate the connection between the accretion disk, wind, and jet, all of which may play an important role in determining the observational appearance of the system. The output of the fluid simulations is typically processed through general relativistic ray tracing (GRRT) codes to generate simulated observables like images and spectra. In the standard modeling procedure, only the total energy of the fluid is tracked and evolved in the simulation. Since the electron distribution function is required to compute the radiative transfer coefficients, the typical modeling approach is to assume that the electrons are thermal and assign their temperature in a post-processing step by partitioning the total internal energy of the fluid into the ions and electrons, following a prescription that depends on the ratio of the gas-to-magnetic pressure and the magnetization. These thermodynamic prescriptions are usually motivated by (kinetic) plasma theory for turbulent cascades, magnetic reconnection, collisionless shocks, and so on. (Some alternative two-temperature methods track the internal energies of the ions and electrons separately and model electron heating as some fraction of the total numerical dissipation <cit.>.)
This approach introduces significant uncertainty and may well explain the model/data tension: the presence of a population of cold electrons would require an increased mass accretion rate and higher jet power, and would result in more depolarization from Faraday scrambling. The turbulent cascade model is often invoked to quantify the energy partition: energy is injected at large scales (e.g., from the magnetorotational instability or large-scale torques) and cascades to higher wavenumbers until it is dissipated as thermal energy into the ions or electrons at their associated Larmor scales. But when the turbulence is imbalanced, not all of the energy injected at large scales can be treated the same way, and if plasma β ≡ P_gas/P_mag is small, conservation of helicity can inhibit energy flow in the cascade.In low-β plasmas, the sense of the helicity cascade above and below the ion Larmor scale changes: as wavenumber increases into the kinetic range, the fluid cross-helicity transforms conservatively into magnetic helicity and the direction of the helicity cascade inverts <cit.>. Since (generalized) helicity is conserved, the helicity-endowed component of the turbulence is then unable to cascade below the ion Larmor scale and an effective helicity barrier is produced, limiting the fraction of the energy at large scales that can reach and heat the electrons. Observational evidence for the helicity barrier's operation has recently been found in the context of the solar wind via correlation of ion-cyclotron waves and electron-scale turbulence <cit.>.In this paper, we use a simple model to study the effect of the helicity barrier on electron heating in radiatively inefficient accretion flows and probe its imprint on the electromagnetic observables accessible to horizon-scale radio observations. Since the presence of long-lived imbalanced turbulence is required for the helicity barrier to operate, one might expect the helicity barrier to be most important in the directed winds above the surface of the disk, but it is challenging to generate a predictive model for the quantitative details. In this study, we consider a limited subset of models and perform a preliminary study of the effect of the helicity barrier. We thus aim only to identify and describe broad qualitative trends, to judge the importance of the effect, and to assess whether it may help explain contemporary questions raised by the data. We leave a more detailed study to future work. § PARTICLE HEATING IN COLLISIONLESS TURBULENCE Robust interpretation of black hole images requires a detailed understanding of how emission is produced by the accretion flow. Most of the emission is due to synchrotron radiation <cit.> and is thus very sensitive to the electron momentum distribution function. The electron distribution function is determined by the details of the particle heating mechanisms that transform the gravitational energy released during accretion into thermal kinetic energy. The channels responsible for this conversion include dissipation in shocks <cit.>, at reconnection sites <cit.>, and in turbulent cascades. The relative importance of these channels is not well understood.
In this paper, we focus on the turbulent cascade as the main source of energy for electrons.§.§ The turbulent cascade The conventional picture of the turbulent cascade involves a specified outer injection scale at which energy is supplied by large-scale processes (e.g., the MRI or large-scale torques) and specified smaller dissipation scales at which the energy transforms into unordered kinetic motion (e.g., plasma kinetic scales, or viscous/resistive scales in collisional systems). Solutions that bridge between these scales must conserve energy flux and are often assumed to only include interactions that are local in scale (e.g., only eddies of similar sizes can efficiently interact with each other). The large separation between injection and dissipation scales often leads to the assumption of a "zeroth law of turbulence," which states that the large-scale behavior of the cascade does not depend on the physics responsible for its dissipation. The assumption of scale-independence has been very useful in constructing models for collisional turbulence <cit.>. The RIAF systems most relevant for the EHT are much better described as Coulomb collisionless, but when the ratio of gas to magnetic pressure is large (which is a likely description of much of the accretion flow), perturbations in the magnetic field can drive sufficient deviations from local thermodynamic equilibrium to trigger kinetic micro-instabilities, which are non-local in nature and increase the effective collisionality beyond what the naïve Coulomb collision picture implies <cit.>. This enhanced collisionality can lead to considerable dissipation close to the injection scale due to pressure-anisotropic viscous stress <cit.>. Several mechanisms for energy dissipation have been proposed in the context of turbulent cascades, including cyclotron heating <cit.>, stochastic heating <cit.>, Landau damping <cit.>, as well as reconnection and Fermi-type acceleration in relativistic plasmas <cit.>. To avoid committing to a particular mechanism for dissipation, we adopt the sigmoidal R_low–R_high model <cit.>, in which the ion-to-electron heating ratio smoothly transitions between two asymptotic values in regions with low and high plasma β = P_gas/P_mag. Although this form is quite simplified, it is straightforward to implement, and the sigmoidal shape is qualitatively supported by some studies of energy dissipation in collisionless plasmas <cit.>.§.§ The helicity barrierWhen plasma β ≪ 1, further complexities arise if the energies of waves propagating in opposite directions are unequal, i.e., when the turbulence is imbalanced. This condition is typical of the solar-wind plasma, but it may also occur in black hole accretion when strong outflows produce an imbalance biased along the outflow direction. How does imbalance alter the picture of a turbulent cascade? In imbalanced turbulence, the fluid is endowed with non-zero helicity, which must be conserved across the cascade in addition to the standard conservation of energy flux.
In the inertial range (on scales k_⊥ρ_i ≪ 1, with ρ_i the ion Larmor scale), both energy and helicity, which takes the form of a cross-helicity, can cascade simultaneously towards smaller scales; however, in the kinetic range, the cross-helicity is conservatively transformed into a magnetic helicity, the dispersion of the waves changes (Alfvén waves are converted into kinetic Alfvén waves, which are dispersive), and there is no solution that conserves the fluxes of both the energy and the generalized helicity.The lack of a solution results in an effective helicity barrier <cit.>, as the unbalanced portion of the cascading energy is trapped at scales k_⊥ρ_i ∼ 1. The energy accumulates at that scale until other cascade directions are enabled (<cit.> found that the helicity barrier allows energy to enter a cascade of ion-cyclotron waves, which eventually dissipate through ion-cyclotron heating). The imbalanced portion of the cascade thus only energizes the ions, and the maximum energy the electrons can receive is the balanced portion of the energy flux, which is itself divided between ions and electrons. The level of imbalance is quantified by the normalized cross-helicity σ_c ∈ [-1, 1], whose absolute value increases to unity as the level of imbalance grows. The primary effect of the helicity barrier that we consider in this paper is thus the reduction of electron heating, Q_e → (1-|σ_c|) Q_e. To compute σ_c, it is useful to work in the Elsässer formulation of magnetohydrodynamics. In the relativistic context and written in terms of the fluid four-velocity u^μ and magnetic field four-vector b^μ = - u_ν(⋆ F)^μν, with ⋆ F^μν the Hodge dual of the electromagnetic Faraday tensor, the Elsässer variables are <cit.>
z_±^μ = u^μ ± b^μ/√(ℰ),
where the enthalpy is
ℰ = ρ + u + P + b^α b_α,
and where ρ is the rest-mass density of the fluid, u is its internal energy, and P is its pressure. The standard interpretation of z^μ_± is that they describe the evolution of (pseudo-)Alfvén waves propagating through the equilibrium magnetic field.Describing the fluid as a mean background with fluctuations, the fluctuations are just the differences between z_±^μ and their locally time-averaged values,
δz^μ_+ = z^μ_+ - < z^μ_+ >,
δz^μ_- = z^μ_- - < z^μ_- >,
and it is easy to show that the reduced relativistic Elsässer equations, written in terms of these difference variables, reduce to the standard equations of Newtonian reduced magnetohydrodynamics.The Elsässer variables can be used to compute two ideal pseudoenergy invariants, (δz^μ_±)^2, where we have introduced the shorthand (v^μ)^2 = v^μ v_μ. The sum of the two pseudoenergies is the total energy in the system, and the difference measures the preference to generate waves in one direction or another (thus when the difference is non-zero, the system generates imbalanced turbulence). The normalized difference is the normalized cross-helicity:
σ_c = ( (δz^μ_+)^2 - (δz^μ_-)^2 ) / ( (δz^μ_+)^2 + (δz^μ_-)^2 ).
Notice that there is ambiguity in how to perform the average in Equations <ref> & <ref>: When the system is variable, the fluid frame changes, and the part of the electromagnetic field that is seen as the magnetic field by the fluid, b^μ, changes with time. The quantities < z^μ_± > should represent the mean background flow; in regions where a characteristic background can be identified, performing a direct average of the four-vector components is then acceptable. In contrast, in regions that are highly variable, a mean background may not be readily identifiable and the meanings of δz^μ_± become less clear.
In such highly variable regions, our averaging procedure yields smaller values for σ_c, which one might heuristically expect, since the rapidly varying magnetic field and flow geometry do not allow helicity to accumulate along a particular direction over a sustained period of time. We compute ⟨ z^μ_± ⟩ as an average over the full duration of the simulations, beginning after the transient from the initial conditions dies out. We have verified that decreasing the averaging window by a factor of two or four does not qualitatively change our results. The value of σ_c in each fluid snapshot cannot be used directly to compute the effect of the helicity barrier, since the latter arises because of accumulated cross-helicity. The physical picture is as follows: the cross-helicity injected at large scales cascades down to smaller scales on the eddy turnover timescale, and eventually cross-helicity (of a particular sign) accumulates at the ion Larmor scale. The helicity barrier is effective only when cross-helicity has time to build up at the Larmor scale: injections of negative and positive cross-helicity at the large scales do cascade, but they ultimately cancel out. Because we perform our analysis in post-processing, we cannot track the cascade and injection of cross-helicity over time, since that would require tracking the flow of non-zero cross-helicity fluid parcels as they evolve with the fluid. In this analysis, we instead approximate the buildup of cross-helicity by averaging the signed value of σ_c over approximately a dynamical time. This signed average is a good proxy for the total amount of accumulated cross-helicity in an axisymmetric flow with small radial velocities; for the sake of computational efficiency, we adopt this procedure even though our simulations are three-dimensional. Figure <ref> shows the signed value of σ_c in a snapshot compared against the average of σ_c over one dynamical time at r = 3GM/c^2, where much of the observed emission is produced. The location of the disk can be identified by the plasma density, and the maximal extent of the emission region is bounded by magnetization σ = 1 contours. Evidently, the effect of averaging is to smooth out small-scale fluctuations in σ_c while the broader, large-scale features are left mostly unchanged. The sign of σ_c in the disk region is determined by the instantaneous flow properties. In MADs especially, cross-helicity of a particular sign may be long lived, as transient vertically asymmetric features are launched from large radii and fall through the event horizon. We discuss this smoothing procedure and compare between different averaging windows in the discussion section (see especially Figure <ref>).§ NUMERICAL METHODS We use the PATOKA pipeline to produce simulated images of RIAFs assuming the Kerr geometry <cit.>.
The pipeline comprises a fluid simulation step, in which a general relativistic magnetohydrodynamics (GRMHD) code <cit.> produces the time evolution of the accretion flow in full 3D, and a ray-tracing step, in which a general relativistic radiative transfer code <cit.> is used to compute the emission, extinction, and rotation of polarized light throughout the fluid simulation domain and track it to an observer at large distance.§.§ The fluid model The fluid evolution is obtained by solving the GRMHD equations, which take the form of a hyperbolic system of conservation laws ∂_t ( √(-g) ρ u^t ) = -∂_i ( √(-g) ρ u^i ), ∂_t ( √(-g) T^t_ν ) = - ∂_i ( √(-g) T^i_ν ) + √(-g) T^κ_λ Γ^λ_νκ, ∂_t ( √(-g) B^i ) = - ∂_j [ √(-g) ( b^j u^i - b^i u^j ) ], along with the constraint ∂_i ( √(-g) B^i ) = 0. Here, the plasma rest-mass density is ρ and its four-velocity is u^μ. The magnetic field is represented by the b^μ four-vector. The spacetime geometry enters through the metric g_μν, its determinant g, and the Christoffel symbols Γ^α_βγ. The symmetric rank-2 tensor T^μν represents the stress–energy of the fluid, which has contributions from both the fluid and the electromagnetic field T^μν = ( ρ + u + P + b^λ b_λ ) u^μ u^ν + ( P + b^λ b_λ/2 ) g^μν - b^μ b^ν, where here u is the internal energy of the fluid and the fluid pressure P is related to its internal energy by a constant adiabatic index γ̂ with P = (γ̂ - 1) u.§.§ Radiative transfer The time series fluid data are processed into simulated images with a radiative transfer post-processing step using the code of <cit.>. Each simulated image comprises a square grid of square pixels defined by a field-of-view (or width) in units of GM/c^2, a distance from observer to source d_src, and orientations with respect to the black hole spin axis and midplane (inclination and position angle). Pixels report the Stokes parameters I_ν, Q_ν, U_ν, V_ν at their centers. To construct an image, the code first traces photon trajectories backward from the camera into the simulation domain by solving the geodesic equations dx^α/dλ = k^α, dk^α/dλ = - Γ^α_μν k^μ k^ν, where Γ is a Christoffel symbol, λ is an affine parameter, and k^α is the photon wavevector. It then integrates forward along each geodesic trajectory to solve the polarized radiative transfer equation, which in flat space can be written d/ds ( I_ν Q_ν U_ν V_ν ) = ( j_ν,I j_ν,Q j_ν,U j_ν,V ) - [ α_ν,I α_ν,Q α_ν,U α_ν,V; α_ν,Q α_ν,I ρ_ν,V -ρ_ν,U; α_ν,U -ρ_ν,V α_ν,I ρ_ν,Q; α_ν,V ρ_ν,U -ρ_ν,Q α_ν,I ] ( I_ν Q_ν U_ν V_ν ), where we have neglected scattering, as its effect is negligible at the radio frequencies we are interested in. Here, the emissivities j_ν, absorptivities α_ν, and rotativities ρ_ν are frame-dependent quantities <cit.>. To compute the transfer coefficients, we use the thermal fits described in <cit.>. Further detail about the radiative transfer scheme can be found in <cit.>. Because GRMHD simulations introduce numerical floors in regions with high magnetization σ = b^2 / ρ, the plasma density and temperature are unreliable in such regions. To avoid contaminating the simulated images with numerical artifacts from the floors, we set the plasma density to zero in regions with σ > 1. Applying this σ-cutoff is reasonable, as the true density in highly magnetized regions like the jet is very small and therefore very little emission is produced there.
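As a concrete illustration of the backward ray-tracing step described above, the following sketch advances a single photon through the geodesic equations with a classical RK4 step. The `christoffel` callable, which must return Γ^α_μν at a given position, is an assumed external ingredient (in practice supplied by the metric routines of the ray-tracing code), and all names are illustrative.

```python
import numpy as np

def geodesic_step(x, k, christoffel, dlam):
    """One RK4 step of dx/dlam = k, dk/dlam = -Gamma^a_mn k^m k^n.

    x, k        : position and wavevector, arrays of shape (4,)
    christoffel : callable returning Gamma[a, m, n] at position x
    dlam        : affine-parameter step (negative for backward tracing)
    """
    def rhs(state):
        x_, k_ = state
        gam = christoffel(x_)
        return np.stack([k_, -np.einsum('amn,m,n->a', gam, k_, k_)])

    s = np.stack([x, k])
    k1 = rhs(s)
    k2 = rhs(s + 0.5 * dlam * k1)
    k3 = rhs(s + 0.5 * dlam * k2)
    k4 = rhs(s + dlam * k3)
    s = s + (dlam / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return s[0], s[1]
```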
§.§ Computing the electron temperature Since the fluid simulations only track the total internal energy of the fluid, there is freedom in assigning the electron distribution function. For M87*, radio-frequency emission is produced by the synchrotron process <cit.>, and for the relevant plasma parameters, the 230GHz emission observed by the EHT likely comes predominantly from the thermal core of the distribution function. We thus assume that the electron population can be modeled as a relativistic thermal Maxwell–Jüttner distribution, which is characterized by a single temperature T_e. The problem is thus to determine T_e given the total internal energy u of the fluid and the local fluid properties, which requires partitioning the total internal energy u into an ion component u_i and an electron component u_e. Schematically, the internal energy can be written as u = u_i + u_e = (u_i,h + u_i,z) + (u_e,h + u_e,z) = ( u_i,h + u_e,h ) + ( u_i,z + u_e,z ) = u_h + u_z. Here, we have used the subscript h (or z) to denote the part of the internal energy that can be related to heating via the helical (or zero-helicity) part of the turbulent fluctuations. When β is small, the helicity barrier stops any of u_h from cascading below the ion Larmor scale and heating the electrons, so u_e,h = 0 subject to the condition that β < β_critical. We set β_critical = 1 to be consistent with the physical derivation of the barrier, but we have found that varying this cutoff value above unity has negligible impact on our results. When β < β_critical, we assume that u_z = (1 - σ_c) u, i.e., that the energy imbalance is equal to the injection imbalance. This equivalence is likely not true in general: <cit.> finds that the energy imbalance is larger than the injection imbalance, although how well the quantitative details hold in non-idealized scenarios is uncertain. Nevertheless, under our assumption, the ion and electron energies are simply u_i = u_i,z + u_i,h, u_e = u_e,z. For an apples-to-apples comparison, we fix R ≡ u_i,z / u_e,z regardless of σ_c, which is reasonable under the approximation that the balanced component of the turbulent cascade is unaware of the imbalanced component. The ratio of total internal energies is then R_u ≡ u_i / u_e = R_u(R, σ_c) = (R + σ_c) / (1 - σ_c). Finally, to compute the electron temperature, we must find the relationship between the ion–electron temperature ratio R_T ≡ T_i / T_e and the energy ratio R_u, which we do by assuming an ideal gas equation of state. Let the internal energies be u_i = ( γ_i - 1 )^-1 n_i k_B T_i, u_e = ( γ_e - 1 )^-1 n_e k_B T_e. Taking 1/y and 1/z to be the number of electrons and nucleons (protons + neutrons) per (unionized) atom, respectively, then n_e = y ρ / m_p, n_i = z ρ / m_p, and we have that n_i = z n_e / y. The ratio of energies is therefore R_u = u_i / u_e = [ (γ_e - 1) / (γ_i - 1) ] (n_i T_i) / (n_e T_e) = [ z (γ_e - 1 ) / ( y ( γ_i - 1 ) ) ] R_T. Assuming fully ionized hydrogen,[ Inferred brightness temperatures are in excess of 10^9K, which is well above the ionization temperatures for both hydrogen and helium. The plasma composition is not well-constrained, however, and there may be non-trivial fractions of helium and heavier ions <cit.>.] which has y = z = 1, if the ions are nonrelativistic with γ_i = 5/3 and the electrons are relativistic with γ_e = 4/3, then R_T = 2 R_u. We have not yet specified R, the ratio of ion-to-electron energies in the zero-helicity fluid component, which is in reality determined by the microphysics. To parameterize over this uncertainty, we let R take any form allowed by the R_low–R_high prescription <cit.> R = ( R_low + R_high β_R^2 ) / ( 1 + β_R^2 ), which is motivated by models for electron heating in a turbulent collisionless plasma that preferentially heat the ions when the gas pressure exceeds the magnetic pressure.
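The following sketch assembles these ingredients into an electron-temperature calculation. The value of R_high and the clipping guard are illustrative assumptions not fixed by the text, and, following the discussion above, the barrier correction is applied only where β < β_critical and uses the magnitude of the accumulated cross-helicity.

```python
import numpy as np

K_B = 1.380649e-16  # Boltzmann constant [erg / K]

def temperature_ratio(beta, sigma_c, r_low=1.0, r_high=20.0, beta0=1.0,
                      beta_crit=1.0, barrier=True):
    """R_T = T_i / T_e from the R_low-R_high prescription, optionally
    including the helicity-barrier correction R_u = (R + |sigma_c|) / (1 - |sigma_c|)."""
    beta_r2 = (beta / beta0) ** 2
    R = (r_low + r_high * beta_r2) / (1.0 + beta_r2)   # zero-helicity ratio
    if barrier:
        sc = np.clip(np.abs(sigma_c), 0.0, 0.999)      # guard the sc -> 1 endpoint
        R_u = np.where(beta < beta_crit, (R + sc) / (1.0 - sc), R)
    else:
        R_u = R
    return 2.0 * R_u   # R_T = 2 R_u for gamma_i = 5/3, gamma_e = 4/3, y = z = 1

def electron_temperature(u_int, n_e, R_T):
    """Invert u = (3/2) n k_B T_i + 3 n k_B T_e with T_i = R_T T_e
    (fully ionized hydrogen, n_i = n_e = n)."""
    return u_int / (3.0 * n_e * K_B * (1.0 + 0.5 * R_T))
```

Setting barrier=False recovers the standard prescription, so pairs of images differing only through this switch isolate the effect of the helicity barrier.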
In the expression for R above, β_R ≡ β / β_0, and β_0, R_low, and R_high are parameters; R_low (R_high) controls the temperature ratio in regions of low (high) β, where the plasma is dominated by gas (magnetic) pressure. The value of β_0 determines where the transition between R_low and R_high occurs. We adopt typical values for R_low and β_0 and set them each to unity. Since plasma β is large in disk regions, models with large R_high mostly have cooler disks and, by contrast, hotter coronæ and funnel walls, and thus often produce more emission from regions off the midplane.§ RESULTS We now use a set of GRMHD simulations to study the importance of the helicity barrier in simulated polarized observations of RIAFs. For simplicity, we focus on the M87* accretion system and so set the black hole mass to M = 6.5 × 10^9M_⊙ for consistency with observational results (see Table 1 of <cit.>). This mass choice provides a physical length scale for the simulations. We use Equation <ref> and the averaging procedure described above to compute σ_c across the domain and calculate electron temperatures. We image each snapshot of the fluid simulation twice, once using electron temperatures computed incorporating the helicity barrier and once with the effects of the helicity barrier turned off. For each time series of images, we rescale the mass density of the accreting plasma until the average of the 230GHz flux density light curve matches the observed (instantaneous) value of F_230 GHz = 0.65Jy (see Appendix B.1 of <cit.>). See Appendix D of <cit.> for caveats and more detail about the flux-fitting procedure. To compare against observations, the camera must be assigned both an inclination and a position angle, which we define relative to the jet launched by the system. For our target M87*, there is clear evidence of a large-scale jet (see <cit.>) with a measured inclination angle of 17^∘ relative to the line of sight. We therefore orient our camera at either 17^∘ or 163^∘ relative to the axis of the jet in the simulation (which is coincident with the black hole spin axis), according to which parity reproduces the observed brightness asymmetry seen in <cit.>, which manifests as a greater brightness temperature in the bottom half of the image. The statistical axisymmetry of the accretion flow means that rotating the image to align the position angle of the jet with its observed value does not influence the other image statistics. We thus fix the position angle of the images so that the jet lies in the vertical direction, as determined by the default PA = 0 setting for the simulations.§.§ Model parameter space The space of possible accretion configurations is high dimensional, covering the black hole mass and angular momentum parameters, the accretion rate of the system, boundary conditions and gas composition, and the magnetic field configuration. It is not computationally feasible to explore the full parameter space, so we focus on the subset corresponding to the canonical models used in the initial Event Horizon Telescope analysis of M87*. We thus aim to identify general trends and gauge the overall importance of the helicity barrier rather than make quantitatively precise predictions. The magnetization of an accretion system can be used to differentiate flows according to whether the magnetic pressure near the horizon is strong enough to counterbalance the inward ram pressure of the fluid. When the magnetic pressure is high enough, the infalling motion of the plasma is arrested and the flow enters the magnetically arrested disk (MAD; <cit.>) state.
The alternative scenario is canonically referred to as standard and normal evolution (SANE; <cit.>). SANE flows are turbulent but steady; in MAD flows, large tubes of magnetic flux arrest the inward motion of the flow, and accretion is chaotic and mediated by transient filaments of hot plasma that thread the region between the hole and the plasma at large radius. We consider both the MAD and SANE accretion states. We express the black hole angular momentum in terms of the dimensionless spin parameter a_* ≡ J c / GM^2 with |a_*| ≤ 1, where J is the magnitude of the angular momentum. By convention, we set a_* < 0 when the angular momentum of the accretion flow and the spin of the black hole are anti-aligned. There is no reason that the angular momenta of the hole and the flow must be precisely aligned or anti-aligned. Tilted systems have recently gained broad attention; for simplicity, however, we restrict our focus to systems with no tilt. We consider five black hole spins a_* = -15/16, -1/2, 0, 1/2, and 15/16 (hereafter written as -0.94, -0.5, 0, 0.5, 0.94 to be consistent with EHTC publications). Although computing the radiative transfer coefficients requires choosing mass-density and length scales, since the GRMHD equations and Equation <ref> are invariant under these rescalings, it is possible to measure the degree of cross-helicity directly from the scale-free fluid snapshot variables before restricting to a particular observer inclination or black hole accretion system. In Figure <ref> we show the simulation-averaged values both for |σ_c| and for plasma β. For |σ_c|, we have computed the time-average of the absolute value of the signed quantity σ_c that has been calculated per fluid snapshot as described in Section <ref> (and shown in the right-most panel of Figure <ref>). We show the average of the absolute value to account for the fact that the infall timescale is often shorter than the timescale over which σ_c changes sign, since it is the magnitude of σ_c that controls the helicity barrier. Thus, the non-zero imbalance in the midplane of the MADs is due to spontaneous symmetry breaking that does not average out before the fluid parcels carrying the cross-helicity fall through the event horizon. Different choices for the averaging windows are considered in the discussion (see especially Figure <ref>). In regions where β is large, the accretion flow takes the form of a turbulent disk, fluctuations are large, and σ_c is smallest. This effect is most prominently seen in the SANE flows and in flows with small |a_*|. Since MAD flows have more consistent magnetic fields, σ_c keeps the same sign over longer timescales; this is reflected in the characteristically larger values of σ_c in the MAD flows. In all cases, the helicity barrier operates most strongly in regions with the most ordered magnetic field. In SANE flows the most ordered fields live within the jet and its enveloping wind. These regions have low plasma β and are approximately bounded by the magnetization σ = 1 contour. In MAD flows, the field maintains order throughout the domain and helicity builds up nearly equally everywhere. The “funnel” regions in the low-spin cases exhibit particularly ordered fields, which arise as accreted magnetic field lines build up near the horizon and are less perturbed by, e.g., the strong torques that a spinning black hole would impart on them due to frame dragging.§.§ Images and emission source We have thus far explored σ_c in global accretion models from an observer-agnostic perspective.
To understand how the helicity barrier influences observables, it is necessary to adopt an emission model, i.e., we must both choose thermodynamic flow parameters and set the observer inclination. We will focus on observations of the M87* accretion flow targeted by the EHT. We use the radiative transfer code described above to produce polarimetric images at the 230 GHz operational frequency of the EHT. In Figure <ref>, we show example images produced from the same single fluid snapshot shown in Figure <ref>, evaluated using thermodynamic models that either do or do not incorporate the influence of σ_c on the ion–electron energy partition as described in Section <ref>. Columns show the full polarized properties of the light, including the total intensity, the degree of local linear polarization √(Q^2 + U^2)/I, the electric vector position angle (EVPA) 1/2 arctan(U/Q) (measured east-of-north, or counterclockwise-from-vertical, on the sky), and the degree of circular polarization V/I, respectively. The bottom two rows of Figure <ref> show the same images as the top two rows but blurred with a 20 μas Gaussian to simulate the effective resolution of the Event Horizon Telescope. This blurring is particularly important when considering observations of, e.g., resolved linear polarization, which may be high when the resolution element is smaller than the spatial correlation length of the EVPA but which is decreased dramatically when blurring over regions with a rapidly varying EVPA. Although the images with and without the barrier are produced from the same fluid model, they correspond to different accretion rates, selected such that the average flux density is 0.65 Jy to be consistent with observations. Thus, although the morphology of the fluid in the underlying accretion flows is the same for the different images, the number density and magnetic field strength differ. Figure <ref> shows the factor by which the accretion rate must be increased for the flux to match observations. For internal consistency with the scale-free GRMHD equations, the increased accretion rate requires that any local energy density quantity be increased by the same factor. In our case, the plasma number density, the fluid internal energy, and the square of the magnetic field strength must all be increased by the value shown in Figure <ref>. MADs and especially large-R_high models have the largest required increase, as emission in those systems tends to be in regions with the largest imbalance and the greatest importance of the helicity barrier. Incorporating the helicity barrier yields higher estimates for the jet power, since the jet power scales directly with the accretion rate.[The relative power of the jet compared to the infalling rest-mass energy is determined by the simulation. When the plasma number density is increased to match the observed flux, all energy densities must be rescaled by the same factor, so the absolute jet power scales with the accretion rate.] Given an emission model, it is possible to evaluate how the helicity barrier alters the source morphology. In Figure <ref>, we show the location of the emission in both the MAD model of Figures <ref> & <ref> and in a representative SANE model. The MAD and SANE models have different spins and different electron thermodynamics. The right panels of the figure show the characteristic magnetization σ and cross-helicity σ_c of the emission. The funnel region that lies at the interface between the jet core and the disk typically has larger values of σ than the disk.
As expected, including the effects of the helicity barrier limits emission from regions with large σ_c. Figure <ref> shows that the emission in MAD models does not change drastically, while in SANE models the emission tends to shift away from the funnel wall and toward the lower-magnetization disk region. The right panels of Figure <ref> show this trend as well: emission shifts from regions of high σ_c to small σ_c, while the characteristic magnetization σ in the emission regions shifts from large values to small values in the SANE flow. Figure <ref> shows how the emission source changes across all library models. Emission in regions with large σ_c decreases, as expected. MADs typically produce emission throughout their infalling regions regardless of the thermodynamics prescription; as more of their domain has large values of σ_c, the effect of the helicity barrier is very evident as emission in regions with large σ_c drops significantly, altering the shape of each curve. SANE models with significant funnel-wall emission (i.e., models with large values of R_high) are often the most strongly affected. SANE models with low R_high, i.e., models where the majority of the emission comes from the disk, are almost completely unaffected.§.§ Polarization We now evaluate how including the effect of the helicity barrier can affect the linear polarimetric β_2 observable, which has been used by the EHT to gauge the strength of the horizon-scale magnetic field and differentiate between different accretion models <cit.>. The complex β_2 coefficient measures the power in (amplitude) and orientation of (argument) the azimuthally symmetric mode of the linear polarization vector across the image. The final value of the β_2 coefficient is determined by both the structure of the magnetic field in the emitting regions of the flow and the degree of depolarization due to differences in, e.g., Faraday rotation as the light propagates through the flow. Since the spin of the black hole influences the structure of the magnetic field, there is a trend of ∠β_2 with spin, with higher values of |a_*| producing more toroidal fields and pushing ∠β_2 towards zero (a radial linear polarization pattern). It is worthwhile to understand how plasma physics uncertainties might complicate this relationship. Figure <ref> shows how both the amplitude and argument of β_2 change for the different models in our library. Broadly, the amplitude of β_2 decreases with the inclusion of the helicity barrier, while the argument of β_2 is mostly unaffected. The amplitude typically decreases the most in MAD models. Since σ_c in MAD flows is mostly consistent across the domain, the regions that contribute to the image are mostly unchanged, so the general image structure persists. The differences are instead due to the reduced emission per particle at lower temperatures, which must be compensated by an increased number density. This renormalization results not only in an increased accretion rate but, more importantly, also in increased depolarization, since the differences in the increased column density of plasma along neighboring lines of sight lead to more strongly differing levels of rotation over the course of the light's propagation. The differences that produce more scrambled images are quantitatively related to the Faraday depth along the geodesics as emission travels from its source to the observer. Figure <ref> shows a proxy for the factor by which the Faraday depth increases when the effects of the helicity barrier are included.
Our proxy is computed by first evaluating the total Faraday depth along the full geodesic for each pixel in the image and then computing the polarization P = √(Q^2 + U^2)-weighted average of these values over all image pixels. The increase in Faraday depth is significant in MAD flows for all models; in SANE flows, the Faraday depth increases are more evident in models with large R_high, where the emission is more likely to arise in the large-σ_c jet funnel regions. This scrambling effect can be seen in the EVPA panels of Figure <ref>, especially in the lower parts of the images, where the bottom panel (with its lower density) has more coherent EVPA compared to the top panel. In the blurred linear polarimetric maps in the same figure, it is clear that the linear polarization in the bottom-right (southwest) part of the image decreases because the polarized intensity is canceled out by the near-random phases of the neighboring pixels' EVPAs. In SANE models the polarization pattern is often already highly scrambled, because the magnetic fields in the emission region are highly disordered. Since the images start out scrambled, increasing the number density of the flow does not have as noticeable an effect. The increased optical depth through the disk also means that the image of the lensed photon ring will appear less depolarized (see the blue ring that appears in the “without barrier” resolved linear polarization image of Figure <ref>). When the disk is optically thin, each pixel contains contributions from the direct image as well as from the lensed secondary (and so on) images. The lensed images exhibit a conjugate polarization signature; the contributions from the lensed images cancel in part, and the summed final polarization signal is decreased (for more detail see <cit.>). The increased column density due to the effect of the helicity barrier on the temperatures means that the secondary image is less prominent, and thus less cancellation happens along the relevant trajectories.§.§ Ring diameter & variability Finally, we check whether disregarding the helicity barrier can bias several other parameters inferred by the EHT. Here, we focus on the ring diameter <cit.>, which has been used to test consistency of the observational data with the theory of general relativity, and on the variability in the compact-flux light curve, which has demonstrated notable disagreement between models and the observational data <cit.>. We measure a ring diameter for each image in our library with the ring-extractor method described in Section 9 of <cit.>. The method makes its measurement from the algorithmically identified “center point” of each image, i.e., the point that is most nearly equidistant from the peak intensity along each of 360 equally spaced rays cast from itself. The left panel of Figure <ref> shows a sampling of ring diameters taken from our M87*-like models, which are at low inclination, where a ring diameter measurement is easiest to perform. As can be seen in the figure, disregarding the helicity barrier in MAD models tends to increase the measured ring diameter slightly, although the overall measurements stay roughly consistent. In SANE models the helicity barrier alters the electron temperatures such that the measured ring diameter is roughly consistent or slightly larger. The largest increase in measured ring diameter occurs for the models with large R_high, where emission in the jet funnel is suppressed and the disk contributes much more significantly to the image.
The right panel of Figure <ref> shows how the measured modulation index changes when the effect of the helicity barrier is included in the electron thermodynamics calculation. The modulation index is M_ΔT ≡ σ_ΔT / μ_ΔT, where σ_ΔT and μ_ΔT are the standard deviation and mean of the time series, respectively, measured over some interval ΔT. We use ΔT = 553GM/c^3 (≈ 6.5 months for M87*) to be consistent with the timescale used in the EHT analysis of the Galactic Center, which found inconsistencies between the models and the observational data. MAD models are mostly unaffected, since the geometric extent of the emission region does not change significantly with or without the helicity barrier. In contrast, in SANE models, especially those with larger values of R_high, the emission tends to shift towards the disk when the helicity barrier is included, increasing the relative imprint of the turbulent dynamics near the horizon (where the flow is more variable than in the funnel wall).§ DISCUSSION Our approach is subject to several limitations. First, our GRMHD simulations do not dynamically evolve electron temperatures and instead only track the total energy of the ion-plus-electron fluid, leaving the electron distribution function to be prescribed in post-processing. This procedure relies on the assumption that the ratio of heating rates can be directly mapped to the ratio of temperatures, which may be a reasonable assumption if the majority of the internal energy is locally generated, but this need not be the case. Second, our base heating model does not depend on the structure of the accretion flow and does not correspond to any specific dissipation mechanism. Additionally, our simulations do not take into account the effects of pressure anisotropy on the dynamics of the flow and the thermodynamics of ions and electrons. Additional limitations are due to our implementation of the helicity barrier physics. Although it is necessary to compute the strength of local fluctuations in the Elsässer variables, there is no clear way to calculate the mean flow ⟨ z_± ⟩ due to the global geometric structure and the relativistic nature of the problem. In this work, we use temporal averages rather than spatial ones, and our results depend on the details of the averaging procedure. To estimate the uncertainty due to averaging, in Figure <ref> we compare the measured value of β_2 for different averaging windows for a representative set of models and find that the results are relatively robust to this choice. Our model also assumes that inhomogeneities do not allow the locally generated helicity to be transported away. Finally, even though the barrier is expected to form in low-β regions of turbulence, we have applied the barrier-induced heating reductions across the entire domain. We do not expect this distinction to be qualitatively important, as high-β regions have relatively small amounts of electron heating in any case. Nevertheless, the heating reduction due to the helicity barrier applies only in the regions of the flow where the main dissipation mechanism is turbulence. In our simulations, we assume that dissipation throughout the entire domain proceeds through turbulence, which is most likely not the case. The relative importance of different dissipation channels is not yet well understood. It is worthwhile to consider whether the effects of the helicity barrier could be reproduced with modifications to the canonical electron temperature prescription of Equation <ref>.
To first order, the helicity barrier produces cooler electrons across the domain and thereby increases the temperature ratio T_i/T_e everywhere; this change could be emulated by increasing the R_high parameter by a factor of a few to order ten. Cool electron populations like the one that would result from this change have been invoked to explain disagreements between observations and model predictions for jet power and polarimetric properties <cit.>. Is it possible to do better? The strength of the helicity barrier depends on the normalized cross-helicity σ_c. Comparing the panels of Figures <ref> and <ref> shows that there is no clear relationship between σ_c and other fluid parameters, like σ or β. Any modification to Equation <ref> would at least need to be a function of some other locally calculable quantity, which is not readily identifiable, so it is not clear how to modify the prescription without introducing extra complexity comparable to directly evaluating σ_c. Thus, while such a global approach would produce the same qualitative effects as incorporating the helicity barrier, the complicated structures seen in Figure <ref> suggest that any global approach would be inaccurate in detail. How the inaccuracies due to this approximation would compare to other modeling uncertainties is a different question, and we caution that the sensitive dependence of the observables on the details of the electron distribution function makes it challenging to evaluate any kind of Jacobian. Performing a rigorous comparison is thus well beyond the scope of this paper.§ SUMMARY We have studied the effects of imbalanced turbulence and the resultant helicity barrier in the context of radiatively inefficient black hole accretion. We have computed the degree of cross-helicity buildup in a suite of numerical accretion simulations covering both magnetically arrested disk (MAD) and standard and normal evolution (SANE) flows and over a range of black hole spins. We have also used results from local simulations of non-relativistic low-β turbulence <cit.> to explore how including (or not) the helicity barrier in the imaging procedure can affect predictions for 230GHz horizon-scale black hole images relevant for Event Horizon Telescope analyses <cit.>. The local level of sustained imbalance determines the importance of the helicity barrier, which in turn limits electron heating. We have found that the imbalance tends to be smaller in regions of the flow with high plasma β (commonly found in the disks of SANE flows and in flows with low black hole spin). In contrast, in regions with ordered magnetic fields, such as in the jet and its surrounding wind in SANE flows, as well as throughout much more of the domain in MAD flows, imbalance persists, helicity builds, and electron heating is more restricted. Accounting for the helicity barrier thus causes emission to shift away from the funnel wall towards the lower-magnetization disk region in SANE flows, while the emission morphology is largely unaffected in MADs. When comparing to observations, the total emission produced by a candidate accretion flow must match its observed value, and the cooler electrons require larger plasma number densities and magnetic field strengths. Thus, neglecting the helicity barrier can lead to accretion rates and inferred jet powers that are underestimated by more than a factor of two. The higher plasma densities also lead to increased Faraday depths and depolarization, resulting in decreased amplitudes of the polarimetric β_2 observable.
Finally, we find that the inferred ring diameter and light curve variability modulation index are mostly unchanged for MAD flows but may increase for SANE flows, especially those with large values of R_high. The increased jet powers and decreased coherent polarizations due to the inclusion of the helicity barrier may help explain some qualitative differences between observed EHT data and contemporary modeling efforts <cit.>. The authors thank Michi Bauböck, Andrew Chael, Matt Kunz, Elias Most, Eliot Quataert, Jonathan Squire, Jim Stone, and Muni Zhou for useful discussions and suggestions. The authors also thank the anonymous referee for useful comments and suggestions. G.N.W. was supported by the Taplin Fellowship. Support for L.A. was provided by the Institute for Advanced Study.
http://arxiv.org/abs/2312.16172v1
{ "authors": [ "George N. Wong", "Lev Arzamasskiy" ], "categories": [ "astro-ph.HE" ], "primary_category": "astro-ph.HE", "published": "20231226185959", "title": "Balanced Turbulence and the Helicity Barrier in Black Hole Accretion" }
Feature Selection for High-Dimensional Neural Network Potentials with the Adaptive Group Lasso Johannes Sandberg^1,2,3, Thomas Voigtmann^1,2, Emilie Devijver^4, Noel Jakse^3 January 14, 2024 ============================================================================================== We consider the problem of estimating unknown parameters in stochastic differential equations driven by colored noise, given continuous-time observations. Colored noise is modelled as a sequence of mean zero Gaussian stationary processes with an exponential autocorrelation function, with decreasing correlation time. Our goal is to infer parameters in the limit equation, driven by white noise, given observations of the colored noise dynamics. As in the case of parameter estimation for multiscale diffusions, the observations are compatible with the model only in the white noise limit, and classic estimators become biased, implying the need to preprocess the data. We consider both the maximum likelihood and the stochastic gradient descent in continuous time estimators, and we propose modified versions of these methods, in which the observations are filtered using an exponential filter. Stochastic differential equations with both additive and multiplicative noise are considered. We provide a convergence analysis for our novel estimators in the limit of infinite data, and in the white noise limit, showing that the estimators are asymptotically unbiased. We consider in detail the case of multiplicative colored noise, in particular when the Lévy area correction drift appears in the limiting white noise equation. A series of numerical experiments corroborates our theoretical results. AMS subject classifications. 60H10, 60J60, 62F12, 62M05, 62M20. Keywords. Diffusion processes, colored noise, filtered data, Lévy area correction, maximum likelihood estimator, stochastic gradient descent in continuous time.§ INTRODUCTION Estimating parameters from data in physical models is important in many applications. In recent years, model calibration has become an essential aspect of the overall mathematical modelling strategy <cit.>. Often complex phenomena cannot be described by deterministic equations, and some form of randomness needs to be taken into account, either due to model uncertainty/coarse-graining, parametric uncertainty, or imprecise measurements. Often, noise in dynamical systems is modelled as white noise, i.e., as a mean-zero Gaussian stationary process that is delta-correlated in time, leading to Itô stochastic differential equations (SDEs). Inferring unknown parameters in diffusion models is therefore an essential problem which has been thoroughly investigated <cit.>. There are many applications, however, where modeling noise as an uncorrelated-in-time process is not accurate and where non-trivial (spatio-)temporal correlation structures need to be taken into account, leading to colored noise. See, e.g., <cit.> and the references therein for applications of colored noise to physics, chemistry, and biology. Colored noise is modelled as a mean zero Gaussian stationary process with an exponential autocorrelation function, i.e., a stationary Ornstein–Uhlenbeck process <cit.>. A natural question is whether the solution to an SDE driven by colored noise converges to the solution to the white noise-driven SDE, in the limit as the correlation time of the noise goes to zero. This is certainly the case for SDEs driven by additive colored noise.
In fact, it can be shown that, under standard dissipativity assumptions on the drift, the convergence is uniform in time, and error estimates can be obtained <cit.>. In particular, the stability and ergodic properties still hold in a vicinity of the white noise regime, i.e., for SDEs driven by colored noise with a sufficiently small correlation time. The white noise limit becomes more complicated when the noise is multiplicative, in particular in the multidimensional case. In one dimension, the well known Wong–Zakai theorem <cit.> implies that, in the white noise limit, we obtain the Stratonovich SDE. However, this is not true in general in dimensions higher than one, except for the case of reducible SDEs <cit.>. In general, this limiting procedure introduces an extra drift term, in addition to the stochastic integral, due to properties of the Lévy area of Brownian motion <cit.>. This additional drift term, the Lévy area correction, can have a profound impact on qualitative properties of solutions to the SDE <cit.>, <cit.>. In addition to the rigorous derivation of the Lévy area correction drift term <cit.>, see also <cit.> for a derivation using the theory of rough paths. The Lévy area correction drift term can also be derived, for multiplicative SDEs driven by colored noise, using multiscale analysis <cit.>, <cit.>. We also note that, for SDEs driven by colored multiplicative noise, and in the presence of an additional time scale, due to, e.g., delay effects or inertia, the white noise limit might lead to stochastic integrals that are neither Itô nor Stratonovich <cit.>. This is a phenomenon that has also been verified experimentally on a noisy electric circuit <cit.>. The main goal of this paper is to develop inference methodologies for inferring parameters in the limiting, white noise SDE from data coming from the colored-noise driven equation, in the regime of small noise correlation time. This problem, and the approach adopted in this paper, are similar to the problem of estimating unknown parameters in homogenized models given observations of the slow variable from the corresponding multiscale dynamics <cit.>. Both in the problem studied in this paper and in the multiscale one, standard inference methodologies suffer from the problem of model misspecification: the data, obtained from observations of the full dynamics, are compatible with the coarse-grained/homogenized/white noise model only at appropriate time scales, at which the limiting equation is valid. This leads to a systematic bias in, e.g., the maximum likelihood estimator (MLE). In this paper, in addition to the MLE we also consider stochastic gradient descent in continuous time (SGDCT), which allows for online learning from data. While the former is a well-established method for parameter estimation, the latter has been recently developed as an inference methodology for diffusion processes in <cit.>, and further analyzed in <cit.>, where a rate of convergence is obtained and a central limit theorem is proved. The SGDCT estimator was recently applied to McKean (mean field) SDEs in <cit.>. In contrast to stochastic gradient descent in discrete time, which has been studied in detail in, e.g., <cit.>, SGDCT consists in solving an SDE for the unknown parameter, and thus performing online updates. In particular, it continuously follows a noisy descent direction along the path of the observations, yielding rapid convergence.
Both the MLE and SGDCT, even if they perform well in problems where one is confronted with single-time-scale data, and for which model and data are compatible at all scales, fail in inferring parameters from observations driven by colored noise, due to systematic bias <cit.>. Therefore, it is necessary to preprocess the data. Inspired by <cit.>, we propose to filter the data through an appropriate exponential kernel, and to then use the filtered data/process in the definition of the estimators. This approach was applied to the MLE for multiscale SDEs in <cit.>, where it is demonstrated that the filtering methodology outperforms and is more robust than classic subsampling techniques <cit.>. It was shown in this paper that inserting filtered data in the MLE allows one to correctly estimate the drift coefficient of the homogenized equation, when data are given from the slow variable of the multiscale system. In addition to the MLE, filtered data have then been used in combination with the continuous-time ensemble Kalman–Bucy filter <cit.>. The same methodology was also successfully applied to the case of discrete-time observations from multiscale dynamics <cit.>. In particular, based on <cit.> and the convergence of eigenvalues and eigenfunctions of the generator of the multiscale dynamics to the corresponding eigenpairs of the generator of the homogenized process <cit.>, martingale estimating functions are first constructed and then modified taking into account filtered data. Moreover, a different filtering approach based on moving averages is presented in <cit.>, still in the framework of multiscale diffusions. The main contribution of this article is showing that coupling filtered data with either the MLE or SGDCT is beneficial for their effectiveness when applied to stochastic models with colored noise. Our novel estimators based on filtered data are indeed able to learn linear parameters in the drift of the limit equation with white noise from trajectories of the solution of the SDE driven by colored noise. We first consider the setting of additive noise and prove that both estimators are asymptotically unbiased in the limit of infinite data and when the correlation time of the colored noise vanishes. Our convergence analysis uses ideas from <cit.> and relies on the results in <cit.>. In addition, we show that our methodology is not restricted to the case of additive noise, by analyzing a particular case of multiplicative noise which yields the Lévy area correction. Despite the presence of this additional term, we show both theoretically and numerically that our estimators succeed in inferring the drift coefficient of the limit equation. This opens the possibility of extending these approaches to more general settings and more complex stochastic models driven by colored noise. Outline. The rest of the paper is organized as follows. In <ref> we introduce the general framework of stochastic processes driven by colored noise, and in <ref> we present the filtered data methodology, which is combined with the MLE and SGDCT estimators to infer parameters in SDEs driven by colored additive noise. Then, <ref> is devoted to the convergence analysis of the proposed estimators, and in <ref> we extend our methods to a particular case of diffusion with multiplicative colored noise, which yields the Lévy area correction.
Finally, in <ref> we demonstrate the effectiveness of our approach through numerical experiments, and in <ref> we draw our conclusions and address possible future developments.§ PROBLEM SETTING We consider the framework of <cit.> and model colored noise as a Gaussian stationary diffusion process, i.e., the Ornstein–Uhlenbeck process. Consider the system of SDEs for the processes X_t^ε ∈ ℝ^d, Y_t^ε ∈ ℝ^n in the time interval [0,T] dX^ε_t = h(X^ε_t) dt + g(X^ε_t) (Y^ε_t/ε) dt, dY^ε_t = - (A/ε^2) Y^ε_t dt + (σ/ε) dW_t, with initial conditions X_0^ε ∈ ℝ^d, Y_0^ε ∈ ℝ^n, and where h: ℝ^d → ℝ^d, g: ℝ^d → ℝ^d×n, A ∈ ℝ^n×n, σ ∈ ℝ^n×m, W_t ∈ ℝ^m is a standard Brownian motion, and 0 < ε ≪ 1 is the parameter which characterizes the colored noise. In the limit as ε → 0, the process X_t^ε converges to the solution of an SDE driven by white noise <cit.>. Define Σ = σσ^⊤ ∈ ℝ^n×n and assume that the eigenvalues of A have positive real parts and that the matrix Σ is positive definite. Then, the process Y^ε_t has a unique invariant measure, which is Gaussian with zero mean and covariance matrix Σ_∞ ∈ ℝ^n×n satisfying the steady-state variance equation A Σ_∞ + Σ_∞ A^⊤ = Σ. Moreover, define the quantities B: ℝ^d → ℝ^d×n, B(x) = g(x) A^-1, R: ℝ^d → ℝ^d×n, R(x) = g(x) Σ_∞, D: ℝ^d → ℝ^d×d, D(x) = R(x) B(x)^⊤, b: ℝ^d → ℝ^d, b(x) = ∇· D(x)^⊤ - B(x) ∇· R(x)^⊤. In the limit as ε → 0 the process X_t^ε converges weakly in 𝒞^0([0,T];ℝ^d) to the solution X_t of the Itô SDE <cit.> dX_t = (h(X_t) + b(X_t)) dt + √(2 D^S(X_t)) dW_t, with initial condition X_0 = X_0^ε, where W_t again denotes a standard d-dimensional Brownian motion, D^S denotes the symmetric part of the matrix D, i.e., D^S(x) = (D(x) + D(x)^⊤)/2, and the additional drift term b is called the Lévy area correction. Let n=m=2, let α, γ, η be positive constants, and consider system (<ref>) with A = α I + γ J and σ = √(η) I, where I = [ 1 0; 0 1 ] and J = [ 0 1; -1 0 ]. Then we have Σ = η I, Σ_∞ = (η/2α) I, and A^-1 = (1/ρ) A^⊤, where ρ = α^2 + γ^2, which give B(x) = (α/ρ) g(x) - (γ/ρ) g(x) J, R(x) = (η/2α) g(x), D(x) = (η/2ρ) g(x) g(x)^⊤ + (ηγ/2αρ) g(x) J g(x)^⊤, which in turn imply D^S(x) = (η/2ρ) g(x) g(x)^⊤. Therefore, we obtain b(x) = (η/2ρ) ( ∇· (g(x)g(x)^⊤) - g(x) ∇· g(x)^⊤ ) - (ηγ/2αρ) ( ∇· (g(x)Jg(x)^⊤) - g(x)J ∇· g(x)^⊤ ). Since it will be employed in the following, we recall here the conversion from Stratonovich to Itô integrals and SDEs. Consider the d-dimensional Stratonovich SDE dX_t = h(X_t) dt + g(X_t) ∘ dW_t, where X_t ∈ ℝ^d, h: ℝ^d → ℝ^d, g: ℝ^d → ℝ^d×m, and W_t is an m-dimensional Brownian motion. Then, equation (<ref>) is equivalent to the Itô SDE dX_t = (h(X_t) + c(X_t)) dt + g(X_t) dW_t, with c(x) = (1/2) ( ∇· (g(x)g(x)^⊤) - g(x) ∇· g(x)^⊤ ), and where the divergence of a matrix is computed row by row. Moreover, we have the following conversion for integrals: ∫_0^T X_t ∘ dY_t = ∫_0^T X_t dY_t + (1/2) [X_t, Y_t]_T, where [X_t, Y_t]_T denotes the quadratic covariation of the two processes X_t and Y_t.
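To make the roles of ε and of the Ornstein–Uhlenbeck driver concrete, the following sketch integrates the one-dimensional version of the colored-noise system above with an Euler–Maruyama scheme; names and parameter values are illustrative, and the time step must resolve the fast scale (dt ≪ ε^2).

```python
import numpy as np

def simulate_colored(h, g, A, sigma, eps, x0, y0, T, dt, seed=0):
    """Euler-Maruyama for dX = h(X) dt + g(X) (Y/eps) dt,
    dY = -(A/eps^2) Y dt + (sigma/eps) dW (scalar case d = n = m = 1).
    Note the stiffness: dt must be much smaller than eps**2."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    x = np.empty(n_steps + 1); x[0] = x0
    y = np.empty(n_steps + 1); y[0] = y0
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        x[k + 1] = x[k] + h(x[k]) * dt + g(x[k]) * (y[k] / eps) * dt
        y[k + 1] = y[k] - (A / eps**2) * y[k] * dt + (sigma / eps) * dw
    return x, y

# Example: Ornstein-Uhlenbeck drift h(x) = -x with additive noise g(x) = 1.
x, y = simulate_colored(lambda x: -x, lambda x: 1.0, A=1.0, sigma=1.0,
                        eps=0.1, x0=0.0, y0=0.0, T=10.0, dt=1e-4)
```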
§.§ One-dimensional case In the one-dimensional case, i.e., when d=n=m=1, a stronger result at the level of paths can be proven <cit.>. We notice that at stationarity the process Y^ε_t is a Gaussian process with auto-correlation function 𝒞(t,s) = 𝔼[ Y^ε_t Y^ε_s ] = (σ^2/2A) e^{-(A/ε^2)|t-s|}, which implies lim_{ε→0} 𝔼[ (Y^ε_t/ε)(Y^ε_s/ε) ] = (σ^2/A^2) δ(t-s), and therefore Y^ε_t/ε converges to the white noise (σ/A) Ẇ_t as ε vanishes. Moreover, by equation (<ref>), the process X^ε_t converges in the limit as ε → 0 to the solution X_t of the Itô SDE dX_t = h(X_t) dt + (σ^2/2A^2) g(X_t) g'(X_t) dt + (σ/A) g(X_t) dW_t, which, due to <ref>, is equivalent to the Stratonovich SDE dX_t = h(X_t) dt + (σ/A) g(X_t) ∘ dW_t. Finally, through a repeated application of Itô's formula, we can derive error estimates. In particular, it is possible to show that 𝔼[ (X^ε_t - X_t)^2 ]^{1/2} ≤ C ε, where C>0 is a constant independent of ε. We remark that for some particular examples pathwise convergence can also be proved in arbitrary dimensions <cit.>.§ PARAMETER ESTIMATION FOR ADDITIVE COLORED NOISE In this section, we consider the problem of estimating parameters which appear linearly in the drift, given observations only from the fast process (X_t^ε)_{t∈[0,T]}. In this section we focus on the (easier) case of additive noise, where the Lévy area correction does not appear. Let the drift function h depend linearly on an unknown matrix θ ∈ ℝ^{d×ℓ}, i.e., h(x) = θ f(x) for some function f: ℝ^d → ℝ^ℓ, and let the diffusion function g(x) = G be constant. Then, system (<ref>) reads dX^ε_t = θ f(X^ε_t) dt + G (Y^ε_t/ε) dt, dY^ε_t = - (A/ε^2) Y^ε_t dt + (σ/ε) dW_t, and the limit equation (<ref>) becomes dX_t = θ f(X_t) dt + √(2D^S) dW_t, where D = G Σ_∞ A^-⊤ G^⊤. We assume the following conditions, which guarantee ergodicity of the colored and white noise SDEs. There exist constants 𝔞, 𝔟, 𝔠 > 0, such that θ f(x) · x ≤ 𝔞 - 𝔟‖x‖^2 and A y · y ≥ 𝔠‖y‖^2, for all x ∈ ℝ^d and y ∈ ℝ^n. Moreover, f ∈ 𝒞^2(ℝ^d) and the constants satisfy 2𝔟𝔠 - ‖G‖^2 > 0. Let us consider the simple case of the Ornstein–Uhlenbeck process in one dimension, i.e., set the drift function to f(x) = -x. Then the process X^ε_t can be computed analytically and is given by X^ε_t = X^ε_0 e^{-θ t} + (G/ε) ∫_0^t e^{-θ(t-s)} Y^ε_s ds. We remark that in this case the joint process (X^ε_t, Y^ε_t) is a Gaussian process whose covariance matrix at stationarity is 𝒞 = [ G^2σ^2/(2θ(A^2 - ε^4θ^2))  Gεσ^2/(2A(A + ε^2θ)); Gεσ^2/(2A(A + ε^2θ))  σ^2/(2A) ], and the limit process satisfies X_t ∼ 𝒩(0, G^2σ^2/(2A^2θ)). We recall that our goal is to infer the parameter θ given a realization (X_t^ε)_{t∈[0,T]} of the process defined by (<ref>). In the following subsections, we propose two different estimators: the first one is similar to the MLE, while the second one is based on the SGDCT.§.§ Maximum likelihood type estimator Let us first focus on equation (<ref>) and construct an estimator for θ given a trajectory (X_t)_{t∈[0,T]} from the same limit equation. Inspired by the MLE, we propose the following estimator θ̂(X,T) = ( ∫_0^T dX_t ⊗ f(X_t) ) ( ∫_0^T f(X_t) ⊗ f(X_t) dt )^-1, where ⊗ denotes the outer product. The estimator in equation (<ref>) is not directly obtained by maximizing the likelihood function; in fact, the actual MLE would require knowledge of the diffusion term. However, since the shape of the estimator is similar and it is obtained by replacing the diffusion term by the identity matrix, in the following we will always refer to it as the MLE estimator. In the following result, whose proof can be found in <ref>, we show that the MLE estimator is asymptotically unbiased in the limit of infinite time. For clarity of the exposition, <ref>(i), which is required in <ref> below for the well-posedness of the estimator, will be stated in the next section together with the corresponding <ref>(ii) for filtered data. Let <ref>(i) hold and let θ̂(X,T) be defined in (<ref>).
Then, it holds lim_{T→∞} θ̂(X,T) = θ, a.s. Let us now consider the more interesting problem of estimating the parameter θ given observations from the system with colored noise. Since the process X_t^ε is close in a weak sense to the process X_t, it is tempting to replace X_t by X_t^ε in the definition of the previous estimator, which yields θ̂(X^ε,T) = ( ∫_0^T dX_t^ε ⊗ f(X_t^ε) ) ( ∫_0^T f(X_t^ε) ⊗ f(X_t^ε) dt )^-1. However, this estimator fails even when the process X_t is one-dimensional. In particular, the estimator vanishes in the limit of infinite observation time, as stated in the following proposition, whose proof is presented in <ref>. Let <ref>(i) hold and let θ̂(X^ε,T) be defined in (<ref>). Then, if d=1, it holds lim_{T→∞} θ̂(X^ε,T) = 0, a.s. Therefore, in the next sections we propose two different approaches to infer the unknown coefficients in the presence of colored noise, which rely on filtering the data with an appropriate kernel of the exponential family.§.§ The filtered data approach In <cit.>, exponential filters were employed to remove the bias from the MLE when estimating parameters in the homogenized SDE given observations of the slow variable in the multiscale dynamics. Motivated by this paper, we introduce the kernel k: [0,∞) → ℝ given by k(r) = (1/δ) e^{-r/δ}, where δ > 0 is a parameter measuring the filtering width. As will become transparent later, we need to assume the following condition on the filtering width in order to ensure the required ergodicity properties. The parameter δ in (<ref>) satisfies δ > 𝔠/(2𝔟𝔠 - ‖G‖^2), where 𝔟, 𝔠, G are given in <ref>. Notice that the condition given in <ref> on the filtering width δ is not very restrictive. In fact, if we take, e.g., the function f to be f(x) = x^{2r+1} for an integer r>0, then the coefficient 𝔟 > 0 can be taken arbitrarily large, and therefore the parameter δ can be chosen along the entire positive real axis. We then define filtered data by convolving the original data, driven by both colored and white noise, with the exponential kernel, and we obtain Z^ε_t = ∫_0^t (1/δ) e^{-(t-s)/δ} X_s^ε ds, Z_t = ∫_0^t (1/δ) e^{-(t-s)/δ} X_s ds. The new estimators based on filtered data are obtained by replacing one instance of the original data by the filtered data in both terms in (<ref>), for processes driven by white noise, and (<ref>), for processes driven by colored noise.
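The following sketch shows how the filtered process and the resulting modified estimator can be computed from a discretized scalar trajectory; the filter is implemented through the equivalent ODE δ dZ_t = (X_t - Z_t) dt, and all names are illustrative.

```python
import numpy as np

def filtered_trajectory(x, dt, delta):
    """Exponentially filtered data Z_t = int_0^t (1/delta) e^{-(t-s)/delta} X_s ds,
    obtained by an explicit Euler step of delta * dZ = (X - Z) dt with Z_0 = 0."""
    z = np.zeros_like(x)
    for k in range(len(x) - 1):
        z[k + 1] = z[k] + (x[k] - z[k]) * dt / delta
    return z

def mle_filtered(x, dt, delta, f):
    """Filtered-data MLE for a scalar drift parameter (d = l = 1):
    theta_hat = (sum_k f(Z_k) (X_{k+1} - X_k)) / (sum_k f(X_k) f(Z_k) dt)."""
    z = filtered_trajectory(x, dt, delta)
    dx = np.diff(x)
    return np.sum(f(z[:-1]) * dx) / (np.sum(f(x[:-1]) * f(z[:-1])) * dt)
```

Applied to a trajectory generated as in the earlier simulation sketch, mle_filtered(x, dt, delta, lambda u: -u) approximates the drift coefficient θ of the limit equation.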
Before defining the estimators and presenting the main theoretical results of this section, we introduce a technical assumption, which corresponds to the strong convexity of the objective function in <cit.>. This assumption is related to the nondegeneracy of the Fisher information matrix studied in <cit.> for mean field SDEs, see also <cit.>, and, in our context, to the identifiability of the drift parameters θ from observations of X_t^ε. There exists a constant K>0 such that for all v ∈ ℝ^ℓ (i) v^⊤ 𝔼^μ[ f(X) ⊗ f(X) ] v ≥ K ‖v‖^2, (ii) v^⊤ 𝔼^μ[ f(X) ⊗ f(Z) ] v ≥ K ‖v‖^2, where μ is the invariant measure of the joint process (X_t,Z_t), which will be rigorously defined in <ref>. Moreover, the coefficient a in (<ref>) is such that aK > 1. Notice that <ref> implies that the same properties hold true also for the processes X_t^ε and Z_t^ε driven by colored noise if ε is sufficiently small. In fact, we have for all v ∈ ℝ^ℓ v^⊤ 𝔼^μ^ε[ f(X^ε) ⊗ f(Z^ε) ] v = v^⊤ 𝔼^μ[ f(X) ⊗ f(Z) ] v + v^⊤ ( 𝔼^μ^ε[ f(X^ε) ⊗ f(Z^ε) ] - 𝔼^μ[ f(X) ⊗ f(Z) ] ) v. Then, letting 0 < 𝔡 < K, by the convergence in law of the process (X_t^ε,Z_t^ε) to (X_t,Z_t), there exists ε_0 > 0 such that for all ε < ε_0 we get v^⊤ ( 𝔼^μ^ε[ f(X^ε) ⊗ f(Z^ε) ] - 𝔼^μ[ f(X) ⊗ f(Z) ] ) v ≤ ‖ 𝔼^μ^ε[ f(X^ε) ⊗ f(Z^ε) ] - 𝔼^μ[ f(X) ⊗ f(Z) ] ‖ ‖v‖^2 ≤ 𝔡 ‖v‖^2, which together with equation (<ref>) and <ref> gives v^⊤ 𝔼^μ^ε[ f(X^ε) ⊗ f(Z^ε) ] v ≥ (K - 𝔡) ‖v‖^2 ≕ K_𝔡 ‖v‖^2, where we notice that K_𝔡 can be chosen arbitrarily close to K. Similarly, we also obtain v^⊤ 𝔼^μ^ε[ f(X^ε) ⊗ f(X^ε) ] v ≥ K_𝔡 ‖v‖^2. Moreover, we remark that even if <ref> is stated for vectors v ∈ ℝ^ℓ, it also implies that for any matrix V ∈ ℝ^{ℓ×ℓ} we have ⟨ V, 𝔼^μ[ f(X) ⊗ f(X) ] V ⟩ ≥ K ‖V‖^2, ⟨ V, 𝔼^μ[ f(X) ⊗ f(Z) ] V ⟩ ≥ K ‖V‖^2, ⟨ V, 𝔼^μ^ε[ f(X^ε) ⊗ f(X^ε) ] V ⟩ ≥ K_𝔡 ‖V‖^2, ⟨ V, 𝔼^μ^ε[ f(X^ε) ⊗ f(Z^ε) ] V ⟩ ≥ K_𝔡 ‖V‖^2, where ⟨·,·⟩ and ‖·‖ stand for the Frobenius scalar product and norm, respectively. In fact, e.g., for the first inequality, but similarly for the others, we have ⟨ V, 𝔼^μ[ f(X) ⊗ f(X) ] V ⟩ = ∑_{i=1}^ℓ v_i^⊤ 𝔼^μ[ f(X) ⊗ f(X) ] v_i ≥ K ∑_{i=1}^ℓ ‖v_i‖^2 = K ‖V‖^2, where v_1^⊤, …, v_ℓ^⊤ are the rows of the matrix V, and the inequality follows from <ref>. The hypotheses about the coercivity of the matrices 𝔼^μ[ f(X) ⊗ f(X) ] and 𝔼^μ[ f(X) ⊗ f(Z) ] in <ref> are not a strong limitation of the scope of the next theorems. In fact, the matrix 𝔼^μ[ f(X) ⊗ f(X) ] is already symmetric positive semi-definite, so we are only requiring the matrix to have full rank. Moreover, the process Z_t is a filtered version of the process X_t, and therefore in practice we expect f(X_t) and f(Z_t) to behave similarly for the majority of the time, in particular when the filtering width δ is sufficiently small, implying the property for 𝔼^μ[ f(X) ⊗ f(Z) ]. Hence, we expect these assumptions to hold true in all concrete examples, as we also observed in our numerical experiments. We now study the performance of the MLE estimator (<ref>) in the presence of observations from the limit equation. In particular, we define the estimator θ̂_exp^δ(X,T) = ( ∫_0^T dX_t ⊗ f(Z_t) ) ( ∫_0^T f(X_t) ⊗ f(Z_t) dt )^-1 and prove that it is asymptotically unbiased in the limit of infinite data. The following result is therefore analogous to <ref>, and shows that our modification does not affect the unbiasedness of the MLE. Let <ref>(ii) hold and let θ̂_exp^δ(X,T) be defined in (<ref>). Then, it holds lim_{T→∞} θ̂_exp^δ(X,T) = θ, a.s. We now focus on our main problem of interest, i.e., the case when the observations are generated by the system with colored noise, for which the corresponding estimator is given by θ̂_exp^δ(X^ε,T) = ( ∫_0^T dX_t^ε ⊗ f(Z_t^ε) ) ( ∫_0^T f(X_t^ε) ⊗ f(Z_t^ε) dt )^-1. The following theorem, which is the main result of this section and whose proof can be found in <ref>, shows that, in contrast with the estimator in (<ref>), the proposed estimator is asymptotically unbiased in the limit as ε→0 and T→∞. Let <ref>(ii) hold and let θ̂_exp^δ(X^ε,T) be defined in (<ref>). Then, it holds lim_{ε→0} lim_{T→∞} θ̂_exp^δ(X^ε,T) = θ, a.s. In particular, due to <ref>, the estimator in (<ref>) gives a straightforward methodology for inferring the unknown drift coefficient given observations from the system (<ref>) with colored noise. Moreover, in the next section we show that this methodology based on filtered data can be applied to a different estimator, i.e., the SGDCT. <ref> and <ref> are important for the well-posedness of the MLE estimators (<ref>), (<ref>), (<ref>), (<ref>).
In fact, by the ergodic theorem we have, e.g., for the estimator (<ref>), and similarly for the others, thatlim_T →∞1/T∫_0^T f(X_t^) ⊗ f(Z_t^)t = ^μ^ [f(X^) ⊗ f(Z^)].Therefore, the positive-definiteness of the expectations in <ref> guarantees the invertibility of the matrices in the definition of the estimators, for sufficienlty large time T.The exponential kernel in (<ref>) is not the only possible choice for the filtering procedure. A different technique based on moving averages is presented in <cit.>, where it is applied to multiscale diffusions. We expect that the analysis presented in this paper applies to the moving average-based filtering methodology, when applied to systems driven by colored noise. We will leave this for future work. §.§ Coupling filtered data with SGDCTIn this section, we present a different approach for inferring parameters in SDEs driven by colored noise. In particular, we employ the SGDCT method introduced in <cit.>. We first consider the SGDCT as an inference methodology for estimating the parameter θ in the model (<ref>) given observations (X_t)_t∈[0,T] from the same equation. The SGDCT consists of the following system of SDEs for the unknown parameter%̣ṣθ_t= ξ_t I_t ⊗ f(X_t),I_t= X̣_t - θ_t f(X_t)t,where ℐ_t is called the innovation and ξ_t is the learning rate, which has the formξ_t = a/b+t,for some constants a,b>0, and with initial condition θ_0 = θ_0 ∈^d ×ℓ. We note that the SGDCT estimator (<ref>) is in fact quite similar to the MLE estimator (<ref>), with the additional feature of having introduced the learning rate ξ_t. Proceeding similarly to the previous section, we define analogous estimators for the reduced model using first data from the multiscale process driven by colored noise, and then employing filtered data. It turns out that for the latter, it is important to not modify the innovation term in order to keep the estimator asymptotically unbiased, as we will see in the theoretical analysis. The three considered estimators correspond the following system of SDEs:%̣ṣθ_t^ = ξ_t I_t^⊗ f(X_t^), I_t^ = X̣_t^ - θ_t^ f(X_t^)t,%̣ṣθ_exp,t^δ = ξ_t I_exp,t^δ⊗ f(Z_t), I_exp,t^δ = X̣_t - θ_exp,t^δ f(X_t)t,%̣ṣθ_exp,t^δ, = ξ_t I_exp,t^δ,⊗ f(Z_t^), I_exp,t^δ, = X̣_t^ - θ_exp,t^δ, f(X_t^)t,with initial conditions θ_0^ = θ_exp,0^δ = θ_exp,0^δ, = θ_0 ∈^d ×ℓ. Consider now the one-dimensional case that was introduced in <ref>, i.e., when d=ℓ=n=m=1. The solutions of the SDEs (<ref>), (<ref>), (<ref>), (<ref>) have a closed form expression, and therefore the estimators can be computed analytically and are given byθ_t= θ + (θ_0 - θ) e^-∫_0^t ξ_r f(X_r)^2r + √(2D^S)∫_0^t ξ_s e^-∫_s^t ξ_r f(X_r)^2r f(X_s)W_s,θ_t^ = θ + (θ_0 - θ) e^-∫_0^t ξ_r f(X_r^)^2r + G ∫_0^t ξ_s e^-∫_s^t ξ_r f(X_r^)^2r f(X_s^) Y_s^/ s,θ_exp,t^δ = θ + (θ_0 - θ) e^-∫_0^t ξ_r f(X_r) f(Z_r)r + √(2D^S)∫_0^t ξ_s e^-∫_s^t ξ_r f(X_r) f(Z_r)r f(Z_s)W_s,θ_exp,t^δ, = θ + (θ_0 - θ) e^-∫_0^t ξ_r f(X_r^) f(Z_r^)r + G ∫_0^t ξ_s e^-∫_s^t ξ_r f(X_r^) f(Z_r^)r f(Z_s^) Y_s^/ s.In the next section we show that MLE and SGDCT behave similarly in the limit at t → +∞, and therefore also the estimators based on SGDCT are asymptotically unbiased. In particular, we have the following convergence results.Let θ_t be defined in (<ref>). Under <ref>(i), it holdslim_t→∞θ_t = θ, inL^2.Let θ_t^ be defined in (<ref>). Under <ref>(i), it holdslim_t→∞θ_t^ = 0, inL^2.Let θ_exp,t^δ be defined in (<ref>). Under <ref>(ii), it holdslim_t→∞θ_exp,t^δ = θ, inL^2.Let θ_exp,t^δ, be defined in (<ref>). 
Under <ref>(ii), it holdslim_→0lim_t→∞θ_exp,t^δ, = θ, inL^2.The proofs of these theorems, which are outlined in <ref>, are based on the additional assumption that the state space for equations (<ref>) and (<ref>) is a compact phase space, namely the d-dimensional torus 𝕋^d. In particular, we consider the wrapping of the stochastic processes on the torus <cit.>. Indeed the theoretical analysis, and especially <ref>, are based on the results for the Poisson problem presented in <cit.>, in which the Poisson equation on the torus is considered. We believe that it should be possible, at the expense of introducing additional technical difficulties, to modify our proof so that it applies to the case where the state space is the whole ^d. In this case the solution of the Poisson PDE and its derivatives are not expected to be bounded; we will need to control the time spent by the process outside a compact subset of the phase space and to use the results in <cit.>, see also <cit.>. The numerical experiments in <ref> indeed show that our estimators work when the state space is ^d. Since the focus of this paper is on parameter estimation, we chose to dispense with all these tehcnical difficulties by considering the case where our processes are defined on the torus. The estimators introduced here for SDEs driven by colored additive noise will then be successfully applied to numerical examples in <ref>, where we will observe that the exact unknown parameter can be accurately approximated.§ CONVERGENCE ANALYSISThis section is devoted to the proofs of the main results presented in the previous section, i.e., <ref>, which show the asymptotic (un)biasedness of the proposed estimators. The convergence analysis is divided in three parts. We first study the ergodic properties of the stochastic processes under investigation together with the filtered data, then we study the infinite time limit of our SGDCT estimators, and, finally, we focus on the proofs of the main results of this work. We remark that A denotes the Frobenius norm of a matrix A throughout this section.§.§ Ergodic propertiesWe consider system (<ref>) and the limit equation (<ref>) together with the additional equations given by the filtered data (<ref>), i.e.,X̣^_t= θ f(X^_t)t + G Y^_t/ t,Ỵ^_t= - A/^2 Y^_tt + σ/ W_t,Ẓ^_t= 1/δ(X^_t - Z^_t)t,andX̣_t= θ f(X_t)t + √(2D^S) W_t,Ẓ_t= 1/δ(X_t - Z_t)t,respectively. We first verify that the measures induced by the stochastic processes admit smooth densities with respect to the Lebesgue measure. Since white noise is present only in one component, this is a consequence of the theory of hypoellipticity, as shown in the next lemma.Let μ_t^ and μ_t be the measures at time t induced by the joint processes (X_t^,Y_t^,Z_t^) and (X_t,Z_t) given by equations (<ref>) and (<ref>), respectively. Then, the measures μ_t^ and μ_t admit smooth densities ρ^_t and ρ_t with respect to the Lebesgue measure. Let us first consider the system driven by colored noise. The generator of the joint process (X_t^, Y_t^, Z_t^) isℒ^ = θ f(x) ·∇_x + 1/G y ·∇_x - 1/^2 A y ·∇_y + 1/δ(x-z) ·∇_z + 1/2^2σσ^⊤∇^2_yy𝒳_0 + 1/2^2∑_i=1^n 𝒳_i^2,where𝒳_0= θ f(x) ·∇_x + 1/G y ·∇_x - 1/^2 A y ·∇_y + 1/δ(x-z) ·∇_z,𝒳_i= σ_i ·∇_y,i = 1, …, n,and where σ_i, i = 1, …, n, are the columns of the matrix σ. 
The commutator [𝒳_0, 𝒳_i] is[𝒳_0, 𝒳_i] = - 1/σ_i · G^⊤∇_x + 1/^2σ_i · A^⊤∇_y,and the commutator [𝒳_0, [𝒳_0, 𝒳_i]] is[𝒳_0, [𝒳_0, 𝒳_i]] = 1/σ_i · G^⊤θ∇_x f(x) ∇_x - 1/^3σ_i · A^⊤ G^⊤∇_x + 1/δσ_i · G^⊤∇_z + 1/^4σ_i · (A^⊤)^2 ∇_y.Therefore, for any point (x,y,z) ∈^2d+n, the setℋ = Lie( 𝒳_i, [𝒳_0, 𝒳_i], [𝒳_0, [𝒳_0, 𝒳_i]]; i = 1, …, n )spans the tangent space of ^2d+n at (x,y,z). The result then follows from Hörmander’s theorem (see, e.g., <cit.>). The proof of hypoellipticity for the limit system (X_t,Z_t) is similar and we omit the details. We are now interested in the limiting properties of the measures μ_t^ and μ_t, and, in particular, in the stationary Fokker–Planck equations of the systems of SDEs. The next lemma guarantees that the joint processes (X_t^,Y_t^,Z_t^) and (X_t,Z_t) are ergodic. Under <ref>, the processes (X_t^,Y_t^,Z_t^) and (X_t,Z_t) given by equations (<ref>) and (<ref>), are ergodic with unique invariant measures μ^ and μ, whose densities ρ^ and ρ with respect to the Lebesgue measure solve the stationary Fokker–Planck equations- ∇_x ·( θ f(x) ρ^(x,y,z) ) - 1/∇_x ·( G y ρ^(x,y,z) ) - 1/δ∇_z ·( (x-z) ρ^(x,y,z) )+ 1/^2∇_y ·( A y ρ^(x,y,z) ) + 1/2 ^2σσ^⊤∇_y^2 ρ^(x,y,z)= 0,and- ∇_x ·( θ f(x) ρ(x,z) ) - 1/δ∇_z ·( (x-z) ρ(x,z) ) + D^S ∇_x^2 ρ(x,z) = 0,wheredenotes the Frobenius inner product between two matrices. Moreover, the measure μ^ converges weakly to μ asgoes to zero. Let us first consider the system driven by colored noise. <ref> guarantees that the Fokker–Planck equation can be written directly from the system (<ref>). In order to prove ergodicity, consider the function𝒮(x,y,z) = [ θ f(x) + 1/Gy;-1/^2 Ay;1/δ(x-z) ]·[ x; y; z ] = θ f(x) · x + 1/G y · x - 1/^2 A y · y + 1/δx · z - 1/δz^2.Due to <ref> and by Young's inequality we then have for all γ_1,γ_2>0𝒮(x,y,z) ≤𝔞 - ( 𝔟 - G^2/2γ_1 - 1/2δγ_2) x^2 - 1/^2( 𝔠 - γ_1/2) y^2 - 1/δ( 1 - γ_2/2) z^2.Choosing γ_1 = 𝔠 and γ_2 = 1 we get𝒮(x,y,z)≤𝔞 - ( 𝔟 - G^2/2 𝔠 - 1/2δ) x^2 - 𝔠/2 ^2y^2 - 1/2δz^2 ≤𝔞 - min{𝔟 - G^2/2 𝔠 - 1/2δ, 𝔠/2 ^2, 1/2δ}( x^2 + y^2 + z^2 ),where the coefficient in front of the norm of x is positive due to condition <ref>, and which shows that the dissipativity assumption is satisfied. It remains to prove the irreducibility condition <cit.>. We remark that system (<ref>) fits the framework of the example the end of <cit.>, and therefore, <cit.> is satisfied. Ergodicity then follows from <cit.>. The proof for the limit system (X_t,Z_t) is analogous, and therefore we omit the details. Finally, the weak convergence of the measure μ^ to μ is given by, e.g., <cit.> or <cit.>. We are now ready to state important formulas that link expectations of different quantities with respect to the invariant measure, and which will be employed in the proof of the main theorems.Let d=1, then it holds1/G ^μ^[ Y^⊗ f(X^) ] = - θ^μ^[ f(X^) ⊗ f(X^) ].Let F →^ℓ be a primitive of f defined byF(x) = ∫_0^x f(t)t.Due to <ref>, multiplying equation (<ref>) by 1 ⊗ F(x), integrating over ^2d+n and then by parts, and noting that∫_^2d+n 1 ⊗ F(x) ∇_x ·( θ f(x) ρ^(x,y,z) )= - θ∫_^2d+n f(x) ⊗ f(x) ρ^(x,y,z),1/∫_^2d+n 1 ⊗ F(x) ∇_x ·( G y ρ^(x,y,z) )= - 1/G ∫_^2d+n y ⊗ f(x) ρ^(x,y,z),we get the desired result. The following equalities hold true(i) 1/G ^μ^[ Y^⊗ f(Z^) ] = - θ^μ^[ f(X^) ⊗ f(Z^) ] - 1/δ^μ^[ X^⊗∇ f(Z^) (X^ - Z^) ], (ii) 1/δ^μ[ X ⊗∇ f(Z) (X - Z) ] = - θ^μ[ f(X) ⊗ f(Z) ].Let us first consider point (i). 
Due to <ref>, multiplying equation (<ref>) by x ⊗ f(z), integrating over ^2d+n and then by parts, and noting that∫_^2d+n x ⊗ f(z) ∇_x ·( θ f(x) ρ^(x,y,z) )= - θ∫_^2d+n f(x) ⊗ f(z) ρ^(x,y,z),1/∫_^2d+n x ⊗ f(z) ∇_x ·( G y ρ^(x,y,z) )= - 1/G ∫_^2d+n y ⊗ f(z) ρ^(x,y,z),1/δ∫_^2d+n x ⊗ f(z) ∇_z ·( (x-z) ρ^(x,y,z) )= - 1/δ∫_^2d+n x ⊗∇ f(z) (x - z) ρ^(x,y,z),we get the desired result. Analogously, multiplying equation (<ref>) by x ⊗ f(z), integrating over ^2d and then by parts, we obtain point (ii), which concludes the proof.§.§ Infinite time limit of SGDCT In this section we show that the SGDCT estimator behaves like the MLE estimator in the infinite time limit. The first result is a technical lemma which will be required later. It is based on <cit.> and we restate it in our setting for clarity of exposition. Let ℒ^ and ℒ be the generators of the processes given by the SDEs (<ref>) and (<ref>), respectively. Let h^𝕋^2d+n→^P and h 𝕋^2d→^P be functions in W^k,∞ with k,P ∈, k ≥ 2, such that^μ^[ h(X^, Y^, Z^) ] = 0 and^μ[ h(X, Z) ] = 0.Then, there exist unique solutions ψ^^2d+n→^P and ψ^2d→^P in W^k,∞ of the Poisson problemsℒ^ψ^(x,y,z) = h^(x,y,z), ℒψ(x,z) = h(x,z).The result follows from <cit.> noting that the hypoellipticity condition is guaranteed by <ref>. We recall that we are working under the assumption that the state space is compact, as stated in <ref>. Replacing the torus with the whole space would involve different technicalities, especially for proving that the functions in <ref> and their derivatives are bounded. We believe that the needed results follow from the results presented in <cit.>; but we do not study this extension here. Before moving to the limiting properties, we first need to control the moments of the SGDCT estimators. We show in the next lemma that the moments are bounded uniformly in time.Let the estimators θ_t, θ_t^, θ_exp,t^δ, θ_exp,t^δ, be defined by the SDEs (<ref>), (<ref>), (<ref>), (<ref>). Under <ref>, for all p≥1 there exists a constant C>0 independent of time, such that following bounds hold(i) [ θ_t^p ] ≤ C, (ii) [ θ_t^^p ] ≤ C, (iii) [ θ_exp,t^δ^p ] ≤ C, (iv) [ θ_exp,t^δ,^p ] ≤ C.We only give full details for (iv), and then we outline the differences with respect to (ii). Let us first show (iv). We consider the SDE for the estimator (<ref>) and we rewrite it as%̣ṣθ_exp,t^δ, = - ξ_t θ_exp,t^δ,^μ^[ f(X^) ⊗ f(Z^) ]t + ξ_t θ f(X_t^) ⊗ f(Z_t^)t + 1/ξ_t G Y_t^⊗ f(Z_t^)t - ξ_t θ_exp,t^δ,( f(X_t^) ⊗ f(Z_t^) - ^μ^[ f(X^) ⊗ f(Z^) ] )t,then by Itô's lemma we obtainθ_exp,t^δ, = - ξ_t θ_exp,t^δ,^μ^[ f(X^) ⊗ f(Z^) ] θ_exp,t^δ,/θ_exp,t^δ, t + ξ_t θ f(X_t^) ⊗ f(Z_t^) θ_exp,t^δ,/θ_exp,t^δ, t- ξ_t θ_exp,t^δ,( f(X_t^) ⊗ f(Z_t^) - ^μ^[ f(X^) ⊗ f(Z^) ] ) θ_exp,t^δ,/θ_exp,t^δ, t+ 1/ξ_t G Y_t^⊗ f(Z_t^) θ_exp,t^δ,/θ_exp,t^δ, t,where we recall that · denotes the Frobenius norm of a matrix. 
Let us also consider the process Θ_exp,t^δ, which satisfies the SDE%̣ṣΘ_exp,t^δ, = - ξ_t K Θ_exp,t^δ, t + ξ_t θ f(X_t^) ⊗ f(Z_t^)t + 1/ξ_t G Y_t^⊗ f(Z_t^)t- ξ_t Θ_exp,t^δ,( f(X_t^) ⊗ f(Z_t^) - ^μ^[ f(X^) ⊗ f(Z^) ] )t,where K is given by <ref>, and note that by Itô's lemma we getΘ_exp,t^δ, = - ξ_t K Θ_exp,t^δ, t + ξ_t θ f(X_t^) ⊗ f(Z_t^) Θ_exp,t^δ,/Θ_exp,t^δ, t- ξ_t Θ_exp,t^δ,( f(X_t^) ⊗ f(Z_t^) - ^μ^[ f(X^) ⊗ f(Z^) ] ) Θ_exp,t^δ,/Θ_exp,t^δ, t+ 1/ξ_t G Y_t^⊗ f(Z_t^) Θ_exp,t^δ,/Θ_exp,t^δ, t.Due to <ref> and since the drift and diffusion functions in the previous SDEs are continuous if θ_exp,t^δ,, Θ_exp,t^δ, > R for any R>0, applying the comparison theorem <cit.> we deduce that( θ_exp,t^δ,≤Θ_exp,t^δ,, t ≥ 0 ) = 1.Moreover, the process Θ_exp,t^δ, can be written as the solution of the integral equationΘ_exp,t^δ, = e^-K ∫_0^t ξ_rrΘ_exp,0^δ, + ∫_0^t ξ_s e^-K ∫_s^t ξ_rrθ f(X_s^) ⊗ f(Z_s^)s- ∫_0^t ξ_s e^-K ∫_s^t ξ_rrΘ_exp,s^δ,( f(X_s^) ⊗ f(Z_s^) - ^μ^[ f(X^) ⊗ f(Z^) ] )s+ 1/∫_0^t ξ_s e^-K ∫_s^t ξ_rr G Y_s^⊗ f(Z_s^)s,and notice that, from the definition of the learning rate in equation (<ref>), we havee^-K ∫_s^t ξ_rr = ( b+s/b+t)^aK.Using now <ref> for the functionh^(x,y,z) = f(x) ⊗ f(z) - ^μ^[ f(X^) ⊗ f(Z^) ],we deduce that the solution to the Poisson equation -ℒ^ψ^ = h^(x,y,z), with ψ^ = ψ^(x,y,z), is bounded, together with its derivatives. Next, we apply Itô's formula to the functionϕ^(t,x,y,z,ϑ) = (b+t)^aK-1ϑψ^(x,y,z),to deduce (b+t)^aK-1Θ_exp,t^δ,ψ^(X_t^, Y_t^, Z_t^) - b^aK-1Θ_exp,0^δ,ψ^(X_0^, Y_0^, Z_0^) = (aK-1) ∫_0^t (b+s)^aK-2Θ_exp,s^δ,ψ^(X_s^, Y_s^, Z_s^)s + ∫_0^t (b+s)^aK-1Θ_exp,s^δ, h^(X_s^, Y_s^, Z_s^)s + 1/∫_0^t (b+s)^aK-1Θ_exp,s^δ,∇_y ψ^(X_s^, Y_s^, Z_s^) ·σẈ_s + ∫_0^t (b+s)^aK-1%̣ṣΘ_exp,s^δ,ψ^(X_s^, Y_s^, Z_s^).This, together with (<ref>), impliesΘ_exp,t^δ, = Θ_exp,0^δ,( b/b+t)^aK - a/b+tΘ_exp,t^δ,ψ^(X_t^, Y_t^, Z_t^) + a b^aK-1/(b+t)^aKΘ_exp,0^δ,ψ^(X_0^, Y_0^, Z_0^)+ a ∫_0^t (b+s)^aK-1/(b+t)^aKθ f(X_s^) ⊗ f(Z_s^)s + a/∫_0^t (b+s)^aK-1/(b+t)^aK G Y_s^⊗ f(Z_s^)s+ a(aK-1) ∫_0^t (b+s)^aK-2/(b+t)^aKΘ_exp,s^δ,ψ^(X_s^, Y_s^, Z_s^)s+ a/∫_0^t (b+s)^aK-1/(b+t)^aKΘ_exp,s^δ,∇_y ψ^(X_s^, Y_s^, Z_s^) ·σẈ_s- a^2 ∫_0^t (b+s)^aK-2/(b+t)^aKΘ_exp,s^δ,( KI + f(X_s^) ⊗ f(Z_s^) ) ψ^(X_s^, Y_s^, Z_s^)s+ a^2 ∫_0^t (b+s)^aK-2/(b+t)^aKΘ_exp,s^δ,^μ^[ f(X^) ⊗ f(Z^) ] ψ^(X_s^, Y_s^, Z_s^)s+ a^2 ∫_0^t (b+s)^aK-2/(b+t)^aK( θ f(X_s^) ⊗ f(Z_s^) + 1/G Y_s^⊗ f(Z_s^) ) ψ^(X_s^, Y_s^, Z_s^)s.Due to the boundedness of the function ψ^ and the processes X_t^,Y_t^,Z_t^, inequality <cit.>, and Jensen's inequality we obtain[ Θ_exp,t^δ,^2q]≤ C + C/b+t[ Θ_exp,t^δ,^2q] + C ∫_0^t (b+s)^aK-2/(b+t)^aK[ Θ_exp,s^δ,^2q]s+ C ∫_0^t (b+s)^2aK-2/(b+t)^2aK[ Θ_exp,s^δ,^2q]s,which for t sufficiently large implies(b+t)^aK[ Θ_exp,t^δ,^2q] ≤ C(b+t)^aK + C ∫_0^t 1/(b+s)^2 (b+s)^aK[ Θ_exp,s^δ,^2q]s,and by Grönwall's inequality we get[ Θ_exp,t^δ,^2q] ≤ C e^∫_0^t 1/(b+s)^2 s≤ C e^1/b.We remark that if t is small, i.e., in a compact interval [0,T], then we could directly apply Grönwall's inequality from equation (<ref>). In fact, we would have[ Θ_exp,t^δ,^2q] ≤ C + C ∫_0^t (b+s)^aK-1/(b+t)^aK[ Θ_exp,s^δ,^2q]s,which similarly as above yields to[ Θ_exp,t^δ,^2q] ≤ C e^∫_0^t 1/b+s s≤C/b (T + b).This shows that all the even moments of order p = 2q are bounded. If p is odd, then by Hölder's inequality we have[ Θ_exp,t^δ,^p] ≤( [ Θ_exp,t^δ,^p+1] )^p/p+1,and therefore any odd moment can be bounded by the consecutive even moment. Finally, the desired result follows from (<ref>). Let us now consider (ii). 
The main difference in comparison to (iv) is that we do not have the colored noise process Y_t^; this simplifies the analysis. On the other hand, we have the diffusion term in the SDE for the estimator (<ref>), which we rewrite as%̣ṣθ_exp,t^δ = - ξ_t θ_exp,t^δ^μ[ f(X) ⊗ f(Z) ]t + ξ_t θ f(X_t) ⊗ f(Z_t)t- ξ_t θ_exp,t^δ( f(X_t) ⊗ f(Z_t) - ^μ[ f(X) ⊗ f(Z) ] )t+ ξ_t √(2D^S) W_t ⊗ f(Z_t).Similarly as above we work with the process Θ_exp,t^δ defined by the SDE%̣ṣΘ_exp,t^δ = - ξ_t K Θ_exp,t^δ t + ξ_t θ f(X_t) ⊗ f(Z_t)t- ξ_t Θ_exp,t^δ( f(X_t) ⊗ f(Z_t) - ^μ[ f(X) ⊗ f(Z) ] )t+ ξ_t √(2D^S) W_t ⊗ f(Z_t),which can be written asΘ_exp,t^δ = e^-K ∫_0^t ξ_rrΘ_exp,0^δ + ∫_0^t ξ_s e^-K ∫_s^t ξ_rrθ f(X_s) ⊗ f(Z_s)s- ∫_0^t ξ_s e^-K ∫_s^t ξ_rrΘ_exp,s^δ( f(X_s) ⊗ f(Z_s) - ^μ[ f(X) ⊗ f(Z) ] )s+ ∫_0^t ξ_s e^-K ∫_s^t ξ_rr√(2D^S)Ẉ_s ⊗ f(Z_s),and whose moments give an upper bound to the moments of θ_exp,t^δ due to the comparison theorem <cit.>. Proceeding analogously to (iv) we obtain for a function ψ= ψ(x,z) which is bounded along with its derivativesΘ_exp,t^δ = Θ_exp,0^δ( b/b+t)^aK - a/b+tΘ_exp,t^δψ(X_t, Z_t) + a b^aK-1/(b+t)^aKΘ_exp,0^δψ(X_0, Z_0)+ a ∫_0^t (b+s)^aK-1/(b+t)^aKθ f(X_s) ⊗ f(Z_s)s + a ∫_0^t (b+s)^aK-1/(b+t)^aK√(2D^S)Ẉ_s ⊗ f(Z_s^)+ a(aK-1) ∫_0^t (b+s)^aK-2/(b+t)^aKΘ_exp,s^δψ(X_s, Z_s)s+ a ∫_0^t (b+s)^aK-1/(b+t)^aKΘ_exp,s^δ∇_x ψ(X_s, Z_s) ·√(2D^S)Ẉ_s- a^2 ∫_0^t (b+s)^aK-2/(b+t)^aKΘ_exp,s^δ( KI + f(X_s) ⊗ f(Z_s) - ^μ[ f(X) ⊗ f(Z) ] ) ψ(X_s, Z_s)s+ a^2 ∫_0^t (b+s)^aK-2/(b+t)^aKθ f(X_s^) ⊗ f(Z_s^) ψ(X_s, Z_s)s+ a^2 ∫_0^t (b+s)^aK-2/(b+t)^aK√(2D^S)Ẉ_s ⊗ f(Z_s^) ψ(X_s, Z_s),and the desired result then follows from Grönwall's inequality. Finally, the proofs of (i) and (iii) are almost the same as the proofs of (ii) and (iv), respectively. We omit the details. The next two propositions are crucial for proving the (un)biasedness of the SGDCT estimators for processes driven by colored noise. Since the proofs are similar, we only provide details in the case with filtered data.Let X_t^, Y_t^ be solutions of system (<ref>). Under <ref>(i), it holdslim_t→∞θ_t^ = θ + 1/G ^μ^[ Y^⊗ f(X^) ] ^μ^[ f(X^) ⊗ f(X^) ]^-1, inL^2.Let X_t^, Y_t^, Z_t^ be solutions of system (<ref>). Under <ref>(ii), it holdslim_t→∞θ_exp,t^δ, = θ + 1/G ^μ^[ Y^⊗ f(Z^) ] ^μ^[ f(X^) ⊗ f(Z^) ]^-1, inL^2.Let us consider the SDE for the estimator (<ref>) and rewrite it as( θ_exp,t^δ, - θ - 1/G ^μ^[ Y^⊗ f(Z^) ] ^μ^[ f(X^) ⊗ f(Z^) ]^-1) = - ξ_t ( θ_exp,t^δ, - θ - 1/G ^μ^[ Y^⊗ f(Z^) ] ^μ^[ f(X^) ⊗ f(Z^) ]^-1) ^μ^[ f(X^) ⊗ f(Z^) ]t- ξ_t ( θ_exp,t^δ, - θ) ( f(X_t^) ⊗ f(Z_t^) - ^μ^[ f(X^) ⊗ f(Z^) ] )t+ 1/ξ_t G ( Y_t^⊗ f(Z_t^) - ^μ^[ Y^⊗ f(Z^)] )t.LettingΔ^_t = θ_exp,t^δ, - θ - 1/G ^μ^[ Y^⊗ f(Z^) ] ^μ^[ f(X^) ⊗ f(Z^) ]^-1andΓ^_t = Δ_t^^2,by Itô's lemma we haveΓ̣_t^ = - 2 ξ_t Δ_t^^μ^[ f(X^) ⊗ f(Z^) ] : Δ_t^ t- 2 ξ_t ( θ_exp,t^δ, - θ) ( f(X_t^) ⊗ f(Z_t^) - ^μ^[ f(X^) ⊗ f(Z^) ] ) : Δ_t^ t+ 2/ξ_t G ( Y_t^⊗ f(Z_t^) - ^μ^[ Y^⊗ f(Z^)] ) : Δ_t^ t,wherestands for the Frobenius inner product, and which due to <ref> impliesΓ̣_t^ ≤ - 2 K ξ_t Γ_t^ t- 2 ξ_t ( θ_exp,t^δ, - θ) ( f(X_t^) ⊗ f(Z_t^) - ^μ^[ f(X^) ⊗ f(Z^) ] ) : Δ_t^ t+ 2/ξ_t G ( Y_t^⊗ f(Z_t^) - ^μ^[ Y^⊗ f(Z^)] ) : Δ_t^ t.By the comparison principle we obtainΓ_t^ ≤Δ_0^^2 e^-2K ∫_0^t ξ_rr -2 ∫_0^t ξ_s e^-2K ∫_s^t ξ_rr( θ_exp,s^δ, - θ) ( f(X_s^) ⊗ f(Z_s^) - ^μ^[ f(X^) ⊗ f(Z^) ] ) : Δ_s^ s+ 2/∫_0^t ξ_s e^-2K ∫_s^t ξ_rr G ( Y_s^⊗ f(Z_s^) - ^μ^[ Y^⊗ f(Z^)] ) : Δ_s^ sI_t^1 + I_t^2 + I_t^3,and we now study the three terms in the right-hand side separately. 
First, we use equation (<ref>) to deduce thatlim_t→∞ I_t^1 = lim_t→∞Δ_0^^2 ( b/b+t)^2aK = 0.Then, applying <ref> for the right hand side of the Poisson equation being the functionh_1^(x,y,z) = f(x) ⊗ f(z) - ^μ^[ f(X^) ⊗ f(Z^) ],we deduce that the solution to the Poisson equation ψ_1^ = ψ_1^(x,y,z) is bounded, together with all its derivatives. Letting ζ_t = (b+t)^2aK - 1 and applying Itô's lemma to the functionϕ_1^(t,x,y,z,ϑ) = ζ_t (ϑ - θ) ψ_1^(x,y,z) : ( ϑ - θ - 1/G ^μ^[ Y^⊗ f(Z^) ] ^μ^[ f(X^) ⊗ f(Z^) ]^-1),we obtainζ_t ( θ_exp,t^δ, - θ) ψ_1^(X_t^, Y_t^, Z_t^) Δ_t^ - ζ_0 ( θ_exp,0^δ, - θ) ψ_1^(X_0^, Y_0^, Z_0^) Δ_0^ = ∫_0^t ζ_s' ( θ_exp,s^δ, - θ) ψ_1^(X_s^, Y_s^, Z_s^) Δ_s^ s+ ∫_0^t ζ_t ( θ_exp,s^δ, - θ) ℒ^ψ_1^(X_s^, Y_s^, Z_s^) Δ_s^ s+ 1/∫_0^t ζ_s ( θ_exp,s^δ, - θ) ∇_y ψ_1^(X_s^, Y_s^, Z_s^) Δ_s^·σ W_s+ ∫_0^t ζ_s [ ( θ_exp,s^δ, - θ) ψ_1^(X_s^, Y_s^, Z_s^) + Δ_s^ψ_1^(X_s^, Y_s^, Z_s^)^⊤] θ_exp,s^δ,,which impliesI_t^2=2a/b+t( θ_exp,t^δ, - θ) ψ_1^(X_t^, Y_t^, Z_t^) Δ_t^ - 2ab^2aK-1/(b+t)^2aK( θ_exp,0^δ, - θ) ψ_1^(X_0^, Y_0^, Z_0^) Δ_0^ - 2a(2aK-1)/(b+t)^2aK∫_0^t (b+s)^2aK-2( θ_exp,s^δ, - θ) ψ_1^(X_s^, Y_s^, Z_s^) Δ_s^ s- 2a/(b+t)^2aK∫_0^t (b+s)^2aK-1( θ_exp,s^δ, - θ) ∇_y ψ_1^(X_s^, Y_s^, Z_s^) Δ_s^·σ W_s- 2a^2/(b+t)^2aK∫_0^t (b+s)^2aK-2[ ( θ_exp,s^δ, - θ) ψ_1^(X_s^, Y_s^, Z_s^) + Δ_s^ψ_1^(X_s^, Y_s^, Z_s^)^⊤] [ ( -(θ_exp,s^δ, - θ) f(X_s^) + 1/G Y_s^) ⊗ f(Z_s^) ]s.By the boundedness of the function ψ_1^ and the estimator θ_exp,s^δ, by <ref> and <ref>, respectively, and due to the Itô isometry, we obtain[ I_t^2 ] ≤ C ( 1/b+t + 1/(b+t)^2aK).Repeating a similar argument for I_t^3, but now applying <ref> to the function h_2^(x,y,z) = y ⊗ f(z) - ^μ^[ Y^⊗ f(Z^)],and Itô's lemma to the functionϕ_2^(t,x,y,z,ϑ) = ζ_t ψ_2^(x,y,z) ( ϑ - θ - 1/G ^μ^[ Y^⊗ f(Z^) ] ^μ^[ f(X^) ⊗ f(Z^) ]^-1),where ψ_2^ is the solution of the Poisson problem, we also get that[ I_t^3 ] ≤ C ( 1/b+t + 1/(b+t)^2aK).Finally, decomposition (<ref>) together with equations (<ref>), (<ref>), (<ref>) yield that Δ_t^→ 0 in L^2, which in turn gives desired results.§.§ Proof of the main results We are now ready to prove the theoretical results presented in <ref>. 
Replacing equation (<ref>) in the definition of the estimator (<ref>) we haveθ(X,T) = θ + √(2D^S)( ∫_0^T Ẉ_t ⊗ f(X_t) ) ( ∫_0^T f(X_t) ⊗ f(X_t)t )^-1.Then, by the ergodic theorem for additive functionals of Markov processes and the martingale central limit theorem <cit.>, <cit.> we obtainlim_T→∞1/T∫_0^T f(X_t) ⊗ f(X_t)t = ^μ[ f(X) ⊗ f(X) ],a.s.,andlim_T→∞1/T∫_0^T Ẉ_t ⊗ f(X_t) = 0,a.s.,which imply the desired result.Replacing the first equation from (<ref>) in the definition of the estimator (<ref>) we haveθ(X^,T) = θ + 1/G ( ∫_0^T Y_t^⊗ f(X^_t)t ) ( ∫_0^T f(X^_t) ⊗ f(X^_t)t )^-1,a.s.,and by the ergodic theorem we obtainlim_T→∞θ(X^,T) = θ + 1/G ^μ^[ Y^⊗ f(X^) ] ^μ^[ f(X^) ⊗ f(X^) ]^-1,a.s.Finally, employing <ref> we get the desired result.Replacing equation (<ref>) in the definition of the estimator (<ref>) we haveθ_exp^δ(X,T) = θ + √(2D^S)( ∫_0^T Ẉ_t ⊗ f(Z_t) ) ( ∫_0^T f(X_t) ⊗ f(Z_t)t )^-1.Then, by the ergodic theorem and the martingale central limit theorem <cit.> we obtainlim_T→∞1/T∫_0^T f(X_t) ⊗ f(Z_t)t = ^μ[ f(X) ⊗ f(Z) ],a.s.,andlim_T→∞1/T∫_0^T Ẉ_t ⊗ f(Z_t) = 0,a.s.,which imply the desired result.Replacing the first equation from (<ref>) in the definition of the estimator (<ref>) we haveθ_exp^δ(X^,T) = θ + 1/G ( ∫_0^T Y_t^⊗ f(Z^_t)t ) ( ∫_0^T f(X^_t) ⊗ f(Z^_t)t )^-1,and by the ergodic theorem we obtainlim_T→∞θ_exp^δ(X^,T) = θ + 1/G ^μ^[ Y^⊗ f(Z^) ] ^μ^[ f(X^) ⊗ f(Z^) ]^-1,a.s.Employing formula (i) in <ref> we havelim_T→∞θ_exp^δ(X^,T) = - 1/δ^μ^[ X^⊗∇ f(Z^) (X^ - Z^) ] ^μ^[ f(X^) ⊗ f(Z^) ]^-1,a.s.which due to the weak convergence of the joint process (X_t^,Z_t^) to (X_t,Z_t) giveslim_→0lim_T→∞θ_exp^δ(X^,T) = - 1/δ^μ[ X ⊗∇ f(Z) (X - Z) ] ^μ[ f(X) ⊗ f(Z) ]^-1,a.s.Finally, formula (ii) in <ref> yields the desired result.The proof is analogous to the proof of <ref>, so we omit the details here. Moreover, it can also be seen as a particular case of <cit.>.The desired result is obtained upon combining <ref> and <ref>.Let us consider the SDE for the estimator(<ref>) and rewrite it as( θ^δ_exp,t - θ)= - ξ_t ( θ^δ_exp,t - θ) [ f(X) ⊗ f(Z) ]t- ξ_t ( θ^δ_exp,t - θ) ( f(X_t) ⊗ f(Z_t) - [ f(X) ⊗ f(Z) ] )t+ ξ_t √(2D^S) W_t ⊗ f(Z_t).LettingΔ_t = θ_exp,t^δ - θandΓ_t = Δ_t^2,by Itô's lemma we haveΓ̣_t= - 2 ξ_t Δ_t ^μ[ f(X) ⊗ f(Z) ] : Δ_tt- 2 ξ_t Δ_t ( f(X_t) ⊗ f(Z_t) - ^μ[ f(X) ⊗ f(Z) ] ) : Δ_tt+ ξ_t^2 √(2D^S)^2 f(Z_t)^2t+ 2 ξ_t √(2D^S) W_t ⊗ f(Z_t) Δ_t,which due to <ref> impliesΓ̣_t≤ - 2 Kξ_t Γ_tt- 2 ξ_t Δ_t ( f(X_t) ⊗ f(Z_t) - ^μ[ f(X) ⊗ f(Z) ] ) : Δ_tt+ ξ_t^2 √(2D^S)^2 f(Z_t)^2t+ 2 ξ_t √(2D^S) W_t ⊗ f(Z_t) Δ_t.By the comparison principle we obtainΓ_t≤Δ_0^2 e^-2K ∫_0^t ξ_rr -2 ∫_0^t ξ_s e^-2K ∫_s^t ξ_rrΔ_s ( f(X_s) ⊗ f(Z_s) - ^μ[ f(X) ⊗ f(Z) ] ) : Δ_ss+ √(2D^S)^2 ∫_0^t ξ_s^2 e^-2K ∫_s^t ξ_rrf(Z_s)^2s+ 2 ∫_0^t ξ_s e^-2K ∫_s^t ξ_rr√(2D^S) W_s ⊗ f(Z_s) Δ_sI_t^1 + I_t^2 + I_t^3 + I_t^4,and we now study the four terms in the right-hand side separately. The first two terms I_t^1 and I_t^2 appear also in the proof of <ref> with colored noise, where we show that their expectations vanish as t→∞. 
Regarding I_t^2, the main difference, which does not affect the final result, is in equation (<ref>), where in this case we would haveI_t^2=2a/b+t( θ_exp,t^δ - θ) ψ_1(X_t, Z_t) Δ_t - 2ab^2aK-1/(b+t)^2aK( θ_exp,0^δ - θ) ψ_1(X_0, Z_0) Δ_0- 2a(2aK-1)/(b+t)^2aK∫_0^t (b+s)^2aK-2( θ_exp,s^δ - θ) ψ_1(X_s, Z_s) Δ_ss+ 2a^2/(b+t)^2aK∫_0^t (b+s)^2aK-2[ ( θ_exp,s^δ - θ) ψ_1(X_s, Z_s) + Δ_s ψ_1(X_s, Z_s)^⊤] [ (θ_exp,s^δ - θ) f(X_s) ⊗ f(Z_s) ]s,- 2a^2/(b+t)^2aK∫_0^t (b+s)^2aK-2[ ( θ_exp,s^δ - θ) ψ_1(X_s, Z_s) + Δ_s ψ_1(X_s, Z_s)^⊤] [ √(2D^S) W_s ⊗ f(Z_s) ],- 2a^3 √(D^S)^2/(b+t)^2aK∫_0^t (b + s)^2aK-3( ψ_1(X_s, Z_s) + ψ_1(X_s, Z_s)^⊤)f(Z_s) ⊗ f(Z_s)s.Then, we have[ I_t^3 ] ≤C/(b+t)^2aK∫_0^t (b+s)^2aK-2 s ≤ C ( 1/b+t + 1/(b+t)^2aK),and [I_t^4] = 0 since by the boundedness of the estimator θ_exp,t^δ due to <ref> we get[ (I_t^4)^2 ] = 4 ∫_0^t ξ_s^2 e^-4K ∫_s^t ξ_rr√(2D^S)Δ_s f(Z_s)^2s ≤ C ( 1/b+t + 1/(b+t)^4aK).Therefore, we deduce that Δ_t → 0 in L^2, which in turn gives desired results.The desired result is obtained applying <ref>, and, similarly to the proof of <ref>, <ref>. § LÉVY AREA CORRECTIONIn this section, we demonstrate that the MLE and SGDCT estimators, that were introduced and analyzed in the previous sections, can be extended to SDEs driven by colored multiplicative noise. We are primarily interested in the case where the limiting SDE contains an additional drift term that is due to the Lévy area correction introduced in <ref>; compare the Stratonovich stochastic integral in the Wong–Zakai theorem. Consider, in particular, the setting of <ref>, let d=2 and set the drift function h ^2 →^2 and the diffusion function g ^2 →^2 × 2 to h(x) = - θ x and g(x) = √(κ + βx^2) I,where θ,κ,β are positive constants. Then the system (<ref>) readsX̣^_t= - θ X^_tt + √(κ + βX^_t^2)Y^_t/ t,Ỵ^_t= - A/^2 Y^_tt + √(η)/ W_t,and the limit equation (<ref>) becomesX̣_t = - L X_tt + √(κ_0 + β_0 X_t^2) W_t,where κ_0 = κη/ρ, β_0 = βη/ρ andL = ( θ - β_0/2) I + γβ_0/2 α J.We can easily verify that <ref> is satisfied in this framework because f(x) = -x and- θ x · x = - θx^2 and A y · y = αy^2.Moreover, we assume that the parameters are chosen so that the processes are ergodic with a unique invariant measure, and in particular thatθ > β_0/2,which is necessary from the definition of L in (<ref>). We are interested inferring the parameters in the drift function, i.e., the four components of the matrix L ∈^2×2,of the limit SDE (<ref>) from observations of the process X_^t in (<ref>). Similarly to the case of additive noise, we propose the MLE estimatorL_exp^δ(X^,T) = - ( ∫_0^T X̣_t^⊗ Z_t^) ( ∫_0^T X_t^⊗ Z_t^ t )^-1,and the SGDCT estimator which solves the SDEs%̣ṣL_exp,t^δ, = - ξ_t I_exp,t^δ,⊗ Z_t^, I_exp,t^δ, = X̣_t^ + L_exp,t^δ, X_t^ t,or equivalently%̣ṣL^δ,_exp,t = - ξ_t L_exp,t^δ, X_t^⊗ Z_t^ t - ξ_t X̣_t^⊗ Z_t^.In the next sections we study the asymptotic unbiasedness of the MLE estimator L_exp^δ(X^,T) and the SGDCT estimator L^δ,_exp,t in the limit as time goes to infinity and →0. Since the main ideas in the convergence analysis are similar to the case of additive noise, we only give a sketch of the proofs. 
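Before analyzing these estimators, we note how (<ref>) can be evaluated numerically. The sketch below (Python/NumPy) simulates the multiplicative colored-noise system by Euler–Maruyama, filters the trajectory, and assembles L_exp^δ(X^ε,T). It is an illustration only: the concrete parametrization A = αI + γJ with J the rotation generator, the reduced parameter values, and the Riemann-sum approximation of the integrals are assumptions of the sketch rather than specifications taken from the paper.

import numpy as np

def levy_mle_estimate(eps=0.1, T=400.0, delta=1.0, theta=1.0, alpha=1.0,
                      gamma=1.0, kappa=1.0, beta=1.0, eta=1.0, seed=0):
    # A = alpha*I + gamma*J is an assumed form; the paper's experiments use
    # T = 4000 and eps in {0.05, 0.1}, reduced here so the sketch runs fast.
    J = np.array([[0.0, -1.0], [1.0, 0.0]])
    A = alpha * np.eye(2) + gamma * J
    rng = np.random.default_rng(seed)
    h = eps ** 3
    n = int(T / h)
    X, Y, Z = np.zeros(2), np.zeros(2), np.zeros(2)
    r = np.exp(-h / delta)                   # filter decay per step
    num, den = np.zeros((2, 2)), np.zeros((2, 2))
    for _ in range(n):
        dW = np.sqrt(h) * rng.standard_normal(2)
        g = np.sqrt(kappa + beta * (X @ X))  # multiplicative noise amplitude
        X_new = X + (-theta * X + (g / eps) * Y) * h
        Y = Y - (A @ Y) * (h / eps ** 2) + (np.sqrt(eta) / eps) * dW
        num += np.outer(X_new - X, Z)        # accumulates int dX ⊗ Z
        den += np.outer(X, Z) * h            # accumulates int X ⊗ Z dt
        Z = r * Z + (1.0 - r) * X            # filtered data update
        X = X_new
    return -num @ np.linalg.inv(den)         # L_exp^delta(X^eps, T)

print(levy_mle_estimate())                   # compare with L in (<ref>)

The corresponding SGDCT estimator (<ref>) can be obtained from the same loop by replacing the two accumulators with the Euler update of L_exp,t^{δ,ε} driven by the innovation dX_t^ε + L_exp,t^{δ,ε} X_t^ε dt.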
Then, in the numerical experiments in <ref> we observe that both the estimators are able to accurately infer the unknown matrix L in the limit SDE.§.§ Asymptotic unbiasedness for MLE estimator Similarly to the analysis in <ref>, we first consider the following system of SDEs for the stochastic processes X_t^ and Y_t^ together with the additional equation (<ref>) for the filtered data Z_t^X̣^_t= - θ X^_tt + √(κ + βX^_t^2)Y^_t/ t,Ỵ^_t= - A/^2 Y^_tt + √(η)/ W_t,Ẓ^_t= - 1/δ(Z^_t - X^_t)t,and the corresponding system obtained in the limit as →0X̣_t= - L X_tt + √(κ_0 + β_0 X_t^2) W_t,Ẓ_t= - 1/δ(Z_t - X_t)t.We verify that the system of SDEs are hypoelliptic, and therefore the measures induced by the stochastic processes admit smooth densities with respect to the Lebesgue measure. Let ν_t^ and ν_t be the measures at time t induced by the joint processes (X_t^,Y_t^,Z_t^) and (X_t,Z_t) given by equations (<ref>) and (<ref>), respectively. Then, the measures ν_t^ and ν_t admit smooth densities φ^_t and φ_t with respect to the Lebesgue measure. Let us first consider the system driven by colored noise. The generator of the joint process (X_t^, Y_t^, Z_t^) isℒ^ = - θ x ·∇_x + 1/√(κ + βx^2) y ·∇_x - 1/^2 A y ·∇_y - 1/δ(z-x) ·∇_z + η/2^2Δ_y 𝒳_0 + η/2^2∑_i=1^n 𝒳_i^2,where𝒳_0= - θ x ·∇_x + 1/√(κ + βx^2) y ·∇_x - 1/^2 A y ·∇_y - 1/δ(z-x) ·∇_z𝒳_i= ∂/∂ y_i,i = 1,2.The commutator [𝒳_0, 𝒳_i] is[𝒳_0, 𝒳_i] = - 1/√(κ + βx^2)∂/∂ x_i + 1/^2 a_i ·∇_y,where a_i, i = 1,2, are the columns of the matrix A, and the commutator [𝒳_0, [𝒳_0, 𝒳_i]] is[𝒳_0, [𝒳_0, 𝒳_i]]= - θκ/√(κ + βx^2)∂/∂ x_i + β/^2( x_i y ·∇_x - y · x ∂/∂ x_i) + 1/δ√(κ + βx^2)∂/∂ z_i - 1/^3√(κ + βx^2) a_i ·∇_x + 1/^4 a_i · A^⊤∇_y.Therefore, for any point (x,y,z) ∈^6, the setℋ = Lie( 𝒳_i, [𝒳_0, 𝒳_i], [𝒳_0, [𝒳_0, 𝒳_i]]; i = 1, …, 2 )spans the tangent space of ^6 at (x,y,z). The desired result then follows from Hörmander’s theorem (see, e.g., <cit.>). Finally, the proof for the limit system (X_t,Z_t) is analogous, and therefore we omit the details. We can now write the stationary Fokker–Planck equations for the processes (X_t^,Y_t^,Z_t^) and (X_t,Z_t) given by equations (<ref>) and (<ref>), i.e.,∇_x ·( θ x ρ^(x,y,z) ) - 1/∇_x ·( √(κ + βx^2) y ρ^(x,y,z) ) + 1/δ∇_z ·( (z-x) ρ^(x,y,z) ) + 1/^2∇_y ·( Ayρ^(x,y,z) ) + η/2 ^2Δ_y ρ^(x,y,z)= 0,and∇_x ·( Lx ρ(x,z) ) + 1/δ∇_z ·( (z-x) ρ(x,z) ) + 1/2Δ_x ( (κ_0 + β_0 x^2) ρ(x,z) ) = 0,respectively.Employing these Fokker–Planck equations, we show the following technical result, whose identities are analogous to the ones in <ref>.The following equalities hold true(i) 1/^μ^[ √(κ + βX^^2) Y^⊗ Z^] = θ^μ^[ X^⊗ Z^] + 1/δ^μ^[ X^⊗ (Z^ - X^) ], (ii) 1/δ^μ[ X ⊗ (Z - X) ] = - L ^μ[ X ⊗ Z ].Let us first consider point (i). Multiplying equation (<ref>) by x ⊗ z, integrating over ^6 and then by parts, and noting that∫_^6 x ⊗ z ∇_x ·( θ x ρ^(x,y,z) )= - θ∫_^6 x ⊗ z ρ^(x,y,z),1/∫_^6 x ⊗ z ∇_x ·( √(κ + βx^2) y ρ^(x,y,z) )= - 1/∫_^6√(κ + βx^2) y ⊗ z ρ^(x,y,z),1/δ∫_^6 x ⊗ z ∇_z ·( (z - x) ρ^(x,y,z) )= - 1/δ∫_^6 x ⊗ (z - x) ρ^(x,y,z),we get the desired result. Analogously, multiplying equation (<ref>) by x ⊗ z, integrating over ^4 and then by parts, we obtain point (ii), which concludes the proof. The asymptotic unbiasedness of the estimator L_exp^δ(X^,T) defined in (<ref>) is finally given by the following theorem.Let L_exp^δ(X^,T) be defined in (<ref>). 
Then it holdslim_→0lim_T→∞L_exp^δ(X^,T) = L,a.s.Replacing the first equation from (<ref>) in the definition of the estimator (<ref>) we haveL_exp^δ(X^,T) = θ - 1/( ∫_0^T √(κ + βX^_t^2) Y^_t ⊗ Z_t^ t ) ( ∫_0^T X_t^⊗ Z_t^ t )^-1,and by the ergodic theorem we obtain lim_T→∞L_exp^δ(X^,T) = θ - 1/^μ^[ √(κ + βX^^2) Y^⊗ Z^] ^μ^[ X^⊗ Z^]^-1,a.s.Employing formula (i) in <ref> we havelim_T→∞L_exp^δ(X^,T) = - 1/δ^μ^[ X^⊗ (Z^ - X^) ] ^μ^[ X^⊗ Z^]^-1,a.s.,which due to the weak convergence of the joint process (X_t^,Z_t^) to (X_t,Z_t) giveslim_→0lim_T→∞L_exp^δ(X^,T) = - 1/δ^μ[ X ⊗ (Z - X) ] ^μ[ X ⊗ Z ]^-1,a.s.Finally, formula (ii) in <ref> yields the desired result.§.§ Asymptotic unbiasedness for SGDCT estimator We proceed similarly to the analysis in <ref>. We recall that, for the sake of the proof of convergence for the SGDCT estimator, we wrap all the processes in the d-dimensional torus 𝕋^d, and therefore we work under the additional assumption that the state space is compact, as explained in <ref>. The first step consists in showing that <ref> is satisfied. There exists a constant K>0 such that for all V ∈^2×2V ^μ^[ X^⊗ Z^]V ≥ K V^2.We first prove that the statement holds true for vectors v ∈^2 and for the limit equation, i.e.,v^⊤^μ[ X ⊗ Z ] v ≥ K v^2.The result then follows from <ref>. Multiplying equation (<ref>) by x ⊗ x, integrating over ^4 and then by parts, and noting that∫_^4 x ⊗ x ∇_x ·( L x ρ(x,z) )= - L ∫_^6 x ⊗ x ρ^(x,y,z) - ∫_^6 x ⊗ x ρ^(x,y,z) L^⊤,1/2∫_^4 x ⊗ x Δ_x ( (κ_0 + β_0 x^2) ρ(x,z) )= ∫_^4 (κ_0 + β_0 x^2) ρ(x,z) I,we getL ^μ[ X ⊗ X ] + ^μ[ X ⊗ X ] L^⊤ = ^μ[ κ_0 + β_0 X^2 ] I.By definition of L in (<ref>), the solution of the continuous Lyapunov equation is given by^μ[ X ⊗ X ] = ^μ[ κ_0 + β_0 X^2 ]/2 θ - β_0 I.We consider the equation in <ref>(ii), which implies^μ[ X ⊗ Z ] = (I + δ L)^-1[ X ⊗ X ] = ^μ[ κ_0 + β_0 X^2 ]/2 θ - β_0 (I + δ L)^-1.Noting that^μ[ κ_0 + β_0 X^2 ] ≥κ_0,and that (I + δ L)^-1 = [ ( 1 + δθ - δβ_0/2) I + δγβ_0/2 α J ]^-1= [ ( 1 + δθ - δβ_0/2)^2 + δ^2 γ^2 β_0^2/4 α^2]^-1[ ( 1 + δθ - δβ_0/2) I - δγβ_0/2 α J ],we obtainv^⊤^μ[ X ⊗ Z ] v ≥κ_0 [ ( 1 + δθ - δβ_0/2)^2 + δ^2 γ^2 β_0^2/4 α^2]^-12 + 2δθ - δβ_0/4θ - 2β_0v^2K v^2,where K>0 due to (<ref>), and which concludes the proof. The second ingredient necessary to prove the convergence of the SGDCT estimator L_exp,t^δ, is bounding its moments uniformly in time.Let the estimators L_exp,t^δ, be defined by the SDE (<ref>). For all p≥1 there exists a constant C>0 independent of time such that following bound holds[ L_exp,t^δ,^p ] ≤ C.We only give a sketch of the proof since it follows the same steps of the proof of <ref>. Let us consider the SDE for the estimator and rewrite it as%̣ṣL^δ,_exp,t = - ξ_t L_exp,t^δ,^μ^[ X^⊗ Z^]t + ξ_t θ X_t^⊗ Z_t^ t - 1/ξ_t √(κ + βX_t^^2) Y_t^⊗ Z_t^ t- ξ_t L^δ,_exp,t( X_t^⊗ Z_t^ - ^μ^[ X^⊗ Z^] ).We then introduce the auxiliary process 𝔏^δ,_exp,t defined by the SDE%̣ṣ𝔏^δ,_exp,t = - ξ_t K 𝔏_exp,t^δ, t + ξ_t θ X_t^⊗ Z_t^ t - 1/ξ_t √(κ + βX_t^^2) Y_t^⊗ Z_t^ t- ξ_t 𝔏^δ,_exp,t( X_t^⊗ Z_t^ - ^μ^[ X^⊗ Z^] )t,which can be written as𝔏^δ,_exp,t = e^-K ∫_0^t ξ_rr𝔏^δ,_exp,0 + ∫_0^t ξ_s e^-K ∫_s^t ξ_rrθ X_s^⊗ Z_s^ s- 1/∫_0^t ξ_s e^-K ∫_s^t ξ_rr√(κ + βX_s^^2) Y_s^⊗ Z_s^ s- ∫_0^t ξ_s e^-K ∫_s^t ξ_rrξ_t 𝔏^δ,_exp,s( X_s^⊗ Z_s^ - ^μ^[ X^⊗ Z^] )s,and which, by <ref> and due to the comparison theorem <cit.>, is such that( L_exp,t^δ,≤𝔏_exp,t^δ,, t ≥ 0 ) = 1.We now proceed analogously to the proof of <ref>, we consider the solution ψ^ of the Poisson problem for the generator and we apply Itô's lemma. 
We remark that <ref> holds true also for the generators of the processes (<ref>) and (<ref>), since the hypoelliptic setting is guaranteed by <ref>. After some computation we get𝔏_exp,t^δ, = 𝔏_exp,0^δ,( b/b+t)^aK - a/b+t𝔏_exp,t^δ,ψ^(X_t^, Y_t^, Z_t^) + a b^aK-1/(b+t)^aK𝔏_exp,0^δ,ψ^(X_0^, Y_0^, Z_0^)+ a ∫_0^t (b+s)^aK-1/(b+t)^aKθ X_s^⊗ Z_s^ s - a/∫_0^t (b+s)^aK-1/(b+t)^aK√(κ + βX_s^^2) Y_s^⊗ Z_s^ s+ a(aK-1) ∫_0^t (b+s)^aK-2/(b+t)^aK𝔏_exp,s^δ,ψ^(X_s^, Y_s^, Z_s^)s+ a/∫_0^t (b+s)^aK-1/(b+t)^aK𝔏_exp,s^δ,∇_y ψ^(X_s^, Y_s^, Z_s^) ·σẈ_s- a^2 ∫_0^t (b+s)^aK-2/(b+t)^aK𝔏_exp,s^δ,( KI + X_s^⊗ Z_s^ - ^μ^[ X^⊗ Z^] ) ψ^(X_s^, Y_s^, Z_s^)s+ a^2 ∫_0^t (b+s)^aK-2/(b+t)^aK( θ X_s^⊗ Z_s^ + 1/√(κ + βX_s^^2) Y_s^⊗ Z_s^) ψ^(X_s^, Y_s^, Z_s^)s,which implies[ 𝔏_exp,t^δ,^2q] ≤ C + C ∫_0^t (b+s)^aK-2/(b+t)^aK[ 𝔏_exp,s^δ,^2q]s.Finally, the desired result follows from Grönwall's inequality and equation (<ref>). We now compute the limit of the SGDCT estimator as time tends to infinity. The proof of next lemma is similar to the proof of <ref>.Let X_t^, Y_t^, Z_t^ be solutions of system (<ref>). Then it holdslim_t→∞L_exp,t^δ, = θ - 1/^μ^[ √(κ + βX^^2) Y^⊗ Z^] ^μ^[ X^⊗ Z^]^-1, inL^2.Let us consider the SDE for the estimator (<ref>) and rewrite it as( L_exp,t^δ, - θ + 1/^μ^[ √(κ + βX^^2) Y^⊗ Z^] ^μ^[ X^⊗ Z^]^-1) = - ξ_t ( L_exp,t^δ, - θ + 1/^μ^[ √(κ + βX^^2) Y^⊗ Z^] ^μ^[ X^⊗ Z^]^-1) ^μ^[ X^⊗ Z^]t- ξ_t ( L_exp,t^δ, - θ) ( X_t^⊗ Z_t^ - ^μ^[ X^⊗ Z^] )t- 1/ξ_t ( √(κ + βX_t^^2) Y_t^⊗ Z_t^ - ^μ^[ √(κ + βX^^2) Y^⊗ Z^] )t.LettingΔ_t^ = L_exp,t^δ, - θ + 1/^μ^[ √(κ + βX^^2) Y^⊗ Z^] ^μ^[ X^⊗ Z^]^-1andΓ_t^ = Δ_t^^2,by Itô's lemma we haveΓ̣_t^ = - 2 ξ_t Δ_t^^μ^[ X^⊗ Z^] Δ_t^ t- 2 ξ_t ( L_exp,t^δ, - θ) ( X_t^⊗ Z_t^ - ^μ^[ X^⊗ Z^] ) Δ_t^ t- 2/ξ_t ( √(κ + βX_t^^2) Y_t^⊗ Z_t^ - ^μ^[ √(κ + βX^^2) Y^⊗ Z^] ) Δ_t^ t,which due to <ref> impliesΓ̣_t^ ≤ - 2 K ξ_t Γ_t^ t- 2 ξ_t ( L_exp,t^δ, - θ) ( X_t^⊗ Z_t^ - ^μ^[ X^⊗ Z^] ) Δ_t^ t- 2/ξ_t ( √(κ + βX_t^^2) Y_t^⊗ Z_t^ - ^μ^[ √(κ + βX^^2) Y^⊗ Z^] ) Δ_t^ t.Then, by the comparison principle we obtainΓ_t^ ≤Δ_0^^2 e^-2K ∫_0^t ξ_rr -2 ∫_0^t ξ_s e^-2K ∫_s^t ξ_rr( θ_exp,s^δ, - θ) ( f(X_s^) ⊗ f(Z_s^) - ^μ^[ f(X^) ⊗ f(Z^) ] ) : Δ_s^ s- 2/∫_0^t ξ_s e^-2K ∫_s^t ξ_rr( Y_s^⊗ f(Z_s^) - ^μ^[ Y^⊗ f(Z^)] ) : Δ_s^ sI_t^1 + I_t^2 + I_t^3.The last steps needed to bound these quantities and to obtain the desired result are similar to what is done at the end of the proof of <ref>, and we omit the details. We only remark that <ref> holds true also for the generators of the processes (<ref>) and (<ref>), since the hypoellipticity of these generators is guaranteed by <ref>, and that the moments of the estimator L_exp,t^δ, are bounded uniformly in time due to <ref>. We are now ready to show the asymptotic unbiasedness of the estimator L^δ,_exp,t.Let L_exp,t^δ, be defined in (<ref>). Then it holdslim_→0lim_t→∞L_exp,t^δ, = L, inL^2.The desired result is obtained applying <ref> and, following the proof of <ref>, due to <ref>.Notice that in <ref>, as well as in the previous results in <ref>, the order of the limits is important and they cannot commute. In fact, the convergence of the stochastic processes with respect to the parameteris in law. Hence, as it is shown in the proofs, we first need to reach the expectations with respect to the invariant measures through the ergodig theorem, and therefore the infinite limit in time, and then let the correlation time vanish. § NUMERICAL EXPERIMENTSIn this section, we present a series of numerical experiments which confirm our theoretical results. 
Synthetic data are generated employing the Euler–Maruyama method with a fine time step h = ε^3, where we set the scale parameter to ε = 0.05 and ε = 0.1, respectively. All the experiments are repeated M = 100 times. The red/yellow lines in the displayed plots represent the average estimated values and the blue/green shades correspond to the standard deviations, while the dashed black lines are the exact values. The results are shown as functions of time. We finally remark that filtered data are generated by setting the filtering width δ = 1, and the learning rate for the SGDCT estimator is chosen to be ξ_t = a/(b+t), where a and b will be specified in the next sections. Moreover, all the initial conditions for both the stochastic processes and the SGDCT estimators are set to zero, and the final time of integration is chosen such that the mean values of the estimators reach convergence, i.e., they stabilize and do not oscillate.

§.§ Additive noise

Consider the system of SDEs (<ref>) with additive colored noise and its limiting equation (<ref>) with white noise. We first verify that the estimators θ(X^ε,T) and θ_t^ε, which are based only on the original data X_t^ε driven by colored noise, are not able to correctly estimate the unknown drift coefficient θ, as predicted by <ref>. We focus on the one-dimensional case, i.e., we let d=ℓ=n=m=1, we set the parameters θ = G = A = σ = 1, and we fix the final time T = 1000 and ε = 0.1. Moreover, the parameters in the learning rate ξ_t for the SGDCT estimator are chosen to be a=1 and b=0.1. In <ref>, we display the approximations provided by the estimators θ(X^ε,T) and θ_t^ε as a function of time. We notice that the estimated values are close to zero independently of time. In particular, we observe that the smaller value of the scale parameter ε yields estimates closer to zero. This shows that the drift coefficient θ cannot be inferred from data which originate from the SDE driven by colored noise, and, consequently, confirms that the original data must be preprocessed.

Let us now focus on the MLE and SGDCT estimators θ(X,T), θ_exp^δ(X,T), θ_exp^δ(X^ε,T) and θ_t, θ_exp,t^δ, θ_exp,t^δ,ε, for which we rigorously showed their asymptotic unbiasedness in <ref>. We consider the two-dimensional setting in <ref>, and we set α = γ = η = 1, and the drift function to be h(x) = -x. Moreover, we choose the diffusion function g(x) = G = I, where I denotes the identity matrix, so that the noise is additive. The remaining parameters are the coefficients a=100 and b=0.1 in the learning rate ξ_t of the SGDCT estimators, the final time T = 2000, and the unknown matrix θ ∈ ℝ^{2×2}, which is given by

θ = [ 2 1; 1 2 ].

We verify in <ref> that all the estimators are able to correctly infer the true drift coefficient θ. In particular, we observe that the estimated values tend to stabilize around the correct parameters when time increases, and that the bias and the standard deviation decrease. We remark that, even if the variance of the SGDCT estimators is larger than the variance of the MLE estimators when only a few data are collected, the variance of the two approaches is comparable if the final time is sufficiently large. Finally, we also notice that decreasing the value of ε, and therefore considering colored noise which is closer to white noise, provides better approximations both in terms of bias and especially variance of the estimators at finite time.
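For concreteness, the one-dimensional experiment just described can be reproduced with the following self-contained Python/NumPy sketch. The choice f(x) = -x for the drift basis and the Riemann-sum discretization of the stochastic integrals are our assumptions, while the parameter values (θ = G = A = σ = 1, T = 1000, ε = 0.1, δ = 1, a = 1, b = 0.1) follow the text above.

import numpy as np

def additive_1d_experiment(eps=0.1, T=1000.0, delta=1.0, theta=1.0,
                           G=1.0, A=1.0, sigma=1.0, a=1.0, b=0.1, seed=0):
    # f(x) = -x is our choice of (ergodic) drift basis for this sketch.
    f = lambda x: -x
    rng = np.random.default_rng(seed)
    h = eps ** 3                          # fine Euler-Maruyama step h = eps^3
    n = int(T / h)
    X, Y = np.zeros(n + 1), np.zeros(n + 1)
    for k in range(n):                    # colored-noise system (X^eps, Y^eps)
        dW = np.sqrt(h) * rng.standard_normal()
        X[k + 1] = X[k] + theta * f(X[k]) * h + (G / eps) * Y[k] * h
        Y[k + 1] = Y[k] - (A / eps ** 2) * Y[k] * h + (sigma / eps) * dW
    r = np.exp(-h / delta)                # filtered data, as in the aside above
    Z = np.zeros(n + 1)
    for k in range(n):
        Z[k + 1] = r * Z[k] + (1.0 - r) * X[k]
    dX = np.diff(X)
    fX, fZ = f(X[:-1]), f(Z[:-1])
    theta_naive = np.sum(dX * fX) / (h * np.sum(fX * fX))  # theta(X^eps, T)
    theta_filt = np.sum(dX * fZ) / (h * np.sum(fX * fZ))   # theta_exp^delta
    th = 0.0                              # SGDCT estimator with filtered data
    for k in range(n):
        xi = a / (b + k * h)              # learning rate xi_t = a/(b + t)
        th += xi * (dX[k] - th * fX[k] * h) * fZ[k]
    return theta_naive, theta_filt, th

print(additive_1d_experiment())

Consistently with <ref>, the naive estimator returns a value near zero, while both filtered-data estimators return values close to the true θ = 1.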
§.§ Lévy area correction

We consider here the framework of <ref> and study the case of multiplicative noise that yields a Lévy area correction in the limit equation. We set the final time to T = 4000, the coefficients in the equations to θ = α = γ = κ = β = 1, and the parameters a=10 and b=0.1 in the learning rate ξ_t for the SGDCT estimator. The approximations provided by the estimators L^δ_exp(X^ε,T) and L^δ,ε_exp,t are shown in <ref>. The results are in line with our findings from the previous test cases. We only remark that, given the more complex stochastic models and the consequently more challenging inference problem, the final estimates are slightly worse. Nevertheless, if the final time is sufficiently large and the parameter ε is sufficiently small, both the MLE and the SGDCT estimator are able to provide a reliable approximation of the unknown drift coefficient L. Finally, we notice that in this example the MLE seems to be more robust than the SGDCT estimator, which also depends on the learning rate. On the other hand, we believe that the SGDCT estimator with filtered data can be successfully employed in more general settings, when the drift function does not depend linearly on the parameter and therefore the MLE does not have a closed-form expression.

§ CONCLUSION

In this work, we considered SDEs driven by colored noise, modelled as a Gaussian stationary process with exponential autocorrelation function, i.e., a stationary Ornstein–Uhlenbeck process. In the limit as the correlation time of the noise goes to zero, the solution of the SDE converges to the solution of an SDE driven by white noise. We studied the problem of inferring unknown drift coefficients in the limit equation given continuous trajectories from the dynamics with colored noise, employing both MLE and SGDCT estimators. This is similar to the problem of inferring parameters in a coarse-grained SDE given observations of the slow variable in the full system, a problem that has been extensively studied in previous works by our group <cit.>. We first focused on the case of additive noise and noticed that, without preprocessing the data, it is not possible to retrieve the exact drift coefficient, due to the incompatibility between the original observations and the limit equation, as first observed in <cit.> for multiscale diffusions. We overcame this issue by introducing filtered data, as in <cit.>, obtained by convolving the original trajectory with an exponential kernel, in the definition of the estimators. We proved theoretically and demonstrated through numerical experiments that the estimators developed in this paper are asymptotically unbiased, i.e., that they converge to the exact parameter values in the joint limit as the observation time tends to infinity and the correlation time of the noise goes to zero. Moreover, we applied our estimators to SDEs driven by multiplicative colored noise, for which an additional drift term, due to the Lévy area correction, appears in the limiting SDE. We showed that even in this case our methodology allows us to effectively infer the drift coefficients. We consider this to be an interesting result, since it is not clear at all that an MLE or SGDCT-based inference methodology can identify and learn the Lévy area correction. The results presented in this paper can be improved and extended in many different directions. First, the theoretical analysis for the SGDCT estimator is restricted to stochastic processes on a compact state space, i.e., the multidimensional torus.
However, as suggested by the numerical examples, we believe that this restriction is not necessary, and that similar convergence results can be proved for colored noise-driven SDEs in unbounded domains. Since the focus of this paper was the development of the new inference methodologies, we chose to work on the torus in order to avoid technical issues related to the study of hypoelliptic PDEs in unbounded domains. Second, in the study of the identifiability of the Lévy area correction we considered the particular example where both drift functions in the colored multiplicative SDE are linear in the unknown coefficients. In this case simple analytical formulas for the Lévy area correction exist <cit.>, and it is straightforward to compare theory with the results of numerical simulations. We believe, however, that the filtered data methodology can be employed in a much more general setting. In particular, it would be interesting to infer drift functions in colored SDEs with multiplicative noise that depend nonlinearly on the parameters and, possibly, also in a nonparametric form. Furthermore, we would also like to obtain convergence rates and central limit theorems, and therefore asymptotic normality, both for the MLE and for the SGDCT estimators. Lastly, in the present paper we considered the case of continuous observations, uncorrupted by noise. It is important to extend our methodology so that it applies to the realistic case of discrete-in-time, noisy observations, both in the low and high frequency regimes. Naturally, the ultimate goal of this project is to apply our inference methodologies to real data. We plan to return to all these issues in future work.

§.§ Acknowledgements

GP is partially supported by an ERC Frontier Research Advanced Investigator Grant, Machine-aided general framework for fluctuating dynamic density functional theory. The work of SR has been partially funded by Deutsche Forschungsgemeinschaft (DFG) through the grant CRC 1114 Scaling Cascades in Complex Systems (project number 235221301). The work of AZ was partially supported by the Swiss National Science Foundation, under grant No. 200020_172710.
http://arxiv.org/abs/2312.15975v1
{ "authors": [ "Grigorios A. Pavliotis", "Sebastian Reich", "Andrea Zanoni" ], "categories": [ "math.NA", "cs.NA" ], "primary_category": "math.NA", "published": "20231226100210", "title": "Filtered data based estimators for stochastic processes driven by colored noise" }
Learning from small data sets: Patch-based regularizers in inverse problems for image reconstruction Moritz Piening^1,Fabian Altekrüger^2, Johannes Hertrich^2, Paul Hagemann^1, Andrea Walther^2, Gabriele Steidl^1 January 14, 2024 ===================================================================================================================== Differing from traditional semi-supervised learning, class-imbalanced semi-supervised learning presents two distinct challenges: (1) The imbalanced distribution of training samples leads to model bias towards certain classes, and (2) the distribution of unlabeled samples is unknown and potentially distinct from that of labeled samples, which further contributes to class bias in the pseudo-labels during training. To address these dual challenges, we introduce a novel approach called Twice Class Bias Correction (TCBC). We begin by utilizing an estimate of the class distribution from the participating training samples to correct the model, enabling it to learn the posterior probabilities of samples under a class-balanced prior. This correction serves to alleviate the inherent class bias of the model. Building upon this foundation, we further estimate the class bias of the current model parameters during the training process. We apply a secondary correction to the model's pseudo-labels for unlabeled samples, aiming to make the assignment of pseudo-labels across different classes of unlabeled samples as equitable as possible. Through extensive experimentation on CIFAR10/100-LT, STL10-LT, and the sizable long-tailed dataset SUN397, we provide conclusive evidence that our proposed TCBC method reliably enhances the performance of class-imbalanced semi-supervised learning.§ INTRODUCTIONSemi-supervised learning (SSL) <cit.> has shown promise in using unlabeled data to reduce the cost of creating labeled data and improve model performance on a large scale. In SSL, many algorithms generate pseudo-labels <cit.>for unlabeled data based on model predictions, which are then utilized to regularize model training. However, most of these methods assume that the data is balanced across classes. In reality, many real-world datasets exhibit imbalanced distributions <cit.>, with some classes being much more prevalent than others. This imbalance affects both the labeled and unlabeled samples, resulting in biased pseudo-labels that further worsen the class imbalance during training and ultimately hinder model performance. Recent research <cit.> has highlighted the significant impact of class imbalance on the effectiveness of pseudo-labeling methods. Therefore, it is crucial to develop SSL algorithms that can effectively handle class imbalance in both labeled and unlabeled data, leading to improved performance in real-world scenarios. Unlike traditional SSL techniques that assume identical distributions of labeled and unlabeled data, this paper addresses a more generalized scenario of imbalanced SSL. Specifically, we consider situations where the distribution of unlabeled samples is unknown and may diverge from the distribution of labeled samples <cit.>. In this context, two challenges need to be addressed: (1) How to mitigate the model's class bias induced by training on imbalanced data, and (2) How to leverage the model's predictions on unlabeled samples during the training process to obtain improved pseudo-labels. 
To elucidate these challenges, we devised an experiment, as depicted in Figure <ref>, where labeled samples follow a long-tailed distribution and unlabeled samples follow a uniform distribution. Figure <ref> illustrates the recall performance on the test set of the FixMatch model trained under this scenario. Despite the presence of numerous minority-class samples in the unlabeled data, due to the class imbalance in the labeled samples, the resulting model still exhibits significant class bias. Figure <ref> represents the pseudo-label class distribution on unlabeled data obtained by FixMatch, highlighting a notable class bias in FixMatch's pseudo-labels, which can adversely affect model training. To address these two challenges within imbalanced SSL, we introduce a novel approach termed "Twice Class Bias Correction" (TCBC). The primary challenge of the first issue lies in the potential inconsistency between the class distributions of labeled and unlabeled samples. During training, the class distribution of training samples may undergo substantial fluctuations, rendering it infeasible to rely on the assumption of a consistent class distribution to reduce model bias. To address this challenge, we dynamically estimate the class distribution of the participating training samples. Leveraging the assumption of consistent class-conditional probabilities, we guide the model to learn a reduced class bias objective on both labeled and pseudo-labeled samples, specifically targeting the posterior probabilities of samples under a class-balanced prior. As depicted in Figure <ref>, our approach significantly diminishes the model's bias across different classes. The complexity of the second challenge lies in the presence of class bias in the model during training and the unknown distribution of unlabeled samples. This results in uncontrollable pseudo-labels acquired by the model on unlabeled data. A balanced compromise solution is to ensure that the model acquires pseudo-labels as equitably as possible across different classes. Hence, we introduce a method based on the model's outputs on samples to estimate the model's class bias under the current parameters. We leverage this bias to refine predictions on unlabeled samples, thereby reducing class bias in pseudo-labels. As illustrated in Figure <ref>, our approach achieves a less biased pseudo-label distribution on class-balanced unlabeled samples. Our primary contributions are as follows: (1) We present a novel technique that harnesses the class distribution of training samples to rectify the biases introduced by class imbalance in the model's learning objectives. (2) We introduce a method to evaluate the model's class bias under the current model parameters during the training process and utilize it to refine pseudo-labels. (3) Our approach is straightforward yet effective, as demonstrated by extensive experiments in various imbalanced SSL settings, highlighting the superiority of our method. Code and appendix are publicly available at https://github.com/Lain810/TCBC.

§ RELATED WORK

Class-imbalanced learning attempts to learn models that generalize well on each class from imbalanced data. Resampling and reweighting are two commonly used methods. Resampling methods balance the number of training samples for each class in the training set by undersampling <cit.> the majority classes or oversampling <cit.> the minority classes.
Reweighting <cit.> methods assign different losses to different training samples of each class or to each example. In addition, some works have used logit compensation <cit.> based on the class distribution, or transfer learning <cit.>, to address this problem.

Semi-supervised learning tries to improve the model's performance by leveraging unlabeled data <cit.>. A common approach in SSL is to utilize model predictions to generate pseudo-labels for each unlabeled sample and use these pseudo-labels for supervised training. Recent SSL algorithms, exemplified by FixMatch <cit.>, achieve enhanced performance by encouraging consistent predictions between two different views of an image, employing consistency regularization. While these methods have seen success, most of them are based on the assumption that labeled and unlabeled data follow a uniform label distribution. When applied to class-imbalanced scenarios, the performance of these methods can significantly deteriorate due to both model bias and pseudo-label bias.

Class-imbalanced semi-supervised learning has garnered widespread attention due to its alignment with real-world tasks. DARP <cit.> refines initial pseudo-labels through convex optimization, aiming to alleviate the distribution bias resulting from imbalanced and unlabeled training data. In contrast, CREST <cit.> employs a combination of re-balancing and distribution alignment techniques to mitigate training bias. ABC <cit.> introduces an auxiliary balanced classifier, trained by down-sampling the majority classes, to enhance generalization. However, many existing methods assume similarity between the marginal class distributions of labeled and unlabeled data, an assumption that often does not hold or remains unknown before training. To address this limitation, DASO <cit.> combines pseudo-labels from linear and similarity-based classifiers, leveraging their complementary properties to combat bias. L2AC <cit.> introduces a bias-adaptive classifier to tackle training bias in imbalanced semi-supervised learning tasks.

§ METHODOLOGY

§.§ Preliminaries

Assume there exist a labeled set 𝒟_l = {(x_n, y_n)}_{n=1}^N and an unlabeled set 𝒟_u = {u_m}_{m=1}^M, where x_n, u_m ∈ 𝒳 are training samples in the input space and y_n ∈ 𝒴 is the label assigned to a labeled sample, with 𝒴 denoting the label space. The class distributions of labeled and unlabeled data are denoted by p_l(y) and p_u(y), respectively. Moreover, we denote by N_k and M_k the number of labeled and unlabeled samples in class k, respectively. Without loss of generality, we assume that the classes are arranged in descending order of the number of training samples, such that N_1 ≥ N_2 ≥ … ≥ N_K. The goal of imbalanced SSL is to learn a model f, parameterized by θ, that generalizes well on each class from imbalanced data.

In SSL, one effective method involves utilizing pseudo-labeling techniques to enhance the training dataset with pseudo-labels for unlabeled data. In pseudo-labeling SSL, each unlabeled sample is provided with a pseudo-label based on the model's prediction. An optimization problem with objective function ℒ = ℒ_s + λℒ_u is used to train the model on both labeled and pseudo-labeled samples. The loss consists of two terms: the supervised loss ℒ_s computed on labeled data and the unsupervised loss ℒ_u computed on unlabeled data. The parameter λ balances the loss from labeled data and pseudo-labeled data.
The supervised loss on labeled samples is given by

ℒ_s = (1/N) ∑_{x_i∈ B_l} ℋ(y_i, p(y | x_i)),

where B_l denotes a batch of labeled data sampled from 𝒟_l, p(y | x_i) = softmax(f(x_i)) represents the output probability, and ℋ is the cross-entropy loss. Similarly, the loss on unlabeled samples can be formulated as

ℒ_u = (1/M) ∑_{u_j∈ B_u} ℳ · ℋ(ŷ_j, p(y | 𝒜(u_j))),

where ℳ = 𝐈[max(p̂(y | α(u_j))) ≥ τ_c], 𝐈 is the indicator function, and τ_c is the confidence threshold. Here ŷ_j = arg max_y p̂(y | α(u_j)) denotes the pseudo-label assigned by the model to the weakly augmented view α(u_j), and 𝒜(u_j) denotes the strongly augmented view of the unlabeled sample u_j.

§.§ Model bias correction

In imbalanced SSL, the first challenge to address is how to learn a model that is devoid of class bias. Generally, achieving an evaluation model without class bias involves approximating a Bayesian optimal model under a class-balanced distribution. This entails minimizing the loss on data whose marginal class distribution is uniform, p_ev(y) = 1/K. The corresponding posterior probability of the samples is denoted as p_ev(y | x). When directly optimizing a surrogate loss, such as the softmax cross-entropy loss, on training data with an imbalanced class distribution p_tr(y), the learned posterior probability of the samples becomes p_tr(y | x), which differs from p_ev(y | x) and tends to favor the classes with more samples. Therefore, resolving this challenge involves addressing the mismatch between p_tr(y | x) and p_ev(y | x). When considering imbalanced SSL, the situation becomes more complex. The training data consist of labeled data (with marginal distribution p_l(y)) and unlabeled data (with marginal distribution p_u(y)). Here, p_l(y) is a known long-tailed distribution, while p_u(y) is an unknown distribution that may differ from p_l(y). We assume that the training samples, both labeled and unlabeled, are drawn from the same class-conditional distribution:

p_l(x | y) = p_u(x | y) = p(x | y), ∀ x ∈ 𝒳, y ∈ 𝒴.

By applying Bayes' theorem, we establish that p(y | x) ∝ p(x | y) p(y). If we consider the labeled and unlabeled losses separately, then because p_l(y) ≠ p_u(y), the posterior probabilities learned from these two parts are different. Specifically, suppose a sample x_i exists in both the labeled and unlabeled data. The posterior probability p_tr(y | x_i) learned by the model represents a weighted average of p_l(y | x_i) and p_u(y | x_i). We denote the marginal probability distribution corresponding to p_tr(y | x) as p_tr(y). According to Eq. (<ref>), and considering p_ev(y) = 1/K, we have:

p_tr(y | x) ∝ p(x | y) · p_tr(y) ∝ (p_ev(y | x) / p_ev(y)) · p_tr(y) ∝ p_ev(y | x) · p_tr(y).

We aim to learn p_ev(y | x), which can be represented as p_ev(y | x) = softmax(f(x)), but what we actually learn through the loss ℒ is p_tr(y | x). According to Equation (<ref>), we have:

p_tr(y | x) ∝ p_ev(y | x) · p_tr(y) = softmax(f(x) + ln p_tr(y)).

Therefore, the p(y | x_i) in ℒ_s and the p(y | 𝒜(u_j)) in ℒ_u should be:

p_tr(y | x_i) = softmax(f(x_i) + ln p_tr(y)), p_tr(y | 𝒜(u_j)) = softmax(f(𝒜(u_j)) + ln p_tr(y)).

This loss can be considered an extension of the logit adjustment loss <cit.> or balanced softmax <cit.> to imbalanced SSL. As the class distribution of unlabeled data is unknown and varies during training, we need to estimate p_tr(y). We use the class distribution of the samples participating in training over the most recent T iterations as an estimate of p_tr(y). Additionally, since the loss terms in ℒ_u carry the mask weights ℳ, we treat it as a form of undersampling.
Consequently, the number of samples for class y in a batch is given by:

count(y) = ∑_{x_i ∈ B_l} 𝐈(y_i = y) + ∑_{u_j ∈ B_u} λ ℳ · 𝐈(ŷ_j = y).

In the experiments, we set T to 50 · K. To investigate the correlation between p_tr(y) and the true distribution (the class distribution of 𝒟_l ∪ 𝒟_u), we monitored the L2 distance between p_tr(y) and the true distribution during training. Figure <ref> shows the results for both cases: when the distributions of labeled and unlabeled samples are consistent and when they are inconsistent. It can be observed that when the distributions are consistent, p_tr(y) maintains a small distance from the true distribution; when the distributions are inconsistent, p_tr(y) continuously approaches the true distribution.

§.§ Pseudo-label refinement

In the previous section, we utilized an estimate of the marginal distribution p_tr(y) of the training data to learn a class-balanced model. However, is it optimal to use this model to generate pseudo-labeled data for training during the training process? We conducted a parametric decomposition of Equations (<ref>) and (<ref>), further taking into consideration the parameters θ_t of the model at the t-th iteration:

p_bal(y | x; θ_t) ∝ p(x | y; θ_t) ∝ p(x | y; θ_t) · p_tr(y | θ_t) / p_tr(y | θ_t) ∝ p_tr(y | x; θ_t) / p_tr(y | θ_t) ∝ softmax(f(x) + ln p_tr(y) - ln p_tr(y | θ_t)),

where p_bal(y | x; θ_t) represents the posterior probability under the uniform class prior for the given model parameters θ_t. It takes the model parameters into account more explicitly than p_ev(y | x). To compute p_bal(y | x; θ_t), a crucial step is to estimate the class prior p_tr(y | θ_t) under the current parameters. Referring to <cit.>, we can estimate p_tr(y | θ_t) through the model's outputs on the training samples:

p_tr(y | θ_t) = 𝐄_{x_i ∈ 𝒟_tr} p_tr(y | x_i; θ_t) = 𝐄_{x_i ∈ 𝒟_tr} softmax(f(x_i) + ln p_tr(y)),

where 𝒟_tr represents the dataset composed of the samples participating in training. Let d_y(θ_t) = ln p_tr(y) - ln p_tr(y | θ_t); then we have:

d_y(θ_t) = ln p_tr(y) - ln p_tr(y | θ_t) = ln( p_tr(y) / 𝐄_{x_i ∈ 𝒟_tr} softmax(f(x_i) + ln p_tr(y)) ) = -ln 𝐄_{x_i ∈ 𝒟_tr}( e^{f^y(x_i)} / ∑_{k=1}^{K} e^{f^k(x_i) + ln p_tr(k)} ).

Utilizing d_y(θ_t) to modify f(x) allows us to acquire pseudo-labels under the current parameters that mitigate class bias. However, an accurate estimation of d_y(θ_t) necessitates considering the entire dataset. To ensure stable and efficient estimation of d_y(θ_t), we devised a momentum mechanism that leverages the expectations computed over each iteration:

d_y(θ_{t+1}) = m · d_y(θ_t) + (1-m) · d'_y(θ_t),

where m ∈ [0, 1) is a momentum coefficient and d'_y(θ_t) is computed from the data of the t-th iteration. Consequently, the refined pseudo-label can be expressed as:

ŷ^re_j = arg max_y softmax(f(α(u_j)) + d_y(θ_{t+1})).

Fixing p_tr(y) at 1/K degenerates the algorithm into a process that exclusively incorporates pseudo-label refinement into FixMatch. We conducted a comparative analysis between the approach that exclusively employs pseudo-label refinement and the original FixMatch, aiming to explore the characteristics of pseudo-label refinement. The trials were carried out with uniformly distributed unlabeled samples, aligning with the conditions depicted in Figure <ref>. Figure <ref> visualizes the L2 distance between the distribution of pseudo-labels and the uniform distribution. Clearly, as training advances, the refined pseudo-label distribution gradually converges toward the uniform distribution. Our proposed approach successfully mitigates the class bias present in pseudo-labels.
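To make the two corrections concrete, the following PyTorch-style sketch (our illustration; the class and method names are assumptions, and the sliding window over the most recent T iterations is simplified to a cumulative count) maintains the estimate of p_tr(y), the momentum estimate of d_y(θ_t), the logit-adjusted posteriors used in ℒ_s and ℒ_u, and the refined pseudo-labels ŷ^re_j.

```python
import math
import torch
import torch.nn.functional as F

class TwiceBiasCorrector:
    """Sketch of TCBC's two corrections; logits have shape [B, K]."""

    def __init__(self, num_classes, momentum=0.999):
        self.K = num_classes
        self.m = momentum
        # Class counts of samples that participated in training -> p_tr(y).
        # The paper uses only the most recent T = 50*K iterations; a
        # cumulative count (initialized to ones to avoid log(0)) is kept
        # here for brevity.
        self.counts = torch.ones(num_classes)
        self.d = torch.zeros(num_classes)  # momentum estimate of d_y(theta_t)

    def p_tr(self):
        return self.counts / self.counts.sum()

    def update_counts(self, labeled_y, pseudo_y, mask, lam):
        # count(y) = sum_i 1[y_i = y] + lam * sum_j M_j * 1[yhat_j = y]
        self.counts += torch.bincount(labeled_y, minlength=self.K).float()
        self.counts += lam * torch.bincount(pseudo_y, weights=mask,
                                            minlength=self.K)

    def adjusted_log_probs(self, logits):
        # p_tr(y|x) = softmax(f(x) + ln p_tr(y)), used in both L_s and L_u.
        return F.log_softmax(logits + self.p_tr().log(), dim=-1)

    def update_bias(self, logits):
        # d'_y = -ln E_x[ exp(f^y(x)) / sum_k exp(f^k(x) + ln p_tr(k)) ],
        # estimated on the current batch, then folded in with momentum.
        logits = logits.detach()
        denom = torch.logsumexp(logits + self.p_tr().log(), dim=-1,
                                keepdim=True)
        d_now = math.log(logits.size(0)) - torch.logsumexp(logits - denom,
                                                           dim=0)
        self.d = self.m * self.d + (1.0 - self.m) * d_now

    def refine_pseudo_labels(self, weak_logits, tau):
        # yhat^re_j = argmax( f(alpha(u_j)) + d_y(theta_{t+1}) )
        probs = F.softmax(weak_logits, dim=-1)
        mask = (probs.max(dim=-1).values >= tau).float()
        return (weak_logits + self.d).argmax(dim=-1), mask
```

In a training step, ℒ_s and ℒ_u would then be the (masked) negative log-likelihoods of adjusted_log_probs at the true labels y_i and at the refined pseudo-labels ŷ^re_j, respectively, mirroring the losses summarized below.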
In summary, Figure <ref> presents the overall training procedure of TCBC. The labeled and unlabeled losses are given by:

ℒ_s = (1/N) ∑_{x_i∈ B_l} ℋ(y_i, p_tr(y | x_i)), ℒ_u = (1/M) ∑_{u_j∈ B_u} ℳ · ℋ(ŷ^re_j, p_tr(y | 𝒜(u_j))).

§ EXPERIMENTS

This section presents a comprehensive evaluation of our algorithm's performance on imbalanced SSL classification problems.

§.§ Experimental setup

Datasets. We conduct experiments on three benchmarks, CIFAR10, CIFAR100 <cit.>, and STL10 <cit.>, which are commonly used in imbalanced learning and SSL tasks. Results on a real-world dataset, SUN-397, are also given in the appendix. To validate the effectiveness of TCBC, we evaluate it under various ratios of class imbalance. For the imbalance type, we adopt long-tailed (LT) imbalance, in which the number of samples decreases exponentially from the largest to the smallest class. Following <cit.>, we denote the number of samples of the head class in the labeled and unlabeled data as N_1 and M_1, respectively. The imbalance ratios of the labeled and unlabeled data are denoted γ_l and γ_u, which can vary independently. We have N_k = N_1 · γ_l^{-ϵ_k} and M_k = M_1 · γ_u^{-ϵ_k}, where ϵ_k = (k-1)/(K-1).

Baseline methods. For supervised learning, we train the network using the cross-entropy loss with only labeled data. For semi-supervised learning, we compare the performance of TCBC with FixMatch <cit.>, which does not consider class imbalance. For a comprehensive comparison, we also combine several re-balancing algorithms with FixMatch, including DARP <cit.>, CReST <cit.>, ABC <cit.>, DASO <cit.>, and L2AC <cit.>.

Training and Evaluation. We train a Wide ResNet-28-2 (WRN28-2) backbone on CIFAR10-LT, CIFAR100-LT, and STL10-LT. We evaluate the performance of TCBC using an EMA network, whose parameters are updated via an exponential moving average at every step, following <cit.>. We measure the top-1 accuracy on the test data and report the median of the accuracy values of the last 20 epochs, following <cit.>. Each set of experiments was conducted three times. Additional experimental details are provided in the appendix.

§.§ Results

In the Case of γ_l = γ_u. We initiate our investigation with experiments in the scenario where γ_l = γ_u. Our evaluation of the proposed TCBC is exhaustive, encompassing a comprehensive comparison against recent state-of-the-art methods, including DARP <cit.>, CReST+ <cit.>, ABC <cit.>, DASO <cit.>, and L2AC <cit.>. Further details about these methods are provided in the appendix. The main results on the CIFAR-10 dataset are shown in Table <ref>. It is evident that across various dataset sizes and imbalance ratios, our approach (TCBC) substantially enhances the performance of FixMatch. Moreover, TCBC consistently surpasses all compared approaches in these settings, even those designed under the assumption of shared class distributions between labeled and unlabeled data. For instance, in the highly imbalanced scenario with γ_l = γ_u = 150, TCBC achieves improvements of 2.8% and 5.0% over L2AC in the settings N_1 = 1500, M_1 = 3000 and N_1 = 500, M_1 = 4000, respectively. To facilitate a more comprehensive comparison, we also evaluated TCBC on the CIFAR-100 dataset. As illustrated in Table <ref>, TCBC exhibits a more competitive performance than the state-of-the-art methods ABC, DASO, and L2AC.

In the Case of γ_l ≠ γ_u. In practical datasets, the distribution of unlabeled data may differ significantly from that of labeled data.
Therefore, we explore uniform and reversed class distributions, such as setting γ_u to 1 or 1/100 for CIFAR10-LT. In the case of the STL10-LT dataset, as the ground-truth labels of the unlabeled data are unknown, we can only control the imbalance ratio of the labeled data. We present the summarized results in Table <ref>. Our method demonstrates superior performance when confronted with inconsistent class distributions in unlabeled data. For instance, when γ_u is set to 1 and 1/100 on CIFAR10-LT, TCBC achieves absolute performance gains of 19.4% and 17.4%, respectively, compared to FixMatch. Similarly, on CIFAR100-LT, our method consistently outperforms the compared methods. Even in the case of STL10-LT, where the distribution of unlabeled data is unknown, TCBC attains the best results with an average accuracy gain of 3.8% compared to L2AC. These empirical results across the three datasets with unknown class distributions of unlabeled data validate the effectiveness of TCBC in leveraging unlabeled data to mitigate the negative impact of class imbalance.

§.§ Ablation Study

To explore the contributions of each key component in TCBC, we conducted a series of ablation studies. We set N_1 to 500 and M_1 to 4000, and performed experiments on CIFAR-10 with γ_l = 100 under various settings of γ_u. As shown in Table <ref>, it is evident that using either model bias correction or pseudo-label refinement alone can significantly enhance the performance of FixMatch. This underscores the effectiveness of the two components in our approach. However, pseudo-label refinement alone performs poorly when γ_u = 100 and γ_u = 1/100, mainly due to the imbalanced distribution of 𝒟_l ∪ 𝒟_u, which introduces class bias into the trained model. Model bias correction effectively addresses this issue. Similarly, pseudo-label refinement also enhances the effectiveness of model bias correction.

§.§ Discussion

Can our method adapt to unlabeled samples with different class distributions? To assess the effectiveness of our approach across different distributions of unlabeled samples, supplementary experiments were conducted using the CIFAR-10-LT dataset. The labeled imbalance ratio γ_l was maintained at a constant value of 100, while the imbalance ratio γ_u of the unlabeled data was systematically varied. In this study, we configured N_1 as 1500 and M_1 as 3000, and we conducted a comparative analysis of our method against the performance of DASO. The outcomes of the experiments are illustrated in Table <ref>. Notably, our TCBC method consistently demonstrated superior performance compared to DASO across all test scenarios, achieving an average performance increase of 4%. These results clearly demonstrate that our method can effectively adapt to imbalanced SSL environments in the real world.

How does our method perform on samples of different frequencies? We performed a comparative analysis of our TCBC against various configurations of ABC and DASO. Figure <ref> shows the confusion matrices of the models obtained by different algorithms on the test set when N_1 is set to 1500 and M_1 is set to 3000. The two confusion matrices on the left depict the test results of ABC and TCBC when the labeled and unlabeled class distributions are consistent. From the recall of the minority-class samples, it is evident that ABC exhibits bias across different classes. However, our approach effectively mitigates such biases. Under the settings of γ_l = 100 and γ_u = 1, both our method and the model learned by DASO reduced the model's class bias.
However, our method outperforms DASO on both majority and minority classes. This observation indicates that our TCBC has successfully learned a model without class bias.

Have our methods learned better features? We further visualized the features of unlabeled samples using t-SNE <cit.> in the same setting as depicted in Figure <ref>. As shown in Figure <ref>, compared to FixMatch, TCBC has learned more discriminative features. For instance, in the case of γ_u = 100, FixMatch exhibits only 8 clusters, while our method demonstrates all 10 clusters. This high-quality feature representation reflects the effectiveness of our method in learning an unbiased model and refining pseudo-labels.

How does TCBC enhance model performance? We examined the changes in the recall of the minority classes and of all classes on unlabeled samples under the settings of Figure <ref> for both the FixMatch and TCBC methods. As shown in Figure <ref>, it is evident that in FixMatch the recall of the minority class quickly plateaus and exhibits a significant disparity from the recall over all classes. In contrast, in TCBC the recall of the minority class continues to increase and approaches the average recall. This indicates that TCBC ensures equitable treatment of pseudo-labeling for minority classes throughout the process, aligning with our motivation.

§ CONCLUSION

In this work, we address the model bias and pseudo-label bias in imbalanced SSL through the introduction of a novel twice-correction approach. To tackle model bias, we propose utilizing an estimate of the training-sample class distribution to rectify the model's learning objectives toward an unbiased posterior probability. To address pseudo-label bias, we refine a better set of pseudo-labels by estimating the class bias under the current parameters during the training process. Extensive experimental results demonstrate that our method outperforms existing approaches.

§ ACKNOWLEDGMENTS

This work is supported by the National Science Foundation of China (61921006). We would like to thank Xin-chun Li and the anonymous reviewers for their helpful discussions and support.
http://arxiv.org/abs/2312.16604v1
{ "authors": [ "Lan Li", "Bowen Tao", "Lu Han", "De-chuan Zhan", "Han-jia Ye" ], "categories": [ "cs.LG" ], "primary_category": "cs.LG", "published": "20231227150636", "title": "Twice Class Bias Correction for Imbalanced Semi-Supervised Learning" }
Huiqun Jiang^* (School of Mathematics and Statistics, Fuzhou University, Fuzhou, 350108, Fujian, China). ^*Corresponding author: huiqun.jiang@foxmail.com. This research is partially supported by NSFC of China No. 12171088.

Let ℬ̇ ≜ ℬ̇_{Ḣ,μ} denote an arbitrary signed bipartite graph with Ḣ as a star complement for an eigenvalue μ, where Ḣ is a totally disconnected graph of order s. In this paper, by using Hadamard and Conference matrices as tools, the maximum order of ℬ̇ and the extremal graphs are studied. It is shown that ℬ̇ exists if and only if μ^2 is a positive integer. A formula for the maximum order of ℬ̇ is given in the case of μ^2 = p × q such that p, q are integers and there exists a p-order Hadamard or (p+1)-order Conference matrix. In particular, it is proved that the maximum order of ℬ̇ is 2s when either q=1 and s=cμ^2=cp, or q=1 and s=c(μ^2+1)=c(p+1), c=1,2,3,⋯. Furthermore, some extremal graphs are characterized. Keywords: Star complement; Signed bipartite graph; Totally disconnected graph; Hadamard matrix; Conference matrix.

§ INTRODUCTION

A signed graph is a pair (G, σ), where G is a simple graph, called the underlying graph, and σ: E → {-1, 1} is a mapping, called the sign function or signature. In certain circumstances, it is more convenient to deal with (G, σ) in such a way that its set of negative edges is emphasized, in which case we write (G, Σ^-) instead, where Σ^- = σ^{-1}(-) denotes the set of negative edges. A simple graph G can be regarded as a signed graph (G,∅). The Reconstruction Theorem is well known in the study of the spectra of simple and signed graphs; it was first proved by Cvetković, Rowlinson and Simić in <cit.>. The theorem reveals the role of an individual eigenvalue in the structure of a simple or signed graph. The star complement technique <cit.> is a procedure for constructing a simple or signed graph with a prescribed star complement for an eigenvalue, illustrating the use of the Reconstruction Theorem. In this procedure, a necessary condition is that the eigenvalue does not appear in the spectrum of the prescribed star complement. In <cit.>, by using this technique and the Kronecker product, Ramezani constructs some families of signed graphs with only two opposite eigenvalues. The typical usages of this technique are to determine the maximum order of simple or signed graphs and to construct maximal simple or signed graphs. In <cit.>, by using this technique, Ramezani, Rowlinson and Stanić prove that the order n of a signed graph with an eigenvalue μ∉{-1,0,1} of multiplicity k satisfies n ≤ \binom{n-k+2}{3}. In <cit.>, Yuan, Mao and Liu characterize maximal signed graphs with signed C_3 or C_5 as star complements for -2. In <cit.>, Mulas and Stanić characterize maximal signed graphs (G, Σ^-) with any signed subgraphs whose spectrum lies in (-2,2) as star complements, where (G, Σ^-) has the eigenvalues ±2. Let Ḣ ≜ (sK_1, ∅) be a totally disconnected signed graph of order s. In <cit.>, Stanić studies signed graphs (G, Σ^-) with Ḣ as a star complement for an eigenvalue μ. It is shown that if s=μ^2, then the star set induces ((n-μ^2)K_1, ∅) or (K_{n-μ^2}, ∅), and the number of distinct eigenvalues of (G, Σ^-) is two or three, where n is the order of (G, Σ^-). It is easy to see that (G, Σ^-) is bipartite in the case that the star set induces ((n-μ^2)K_1, ∅). Therefore, we study the following problem in this paper: Problem 𝒫: Let Ḣ be a totally disconnected graph of order s and μ be a non-zero real number.
What is the maximum order of a signed bipartite graph ℬ̇ such that ℬ̇ has Ḣ as a star complement for the eigenvalue μ? In Section 3, we show that ℬ̇ exists if and only if μ^2 is a positive integer and s ≥ μ^2 (see Proposition <ref>). Since the adjacency matrix of ℬ̇ can be orthogonal (up to scaling), Hadamard and Conference matrices can be tools to study Problem 𝒫. Some properties of Hadamard and Conference matrices can be found in <cit.>. A formula for the maximum order of ℬ̇ is given in the case of μ^2 = p × q such that p, q are integers and there exists a p-order Hadamard or (p+1)-order Conference matrix. In particular, the maximum order of ℬ̇ is 2s in the case of s = cμ^2 such that μ^2 is the order of a Hadamard matrix, or s = c(μ^2+1) such that μ^2+1 is the order of a Conference matrix (see Corollary <ref>). Section 2 contains the terminology and notation along with the Reconstruction Theorem. In Section 4, some extremal graphs are characterized.

§ PRELIMINARIES

Let (G, Σ^-) be a signed graph of order n. If the vertices u and v of (G, Σ^-) are adjacent, then we write u ∼ v. In particular, if they are adjacent by a positive (or negative) edge, then we write u +∼ v (or u -∼ v). If u and v are not adjacent, then we write u ≁ v. The adjacency matrix of (G, Σ^-) is the n × n matrix A_(G, Σ^-) = (a_ij), where a_ij = 1 if u +∼ v, a_ij = -1 if u -∼ v, and a_ij = 0 if u ≁ v. Let μ be an eigenvalue of (G, Σ^-) with multiplicity k. If S is a subset of the set V(G) such that |S| = k and μ is not an eigenvalue of the induced subgraph (G, Σ^-)-S, then S is called a star set for μ in (G, Σ^-) and the graph (G, Σ^-)-S (of order n-k) is called a star complement for μ in (G, Σ^-). The properties of star sets and star complements for the corresponding eigenvalues can be found in <cit.>. The following result, called the Reconstruction Theorem, is fundamental to the theory of star complements. <cit.> Let (G, Σ^-) be a graph with adjacency matrix ([ A_S B^T; B C ]), where A_S is the k × k adjacency matrix of the subgraph induced by a vertex set S. Then S is a star set for μ in (G, Σ^-) if and only if μ is not an eigenvalue of C and μI - A_S = B^T(μI - C)^{-1}B. This result was initially formulated in the context of simple graphs. Since every simple graph can be interpreted as a signed graph, it follows that Theorem <ref> is an extension of <cit.>. In <cit.>, Cvetković, Rowlinson and Simić prove that star sets and star complements exist for any eigenvalue in simple or signed graphs. In order to solve Problem 𝒫, it is necessary to compute the inverse of an invertible matrix of the form μI - C. In <cit.>, there is a way to compute the matrix (μI - C)^{-1}. Let S be a star set of order k for μ in (G, Σ^-). A bilinear form on ℝ^{n-k} is defined as follows: ⟨ x, y ⟩ = x^T (μI - C)^{-1} y (x, y ∈ ℝ^{n-k}). Let u be a vertex in S, and let 𝐛_u be a row of B^T which determines the neighbors of u in the signed subgraph (G, Σ^-)-S and the signatures of the associated edges. By Equation (<ref>), for every two vertices u, v ∈ S, we have

⟨𝐛_u, 𝐛_v⟩ = μ if u = v; -1 if u +∼ v; 1 if u -∼ v; 0 if u ≁ v.

If ⟨𝐛_u, 𝐛_u⟩ = μ, then u is said to be good for μ. If both u, v are good for μ and ⟨𝐛_u, 𝐛_v⟩ ∈ {-1, 0, 1}, then u, v are said to be compatible for μ. Let Ḣ' be a signed graph with adjacency matrix C, and let μ be a real number such that μ does not appear in the spectrum of Ḣ'. Then μ appears in the graph obtained by adding edges between Ḣ' and a single vertex u, where the edges are allowed to have signatures by Equation (<ref>).
It follows that u is good for μ if and only if ⟨𝐛_u, 𝐛_u⟩ = μ; if both u and v are good for μ, then u, v are compatible for μ if and only if ⟨𝐛_u, 𝐛_v⟩ ∈ {-1, 0, 1}. Thus, in order to solve Problem 𝒫, as many pairwise compatible vertices for μ as possible are required.

§ THE MAXIMUM ORDER OF ℬ̇

Let Ḣ be totally disconnected with V(Ḣ) = {1, 2, ⋯, s}, let u be a single vertex such that u ∉ V(Ḣ), and let 𝐛_u = (b_u1, ⋯, b_us)^T be a (0, ±1)-vector such that 𝐛_u determines the neighbors of u in Ḣ and the signatures of the associated edges u1, u2, ⋯, us. By Equation (<ref>), we get

∑_{i=1}^{s} b_{ui} b_{vi} = μ^2 if u = v; -μ if u +∼ v; μ if u -∼ v; 0 if u ≁ v.

Let Ḣ be a totally disconnected graph of order s, μ be a non-zero real number, and ℬ̇ denote an arbitrary signed bipartite graph with Ḣ as a star complement for μ. Then ℬ̇ exists if and only if μ^2 is a positive integer and s ≥ μ^2. If s = μ^2, then the following theorem is obtained in <cit.>, which gives an inequality between μ^2 and the order of ℬ̇, and the spectrum of ℬ̇. <cit.> If a signed graph (G, Σ^-) of order n is decomposed into the star complement (μ^2 K_1, ∅) (for μ) and ((n-μ^2)K_1, ∅), then n ≤ 2μ^2 and the spectrum of (G, Σ^-) is [μ^{n-μ^2}, 0^{2μ^2-n}, (-μ)^{n-μ^2}]. This theorem reveals that if a signed graph (G, Σ^-) of order n is decomposed into the star complement (sK_1, ∅) (for μ) and ((n-s)K_1, ∅), then n ≤ 2s and the spectrum of (G, Σ^-) is [μ^{n-s}, 0^{2s-n}, (-μ)^{n-s}]. It is easy to see that the signed graph (G, Σ^-) is bipartite. Since ℬ̇ is bipartite and Ḣ is totally disconnected, for every two vertices u, v ∉ V(Ḣ) we have u ≁ v; otherwise there is an induced signed graph (K_3, ∅) in ℬ̇. Thus the following theorem is obtained. Let Ḣ be a totally disconnected graph of order s, and let ℬ̇ denote an arbitrary signed bipartite graph with Ḣ as a star complement for an eigenvalue μ. If the order of ℬ̇ is n, then n ≤ 2s and the spectrum of ℬ̇ is [μ^{n-s}, 0^{2s-n}, (-μ)^{n-s}]. Let A_ℬ̇ = ([ 0 B^T; B 0 ]). By Equations (<ref>) and (<ref>), we get B^TB = μ^2 I. It is essential for Problem 𝒫 to construct such a B. If μ^2 = 1, then B can be an identity matrix. Thus, the maximum order of ℬ̇ is 2s, where ℬ̇ is a signed bipartite graph with Ḣ as a star complement for ±1. Let Ḣ be a totally disconnected graph of order s (s ≥ 2), and let ℬ̇ denote an arbitrary signed bipartite graph with Ḣ as a star complement for an eigenvalue μ. If μ^2 = 1, then the maximum order of ℬ̇ is 2s. If s = μ^2, then B is a square (-1,1)-matrix. An n × n (-1,1)-matrix ℋ(n) with ℋ(n)^T ℋ(n) = nI is called a Hadamard matrix. Hadamard matrices were defined by Sylvester in <cit.> and studied further by Hadamard in <cit.>. Hadamard conjectured that a Hadamard matrix of order 4n exists for every natural n (the Hadamard Conjecture). This condition is necessary, and the sufficiency part is still an open problem. These Hadamard matrices were systematically studied by Paley in <cit.>. It is shown that a Hadamard matrix of order n exists for the following five decompositions of a positive integer n: (1) n = 2^p, p ≥ 1; (2) n = 2^p(q^h+1), p ≥ 2, h ≥ 1, q is a prime; (3) n = 2^p q(q+1), p ≥ 2, q ≡ 3 (mod 4), q is a prime; (4) n = q+1, q ≡ 3 (mod 4), q is a prime power; (5) n = 2(q+1), q ≡ 1 (mod 4), q is a prime power. The cases of the first three decompositions were solved by Paley in <cit.>, and the remaining cases were solved by Ionin and Shrikhande in <cit.>. Let 𝒩(ℋ) be the subset of natural numbers whose elements admit at least one of the five decompositions above. Firstly, we consider μ^2 ∈ 𝒩(ℋ) and s = μ^2 ≥ 2.
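As an aside, and purely as our own illustration (not part of the source), the five decompositions above can be encoded directly; the following brute-force Python sketch tests whether a small even n belongs to 𝒩(ℋ). The function names are assumptions.

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def prime_power_base(n):
    """Return the prime q with n = q^h (h >= 1), or None."""
    for q in range(2, n + 1):
        if is_prime(q):
            m = n
            while m % q == 0:
                m //= q
            if m == 1:
                return q
    return None

def in_N_H(n):
    """Check the five decompositions listed above for membership in N(H)."""
    if n >= 2 and n & (n - 1) == 0:          # (1) n = 2^p, p >= 1
        return True
    pw = 4                                    # (2) and (3) need p >= 2
    while pw <= n:
        if n % pw == 0:
            m = n // pw
            if prime_power_base(m - 1) is not None:   # (2) m = q^h + 1
                return True
            for q in range(3, m, 4):                  # (3) m = q(q+1), q = 3 mod 4
                if is_prime(q) and q * (q + 1) == m:
                    return True
        pw *= 2
    if prime_power_base(n - 1) is not None and (n - 1) % 4 == 3:
        return True                           # (4) n = q + 1, q = 3 mod 4
    if n % 2 == 0:                            # (5) n = 2(q + 1), q = 1 mod 4
        q0 = n // 2 - 1
        if q0 >= 1 and prime_power_base(q0) is not None and q0 % 4 == 1:
            return True
    return False

print([n for n in range(2, 68, 2) if in_N_H(n)])
```

Note that n = 92 matches none of the five decompositions, although Hadamard matrices of order 92 are known to exist by other constructions, so 𝒩(ℋ) does not exhaust all Hadamard orders.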
Let μ^2 (μ^2 ≥ 2) be a positive integer, 𝐛 be a (-1,1)-vector with μ^2 rows, and B be a matrix consisting of such 𝐛's with B^TB = μ^2 I. If μ^2 ∈ 𝒩(ℋ), then the maximum number of columns of B is μ^2. Secondly, we consider s = μ^2 with μ^2 ∈ 2ℤ^++1 or μ^2 ∈ 2ℤ^+\4ℤ^+. Let ⊗ denote the Kronecker product. Let μ^2 (μ^2 ≥ 2) be a positive integer, 𝐛 be a (-1,1)-vector with μ^2 rows, and B be a matrix consisting of such 𝐛's with B^TB = μ^2 I. If μ^2 ∈ 2ℤ^++1 or μ^2 ∈ 2ℤ^+\4ℤ^+, then the maximum number of columns of B is 1 or 2, respectively. Let μ^2 = 2^p(2q+1), where p = 0, 1 and q = 1, 2, 3, ⋯. If p = 0, then B is a vector; otherwise, there would be at least one 0 in the second column of B, contradicting that B is a (-1,1)-matrix. Thus the maximum number of columns of B is 1. Let ℋ(2) = ([ 1 1; 1 -1 ]), B = ℋ(2) ⊗ 𝐣_{2q+1}, and let 𝐲 = 𝐲_1 ⊗ 𝐲_2 be a (-1,1)-vector such that B^T𝐲 = 0, where the number of rows of 𝐲_1 (or 𝐲_2) is 2 (or 2q+1). Then (ℋ(2)^T 𝐲_1) ⊗ (𝐣^T 𝐲_2) = 0, so ℋ(2)^T 𝐲_1 = 0 or 𝐣^T 𝐲_2 = 0, which is a contradiction. Thus the maximum number of columns of B is 2. By Lemmas <ref> and <ref>, the following theorem is obtained. Let Ḣ be a totally disconnected graph of order s, ℬ̇ denote an arbitrary signed bipartite graph with Ḣ as a star complement for an eigenvalue μ, and n be the maximum order of ℬ̇. If s = μ^2 ≥ 2, then

n = s+1 if s ∈ 2ℤ^++1; n = s+2 if s ∈ 2ℤ^+\4ℤ^+; n = 2s if s ∈ 𝒩(ℋ).

If s = μ^2+1, then we consider whether or not there is a (0, ±1)-matrix B such that B^TB = (s-1)I. An n × n square matrix 𝒞(n) with 𝒞(n)^T 𝒞(n) = (n-1)I, in which all diagonal entries are 0 and all off-diagonal entries are ±1, is said to be a Conference matrix. In <cit.>, it is shown that there is a Conference matrix 𝒞(n) in the case that n-1 is an odd prime power (the odd prime power can be 1). Let 𝒩(𝒞) be the set of odd prime powers. Thirdly, we consider μ^2 ∈ 𝒩(𝒞) and s = μ^2+1. Let μ^2 ∈ 𝒩(𝒞) with μ^2 ≥ 2, and let 𝐛 be a (0, ±1)-vector with μ^2+1 rows, in which exactly one entry is 0. If there is a matrix B consisting of such 𝐛's with B^TB = μ^2 I, then the maximum number of columns of B is μ^2+1. Let Ḣ be a totally disconnected graph of order s, ℬ̇ denote an arbitrary signed bipartite graph with Ḣ as a star complement for an eigenvalue μ, and n be the maximum order of ℬ̇. If s = μ^2+1 ≥ 3, then

n = μ^2+3 if μ^2 ∈ 2ℤ^+\4ℤ^+; n = 2μ^2+1 if μ^2 ∈ 𝒩(ℋ); n = 2μ^2+2 if μ^2 ∈ 𝒩(𝒞).

If μ^2 ∈ 2ℤ^+\4ℤ^+, then n ≥ μ^2+3 by Theorem <ref>. Suppose that there are two vertices u, v in ℬ̇ such that the neighbors of u are different from those of v. Then the number of common neighbors of u and v is μ^2-1. Let 𝐛_u be a (0, ±1)-vector of order s such that 𝐛_u determines the neighbors of u in Ḣ and the signatures of the associated edges. Since μ^2 is even, μ^2-1 is odd and 𝐛_u^T 𝐛_v ≠ 0, which is a contradiction. Thus n = μ^2+3. If μ^2 ∈ 𝒩(ℋ), then n = 2μ^2+1 by Theorem <ref>. If μ^2 ∈ 𝒩(𝒞), then n = 2μ^2+2 by Lemma <ref>. Finally, we consider μ^2 = p × q such that p, q are integers and there exists a p-order Hadamard or (p+1)-order Conference matrix. Let Ḣ be a totally disconnected graph of order s, ℬ̇ denote an arbitrary signed bipartite graph with Ḣ as a star complement for an eigenvalue μ, and n be the maximum order of ℬ̇, where there are positive integers p and q such that μ^2 = pq ≥ 2. (1) If p, q ∈ 𝒩(ℋ) and cpq ≤ s < (c+1)pq, then n = s+cpq, where c = 1, 2, 3, ⋯. (2) If q ∈ 𝒩(𝒞), then Table <ref> holds. Since p, q ∈ 𝒩(ℋ), there are Hadamard matrices ℋ(p) and ℋ(q). If s = pq, then B = ℋ(p) ⊗ ℋ(q). Let 𝐲 = 𝐲_1 ⊗ 𝐲_2 be a (-1,1)-vector such that B^T𝐲 = 0, where the number of rows of 𝐲_1 (or 𝐲_2) is p (or q).
Then (ℋ(p)^T 𝐲_1) ⊗ (ℋ(q)^T 𝐲_2) = 0, and there is a zero entry in ℋ(p)^T 𝐲_1 or ℋ(q)^T 𝐲_2, which is a contradiction. Thus n = s+pq. If cpq ≤ s < (c+1)pq, then B^T = ( I_c ⊗ ℋ(p)^T ⊗ ℋ(q)^T  0 ), and n = s+cpq. Thus the result (1) holds. Since q ∈ 𝒩(𝒞), there is a Conference matrix 𝒞(q+1). If s = μ^2 = 2q, then n = 2q+2 = s+2 by Theorem <ref>. If s = μ^2+1 = 2q+1, then n = s+2 by Theorem <ref>. Thus the result (2.1) holds. Since p ∈ 𝒩(ℋ), there is a Hadamard matrix ℋ(p). If s = p(q+1), then B = ℋ(p) ⊗ 𝒞(q+1). Let 𝐱_1 be a (0, ±1)-vector with p rows, 𝐱_2 be a (0, ±1)-vector with q+1 rows, and 𝐱 = 𝐱_1 ⊗ 𝐱_2 be a vector such that B^T𝐱 = 0 and there is only one 0 either in 𝐱_1 or in 𝐱_2. Then (ℋ(p)^T 𝐱_1) ⊗ (𝒞(q+1)^T 𝐱_2) = 0. If there is only one 0 in 𝐱_1, then there are no zero entries in ℋ(p)^T 𝐱_1 or 𝒞(q+1)^T 𝐱_2, which is a contradiction. If there is only one 0 in 𝐱_2, then there is a zero entry in ℋ(p)^T 𝐱_1 or 𝒞(q+1)^T 𝐱_2, which is a contradiction. Thus n = s+p(q+1). If cp(q+1) ≤ s < (c+1)p(q+1), then B^T = ( I_c ⊗ ℋ(p)^T ⊗ 𝒞(q+1)^T  0 ), and n = s+cp(q+1). Thus the result (2.2) holds. Since p ∈ 𝒩(𝒞), there is a Conference matrix 𝒞(p+1). If s = (p+1)(q+1), then B = 𝒞(p+1) ⊗ 𝒞(q+1). Let 𝐲_1 be a (0, ±1)-vector with p+1 rows, 𝐲_2 be a (0, ±1)-vector with q+1 rows, and 𝐲 = 𝐲_1 ⊗ 𝐲_2 be a vector such that B^T𝐲 = 0 and there is either exactly one 0 in each of 𝐲_1 and 𝐲_2, or two 0's in 𝐲_1. Then (𝒞(p+1)^T 𝐲_1) ⊗ (𝒞(q+1)^T 𝐲_2) = 0. If there is only one 0 in each of 𝐲_1 and 𝐲_2, then there is a zero entry in 𝒞(p+1)^T 𝐲_1 or 𝒞(q+1)^T 𝐲_2, which is a contradiction. If there are only two 0's in 𝐲_1, then not all entries of 𝒞(p+1)^T 𝐲_1 are 0 and there are no zero entries in 𝒞(q+1)^T 𝐲_2, which is a contradiction. Thus n = s+(p+1)(q+1). If c(p+1)(q+1) ≤ s < (c+1)(p+1)(q+1), then B^T = ( I_c ⊗ 𝒞(p+1)^T ⊗ 𝒞(q+1)^T  0 ), and n = s+c(p+1)(q+1). Thus the result (2.3) holds. Let Ḣ be a totally disconnected graph of order s, ℬ̇ denote an arbitrary signed bipartite graph with Ḣ as a star complement for an eigenvalue μ (μ^2 ≥ 2), and n be the maximum order of ℬ̇. (1) If μ^2 ∈ 𝒩(ℋ) and cμ^2 ≤ s < (c+1)μ^2, then n = s+cμ^2. (2) If μ^2 ∈ 𝒩(𝒞) and c(μ^2+1) ≤ s < (c+1)(μ^2+1), then n = s+c(μ^2+1). Let Ḣ be a totally disconnected graph of order s, ℬ̇ denote an arbitrary signed bipartite graph with Ḣ as a star complement for an eigenvalue μ, and n be the maximum order of ℬ̇, where there are positive integers p and q (p > q) such that μ^2 = pq ≥ 2. (1) If p ∈ 𝒩(ℋ), q ∈ 𝒩(𝒞) and pq ≤ s < p(q+1), then s+p ≤ n ≤ 2s. (2) If p, q ∈ 𝒩(𝒞) and pq ≤ s < (p+1)q, then s+1 ≤ n ≤ 2s. (3) If p, q ∈ 𝒩(𝒞) and (p+1)q ≤ s < (p+1)(q+1), then s+p+1 ≤ n ≤ 2s. Since p ∈ 𝒩(ℋ) and q ∈ 𝒩(𝒞), there are a Hadamard matrix ℋ(p) and a Conference matrix 𝒞(q+1). If s = pq, then B = ℋ(p) ⊗ 𝐣_q. Let 𝐱 = 𝐱_1 ⊗ 𝐱_2 be a (-1,1)-vector such that B^T𝐱 = 0, where the number of rows of 𝐱_1 (or 𝐱_2) is p (or q). Then (ℋ(p)^T 𝐱_1) ⊗ (𝐣^T 𝐱_2) = 0, and there is a zero entry in ℋ(p)^T 𝐱_1 or 𝐣^T 𝐱_2 = 0, which is a contradiction. Thus s+p ≤ n ≤ 2s. If s = μ^2 = pq and p, q ∈ 𝒩(𝒞), then n = s+1 by Lemma <ref>. If pq ≤ s < (p+1)q, then s+1 ≤ n ≤ 2s. Since p ∈ 𝒩(𝒞), there is a Conference matrix 𝒞(p+1). If s = (p+1)q, then B = 𝒞(p+1) ⊗ 𝐣_q. Let 𝐲_1 be a (0, ±1)-vector with p+1 rows, 𝐲_2 be a (0, ±1)-vector with q rows, and 𝐲 = 𝐲_1 ⊗ 𝐲_2 be a vector such that B^T𝐲 = 0 and there is only one 0 either in 𝐲_1 or in 𝐲_2. Then (𝒞(p+1)^T 𝐲_1) ⊗ (𝐣^T 𝐲_2) = 0. If there is only one 0 in 𝐲_1, then there is a zero entry in 𝒞(p+1)^T 𝐲_1 and 𝐣^T 𝐲_2 ≠ 0, which is a contradiction. If there is only one 0 in 𝐲_2, then there are no zero entries in 𝒞(p+1)^T 𝐲_1 and 𝐣^T 𝐲_2 ≠ 0, which is a contradiction. Thus s+p+1 ≤ n ≤ 2s.
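To make the Kronecker constructions in these proofs concrete, the following NumPy sketch (our illustration, not part of the original proofs; it assumes p is a power of two and q a prime congruent to 1 mod 4, so that the Sylvester and Paley constructions apply) builds B = ℋ(4) ⊗ 𝒞(6) for p = 4 ∈ 𝒩(ℋ) and q = 5 ∈ 𝒩(𝒞), checks B^TB = pq·I = 20I, and confirms that the associated signed bipartite graph on n = 2s = 48 vertices has spectrum [√20^{24}, (-√20)^{24}], matching the case s = p(q+1) of the table.

```python
import numpy as np

def hadamard(p):
    """Sylvester construction: H with H^T H = p I (p a power of 2)."""
    H = np.array([[1]])
    while H.shape[0] < p:
        H = np.block([[H, H], [H, -H]])
    assert H.shape[0] == p, "Sylvester construction needs p to be a power of 2"
    return H

def conference(q):
    """Paley construction of a symmetric conference matrix C(q+1)
    for a prime q = 1 (mod 4); C^T C = q I, with zero diagonal."""
    residues = {(i * i) % q for i in range(1, q)}
    chi = lambda a: 0 if a % q == 0 else (1 if a % q in residues else -1)
    Q = np.array([[chi(j - i) for j in range(q)] for i in range(q)])
    C = np.zeros((q + 1, q + 1), dtype=int)
    C[0, 1:] = 1
    C[1:, 0] = 1
    C[1:, 1:] = Q
    return C

p, q = 4, 5                        # mu^2 = p*q = 20, s = p*(q+1) = 24
B = np.kron(hadamard(p), conference(q + 1))
assert np.array_equal(B.T @ B, p * q * np.eye(B.shape[0], dtype=int))

s = B.shape[0]                     # order of the star complement sK_1
A = np.block([[np.zeros((s, s)), B.T], [B, np.zeros((s, s))]])
eigs = np.linalg.eigvalsh(A)
print(s, np.unique(np.round(eigs, 6)))   # eigenvalues are +-sqrt(20); n = 2s
```

Replacing conference(q + 1) with an all-ones column vector reproduces the B = ℋ(p) ⊗ 𝐣_q construction used in the last theorem above.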
§ THE MAXIMAL SIGNED BIPARTITE GRAPH ℬ̇_m

Let (G_1, Σ_1^-) and (G_2, Σ_2^-) be two signed graphs. If there exists a permutation (0,1)-matrix P such that A_(G_2, Σ_2^-) = P^{-1} A_(G_1, Σ_1^-) P, then (G_1, Σ_1^-) and (G_2, Σ_2^-) are said to be isomorphic. Similarly, if there is a diagonal (-1,1)-matrix D such that A_(G_2, Σ_2^-) = D^{-1} A_(G_1, Σ_1^-) D, then (G_1, Σ_1^-) and (G_2, Σ_2^-) are said to be switching equivalent. Isomorphism and switching equivalence are equivalence relations preserving the eigenvalues. If (G_1, Σ_1^-) is isomorphic to a switching equivalent of (G_2, Σ_2^-), then (G_1, Σ_1^-) and (G_2, Σ_2^-) are said to be switching isomorphic, denoted by (G_1, Σ_1^-) ≅^s (G_2, Σ_2^-). Switching isomorphism is also an equivalence relation preserving the eigenvalues. Let Ḣ be a totally disconnected graph of order s, and let ℬ̇_m denote a maximal signed bipartite graph with Ḣ as a star complement for an eigenvalue μ. Let Ḣ be a totally disconnected graph of order s, and let ℬ̇_m denote the maximal signed bipartite graph with Ḣ as a star complement for an eigenvalue μ. If μ^2 = 1, then ℬ̇_m ≅^s (sK_2, ∅). Let S be a vertex subset of ℬ̇_m such that ℬ̇_m - S = Ḣ. Since μ = ±1, by Equation (<ref>) we obtain that each vertex in S is adjacent to only one vertex in Ḣ, and the vertices in Ḣ have no common neighbors in S. Thus ℬ̇_m ≅^s (sK_2, ∅). Now we consider s = μ^2 ≥ 2 or s = μ^2+1. Let Ḣ be a totally disconnected graph of order μ^2. If μ^2 = 2, then (K_2,2, E(K_2)) is the maximal signed bipartite graph with Ḣ as a star complement for ±√2. If μ^2 = 3, then (K_1,3, ∅) is the maximal signed bipartite graph with Ḣ as a star complement for ±√3. If μ^2 = 4, then (K_4,4, E(C_6)) is the maximal signed bipartite graph with Ḣ as a star complement for ±2. A simple graph is said to be biregular if its vertex degrees assume exactly two different values. Let K_n,n\nK_2 be the simple graph obtained by deleting n edges without common vertices from K_n,n. Let Ḣ be a totally disconnected graph of order μ^2+1. If μ^2 = 2, then (K_1,3 ∪ K_1, ∅) is the maximal signed bipartite graph with Ḣ as a star complement for ±√2. If μ^2 = 3, then (K_4,4\4K_2, E(P_4 ∪ K_2)) is the maximal graph with Ḣ as a star complement for ±√3, where P_4 is a path of length 4. If μ^2 = 5, then (K_6,6\6K_2, E(BR)) is the maximal graph with Ḣ as a star complement for ±√5, where the simple graph BR (see Figure <ref>) is biregular. If s = μ^2 ∈ 𝒩(ℋ), then the order of ℬ̇_m is 2s by Theorem <ref>, and ℬ̇_m has only two opposite eigenvalues. If s = μ^2+1, then the maximum order of ℬ̇_m is 2μ^2+2 by Theorem <ref>, and ℬ̇_m also has only two opposite eigenvalues. In <cit.>, Ramezani proves that the signed graphs with just two distinct eigenvalues are signed strongly regular graphs. In <cit.>, Stanić shows certain structural and spectral properties of signed strongly regular graphs, including those that are bipartite. Let the signature of a walk be determined by the product of the signatures of its edges, let u and v be two vertices in a signed graph, and let w_2(u,v) be the difference between the numbers of positive and negative walks traversing 2 edges between u and v. A signed graph (G, Σ^-) is said to be strongly regular (for short, SRG^s(n,r,a,b,c)) whenever it is regular and satisfies the following four conditions: (1) (G, Σ^-) is neither homogeneous complete nor totally disconnected; (2) there exists a ∈ ℤ such that w_2(u,v) = a for all u +∼ v; (3) there exists b ∈ ℤ such that w_2(u,v) = b for all u -∼ v; (4) there exists c ∈ ℤ such that w_2(u,v) = c for all u ≁ v.
Thus

A_(G, Σ^-)^2 = (a/2)(A_(G, Σ^-) + A_G) - (b/2)(A_(G, Σ^-) - A_G) + cA_{G̅} + rI,

where G̅ denotes the complement of the simple graph G. We use the notation SRG^s(2n) to denote SRG^s(2n, n, 0, 0, 0). Let Ḣ be a totally disconnected graph of order s, and let ℬ̇_m denote the maximal signed bipartite graph with Ḣ as a star complement for an eigenvalue μ, where s = μ^2 ≥ 2. (1) If s ∈ 2ℤ^++1, then ℬ̇_m ≅^s (K_1,s, ∅). (2) If s ∈ 2ℤ^+\4ℤ^+, then ℬ̇_m ≅^s (K_2,s, E(K_1,s/2)). (3) If s ∈ 𝒩(ℋ), then ℬ̇_m ≅^s SRG^s(2s). Since ℬ̇_m is bipartite and Ḣ is totally disconnected, for every two vertices u, v ∉ V(Ḣ) we have u ≁ v; otherwise there is an induced signed graph (K_3, ∅) in ℬ̇_m. By Equations (<ref>) and (<ref>), we get B^TB = sI. If s ∈ 2ℤ^++1 or s ∈ 2ℤ^+\4ℤ^+, then the results (1) and (2) hold by Lemma <ref>. If s ∈ 𝒩(ℋ), then A_ℬ̇_m^2 = sI and ℬ̇_m is a signed strongly regular graph. Let n be the order of ℬ̇_m and r, a, b, c be integers satisfying Equation (<ref>). Then n = 2s, r = s and a = b = c = 0. Thus the result (3) holds. Let Ḣ be a totally disconnected graph of order s, and let ℬ̇_m denote the maximal signed bipartite graph with Ḣ as a star complement for an eigenvalue μ, where s = μ^2+1 ≥ 3. (1) If μ^2 ∈ 2ℤ^+\4ℤ^+, then ℬ̇_m ≅^s (K_2,μ^2 ∪ K_1, E(K_1,μ^2/2)). (2) If μ^2 ∈ 𝒩(ℋ), then ℬ̇_m ≅^s SRG^s(2μ^2) ∪ (K_1, ∅). (3) If μ^2 ∈ 𝒩(𝒞), then ℬ̇_m ≅^s SRG^s(2μ^2+2). By Equation (<ref>), we get B^TB = μ^2 I. If μ^2 ∈ (2ℤ^+\4ℤ^+) ∪ 𝒩(ℋ), then every two vertices in ℬ̇_m have the same neighbors; otherwise the inner product between a (-1,1)-vector with an odd number of rows and an all-ones vector would be zero, which is a contradiction. The results (1) and (2) hold by Theorem <ref>. If μ^2 ∈ 𝒩(𝒞), then A_ℬ̇_m^2 = μ^2 I, and ℬ̇_m is a signed strongly regular graph. Let n be the order of ℬ̇_m and r, a, b, c be integers satisfying Equation (<ref>); then n = 2μ^2+2, r = s-1 and a = b = c = 0 by Lemma <ref>. Thus the result (3) holds. Suppose SRG^s(2s) is bipartite. If s ∈ 𝒩(ℋ), then the underlying graph of SRG^s(2s) is K_s,s. If s ∈ 𝒩(𝒞), then the underlying graph of SRG^s(2s) is K_s,s\sK_2. Let (G_1, Σ_1^-) and (G_2, Σ_2^-) be two signed bipartite graphs with adjacency matrices ([ 0 B_1^T; B_1 0 ]) and ([ 0 B_2^T; B_2 0 ]), respectively. The signed graph (G_1, Σ_1^-) ⊗̇ (G_2, Σ_2^-) is defined as the signed bipartite graph with adjacency matrix A_(G_1, Σ_1^-) ⊗̇ A_(G_2, Σ_2^-) = ([ 0 B_1^T ⊗ B_2^T; B_1 ⊗ B_2 0 ]). Let c(G_1, Σ_1^-) denote the disjoint union of c copies of (G_1, Σ_1^-). By Theorems <ref>, <ref> and <ref>, the following result is obtained. Let Ḣ be a totally disconnected graph of order s, and let ℬ̇_m denote the maximal signed bipartite graph with Ḣ as a star complement for an eigenvalue μ, where there are positive integers p and q such that μ^2 = pq ≥ 2. (1) If p, q ∈ 𝒩(ℋ) and cpq ≤ s < (c+1)pq (c = 1, 2, 3, ⋯), then ℬ̇_m ≅^s (c SRG^s(2p) ⊗̇ SRG^s(2q)) ∪ ((s-cpq)K_1, ∅). (2) If q ∈ 𝒩(𝒞), then Table <ref> holds. Let Ḣ be a totally disconnected graph of order s, and let ℬ̇_m denote the maximal signed bipartite graph with Ḣ as a star complement for an eigenvalue μ, where there are positive integers p and q (p > q) such that μ^2 = pq ≥ 2. Then by Theorem <ref>, the following three results are obtained. (1) If p ∈ 𝒩(ℋ), q ∈ 𝒩(𝒞) and s = pq, then ℬ̇_m ≅^s SRG^s(2p) ⊗̇ (K_1,q, ∅). (2) If p, q ∈ 𝒩(𝒞) and s = pq, then ℬ̇_m ≅^s (K_1,pq, ∅). (3) If p, q ∈ 𝒩(𝒞) and s = (p+1)q, then ℬ̇_m ≅^s SRG^s(2p+2) ⊗̇ (K_1,q, ∅).

References

[37] D. Cvetković, P. Rowlinson, S. K. Simić, A study of eigenspaces of graphs, Linear Algebra and its Applications 182 (1993) 45-66.
[19] D. Cvetković, P. Rowlinson, S. K. Simić, Eigenspaces of Graphs, Cambridge University Press, New York, 1997.
[35] D. Cvetković, P. Rowlinson, S. K. Simić, An Introduction to the Theory of Graph Spectra, Cambridge University Press, New York, 2010.
[42] J. Hadamard, Résolution d'une question relative aux déterminants, Bulletin des Sciences Mathématiques 17 (1893) 240-246.
[40] Y. J. Ionin, M. S. Shrikhande, Combinatorics of Symmetric Designs, Cambridge University Press, New York, 2006.
[2] R. Mulas, Z. Stanić, Star complements for ±2 in signed graphs, Special Matrices 10 (2022) 258-266.
[39] R. E. A. C. Paley, On orthogonal matrices, Journal of Mathematics and Physics 12 (1933) 311-320.
[1] F. Ramezani, Constructing signed strongly regular graphs via star complement technique, Mathematical Sciences 12 (2018) 157-161.
[3] F. Ramezani, P. Rowlinson, Z. Stanić, On eigenvalue multiplicity in signed graphs, Discrete Mathematics 343 (2020) 111982.
[43] Z. Stanić, On strongly regular signed graphs, Discrete Applied Mathematics 271 (2019) 184-190.
[17] Z. Stanić, Signed graphs with totally disconnected star complements, Revista de la Unión Matemática Argentina 62 (2021) 95-104.
[41] J. J. Sylvester, Thoughts on Orthogonal Matrices, Simultaneous Sign Successions, and Tessellated Pavements in Two or More Colors, with Applications to Newton's Rule, Ornamental Tile-Work, and the Theory of Numbers, Philosophical Magazine 34 (1867) 461-475.
[4] X. Yuan, Y. Mao, L. Liu, Maximal signed graphs with odd signed cycles as star complements, Applied Mathematics and Computation 408 (2021) 126367.
http://arxiv.org/abs/2312.15990v1
{ "authors": [ "Huiqun Jiang", "Yue Liu" ], "categories": [ "math.CO", "math-ph", "math.MP" ], "primary_category": "math.CO", "published": "20231226104506", "title": "Maximal signed bipartite graphs with totally disconnected graphs as star complements" }
Deep neural networks have shown remarkable performance in image classification. However, their performance significantly deteriorates with corrupted input data. Domain generalization methods have been proposed to train robust models against out-of-distribution data. Data augmentation in the frequency domain is one such approach that enables a model to learn phase features to establish domain-invariant representations. This approach changes the amplitudes of the input data while preserving the phases. However, using fixed phases leads to susceptibility to phase fluctuations, because amplitude and phase fluctuations commonly occur together in out-of-distribution data. In this study, to address this problem, we introduce an approach using finite variation of the phases of input data rather than maintaining fixed phases. Based on the assumption that the degree of domain-invariant features varies for each phase, we propose a method to distinguish phases based on this degree. In addition, we propose a method called vital phase augmentation (VIPAug) that applies the variation to the phases differently according to the degree of domain-invariant features of the given phases. The model depends more on the vital phases that contain more domain-invariant features, attaining robustness to amplitude and phase fluctuations. We present experimental evaluations of our proposed approach, which exhibited improved performance for both clean and corrupted data. VIPAug achieved SOTA performance on the benchmark CIFAR-10 and CIFAR-100 datasets <cit.>, as well as near-SOTA performance on the ImageNet-100 and ImageNet datasets <cit.>. Our code is available at https://github.com/excitedkid/vipaug.

§ INTRODUCTION

Deep learning is being actively explored for various applications in computer vision, such as image classification and object detection <cit.>. The rapid development of deep learning methods has led to performance that can surpass that of human effort on some tasks. For example, deep neural networks (DNNs) can achieve high accuracy on image classification tasks with in-distribution data. However, the real-world performance of DNNs can be poor compared with that of manual classification by humans <cit.>. Because the distributions of the train and test datasets may differ in the real world, deep learning models cannot be trained to compensate for all of the potential types of data corruption. To address this challenge, domain generalization methods have been developed to train models to be more robust to out-of-distribution (OOD) data <cit.>. These techniques aim to minimize any deterioration in performance on clean data while improving the performance of deep learning models on corrupted data. Data augmentation methods have also been proposed to improve domain generalization. Some of these approaches <cit.>, based on the frequency domain, show that phases contain domain-invariant features. To make the models depend on the phases, only the amplitudes of the input data are varied with several different techniques, while the phases are fixed. The fixed phases are then combined with the augmented amplitudes to reconstruct the image. However, with corrupted data, amplitudes and phases can fluctuate significantly, as shown in Figure <ref> and Figure <ref>.
Therefore, existing methods with fixed phases are not robust to phase fluctuations. To address the limitations of existing methods, we propose to introduce finite phase variations to ensure robustness to phase fluctuations. We propose two hypotheses. First, the degree of domain-invariant feature inclusion, which we define as the robustness weight, varies for each phase. We define a phase with a relatively high robustness weight as a vital phase and a phase with a low robustness weight as a non-vital phase. Accordingly, we propose a method to detect vital and non-vital phases based on the magnitude of the amplitudes. Second, applying different strengths of variation according to the robustness weights allows a model to depend more on the vital phases, which enhances its robustness against corruption. By retaining the advantages of existing methods and addressing the vulnerability to phase fluctuations, we propose a novel approach called vital phase augmentation (VIPAug). VIPAug applies variations to the phases of input data based on the robustness weight and replaces all amplitudes, enabling the model to depend on the vital phases. VIPAug incorporates phase variations by employing a Gaussian distribution and by partially replacing the phases with those of fractal images. We also present the experimental results of our approach, which show improved accuracy on both clean and corrupted data compared with baseline methods. The contributions of this study are summarized as follows:

* We experimentally demonstrate, for the first time, that the robustness weights of phases differ.
* We propose a method to identify vital and non-vital phases based on their weights.
* We propose VIPAug as a novel augmentation approach that combines the new phase variations with existing amplitude-based variations. This approach enables the model to perform more robustly against phase fluctuations while depending on the phases.
* Our experimental results show that the proposed method achieved state-of-the-art performance on the CIFAR-10 and CIFAR-100 datasets and nearly state-of-the-art performance on the ImageNet-100 and ImageNet datasets.

§ RELATED WORKS

§.§ Domain generalization

Deep learning models should be robust against unseen domains that may be encountered in real-world applications. Domain generalization methods aim to generalize models to OOD data by using only training data from a given source domain. Domain generalization can be implemented in a variety of ways, including contrastive learning, ensemble learning, and meta-learning. Contrastive learning methods reduce the multi-domain gap to improve generalization ability. Motiian et al. <cit.> exploited the Siamese architecture with a contrastive loss. Yoon et al. <cit.> extended a contrastive semantic alignment loss to mitigate the bias of data and establish domain-invariant representations. Ensemble learning methods combine several models to improve generalization. Ding et al. <cit.> used multiple domain-specific deep neural networks to capture a shared representation within multiple sources. Similarly, Liu et al. <cit.> proposed a multi-site network with domain-specific batch normalization layers. The meta-learning approach diversifies different models to improve their stability and generalization performance. Zhao et al. <cit.> proposed a memory-based identification loss designed to harmonize with meta-learning.
All these methods have a limitation in that they do not directly increase the diversity of the training data. For this reason, we concentrate on data augmentation among the various methods for domain generalization.

§.§ Data augmentation

Data augmentation has been studied to improve the generalization performance of deep learning models. Mixup <cit.> mixes two images with linear combinations to improve generalization ability. Cutout <cit.> and Random Erasing <cit.> randomly erase a part of an image to improve accuracy and generalize to occluded objects. AutoAugment <cit.> optimizes a group of augmentations with reinforcement learning. However, these methods only generalize a model to limited scenarios and are not robust to various distributional shifts such as common corruptions. Common corruptions refer to the distortions and distributional shifts possible in the real world, such as shot noise, motion blur, and snow. Recently, several data augmentation methods have been proposed to improve performance under common corruption scenarios. These methods generate multi-source domains from a single-source domain using various transformations and mixing strategies. AugMix <cit.> proposed parallel data pipelines to generate diverse domains while maintaining semantic content. PixMix <cit.> mixes original images with external fractal images to introduce greater structural complexity. Zhou et al. <cit.> randomly mixed instance-level feature statistics of training samples across source domains. However, these methods do not take into account that the image phases contain domain-invariant features. Data augmentation using the frequency domain has also become a topic of active research that leverages domain-invariant features in images. APR-SP <cit.> fixes the phases and replaces the amplitudes with those from other images. FACT <cit.> fixes the phases and mixes the amplitudes with those from other images. Both methods introduce amplitude variations to enable a model to learn domain-invariant features from the phases. However, fixing the phases makes DNNs vulnerable to phase fluctuations. HybridAugment++ <cit.> makes the model rely on the low-frequency components of data, but this approach does not consider variations in the robustness weight of each phase. PRIME <cit.> is an integrated method that considers augmentation in the spectral, spatial, and color domains. Although this approach greatly increases diversity with augmentation in the three domains, the authors did not consider that the phase contains domain-invariant features in an image.

§ METHOD

We propose VIPAug as a data augmentation method that integrates changes in the amplitudes and finite variations in the phase spectrum. The phase variations apply different intensities of variation to the vital and non-vital phases according to their robustness weights. First, we propose a method to distinguish the vital and non-vital phases using the magnitude of the amplitudes. VIPAug contains two types of phase augmentation: one utilizes Gaussian distributions, and the other employs fractal images. VIPAug encourages the model to depend on the phases over the amplitudes, specifically on the vital phases. Owing to this dependence on the vital phases, the model achieves robustness against fluctuations in both amplitudes and phases. The entire VIPAug process is shown in Figure <ref>, and a sketch of the Gaussian branch of the pipeline is given below.
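The following NumPy sketch is our own illustration of that branch, not the released implementation: the function names are assumptions, the 2×2×1 filter size and the variances are taken from the CIFAR-10 settings reported in the experiments, and the fractal-replacement branch and the amplitude swap of APR-SP are omitted for brevity. It mirrors the steps detailed in the next subsections: a 3D FFT, vital-coordinate detection by an argmax filter on the amplitudes, phase jitter whose variance depends on vitality, and reconstruction by the inverse FFT.

```python
import numpy as np

def vital_mask(amplitude, s=2):
    """Mark, in each s x s x 1 block, the coordinate of maximal amplitude."""
    H, W, C = amplitude.shape
    mask = np.zeros_like(amplitude, dtype=bool)
    for c in range(C):
        for i in range(0, H - H % s, s):
            for j in range(0, W - W % s, s):
                block = amplitude[i:i + s, j:j + s, c]
                di, dj = np.unravel_index(np.argmax(block), block.shape)
                mask[i + di, j + dj, c] = True
    return mask

def vipaug_gaussian(img, sigma_vital=0.001, sigma_nonvital=0.014, s=2):
    """Gaussian branch: jitter phases by vitality, keep the amplitudes."""
    F = np.fft.fftn(img, axes=(0, 1, 2))      # 3D DFT over H, W, C
    amp, phase = np.abs(F), np.angle(F)
    vital = vital_mask(amp, s)
    sigma = np.where(vital, sigma_vital, sigma_nonvital)
    phase = phase + np.random.normal(0.0, 1.0, phase.shape) * sigma
    F_aug = amp * np.exp(1j * phase)
    # .real discards the small imaginary residue caused by the jitter
    # breaking the conjugate symmetry of the spectrum.
    out = np.fft.ifftn(F_aug, axes=(0, 1, 2)).real
    return np.clip(out, 0.0, 1.0)

img = np.random.rand(32, 32, 3)               # stand-in for a CIFAR image
aug = vipaug_gaussian(img)
```

The fractal branch (VIPAug-F, described below) would instead replace the phase at the non-vital coordinates with np.angle of a fractal image's FFT rather than jittering it.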
§.§ Detection of vital phaseThe conventional approach uses 2D discrete Fourier transform (DFT) for each channel of an RGB image to obtain amplitudes and phases. Unlike 2D DFT, 3D DFT can be used to acquire amplitudes and phases that include features between each channel. Leveraging these amplitude and phase spectrums can improve the accuracy on clean and corrupted data. With image's height H, width W, channel C, coordinates of image's spatial domain (x,y,z) and frequency domain (u,v,w), the 3D DFT equation is represented as follows: F(u, v, w)=∑_x=0^H-1∑_y=0^W-1∑_z=0^C-1 f e^-j 2 π(x/H u+y/W v+z/C w),where the input image is represented by f=f(x, y, z). We can derive the image's amplitudes A(u,v,w) and phases P(u,v,w):F(u, v, w) =|F(u, v, w)| e^j ·arctanI(u, v, w)/R(u, v, w) =A(u, v, w) e^j P(u, v, w), where I(u,v,w) and R(u,v,w) represent the imaginary and real parts of the DFT result. The relation between the image f and its corresponding amplitudes A(u, v, w) and phases P(u,v,w) is described using inverse discrete Fourier transform (iDFT):f=1/H W C∑_u=0^H-1∑_v=0^W-1∑_w=0^C-1 A e^j{2 π(u/H x+v/W y+w/C z)+P} ,where A=A(u,v,w) and P=P(u,v,w). The image f can be represented as a linear combination of complex exponential terms. The amplitude of the exponential term is proportional to the number of object features. Object features are semantically preserved across domains. Therefore, the phase containing more domain-invariant features has a larger amplitude. We hypothesize that the model should depend more on phases at larger amplitudes than relatively lower ones. The vital phase coordinates (u_vital,v_vital,w_vital) are determined by applying an S × S × 1 argmax filter to the amplitudes of each region as shown in Figure <ref>. The filter encompasses all regions without any overlap. For sets 𝒰_vital, 𝒱_vital, and 𝒲_vital consisting of the elements u_vital, v_vital, and w_vital, respectively, we get vital phase coordinate set 𝒞_vital, where 𝒞_vital={(u, v, w) | u ∈𝒰_vital , v ∈𝒱_vital , w ∈𝒲_vital }. We denote the vital phases P_vital (u, v, w) and non-vital phases P_nonvital (u, v, w) as follows:P(u,v,w)=P_vital(u,v,w)if(u,v,w)∈𝒞_vitalP_nonvital(u,v,w)otherwise.P_vital (u,v,w) andP_nonvital (u,v,w) are classified based on robustness-related weights within a specified frequency range. The filter with a specific frequency range can prevent an increase in phase feature loss in that frequency range by not applying the filter to the entire phases at once. §.§ Vital phase augmentationTo ensure that the model depends on the vital phases, we apply weak variations to the vital phases, whereas strong variations are applied to the non-vital phases. Excessive variations impede the model from learning domain-invariant features from the phase spectrum. There are two types of phase augmentation: vital phase augmentation using a Gaussian distribution (VIPAug-G) and using fractal phases (VIPAug-F)§.§.§ VIPAug-G.VIPAug-G involves random sampling from a zero-mean Gaussian distribution and adding the obtained values to the phases. A Gaussian distribution exhibits high probability density around the mean and low probability density away from the mean. A Gaussian distribution effectively introduces finite variations to the phases to strengthen the model's dependency on the vital phases. 
The variations, drawn from Gaussian distributions 𝒩 with different variances, are applied according to the corresponding weights:

P_aug^gauss(u, v, w) = P(u, v, w) + V(u,v,w),

where the random variable V(u,v,w) ∼ 𝒩(0, σ_vital^2) if (u,v,w) ∈ 𝒞_vital and V(u,v,w) ∼ 𝒩(0, σ_nonvital^2) otherwise, with σ_vital^2 ≪ σ_nonvital^2. Since -π ≤ P_vital ≤ π and -π ≤ P_nonvital ≤ π, the variation should be correspondingly small for the narrow range of the vital and non-vital phases. In contrast to the pixel-level perturbation caused by Gaussian noise, VIPAug-G introduces variations to the phases of the complex exponential functions that compose an image by linear combination.

§.§.§ VIPAug-F.

VIPAug-F preserves the vital phases and entirely substitutes the non-vital phases with fractal phases. This method induces larger variations than VIPAug-G to enhance robustness against more significant fluctuations in phases. The replacement images should be from another domain, and that domain should have different classes compared with the source domain. Replacing the original phases with those of other images from the same source domain prevents the model from depending on the phases and diminishes the model's capability to learn domain-invariant features from the phases. Hence, we use the phases of a fractal image to enhance the structural complexity <cit.> of the image. Fractal images are randomly chosen from a pool of 14,200 images. The non-vital phases are replaced with the fractal phases:

P_aug^frac(u,v,w) = P(u,v,w) if (u,v,w) ∈ 𝒞_vital; P_fractal(u,v,w) otherwise,

where P_fractal(u,v,w) denotes the phases of the fractal image. VIPAug-F is designed to be robust against stronger phase fluctuations. However, completely replacing the non-vital phases may result in a substantial loss of image features. To retain original image features in the non-vital phases, we apply VIPAug-F randomly at each iteration. Additionally, optional modifications may be necessary according to the training dataset. Because more image features fall into the low-frequency region <cit.>, the non-vital phase with the highest weight is retained in the low-frequency region. This is because the non-vital phases also have different relative robustness weights depending on the magnitude of the amplitudes.

§.§.§ VIPAug.

VIPAug combines amplitude augmentation and the two types of phase augmentation. Denoting VIPAug-G by the function g(·) and VIPAug-F by the function h(·), we obtain

P_aug = (g ∘ t ∘ h)(P),

where ∘ denotes function composition and t(·) is the phase change induced by the pixel-wise augmentations from AutoAugment. The augmented amplitude A_aug is obtained by APR-SP. We can reconstruct the augmented image f_aug through the iDFT with P_aug and A_aug.

§ EXPERIMENT

§.§.§ Datasets.

We experimentally evaluated the performance of VIPAug on the most widely used CIFAR-10, CIFAR-100, ImageNet-100, and ImageNet datasets. CIFAR-10 and CIFAR-100 comprise 50,000 training images and 10,000 testing images, where each image is a 32×32 color image, with 10 classes and 100 classes, respectively. ImageNet consists of 1.2 million images and 1,000 classes. ImageNet-100 consists of 100 randomly selected classes of ImageNet; its training and test datasets contain 1,300 and 50 images per class, respectively. We used 14,200 fractal images from collections on DeviantArt to train the model. To measure the domain generalization performance, we used the corrupted datasets CIFAR-10-C, CIFAR-100-C, ImageNet-100-C, and ImageNet-C <cit.>, which contain 15 types of corruption, including noise, blur, weather, and digital corruption.
§ EXPERIMENT

§.§.§ Datasets.

We experimentally evaluated the performance of VIPAug on the widely used CIFAR-10, CIFAR-100, ImageNet-100, and ImageNet datasets. CIFAR-10 and CIFAR-100 comprise 50,000 training images and 10,000 test images; each image is a 32×32 color image, with 10 and 100 classes, respectively. ImageNet consists of 1.2 million images and 1,000 classes. ImageNet-100 consists of 100 randomly selected classes of ImageNet; its training and test sets contain 1,300 and 50 images per class, respectively. We used 14,200 fractal images from collections on DeviantArt to train the model. To measure domain generalization performance, we used the corrupted datasets CIFAR-10-C, CIFAR-100-C, ImageNet-100-C, and ImageNet-C <cit.>, which contain 15 types of corruption, including noise, blur, weather, and digital corruption. Each type is provided at five levels of severity.

§.§.§ Metrics.

We evaluated the domain generalization performance of the proposed method by measuring its accuracy on clean images and its classification error rates on corrupted images. We also used the mean corruption error (mCE) on ImageNet-100-C and ImageNet-C, which is the classification error rate normalized by that of AlexNet <cit.>. The corrupted test data has five severity levels 1 ≤ s ≤ 5, and the corruption error for each type of corruption was calculated as follows:

CE_Corruption^Network = ∑_s=1^5 E_s,Corruption^Network / ∑_s=1^5 E_s,Corruption^AlexNet.

We then calculate the mCE by averaging CE_Corruption^Network over the corruption types (a small computational sketch is given at the end of this subsection).

§.§ CIFAR-10 and CIFAR-100

§.§.§ Training setup.

We used a ResNet-18 <cit.> architecture as the baseline model and trained all methods for 250 epochs; the detailed training setup can be seen in the supplementary material. We used the 2 × 2 × 1 argmax filter, and set σ_vital = 0.001 and σ_nonvital = 0.014 on CIFAR-10, and σ_vital = 0.005 and σ_nonvital = 0.012 on CIFAR-100. VIPAug-G uses small variances so as to introduce only small variations to the phase; more details can be found in the supplementary material. We also applied the modification to VIPAug-F on CIFAR-10 by setting the low-frequency region to 4/9 of the total phase, where the non-vital phase with the highest weight is retained in the low-frequency region.

§.§.§ Results.

Table <ref> shows the performance comparison with the state-of-the-art models on CIFAR-10 and CIFAR-100. The baseline model achieved 95.3% accuracy on the clean domain of CIFAR-10, but its performance dropped significantly to an error rate of 25.3% in the corrupted domain, which shows the importance of domain generalization. PRIME improved performance in the corrupted domain with primitive augmentations, but suffered from performance degradation in the clean domain. APR-SP fixes the phases and replaces the amplitudes with the amplitudes of other images; it improved accuracy in both the clean and corrupted domains. However, APR-SP only considers amplitude replacement, making it vulnerable to the phase fluctuations present in common corruptions. VIPAug combines phase variations with amplitude replacement to perform robustly against both phase and amplitude fluctuations. VIPAug achieved an accuracy of 95.8% and a corruption error rate of 8.4%; compared with APR-SP, these values are 0.2%p and 0.3%p better, respectively. VIPAug outperformed all the other methods on both clean and corrupted datasets. VIPAug-G and VIPAug achieved the lowest error rates on corrupted data, while VIPAug-F and VIPAug achieved the highest accuracy on clean data. Compared with APR-SP, VIPAug-G performed better on the corrupted domain, indicating that it is more robust to corruption without sacrificing accuracy on uncorrupted data. VIPAug-F also improved performance on both clean and corrupted data compared with APR-SP, despite its strong variation; these results show that the model still learns domain-invariant features from the phase even under the high variation of VIPAug-F. Overall, VIPAug achieved state-of-the-art performance on CIFAR-10 and CIFAR-100.
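Before turning to ImageNet, here is the small NumPy sketch (ours, with placeholder error matrices) of the mCE computation defined in the Metrics paragraph above.

import numpy as np

def mce(err_net, err_alexnet):
    # err_*: (n_corruptions, 5) classification errors for severities s = 1..5
    ce = err_net.sum(axis=1) / err_alexnet.sum(axis=1)  # CE per corruption type
    return 100.0 * ce.mean()                            # mean corruption error (%)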
§.§ ImageNet-100 and ImageNet

§.§.§ Training setup.

We used a ResNet-18 architecture as the baseline model on ImageNet-100 and a ResNet-50 <cit.> model on ImageNet; the models were trained for 100 epochs, and the detailed training setup can be seen in the supplementary material. We evaluated all methods on ImageNet-100 using the same training settings. For ImageNet, we used pretrained weights for the alternative methods where available; otherwise, we used the performance results reported in AugMix. We used the 2×2×1 argmax filter, and set σ_vital = 0.001 and σ_nonvital = 0.005. We applied the modification to VIPAug-F by setting the low-frequency region to 1/4 of the total phase.

§.§.§ Results.

In Table <ref>, we compare VIPAug with APR-SP and PRIME on ImageNet-100. The variants of VIPAug exhibited greater accuracy on the clean domain than the other methods. VIPAug also improved clean accuracy by 0.1%p and decreased mCE by 2.2%p compared with APR-SP. PRIME reduced clean accuracy compared with the baseline but significantly improved generalization on the corrupted domain; we conjecture that the diverse color transformations of PRIME contributed to this generalization capability. We therefore added the color transformation of PRIME to VIPAug and compared the performance: VIPAug with color decreased the mCE by 4.4%p compared with APR-SP and achieved nearly state-of-the-art performance relative to PRIME. When VIPAug and PRIME were applied together, they reached an overwhelming 55.4% mCE. These results confirm that VIPAug and PRIME can be considered somewhat orthogonal approaches.

In Table <ref>, domain generalization methods are evaluated on ImageNet for each corruption type <cit.>. VIPAug did not excel in every corruption type, but it achieved the best average performance on the corrupted domains while maintaining performance on the clean domain, demonstrating the effectiveness of VIPAug on a large-scale dataset.

§.§ Ablation studies

§.§.§ Robustness weight.

We evaluated our first hypothesis, that the vital phase contains more domain-invariant features than the non-vital phases, by comparing the performance of Reverse VIPAug and VIPAug. Reverse VIPAug treats the vital phase as a non-vital phase and one of the non-vital phases as the vital phase. If the robustness weights of the vital and non-vital phases were the same, the performance should be similar. However, as shown in Table <ref>, Reverse VIPAug performed worse than VIPAug: its clean accuracy was 0.5%p lower, and its mCE was 5.0%p higher, indicating that Reverse VIPAug is not as robust to corruption as VIPAug. In particular, compared with APR-SP, which applies no phase variation, the clean accuracy of Reverse VIPAug was 0.4%p lower and its mCE was 2.8%p higher. This suggests that adding variation to the phase does not always improve performance.

We then evaluated the second hypothesis, that the strength of variation should be proportional to the robustness weight of each phase, by comparing the performance of Uniform VIPAug (a) and Uniform VIPAug (b). Uniform VIPAug (a) set σ_vital = 0.001 and σ_nonvital = 0.001, and randomly replaced both vital and non-vital phases with fractals so as to apply the same strength of variation; Uniform VIPAug (b) set σ_vital = 0.005 and σ_nonvital = 0.005, with the other conditions the same as Uniform VIPAug (a). Both exhibited lower performance on clean and corrupted images compared with VIPAug. Our results suggest that the model becomes more robust to corruption if the strengths of variation are varied according to the robustness weights.

§.§.§ Modification on VIPAug-F.

VIPAug-F introduces strong variations by partially replacing phases with fractal image phases.
To investigate the effects of different modification ranges on the performance of VIPAug-F, we conducted an ablation study on ImageNet-100. Because the robustness weights also differ among the non-vital phases, we modified VIPAug-F so that, within the low-frequency spectrum, the non-vital phase with the second-largest amplitude magnitude is not replaced either; this is because the low-frequency region contains more image features <cit.>. We conducted the experiment in four cases: no modification, modification of 1/4 of the entire phases, modification of 4/9, and modification of the entire phases, as shown in Figure <ref>. We found that VIPAug-F with no modification had the lowest clean accuracy and the highest mCE, whereas VIPAug-F with more modification had higher clean accuracy and lower mCE. These results suggest that, empirically, only finite variation should be applied to the phase in order to improve the performance of VIPAug-F, and that appropriate hyperparameters must be found to balance clean accuracy and mCE.

§.§.§ Other datasets for VIPAug-F.

In Table <ref>, we compare the performance of VIPAug-F when using different datasets instead of fractals: ImageNet, Stylized-ImageNet, and GTA5 <cit.>. The number of images was 14,200, the same as for the fractal images, randomly selected from each dataset. The other datasets show a slight performance decrease on clean and corrupted data compared with the fractal dataset. This is because ImageNet, Stylized-ImageNet, and GTA5 all have classes similar to those of CIFAR-100; if two images of different classes are mixed together, the model cannot rely on the phase well. The fractal images, on the other hand, have no class, and fractals also introduce structural complexity to images <cit.>. CIFAR-100 has a wide variety of classes, making it difficult to find a dataset consisting of completely dissimilar images; therefore, a dataset without classes is more suitable.

§.§.§ Comparison with 2D DFT.

We compared the performance of VIPAug and 2D-DFT VIPAug in Figure <ref>. 2D-DFT VIPAug showed 0.7%p lower clean accuracy and a 2.3%p higher corruption error rate than VIPAug. Extending along the channel axis in the 3D DFT allows vital phases to be identified across all channels when the filter is applied. Considering the large performance difference, the amplitude and phase features between channels have a significant impact on the model's performance.

§.§.§ Adversarial robustness.

We compared the adversarial robustness of models trained with APR and with VIPAug on CIFAR-10. Adversarial training was conducted using the fast gradient sign method (FGSM) <cit.>, and we employed AutoAttack <cit.> to test adversarial robustness. As shown in Table <ref>, VIPAug is effective against AutoAttack while maintaining test accuracy.

§ CONCLUSIONS

We first argue that the robustness weight differs for each phase of the image, and we propose a novel method to classify vital and non-vital phases according to these weights. We make the model more robust to corruption by applying different strengths of variation to the phases according to their weights. Our extensive experimental results showed that our approach achieved SOTA performance on CIFAR-10 and CIFAR-100, and performance close to the SOTA methods on ImageNet-100 and ImageNet. In this study, we presented a new perspective on the phase in domain generalization research.
We suggested a promising direction for subsequent research on how to deal with the image phase.

§ ACKNOWLEDGMENTS

This work was supported in part by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2020-0-00440, Development of Artificial Intelligence Technology that Continuously Improves Itself as the Situation Changes in the Real World). This research was supported in part by the KAIST Convergence Research Institute Operation Program. The students are supported by the BK21 FOUR program of the Ministry of Education (Republic of Korea).
http://arxiv.org/abs/2312.16451v2
{ "authors": [ "Ingyun Lee", "Wooju Lee", "Hyun Myung" ], "categories": [ "cs.CV", "cs.AI" ], "primary_category": "cs.CV", "published": "20231227073517", "title": "Domain Generalization with Vital Phase Augmentation" }
Khai Phan^1, Siddharth Rout^2, Chao-An Lin^3, Rajeev Jaiman^1

^1 Faculty of Applied Science, University of British Columbia, Vancouver, BC, Canada
^2 Department of Earth, Ocean and Atmospheric Sciences, University of British Columbia, Vancouver, BC, Canada
^3 Department of Power Mechanical Engineering, National Tsing Hua University, Hsinchu, Taiwan

Noise generated by vortices is a sensible measure of the strength of wakes generated in a flow field. This paper presents an innovative approach to actively control the wakes and noise generated in flow past a cylinder using deep reinforcement learning (DRL) as a continuously learning active control algorithm, with noise levels recorded via the acoustic level around the wake region. The two primary objectives of this study are to investigate the feasibility and effectiveness of using the acoustic level as a controlling parameter and to employ DRL algorithms to optimize the control of flow dynamics. A hydrophone array is utilised to capture the wake pattern along the flow field downstream of a circular cylinder in the form of acoustic signals. The collected data are used to create a real-time feedback loop for a DRL agent to adjust jet actuators strategically placed on the cylinder's surface. This approach enables the agent to learn and adapt its control strategy based on the observed acoustic feedback, resulting in a closed-loop control system. We demonstrate that the DRL-based flow control strategy can effectively reduce the wake amplitude and the noise generated. The noise level reduces by an appreciable 6.9% and 9.5% in the given setting for the two control configurations; similarly, the drag coefficient reduces by a remarkable 15.9% and 23.8%, respectively. Specifically, the method reduces the oscillation amplitude in drag and noise, and thus offers promising results in terms of reducing flow-induced vibration. The paper highlights the potential for using DRL, jets, and hydrophone arrays in active flow control, opening new avenues for optimizing flow control in practical engineering applications.

Keywords: Reinforcement Learning, Deep Q-Network, Wake Control, Drag Reduction, Neural Networks

§ INTRODUCTION

Flow control has always been one of the most anticipated engineering problems due to its ubiquitous applicability. From the suppression of flow oscillation in open cavities <cit.> to the construction of hybrid rocket motors <cit.>, flow control has served as an indicator of how technology has developed to counter the stochasticity of nature. Throughout the last few decades, the number of paradigms for flow control has kept increasing. Applications span air vehicle systems, including fixed-wing airfoils, turbomachinery, combustion, aeroacoustics, vehicle propulsion integration, and rotorcraft. Flow control methods can be categorized into Active Flow Control (AFC) and Passive Flow Control (PFC), and many innovative applications have been established in various industries<cit.>. PFC methods have a control law that is constant in time and receive no feedback on how well the controller performs; examples include changes to aerodynamic shapes or surface textures. The passive methods include the Gurney flap, vortex generators, bumps, cavities, roughness, small disturbances, bleed, splitter plates, polymers, and biomimetic techniques<cit.>.
Some examples are leading-edge serrations, riblets, corrugated airfoils, and lubricated skins. Most of these are widely implemented on aircraft to delay flow separation and increase the lift-to-drag ratio. Winglets are now commonly used to weaken tip-vortex formation and thereby reduce drag. However, these control strategies are limited because the control cannot be adjusted in time based on feedback or requirements; a passive device may even act adversely in off-design conditions. AFC offers a way out, as it can take in feedback from the flow state and actuate the controller intelligently. The active methods include oscillation and flow perturbation, acoustic excitation, jets, synthetic jets, plasma actuators, and the Lorentz force. Many interesting AFC strategies have been developed in the past decades, for example, the installation of synthetic jets to change the vortex shedding pattern<cit.>, the utilization of wavelength actuators to attenuate turbulence<cit.>, and studies of the effects of acoustic excitation on vortex shedding <cit.>. Among these numerous methods, the application of blowing-suction velocity jets stands out as one of the most practical and widely recognized, evidenced by NASA's experiment on the Boeing 757 with jet actuators incorporated in the vertical stabilizer to reduce drag and improve the overall performance of the plane <cit.>.

Why specifically control flow past a cylinder? Flow-induced forces play a determinant role in the life and safety of structures. Oscillations in the flow cause fatigue, aggravate defects, induce aeroelastic flutter, and decrease the factor of safety of structures. The collapse of the famous Tacoma Narrows Bridge is a well-known case of structural failure from similar causes<cit.>. Tall buildings like Taipei 101 and Burj Khalifa have to be designed to withstand fast winds<cit.>. Passive techniques developed in the past few decades remain very promising due to their ease of utilization in industry<cit.>, and AFC of flow past a cylinder is an excellent toy problem for demonstrating concepts. Oppositely placed suction and blowing around a cylinder became a popular active control strategy in the 2000s<cit.>. To improve control quality, nonlinear control algorithms<cit.> and eigensystem-realization-based reduced-order models for the suppression of wakes<cit.> are popular. However, deterministic control algorithms such as proportional–integral–derivative (PID) controllers often require an approximation of the state space, and calculating the transfer function to actuate AFCs is expensive yet inaccurate and non-generalizable, as the transfer function is case-dependent. Hence, data-driven, model-free methods such as reinforcement learning (RL) for AFC are well appreciated, as they are generalizable.

In the past few years, there has been a surge in deep reinforcement learning (DRL) based flow control techniques<cit.>. This work is hence based on a DRL algorithm, so that it can later be extended to realistic cases like flow past marine vehicles. A few recent attempts to utilize DRL in AFC include controlling two synthetic jets of blowing/suction <cit.> and implementing adjoint-based partial differential equation augmentation to DRL to solve the flow simulation more efficiently<cit.>; both showed great results in controlling wakes. In order to reduce vortex shedding, the two aforementioned papers both tried to decrease the drag coefficient in the simulation. However, no work is registered in which an acoustics-based flow control model is used to reduce wake formation and control the flow dynamics.
Deep Q-Network (DQN) is a branch of DRL that involves calculating the Q-values of each step the model takes and, much like other ML algorithms, learning to maximize the reward returned by the environment. DQN functions well in more abstract tasks and is therefore widely used in robotics, where it has achieved great success. For instance, Fernandez-Fernandez et al. study the application of DQN to human-like sketching performed by robots<cit.>. In the context of control and optimization in fluid dynamics, the use of DQN for morphing airfoils and shape optimization is also evident<cit.>. Despite its level of sophistication, the model is not very widely used in AFC modelling. In addition, manipulation of properties other than the computed drag is not common in the overall scientific conversation on AFC. According to Klapwijk et al., turbulence in fluid flows is the source of sound generation in the system; the article explores noise levels as the turbulence in the flow is increased<cit.>. Interestingly, it also notes that the noise generation mechanism is difficult to understand. In this work, it is shown that by controlling vortex-generated noise using deep Q-learning, drag and wake amplitudes can be controlled.

In the nineteenth century, wakes were identified as the major source of noise in flow past an object; therein lies the concept behind this work, and it is theoretically supported. Strouhal in 1878 and Kohlrausch in 1881 independently observed a faint sound originating from vortices, which the latter described as 'reibungstone' <cit.>. Sir James Lighthill, in the 1950s, established the theoretical connection between fluid flow and acoustics, deriving the wave equation for acoustics from the conservation laws in a theory called Lighthill's Acoustic Analogy <cit.>. Lighthill considered a thought experiment with a patch of turbulent flowing fluid surrounded by a large domain of stationary fluid: the turbulent flow produces noise, which is transmitted to the surrounding fluid at rest. By analysing and comparing the terms in the conservation equations for the stationary fluid, the resulting equation can be written as a forced bidirectional wave equation, and it becomes clear that the forcing, or source, term in the wave equation is what generates noise in a flow. The derived wave equation, in Einstein notation, is

∂^2ρ/∂ t^2 - c_o^2∇^2ρ = ∂^2 T_ij/∂ x_i∂ x_j,

T_ij = ρ v_i v_j - σ_ij + (p - c_o^2ρ)δ_ij,

where T is called Lighthill's turbulence stress tensor and has three terms, i.e., three sources of noise generation: ρ v_i v_j is the convection of momentum fluctuation, σ is the viscous stress tensor, and (p - c_o^2ρ)δ_ij is the difference between the exact pressure p and the approximated thermodynamic pressure c_o^2ρ. Here ρ is the density, c_o is the speed of sound, t is the time dimension, x is the spatial dimension, and v is the velocity. This equation quantifies the sound sources in a flowing fluid, accounting for thermodynamic jumps, turbulent fluctuations, and viscous dissipation. If the stress tensor is non-zero, sound is produced; once wakes originate, vortex stretching comes into action, and even a laminar vortex street produces sound. This sound is often considered tonal, while broadband noise is generated in the case of turbulent flows. Various further works have demonstrated this <cit.>. In short, wakes make noise, and the louder the noise in a flow, the stronger the wake vorticity.
Though this logic is valid, it has not been used for control or analysis; instead, flow states such as the pressure and velocity fields are taken as direct measurements, and sometimes the vorticity field is used as a direct measure of the rotational energy in a flow. In this research, we aim to minimize wake formation in flow past a stationary circular cylinder, and hence to lower a specially calculated effective sound pressure level (SPL) created by the vortices, using the flow-generated sound itself. Since vorticity is the source of the generated noise, the sound is a natural and convenient measure for such flow control problems. Furthermore, we explore the effect of lowering the SPL on the oscillating drag experienced by the cylinder. As for the active control algorithm, DQN-based reinforcement learning is used to control the blowing-suction of two synthetic jets on opposite ends of the cylinder, perpendicular to the flow.

This paper is organized as follows. Sections 2, 3, and 4 are dedicated, in order, to the DQN-based control algorithm, a detailed description of the simulation model setup, and the jets' actuation. Section 5 introduces the SPL formulation, and Section 6 discusses the control strategies with experiments and results. Section 7 concludes the work.

§ DEEP Q-LEARNING

Q-Learning, constructed on the Markov decision process, in which the quality of an action at a particular state is learned from the reward due to that action, has been a very popular early reinforcement learning algorithm<cit.>. Conceptually, the action at a particular state is independent of the historical states, following the Markov property; the rewards, however, are learned from the cumulative score of rewards over an episode of the control process. Convergence of the optimal control problem using the Bellman equation under stochastic updates was proved soon after<cit.>. The limitation of Q-Learning is the finite nature of the map from states to best actions, called the Q-table. In Deep Q-Learning, a deep neural network, as an excellent map, replaces the Q-table and is called a Deep Q-Network (DQN). It is a breakthrough reinforcement learning algorithm, having matched human-level control on Atari console games<cit.>. DQN allows the mapping to represent conditional non-linear control algorithms; a second benefit is the possibility of training in an infinite, continuous environment state space.

The DQN agent learns to estimate and optimize the Q-values, which represent the expected rewards of an action taken in a particular state. This process is carried out by the neural network, which examines all possible actions and outputs the corresponding Q-values. The Bellman equation is then used to bridge the gap between predicted Q-values and target Q-values:

𝐕(𝐬) = max_𝐚(𝐑(𝐬,𝐚) + γ𝐕(𝐬')),

where 𝐕 is the Q-value; 𝐑 is the reward of action 𝐚 in state 𝐬; γ is the discount factor, representing the relative importance of immediate and future rewards; and 𝐬' is the following state. The algorithm can have a deterministic policy or a stochastic policy: a deterministic policy maps each state to a specific action, while a stochastic policy operates upon a probability distribution over the actions. The standard DQN algorithm is used as the reinforcement learning framework; however, certain modifications are made to make the DQN more compatible with the fluid mechanics nature of the project. A minimal sketch of a single DQN update using the Bellman target above is given below.
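The following PyTorch sketch (our own illustration, not the authors' code) shows one DQN training step using the Bellman target; the network width of 50, the 4-dimensional state, the 81 actions, and the Adam optimizer follow the setup described in the surrounding sections, while the discount factor and learning rate are assumed placeholders.

import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(4, 50), nn.ReLU(),
                      nn.Linear(50, 50), nn.ReLU(),
                      nn.Linear(50, 81))   # 4 state inputs, 81 jet actions (Stage 1)
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.99  # assumed discount factor

def dqn_step(s, a, r, s_next):
    # s, s_next: (B, 4) states; a: (B,) action indices; r: (B,) rewards
    q_pred = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():  # Bellman target: R(s, a) + gamma * max_a' Q(s', a')
        q_target = r + gamma * q_net(s_next).max(dim=1).values
    loss = nn.functional.mse_loss(q_pred, q_target)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()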
Most DQN algorithms are inherently well suited to deterministic problems, owing to the limited, discrete state spaces of episodic environments. The fluid flow field, however, is a continuous state space, and the algorithm is exposed to stochasticity through the random exploration strategy, the initialization, and the sampling of mini-batches while interacting with the stochastic environment of the flow. This can cause great instabilities and might prevent the algorithm from converging to a desired state; therefore, we divide the simulation into a stochastic exploration stage and a deterministic testing stage. Regarding the construction of the neural network of the DQN agent, four fully connected layers are placed between the input and output layers. The input layer takes in the calculated SPL, and the output layer gives out the two jet velocities. Each layer consists of 50 neural nodes, which yields 7902 learnable parameters. ReLU is the nonlinear activation function used at each node; it essentially acts as a switch, making the DQN a multilayer nonlinear switch. The Adam optimizer<cit.> from the PyTorch library is applied to maximize the reward returned in each episode. More information about the utilization of the DQN agent in the different tasks is provided in the following section. The purpose of this task is to minimize the sound pressure levels (SPLs) created by the wakes past the cylinder, and several specific setup details are required to obtain the SPL reduction.

§ PROBLEM INTRODUCTION AND SETUP

The computational setup is built on DOLFINx, a high-performance solver of partial differential equations written in C++ for backend integration with legacy FEniCSx (version 2019.1.0), with Python as the interface<cit.>. The standard benchmark case "Flow past a cylinder (DFG 2D-3 benchmark)" is used as the simulation framework on which this research is developed, based on <cit.>. The setup comprises a horizontal rectangle with a height of 0.41 m and a length of 2.2 m, with the bottom-left corner of the rectangle at coordinate (0, 0). The obstacle is a circular-based cylinder with a radius of 0.05 m centered at coordinate (0.2, 0.2). As the flow develops its oscillation, though laminar, the obstacle experiences a drag force, whose coefficient C_D can be determined using the formula

C_D = 2/(ρ U_mean^2 L) ∫_∂Ω_S{ρν𝐧·∇ u_t_s(t) n_y - p(t) n_x} ds,

where u_t_s is the tangential velocity component at the interface of the obstacle ∂Ω_S, defined as

u_t_s = 𝐮·(n_y, -n_x),

𝐧 is the unit normal vector at the surface, n_x and n_y are the x- and y-components of the normal vector, U_mean the average inflow velocity, ρ the fluid density, ν the kinematic viscosity, and L the characteristic length of the cylinder, which is the diameter in this case. Uniformly separated measurements around the cylinder can be used to determine the drag coefficient through a summation of discrete measurements as an approximate integration. For further details about the dimensions of the setup, refer to Figure 2. The inflow is actuated from the left wall (near the cylinder) with a parabolic profile according to the following velocity formula:

u(y) = 4U_y y(0.41-y)/0.41^2,

where y is the y-coordinate and U_y, the peak inflow velocity, is 1.5 in this scenario; this steady profile is used instead of the sinusoidal profile of the test problem provided by Turek et al.<cit.>. Furthermore, the outflow is the rightmost wall, and the upper, lower, and obstacle walls all have a no-slip condition (u=0), as presented in Figure 2. A minimal sketch of the inflow profile and the discrete drag evaluation is given below.
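The following NumPy sketch (ours, not the authors' code) evaluates the parabolic inflow profile and approximates the drag coefficient from discrete surface samples; the fluid properties and the surface fields passed in are assumptions, to be supplied by the flow solver.

import numpy as np

H_CH, U_MAX, L = 0.41, 1.5, 0.1   # channel height, peak inflow, cylinder diameter
RHO, NU = 1.0, 1e-3               # assumed fluid density and kinematic viscosity

def inflow(y):
    # parabolic inflow profile on the left wall
    return 4.0 * U_MAX * y * (H_CH - y) / H_CH**2

def drag_coefficient(dudn_t, p, nx, ny, ds):
    # dudn_t: normal gradient of the tangential velocity at surface sample points
    # p: pressure; (nx, ny): unit normal components; ds: arc-length weights
    u_mean = 2.0 * U_MAX / 3.0    # mean of the parabolic inflow profile
    integrand = RHO * NU * dudn_t * ny - p * nx
    return 2.0 * np.sum(integrand * ds) / (RHO * u_mean**2 * L)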
§ JETS CONFIGURATION

Two jets with blowing and suction control are used to manipulate the flow. The first jet (referred to as Jet 1) is at the top of the circular base of the cylinder (at coordinate (0.2, 0.25)), and the second jet (Jet 2) is at the bottom (at coordinate (0.2, 0.15)). The width of the jets is small, at 0.25 percent of the diameter. The jets can perform blowing and suction independently, meaning that blowing and suction can happen simultaneously. A reinforcement learning algorithm is applied to control the blowing and suction of the jets; more information about the execution of the simulation is provided in the next section.

§ FEEDBACK FORMULATION

To measure the SPLs, pressure is recorded from the simulated pressure field using surfaces of closely spaced sensors around the cylinder, 0.05 m away from the container's walls (refer to Figure 3). The sensor surfaces enclose the vortex street created by the flow and can therefore give a more accurate reflection of the varying pressure field along the vortex street; they also help us determine the static pressure level, which is essential for the calculation of the SPLs. We set 2000 sensor points horizontally and 500 sensor points vertically on each side. The pressure at each point at a particular time is then extracted from the pressure field produced by the simulation. The upper and lower sensor surfaces are of primary concern, as they cover the length of the vortex street, which is the source of most of the noise generation. These vortices are born from instabilities in the bottom and top regions of flow separation alternately; hence, distinguishing between the top and bottom horizontal sensor arrays helps in understanding the vortex periodicity. The pressure values of every sensor point at each time step are recorded and passed through a function that converts them to the relative SPL_i for each sensor and the effective SPL of the system, SPL_eff, using the formulae below:

SPL_i = 20 log|p_i-p_avg|_rms/p_avg,

SPL_eff = 10 log∑_i=1^n |p_i-p_avg|^2_rms/p^2_avg,

where p_i is the pressure value at each sensor and p_avg is the average pressure over all sensor points at that time step, averaged over the previous 2000 time steps; this window ensures that a sufficient number of pressure oscillations is captured to approximate the static pressure in an environment that is dynamic in nature. The SPLs at the multiple sensors are then passed through another function to calculate the root-mean-square value, which is plotted to observe the behaviour of the overall noise level in the environment (a minimal computational sketch is provided below, after the control-strategy overview).

§ CONTROL STRATEGIES

The simulation runs for 20 seconds, from t = 0 to t = 20, to observe the full behaviour of the SPL, as well as to allow the DQN algorithm ample time to learn and optimize. Each second has 500 time steps, resulting in 10,000 time steps to be solved overall. Regarding the recording process, we start by allowing the flow to develop and form the vortex street for the first 6 seconds; then, when the oscillation stabilizes, the jets are allowed to intervene at t = 6. Due to computational limitations, the jet velocity values change every 50 time steps, which corresponds to a frequency of 10 Hz: a drastically more rapid rate may over-influence the flow and alter it completely, and the computational cost of the simulation also plays a part in this choice of intervention rate.
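As referenced in the feedback formulation above, here is a minimal NumPy sketch (ours) of the SPL computation; a base-10 logarithm is assumed.

import numpy as np

def spl_feedback(p_hist):
    # p_hist: (T, n) pressures for n sensors over the last T (= 2000) time steps
    p_avg = p_hist.mean()                                    # static pressure estimate
    p_rms = np.sqrt(np.mean((p_hist - p_avg)**2, axis=0))    # per-sensor rms fluctuation
    spl_i = 20.0 * np.log10(p_rms / p_avg)                   # relative SPL per sensor
    spl_eff = 10.0 * np.log10(np.sum(p_rms**2) / p_avg**2)   # effective SPL of the system
    return spl_i, spl_eff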
§.§ Stage 1: Explore the optimal range of jet values

§.§.§ Set-up

It is challenging to determine what velocity value of the jets is able to influence the flow as desired. Therefore, a build-up strategy is used for the DQN to explore values that can lower the SPLs. The velocity values of the jets are generated based on the values of the previous time step. The increments and decrements include ±0.01, ±0.05, ±0.1, ±0.5, and 0, so there are 81 combinations of actions (nine options per jet) for the DQN algorithm to manage. For example, if both jets initially have a value of 0.1, the algorithm has the option to decrease Jet 1 by 0.01, 0.05, 0.1, or 0.5 or keep it constant, and likewise for Jet 2. This results in a large number of actions for the DQN to explore, which can potentially hinder the ability to optimize the reward: more variables must be tuned, and the gradient surface becomes rough and noisy. Although optimization with more variables offers greater capability, fewer controlling variables are typically preferred. The velocity values are kept constant between two consecutive interventions. The input of the DQN is a vector of dimension 1×4: the first two entries are the SPLs of the upper and lower sensor surfaces, and the last two are the velocity values of Jet 1 and Jet 2. In this stage, we follow the model in Figure 2, which allows the simulation to pass the data (reward, action, states) to the DQN at every time step; however, the jet velocity, and therefore the action, is kept constant over every 50 time steps. The reward returned is determined by a function in the simulation. In this task, the aim is to learn to reduce the SPL by approximately 3-5 dB and to converge in the process. A simulation run without any jet intervention yields two SPL oscillations: while the lower SPL peaks at roughly 74.5 dB, the upper SPL's peak is about 0.25 dB lower. Moreover, the range of the SPL is from 71.7 to 74.5 dB (2.8 dB from maximum to minimum); refer to Figure 5 for further information. Meanwhile, the drag coefficient is also measured, given its popularity in wake control, so that this measurement can be used as a means of verification for the noise reduction method. The drag experienced initially is large due to the direction of the flow at the beginning; when the flow stabilizes, the plot suggests an oscillatory behaviour, with a maximum and a minimum at approximately 3.185 and 3.123, respectively. The reward function is set as below: a penalty of -1 for every 0.4 dB below 66 dB.

§.§.§ Results

After applying the jet intervention and running the simulation to completion, the result is plotted in Figure 6. From t = 4 to t = 6 on the x-axis, the SPL is stable because there is no intervention yet. From t = 6, however, there are many fluctuations in the SPL due to the exploration of the DQN algorithm. Many large jet values are chosen, which leads to observable discrepancies before the 8-second mark. The overall SPL in this time range is still relatively the same as when the jets are not turned on, but there is a small surge near t = 8, demonstrating that the model has been able to find a jet pattern that can affect the SPL oscillations. From t = 8 to t = 10, the SPL continues to rise to a peak of 75 dB, then gradually falls over time. It is also evident that some interventions can bring the SPL down, preventing it from increasing drastically.
Exploration searches for the optimal control policy by allowing random actions, much like mutation steps in a genetic algorithm, while exploitation increasingly applies actions from the learned policy that reduce the SPL, curbing the influence of the random actions. This causes the SPL to rise initially and then gradually drop<cit.>. The SPL keeps its decreasing tendency until t = 16, with noticeably fewer interventions as the DQN agent slowly enters the exploitation phase. After t = 16, the SPL reaches the desired range of values (which gives the maximum reward of 10). More interventions are observed, since now the jets have to maintain this level instead of lowering it as before; however, fluctuations still occur. Overall, a reduction in the effective SPL is observed.

Looking at the velocity plot in Figure 6b, it can be seen that a velocity with a magnitude of around 1.0 can influence the flow to the extent we desire. Together with results from other trials, one of which is presented in Figure 7, it can be deduced that a jet velocity with a magnitude above 3 is likely to cause simulation failure: the PDE solver breaks down at high jet velocities because the discretization scheme becomes unstable at the high Courant numbers near the jets. Essentially, this is a computational limitation, and it can be tackled by using higher-order numerical schemes or finer discretization. In Figure 7a, although the SPL is at the desired value, the simulation stops just past t = 10 (the 5000 mark on the x-axis); the lower jet value there reaches -3.5, which is not viable for the simulation. This threshold is used to limit the jet speed in Stage 2 for the test cases.

§.§ Stage 2: Testing with definite jet velocity values

§.§.§ Set-up

Based on the results of the previous section, we set up a deterministic DQN algorithm with blowing and suction at three different magnitudes. In this stage, the jet velocity is no longer built up from the previous time step but rather takes values from a distinct, fixed set. Two different test cases are presented, [±1.50, ±2.25, ±2.75] and [±2.00, ±2.50, ±3.00] (referred to as Test case 1 and Test case 2, respectively), to verify the result. The expectation for this stage is similar to that of the previous one, namely lowering the SPL by approximately 3-5 dB. The reward calculation function is kept the same as in the previous stage (as in Table 1), but the reward calculation process is different: while the previous stage fully adopts the Markov model (Figure 1), the strategy for this stage is to accumulate the rewards between two interventions, average them, and then return the average to the DQN. Lastly, only the states at time steps divisible by 50 are returned to the DQN. Both test cases implement this model; refer to Figure 8 for further details.

§.§.§ Results

The resulting SPLs of the upper and lower sensors are displayed in Figure 9 for both cases. Test case 1, using velocity values with lower magnitudes, converges to an SPL of just below 70 dB. Meanwhile, Test case 2 demonstrates a slightly lower SPL than that of Test case 1, and it seems to converge earlier as well. The amplitude of the oscillations is also evidently reduced. In addition, the instantaneous drag coefficient in each case is calculated, and the result is impressive (refer to Figure 10). The system experiences a large drag force at the start due to the direction of the flow, and this soon dissipates to lower values.
The drag coefficient also exhibits an oscillatory behaviour, although the amplitude is small. After large fluctuations in the exploration stage, the drag coefficient converges to a stable value, which is lower than the initial drag and still oscillatory, but with reduced amplitude.

§ CONCLUSION

In this study, the feasibility of using noise as a controlling parameter for flow control is suggested and discussed. We also explored the application of deep reinforcement learning (DRL) for active flow control to mitigate the wake noise generated by flow past a circular cylinder. Our approach involved employing hydrophone arrays, or pressure sensors, to capture acoustic signals and creating a feedback loop for a DRL agent to strategically control jet actuators placed on the cylinder's surface. The agent learned and adapted its control strategy based on the observed acoustic feedback, leading to a closed-loop control system. The results of our investigation demonstrated that DRL-based flow control effectively reduced the wake intensity and the noise generated, and it also showed promising results in terms of reducing drag. It reduces not only the drag and noise but also their oscillations, which can play a crucial role in the reduction of flutter due to flow-induced vibrations in marine oil rigs, aircraft wings, and similar structures by controlling hydrodynamic instabilities.

The study involved two main stages. The first stage aimed to explore the optimal range of jet velocity values and build a strategy for reducing noise; this stage revealed that jet velocities with a magnitude around 1.0 can influence the flow enough to achieve the desired SPL reduction, and it also highlighted the importance of avoiding excessive jet velocities. In the second stage, we conducted tests with fixed jet velocity values to verify the results from the exploration stage. The results showed that DRL-controlled jet actuators successfully achieved a significant reduction in SPL, with Test case 2, using higher jet velocities, demonstrating comparatively better noise reduction and quicker convergence. The SPL without any control has a mean value of approximately 73.5 dB. With Test case 1, flow control brings it down to approximately 69 dB, roughly a 6.9% reduction; similarly, in Test case 2 it reduces to approximately 66.5 dB, a remarkable 9.5% drop. Likewise, the drag coefficient without any control was oscillatory with a mean value of approximately 3.15. With Test case 1, flow control brings the drag coefficient down to approximately 2.65, roughly a remarkable 15.9% reduction; in Test case 2, the coefficient sees a reduction of 23.8%, to a mean of approximately 2.4. The lateral oscillation due to lift forces is also remarkably dampened. Additionally, the study observed that the drag coefficient experienced oscillatory behavior but converged to a stable trend, indicating that the DRL approach effectively controls the flow dynamics.

This research underscores the potential of DRL algorithms, together with jet actuators and sensor arrays, as an add-on in active flow control. The findings open up new avenues for optimizing flow control in practical engineering applications and hold promise for reducing noise and drag and enhancing the performance of various engineering systems. Future work in this area can explore more complex flow scenarios, further refine control strategies, and investigate the application of DRL in other engineering domains.
The study serves as a stepping stone towards the integration of machine learning techniques for enhancing the efficiency and performance of active flow control systems.

§ CREDIT AUTHORSHIP CONTRIBUTION STATEMENT

Khai Phan: Methodology, Software, Investigation, Writing – Original Draft. Siddharth Rout: Conceptualization, Investigation, Methodology, Software, Writing – Original Draft, Writing – Review & Editing. Chao-An Lin: Conceptualization, Resources, Writing – Review & Editing. Rajeev Jaiman: Supervision, Resources, Writing – Review & Editing.

§ ACKNOWLEDGEMENT

The authors wish to thank Mr. Joseph Moster for his technology support. Thanks are also due to the Department of Mathematics at The University of British Columbia for granting access to infrastructural resources.

§ DECLARATION OF COMPETING INTEREST

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

§ DATA AVAILABILITY

The authors declare that the data and code supporting the findings of this study are available within the paper and the GitHub repository: https://github.com/Siddharth-Rout/FlowControlDRL
http://arxiv.org/abs/2312.16376v1
{ "authors": [ "Khai Phan", "Siddharth Rout", "Chao-An Lin", "Rajeev Jaiman" ], "categories": [ "physics.flu-dyn", "physics.app-ph", "physics.comp-ph" ], "primary_category": "physics.flu-dyn", "published": "20231227015804", "title": "Acoustics-based Active Control of Unsteady Flow Dynamics using Reinforcement Learning Driven Synthetic Jets" }
Fine-tuning large language models (LLMs) with domain-specific instructions has emerged as an effective method to enhance their domain-specific understanding. Yet, there is limited work that examines the core characteristics acquired during this process. In this study, we benchmark the fundamental characteristics learned by contact-center (CC) specific instruction fine-tuned LLMs against out-of-the-box (OOB) LLMs via probing tasks encompassing conversational, channel, and automatic speech recognition (ASR) properties. We explore different LLM architectures (Flan-T5 and Llama), sizes (3B, 7B, 11B, 13B), and fine-tuning paradigms (full fine-tuning vs PEFT). Our findings reveal the remarkable effectiveness of CC-LLMs on the in-domain downstream tasks, with an improvement in response acceptability of over 48% compared to OOB-LLMs. Additionally, we compare the performance of OOB-LLMs and CC-LLMs on the widely used SentEval dataset, and assess their capabilities in terms of surface, syntactic, and semantic information through probing tasks. Intriguingly, we note a relatively consistent performance of probing classifiers on the set of probing tasks. Our observations indicate that CC-LLMs, while outperforming their out-of-the-box counterparts, exhibit a tendency to rely less on encoding surface, syntactic, and semantic properties, highlighting the intricate interplay between domain-specific adaptation and probing task performance, and opening up opportunities to explore the behavior of fine-tuned language models in specialized contexts.

§ INTRODUCTION

Language models (LMs) have made significant strides in recent years, with their ability to generate coherent and contextually relevant text garnering attention from researchers and practitioners alike <cit.>. These models, trained on massive amounts of data, have demonstrated their proficiency across a range of natural language processing tasks, including machine translation, sentiment analysis, and text summarization. Researchers have also explored the potential of fine-tuning these general-purpose LMs on domain-specific data, leading to improved performance in areas such as biomedical research <cit.>, coding <cit.>, and finance <cit.>. However, one domain that has received relatively little attention is the contact center industry.

Contact centers play a crucial role in customer service and support for various businesses. They handle a wide range of customer queries, from technical support to billing inquiries, and the effectiveness of these interactions directly impacts customer satisfaction and, ultimately, business success. Integrating LMs into contact center operations has the potential to revolutionize the industry. For example, contact center agents can leverage LMs to access a vast array of information and generate personalized, contextually appropriate responses in real time. However, conversations in contact centers often involve domain-specific knowledge, jargon, and abbreviations, posing challenges for traditional LMs to comprehend accurately. Moreover, the unique conversational dynamics and customer service etiquette in contact centers further complicate the task of capturing domain-specific nuances effectively.
Instruction fine-tuning <cit.> has emerged as one of the promising approaches to develop domain-specific LMs generalizable to numerous tasks. However, the effectiveness and applicability of this technique in the contact center domain have not been thoroughly investigated. In recent years, parameter-efficient methods have gained popularity for fine-tuning large language models (LLMs) efficiently with limited computational resources while preserving performance. Among these techniques, Low-Rank Adaptation (LoRA) <cit.> has gained traction due to its advantages in training speed and inference latency, often outperforming full fine-tuning. However, its suitability and effectiveness in the contact center domain remain largely unexplored.

In this paper, we explore the potential of instruction fine-tuning in improving language model performance in the contact-center domain and seek to address the following research questions:

* RQ1: How effective is instruction fine-tuning in improving the performance of LLMs on downstream tasks in the contact-center domain?
* RQ2: What specific properties unique to contact-center (CC) interactions are acquired by LLMs fine-tuned on CC instruction sets, in contrast to out-of-the-box (OOB) models?
* RQ3: How does the choice of model architecture and size shape the performance of LLMs on probing tasks?
* RQ4: How do the fundamental characteristics learned when fine-tuning LLMs with parameter-efficient methods differ from those learned with traditional full fine-tuning methods?
* RQ5: Once fine-tuned on domain-specific instructions, which general-purpose fundamental properties do LLMs retain?

To address these questions, we conduct a comprehensive analysis of CC-LLMs, examining the linguistic patterns, domain-specific knowledge, and conversational dynamics that influence the LLMs' performance. By shedding light on the unique properties of CC interactions and investigating the potential of instruction fine-tuning, this research aims to contribute to the advancement of language models in specialized domains. Ultimately, our findings can pave the way for more effective and efficient customer interactions in contact centers, benefiting both service providers and customers alike.

§ TRAINING CONTACT-CENTER INSTRUCTION-TUNED LM

Numerous closed-source <cit.> and open-source <cit.> general-purpose LLMs have demonstrated abilities to address a diverse range of tasks in natural language processing. However, specialised models like CodeT5 <cit.>, StarCoder <cit.>, Med-PaLM <cit.>, BioGPT <cit.>, Galactica <cit.>, and BloombergGPT <cit.> emphasize the significance of domain-specific models in achieving exceptional performance within fields like coding, bio-medicine, science, and finance. These models excel at producing high-quality outputs and tackling domain-specific challenges, illustrating the need for tailored LMs in diverse domains. Inspired by the above works, we leverage an in-house dataset[We cannot release the dataset due to proprietary reasons.] of conversational interactions between agents and customers to train a CC-specific LLM (CC-LLM) that models the properties of CC conversations. Due to the spontaneous nature of these conversations, the data is often nuanced with characteristics such as multi-party speakers, disfluencies, overtalks, call transfers, etc.
Furthermore, the data is obtained post transcription from an automatic speech recognition (ASR) system, introducing the challenge of dealing with ASR errors such as insertions, deletions, and substitutions, which in turn establishes the need for a model robust to these conversational properties. In this work, we adopt the approach of instruction fine-tuning <cit.>, i.e., fine-tuning the language model on a mixture of tasks expressed via natural language instructions.

The process of fine-tuning an LM for contact-center applications involves three main components: a contact-center dataset, instructions specific to contact-center use-cases, and a language model. To curate the contact-center dataset, we collect ASR transcripts of English conversations between agents and customers from various sectors, such as e-commerce, ed-tech, logistics, etc. We observe an average word-error-rate (WER) of 14.3 on these transcripts. The next step is to gather the instructions and their corresponding responses from the collected calls. We employ three processes to obtain these instructions:

* Initially, we utilize our previously annotated data from use-cases such as sentiment detection, intent classification, entity recognition, and question answering. We reformat this data into triplets containing an instruction, input, and output (an illustrative triplet is sketched at the end of this section). The instructions and outputs for these tasks are aggregated through a semi-automatic process involving human intervention. We leverage the human-in-the-loop approach to generate instructions and their variations that can elicit the desired response for the given task.
* Following this, we expand the instructions by employing a paraphrasing process. This allows us to generate multiple styles of the same instructions, thereby increasing the diversity of the instruction set.
* In addition to using the annotated data from the past, we also gather new sets of instructions by instructing human annotators to generate relevant questions that can be asked and answered during a call. Similar to the previous step, we expand these generated instructions using the paraphrasing process. To assist the annotators in generating these tasks, we provide them with a list of insights that we aim to extract from the calls to address various use-cases. Examples of such insights include understanding and tracking customer and agent behaviors, following the steps taken in the call to resolve customer issues, and identifying different objections raised by the customers.

Here are some important statistics on the internally curated contact-center dataset:

* Total corpus size: 110030
* Number of instructions: 2468
* Number of tasks: 59

Some example tasks considered in the dataset include reason for call, call summarization, segmented call summarization, confirmed next steps, Question-Answering (QA), entity extraction, topic segmentation, and text rewriting; refer to Appendix <ref> for definitions of these tasks. Further, we fine-tune OOB-LLMs that are free for commercial use on the curated dataset. Specifically, we obtain the CC-Flan-T5 model by fine-tuning the correspondingly sized OOB-Flan-T5 model, and the CC-Llama model by fine-tuning the correspondingly sized OOB-Llama-Instruct model.
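To make the triplet format concrete, here is an illustrative training record expressed as a Python dictionary; the instruction wording, transcript snippet, and output are entirely hypothetical and are not drawn from the proprietary dataset.

example = {
    "instruction": "Identify the reason for the call from the conversation below.",
    "input": ("Agent: Thank you for calling, how may I help you? "
              "Customer: Hi, my order hasn't arrived yet and I'd like an update."),
    "output": "The customer is calling about a delayed order delivery.",
}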
§ PROBING TASKS

In this section, we delve into the probing tasks employed to uncover the various properties learned by LMs, with a specific focus on the characteristics that are fundamental to effectively understanding the context of CC interactions. Probing tasks tailored to the CC domain provide valuable insights into the capabilities and limitations of LMs in this specific area, as demonstrated in a previous study <cit.>. In their work, the authors propose probing tasks to investigate the conversational, channel, and ASR properties of pre-trained LMs. Given our own work in the contact-center domain, we refer to these probing tasks and utilize the details outlined in that work to construct datasets[We cannot release the dataset due to proprietary reasons.] for a set of classifier-based probing tasks. Additionally, we also probe the LMs on the benchmark probing tasks of the SentEval suite <cit.>, which aim to uncover the linguistic knowledge and underlying properties learned by a model. The SentEval suite consists of probing tasks across the categories of surface information, syntactic information, and semantic information.

§ EXPERIMENT DESIGN

To address the research questions outlined in Section <ref>, we design a series of experiments investigating the impact of model size, architecture, and fine-tuning paradigm, and evaluating the properties learned by language models in the CC domain. Firstly, we compare three types of models: the OOB foundation model, the OOB instruction model, and the CC instruction model. We first compare these models on the CC-specific downstream tasks (RQ1) to understand the quality of the generated responses given the call transcript and an instruction, and then study the differences in performance across these models in terms of their capability to exhibit the learning of fundamental characteristics of CC data (RQ2). Secondly, we delve into the impact of model size and architecture (RQ3): we compare different LLMs to explore how the choice of model architecture and size influences their performance on probing tasks. This investigation is critical in unraveling the intricate relationship between model design choices and the underlying properties learned by LLMs. Thirdly, as fine-tuning is a crucial aspect of training LLMs, we also compare two fine-tuning paradigms: full fine-tuning and LoRA-based parameter-efficient fine-tuning (PEFT). By contrasting these approaches, we aim to uncover how the fundamental characteristics learned by LLMs differ under each method. This investigation is directly tied to RQ4, where we seek to understand the differences between traditional full fine-tuning and PEFT methods. Finally, to evaluate the general-purpose properties represented by surface, syntactic, and semantic information learnt by LLMs, we utilize the SentEval dataset (RQ5). For probing the dataset, we train a one-layer linear MLP classifier, following previous work <cit.>.

§ IMPLEMENTATION DETAILS

In this section, we provide a detailed account of the implementation specifics related to our investigation into LLMs fine-tuned on CC instructions. To initiate the process, we extract representations from the LLMs, harnessing their hidden states to encapsulate the contextual nuances present in the transcripts as well as in the instructions, which are indicative of the tasks the models are expected to perform, as demonstrated in a previous study <cit.>. Our approach differs from that of the authors in that we use a linear probe as opposed to an attentional probe, as explained in more detail later in this section. For encoder-decoder models, we tap into the final encoder layer to obtain representations for each token within the input prompt. We adopt a suitable aggregation method depending on the characteristics of the specific probing task: for single-token probing tasks, we use the representation of the target token, while for other tasks we average the representations of all input tokens. For decoder-only models, on the other hand, we utilize the last hidden layer of the decoder block; the aggregation approach aligns with that of encoder-decoder models for single-token probing tasks but relies on the last token's representation for other tasks. This difference stems from encoder-decoder models being bidirectional, making each token's representation contextual to the entire sequence, whereas decoder models process tokens sequentially from left to right, making each token's representation contextual only to the tokens before it; we therefore take the last token's representation, as it encompasses information from the entire sequence. A minimal sketch of this aggregation is given below.
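The aggregation logic just described can be sketched as follows (our own illustration; the tensor shapes are assumptions):

import torch

def aggregate(hidden, is_decoder_only, target_idx=None):
    # hidden: (seq_len, d) last-layer hidden states for one input prompt
    if target_idx is not None:         # single-token probing task
        return hidden[target_idx]
    if is_decoder_only:                # left-to-right context: use the last token
        return hidden[-1]
    return hidden.mean(dim=0)          # bidirectional encoder: average all tokens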
For single-token probing tasks, we use the representation of the target token. For other tasks, we average the representations of all input tokens. In decoder-only models, on the other hand, we utilize the last hidden layer of the decoder block. The aggregation approach for decoder-only models aligns with that for encoder-decoder models on single-token probing tasks but relies on the last token's representation for other tasks. This difference stems from encoder-decoder models being bidirectional, making each token representation contextual to the entire sequence. In contrast, decoder models process tokens sequentially from left to right, making each token's representation contextual only to the tokens before it. Therefore, we consider the last token's representation, as it encompasses information from the entire sequence. For encoder-decoder models, the embedding dimension spans 512, 1024, 2048, and 4096, while for decoder-only models it is 32001 or 65024. The differing embedding dimensions for the two classes of models stem from differences in model architectures and context lengths employed during pre-training and fine-tuning. We employed a context length of 512 for all models when extracting representations, as the input prompts have a maximum sequence length of 507 tokens across probing tasks. All models receive an input consisting of a prompt, generated from the input dialog, and an instruction that defines the probing task being conducted. Post representation extraction, we employ a multilayer perceptron (MLP) comprising a single hidden layer, utilizing the extracted representations as feature inputs for probing. We adopt a sigmoid and a softmax activation function for binary and multi-class classification, respectively. We perform a hyper-parameter sweep over the number of neurons in the hidden layer ∈{50, 100, 150, 200}, the learning rate ∈{1e−3, 1e−2, 5e−2} and the batch size ∈{4, 8, 16, 32, 64}, and choose the best setting as evaluated on the eval set. Additionally, we employ the Adam optimizer, use a dropout rate of 0.3, incorporate a weight decay of 0.00001, and set the maximum number of epochs to 20. Moreover, all experiments include early stopping and check-pointing of the best model. Our experiments, comprising representation extraction and probe-classifier training, were conducted on an AWS cloud instance, specifically the p4d.24xlarge instance, equipped with eight GPUs, each with 40 GB of memory. The process of extracting representations is computationally intensive, chiefly because of the substantial embedding dimensionality. On average, a single run of the representation-extraction job for decoder-only models of size 13 billion parameters demands 8-10 hours for completion, whereas the corresponding timeframe for encoder-decoder models of size 11 billion parameters is considerably shorter, ranging from 1-2 hours. In contrast, the probing models present a lighter computational load, generally taking around 0.5 hours to complete. Finally, we evaluate the probe models on a held-out test set using the macro F1 score.
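For illustration, the extraction-plus-probing pipeline for an encoder-decoder model can be sketched as follows. This is a minimal sketch: the model name, pooling code and probe hyperparameters shown are one illustrative instance of the sweep described above, not a definitive implementation.

```python
# Minimal sketch: mean-pooled final-encoder-layer representations feeding
# a single-hidden-layer MLP probe. Names and settings are illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base").eval()

@torch.no_grad()
def encode(prompts):
    batch = tok(prompts, padding=True, truncation=True, max_length=512,
                return_tensors="pt")
    hidden = model.get_encoder()(**batch).last_hidden_state   # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()      # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)               # average over tokens

class Probe(torch.nn.Module):
    def __init__(self, dim, hidden=100, classes=2, dropout=0.3):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim, hidden), torch.nn.ReLU(),
            torch.nn.Dropout(dropout), torch.nn.Linear(hidden, classes))
    def forward(self, feats):        # sigmoid/softmax is applied in the loss
        return self.net(feats)

probe = Probe(model.config.d_model)
opt = torch.optim.Adam(probe.parameters(), lr=1e-3, weight_decay=1e-5)
```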
§ RESULTS AND ANALYSIS In this section, we provide a comprehensive analysis of the performance evaluation results, shedding light on the key observation made during our study: the striking contrast in response quality between CC-LLMs and OOB-LLMs.

§.§ RQ1 We perform a qualitative assessment of the responses generated by the CC- and OOB-LLMs by categorizing the responses generated by each of them into one of the following seven classes: Extremely Good, Very Good, Good, Acceptable, Bad, Very Bad, and Extremely Bad. The annotation process involved crafting task-specific guidelines, covering aspects such as consistency, relevance, and fluency of the generated responses. Additionally, we provided annotated examples to elucidate the criteria for each quality level, ensuring a consistent understanding among annotators. To minimize potential bias, annotators were kept unaware of the models' identities. We further ensured data consistency and quality by conducting a cross-annotator review, which kept inter-annotator disagreement below 10%. Analyzing the responses generated by both LLM groups, we observe a significant drift in the distribution of responses among the seven classes (see Figure <ref>). Specifically, responses generated by the OOB-T5 (11B), OOB-Flan-T5 (11B), OOB-Llama (13B) and OOB-Llama-Instruct (13B) models are consistently skewed towards the lower end of the quality spectrum. A majority of these responses fell within the Bad to Extremely Bad categories, indicating that, without specific fine-tuning, these models struggled to generate satisfactory responses for contact-center-related instructions. Conversely, responses generated by the CC-Flan-T5 (11B) and CC-Llama (13B) models exhibited a notable shift towards the higher quality categories. A substantial portion of the responses generated by these models landed in the Acceptable to Extremely Good range, demonstrating their ability to comprehend and generate contextually relevant responses for contact-center interactions. We hypothesize that this disparity in performance can be attributed to the fine-tuning process with contact-center data: by exposing the LLMs to domain-specific information and scenarios, they have acquired a deeper understanding of contact-center interactions.

§.§ RQ2 In order to investigate the conversational properties learnt by CC-LLMs that lead to performance superior to OOB-LLMs, we evaluate these models on the probing tasks of Section <ref>, per the methodology described in Section <ref>. Although our probing tasks are carefully designed to uncover the latent knowledge within these models, our findings in Table <ref> do not conclusively favor either type of LLM. Specifically, we observe a mixed trend: 1 out of 4 CC models, CC-Flan-T5 (3B), has a higher average score, while 2 out of 4 models, CC-Flan-T5 (11B) and CC-Llama (13B), have marginally lower (< 0.5%) average scores compared to their corresponding OOB instruction-tuned counterparts. We note a similar observation when comparing CC-LLMs with OOB foundation models, wherein 3 out of 4 CC-LLMs have comparable or better average scores. This intriguing result prompts us to delve deeper into several critical aspects of LLMs and their fine-tuning process, and to put forth the following opportunities for exploration:* Probing via hidden-layer representations: While this method has been widely employed <cit.> to unearth the linguistic properties of language models, we question whether it is sufficiently nuanced to capture conversational intricacies. It is conceivable that the differences we seek are not embedded in the representations themselves but are instead contingent on the decoding strategy employed during the language generation process.
This insight underscores the pivotal role of decoding strategies in converting latent embeddings into coherent sequences of tokens that reflect both the given instruction and the input. It prompts us to consider that instruction fine-tuning a general-purpose model and a domain-specific model may ultimately hinge on decoding proficiency rather than on vastly divergent learned representations. We believe that this calls for a deeper investigation into designing the right probing strategies for the recently popular generative language models trained via instruction fine-tuning. * Re-designing probing tasks: Our existing set of probing tasks, although comprehensive, may not fully encapsulate the diverse landscape of conversational properties. Conversations are inherently dynamic, context-dependent, and influenced by various factors, including the interplay between participants, the history of the conversation, and the evolution of topics. Extracting hidden-layer representations at a single utterance may not fully capture these dynamic aspects of conversation. It is plausible that more specific probing tasks, tailored to the characteristics of contact-center interactions, are needed. These tasks should ideally mirror the challenges posed by real-world downstream applications and help diagnose the contextual properties and the interplay in the conversations.

§.§ RQ3 From our results in Table <ref>, we note that T5 models consistently outperform Llama models across the three settings, OOB Foundation, OOB Instruction-tuned and Contact Center, highlighting that T5's encoder-decoder architecture is better equipped to comprehend conversational properties than Llama's decoder-only architecture. Further, we observe that larger sizes generally translate to improved performance in both the OOB and CC settings, reinforcing the pivotal role of model scale in grasping the complexities of conversation.

§.§ RQ4 Recently, parameter-efficient fine-tuning (PEFT) has emerged as a compelling approach to fine-tune large-scale pre-trained LMs while mitigating the challenges associated with resource-intensive full fine-tuning. Hence, we investigate the similarities and differences in the linguistic properties learnt by models fine-tuned using PEFT versus those fine-tuned in the vanilla fashion (full fine-tuning). Specifically, we use the Low-Rank Adaptation (LoRA) framework to perform parameter-efficient fine-tuning of the OOB-Flan-T5 (11B) and OOB-Llama-Instruct (13B) models, using the instruction dataset described in Section <ref>, to obtain the CC-Flan-T5-PEFT (11B) and CC-Llama-PEFT (13B) models, respectively. We further probe CC-Flan-T5 (11B), CC-Flan-T5-PEFT (11B), CC-Llama (13B) and CC-Llama-PEFT (13B) on the probing tasks of Section <ref>. Based on our results in Table <ref>, we observe that CC-Flan-T5-PEFT (11B) attains a 4.53% lower average score on the probing tasks than CC-Flan-T5 (11B). A similar observation holds for the Llama models, where CC-Llama-PEFT (13B) scores on average 6.13% lower than CC-Llama (13B). This notable trend highlights that, while the reduction in trainable parameters and memory requirements offers undeniable advantages in resource efficiency, it comes at a trade-off in task-specific performance.
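For reference, such a LoRA-based PEFT setup can be sketched as follows. This is a minimal sketch using the HuggingFace peft library; the rank, scaling factor and target modules shown are illustrative assumptions, not the settings used in our experiments.

```python
# Minimal sketch of wrapping an OOB model with LoRA adapters for PEFT.
# Hyperparameters below are illustrative, not our experimental settings.
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-xxl")
lora = LoraConfig(task_type=TaskType.SEQ_2_SEQ_LM, r=16, lora_alpha=32,
                  lora_dropout=0.05, target_modules=["q", "v"])
model = get_peft_model(base, lora)   # only the low-rank adapters are trainable
model.print_trainable_parameters()   # a small fraction of the 11B parameters
# ... fine-tune on the contact-center instruction dataset as in full fine-tuning ...
```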
Hence, it is imperative for practitioners and researchers to weigh these trade-offs carefully when considering the adoption of PEFT for fine-tuning LLMs.

§.§ RQ5 Contact-center models, fine-tuned on the contact-center instruction dataset, not only embrace conversational nuances but also retain the fundamental linguistic properties inherent in OOB models. Our results in Table <ref> exemplify this: we observe that CC-Flan-T5 (11B) achieves an average score of 80.31% on the SentEval probing tasks, demonstrating a close alignment with OOB-Flan-T5 (11B) (81.02%). Furthermore, it is noteworthy that CC-Flan-T5 (11B) exhibits a SentEval score 1.61% higher than that of the OOB-T5 (11B) model, further underscoring its enhanced linguistic knowledge in addition to its conversational capabilities in the context of contact-center applications. Thus, CC-Flan-T5 (11B), while adapting to the intricacies of contact-center conversations, maintains linguistic proficiency akin to its OOB counterparts. This implies that the model can effortlessly navigate the linguistic aspects of text, even when tailored to a specific domain. On the other hand, we note that CC-Llama (13B) exhibits a lower average on the SentEval tasks compared to its OOB counterparts. One plausible explanation revolves around the inherent characteristics of the Llama model, which is designed as a decoder-only architecture. While the CC-Llama (13B) and OOB-Llama-Instruct (13B) models exhibit similar average scores on the conversational probing tasks (see Table <ref>), we hypothesize that the decoder-only architecture might introduce subtle variations in its linguistic representations compared to encoder-decoder architectures like Flan-T5. This distinction may result in a slight dip in performance on tasks that primarily assess linguistic properties, while maintaining a similar understanding of conversational properties. However, we leave this hypothesis for future exploration.

§ RELATED WORKS In recent years, there have been significant advancements in the field of language modeling, with a particular focus on training domain-specific language models. One notable work in this area is Med-PaLM <cit.>, developed by researchers in the medical domain. Med-PaLM surpassed previous models in terms of performance on medical question-answering tasks. CodeLlama <cit.>, a prominent family of language models specializing in code generation and infilling tasks, stems from Llama 2 and caters to software development and programming needs. In the field of natural language processing, there have been numerous studies aimed at understanding the inner workings of language models. Probing tasks have been employed as a means to evaluate the fundamental properties encoded within the representations of these models. Baroni et al. <cit.> introduced a collection of probing tasks in the SentEval suite <cit.> to assess the sentence-embedding representations of language models. This work paved the way for subsequent studies, such as Tenney et al. <cit.> and Lin et al. <cit.>, who performed layer-wise probing of BERT to uncover its semantic and hierarchical awareness. Exploring the self-attention mechanism of language models has also provided insights into their inner workings. <cit.> and <cit.> delved into the patterns exhibited by individual self-attention heads in BERT, offering insights into their roles and functionalities. While most studies focus on probing general language models, there are also investigations into domain-specific models and properties.
<cit.> investigated the representations of language models in the contact-center domain, revealing that LMs encode conversational and speaker-type properties to a large extent without external supervision, but lose the linguistic understanding of dependency relations. <cit.> probed biomedical language models and demonstrated their high effectiveness on biomedical named entity recognition (NER) and natural language inference (NLI) tasks. Our work falls into this line of training and investigating domain-specific language models.

§ CONCLUSION Our study contributes to the growing body of research on fine-tuning LLMs with domain-specific instructions. In this work, we demonstrate that the CC-LLMs, CC-Flan-T5 and CC-Llama, exhibit superior performance on downstream tasks within the contact-center domain. This finding highlights the effectiveness of fine-tuning LLMs with domain-specific instructions in enhancing their understanding and applicability in specific domains. Furthermore, our comparison between OOB and CC models on the probing tasks reveals interesting observations. While the performance of the probing classifiers on the set of probing tasks is relatively similar, indicating comparable capabilities for encoding contact-center-specific properties, the CC-LLMs still outperform OOB models on downstream tasks. This suggests that the CC-LLMs possess additional domain-specific knowledge or contextual understanding that aids in achieving superior downstream performance. We also find that CC-LLMs rely less on encoding surface, syntactic, and semantic properties. This indicates that these models may leverage other mechanisms or information sources to excel in the contact-center domain, thus opening opportunities for further exploration in this area.

§ LIMITATIONS While our study provides valuable insights into training a contact-center-specific language model and conducting linear edge probing, it is important to acknowledge certain limitations of our work. Firstly, our exploration of language models is limited to a couple of models belonging to two architectures, one encoder-decoder and one decoder-only. We chose these models on the basis of their effectiveness across different tasks, as surfaced in the research community; however, the trends we observe may not necessarily hold for other models within the same class of architecture. Secondly, our work is based on the methodology of linear edge probing, which applies a one-layer linear MLP to hidden representations. The performance and observations on the probing tasks may differ if a different probing setup, such as attention-based probing, is used. It is crucial to explore alternative probing methods to gain a more comprehensive understanding of the language models' characteristics. Moreover, the set of probing tasks we utilize may not cover the full range of characteristics that a language model can encode. Additional probing tasks could be considered for a more extensive study of the models' capabilities. Lastly, our research is conducted on a proprietary dataset that cannot be released. This limits the ability of other researchers to directly compare their results or replicate our experiments.
Access to data is crucial for future work in this area, and we encourage the development of publicly available datasets for domain-specific language models. Despite these limitations, our study underscores the importance of domain-specific instruction models and highlights the limited capacity of general-purpose language models to meet domain-specific use-cases. Furthermore, we pose thought-provoking questions that can guide further research and contribute to the research community's understanding of the properties encoded in generative language models in this new era.

§ APPENDIX §.§ Task Definitions Definitions of the tasks are provided in Table <ref>.
http://arxiv.org/abs/2312.15922v1
{ "authors": [ "Varun Nathan", "Ayush Kumar", "Digvijay Ingle", "Jithendra Vepa" ], "categories": [ "cs.CL" ], "primary_category": "cs.CL", "published": "20231226073439", "title": "Towards Probing Contact Center Large Language Models" }
Computing Balanced Solutions for Large International Kidney Exchange Schemes When Cycle Length Is Unbounded

Márton Benedek^1, Péter Biró^1, Gergely Csáji^1, Matthew Johnson^2, Daniel Paulusma^2, Xin Ye^2

^1 KRTK, Institute of Economics, Budapest, Hungary {peter.biro,marton.benedek,csaji.gergely}@krtk.hu
^2 Department of Computer Science, Durham University, Durham, UK {matthew.johnson2,daniel.paulusma,xin.ye}@durham.ac.uk

January 14, 2024

In kidney exchange programmes (KEPs) patients may swap their incompatible donors, leading to cycles of kidney transplants. Nowadays, countries try to merge their national patient-donor pools, leading to international KEPs (IKEPs). As shown in the literature, long-term stability of an IKEP can be achieved through a credit-based system. In each round, every country is prescribed a “fair” initial allocation of kidney transplants. The initial allocation, which we obtain by using solution concepts from cooperative game theory, is adjusted by incorporating credits from the previous round, yielding the target allocation. The goal is to find, in each round, an optimal solution that closely approximates this target allocation. There is a known polynomial-time algorithm for finding an optimal solution that lexicographically minimizes the country deviations from the target allocation if only 2-cycles (matchings) are permitted. In practice, kidney swaps along longer cycles may be performed. However, the problem of computing optimal solutions for maximum cycle length ℓ is NP-hard for every ℓ≥ 3. This situation changes back to polynomial time once we allow unbounded cycle length. However, in contrast to the case where ℓ=2, we show that for ℓ=∞, lexicographical minimization is only polynomial-time solvable under additional conditions (assuming P ≠ NP). Nevertheless, the fact that the optimal solutions themselves can be computed in polynomial time if ℓ=∞ still enables us to perform a large-scale experimental study showing how stability and total social welfare are affected when we set ℓ=∞ instead of ℓ=2. Keywords: computational complexity, cooperative game theory, partitioned permutation game, international kidney exchange.

§ INTRODUCTION In this paper, we introduce a new class of cooperative games called partitioned permutation games, which are closely related to the known classes of permutation games and partitioned matching games (introduced at AAMAS 2019 <cit.>). Both partitioned matching games and partitioned permutation games have immediate applications in international kidney exchange. Before defining these games, we first explain this application area. Kidney Exchange. The most effective treatment for kidney failure is transplanting a kidney from a deceased or living donor, with better long-term outcomes in the latter case. However, a kidney from a family member or friend might be medically incompatible and could easily be rejected by the patient's body. Therefore, many countries are running national Kidney Exchange Programmes (KEPs) <cit.>. In a KEP, all patient-donor pairs are placed together in one pool. If for two patient-donor pairs (p,d) and (p',d') it holds that d is incompatible with p and d' with p', but d is compatible with p' and d' with p, then d and d' could donate a kidney to p' and p, respectively. This is a 2-way exchange, which we now generalize.
We model a pool of patient-donor pairs as a directed graph G=(V,A) (the compatibility graph) in which V consists of the patient-donor pairs, and A consists of every arc (u,v) such that the donor of u is compatible with the patient of v. In a directed cycle C=u_1u_2… u_ku_1, for some k≥ 2, the kidney of the donor of u_i could be given to the patient of u_i+1 for every i∈{1,…,k}, with u_k+1:=u_1. This is a k-way exchange using the exchange cycle C. To prevent exchange cycles from breaking (and a patient from losing their willing donor), hospitals perform the k transplants in a k-way exchange simultaneously. Hence, KEPs impose a bound ℓ (the exchange bound) on the maximum length of an exchange cycle, typically 2≤ℓ≤ 5. An ℓ-cycle packing of G is a set 𝒞 of directed cycles, each of length at most ℓ, that are pairwise vertex-disjoint; if ℓ=∞, we also say that 𝒞 is a cycle packing. The size of 𝒞 is the number of arcs that belong to a cycle in 𝒞. KEPs operate in rounds. A solution for round r is an ℓ-cycle packing in the corresponding compatibility graph G^r. The goal is to help as many patients as possible in each round. Hence, to maximize the number of transplants in round r, we seek an optimal solution, that is, a maximum ℓ-cycle packing of G^r, i.e., one that has maximum size. After a round, some patients have received a kidney or died, and other patient-donor pairs may have arrived, resulting in a new compatibility graph G^r+1 for round r+1. The main computational issue for KEPs is how to find an optimal solution in each round. If ℓ=2, we can transform, by keeping the “double” arcs, a compatibility graph G into an undirected graph D=(V,E) as follows. For every u,v∈ V, we have uv∈ E if and only if (u,v)∈ A and (v,u)∈ A. It then remains to compute, in polynomial time, a maximum matching in D. If we set ℓ=∞, a well-known trick works (see e.g. <cit.>). We transform G into a bipartite graph H with partition classes V and V', where V' is a copy of V. For each u∈ V and its copy u'∈ V', we add the edge uu' with weight 0. For each (u,v)∈ A, we add the edge uv' with weight 1. Now it remains to find in polynomial time a maximum-weight perfect matching in H. However, for any constant ℓ≥ 3, the situation changes, as shown by Abraham, Blum and Sandholm <cit.>. If ℓ=2 or ℓ=∞, we can find an optimal solution for a KEP round in polynomial time; otherwise this is NP-hard.

International Kidney Exchange. As merging pools of national KEPs leads to better outcomes, the focus nowadays lies on forming international KEPs (IKEPs), e.g. Austria and the Czech Republic <cit.>; Denmark, Norway and Sweden; and Italy, Portugal and Spain <cit.>. Apart from ethical, legal and logistic issues (all beyond our scope), there is now a new and highly non-trivial issue that needs to be addressed: how can we ensure long-term stability of an IKEP? If countries are not treated fairly, they may leave the IKEP, and it could even happen that each country runs their own national KEP again. Example. Let G be the compatibility graph from Figure <ref>. Then a total of five kidney transplants is possible if the exchange cycle C=abdeca is used. In that case, countries 1, 2 and 3 receive three, two and zero kidney transplants, respectively.
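For concreteness, the ℓ=∞ trick above can be sketched as follows. This is a minimal sketch under our own naming, assuming SciPy's linear_sum_assignment as the maximum-weight matching solver; the demonstration replays the example of Figure <ref>, restricted to the arcs of the cycle abdeca.

```python
# Minimal sketch of the l = infinity case: reduce maximum cycle packing
# to a maximum-weight assignment (perfect matching in the bipartite graph H).
import numpy as np
from scipy.optimize import linear_sum_assignment

def max_cycle_packing(vertices, arcs):
    """Return a maximum cycle packing of the digraph (vertices, arcs).

    Vertex u is matched either to its own copy u' (weight 0, u stays
    uncovered) or to the copy v' of an out-neighbour v (weight 1); the
    matched pairs define a permutation whose non-trivial cycles form a
    maximum cycle packing.
    """
    idx = {u: i for i, u in enumerate(vertices)}
    n = len(vertices)
    w = np.full((n, n), -10.0 ** 9)     # forbidden: there is no such arc
    np.fill_diagonal(w, 0.0)            # the weight-0 edges uu'
    for u, v in arcs:
        w[idx[u], idx[v]] = 1.0         # the weight-1 edges uv'
    _, cols = linear_sum_assignment(w, maximize=True)
    succ = {u: vertices[cols[idx[u]]] for u in vertices}
    cycles, seen = [], set()
    for u in vertices:
        if u in seen or succ[u] == u:   # fixed points are uncovered vertices
            continue
        cycle, v = [], u
        while v not in seen:
            seen.add(v)
            cycle.append(v)
            v = succ[v]
        cycles.append(cycle)
    return cycles

print(max_cycle_packing(list("abcde"),
                        [("a", "b"), ("b", "d"), ("d", "e"), ("e", "c"), ("c", "a")]))
# -> [['a', 'b', 'd', 'e', 'c']], i.e., five transplants along abdeca
```

The size of the returned packing (the total length of its cycles) on an induced subgraph is exactly the coalition value v(S) of the games defined next.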
Cooperative Game Theory considers fair distributions of common profit if all parties involved collaborate with each other. Before describing its role in our setting, we first give some terminology. A (cooperative) game is a pair (N,v), where N is a set of n players and v: 2^N→ℝ is a value function with v(∅) = 0. A subset S⊆ N is a coalition. If for every possible partition (S_1,…,S_r) of N it holds that v(N)≥ v(S_1)+⋯ +v(S_r), then players will benefit most by forming the grand coalition N. The problem is then how to fairly distribute v(N) amongst the players of N. An allocation is a vector x ∈ℝ^N with x(N) = v(N) (we write x(S)=∑_p∈ Sx_p for S⊆ N). A solution concept prescribes a set of fair allocations for a game (N,v). Here, the notion of fairness depends on context. The solution concepts we consider are the Banzhaf value, Shapley value, nucleolus, benefit value, contribution value (see Section <ref>) and the core. They all prescribe a unique allocation except for the core, which consists of all allocations x ∈ℝ^N with x(S)≥ v(S) for every S⊆ N. Core allocations ensure that N is stable, as no subset S will benefit from forming their own coalition. But the core may be empty. We now define some relevant games. For a directed graph G=(V,A) and a subset S⊆ V, we let G[S]=(S,{(u,v)∈ A|u,v∈ S}) be the subgraph of G induced by S. An ℓ-permutation game on a directed graph G=(V,A) is the game (N,v), where N=V and, for S⊆ N, the value v(S) is the maximum size of an ℓ-cycle packing of G[S]. Two special cases are well studied. We obtain a matching game if ℓ=2, which may have an empty core (e.g. when G is a triangle), and a permutation game if ℓ=∞, whose core is always nonempty <cit.>. In the remainder, we differentiate between the sets N (of players in the game) and V (of vertices in the underlying graph G). That is, we associate each player i∈ N with a distinct subset V_i of V. A partitioned ℓ-permutation game on a directed graph G=(V,A) with a partition (V_1,…,V_n) of V is the game (N,v), where N={1,…,n} and, for S⊆ N, the value v(S) is the maximum size of an ℓ-cycle packing of G[⋃_i∈ SV_i]. We obtain a partitioned matching game <cit.> if ℓ=2, and a partitioned permutation game if ℓ=∞. The width of (N,v) is the width of (V_1,…,V_n), which is c=max{|V_i| | 1≤ i≤ n}. See Figure <ref> for an example.

The Model. For a round of an IKEP with exchange bound ℓ, let (N,v) be the partitioned ℓ-permutation game defined on the compatibility graph G=(V,A), where N={1,…,n} is the set of countries in the IKEP, and V is partitioned into subsets V_1,…,V_n such that for every p∈ N, V_p consists of the patient-donor pairs of country p. We say that (N,v) is the associated game for G. We can now make use of a solution concept 𝒮 for (N,v) to obtain a fair initial allocation y, where y_p prescribes the initial number of kidney transplants country p should receive in this round (possibly, y_p is not an integer, but as we shall see this is not relevant). To ensure IKEP stability, we use the model of Klimentova et al. <cit.>, which is a credit-based system. For round r≥ 1, let G^r be the compatibility graph with associated game (N^r,v^r); let y^r be the initial allocation (as prescribed by some solution concept 𝒮); and let c^r: N^r→ℝ be a credit function, which satisfies ∑_p∈ N^rc^r_p=0; if r=1, we set c^r≡ 0. For p∈ N, we set x^r_p:=y^r_p+c^r_p to obtain the target allocation x^r for round r (which is indeed an allocation, as y^r is an allocation and ∑_p∈ Nc_p^r=0). We choose some maximum ℓ-cycle packing 𝒞 of G^r as the optimal solution for round r (out of possibly exponentially many optimal solutions). Let s_p(𝒞) be the number of kidney transplants for patients in country p (with donors both from p and from other countries).
For p∈ N, we set c^r+1_p:=x^r_p-s_p(𝒞) to get the credit function c^r+1 for round r+1 (note that ∑_p∈ Nc_p^r+1=0). For round r+1, a new initial allocation y^r+1 is prescribed by 𝒮 for the associated game (N^r+1,v^r+1). For every p∈ N, we set x_p^r+1:=y_p^r+1+c_p^r+1, and we repeat the process. Apart from specifying the solution concept 𝒮, we must also determine how to choose in each round a maximum ℓ-cycle packing 𝒞 (optimal solution) of the corresponding compatibility graph G. We will choose 𝒞 such that the vector s(𝒞), with entries s_p(𝒞), is closest to the target allocation x for the round under consideration. To explain our distance measures, let |x_p-s_p(𝒞)| be the deviation of country p from its target x_p if 𝒞 is chosen out of all optimal solutions. We order the deviations |x_p-s_p(𝒞)| non-increasingly as a vector d(𝒞)= (|x_p_1-s_p_1(𝒞)|, …, |x_p_n-s_p_n(𝒞)|). We say 𝒞 is strongly close to x if d(𝒞) is lexicographically minimal over all optimal solutions. If we only minimize d_1(𝒞)=max_p∈ N{|x_p-s_p(𝒞)|} over all optimal solutions, we obtain a weakly close optimal solution. If an optimal solution is strongly close, then it is weakly close, but the reverse might not be true. Both measures have been used, see below.
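For illustration, the round-by-round bookkeeping just described can be sketched as follows; the helpers initial_alloc, choose_packing and transplant_counts are hypothetical placeholders for the solution concept 𝒮, the optimizer that picks a maximum cycle packing close to the target, and the counts s_p(𝒞), respectively.

```python
# Minimal sketch of the credit mechanism; all helper names are placeholders.
def run_ikep(countries, num_rounds, initial_alloc, choose_packing, transplant_counts):
    credits = {p: 0.0 for p in countries}                 # c^1 = 0
    for r in range(1, num_rounds + 1):
        y = initial_alloc(r)                              # initial allocation y^r
        x = {p: y[p] + credits[p] for p in countries}     # target allocation x^r
        packing = choose_packing(r, x)                    # optimal solution close to x^r
        s = transplant_counts(r, packing)                 # s_p(C) for every country p
        credits = {p: x[p] - s[p] for p in countries}     # credit function c^{r+1}
    return credits

def deviation_vector(x, s, countries):
    # d(C): the country deviations |x_p - s_p(C)|, ordered non-increasingly
    return sorted((abs(x[p] - s[p]) for p in countries), reverse=True)
```

An optimal solution is then strongly close if its deviation_vector is lexicographically minimal over all optimal solutions, and weakly close if only the first entry is minimized.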
Related Work. Benedek et al. <cit.> proved the following theorem for ℓ=2 (the “matching” case); in contrast, Biró et al. <cit.> showed that it is NP-hard to find a weakly close maximum matching even for |N|=2 once the games are defined on edge-weighted graphs. For partitioned matching games, the problem of finding an optimal solution that is strongly close to a given target allocation x is polynomial-time solvable. Benedek et al. <cit.> used the algorithm of Theorem <ref> to perform simulations for up to fifteen countries for ℓ=2. As initial allocations, they used the Shapley value, nucleolus, benefit value and contribution value, with the Shapley value yielding the best results (together with the Banzhaf value in the full version <cit.> of <cit.>). This is in line with the results of Klimentova et al. <cit.> and Biró et al. <cit.> for ℓ=3. Due to Theorem <ref>, the simulations in <cit.> are for up to four countries and use weakly close optimal solutions. For ℓ=2, we refer to <cit.> for an alternative model based on so-called selection ratios using lower and upper target numbers. IKEPs have also been modelled as non-cooperative games in the consecutive matching setting, which has 2-phase rounds: national pools in phase 1 and a merged pool for unmatched patient-donor pairs in phase 2; see <cit.> for some results in this setting. Fairness (versus optimality) issues are also studied for national KEPs, in particular in the US, where the setting differs from Europe (our setting). Namely, hospitals in the US are more independent and are given positive credits for registering their easy-to-match patient-donor pairs to one of the three national KEPs, and negative credits for registering their hard-to-match pairs. The US setting is extensively studied <cit.>, but beyond the scope of our paper.

Our Results. We consider partitioned permutation games and IKEPs, so we assume ℓ=∞. This assumption is not realistic in kidney exchange, but has also been made for national KEPs to obtain general results of theoretical interest <cit.> and may have wider applications (e.g. in financial clearing). We also aim to research how stability and the total number of kidney transplants are affected when moving from one extreme (ℓ=2) to the other (ℓ=∞). As such, our paper consists of a theoretical part and an experimental part. We start with our theoretical results (Section <ref>). Permutation games, i.e. partitioned permutation games of width 1, have a nonempty core <cit.>, and a core allocation can be found in polynomial time <cit.>. We generalize these two results to partitioned permutation games of any width c, and also show a dichotomy for testing core membership, which is in contrast with the dichotomy for partitioned matching games, where the complexity jump happens at c=3 <cit.>. The core of every partitioned permutation game is non-empty, and it is possible to find a core allocation in polynomial time. Moreover, for partitioned permutation games of fixed width c, the problem of deciding if an allocation is in the core is polynomial-time solvable if c=1 and co-NP-complete if c≥ 2. Due to Theorem <ref>, we cannot hope to generalize Theorem <ref> to hold for any constant ℓ≥ 3. Nevertheless, Theorem <ref> leaves open the question whether Theorem <ref> is true for ℓ=∞ (the “cycle packing” case) instead of only for ℓ=2 (the “matching” case). We show that the answer to this question is no (assuming P ≠ NP). For partitioned permutation games even of width 2, the problem of finding an optimal solution that is weakly or strongly close to a given target allocation x is NP-hard. Our last theoretical result is a randomized algorithm with parameter n. As we shall prove, derandomizing it requires solving the notorious Exact Perfect Matching problem in polynomial time. The complexity status of the latter problem has been open since its introduction by Papadimitriou and Yannakakis <cit.> in 1982. For a partitioned permutation game (N,v) on a directed graph G=(V,A), the problem of finding an optimal solution that is weakly or strongly close to a given target allocation x can be solved by a randomized algorithm in |A|^O(n) time. Our theoretical results highlighted severe computational limitations, and we now turn to our simulations. These are guided by our theoretical results. Namely, we note that the algorithm of Theorem <ref> is neither practical for instances of realistic size (which we aim for) nor acceptable in the setting of kidney exchange, due to being a randomized algorithm. Therefore, and also due to Theorem <ref>, we formulate the problems of computing a weakly or strongly close optimal solution as integer linear programs, as described in Section <ref>. This enables us to use an ILP solver. We still exploit the fact that for ℓ=∞ (Theorem <ref>) we can find optimal solutions and values v(S) in polynomial time. In this way we can perform, in Section <ref>, simulations for IKEPs with up to ten countries, so more than the four countries in the simulations for ℓ=3 <cit.>, but fewer than the fifteen countries in the simulations for ℓ=2 <cit.>. For the initial allocations we use two easy-to-compute solution concepts, the benefit value and the contribution value, and three hard-to-compute solution concepts: the Banzhaf value, the Shapley value, and the nucleolus. Our simulations show, just like those of <cit.>, that a credit system using strongly close optimal solutions makes an IKEP the most balanced without decreasing the overall number of transplants. The exact improvement is determined by the choice of solution concept.
Our simulations indicate that the Banzhaf value yields the best results: on average, a deviation of up to 0.90% from the target allocation. Moreover, moving from ℓ=2 to ℓ=∞ yields on average 46% more kidney transplants (using the same simulation instances generated by the data generator <cit.>). However, the exchange cycles may be very large, in particular in the starting round.

§ THEORETICAL RESULTS We start with Theorem <ref>. Recall that the width c of a partitioned permutation game (N,v) defined on a directed graph G=(V,A) with vertex partition (V_1,…,V_n) is the maximum size of a set V_i. Theorem <ref> (restated). The core of every partitioned permutation game is non-empty, and it is possible to find a core allocation in polynomial time. Moreover, for partitioned permutation games of fixed width c, the problem of deciding if an allocation is in the core is polynomial-time solvable if c=1 and co-NP-complete if c≥ 2. We first show that finding a core allocation of a partitioned permutation game can be reduced to finding a core allocation of a permutation game. As the latter can be done in polynomial time <cit.> (and such a core allocation always exists <cit.>), the same holds for the former. Let (N,v) be a partitioned permutation game on a graph G=(V,A) with partition (V_1,…,V_n) of V. We create a permutation game (N',v') by splitting each V_i into sets of size 1, i.e., every vertex becomes a player in N'. Let x' be a core allocation of (N',v'). For each i∈ N, we set x(i)=∑_v∈ V_i x'(v). It holds that x(N)=v(N), as (N,v) and (N',v') are defined on the same graph G; hence, the size of a maximum cycle packing is unchanged. Suppose there is a blocking coalition S⊂ N, that is, v(S)>x(S) holds. By the construction of x, the sum of the x' values over all vertices in ∪_i∈ SV_i is less than v(S)=v'({ u| u∈∪_i∈ SV_i }). Hence, the players of N' corresponding to these vertices would form a blocking coalition to x' for (N',v'), a contradiction. As x' can be found in polynomial time, so can x. Now we show that deciding whether an allocation is in the core can be solved in polynomial time for partitioned permutation games of width 1, that is, for permutation games. Let x be an allocation. We create a weight function w_x over the arcs by setting w_x((u,v))=x(u)-1. We claim that if there exists a blocking coalition, then there is a blocking coalition that consists only of vertices along a cycle. In order to see this, let S be a blocking coalition, so x(S)<v(S). By definition, v(S) is the maximum size of a cycle packing 𝒞= {C_1, C_2,…,C_k} of G[S]. For i=1,…,k, let S_i be the set of vertices of C_i. From x(S_1)+x(S_2)+ … +x(S_k) ≤ x(S) < v(S) = |S_1|+|S_2|+ … +|S_k|, we find that x(S_i)<|S_i| for at least one set S_i. Hence, such an S_i is also blocking. Due to the above, we just need to check whether there is a cycle C such that x(V(C))<|E(C)|=v(V(C)). By the definition of w_x, such a cycle exists if and only if w_x is not conservative (a weight function is conservative if it admits no negative-weight directed cycle), which can be decided in polynomial time, for example with the Bellman-Ford algorithm. Finally, we show that deciding if an allocation x is in the core of a partitioned permutation game is co-NP-complete, even if each V_i has size 2 (so c=2). Containment in co-NP holds, as a blocking coalition for x can be verified in polynomial time.
To prove hardness, we reduce from a special case of the NP-complete problem Exact 3-Cover <cit.>.

Exact 3-Cover. Instance: A family 𝒮={ S_1,…, S_3n} of 3-element subsets of [3n], where each element belongs to exactly three sets. Question: Is there an exact 3-cover for 𝒮, that is, a subset 𝒮'⊂𝒮 such that each element appears in exactly one of the sets of 𝒮'?

Given an instance I of Exact 3-Cover, we construct a partitioned permutation game (N,v) as follows (see Figure <ref> for an illustration). – For each element i∈ [3n], there is a vertex a_i and a vertex b_i, – for each set S_j∈𝒮, there are vertices s_j^1,s_j^2,s_j^3,t_j^1,t_j^2,t_j^3, – there are a further 12n vertices x_1,…,x_6n and y_1,…,y_6n. We define the arcs as follows: – for each k∈ [6n], an arc (x_k,x_k+1), where x_6n+1 := x_1, – for each k∈ [3n], an arc (b_k,b_k+1), where b_3n+1 := b_1, – for each j∈ [3n], the arcs (t_j^1,t_j^2), (t_j^2,t_j^3), (t_j^3,t_j^1), – for each k∈ [6n], j∈ [3n], l∈ [3], the arcs (y_k,s_j^l) and (s_j^l,y_k), and – for each set S_j={ j_1,j_2,j_3}, j_1<j_2<j_3, the arcs (s_j^1,a_j_1), (a_j_1,s_j^1), (s_j^2,a_j_2), (a_j_2,s_j^2), (s_j^3,a_j_3), (a_j_3,s_j^3). This gives a directed graph G=(V,A). We partition V into the sets: – for each i∈ [3n], the set A_i={ a_i,b_i}, – for each k∈ [6n], the set X_k={ x_k,y_k}, and – for each j∈ [3n], l∈ [3], the set T_j^l={ s_j^l,t_j^l}. Finally, we define the allocation x as follows: – x(A_i)=3-(n+1)/(9n^2) for each i∈ [3n], – x(X_k)=3-(2n-1)/(18n^2) for each k∈ [6n], and – x(T_j^l)=1+1/(9n) for each j∈ [3n], l∈ [3]. The size of a maximum cycle packing of G is v(N)=6n+6n+9n+9n+3n+3n=36n, as every vertex can be covered. This is realized by adding the x_k-cycle, the b_i-cycle, the t_j^l-cycles and then, for each a_i, a cycle {(a_i,s_j^l),(s_j^l,a_i)} (this can be done because each element appears in exactly three sets, so there is a perfect matching covering the vertex of each element in the bipartite graph induced by the incidence relation between the sets and the elements). We can cover the remaining 6n vertices s_j^l by 2-cycles {(y_k,s_j^l),(s_j^l,y_k)} arbitrarily. If we sum up all allocation values, we get that x(N) = 6n·(3-(2n-1)/(18n^2)) + 9n·(1+1/(9n)) + 3n·(3-(n+1)/(9n^2)) = 18n - (2n-1)/(3n) + 9n + 1 + 9n - (n+1)/(3n) = 36n = v(N), so x is an allocation for (N,v). We claim that I has an exact 3-cover if and only if x is not in the core.

“⇒” First suppose { S_k_1,…,S_k_n} is an exact 3-cover in I. We claim that 𝒫={ A_i | i∈ [3n]}∪{ T_k_i^l | i∈ [n], l∈ [3] } is a blocking coalition. We first show that v(𝒫)=12n, which can be seen as follows. First, the s_j^l and a_i vertices in the coalition can be covered by 2-cycles, as the corresponding sets form an exact 3-cover. Moreover, the b_i vertices can be covered by the b_i-cycle, as each of the A_i countries is in 𝒫, and finally, the t_j^l vertices can be covered too, as for each j∈{ k_1,…,k_n} all of T_j^1,T_j^2,T_j^3 belong to 𝒫. Then x is not in the core, as x(𝒫) = 3n·(3-(n+1)/(9n^2)) + 3n·(1+1/(9n)) = 9n - (n+1)/(3n) + 3n + n/(3n) < 12n = v(𝒫).

“⇐” Now suppose x is not in the core. Then there is a coalition 𝒫 with v(𝒫)>x(𝒫). We write 𝒜=⋃_i∈ [3n]A_i and 𝒳=⋃_k∈ [6n]X_k. We claim that 𝒜⊂𝒫 or 𝒳⊂𝒫. For a contradiction, suppose that neither 𝒜⊂𝒫 nor 𝒳⊂𝒫 holds. Clearly, 𝒫∩ (𝒜∪𝒳)≠∅, because h of the T_j^l countries on their own can only create a cycle packing of size at most h, but they each have an allocation of 1+1/(9n)>1. So suppose that |𝒫∩ (𝒜∪𝒳)|=m≤ 9n. By our assumption, none of the participating b_i or x_k vertices can be covered.
Hence, if there are h ≥ 1 participating T_j^l countries besides them (there must be at least one to have any cycles), then the size of the maximum cycle packing they can obtain is h + 2·min{ m,h }≤ 2m+h, as at best all t_j^l vertices can be covered, while the other vertices can only be covered with cycles of length 2 by pairing the h vertices s_j^l to m vertices a_i or y_k. But their assigned allocation in x is at least m·min{ 3-(n+1)/(9n^2), 3-(2n-1)/(18n^2)} + h·(1+1/(9n)) > 2m+h, a contradiction. Hence, 𝒜⊆𝒫 or 𝒳⊆𝒫 holds.

First suppose that 𝒜∪𝒳⊆𝒫. Let the number of participating T_j^l countries be h. Then we have that v(𝒫)≤ 3n + 6n + 2h + h, while v(𝒫)>x(𝒫)=18n-(2n-1)/(3n)+9n-(n+1)/(3n)+h+h/(9n). Hence, we find that 2h > 18n-1+h/(9n) > 18n-1, so h > 9n-1; but h cannot be 9n either, as then 2h = 18n = 18n-1+9n/(9n), contradicting the strict inequality (and there are only 9n T_j^l countries).

Suppose next that 𝒳⊆𝒫. Let 0≤ m=|𝒫∩𝒜|<3n. Then, if the number of T_j^l countries in 𝒫 is h>0, we have v(𝒫)≤ 6n+h+2h. We may suppose that h ≤ 6n+m, because if there are more T_j^l countries, then at most 6n+m of their s_j^l vertices can be covered; hence the remaining countries bring strictly more x value than the amount by which they can increase the maximum cycle packing size. However, x(𝒫) ≥ (18n-(2n-1)/(3n)) + (h+h/(9n)) + m·(3-(n+1)/(9n^2)) > 18n+h+3m-1. Hence, in order for 𝒫 to block, it must hold that 2h > 12n+3m-1, so h > 6n+1.5m-0.5, contradicting h ≤ 6n+m if m≥ 1. In the case m=0, we get that 6n+3h > 18n-(2n-1)/(3n)+h+h/(9n), so 2h > 12n-(2n-1)/(3n)+h/(9n) > 12n-1. From this and h ≤ 6n+0, we get that h must be 6n. However, then 12n > 12n-(2n-1)/(3n)+6n/(9n) > 12n, which is a contradiction again.

Therefore, suppose that 𝒜⊆𝒫, but 𝒳 is not included in 𝒫. Let 0≤ m=|𝒫∩𝒳|<6n. Now, if the number of T_j^l countries in 𝒫 is h, then v(𝒫)≤ 3n+h+2h, similarly as before. Again, we may suppose that h ≤ 3n+m, for similar reasons. Furthermore, x(𝒫) ≥ 9n-(n+1)/(3n) + h + h/(9n) + m·(3-(2n-1)/(9n^2)) > 9n+h+3m-1. If m≥ 1, then this implies that 2h > 6n+3m-1, so h > 3n+1.5m-0.5, a contradiction. We conclude that m=0. Therefore, h ≤ 3n and 2h > 6n-(n+1)/(3n)+h/(9n) > 6n-1, so h = 3n. To sum up, we have shown that 𝒫 must contain 𝒜, must be disjoint from 𝒳, and must contain exactly 3n T_j^l countries. We claim that for each j∈ [3n], if T_j^l∈𝒫 for some l∈ [3], then T_j^l∈𝒫 for all l∈ [3]: if not, then there must be at least one vertex t_j^l that cannot be covered, hence v(𝒫)≤ 12n-1, but x(𝒫)=9n-(n+1)/(3n)+3n+3n/(9n)>12n-1, a contradiction. Therefore, for each set S_j, if T_j^l∈𝒫 for some l∈ [3], then T_j^l∈𝒫 for all l∈ [3], so there are exactly n sets S_j such that T_j^l∈𝒫. Finally, it remains to show that the n sets S_j for which T_j^l∈𝒫 for l∈ [3] must form an exact 3-cover. Suppose that there is an element i that cannot be covered by them. Then a_i cannot be covered by a cycle packing of 𝒫, so v(𝒫)≤ 12n-1, which leads to the same contradiction.

We now prove Theorem <ref>. We need some definitions and a lemma. Let G=(V,A) be a directed graph with a partition (V_1,…,V_n) of V for some n≥ 1. Recall that for a maximum cycle packing 𝒞 of G, we let s_p(𝒞) denote the number of arcs (u,v) with v∈ V_p that belong to some directed cycle of 𝒞. We say that 𝒞 satisfies a set of intervals {I_1,…,I_n} if s_p(𝒞)∈ I_p for every p∈{1,…,n}.
For instances (G,𝒱,ℐ), where G is a directed graph, 𝒱=(V_1,…,V_n) is a partition of V with fixed width c, and ℐ={I_1, …, I_n} is a set of intervals, the problem of finding a maximum cycle packing of G satisfying ℐ is polynomial-time solvable if c=1, and NP-complete if c≥ 2, even if I_p=[1,∞] for every p∈{1,…,n} or I_p=[1,1] for every p∈{1,…,n}.

First suppose c=1. Let v_p be the unique vertex in V_p. We may assume that each I_p contains 0 or 1, else no cycle packing satisfying ℐ exists. If 1 ∉ I_p, we can delete v_p and redefine G as the graph that remains. If this decreases the size of a maximum cycle packing, then we conclude that no desired maximum cycle packing exists. Let U be the set of vertices v_p for which 0 ∉ I_p. The problem reduces to finding a maximum cycle packing such that each vertex in U is covered. For this, we transform G=(V,A) into a bipartite graph H with partition classes V and V', where V' is a copy of V. For each v∈ V∖ U and its copy v'∈ V'∖ U', we add the edge vv' with weight 0 (we do not add these edges for the vertices of U). For each (u,v)∈ A, we add the edge uv' with weight 1. It remains to find in polynomial time a maximum-weight perfect matching in H, if one exists, and to check whether its weight equals the size of a maximum cycle packing of the original directed graph. If there is a perfect matching of that weight, then in the maximum cycle packing it corresponds to, each vertex of U must be covered by a cycle (as the edges uu' do not exist for u∈ U). In the other direction, if there is a maximum cycle packing covering each vertex of U, then the perfect matching it corresponds to has the desired weight and uses no (nonexistent) edge uu' for any u∈ U.

Now suppose c ≥ 2. Containment in NP is trivial, as we can easily check whether an arc set consists of vertex-disjoint cycles, and for each country we can compute the number of incoming arcs. To prove completeness, as in the proof of Theorem <ref>, we reduce from the NP-complete problem Exact 3-Cover. Given an instance I of Exact 3-Cover, we construct an instance I' of our problem as follows (see also Figure <ref>): – For each element i∈ [3n], we create a vertex a_i, – for each set S_j∈𝒮, we create vertices s_j^1,s_j^2,s_j^3,t_j^1,t_j^2,t_j^3, – we create 2n source vertices x_1,…,x_2n and 2n sink vertices y_1,…,y_2n. We define the arcs as follows: – for each k∈ [2n], an arc (y_k,x_k), – for each k∈ [2n], j∈ [3n], the arcs (x_k,t_j^1) and (t_j^3,y_k), – for each j∈ [3n], the arcs (t_j^1,t_j^2) and (t_j^2,t_j^3), and – for each set S_j={j_1,j_2,j_3}, j_1<j_2<j_3, the arcs (s_j^1,a_j_1), (a_j_1,s_j^1), (s_j^2,a_j_2), (a_j_2,s_j^2), (s_j^3,a_j_3) and (a_j_3,s_j^3). This gives a directed graph G=(V,A). We partition V into the sets: – for each i∈ [3n], the set A_i={ a_i}, – for each k∈ [2n], the sets X_k={ x_k} and Y_k={ y_k}, and – for each j∈ [3n], l∈ [3], the set T_j^l={ s_j^l,t_j^l}. A maximum cycle packing of G has size 16n. This is because the x_i,y_i vertices allow 2n cycles of length 5 through the triples t_j^1,t_j^2,t_j^3, covering 10n vertices; the remaining t_j^l vertices cannot be covered. Also, the other s_j^l and a_i vertices span a directed bipartite graph, so at most 3n+3n of them can be covered, as we have only 3n vertices a_i. And 6n can indeed be covered, as we can just choose an arbitrary neighbour s_j^l for each a_i and pair them in a 2-cycle. The interval for each set is [1,∞].
Since the size of a maximum cycle packing of G is 16n, which is the same as the number of countries, if there is a solution satisfying these intervals, then it also satisfies the intervals [1,1] for each set. Hence the last two statements of the lemma are equivalent on this instance. As a maximum cycle packing of G has size 16n, which is equal to the sum of the lower bounds, G has a cycle packing satisfying every interval if and only if G has a maximum cycle packing satisfying every interval. We claim that I admits an exact 3-cover if and only if G admits a cycle packing satisfying every interval.

“⇒” First suppose I has an exact 3-cover S_l_1,…,S_l_n. We create a cycle packing 𝒞 of G. For each j∈{ l_1,…,l_n}, we add the cycles { (s_j^1,a_j_1),(a_j_1,s_j^1)}, { (s_j^2,a_j_2),(a_j_2,s_j^2)}, { (s_j^3,a_j_3),(a_j_3,s_j^3)}. For j∉{ l_1,…,l_n}, we add the arcs (t_j^1,t_j^2),(t_j^2,t_j^3). Finally, for each i∈ [2n], we add the arcs (y_i,x_i),(x_i,t_j_i^1),(t_j_i^3,y_i), where j_i is the i-th smallest index among the indices [3n]∖{ l_1,…,l_n}. Clearly, 𝒞 is a cycle packing. Each A_i has an incoming arc, as S_l_1,…,S_l_n is an exact 3-cover. As there are exactly 2n sets not in the set cover, all of the corresponding sets T_j^l have one incoming arc in a cycle of the form { (y_i,x_i),(x_i,t_j_i^1),(t_j_i^1,t_j_i^2),(t_j_i^2,t_j_i^3),(t_j_i^3,y_i)}, and so do each X_i and each Y_i. Hence, all lower bounds are satisfied.

“⇐” Now suppose G admits a cycle packing satisfying every interval. Then, as X_i has an incoming arc for all i∈ [2n], all arcs (y_i,x_i) are included in the cycle packing. This means that there are 2n indices j∈ [3n] such that the arcs (t_j^1,t_j^2),(t_j^2,t_j^3) are included in a cycle { (y_i,x_i),(x_i,t_j^1),(t_j^1,t_j^2), (t_j^2,t_j^3),(t_j^3,y_i)} of 𝒞. From the above, there are n indices j from [3n] such that none of the sets T_j^1,T_j^2,T_j^3 has an incoming arc of this form. Hence, all these sets can only have incoming arcs from a set A_i. As each such T_j^l must have one incoming arc, it follows that for all these j, the cycles { (a_j_1,s_j^1),(s_j^1,a_j_1)}, { (a_j_2,s_j^2),(s_j^2,a_j_2)}, { (a_j_3,s_j^3),(s_j^3,a_j_3)} are included in 𝒞, so they are vertex-disjoint. Hence, the corresponding sets must form an exact 3-cover.

Theorem <ref> (restated). For partitioned permutation games even of width 2, the problem of finding an optimal solution that is weakly or strongly close to a given target allocation x is NP-hard. Recall that x_p denotes the target for the number of arcs (u,v) with v∈ V_p that belong to some directed cycle of 𝒞. Letting each I_p = [x_p, x_p] and applying Lemma <ref>, we see that finding a maximum cycle packing where each s_p(𝒞) equals x_p (so differs by at most 0) is NP-complete. Thus it is NP-hard to find a maximum cycle packing that minimizes d_1(𝒞)=max_p∈ N{|x_p-s_p(𝒞)|}, that is, to find a solution that is weakly close to a given target; similarly, it is also hard to find a strongly close solution.

In the remainder of our paper, the following problem plays an important role:

q-Exact Perfect Matching. Instance: An undirected bipartite graph B=(U,W;E), where each edge is coloured with one of the colours {1,…, q}, and numbers k_1,…,k_q. Question: Is there a perfect matching in B consisting of k_i edges of each colour i∈{1,…,q}?

For q=2, this problem is also known as Exact Perfect Matching, which, as mentioned, was introduced by Papadimitriou and Yannakakis <cit.> and whose complexity status has been open for more than 40 years.
In the remainder of this section, we give both a reduction to this problem and a reduction from this problem. We start with the former, in the proof of our next result (Theorem <ref>), from which Theorem <ref> immediately follows. Let x be an allocation for a partitioned permutation game (N,v) on a graph G=(V,A). For a maximum cycle packing 𝒞, d'(𝒞)=(|x_p_1-s_p_1(𝒞)|, …, |x_p_n-s_p_n(𝒞)|) is the unordered deviation vector of 𝒞. For a partitioned permutation game (N,v) on a directed graph G=(V,A) and a target allocation x, it is possible to generate the set of unordered deviation vectors in |A|^O(n) time by a randomized algorithm.

Let (N,v) be a partitioned permutation game with n players, defined on a directed graph G=(V,A) with vertex partition V_1, …, V_n. As mentioned, we reduce to q-Exact Perfect Matching for an appropriate value of q. From (N,v) and a vector d'=(d_1',…,d_n') with d_p'≥ 0 for every p∈ N, we define an undirected bipartite graph B=(U,W;E) with coloured edges: for each vertex v∈ V, there is a vertex v^in∈ U, a vertex v^out∈ W, and an edge v^inv^out∈ E that has colour n+1; for each arc (u,v)∈ A, there is an edge u^outv^in, which is coloured p if v∈ V_p. Let k_n+1=|V|-v(N) and, for p∈{1,…,n}, let k_p=d_p'. We observe that G has a maximum cycle packing 𝒞 with s_p(𝒞)=d_p' if and only if B has a perfect matching with k_p edges of each colour p∈{1,…,n+1}. As each k_i can only take a value between 0 and |E|=|A|+|V|, the above reduction implies that the set of unordered deviation vectors has size |A|^O(n) for any allocation x for (N,v). We can find each of these vectors in |A|^O(n) time by a randomized algorithm, as q-Exact Perfect Matching with q colours is solvable in |E|^O(q) time by a randomized algorithm <cit.>.

We cannot hope to derandomize the algorithm from Theorem <ref> without first solving the 2-Exact Perfect Matching problem in polynomial time. In order to see this, let B=(U,W;E) be a bipartite graph with |U|=|W|=n for some integer n, whose edges in E are coloured either red or blue. We construct a digraph D by replacing every edge e=uv with a directed 3-cycle on the arcs (u,w_e), (w_e,v), (v,u), where w_e is a new vertex that has only u and v as its neighbours in D. Let V_1 consist of all vertices w_e for which e is a red edge in E, and let V_2=V(D)∖ V_1. Now, (B,k,n-k) is a yes-instance of 2-Exact Perfect Matching if and only if D has a cycle packing 𝒞 with s_1(𝒞)=k and s_2(𝒞)=3n-k.

§ ILP FORMULATION In this section, we show how to find an optimal solution of a partitioned permutation game that is strongly close to a given target allocation x by solving a sequence of Integer Linear Programs (ILPs). Let (N,v) be a partitioned permutation game defined on a directed graph G=(V,A). Recall that for a maximum cycle packing 𝒞 of G, we let s_p(𝒞) denote the number of arcs (u,v) with v∈ V_p that belong to some directed cycle of 𝒞. Recall also that |x_p-s_p(𝒞)| is the deviation of country p∈ N from its target x_p if 𝒞 is chosen as optimal solution. Moreover, in the vector d(𝒞)= (|x_p_1-s_p_1(𝒞)|, …, |x_p_n-s_p_n(𝒞)|), the deviations |x_p-s_p(𝒞)| are ordered non-increasingly. Finally, we recall that 𝒞 is strongly close to x if d(𝒞) is lexicographically minimal over all optimal solutions for (N,v). In the kidney exchange literature, the following ILP is called the edge-formulation (see e.g. <cit.>). For each arc ij∈ A, let e_ij∈{0,1} be a binary edge-variable.
edge-formulation:
M^* := max_e ∑_ij∈ A e_ij subject to
∑_j: ji∈ A e_ji = ∑_j: ij∈ A e_ij for all i∈ V,
∑_j: ji∈ A e_ji ≤ 1 for all i∈ V.

The first set of constraints represents the (well-known) Kirchhoff law. The second set of constraints ensures that every vertex is covered by at most one cycle. The objective function provides a maximum cycle packing (of size M^*). This ILP has |A| binary variables and 2|V| constraints. In the following, we sequentially find the largest country deviations d_t^* (t ≥ 1) and the corresponding minimal numbers n_t^* of countries receiving that deviation. We achieve this by solving one ILP of similar size for each d_t^* and each n_t^*, so two ILPs per iteration t. By similar size, we mean that in each iteration we add |N| binary variables and a single additional constraint, where |N| ≤ |V| holds by definition and typically |N| is much smaller than |A|. Meanwhile, since in every iteration we fix the deviation of at least one additional country (we will not necessarily know which country; we only keep track of the number of countries with fixed deviation), the number of iterations is at most |N| (as t ≤ |N|). Hence, we solve no more than 2|N| ILPs, among which the largest has O(|A|+|N|^2) binary variables and O(|V|+|N|)=O(|V|) constraints. Once we have M^*, we solve the following ILP to find d_1^*:

(ILP_d_1):
d_1^* := min_e,d_1 d_1 subject to
∑_j: ji∈ A e_ji = ∑_j: ij∈ A e_ij for all i∈ V,
∑_j: ji∈ A e_ji ≤ 1 for all i∈ V,
∑_ij∈ A e_ij = M^*,
∑_ij∈ A: j∈ V_p e_ij - x_p ≤ d_1 for all p∈ N,
x_p - ∑_ij∈ A: j∈ V_p e_ij ≤ d_1 for all p∈ N.

The first three constraints guarantee that all solutions are in fact maximum cycle packings; these constraints will be part of the formulation throughout the entire ILP series. The remaining two constraints, together with the objective function, guarantee that we minimize the largest country deviation. Note that for each country p, exactly one of ∑_ij∈ A: j∈ V_p e_ij - x_p and x_p - ∑_ij∈ A: j∈ V_p e_ij is positive and exactly one of them is negative, unless both of them are zero. Moreover, as soon as we reach d_t^* ≤ 1/2, we have found a strongly close maximum cycle packing. Hence, in the remainder, we assume d_t^* > 1/2 for each t for which the series continues with t+1. (ILP_d_1) has one additional continuous variable (d_1) and 2|N|+1 additional constraints. For every country p∈ N, we have that |∑_ij∈ A: j∈ V_p e_ij - x_p| ≤ d_1^*. However, there exists a smallest subset N_1 ⊆ N (which is not necessarily unique) such that |∑_ij∈ A: j∈ V_p e_ij - x_p| = d_1^* for all p∈ N_1, and |∑_ij∈ A: j∈ V_p e_ij - x_p| ≤ d_2^* < d_1^* for all p∈ N∖ N_1. In a solution of (ILP_d_1), let n_1 be the number of countries with deviation d_1^*. We need to determine whether there is another solution of (ILP_d_1) with fewer than n_1 countries, possibly n_1^* of them, having deviation d_1^*. For this purpose we must be able to distinguish between countries unable to have deviation less than d_1^* and countries for which the deviation is at most d_2^*, the latter value being unknown at this stage. In order to make this distinction, we determine a lower bound on d_1^* - d_2^* by examining the target allocation x; we then set ε in the next ILP to be strictly smaller than this lower bound. The number of vertices of a country covered by any cycle packing is an integer. Hence, the number of possible country deviations is at most 2|N| and depends only on x. The fractional part of a country deviation is either frac(x_p) or 1-frac(x_p), where frac(x_p) denotes the fractional part of x_p.
We will distinguish between countries having minimal deviation d_1^* and the others through additional binary variables. Since later in the ILP series we will need to distinguish between countries fixed at different deviation levels, let us introduce binary variables z^t_p ∈{0,1}, where z^t_p=1 indicates that p ∈ N_t:

(ILP_N_1)  min_z^1,e ∑_p ∈ N z^1_p
s.t. ∑_j: ji∈ A e_ji = ∑_j: ij∈ A e_ij  ∀ i∈ V,
     ∑_j: ji∈ A e_ji ≤ 1  ∀ i∈ V,
     ∑_ij∈ A e_ij = M^*,
     ∑_ij∈ A: j∈ V_p e_ij - x_p ≤ d_1^* - ε(1-z^1_p)  ∀ p∈ N,
     x_p - ∑_ij∈ A: j∈ V_p e_ij ≤ d_1^* - ε(1-z^1_p)  ∀ p∈ N.

As discussed, for each country p, the left-hand side of either the fourth or the fifth constraint is negative (i.e., would be satisfied even with z^1_p=0). For those countries whose deviation cannot be lower than d_1^*, however, the (positive) left-hand side of either the fourth or the fifth constraint forces z^1_p=1. Thus, given an optimal solution z^1* of (ILP_N_1), let n_1^*:=∑_p ∈ N z_p^1* be the minimal number of countries receiving the largest country deviation. It is guaranteed that the non-increasingly ordered country deviations at a strongly close maximum cycle packing start with exactly n_1^* values d_1^*, followed by some d_2^* < d_1^*. (ILP_N_1) has |A|+|N| binary variables and 2|V|+2|N|+1 constraints.

Now, to find d_2^*, we solve the following ILP:

(ILP_d_2)  d_2^* := min_d_2,e,z^1 d_2
s.t. ∑_j: ji∈ A e_ji = ∑_j: ij∈ A e_ij  ∀ i∈ V,
     ∑_j: ji∈ A e_ji ≤ 1  ∀ i∈ V,
     ∑_ij∈ A e_ij = M^*,
     ∑_ij∈ A: j∈ V_p e_ij - x_p ≤ d_1^*  ∀ p∈ N,
     x_p - ∑_ij∈ A: j∈ V_p e_ij ≤ d_1^*  ∀ p∈ N,
     ∑_ij∈ A: j∈ V_p e_ij - x_p ≤ d_2 + z^1_p d_1^*  ∀ p∈ N,
     x_p - ∑_ij∈ A: j∈ V_p e_ij ≤ d_2 + z^1_p d_1^*  ∀ p∈ N,
     ∑_p ∈ N z^1_p = n_1^*.

(ILP_d_2) has |A|+|N| binary variables and one continuous variable (d_2), with 2|V|+4|N|+2 constraints, and guarantees that we find the minimal second-largest country deviation d_2^* while the deviation of exactly n_1^* countries is kept at d_1^*. Finding n_2^* follows a similar approach, where L is a large constant satisfying L ≥ 2(d_1^* - d_2^*):

(ILP_N_2)  min_z^1,z^2,e ∑_p ∈ N z^2_p
s.t. ∑_j: ji∈ A e_ji = ∑_j: ij∈ A e_ij  ∀ i∈ V,
     ∑_j: ji∈ A e_ji ≤ 1  ∀ i∈ V,
     ∑_ij∈ A e_ij = M^*,
     ∑_ij∈ A: j∈ V_p e_ij - x_p ≤ d_2^* - ε(1-z^2_p) + z_p^1 L  ∀ p∈ N,
     x_p - ∑_ij∈ A: j∈ V_p e_ij ≤ d_2^* - ε(1-z^2_p) + z_p^1 L  ∀ p∈ N,
     ∑_ij∈ A: j∈ V_p e_ij - x_p ≤ d_1^*  ∀ p∈ N,
     x_p - ∑_ij∈ A: j∈ V_p e_ij ≤ d_1^*  ∀ p∈ N,
     ∑_p ∈ N z^1_p = n_1^*,
     z_p^1 + z_p^2 ≤ 1  ∀ p ∈ N.

Subsequently, we follow a similar approach for all t ≥ 3, until either |N|=n_1^*+n_2^*+…+n_t^* or we terminate because d_t^* ≤ 1/2. Until reaching one of these conditions, we iteratively solve the following two ILPs, introducing |N| additional binary variables and one additional constraint in each. Let L be a large constant satisfying L ≥ d_t^*, e.g., L = d_t-1^*.
(ILP_d_t)  d_t^* := min_d_t,e,(z^i)_i=1^t-1 d_t
s.t. ∑_j: ji∈ A e_ji = ∑_j: ij∈ A e_ij  ∀ i∈ V,
     ∑_j: ji∈ A e_ji ≤ 1  ∀ i∈ V,
     ∑_ij∈ A e_ij = M^*,
     ∑_i=1^t-1 z^i_p ≤ 1  ∀ p ∈ N,
     ∑_ij∈ A: j∈ V_p e_ij - x_p ≤ d_t + ∑_i=1^t-1 z^i_p d_i^*  ∀ p∈ N,
     x_p - ∑_ij∈ A: j∈ V_p e_ij ≤ d_t + ∑_i=1^t-1 z^i_p d_i^*  ∀ p∈ N,
     ∑_ij∈ A: j∈ V_p e_ij - x_p ≤ ∑_i=1^t-1 z^i_p d_i^* + (1-∑_i=1^t-1 z^i_p)L  ∀ p∈ N,
     x_p - ∑_ij∈ A: j∈ V_p e_ij ≤ ∑_i=1^t-1 z^i_p d_i^* + (1-∑_i=1^t-1 z^i_p)L  ∀ p∈ N,
     ∑_p ∈ N z^i_p = n_i^*  ∀ i=1,…,t-1.

In the following formulation, L is a large constant satisfying L ≥ d_t^* - ε:

(ILP_N_t)  min_(z^i)_i=1^t,e ∑_p ∈ N z^t_p
s.t. ∑_j: ji∈ A e_ji = ∑_j: ij∈ A e_ij  ∀ i∈ V,
     ∑_j: ji∈ A e_ji ≤ 1  ∀ i∈ V,
     ∑_ij∈ A e_ij = M^*,
     ∑_i=1^t z^i_p ≤ 1  ∀ p ∈ N,
     ∑_ij∈ A: j∈ V_p e_ij - x_p ≤ d_t^* - ε(1-z^t_p) + ∑_i=1^t-1 z_p^i d_i^*  ∀ p∈ N,
     x_p - ∑_ij∈ A: j∈ V_p e_ij ≤ d_t^* - ε(1-z^t_p) + ∑_i=1^t-1 z_p^i d_i^*  ∀ p∈ N,
     ∑_ij∈ A: j∈ V_p e_ij - x_p ≤ ∑_i=1^t z_p^i d_i^* + (1 - ∑_i=1^t z_p^i)L  ∀ p ∈ N,
     x_p - ∑_ij∈ A: j∈ V_p e_ij ≤ ∑_i=1^t z_p^i d_i^* + (1 - ∑_i=1^t z_p^i)L  ∀ p ∈ N,
     ∑_p ∈ N z^i_p = n_i^*  ∀ i=1,…,t-1.

From the above, we conclude that the following theorem holds. For a partitioned permutation game (N,v) defined on a directed graph G=(V,A), it is possible to find an optimal solution that is strongly close to a given target allocation x by solving a series of at most 2|N| ILPs, each having O(|A|+|N|^2) binary variables and O(|V|) constraints. Note that if we just want to find a weakly close optimal solution, we can stop after solving the first ILP. A solver-based sketch of the first two programs of the series is given below.
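The following minimal sketch sets up the edge-formulation and (ILP_d_1) with the PuLP modelling library (an assumption; the authors' actual implementation is in C++). Arcs are assumed to be given as tuples (i, j), `partition` maps each country p to its vertex set V_p, and `x` holds the target allocation.

import pulp

def max_cycle_packing_size(V, A):
    """Edge-formulation: M* = max number of selected arcs subject to the
    Kirchhoff law and the at-most-one-cycle-per-vertex constraints."""
    prob = pulp.LpProblem("edge_formulation", pulp.LpMaximize)
    e = {a: pulp.LpVariable(f"e_{a[0]}_{a[1]}", cat="Binary") for a in A}
    prob += pulp.lpSum(e.values())
    for i in V:
        inflow = pulp.lpSum(e[(j, i)] for j in V if (j, i) in e)
        outflow = pulp.lpSum(e[(i, j)] for j in V if (i, j) in e)
        prob += inflow == outflow          # Kirchhoff law
        prob += inflow <= 1                # vertex in at most one cycle
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return int(pulp.value(prob.objective))

def largest_deviation(V, A, partition, x, M_star):
    """ILP_d1: minimise the largest country deviation d_1 over all
    maximum cycle packings of size M_star."""
    prob = pulp.LpProblem("ILP_d1", pulp.LpMinimize)
    e = {a: pulp.LpVariable(f"e_{a[0]}_{a[1]}", cat="Binary") for a in A}
    d1 = pulp.LpVariable("d1", lowBound=0)   # continuous
    prob += d1
    for i in V:
        inflow = pulp.lpSum(e[(j, i)] for j in V if (j, i) in e)
        outflow = pulp.lpSum(e[(i, j)] for j in V if (i, j) in e)
        prob += inflow == outflow
        prob += inflow <= 1
    prob += pulp.lpSum(e.values()) == M_star  # stay among maximum packings
    for p, Vp in partition.items():
        s_p = pulp.lpSum(e[a] for a in A if a[1] in Vp)  # arcs entering V_p
        prob += s_p - x[p] <= d1
        prob += x[p] - s_p <= d1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.value(d1)

The remaining programs of the series only add the binary variables z^t_p and the corresponding constraints to the second function, following the formulations above.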
§ SIMULATIONS

In this section we describe our simulations for ℓ=∞ in detail. Our goals are

* to examine the benefits of strongly close optimal solutions over weakly close optimal solutions or arbitrarily chosen optimal solutions when ℓ=∞;
* to examine the benefits of using credits when ℓ=∞;
* to examine the exchange cycle length distribution when ℓ=∞; and
* to compare the results for ℓ=∞ with the known results <cit.> for the other extreme case, ℓ=2.

For our fourth aim, we want to know in particular how much scope there is for improvement in the total number of kidneys when we move from ℓ=2 to ℓ=∞.

Set Up. To allow for a fair comparison, we follow the same set-up as in <cit.> and, in addition, use the data from <cit.> that was used in <cit.>. That is, for our simulations, we take the same 100 compatibility graphs G_1,…, G_100, each with roughly 2000 vertices, from <cit.>. As real medical data is unavailable to us for privacy reasons, the data from <cit.> was obtained using the data generator from <cit.>. This data generator was used in many papers and is the most realistic synthetic data generator available; see also <cit.>. For every i∈{1,…,100} we do as follows. For every n∈{4,…,10}, we perform simulations for n countries. We first partition V(G_i) into the same n sets V_i,1,…, V_i,n as in <cit.>, of equal size 2000/n (subject to rounding), so V_i,p is the set of patient-donor pairs of country p. For round 1, we construct a compatibility graph G_i^1(n) as a subgraph of G_i of size roughly 500. We add the remaining patient-donor pairs of G_i as vertices, distributed uniformly over the remaining rounds. Starting with G_i^1(n), we run an IKEP of 24 rounds in total. This gives us 24 compatibility graphs G_i^1(n),…,G_i^24(n). Just as in <cit.>, any patient-donor pair whose patient is not helped within four rounds is automatically deleted from the pool. A (24-round) simulation instance consists of the data needed to generate a graph G_i^1(n) and its successors G_i^2(n),…,G_i^24(n), together with specifications for the choice of initial allocation and optimal solution (maximum cycle packing). Our code for obtaining the simulation instances is in the GitHub repository <cit.>, along with the compatibility-graph data and the seeds for the randomization. We now discuss our choice of initial allocations and optimal solutions (maximum cycle packings).

Initial allocations. For the initial allocations y we use the Banzhaf value, the Shapley value, the nucleolus, the benefit value, and the contribution value. The Shapley value ϕ(N,v) <cit.> is defined by ϕ_p(N,v) = ∑_S ⊆ N∖{p} (|S|!(n-|S|-1)!/n!)(v(S∪{p})-v(S)). To define the next solution concept, we first introduce the unnormalized Banzhaf value ψ^0_p(N,v) <cit.>, defined by ψ^0_p(N,v):=∑_S ⊆ N∖{p} (1/2^n-1)(v(S∪{p})-v(S)). As ψ^0 may fail to be an allocation, the (normalized) Banzhaf value ψ_p(N,v) of a game (N,v) was introduced, defined by ψ_p(N,v):=ψ^0_p(N,v)/∑_q ∈ N ψ^0_q(N,v)· v(N). Whenever we mention the Banzhaf value, we mean ψ(N,v). For the Shapley value and the Banzhaf value, we were still able to implement a brute-force approach relying on the above definitions.

We now define the nucleolus. The excess of a non-empty coalition S ⊊ N for an allocation x of a game (N,v) is defined as e(S,x) := x(S)-v(S). Ordering the 2^n-2 excesses in a non-decreasing sequence yields the excess vector e(x) ∈ ℝ^2^n-2. The nucleolus of a game (N,v) is the unique allocation <cit.> that lexicographically maximizes e(x) over the set of allocations x with x_i≥ v({i}) (assuming this set is nonempty, as holds in our case). To compute the nucleolus, we use the Lexicographical Descent method of <cit.>, which is the state-of-the-art method in nucleolus computation.

The surplus of a game (N,v) is σ(N,v) := v(N) - ∑_p ∈ N v({p}). One can allocate v({p})+α_p·σ(N,v) to a player p∈ N, as long as ∑_p∈ N α_p=1. If α_p=(v(N) - v(N ∖{p})-v({p}))/∑_q∈ N(v(N) - v(N ∖{q})-v({q})), we obtain the benefit value. If α_p=(v(N) - v(N ∖{p}))/∑_q∈ N(v(N) - v(N ∖{q})), we get the contribution value. Both values are easy to compute, but may not exist if the denominator is zero, which did not happen in our simulations.
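For illustration, the brute-force computation of the Shapley value following the formula above can be sketched as follows (Python; the coalition function v is assumed to map a frozenset of players to its value, with v(∅)=0). The loop over all coalitions is exponential in n, which is feasible here since n ≤ 10.

from itertools import combinations
from math import factorial

def shapley_value(players, v):
    """Brute-force Shapley value: phi_p = sum over coalitions S not
    containing p of |S|!(n-|S|-1)!/n! * (v(S u {p}) - v(S))."""
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for r in range(n):
            for S in combinations(others, r):
                S = frozenset(S)
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (v(S | {p}) - v(S))
        phi[p] = total
    return phi

The unnormalized Banzhaf value is obtained from the same loop by replacing the weight with 1/2^(n-1), followed by the normalization above.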
Optimal solutions. We compute weakly and strongly close optimal solutions by solving a sequence of ILPs, as described in Section <ref>. We used g++ version 11 on Ubuntu 20.04 for our C++ implementation. The values v(S) of the partitioned permutation games used in the ILPs are obtained by solving a maximum weight perfect matching problem (see Section <ref>), for which we use the package of <cit.>.

Computational environment and scale. We run our simulations both without and with (“+c”) the credit system, and for the settings where an arbitrary optimal solution (“arbitrary”), a weakly close optimal solution (“d_1”), or a strongly close optimal solution (“lexmin”) is chosen. This leads to the following five scenarios for each of the five selected solution concepts (the Shapley value, nucleolus, Banzhaf value, benefit value, and contribution value): arbitrary, d1, d1+c, lexmin, and lexmin+c. Note that for the arbitrary scenario, the use of credits is irrelevant. Hence, in total, we run the same set of simulations for 5×5=25 different combinations of scenarios and initial allocations. As we have seven different country numbers n and 100 initial compatibility graphs G_i, our total number of 24-round simulation instances is 25 × 7 × 100 = 17500. All simulations were run on a dual-socket server with Xeon Gold 6238R processors with 2.2 GHz base speed and 512GB of RAM, where each simulation was given eight cores and 16GB of RAM.

Evaluation measures. Let y^* be the total target allocation of a single simulation instance, i.e., y^* is obtained by taking the sum of the 24 initial allocations of the 24 rounds. Let 𝒞^* be the union of the chosen maximum cycle packings of the 24 rounds. We use the total relative deviation, defined as ∑_p ∈ N |y_p^* - s_p(𝒞^*)|/|𝒞^*|. For each choice of initial allocation and choice of scenario, we run 100 instances. We take the average of the 100 total relative deviations to obtain the average total relative deviation. Taking the maximum relative deviation max_p ∈ N |y_p^* - s_p(𝒞^*)|/|𝒞^*| gives us the average maximum relative deviation as our second evaluation measure. As we shall see, this evaluation measure leads to the same conclusions.
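A minimal sketch of the two evaluation measures, under the assumption that |𝒞^*| is taken as the total number of transplants in the union of the chosen packings (i.e., ∑_p s_p(𝒞^*)):

import numpy as np

def relative_deviations(y_star, s_star):
    """Total and maximum relative deviation of a 24-round instance;
    y_star[p] is the summed target of country p, s_star[p] the number of
    its patients helped in the union C* of the chosen packings."""
    y, s = np.asarray(y_star, float), np.asarray(s_star, float)
    size = s.sum()                       # |C*|, assumed = total transplants
    total = np.abs(y - s).sum() / size
    maximum = np.abs(y - s).max() / size
    return total, maximum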
Simulation Results for ℓ=∞ and Comparison with ℓ=2. In Figure <ref> we display our main results. In this figure, we compare the different solution concepts under the different scenarios for ℓ=∞. The figure also shows the effects of weakly and strongly close solutions and of the credit system. As the solution concepts have different computational complexities, we believe such a comparison might be helpful for policy makers when choosing a solution concept and scenario. As expected, using an arbitrary maximum cycle packing in each round makes the kidney exchange scheme significantly more unbalanced, with average total relative deviations over 4% for all initial allocations y. The effect of both selecting a strongly close solution (to ensure being close to a target allocation) and using a credit function (for fairness, to keep deviations small) is significant. These observations are in line with the results for the setting where ℓ=2 <cit.>. However, for ℓ=2, the effect of using arbitrary optimal solutions is worse, while the deviations are smaller than for ℓ=∞ when weakly close or strongly close optimal solutions are chosen.

From Figure <ref> we see that the Shapley value and the Banzhaf value in the lexmin+c scenario provide the smallest deviations from the target allocations (just as when ℓ=2 <cit.>). However, the differences are small and, as mentioned, which solution concept to select is up to the policy makers of the IKEP. Moreover, from Figure <ref> we can also see the benefits of ℓ=∞ over ℓ=2 for the Shapley value. Figure <ref> shows that if we use the average maximum relative deviation instead of the average total relative deviation, the same pattern as in Figure <ref> emerges.

From Figure <ref> we see that with ℓ=∞ instead of ℓ=2, it is possible to achieve 46% more kidney transplants. The ℓ=∞ setting is not realistic, though. Namely, Figure <ref> shows that long cycles, with even more than 400 vertices, may occur. Figure <ref> also shows that these long cycles all happen in the first round. We note that before our experiments, the trade-off between the increase in the number of transplants and the increase in cycle length had not been investigated by simulations. Finding out about this was our main motivation for the simulations.

§ CONCLUSIONS

We introduced the class of partitioned permutation games and proved a number of complexity results that contrast known results for partitioned matching games. Our new results guided our simulations for IKEPs with up to ten countries, with exchange bound ℓ=∞. Our simulations showed a significant improvement in the total number of kidney transplants over the case where ℓ=2 <cit.>, at the expense of long cycles. In our simulations, all countries had the same size. In <cit.>, simulations were also done for countries of three different sizes, but these led to the same conclusions. We expect the same for ℓ=∞, as confirmed by a robustness check for n=6 (see the GitHub repository <cit.>). For future research we will consider the more realistic exchange bounds ℓ∈{3,4,5}. We note that the simulations done in <cit.> were only for ℓ=3; for a more limited number of solution concepts; for IKEPs with up to four countries; and only for scenarios that use weakly close optimal solutions. They also used different data sets. For our follow-up study for ℓ∈{3,4,5}, we must now also overcome, just like <cit.>, the additional computational obstacle of not being able to compute, in polynomial time, an optimal solution for a compatibility graph in a KEP round and the values v(S) of the associated permutation game (see Theorem <ref>). Current techniques for ℓ∈{3,4,5} therefore involve, besides ILPs based on the edge-formulation, ILPs based on the cycle-formulation, with a variable for each cycle of length at most ℓ (see, for example, <cit.>). Hence, computing a single value v(S) will become significantly more expensive, and even more so for increasing ℓ. For expensive solution concepts, such as the Shapley value or the nucleolus, we must compute an exponential number of values v(S). Without new methods, we expect it will not be possible to do this for simulations with up to the same number of countries (ten) as for ℓ=∞. Finally, we note that matching games and permutation games are usually defined on edge-weighted graphs. It is readily seen that all our positive theoretical results generalize to this setting. In kidney exchange, the primary goal is still to help as many patients as possible, but edge weights might be used to represent transplant utilities. Hence, it would also be interesting to run simulations in the presence of edge weights. We leave this as future research.

Acknowledgments. Benedek was supported by the National Research, Development and Innovation Office of Hungary (OTKA Grant No. K138945); Biró by the Hungarian Scientific Research Fund (OTKA, Grant No. K143858) and the Hungarian Academy of Sciences (Momentum Grant No. LP2021-2); Csáji by the Hungarian Scientific Research Fund (OTKA Grant No. K143858) and the Momentum Grant of the Hungarian Academy of Sciences (Grant No. 2021-1/2021); and Paulusma was supported by the Leverhulme Trust (Grant RF-2022-607) and EPSRC (Grant EP/X01357X/1). Moreover, this work has used Durham University's NCC cluster. NCC has been purchased through Durham University's strategic investment funds, and is installed and maintained by the Department of Computer Science. In particular, we thank Rob Powell for his help with setting up our simulations on the NCC cluster.
http://arxiv.org/abs/2312.16653v1
{ "authors": [ "Márton Benedek", "Péter Biró", "Gergely Csáji", "Matthew Johnson", "Daniël Paulusma", "Xin Ye" ], "categories": [ "cs.GT", "cs.CC", "cs.DS" ], "primary_category": "cs.GT", "published": "20231227175800", "title": "Computing Balanced Solutions for Large International Kidney Exchange Schemes When Cycle Length Is Unbounded" }
Anomaly component analysis

Romain Valla (LTCI, Télécom Paris, Institut Polytechnique de Paris), Pavlo Mozharovskyi (LTCI, Télécom Paris, Institut Polytechnique de Paris), Florence d'Alché-Buc (LTCI, Télécom Paris, Institut Polytechnique de Paris)

December 26, 2023
====================

At the crossway of machine learning and data analysis, anomaly detection aims at identifying observations that exhibit abnormal behaviour. Be it measurement errors, disease development, severe weather, defective production items, failed equipment, financial frauds, or crisis events, their on-time identification and isolation constitute an important task in almost any area of industry and science. While a substantial body of literature is devoted to the detection of anomalies, little attention is paid to their explanation. This is mostly due to the intrinsically unsupervised nature of the task and the non-robustness of exploratory methods like principal component analysis (PCA). We introduce a new statistical tool dedicated to the exploratory analysis of abnormal observations, using data depth as a score. Anomaly component analysis (shortly ACA) is a method that searches for a low-dimensional data representation that best visualises and explains anomalies. This low-dimensional representation not only allows one to distinguish groups of anomalies better than state-of-the-art methods, but also provides an explanation for anomalies that is linear in the variables and thus easily interpretable. In a comparative simulation and real-data study, ACA also proves advantageous for anomaly analysis with respect to methods present in the literature.

Keywords: dimension reduction, anomaly detection, data depth, explainability, data visualization, robustness, projection depth.

§ INTRODUCTION

Anomaly detection is a branch of machine learning which aims at finding unusual patterns in the data and allows one to identify observations that deviate significantly from normal behavior; see, e.g., <cit.> (and references therein) for surveys of existing anomaly detection methods. Anomalies can be represented by abnormal body cells or deviating health parameters, failed equipment or defective items, network intrusions or financial frauds, and need to be identified for undertaking further action. Detecting anomalies can help to start timely treatment or handling, improve product quality, and ensure operational safety. To develop a reaction policy, deeper insight into the anomalies' nature is required, which further demands explaining the reasons for abnormality. A number of works underline the importance of explainability in statistics and machine learning, e.g., <cit.>, including the recent survey by <cit.>. This task of explainability, undergoing active development with several solutions proposed in the supervised setting (e.g., variable importance for random forests <cit.> or concept-based explanation for neural networks <cit.>), is particularly challenging in the unsupervised setting, not only due to the absence of feedback, but also because of the potentially infinite variety of possible abnormalities. In the current article, we focus on the multivariate setting, where observations possess d (metric) quantitative properties.
More precisely, we consider a (training) data set X={x_1,...,x_n}⊂ℝ^d that consists of n observations in a d-dimensional Euclidean space. This data set may or may not contain anomalies, with this information being unknown (at the training stage) in the unsupervised setting considered here. Explanation of an anomaly x∈ℝ^d can in this case be done, e.g., by importance ranking of its constituting variables (or their combinations, to account for non-linearity). Even more important, possibly based on this information, is insightful data visualization, which allows one to identify anomalies and (simultaneously) the features of the data causing them.

The value of a meaningful and easily interpretable visualization cannot be overestimated in practice, and it is of high importance for solving a number of practical tasks. Several methods serving this purpose have won over practitioners, and are widely used and implemented in numerous software packages employed in various areas of industry and science. These, briefly overviewed right below, fail to underline anomalies, mostly for two reasons: either (a) they lack the robustness necessary to “notice” anomalies or (b) they simply do not aim at highlighting them.

§.§ Existing methods for meaningful visualization of anomalies

A number of methods at hand, though not intrinsically designed for the anomaly detection framework, can be useful to provide meaningful visualization. In particular, dimension-reduction techniques are effective in providing representation spaces that can, in certain cases, highlight anomalies. These can be enhanced by explanation capacity (if available); see, e.g., <cit.> for a recent survey.

Linear methods Linear methods provide explainable data visualization by searching for a new basis in ℝ^d with components being linear combinations of the input variables. Principal component analysis (PCA) computes (up to) d mutually orthogonal components such that the variance in projection on each of them is maximized <cit.>. In this way, the first principal component corresponds to the direction in projection on which the data variance is maximized. The second principal component then maximizes the variance of the data in the linear subspace of ℝ^d orthogonal to the first component. The third principal component maximizes the variance in the linear subspace of ℝ^d orthogonal to the first two components; this process continues until either the required number of components is found or the entire variance is explained. Plotting pairwise components provides insightful visualization, together with other visualizations employed for clustering or revealing hidden structure in the data. Due to its simplicity of understanding and speed of execution, PCA has remained for decades one of the most used data visualization and explanation tools for practitioners. Robust principal component analysis (robPCA) has been designed to compensate for the presence of anomalies in the data, because anomalies' values, amplified by squaring, distort the variance-maximizing directions found by traditional PCA <cit.>. While the classic principal component analysis methods describe Gaussian (elliptical) data well, independent component analysis (ICA) allows a departure from this limitation by searching for non-Gaussian, statistically independent features <cit.>.
Non-linear methods Non-linear methods, different from those exploiting first-order stochastic dependency (and thus categorized as linear), are based on non-linear geometric transforms, often performed by applying a kernel function to between-point distances. Thus, kernel principal component analysis (kPCA) can be seen as an extension of traditional PCA using the “kernel trick” <cit.> to handle data in the (infinite-dimensional) reproducing kernel Hilbert space (RKHS) induced by a properly chosen kernel function <cit.>, in which the principal components are then searched. Multi-dimensional scaling (MDS) makes use of kernel-transformed pairwise dissimilarities (often expressed as distances) of centered data to construct a lower-dimensional representation by means of the eigenvalue decomposition of the kernel matrix <cit.>. To construct an insightful visualization, t-distributed stochastic neighbor embedding (t-SNE) first defines a similarity distribution on the space of objects (using the Euclidean distance or an alternative measure), and then maps it to another, low-dimensional, distribution by minimizing the asymmetric Kullback-Leibler divergence between the two <cit.>.

Further methods Further methods have been developed that can naturally serve for insightful visualization and that do not logically fall under either of the two categories mentioned above. Non-negative matrix factorization (NMF) decomposes the data matrix into a product of two tentatively smaller (and thus naturally lower-rank) matrices under a non-negativity constraint, so as to minimize, e.g., the Frobenius norm or the Kullback-Leibler divergence between the data matrix and the product <cit.>. Locally linear embedding (LLE) is another non-linear dimension-reduction method, which proceeds in two stages <cit.>: first, each point is reconstructed as a weighted sum of its neighbors, and second, a lower-dimensional space is constructed (based on an eigenvalue decomposition) searching for the reconstruction using the weights from the first stage. The local linearity is then governed by the predefined number of neighbors and the distance used. Laplacian eigenmaps (LE) approximate data in a lower-dimensional manifold using the neighborhood-based graph, with eigenfunctions of the Laplace–Beltrami operator forming the embedding dimensions <cit.>. The autoencoder <cit.> consists of an artificial neural encoder and decoder connected by an (information-compressing) bottleneck. The latent (neuronal) signals of this bottleneck can then be used to visualize the data, while the reconstruction error allows one to detect anomalies.

Explainability of anomalies Explainability of anomalies constitutes an open question and an active field of research, with very few explicit solutions available in the unsupervised setting (different from the supervised one; see, e.g., <cit.> for a survey). One of them is depth-based isolation forest feature importance <cit.>, a variable-importance method for the isolation forest <cit.> that ensures both global (i.e., on the level of the trained procedure) and local (i.e., for a particular (new) observation in question) explainability by providing quantitative information on how much each variable influenced the abnormality decision. In the same group one can put cell-wise outlier detection <cit.>, which identifies the cells (i.e., an observation's variables, thus avoiding labeling the entire observation as an outlier) of the outlying observations that contaminate the data.
Generally speaking, though explainability of anomalies can be seen as an unresolved issue, insightful (linear) visualization methods provide variable-wise information about anomalies, if those can be identified. With explainability of anomalies constituting an important contemporary challenge, it seems that, in view of the potentially rich nature of anomalies, their identification (and interpretation) as a side effect is unlikely, though not excluded. That is, special methods focused on the search for anomalies are required, not only to find them but also to interpret them.

§.§ The proposed approach

In the current article, we propose a versatile method for anomaly visualization and interpretation, targeting subspaces relevant for anomalies. The proposed anomaly component analysis (ACA) sequentially constructs an orthonormal basis that best unveils the anomalies to the human eye (additionally splitting them according to geometric grouping) and at the same time allows for their automated interpretation. The proposed method largely exploits the concept of the statistical data depth function, and in particular depth notions satisfying the weak projection property introduced by <cit.> and extensively studied later in the computational context by <cit.> and <cit.>. More precisely, a direction is searched that allows for identification of the most outlying (cluster of) anomalies, while in subsequent steps such a direction is searched for in the linear orthogonal complement of the previously found directions. Besides the intrinsic (and indispensable) robustness and depth-inherited affine invariance, ACA possesses an attractive computational complexity of 𝒪(pkdn^2) for the entire data set of size n in dimension d, with p being the number of searched components and k being the number of necessary directions, whose choice is discussed in Section <ref>.

§.§ Outline of the article

The rest of the article is organized as follows. After a short reminder on data depth, Section <ref> introduces the ACA method, suggests an algorithm for its computation, and discusses the choice of the relevant parameters. Section <ref> is focused on the visual comparison of ACA as a dimension-reduction tool with existing methodologies, on simulated data sets possessing different properties. Section <ref> provides insights on explainable anomaly detection with data depth employed following the ACA philosophy, in a simulated setting (where the correct direction is known), in a comparison with PCA, robPCA, and ICA, as well as DDC and DIFFI. Section <ref>, in an application to real data sets, provides insightful visualizations as well as explanations for them, unknown to the preceding literature. Section <ref> concludes, and enumerates the contents of the Supplementary Materials.

§ METHOD

ACA is based on the concept of data depth, and more precisely on the class of depth notions that satisfy the weak projection property <cit.>. Thus, in this section, we first briefly recall the notion of data depth (Section <ref>), then introduce the method of anomaly component analysis (Section <ref>), followed by the algorithm (Section <ref>) and a discussion of the choice of its parameters (Section <ref>). Denote X={x_1,...,x_n} a data set of n points in ℝ^d (we use the set notation in a slight abuse, since ties are possible but do not distort the proposed methodology), and let x∈ℝ^d be an arbitrary point of the space.
§.§ Background on data depth

In the multivariate setting, i.e., for data whose elements are points in the d-variate Euclidean space ℝ^d, a statistical data depth function is a mapping

D : ℝ^d × ℝ^n × d → [0, 1], (x, X) ↦ D(x|X),

which satisfies the properties of <cit.>:

* affine invariance: D(Ax + b | {Ax_1 + b,...,Ax_n + b}) = D(x|X) for any b∈ℝ^d and any non-singular d × d matrix A;
* monotonicity on rays: for any x^*∈ argmax_x∈ℝ^d D(x|X) and any x∈ℝ^d, D(x|X) ≤ D(xβ + x^*(1 - β)|X) with β∈ (0, 1);
* vanishing at infinity: lim_‖x‖→∞ D(x|X) = 0;
* upper semicontinuity: all upper level sets D_α(X)={x∈ℝ^d : D(x|X)≥α} (= depth regions) are closed for any α∈[0,1].

(In view of the descriptive nature of the proposed methodology, we stick to the empirical notation throughout the article.)

While the definition is general, a number of particular depth notions have been developed throughout recent decades, with these notions differing in statistical as well as computational properties and suitable for various applications. As we shall see below, any notion of data depth that satisfies the projection property, as well as possibly another directional anomaly score defined in a similar manner, can be used for ACA. In what follows we focus on the projection depth and its asymmetric version, since these are very robust <cit.> and everywhere positive, thus allowing one to identify and distinguish anomalies also beyond the convex hull of the data.

Projection depth <cit.> is defined in the following way:

D^pd(x|X) = min_u∈𝕊^d-1 1 / ( |u^⊤x - med(u^⊤X)|/MAD(u^⊤X) + 1 ),

with u^⊤X being a shortcut for {u^⊤x_1,...,u^⊤x_n}, where med and MAD denote the (univariate) median and the median absolute deviation from the median, respectively, and 𝕊^d-1 stands for the unit hypersphere in ℝ^d.

With projection depth retaining a certain degree of symmetry (of its depth regions), the asymmetric projection depth has been designed to reflect non-symmetric behaviour of the data:

D^apd(x|X) = min_u∈𝕊^d-1 1 / ( (u^⊤x - med(u^⊤X))_+/MAD^+(u^⊤X) + 1 ),

with (a)_+ = max{a,0} being the positive part of a and MAD^+ denoting the median of the positive deviations from the median. As mentioned above, both projection and asymmetric projection depths belong to the class of depths satisfying the (weak) projection property, which includes depths for which it holds:

D(x|X) = inf_u∈𝕊^d-1 D^1 (u^⊤x|u^⊤X),

with D^1 standing for a univariate depth. More precisely, in what follows we shall make much use of the optimal direction u^*∈𝕊^d-1. Furthermore, it is noteworthy that, following (<ref>), such depths can be (well) approximated (from above) by means of multiple computations of solely univariate depths; <cit.> develop time-efficient algorithms for approximate computation of depths satisfying the projection property.
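For concreteness, the following minimal sketch approximates the projection depth of (<ref>) using k random directions drawn uniformly on the unit hypersphere; the article relies on refined optimization algorithms (e.g., a spherical Nelder-Mead modification) rather than plain random search, so this block only illustrates the principle.

import numpy as np

def projection_depth(x, X, k=1000, rng=None):
    """Approximate projection depth of x w.r.t. data X (n x d) by
    minimising the univariate depth over k random directions; returns
    the (approximated) depth value and the minimising direction."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    U = rng.standard_normal((k, d))
    U /= np.linalg.norm(U, axis=1, keepdims=True)    # directions on S^{d-1}
    proj = X @ U.T                                    # n x k projections
    med = np.median(proj, axis=0)
    mad = np.median(np.abs(proj - med), axis=0)
    mad[mad == 0] = np.finfo(float).eps               # guard degenerate directions
    outl = np.abs(U @ x - med) / mad                  # projected outlyingness
    best = np.argmax(outl)                            # max outlyingness = min depth
    return 1.0 / (1.0 + outl[best]), U[best]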
§.§ Anomaly component analysis

In this subsection we introduce the novel method: anomaly component analysis, or shortly ACA. ACA searches for orthogonal components in a subspace of ℝ^d to provide a meaningful basis representation that highlights and explains anomalies in the data. Different from existing visualization and explanation methods, which optimize a predefined criterion (normally based on the majority of the data) to obtain a meaningful basis, here the goal is to focus on underlining anomalies, and thus these should be the object of optimization. We tackle this question by identifying anomalies based on minimal depth value and using the minimizing direction(s) to construct an (orthogonal) basis in ℝ^d.

We start with an intuitive explanation of the ACA method. To facilitate the exposition, let us consider as an example a data set X containing n=100 points in ℝ^3 with 10 anomalies, in two groups of 5 anomalies each (red triangles and orange reverse triangles); see Figure <ref>, top left. It is important to mention that for data in higher dimensions no such visualization is possible. In the first step, a point x∈X with minimal depth (among the n points) is searched, and its direction u_1, being the argument of (<ref>), is taken as the first anomaly component (AC). This direction clearly identifies the most significant group of anomalies (red triangles); see Figure <ref>, top right. In the second step, again a point x∈X with smallest depth is searched, while the search space (for u in (<ref>)) is now limited to the orthogonal complement of u_1 (red plane in Figure <ref>, top right; see also Figure <ref>, bottom left, for this bivariate linear space). The minimizing direction u_2, which in turn identifies the second group of anomalies (orange reverse triangles), is taken as the second component. Figure <ref>, bottom right, depicts the constructed bivariate space on the basis of u_1 and u_2, which clearly distinguishes the two groups of anomalies. Although we stop here for our example in ℝ^3, the process continues, each time searching in the orthogonal complement of all anomaly components found in the earlier steps.

In the following subsection, we formally state the algorithm for ACA in pseudo-code, accompanied by a brief step-wise explanation.

§.§ The algorithm

We start by introducing the following depth computation problem (valid for an arbitrary univariate depth notion D^1), which is very similar to (<ref>):

D_B(x|X) = min_u∈𝕊^B D^1 (u^⊤x|u^⊤X),

where 𝕊^B stands for the unit hypersphere in the space spanned by the columns of the basis matrix B. Note that, though the real dimension of u here is limited by the number of columns of B, it is a vector (of length 1) in the original space ℝ^d. To be used in what follows, denote an algorithmic routine which computes (<ref>) and returns both the depth value and its minimizing direction, taking as parameters:

* the point x∈ℝ^d for which the depth is computed;
* the data set X⊂ℝ^d;
* the search-basis matrix B;
* the chosen notion of data depth;
* the number k of directions used to approximate the depth value (see Section <ref> for insights on the choice of k);
* further parameters necessary for the optimization procedure <cit.>.

A number of useful algorithms for this routine can be found in <cit.>; we thus simply refer the reader to this article for the computational questions.

Algorithm <ref> implements the general ACA method, while Algorithm <ref> gathers the steps for finding the ith anomaly component. Algorithm <ref> starts with the empty set of anomaly components and the full-space basis [e_1,...,e_d] encoded by the matrix B=I_d, with I_d being the d × d identity matrix. On each step of Algorithm <ref>, the ith anomaly component is (found and) added to the set of components (stored in a matrix, say U) until the pre-specified number of anomaly components p has been reached. Further, on each step, the size of the basis matrix B is reduced by one column so that the basis remains orthogonal to all the components found up to step i (saved in the matrix U).
The search of the ith anomaly component is performed by Algorithm <ref>. Algorithm <ref> simply goes through all points x∈X and selects the one delivering minimal depth:

min_x∈X D_B(x|X) = min_x∈X min_u∈𝕊^B D^1 (u^⊤x|u^⊤X),

while the search is performed in the linear subspace of ℝ^d defined by the matrix B, and the minimal-depth-minimizing direction is returned in addition to the depth value. The search is implemented by means of the algorithmic routine described above, and the obtained direction is assured to have the anomalies on its positive side. In order to do this, the depth notion, the number of directions used for depth approximation, as well as further algorithmic parameters shall be chosen; we discuss these choices right below in the following subsection.

Algorithm <ref> possesses complexity 𝒪(pkdn^2), which can be decomposed as follows: with the number of searched components p, the number of depth-approximating directions k, and the dimension d obviously entering the complexity linearly, the factor n^2 is explained by the fact that, for each component, all n points should be revisited, while each time all n points should be projected on each direction (inside the optimization routine). While the approximation accuracy clearly depends on k, one can suppose its polynomial dependence on d. In this article, we employed the spherical modification of the Nelder-Mead algorithm, delivering the best results as studied by <cit.>. Regarding the accuracy, which (though not exact) can still be sufficient for the components' search: (a) the work by <cit.> sheds light on the algorithmic convergence of the simplest approximation techniques, and (b) the chosen values of k delivered highly satisfactory results in all experiments conducted for this article.
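A compact sketch of Algorithms <ref>-<ref> follows (Python/NumPy). Two simplifying choices of ours: the depth is approximated over random directions instead of the optimization routine, and the orthogonal complement is maintained via an SVD; the routine for the restricted depth is included to keep the block self-contained.

import numpy as np

def depth_in_basis(x, X, B, k, rng):
    """Approximate min over u in span(B), |u|=1, of the univariate
    projection depth of u'x w.r.t. u'X; returns depth and direction."""
    m = B.shape[1]
    U = (B @ rng.standard_normal((k, m)).T).T        # directions in span(B)
    U /= np.linalg.norm(U, axis=1, keepdims=True)
    proj = X @ U.T
    med = np.median(proj, axis=0)
    mad = np.median(np.abs(proj - med), axis=0)
    mad[mad == 0] = np.finfo(float).eps
    outl = np.abs(U @ x - med) / mad
    j = np.argmax(outl)
    return 1.0 / (1.0 + outl[j]), U[j]

def anomaly_components(X, p, k=1000, rng=None):
    """Sketch of ACA: sequentially find p orthonormal anomaly components;
    the i-th component minimises the minimal depth over all points of X,
    searched in the orthogonal complement of the previous components."""
    rng = np.random.default_rng(rng)
    n, d = X.shape                                   # assumes p <= d
    B = np.eye(d)              # orthonormal basis of the current search space
    components = []
    for _ in range(p):
        best_depth, best_u, best_x = np.inf, None, None
        for x in X:            # point delivering the minimal depth
            depth, u = depth_in_basis(x, X, B, k, rng)
            if depth < best_depth:
                best_depth, best_u, best_x = depth, u, x
        if best_u @ best_x < np.median(X @ best_u):
            best_u = -best_u   # ensure anomalies on the positive side
        components.append(best_u)
        # restrict further search to the orthogonal complement of best_u:
        # project it out of B and keep the (rank-reduced) column space
        P = np.eye(d) - np.outer(best_u, best_u)
        B = np.linalg.svd(P @ B, full_matrices=False)[0][:, : B.shape[1] - 1]
    return np.column_stack(components)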
Right below, we discuss further the choice of parameters when performing ACA.

§.§ Choice of parameters

When applying ACA, several parameters need to be set:

* the number of anomaly components to search for,
* the notion of data depth,
* the number of directions used to approximate a point's depth,
* optimization parameters;

we discuss these choices in detail right below.

Number of anomaly components As in any dimension-reduction (or visualization) method, the dimension p∈{1,...,d} of the obtained space is guided either by prior knowledge about the data (generating process) or by the computational resources (needed for further analysis). Though the choice of p is entirely heuristic, it can be guided by a priori information about the (expected) anomalies in the data, e.g., the possible dimension of the anomalies' subspace or the number of their groups.

Depth notion The chosen notion of statistical depth function can have an important influence on ACA's performance. Throughout this article, we stick to the notion of projection depth, with sparse use of the asymmetric projection depth (see Section <ref> below), for the reasons mentioned in Section <ref>. For a detailed discussion on the choice of the depth notion in the multivariate setting, we refer the reader to the recent survey by <cit.>. Furthermore, theoretically, any (efficiently optimizable) univariate directional score can be used instead as well; precaution shall be exercised regarding its statistical properties, though.

Number of directions The number of directions k∈ℕ_+ has a profound influence on the precision of the depth computation, and thus on the found direction(s) of the anomaly component(s). <cit.> prove that, even when the depth is computed with respect to a probability distribution, the number k of directions shall grow exponentially with the dimension for a uniformly good depth approximation, if these directions are drawn at random. Further, <cit.> show that this number can be substantially reduced when using an optimization algorithm (adapted to the task), hopefully (and at least heuristically) departing from the exponential dependency on d, and indicate that in experiments with dimension up to d=20 the (zero-order) optimization algorithm can converge fast, requiring only a few hundred directions. In their work, the authors propose a comprehensive comparison of a number of algorithms in various settings, with the lead taken by the Nelder-Mead algorithm, the sphere-adjusted coordinate descent, and the refined random search.

To decide on the number of directions in the application at hand, we suggest a simple verification following the very principle of the class of depths satisfying the projection property (<ref>): “the smaller the approximated depth value, the better”. That being said, even without knowing the true depth value, it is reasonable to choose the method and the number of directions delivering the smallest depth value. A simple way to study the optimization behavior is a visual inspection of the development of the (minimal) depth value throughout the optimization iterations, for at least several points from the data set. For samples from the Gaussian and Cauchy distributions (with varying dimension), this is illustrated in Figure <ref>, where the repeating jumps of the depth value indicate re-starts of the optimization routine in the hope of avoiding local minima.

Optimization parameters To obtain more insights about the choice of the optimization parameters, including the optimization algorithm itself, the reader is referred to the article by <cit.>. Furthermore, the methodology described above can be employed here on a subset of X, as well as in a larger simulation setting resembling the real data at hand.

§ VISUAL COMPARISON

In this section, the visualization capacity of ACA is explored in a comparative simulation study. After a brief discussion of existing visualization tools and their problems (Section <ref>), we present the simulation settings (Section <ref>). Further, Section <ref> presents the results compared to those obtained with the most used (interpretable) dimension-reduction methods, while the rest is reserved for the real-data study in Section <ref>.

§.§ On existing dimension-reduction tools

With the task of Section <ref> being the examination of ACA's dimension-reduction performance compared to existing methods, we start by selecting from the state of the art. For interpretability reasons, methods with components linear in the input variables are preferred. With PCA being the natural candidate due to its wide spread, including robPCA as well is necessary in the presence of anomalies, for a fair comparison. Here, robPCA is employed in the same manner as traditional PCA, where the mean and covariance matrix are estimated robustly using the minimum covariance determinant <cit.> with the standard value of the parameter α=(n+d+1)/2n (the portion of anomalies in all our experiments never exceeds this threshold); see also <cit.> for the fast randomized algorithm. To allow for 'non-Gaussian' methodology, we further include ICA.
Finally, we include the auto-encoder: being a widely used neural-network-based tool able to learn highly non-linear components, it provides a visualization in the form of the (latent) variables of the 'bottleneck' layer. (The auto-encoder used has 10-5-2-5-10 layers and was trained using the stochastic gradient descent algorithm with the 𝕃_2 loss during 100 epochs, with mini-batch size 10 and learning rate 0.005.)

Further methods like kPCA, t-SNE, MDS, LLE, or LE, though based on components non-linear in the input variables, constitute powerful dimension-reduction machinery and can also provide insightful visualizations. Due to this non-linearity, but also for conciseness, we skip them in this simulation comparison, while including them later in Section <ref> for the analysis of real data. Though mentioned above for completeness, we exclude NMF due to the positivity of its components (which is not justified for general types of data, but only in specific applications, e.g., audio signals, images, text, etc.).

§.§ Explored simulation settings

Below, we describe five distributional settings (for normal data) used in the current visualization comparison, but also later throughout the article. All of them follow the (famous and most adopted in the literature) Huber contamination model <cit.>:

Y 𝒟= (1 - ϵ)X + ϵX̃,

where the random vector X (here, in ℝ^d) represents normal data while X̃ stands for outliers (and 𝒟= denotes equality in distribution). More precisely, in a sample of size n, the ⌊ n · (1 - ϵ) ⌋ points of normal data are generated according to one of the following scenarios:

* Setting 1 – MVN(A09): Normal data are generated as i.i.d. copies of the multivariate normally distributed random vector X ∼𝒩(0_d, Σ_A09(d)), where Σ_A09(d), consisting of {σ_i,j}_i,j=1^d, is the Toeplitz matrix with σ_i,j=0.9^|i-j|, i.e., ensuring various values of correlation between different variables; see <cit.> for precisely this form.
* Setting 2 – MVN(hCN): This setting copies MVN(A09), with the covariance matrix replaced by a matrix with a high condition number <cit.>.
* Setting 3 – ELL(t(5)): Multivariate elliptical Student-t(5) distribution: X 𝒟= μ + ΛUR, where U∼𝒰(𝕊^d-1) is uniformly distributed on the unit hypersphere, R∼ St(5) is a Student-t(5)-distributed random variable, ΛΛ^⊤=Σ, and μ(=0_d)∈ℝ^d and Σ=Σ_A09(d) are the distribution's center and scatter, respectively. ELL(t(5)) has heavier tails than MVN(A09).
* Setting 4 – EXP: The random vector of normal data is here X = (X_1,...,X_d)^⊤ with mutually independent X_i∼ℰ(λ_i), i=1,...,d, where the X_i are exponentially distributed random variables with parameters λ_i=1/β_i and β_i∼𝒰([0.1,1]). EXP is asymmetric and possesses high degrees of skewness w.r.t. different variables.
* Setting 5 – MV-Sk: Bivariate normal distribution skewed along the first variable according to <cit.>: X = (X_1, X_2)^⊤ with X_1 ∼𝒩_Sk(0, 1, α) and X_2 ∼𝒩(0, 1/4), where α=10 is the skewness parameter.

§.§ Two-dimensional plots

In this section we focus on the first four settings of Section <ref> in order to compare components-based visualization, leaving MV-Sk for Section <ref>. Fixing the portion of anomalies to ϵ=0.05, in each case we generate the n - ⌊ n · (1 - ϵ) ⌋ contaminating data points from X̃∼𝒩(μ̃, I_d / 20), with μ̃ placed in the direction of the last principal component of the PCA of the normal data, at the distance of 1.25 times the largest Mahalanobis distance among the normal data points. We thus obtain Y={y_1,...,y_n} by fixing n=1000 and d=10.
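For the MVN(A09) case, this data-generating process can be sketched as follows (Python/NumPy; the use of the empirical covariance for both the principal components and the Mahalanobis distances is our assumption):

import numpy as np

def make_contaminated_sample(n=1000, d=10, eps=0.05, rng=None):
    """Sketch: normal data from N(0, Sigma_A09(d)), anomalies from
    N(mu~, I/20) with mu~ on the last principal component at 1.25 x the
    largest Mahalanobis distance of the normal points; returns (Y, labels)."""
    rng = np.random.default_rng(rng)
    Sigma = 0.9 ** np.abs(np.subtract.outer(np.arange(d), np.arange(d)))
    n_norm = int(np.floor(n * (1 - eps)))
    X = rng.multivariate_normal(np.zeros(d), Sigma, size=n_norm)
    cov = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)      # eigenvalues sorted ascending
    v_last = evecs[:, 0]                    # last principal component
    Sinv = np.linalg.inv(cov)
    md = np.sqrt(np.einsum("ij,jk,ik->i", X, Sinv, X))  # Mahalanobis distances
    mu_tilde = 1.25 * md.max() * v_last
    A = rng.multivariate_normal(mu_tilde, np.eye(d) / 20, size=n - n_norm)
    return np.vstack([X, A]), np.r_[np.zeros(n_norm), np.ones(n - n_norm)]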
It is important to note that concentrating the contaminating cluster of X̃ on the last principal component does not affect the generality of the conclusions, but simplifies the presentation; we comment on this in more detail right below, when analysing the PCA output. For illustrative purposes, we plot a data sample from each of the four contaminated settings in ℝ^2 in Figure <ref>.

PCA Figure <ref> plots Y's projection on the first two components obtained by application of PCA. While it is not surprising that, with its first two components, PCA is not able to 'notice' anomalies located in the direction of the 10th component in ℝ^10, this example is illustrative and by no means restrictive, for the three following reasons:

* It is a reasonably frequent practice to pre-process data using dimension-reduction techniques (like PCA) and then apply a statistical (e.g., anomaly detection) method in the space of several first and several last components. From this point of view, if the component carrying the anomalies (independent of its number) is not taken over into the reduced space, the anomalies remain unnoticed by most visualization and analysis tools.
* If correlation (or a higher-order stochastic dependency) is present in the data, (a small number of) anomalies cannot be noticed in any (e.g., 2=) k-dimensional projection if they are placed on, e.g., the average of k+1 (=3) correlated components, being shadowed by the k-dimensional marginals.
* For a practitioner interested in identifying (and explaining) anomalies, it is in any case advantageous if the anomalies are explicitly highlighted by the first component(s).

robPCA The projection of Y on the first two components obtained by robPCA, for the four contaminated settings mentioned above, is depicted in Figure <ref>. Using the robust MCD estimates of the mean and covariance matrix, the group of anomalies is ignored, and the principal components well approximate the variance-maximizing directions (e.g., for MVN and ELL they are close to the axes of the ellipsoids defined by the population (=true) covariance matrix). With the anomalies not necessarily lying on the first (or, in general, any) such axis, they are not readily identifiable/explainable from the visualization.

ICA Further, though anomalies-highlighting directions are (obviously) linear in the variables, as expected, without an anomaly-specific criterion employed when searching for components, the picture does not change for ICA; see Figure <ref>.

Auto-encoder To visualize the data representation of the auto-encoder, we exploit its bottleneck, where information about noisy observations is expected to be filtered out. Thus, we use the output of the two neurons of the third layer as latent variables, plotted for Y in Figure <ref>. The intrinsic smoothness assumption on the function approximated by the neural network makes such methods generally more vulnerable to increasing portions of anomalies (in the training sample) than traditional methods of robust statistics. Generally speaking, even when the auto-encoder properly distinguishes anomalies by their reconstruction loss (which is not the case here), finding a low-dimensional anomaly-insightful representation can still be challenging, especially in realistic situations where the number of bottleneck neurons is substantially larger than 2.

ACA Not surprisingly, being designed for highlighting anomalies and directly targeting them when searching for components, ACA copes with this artificial task, with the contaminating anomalies being located (and centered) on the first component.
With Y being a rather simple illustrative example, in what follows we switch to further aspects of ACA as well as to more challenging settings.

§ EXPLAINABILITY

The intrinsic linearity of the ACA method positions it as a powerful tool for explainability, a highly demanded property in the domain of unsupervised anomaly detection. Here, different from the supervised setting, no feedback can provide a criterion to decide about the importance of a variable, the goal being to explain how a variable contributes to the method's decision about the abnormality of an observation. The framework of ACA suggests possibilities to identify the most deviating variable(s) for each anomaly.

With each ACA anomaly component (further AC) being a linear combination of the (input) variables, their contributions (perhaps properly re-scaled) highlight the variables' abnormalities. Before exploiting this information (see Section <ref>), let us first pay attention to the ACs themselves.

§.§ Direction that highlights abnormality

According to the principle of projected outlyingness <cit.>, an AC (i.e., the corresponding direction) should be chosen in such a way that, in projection on it, the abnormal observation(s) (cluster) is best separated from the normal data. The goal of this subsection is to benchmark the usefulness of the generated ACs in the elliptical setting, where a reasonable guess is easier to find theoretically. More precisely, the found direction (let us name it u^*_1) should be such that it best separates the anomaly (or the anomalies' cluster) from the normal data. In the elliptical setting, a good candidate is the direction orthogonal to the hyperplane, tangent to a density ellipsoid, that contains the abnormal point of interest. For comparison, let us fix this point to the earlier defined μ̃∈ℝ^d (see Section <ref>). Then, a good candidate for the searched direction is u^*_1 = Σ^-1μ̃/‖Σ^-1μ̃‖, with Σ standing for the covariance matrix of the elliptical distribution. From this point of view, the effectiveness of a dimension-reduction method, which, when applied to a data set X⊂ℝ^d, returns up to m≤ d ordered (with decreasing importance) component vectors b_1,...,b_m, can be naturally measured by two indicators (see the sketch after this paragraph):

* the index of the component most aligned with u^*_1 (for the fairest comparison), î = argmin_i∈{1,...,m} arccos(b_i^⊤u^*_1), and
* the corresponding angle: α̂ = arccos(b_î^⊤u^*_1).

For the three elliptical distributions from Section <ref> (MVN(A09), MVN(hCN), and ELL(t(5))), contaminated as described in Section <ref>, we plot in Figure <ref> î and α̂ for ACA, PCA, robPCA, and ICA. Note that, since the angle is measured as a positive value, some error is inevitable, in particular in the empirical case. While it is expected that components with higher index numbers can be better aligned with the anomalies-explaining direction for the other methods, their angles are still much higher than those of ACA (which always identifies the anomalies with the first component). Furthermore, the variables contributing more to this direction can be seen as responsible for the abnormality.
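The two indicators can be computed, e.g., as follows (Python/NumPy; taking the absolute value of the cosine, so that a component and its negation are treated alike, is our convention):

import numpy as np

def alignment_benchmark(components, Sigma, mu_tilde):
    """Indicators (i-hat, alpha-hat): 1-based index of the component best
    aligned with u*_1 = Sigma^{-1} mu~ / ||.|| and the corresponding angle
    in radians; components is an (m, d) array of ordered unit vectors."""
    u_star = np.linalg.solve(Sigma, mu_tilde)
    u_star /= np.linalg.norm(u_star)
    cos = np.clip(np.abs(components @ u_star), 0.0, 1.0)
    i_hat = int(np.argmax(cos))             # maximal cosine = minimal angle
    return i_hat + 1, float(np.arccos(cos[i_hat]))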
§.§ Focus on variable's contribution

Following the principle of ACA, in this section we explore the general capacity of the (asymmetric) projection depth to highlight variables responsible for observations' abnormality, in two comparative simulation studies.

Comparison with DDC <cit.> proposed a method for detecting (and imputing) deviating data cells (DDC), which can also be seen as an explaining anomaly-detection tool. (Under a cell-wise anomaly one understands a normal observation, one or more of whose coordinates are contaminated, with the other variables remaining intact.) In what follows, we compare the (asymmetric) projection depth with DDC in detecting such deviating cells. With DDC exploiting correlation, we use the following challenging anomaly-detection setting: we generate X according to MV-Sk (see Section <ref>), contaminated with 10% of anomalies from Section <ref>, restricting μ̃ to the set γ M_max · {u∈𝕊^d-1 : u^(1)>0, u^(2) > u^(1)} (with u^(i) denoting the ith coordinate of u and M_max standing for the maximal Mahalanobis distance among the normal data); we set γ=0.8, 0.9, 1, 1.5, n=1000, and d=2 (see the Supplementary Materials, Section <ref>, for a visualization of the data-generating process). Thus, X is not only asymmetric, but all anomalies are also trivially explained by the second variable. For the ith variable of x, the depth-based anomaly score is constructed as:

s^(i)(x|X) = u_pd^(i)(x|X) · (1 / D^apd(x|X) - 1),

where u_pd(x|X) ∈ argmin_u∈𝕊^d-1 D^1(u^⊤x|u^⊤X) is a direction delivering the minimum in (<ref>) for the projection depth (see the sketch at the end of this subsection). For DDC, we attribute to each observation's variable the cell's standardized residual as the anomaly score <cit.>; see also the Supplementary Materials (Section <ref>) for an example of cell-wise score visualization. Using these scores to order the observations in X, we compare them to the true anomalies by the area under the receiver operating characteristic curve (AUC); the results are indicated in Figure <ref>. We observe that, under asymmetric deviation from elliptical contours (perfectly described by correlation), DDC is outperformed by the non-parametric depth-based approach in this setting (see also Section <ref> of the Supplementary Materials for a one-sample visualization of the obtained score). For a broader setting, i.e., when allowing more freedom for μ̃, DDC and depth perform comparably (see Section <ref> of the Supplementary Materials).

Comparison with DIFFI <cit.> introduced the depth-based isolation forest feature importance (DIFFI), which allows one to evaluate the contribution of each variable to the abnormality of an observation, and thus constitutes a natural candidate for comparison. We generate data from MVN(A09) and contaminate them with 10% of anomalies generated from 𝒩(μ̃, I_d/1000) with

μ̃ = ( (2 · d)^2, (2 · (d - 1))^2, ..., (2 · 1)^2 )^⊤,

i.e., the variables' importance decreases in their literal order. For ACA, the variable importance is derived from the variable's contribution (in absolute value) to the first AC. The resulting order correlation (with the correct order induced by (<ref>)) is indicated in Figure <ref> (left), for varying space dimension d. One observes that ACA preserves a good correlation level when d increases. Not to disadvantage DIFFI, we also attempt the spherical (standard) normal distribution instead of MVN(A09); see Figure <ref> (right).
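A random-direction sketch of the score (<ref>), with both the minimizing direction of the projection depth and the asymmetric projection depth approximated over the same set of directions (an illustration of the principle, not the implementation used in the experiments):

import numpy as np

def cellwise_scores(x, X, k=2000, rng=None):
    """Sketch: s^(i)(x|X) = u_pd^(i)(x|X) * (1 / D^apd(x|X) - 1), where
    u_pd is a projection-depth-minimising direction and D^apd the
    asymmetric projection depth, both approximated over k directions."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    U = rng.standard_normal((k, d))
    U /= np.linalg.norm(U, axis=1, keepdims=True)
    proj, px = X @ U.T, U @ x
    med = np.median(proj, axis=0)
    # symmetric outlyingness -> minimising direction u_pd
    mad = np.median(np.abs(proj - med), axis=0)
    mad[mad == 0] = np.finfo(float).eps
    u_pd = U[np.argmax(np.abs(px - med) / mad)]
    # asymmetric outlyingness -> 1 / D^apd - 1
    madp = np.array([np.median(dev[dev > 0]) if (dev > 0).any()
                     else np.finfo(float).eps for dev in (proj - med).T])
    o_apd = np.max(np.maximum(px - med, 0.0) / madp)
    return u_pd * o_apd        # 1/D^apd - 1 equals the maximal outlyingness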
Musk data set. The Musk data set contains molecules described by 166 features extracted from low-energy conformations, with a part being marked as musk (= anomalies) by experts; variables are hence indexed by integers here. Visualization of the projection on the first two components (Figure <ref>) reveals that ACA precisely distinguishes anomalies by their projection on AC1 (with the projection on AC2 being shifted as well). While some other methods (PCA, ICA, t-SNE, MDS, LE) also isolate them, detecting those further (say, in an automatic way) is less obvious. It is further interesting to consider the components' constitution (for methods with linear combinations of variables); see Table <ref>, with percentages being low due to the high number of variables. Thus, only ACA identifies the importance of variables (= conformations' features) `44' and `105' for abnormality, while only variable `155' (in AC2) also appears in PCA. Satellite image (2) data set. This data set contains multi-spectral values of pixels in 3 × 3 neighborhoods in a satellite image, with one of the classes (Class 2) being downsampled to 71 anomalies. (Variables are also numbered by integers here.) As one can observe from the bi-component visualization, ACA separates anomalies already on AC1, while only PCA, robPCA and ICA preserve them on a second (linear) component; see Figure <ref>. Out of the variables `18', `2', and `26' spotted by AC1, only the first one finds itself 3rd on PC1; see Table <ref>. Thyroid data set. For the Thyroid data set, we used 6 continuous variables, with the hyperfunction class taken as abnormal. Here, visually (Figure <ref>), ACA outperforms the rest of the methods, in addition to being the only one that highlights variable `2' (triiodothyronine (T3) test) as the main one to explain anomalies, further giving much importance to variable `6' (TBG blood test); see also Table <ref>.§ CONCLUSION While today a universe of dimension-reduction methods is at the practitioner's disposal, covering different conceptual approaches and application domains, these methods (be it a simple classical method like PCA (or its robust version) or more advanced techniques, e.g., t-SNE) are primarily aimed at finding a relevant representation space for the whole empirical data distribution and not at identifying anomalies. Even if in some cases general dimension-reduction methods allow one to detect or visualize some of them, this happens rather by chance, as witnessed in the simulations of Section <ref> and the real-data examples of Section <ref>. The only reliable way to capture anomalies is to search for them directly. ACA constitutes an attempt to fill this gap, aiming at a representation of anomalies in a linear subspace of the original space ℝ^d. ACA is easy to implement and mainly leverages standard existing tools (for depth computation and some further steps), with the restriction of the direction's search space to a lower-dimensional basis being the only exception. As we discuss in Section <ref>, ACA can be implemented with sufficient precision and polynomial complexity in both the data set size n and the space dimension d. Numerous examples in this article demonstrate a reasonable approximation quality of such an implementation in practice.
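As a small numerical illustration of this point (our own toy experiment, not one from the article), the projection depth can be approximated by maximizing the projected outlyingness over K randomly sampled directions, at cost O(K·n·d); with nested direction samples the estimate converges from above as K grows.

```python
import numpy as np

def approx_depth(x, X, K, seed=1):
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((K, X.shape[1]))
    U /= np.linalg.norm(U, axis=1, keepdims=True)
    P = X @ U.T
    med = np.median(P, axis=0)
    mad = np.median(np.abs(P - med), axis=0) + 1e-12
    outl = np.max(np.abs(U @ x - med) / mad)   # max over sampled directions
    return 1.0 / (1.0 + outl)

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 10))
x = np.full(10, 2.0)
# With a fixed seed the direction samples are nested, so the estimate can
# only decrease with K and converges from above to the true depth.
for K in (10, 100, 1000, 10000):
    print(f"K={K:>5}: depth ~ {approx_depth(x, X, K):.4f}")
```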
Furthermore, the involvement of such a basis can be avoided by simply projecting the data on the orthogonal complement of the most recently found component; this would yield a somewhat different procedure though. Positioned in the anomaly-detection framework, ACA constructs an orthonormal basis of a pre-defined dimension, which hopefully provides insights on the location of anomalies from the training data set. When projecting new (out-of-sample) data in the same basis, different situations can arise, depending on the contamination model. In the case of the Huber model (<ref>), the representation should normally be sufficient for gaining insights about anomalies, which is not necessarily the case for others, in particular adversarial contamination <cit.>, which can appear in any part of ℝ^d and be shaded by the normal data in all 2- and 3-dimensional projections. A possible strategy is then to compute the depth/outlyingness of suspected (or all) observations, and if these values indicate potential abnormality, (re-)run ACA on a data set including these observations.ACA is not restricted to (asymmetric) projection depth only, and can be readily employed with other depth notions that satisfy (<ref>), as well as those minimizable over projections, as is the case for, e.g., the weighted halfspace depth of <cit.>. Moreover, any further procedure that provides an anomaly score based on a univariate projection of the data can be adapted to the ACA framework as well. Furthermore, ACA can be extended to data sets in spaces where a linear combination of variables is expected to provide a reasonable explanation of anomalies and the corresponding search procedure can be constructed.With the most relevant information and illustrations incorporated in the body of the article, the Supplementary Materials to this article contain: * Section 1: additional information on the simulation settings from Section <ref>; * Section 2: the full 10 coordinates constituting the anomaly components for the visualizations from Section <ref>; * Section 3: further details on the comparison with DDC from Section <ref>; * Section 4: visualization and components' data on the 4 remaining real data sets. § ACKNOWLEDGMENTS The authors greatly acknowledge the support of the CIFRE grant n° 2021/1739. § REFERENCES C. Agostinelli, A. Leung, V. J. Yohai, and R. H. Zamar. Robust estimation of multivariate location and scatter in the presence of cellwise and casewise contamination. TEST, 24:441–461, 2015. M. Ahmed, A. N. Mahmood, and M. R. Islam. A survey of anomaly detection techniques in financial domain. Future Generation Computer Systems, 55:278–288, 2016. F. Anowar, S. Sadaoui, and B. Selim. Conceptual and empirical comparison of dimensionality reduction algorithms (PCA, KPCA, LDA, MDS, SVD, LLE, Isomap, LE, ICA, t-SNE). Computer Science Review, 40:100378, 2021. A. Azzalini and A. Capitanio. Statistical applications of the multivariate skew normal distribution. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 61(3):579–602, 1999. A. Barredo Arrieta, N. Díaz-Rodríguez, J. Del Ser, A. Bennetot, S. Tabik, A. Barbado, S. Garcia, S. Gil-Lopez, D. Molina, R.
Benjamins, R. Chatila, and F. Herrera. Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58:82–115, 2020. A.-H. Bateni and A. S. Dalalyan. Confidence regions and minimax rates in outlier-robust estimation on the probability simplex. Electronic Journal of Statistics, 14:2653–2677, 2020. M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373–1396, 2003. M. Carletti, M. Terzi, and G. A. Susto. Interpretable anomaly detection with DIFFI: Depth-based feature importance of isolation forest. Engineering Applications of Artificial Intelligence, 119:105730, 2023. V. Chandola, A. Banerjee, and V. Kumar. Anomaly detection: A survey. ACM Computing Surveys, 41(3), 2009. P. Comon. Independent component analysis. In J. L. Lacoume, editor, Higher-Order Statistics, pages 29–38. Elsevier, 1992. M. A. A. Cox and T. F. Cox. Multidimensional Scaling, pages 315–347. Springer Berlin Heidelberg, 2008. I. Diakonikolas, G. Kamath, D. M. Kane, J. Li, A. Moitra, and A. Stewart. Robust estimators in high dimensions without the computational intractability. In IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS), pages 655–664. IEEE, 2016. D. L. Donoho. Breakdown properties of multivariate location estimators. PhD thesis, Dept. Statistics, Harvard University, Boston, 1982. F. Doshi-Velez and B. Kim. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608, 2017. R. Dyckerhoff. Data depths satisfying the projection property. Allgemeines Statistisches Archiv, 88:163–190, 2004. R. Dyckerhoff, P. Mozharovskyi, and S. Nagy. Approximate computation of projection depths. Computational Statistics & Data Analysis, 157:107166, 2021. N. Görnitz, M. Kloft, K. Rieck, and U. Brefeld. Toward supervised anomaly detection. Journal of Artificial Intelligence Research, 46:235–262, 2013. H. Hotelling. Analysis of a complex of statistical variables into principal components. Journal of Educational Psychology, 24:417–441, 1933. P. J. Huber. Robust estimation of a location parameter. The Annals of Mathematical Statistics, 35:73–101, 1964. P. J. Huber. A robust version of the probability ratio test. The Annals of Mathematical Statistics, 36:1753–1758, 1965. M. Hubert, P. J. Rousseeuw, and K. V. Branden. ROBPCA: A new approach to robust principal component analysis. Technometrics, 47(1):64–79, 2005. B. Kim, M. Wattenberg, J. Gilmer, C. Cai, J. Wexler, F. Viegas, and R. Sayres. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV). In J. Dy and A.
Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 2668–2677. PMLR, 2018. L. Kotík and D. Hlubinka. A weighted localization of halfspace depth and its properties. Journal of Multivariate Analysis, 157:53–69, 2017. D. D. Lee and H. S. Seung. Learning the parts of objects by non-negative matrix factorization. Nature, 401(6755):788–791, 1999. Z. Li, Y. Zhu, and M. Van Leeuwen. A survey on explainable anomaly detection. ACM Transactions on Knowledge Discovery from Data, 18(1), 2023. F. T. Liu, K. M. Ting, and Z.-H. Zhou. Isolation forest. In 2008 Eighth IEEE International Conference on Data Mining, pages 413–422. IEEE, 2008. H. P. Lopuhaa and P. J. Rousseeuw. Breakdown points of affine equivariant estimators of multivariate location and covariance matrices. The Annals of Statistics, 19(1):229–248, 1991. K. Mosler and P. Mozharovskyi. Choosing among notions of multivariate depth statistics. Statistical Science, 37(3):348–368, 2022. W. J. Murdoch, C. Singh, K. Kumbier, R. Abbasi-Asl, and B. Yu. Definitions, methods, and applications in interpretable machine learning. Proceedings of the National Academy of Sciences, 116(44):22071–22080, 2019. S. Nagy, R. Dyckerhoff, and P. Mozharovskyi. Uniform convergence rates for the approximated halfspace and projection depth. Electronic Journal of Statistics, 14(2):3939–3975, 2020. J. Parekh, P. Mozharovskyi, and F. d'Alché-Buc. A framework to learn with interpretation. Advances in Neural Information Processing Systems, 34:24273–24285, 2021. K. Pearson. On lines and planes of closest fit to systems of points in space. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 2(11):559–572, 1901. S. Rayana. ODDS library, 2016. Stony Brook University, Department of Computer Sciences. P. J. Rousseeuw and W. V. D. Bossche. Detecting deviating data cells. Technometrics, 60(2):135–145, 2018. P. J. Rousseeuw and K. V. Driessen. A fast algorithm for the minimum covariance determinant estimator. Technometrics, 41(3):212–223, 1999. P. J. Rousseeuw and M. Hubert. Anomaly detection by robust statistics. WIREs Data Mining and Knowledge Discovery, 8(2):e1236, 2018. P. J. Rousseeuw and A. M. Leroy. Robust Regression and Outlier Detection. John Wiley & Sons, New York, 1987. S. T. Roweis and L. K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, 2000. M. Sakurada and T. Yairi. Anomaly detection using autoencoders with nonlinear dimensionality reduction. In Proceedings of the MLSDA 2014 2nd Workshop on Machine Learning for Sensory Data Analysis, pages 4–11. Association for Computing Machinery, 2014. B. Schölkopf and A. J. Smola.
Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2002. B. Schölkopf, A. Smola, and K.-R. Müller. Kernel principal component analysis. In International Conference on Artificial Neural Networks, pages 583–588. Springer, 1997. W. A. Stahel. Robust Estimation: Infinitesimal Optimality and Covariance Matrix Estimators (in German). PhD thesis, ETH Zurich, 1981. S. Thudumu, P. Branch, J. Jin, and J. J. Singh. A comprehensive survey of anomaly detection techniques for high dimensional big data. Journal of Big Data, 7(42), 2020. L. van der Maaten and G. Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(86):2579–2605, 2008. Y. Zuo. Projection-based depth functions and associated medians. The Annals of Statistics, 31(5):1460–1490, 2003. Y. Zuo and R. Serfling. General notions of statistical depth function. The Annals of Statistics, 28:461–482, 2000. Supplementary materials to the article “Anomaly component analysis” Anomaly component analysis Romain Valla, LTCI, Télécom Paris, Institut Polytechnique de Paris; Pavlo Mozharovskyi, LTCI, Télécom Paris, Institut Polytechnique de Paris; Florence d'Alché-Buc, LTCI, Télécom Paris, Institut Polytechnique de Paris December 26, 2023 ==================================================================================================================================================================================================================================================== These supplementary materials contain additional information on the study of the performance of the anomaly component analysis (ACA) procedure. First, they contain information about the settings used in the visualization comparison (Section <ref>) and detailed results (mentioning every component) concerning all the methods involved in the experiments (Section <ref>). Further, another experiment is presented, similar to the one in the body of the article, with DDC, where its performance is closer to that of ACA, as well as information on the generation process where the performances differ (Section <ref>). Finally, other real data sets are used to apply ACA for visualization and explanation purposes (Section <ref>). While it is expected that ACA's results are not always better, they are usually comparable to those of the commonly employed techniques and illustrate the relevance of ACA.§ SIMULATION SETTINGS Having described the data generation parameters in Section <ref>, here we give a view of the two covariance matrices used in the settings MVN(A09) and MVN(hCN).§ COMPONENTS' COORDINATES Following the visualizations in Section <ref>, we show the components for every simulation setting and every method (except for the auto-encoder, because of its non-linearity).§ ON COMPARISON WITH DDC Two settings are tested in Section <ref>. Both of them are described below, in combination with a view of the features used to compare ACA and DDC. §.§ Setting delivering equal performance The first setting allows for anomalies' location in a wider area, such that they can be explained more by the first variable X_1 than by the second X_2. This results in approximately equal performance for both methods.
The setting is explained in Figure <ref>, and in Figure <ref> we display the features obtained with DDC and ACA, which are used to compute the AUC indicated in Figure <ref>. §.§ Setting delivering different performance This second setting is discussed in Section <ref>, where anomalies are placed in the same way as before, with X_2 being more `responsible' for abnormality than X_1. This results in better performance for ACA. The setting is explained in Figure <ref>, and in Figure <ref> we display the features obtained with DDC and ACA, which are used to compute the AUC indicated in Figure <ref>.§ REAL DATA The following data sets are explored in the same manner as in Section <ref>. The obtained visualizations and explanations are competitive with those delivered by the state-of-the-art methods. We also add tables with variables' contributions to the components for ACA, PCA, robPCA and ICA, to interpret the anomalies' location.
http://arxiv.org/abs/2312.16139v1
{ "authors": [ "Romain Valla", "Pavlo Mozharovskyi", "Florence d'Alché-Buc" ], "categories": [ "stat.ME", "cs.LG", "stat.ML" ], "primary_category": "stat.ME", "published": "20231226175746", "title": "Anomaly component analysis" }
A Logically Consistent Chain-of-Thought Approach for Stance Detection Bowen Zhang, Daijun Ding, Liwen Jing, Hu Huang ===========================================================================================Zero-shot stance detection (ZSSD) aims to detect stances toward unseen targets. Incorporating background knowledge to enhance transferability between seen and unseen targets constitutes the primary approach of ZSSD. However, these methods often struggle with a knowledge-task disconnect and lack logical consistency in their predictions. To address these issues, we introduce a novel approach named Logically Consistent Chain-of-Thought (LC-CoT) for ZSSD, which improves stance detection by ensuring relevant and logically sound knowledge extraction. LC-CoT employs a three-step process. Initially, it assesses whether supplementary external knowledge is necessary. Subsequently, it uses API calls to retrieve this knowledge, which can be processed by a separate LLM. Finally, a manual exemplar guides the LLM to infer stance categories, using an if-then logical structure to maintain relevance and logical coherence. This structured approach to eliciting background knowledge enhances the model's capability, outperforming traditional supervised methods without relying on labeled data.§ INTRODUCTION Stance detection is a fundamental natural language processing (NLP) task that categorizes expressed attitudes toward a particular target based on opinionated input texts <cit.>. This task has attracted significant research attention in recent years due to its relevance across domains like political analysis, social media monitoring, and customer feedback analysis <cit.>. In practice, the enumeration of all conceivable targets in advance for training stance detection models is infeasible. Consequently, zero-shot stance detection (ZSSD) has emerged as a promising approach, focused on accurately identifying the stance towards unseen targets during the inference stage <cit.>.ZSSD is traditionally framed as a target-based sentence-level classification task that utilizes either non-pretrained or pretrained language models (PLMs) <cit.>. However, sentences often contain background knowledge such as domain-specific terminology, cultural references, social media linguistic styles, and more. These elements are not readily comprehensible to conventional methods and require specialized parsing to be fully understood. Recently, efforts to improve ZSSD have focused on the exploitation of such background knowledge, predominantly through unsupervised methods owing to the scarcity of explicitly annotated background data <cit.>.The emergence of large language models (LLMs), such as the GPT series, trained on comprehensive text corpora, presents new avenues for knowledge extraction to bolster stance detection <cit.>. However, current ZSSD approaches exhibit clear deficiencies in knowledge utilization, leading to two key issues: 1) Knowledge-task disconnect: conventional approaches tend to extract expansive, fragmented information components that have limited relevance to the specific stance detection task. This can impair performance when processing contextual data that is highly interdependent with the target stance. 2) Lack of logical consistency: the fragmented knowledge lacks the necessary logical verification, introducing potential errors and contradictions that diminish the credibility of stance predictions.To address these issues, in this paper we propose a Logically Consistent Chain-of-Thought (LC-CoT) approach for ZSSD.
LC-CoT utilizes manually designed prompt templates to extract, in a CoT manner, background knowledge relevant to the stance analysis process from LLMs. Specifically, LC-CoT consists of three steps. First, we ask the LLM to determine whether additional external knowledge is required for the given input. Second, the LLM is leveraged to produce knowledge-retrieval API calls, utilizing external tools via API invocation. These calls can feed into a separate LLM to obtain the knowledge. Third, we furnish the LLM with a manually selected exemplar to guide it in inferring stance categories by consolidating the input and the background knowledge. In this step, the generated template follows if-then logical structures to ensure that knowledge utilization and inference align with the stance prediction process. We conducted extensive experiments validating that eliciting background knowledge in an if-then format can effectively augment model capabilities to surpass traditional supervised approaches even without labeled samples. § METHOD Task Definition and Model Overview. We use D={x_i, p_i} to denote the collection of input data, where x and p denote the input text and the corresponding target, respectively. The stance detection task aims to predict a stance label ŷ_i for a given input {x_i, p_i}.§.§ LC-CoT To address the challenges of knowledge-task disconnect and lack of logical consistency, and to leverage the rich knowledge encoded within LLMs, we designed a three-step CoT method. In the first step, we engage an LLM to ascertain the necessity of additional external knowledge pertinent to the input text. Upon establishing this need, in the second step, we exploit the LLM's capabilities to generate API calls, effectively creating a bridge to external knowledge bases. This facilitates the acquisition of pertinent information from a distinct LLM, tailored to the context of the stance detection task. In the third step, the LLM is provided with a carefully chosen exemplar, which serves as a cognitive scaffold directing the inference process. By employing if-then logical constructs within the generated templates, we ensure that the assimilation of the input with the procured background knowledge is both relevant and logically consistent, thereby enhancing the accuracy and reliability of the stance categorization. Step 1: We first feed the constructed instruction S'1 into the LLM to decide whether external knowledge is required for stance prediction.S'1: Your task is to judge whether there is enough evidence to support the stance prediction based on the text content. Input: [input text x] to the target [given target p]. Output: [yes/no]Step 2: If the model output indicates that additional information is required, the S'2 instruction can be used to let the LLM automatically retrieve the required background knowledge (q): S'2: You can call the API by writing "QUERY [A]" where "A" is the required knowledge. Here are some examples of API calls: Input: What's the attitude of the sentence [input text x] to the target [given target p]? Select an answer from (favor, against, none) or API call. Output: API call, QUERY […] Step 3: Finally, we send the instruction S'3 to the LLM to acquire the if-then expression. S'3: Your task is to add calls to a Question Answering API to a piece of text. The questions should help you get information required to complete the text. You can call the API by writing "[RULE: IF (A) then (B)]" where "A" is the reason why "B". Here are some examples of API calls: Input: What's the attitude of the sentence [input text x] to the target [given target p]? [given knowledge q (if available)]. Select an answer from (favor, against, none). Output: [IF (reason) then (attitude is [stance label])]. Input: … Figure <ref> shows an example: given the input “You know email gate must be going nowhere.” and the target “Hillary Clinton”, the LC-CoT model can generate the output “IF the target: Hillary Clinton (`email gate' has a negative impact on Hillary) then (the attitude is against)”. Ultimately, we can extract against from the if-then expression, which serves as the stance prediction label of the LC-CoT model.
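A minimal sketch of how the three steps can be orchestrated programmatically is given below. The chat function is a hypothetical stand-in for any LLM completion API, the prompt strings abbreviate S'1-S'3, and the regular expressions for parsing the QUERY call and the final if-then expression are our own illustration rather than the authors' released code.

```python
import re

def chat(prompt: str) -> str:
    """Hypothetical LLM call; wire this to the completion API of your choice."""
    raise NotImplementedError

def lc_cot_predict(text: str, target: str) -> str:
    # Step 1 (S'1): is there already enough evidence for a prediction?
    need = chat(f"Your task is to judge whether there is enough evidence to "
                f"support the stance prediction. Input: {text} to the target "
                f"{target}. Output: [yes/no]")
    knowledge = ""
    if need.strip().lower().startswith("no"):
        # Step 2 (S'2): let the LLM emit a retrieval query, then answer it
        # with a (possibly separate) LLM call.
        reply = chat(f"You can call the API by writing 'QUERY [A]' where 'A' "
                     f"is the required knowledge. What's the attitude of the "
                     f"sentence {text} to the target {target}?")
        m = re.search(r"QUERY\s*(.+)", reply)
        if m:
            knowledge = chat(m.group(1))
    # Step 3 (S'3): request the if-then expression and parse out the label.
    rule = chat(f"What's the attitude of the sentence {text} to the target "
                f"{target}? {knowledge} Select an answer from (favor, against, "
                f"none). Output: [IF (reason) then (attitude is [stance label])]")
    m = re.search(r"attitude is\s*(favor|against|none)", rule, re.IGNORECASE)
    return m.group(1).lower() if m else "none"
```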
§ EXPERIMENT §.§ Experimental Data This paper presents experimental results on two established benchmark datasets: SemEval-2016 Task 6 (SEM16) <cit.> and VAST <cit.>. SEM16 comprises 4870 tweets with diverse targets, each tweet being labeled as “favor”, “against”, or “neutral”. In accordance with the configuration proposed by <cit.>, four targets, i.e., Donald Trump (D), Hillary Clinton (H), Legalization of Abortion (L), and Feminist Movement (F), are selected for evaluating the efficacy of the stance detection task, and hence have been chosen for our study. Following <cit.>, we regard one target as the zero-shot testing target while training on the other five, and randomly select 15% of the training set as the development data to tune the hyper-parameters. VAST comprises three distinct stance labels, with the label set defined as “Pro”, “Neutral”, and “Con”. The training set comprises 4003 samples, while the dev and test sets consist of 383 and 600 samples, respectively. §.§ Compared Baseline Methods To assess the efficacy of our proposed model, we conduct a thorough evaluation and comparison with a range of established baselines. The details of these baseline models are presented below for reference: Statistics-based methods. BiCond <cit.> utilized a bidirectional LSTM to encode the underlying sentence and the corresponding target. CrossNet <cit.> is a variant of BiCond, which leverages a self-attention layer to capture informative words. TPDG <cit.> proposed a target-adaptive graph convolutional network. AT-JSS-Lex <cit.> designed a multi-task framework that leverages sentiment information and a stance lexicon to aid stance detection. SEKT <cit.> introduced semantic knowledge as the transferable knowledge between targets. Fine-tuning based methods. Bert-FT <cit.> employed a pretrained BERT model for stance detection. PT-HCL <cit.> developed a novel approach to cross-target and zero-shot stance detection using contrastive learning. JointCL <cit.> proposed a contrastive learning method to leverage the stance features of known targets. TarBK <cit.> incorporated targeted background knowledge for stance detection. TTS <cit.> proposed to augment the training set with diverse targets. LLM-based methods. GPT-DQA <cit.> directly elicited stance categories from GPT-3.5 by posing queries in an interrogative format.
GPT-CoT <cit.> developed a method that prompts the LLM, in a chain-of-thought manner, with artificially constructed examples containing predefined inferential logic to obtain stance categories. KASD-GPT <cit.> utilized GPT-3.5 to retrieve relevant background knowledge, which is then integrated into a trainable BERT to exploit annotated samples through backpropagation.§ RESULTS We report the main experimental results of zero-shot stance detection in Table <ref>. Following previous methods, we adopted the averaged F1 score (F1_m = (F1_micro + F1_macro)/2) as the evaluation metric to verify our model. We observe that our LC-CoT performs consistently better than most of the baseline models on all datasets, which verifies the effectiveness of our proposed approach in ZSSD. Despite the challenging nature of ZSSD, our LC-CoT model exhibits considerable potential, surpassing all benchmark approaches on the SEM16 and VAST datasets. Specifically, our LC-CoT surpasses TarBK by 19.5% on average across three targets on SEM16. Notably, on some targets LC-CoT exhibits a slightly inferior performance compared to TarBK, the best contrastive model incorporating knowledge conventionally. However, LC-CoT does not necessitate any training data, unlike TarBK, which necessitates substantial labeled data for training; this amply demonstrates its superior capability in acquiring the background knowledge requisite for stance detection. Additionally, our LC-CoT continued to achieve valid improvements compared to CoT-based methods. Contrasted with the CoT-based methods (GPT-DQA and GPT-CoT), LC-CoT exhibited significant enhancements on the SEM16 and VAST datasets. Notably, relative to KASD-ChatGPT, the knowledge-generation model predicated on LLMs, LC-CoT averaged improvements of 1.03% across 3 targets on the SEM16 dataset and 5.5% on VAST. Such improvements highlight the effectiveness of integrating logical reasoning into the knowledge-generation process, which significantly elevates the performance of the model. The experimental results demonstrate that by aligning the CoT-based knowledge elicitation with the model's stance prediction logic, LC-CoT not only enhances accuracy but also sets a precedent for future stance detection methodologies.§.§ Cross-Target Setup To evaluate the generalizability of our LC-CoT method for cross-target stance detection, we also assessed LC-CoT under cross-target conditions on the SEM16 dataset. The objective of the cross-target configuration is to predict the stance towards the destination target utilizing labeled data from the source target. The results are presented in Table 2. Based on these results, LC-CoT substantially outperforms the other baselines. Specifically, relative to the previously best statistics-based method (TPDG), LC-CoT achieves an average improvement of 18.1% in F1 score, affirming the efficacy of employing a distantly supervised framework in the cross-target setting. Compared to the best fine-tuning-based approach (PT-HCL), LC-CoT exhibits an average 16.3% enhancement in F1 score. As GPT-DQA, GPT-CoT, and KASD-ChatGPT require no training data, their results remain consistent with Table 1 and are thus not reiterated here.
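For clarity, the evaluation metric used in these comparisons can be computed as follows (a straightforward sketch based on scikit-learn; the variable names are ours):

```python
from sklearn.metrics import f1_score

def f1_m(y_true, y_pred):
    """Average of micro- and macro-averaged F1, as defined above."""
    micro = f1_score(y_true, y_pred, average="micro")
    macro = f1_score(y_true, y_pred, average="macro")
    return (micro + macro) / 2

print(f1_m(["favor", "against", "none", "favor"],
           ["favor", "against", "favor", "favor"]))
```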
§ CONCLUSION This paper proposes the Logically Consistent Chain-of-Thought (LC-CoT) approach, which refines knowledge extraction and application by using LLMs in a structured, logical manner. LC-CoT operates in three stages to ensure relevance and logical soundness in stance detection. First, it evaluates the need for external knowledge. Then, it retrieves this information via APIs, harnessing the comprehensive understanding capabilities of LLMs. Finally, LC-CoT employs if-then reasoning patterns, guided by manual exemplars, to integrate the knowledge into the stance detection process. This method has proven superior to traditional supervised techniques, even in the absence of labeled training data, and, when fused with existing models, significantly enhances their accuracy. Our research validates LC-CoT as a powerful tool for ZSSD, marking a step forward in the use of LLMs for tasks requiring not just data processing but also nuanced comprehension and logical inference.
http://arxiv.org/abs/2312.16054v1
{ "authors": [ "Bowen Zhang", "Daijun Ding", "Liwen Jing", "Hu Huang" ], "categories": [ "cs.CL" ], "primary_category": "cs.CL", "published": "20231226135400", "title": "A Logically Consistent Chain-of-Thought Approach for Stance Detection" }
§ ABSTRACT Recent advancements in immune sequencing and experimental techniques are generating extensive T cell receptor (TCR) repertoire data, enabling the development of models to predict TCR binding specificity. Despite the computational challenges due to the vast diversity of TCRs and epitopes, significant progress has been made. This paper discusses the evolution of the computational models developed for this task, with a focus on machine learning efforts, including the early unsupervised clustering approaches, supervised models, and the more recent applications of Protein Language Models (PLMs). We critically assess the most prominent models in each category, and discuss recurrent challenges, such as the lack of generalization to new epitopes, dataset biases, and biases in the validation design of the models. Furthermore, our paper discusses the transformative role of transformer-based protein models in bioinformatics. These models, pretrained on extensive collections of unlabeled protein sequences, can convert amino acid sequences into vectorized embeddings that capture important biological properties. We discuss recent attempts to leverage PLMs to deliver very competitive performances in TCR-related tasks. Finally, we address the pressing need for improved interpretability in these often opaque models, proposing strategies to amplify their impact in the field. Keywords: Machine Learning; T cell Receptor; Specificity Prediction; Protein Language Models; Interpretability. § BACKGROUND T cells are an essential component of the adaptive immune system, due to their ability to orchestrate targeted, effective immune responses through cell-based and cytokine-release mechanisms. While T cell functions are diverse, their activation, differentiation, proliferation, and function are all governed by their T cell receptors (TCRs), which enable them to recognize non-self antigens arising from infectious agents or diseased cells <cit.>. To face a diverse and ever-evolving array of antigens, the immune system has evolved the capability to generate a huge repertoire of distinct TCRs. This diversity is achieved through a random process of DNA rearrangement, which involves the recombination of the germline V, D, and J gene segments and the deletion and insertion of nucleotides at the V(D)J junctions. While the theoretical diversity of different TCRs is estimated to be as high as 10^19 <cit.>, the realized diversity in an individual is much smaller, typically ranging between 10^6 and 10^10 <cit.>. At the molecular level, TCRs interact with peptides presented on the major histocompatibility complex (MHC), a complex commonly referred to as pMHC. Although the interaction between pMHC and TCR is highly specific, a single TCR can often recognize multiple pMHC complexes. Indeed, some TCRs have been shown to recognize up to a million different epitopes <cit.>.
This multivalency is necessary to ensure that the realized diversity in one individual can recognize a significantly broader array of potential antigens.§ T CELL RECEPTOR SPECIFICITY PREDICTION. The precise prediction of TCR-pMHC binding is essential for accurately quantifying and predicting immune responses. If modeled effectively, it has the potential to transform the field of personalized medicine. For instance, the accurate identification of epitopes recognized by expanded TCR clones can aid in identifying the auto-antigens involved in T-cell-associated autoimmune diseases <cit.>, assessing responses to vaccines, or identifying the pathogenic agents responsible for eliciting T-cell responses <cit.>. In the context of cancer, an improved predictive power of TCR specificity can not only aid in the design of more effective T cell-based therapies <cit.>, but also minimize the toxic side-effects produced by TCR off-target binding <cit.>. However, as experimental methods cannot encompass the vast space of potential TCRs and epitopes, significant emphasis has been placed on the development of reliable computational methods to predict TCR specificity. Existing methods can accurately classify in-distribution samples, i.e., they can predict TCR binding to epitopes already encountered by the model <cit.>. However, the pivotal challenge is to develop models with the capacity to generalize to novel epitopes. A major obstacle stems from the scarcity of datasets containing experimentally validated TCR-epitope interactions, in particular regarding the diversity of sampled epitopes.§ LIMITATIONS OF AVAILABLE DATASETS. TCR specificity data can be collected from various databases, such as VDJdb <cit.>, with ∼70,000 TCR sequences and ∼1,100 different epitopes as of December 2023, and McPas-TCR <cit.>, with a manually curated set of ∼40,000 pairs. Newer datasets are also rapidly becoming available, such as the MIRA dataset, published during the COVID pandemic and including over 135,000 TCRs binding various COVID-19 epitopes <cit.>. However, current datasets exhibit serious limitations. First, while bulk sequencing of T cells is high-throughput and cost-effective, it cannot detect paired α and β chain sequences. New single-cell technologies can generate paired-chain data, yet they are costly and remain relatively underrepresented in public datasets. Currently, only a minor fraction of samples in VDJdb, and none in the MIRA dataset, provide paired-chain data. Furthermore, the experimental methods predominantly rely on known target pMHC complexes, skewing the datasets towards TCRs that recognize a limited number of epitopes, predominantly of viral origin and associated with the 3-6 most common HLA alleles. Finally, the datasets also show a significant bias in epitope diversity, with just ∼100 antigens accounting for 70% of TCR-antigen pairs <cit.>. The lack of negative data in T cell sequencing, which focuses primarily on pMHC-labeled cells, further challenges the development of accurate supervised machine learning models. Different approaches are typically used to artificially generate non-binding TCR-epitope pairs, ranging from shuffling TCR-epitope pairs to using naive TCR sequences or decoy datasets.
However, each of these methods introduces its own biases, and careful consideration is needed in their application to ensure the generation of negative data that accurately reflects true non-binding interactions <cit.>.§ EVOLUTION OF TCR SPECIFICITY PREDICTION MODELS. Since the first release of TCR-pMHC binding data in 2017 <cit.>, multiple studies have undertaken the challenge of modeling TCR specificity, with models broadly categorized into three groups: unsupervised clustering methods, supervised classifiers, and protein language models. In the early years of TCR-pMHC modeling (2017-2019), when the scarcity of labeled data posed challenges for training supervised models, simple clustering algorithms demonstrated the feasibility of predicting TCR specificity from sequences. In 2020 and 2021, with the increased availability of data, there was a surge in supervised models ranging from simple classifiers to deep neural networks. In the last couple of years, the breakthrough of Large Language Models, such as OpenAI's generative pre-trained transformer (GPT) models <cit.>, has facilitated the emergence of Protein Language Models (PLMs) based on similar principles <cit.>. The number of PLMs is rapidly increasing, with some of them being specifically trained on TCR sequences <cit.>. Fig. <ref> illustrates the evolution of modeling approaches, prompting us to refer to the different waves as generations. §.§ 1st generation: unsupervised clustering The initial efforts in TCR specificity prediction employed unsupervised clustering methods, under the assumption that sequence similarity, or more precisely, similarity of sequence features, correlates with specificity similarity. Under this hypothesis, clusters of TCRs with similar sequences are expected to bind to the same targets. Typically, these approaches first established a distance measure and then applied the K-nearest-neighbor method to determine labels for test samples based on the closest training-set samples. A simple proof of concept was provided by De Neuter et al. <cit.>. On a dataset comprising only two HIV epitopes, a random forest classifier was trained using TCR features including V and J segments, CDR3 sequence length, mass, amino acid counts, as well as biochemical features, such as CDR3 basicity, hydrophobicity, helicity, and isoelectric point. While the applications of this model were limited, it elegantly showed that specificity prediction from sequences is feasible. An advanced version of the model was published in 2019 as TCRex <cit.>. TCRdist, the most established TCR distance measure, was introduced by Dash et al. in 2017 <cit.>. It is based on the Hamming distance (the number of edits required to transform one sequence into another), enhanced with a gap penalty, and it assigns greater weight to the CDR loops due to their importance for epitope binding. Using this distance measure, it was shown that TCRs with similar specificity cluster together, and that the specificity of uncharacterized TCRs can be correctly predicted based on their proximity to training sequences. A newer version of this distance measure, incorporating several new features, has been introduced as TCRdist3 <cit.>.
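To convey the flavor of such distance measures, below is a deliberately simplified, TCRdist-inspired score: a weighted mismatch count with a gap penalty for length differences and an up-weighted CDR3 region. The actual TCRdist uses BLOSUM-based substitution penalties and position-aware alignment, so the constants and the naive prefix comparison here are illustrative assumptions only.

```python
def tcr_distance(cdr3_a: str, cdr3_b: str,
                 mismatch: int = 4, gap: int = 8, cdr3_weight: int = 3) -> int:
    """Toy TCRdist-like score: weighted mismatch count plus a gap penalty for
    the length difference, with the CDR3 region up-weighted. Real TCRdist
    aligns sequences around the loop center; this naive prefix comparison
    does not."""
    n = min(len(cdr3_a), len(cdr3_b))
    subs = sum(mismatch for a, b in zip(cdr3_a[:n], cdr3_b[:n]) if a != b)
    gaps = gap * abs(len(cdr3_a) - len(cdr3_b))
    return cdr3_weight * (subs + gaps)

print(tcr_distance("CASSLGQAYEQYF", "CASSLGQGYEQYF"))  # one substitution: 12
print(tcr_distance("CASSLGQAYEQYF", "CASSLGQYEQYF"))   # length mismatch
```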
In the same year, Glanville and colleagues released GLIPH (Grouping of Lymphocyte Interactions by Paratope Hotspots) <cit.>, a TCR clustering model based on both global similarity and local motif similarity. The model establishes an edge between TCRs that either differ by fewer than two amino acids or share a sequence motif enriched more than 10-fold relative to naive repertoires. Clusters are then determined as communities in the resulting graph. The success of these early models demonstrated that the TCR sequence encodes specificity information and led to the identification of a set of features, i.e., high-level sequence descriptors, edit distance, or motif sharing, which have since been utilized by subsequent, more complex models. Interestingly, even with the increased availability of data, distance-based approaches, e.g., TCRMatch <cit.>, GIANA <cit.>, iSMART <cit.>, ELATE <cit.>, etc., have continued to be developed and demonstrate competitive performance on many tasks. However, while clustering approaches are straightforward and effective in environments with limited data, they struggle to accurately represent complex nonlinear interactions and outliers. As more data became accessible through public databases such as VDJdb, these approaches soon began to be replaced by more sophisticated supervised methods. §.§ 2nd generation: supervised models In the domain of supervised models, clear distinctions emerge in both the modeling approach and the formulation of the prediction task. The modeling aspect spans a range from various nonparametric machine learning models to neural network architectures. As for the prediction task, there are two primary methods. The first treats known epitopes as distinct classes to which TCRs are assigned. This method, used by all unsupervised clustering methods and approximately one-third of supervised models, does not utilize epitope information as input, and hence cannot generalize to unknown epitopes. Conversely, models that explicitly incorporate epitopes as input attempt to predict the binding probability between any given TCR and epitope. Early models predominantly employed non-parametric methods and treated epitopes as class labels for TCRs. Notable examples include TCRGP <cit.> and SETE (Sequence-based Ensemble learning approach for TCR Epitope binding prediction) <cit.>. TCRGP searches for similarities between TCRs using a Gaussian process classifier with a squared exponential kernel function based on a BLOSUM encoding, i.e., an amino acid encoding extracted from the Blocks Substitution Matrix <cit.>. SETE utilizes k-mers of the CDR3 sequence as input features and adopts an ensemble learning approach based on decision trees to classify TCR sequences into epitope-binding classes. The bulk of supervised models, however, employs neural network architectures. One of the first attempts, NetTCR <cit.>, employed convolutional layers and separate input streams for TCR and epitope sequences, using BLOSUM encodings to vectorize amino acid sequences (a sketch of such an encoding is given below). Due to the low number of available training sequences at the time, the model performance was moderate, although improved versions with enhanced accuracy have been released afterwards <cit.>.
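The BLOSUM encoding mentioned for TCRGP and NetTCR can be sketched as follows, here using Biopython's bundled substitution matrices; the zero-padding scheme and the fixed maximum length are our assumptions, not details taken from those models.

```python
import numpy as np
from Bio.Align import substitution_matrices

BLOSUM62 = substitution_matrices.load("BLOSUM62")
AA = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard amino acids

def blosum_encode(seq: str, max_len: int = 20) -> np.ndarray:
    """Encode a sequence as a (max_len, 20) matrix: each residue becomes its
    row of BLOSUM62 scores against the 20 amino acids; zero-padded."""
    enc = np.zeros((max_len, len(AA)))
    for i, aa in enumerate(seq[:max_len]):
        enc[i] = [BLOSUM62[aa, b] for b in AA]
    return enc

print(blosum_encode("CASSLGQAYEQYF").shape)   # (20, 20)
```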
TcellMatch <cit.>, another early neural network model, featured a variety of layer types, including self-attention, Gated Recurrent Units (GRUs), Long Short-Term Memory (LSTM), and convolutional layers. The authors explored the use of paired-chain data versus only TCRβ data, and experimented with adding covariate data such as transcriptome and surface protein expression from single-cell experiments. Furthermore, the explicit encoding of epitopes was compared with using epitopes as class labels, with the latter method showing an improved performance. Published shortly afterwards, ImRex <cit.> introduced the novel concept of representing TCRs and epitopes as visual interaction maps, facilitating the use of established computer vision techniques. ImRex was one of the first models capable of making predictions for unseen epitopes, though its performance was somewhat limited. Both DeepTCR <cit.> and pMTnet <cit.> exploited autoencoders to generate meaningful representations of TCR sequences in a latent space, later used as input for a classifier. This has the advantage of permitting the use of additional unlabeled data to train the encoder and decoder. DeepTCR employed variational autoencoders with convolutional layers, and used the TCR CDR3 sequence along with V and J gene information. The key differences in pMTnet are the use of stacked autoencoders instead of variational ones, and the explicit encoding of the pMHC molecule through a re-implemented version of netMHCpan <cit.>, an MHC-I binding machine-learning model. Published in 2021, TITAN <cit.> is a supervised neural network inspired by drug sensitivity prediction models <cit.>. The architecture employs convolutional layers, self-attention, and multi-head context-attention layers. Importantly, TITAN experimented with encoding peptides as SMILES (Simplified Molecular Input Line-Entry System) strings, a linear and readable format to represent molecules atom-wise, which enables an efficient token-based input of atomic units into a neural network. Encoding epitopes as SMILES strings resulted in two significant benefits. First, due to the multiple paths for traversing a molecular graph, the same molecule can be encoded in various equivalent ways as a SMILES string, a property that was leveraged in TITAN as an effective data-augmentation strategy. Second, it enabled the pretraining of the network using protein-compound interactions, resulting in a substantial enhancement of the model's performance. This is an example of transfer learning, where large amounts of related data are used to improve predictions in scenarios with limited data availability.
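TITAN's SMILES-based augmentation idea can be sketched with RDKit, whose SMILES writer supports randomized atom ordering. The peptide-to-molecule conversion via MolFromSequence is one plausible route and not necessarily the authors' exact pipeline.

```python
from rdkit import Chem

def smiles_variants(peptide: str, n: int = 5) -> set:
    """Generate up to n equivalent SMILES strings for a peptide by
    randomizing the atom traversal order (a cheap augmentation trick)."""
    mol = Chem.MolFromSequence(peptide)   # one-letter amino acid codes
    return {Chem.MolToSmiles(mol, canonical=False, doRandom=True)
            for _ in range(n)}

for s in smiles_variants("GILGFVFTL"):    # a well-known influenza epitope
    print(s)
```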
The key insights gained from these models are that a good latent sequence representation can significantly improve a model's performance, and that transfer learning further contributes to boosting the accuracy of models. The pretrained PLMs developed in the 3rd generation and discussed in Section <ref> further capitalize on these principles. §.§ 3rd generation: Protein language models Recently, attention-based Transformer models have garnered substantial public interest due to the release of language models fine-tuned for conversational tasks, such as ChatGPT, Bard, Perplexity, and others. Although these AI-based chatbots have brought the field into the public eye, the use of self-supervised pre-training on unlabeled data has been driving advancements in natural language processing (NLP) for years. Attention-based Transformers were first introduced in 2017 <cit.>. It soon became evident that these architectures could be trained on millions of lines of unlabeled text using proxy tasks such as masked word prediction or next word prediction <cit.>. Following the pre-training phase, the models are typically fine-tuned with labeled datasets for specific text-based downstream tasks. One of the key concepts contributing to the success of these models was the shift from one-hot encodings, a transformation that simply converts text into vectors without preserving any semantic or contextual word information, to context-informed word representations that capture the meaning and similarity of words based on their contextual usage. Examples of these are Word2Vec <cit.> and, more recently, Transformer architectures such as GPT (Generative Pre-trained Transformer) <cit.>, BERT (Bidirectional Encoder Representations from Transformers) <cit.>, Transformer-XL <cit.>, and XLNet <cit.>. Transformer architectures, when trained on extensive text corpora, encounter words in diverse contexts, which enables them to construct highly informative latent word representations of significant relevance for a wide range of downstream tasks. Learning the language of biology: With the success of Transformer architectures in NLP tasks, they were quickly adapted to various biological tasks, such as biomedical text mining <cit.> and genomic sequence analysis <cit.>. In protein modeling, the sequential order of amino acids in proteins follows rules determined by chemical properties, such as polarity, charge, and hydrophilicity. This is analogous to how grammatical rules govern the arrangement of words in sentences. Indeed, since the year 2020, a variety of Transformer models have emerged that, trained on extensive protein sequence datasets, are demonstrating competitive performance in various protein-related tasks, such as predicting protein homology, structure, function, and interactions <cit.>. §.§.§ Language models for TCR specificity prediction With PLMs becoming increasingly popular, the first attempts to apply them to the TCR specificity prediction problem were soon made. For example, TCR-BERT <cit.> is a BERT-based model that was pre-trained on a dataset of 88,403 TCRα and TCRβ sequences using a masked-token prediction task. The weights were fine-tuned on an antigen classification task, and the resulting TCR embeddings were reduced to 50 dimensions using principal component analysis (PCA), followed by classification with a support vector machine (SVM). More recently, STAPLER <cit.>, another BERT-based architecture, was trained on an even larger unlabeled dataset of almost 80M random pairs of TCRα, TCRβ, and peptide sequences, similarly using a masked-token prediction task. Although not a transformer model, CatELMo <cit.> exploits ELMo (Embeddings from Language Models) <cit.>, a bi-directional, context-aware, LSTM-based language model, for TCR modeling. CatELMo is trained on more than four million TCR sequences collected from ImmunoSEQ <cit.>, and achieves improved performance compared to traditional TCR and epitope sequence embeddings, such as BLOSUM. All these models leverage transfer learning by utilizing unlabeled TCR sequences for pre-training. Alternatively, transfer learning can also be exploited by leveraging existing large PLMs that have been trained on vast collections of unlabeled protein sequences. For instance, TCRconv <cit.> employs ProtBERT <cit.> embeddings as input and processes them through a convolutional and a linear layer for the downstream task of epitope classification. When considering this approach, a relevant question is whether it is better to use a large, general PLM or a domain-specific PLM.
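In practice, extracting such embeddings from a general PLM takes only a few lines with the HuggingFace transformers library. The checkpoint below is one small, publicly available ESM2 model, and mean-pooling over residues is just one common pooling choice; neither is prescribed by the studies discussed here.

```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "facebook/esm2_t6_8M_UR50D"         # a small public ESM2 checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name).eval()

seqs = ["CASSLGQAYEQYF", "CASSIRSSYEQYF"]  # example CDR3 sequences
batch = tokenizer(seqs, padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state       # (2, L, 320)
mask = batch["attention_mask"].unsqueeze(-1)
embeddings = (hidden * mask).sum(1) / mask.sum(1)   # mean-pooled per sequence
print(embeddings.shape)                             # torch.Size([2, 320])
```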
A recent study showed that embeddings from ESM2 <cit.>, a general PLM (Fig. <ref>), and TCR-BERT <cit.> yielded comparable results in a TCR classification task <cit.>. This study further showed that general models trained on broader collections of heterogeneous protein sequences can outperform their domain-specific counterparts in data-rich environments. This finding challenges the prevailing assumption that domain-specific models offer superior performance in specialized tasks. Indeed, as each PLM encodes information differently, the selection of PLM embeddings and the design of the coupled architecture have to be carefully optimized for each downstream application.§ OUTLOOK The accurate prediction of TCR specificity is crucial for multiple clinical applications, such as the development of safer and more efficient immunotherapies. It can also deepen our understanding of autoimmune diseases. Despite the progress, current TCR prediction models face significant challenges due to insufficient and often biased data, notably concerning epitope information. This limitation restricts the models' capacity to generalize to new epitopes, which is critical, for instance, to predict potential cross-reactive events in T cell-based therapies. In recent years, we have seen the rapid release and evolution of machine learning models exploiting supervised and unsupervised learning approaches. The advent of protein language models (PLMs), which leverage transfer learning to enhance performance in data-scarce domains, is revolutionizing the field and showing breakthrough performances in various protein-related tasks. However, substantial work is still required before these models can reliably perform highly specific tasks, such as predicting TCR specificity to novel epitopes. Importantly, alongside the rapid development of new models, targeted efforts to understand the strengths and weaknesses of each model need to be undertaken. In this direction, the ImmRep 2022 Workshop <cit.> has initiated efforts to create a common benchmark dataset and to conduct yearly competitions to facilitate rigorous model comparisons, a critical advancement for the field. The interpretation of existing models is also necessary to improve our understanding of TCR-epitope interactions. Many machine learning models, especially PLMs, are inherently non-interpretable due to the encoding of amino acid information in highly abstract and complex latent spaces. The problem of deriving meaning and explanations from black-box models is a crucial challenge that could be addressed by leveraging recent advancements in interpretable machine learning. For instance, the Automated Concept-based Explanation (ACE) <cit.> aims to automatically identify key concepts, i.e., high-level, human-understandable features, that influence model decisions. Applied to clusters of samples in the latent space, it could be used to explore the biochemical properties that govern cluster assignments. Attention mechanisms can also be leveraged to capture structurally important residue pairs that contribute to TCR-epitope binding <cit.> or to predict protein structural properties <cit.>. For instance, a recent study categorized residues into groups with high and low attention values. The analysis revealed that residues receiving more attention often had distinct structural properties, and that they were statistically more likely to form hydrogen bonds within the CDR3 region <cit.>.
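Extracting such attention values is straightforward with the same tooling. The sketch below averages attention over layers and heads and lists the most-attended token pairs; this aggregation is our simplification of the cited analyses, and the model checkpoint is again only an example.

```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "facebook/esm2_t6_8M_UR50D"   # any attention-based PLM works here
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name).eval()

batch = tokenizer(["CASSLGQAYEQYF"], return_tensors="pt")
with torch.no_grad():
    out = model(**batch, output_attentions=True)
# Stack to (layers, heads, L, L), then average over layers and heads; note
# that positions include the special start/end tokens added by the tokenizer.
att = torch.stack(out.attentions).squeeze(1).mean(dim=(0, 1))
flat = torch.topk(att.flatten(), k=5).indices
L = att.shape[-1]
for idx in flat:
    i, j = divmod(idx.item(), L)
    print(f"token {i} -> token {j}: attention {att[i, j].item():.3f}")
```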
An example of this approach is DECODE <cit.>, a user-centric interpretable pipeline to extract human-comprehensible rules from any black-box TCR predictive model. DECODE exploits Anchors, a model-agnostic approach to approximate the decision boundary of any machine learning model and identify local, sufficient conditions for predictions <cit.>. However it is addressed, interpreting model predictions is not just a technical necessity but a fundamental requirement for clinical trust and application <cit.>.

Other open questions demand nuanced investigation, such as understanding how PLMs organize information in the latent space. Counterintuitively, recent research has shown that the selection of an appropriate PLM embedding layer for downstream applications necessitates optimization, as in some instances, earlier layers demonstrate superior representation capabilities for specific tasks <cit.>. Gaining a deeper insight into what and how information is encoded could enable more focused and efficient development and deployment of models for particular tasks.

Finally, many computational methods designed for TCR specificity prediction can be adapted to model B cell receptor (BCR) antigen binding. Indeed, statistical <cit.>, machine learning <cit.>, and Transformer-based approaches <cit.> have been developed to jointly model TCR and BCR repertoires. While not the primary focus of this paper, BCR specificity prediction faces similar data and computational challenges as TCR specificity prediction. A particularly interesting question is whether PLM-based models can effectively capture both TCR and BCR characteristics, or if specialized models might yield higher accuracy. In summary, the field of computational immunology, particularly regarding TCR and BCR specificity prediction, is rapidly evolving, yet key questions and challenges persist. Addressing these is crucial not only for deepening our understanding of the immune system but also for paving the way for groundbreaking clinical applications.

§ FUNDING AND DECLARATIONS

The authors acknowledge funding from the European Union's Horizon 2020 research and innovation program under the Marie Skłodowska-Curie Actions (813545) and the ICT-2018-2 Program (826121). Additional support was received from the Swiss National Science Foundation through the Sinergia program (CRSII5 193832) and Project Funding (192128). During the preparation of this work, the authors used ChatGPT to improve language clarity and readability. After using this service, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.

§ APPENDIX

Survey of TCR specificity prediction models.
Model | Year | Type | Epitope explicitly encoded
TCRdist <cit.> | 2017 | Unsupervised | No
Gliph <cit.> | 2017 | Unsupervised | No
DeNeuter <cit.> | 2018 | Unsupervised | No
NetTCR <cit.> | 2018 | Supervised | Yes
TCRex <cit.> | 2019 | Unsupervised | No
SETE <cit.> | 2020 | Supervised | No
TcellMatch <cit.> | 2020 | Supervised | Yes^1
ImRex <cit.> | 2020 | Supervised | Yes
iSMART <cit.> | 2020 | Unsupervised | No
ERGO <cit.> | 2020 | Supervised | Yes
TCRdist3 <cit.> | 2021 | Unsupervised | No
TCRGP <cit.> | 2021 | Supervised | No
ERGO2 <cit.> | 2021 | Supervised | Yes
TCRMatch <cit.> | 2021 | Unsupervised | No
pMTnet <cit.> | 2021 | Supervised | Yes
TITAN <cit.> | 2021 | Supervised | Yes
NetTCR-2.0 <cit.> | 2021 | Supervised | Yes
TCRAI <cit.> | 2021 | Supervised | No
DeepTCR <cit.> | 2021 | Supervised | No
TCR-BERT <cit.> | 2021 | Language | No
GIANA <cit.> | 2021 | Unsupervised | No
ELATE <cit.> | 2021 | Unsupervised | No
DLpTCR <cit.> | 2021 | Supervised | Yes
ATM-TCR <cit.> | 2022 | Supervised | Yes
TCRconv <cit.> | 2022 | Language | No
diffRBM <cit.> | 2023 | Supervised | No^2
PanPep <cit.> | 2023 | Supervised | Yes
catELMo <cit.> | 2023 | Language | Yes
STAPLER <cit.> | 2023 | Language | Yes
NetTCR-2.2 <cit.> | 2023 | Supervised | Yes
TCR-H <cit.> | 2023 | Supervised | Yes
epiTCR <cit.> | 2023 | Supervised | Yes
GGNpTCR <cit.> | 2023 | Supervised | Yes
Rehman Khan <cit.> | 2023 | Language | No
TAPIR <cit.> | 2023 | Supervised | Yes
BERTrand <cit.> | 2023 | Language | Yes
MITNet <cit.> | 2023 | Supervised | No
SC-AIR-BERT <cit.> | 2023 | Language | No
MixTCRpred <cit.> | 2023 | Language | Yes^3
TSPred <cit.> | 2023 | Supervised | Yes
Koyama et al. <cit.> | 2023 | Language | Yes
EPIC-TRACE <cit.> | 2023 | Language | Yes

(^1) Both an epitope classification model and a model that encodes the epitope sequences are presented.
(^2) diffRBM trains two different models: one for immunogenicity prediction, which explicitly models the epitope sequence, and a second for TCR epitope specificity prediction, which does not consider the epitope sequence.
(^3) The basic model is epitope-specific; however, a pan-epitope model that encodes the sequence of the epitope is also presented.
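As a concrete illustration of the downstream recipe used by TCR-BERT in the survey above (frozen PLM embeddings reduced to 50 dimensions with PCA, followed by an SVM classifier), here is a minimal Python sketch. The function embed_sequences is a hypothetical placeholder for any protein language model encoder, and the sequences and labels are toy data; only the PCA and SVM steps mirror the published pipeline.

    # Sketch of the TCR-BERT-style downstream pipeline: frozen language-model
    # embeddings -> PCA to 50 dimensions -> SVM classifier. `embed_sequences`
    # is a hypothetical stand-in for a real PLM encoder.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    def embed_sequences(sequences, dim=768):
        # Placeholder: one fixed-length vector per CDR3 sequence. A real
        # implementation would mean-pool the last hidden states of a PLM.
        rng = np.random.default_rng(0)
        return rng.normal(size=(len(sequences), dim))

    cdr3 = ["CASSLAPGATNEKLFF", "CASSVETGGTEAFF",
            "CASSPGQGNYEQYF", "CASRGDSSYEQYF"] * 25   # toy sequences
    y = np.array([0, 1, 0, 1] * 25)                   # toy binding labels

    X = embed_sequences(cdr3)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    model = make_pipeline(PCA(n_components=50), SVC(kernel="rbf"))
    model.fit(X_tr, y_tr)
    print("held-out accuracy:", model.score(X_te, y_te))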
http://arxiv.org/abs/2312.16594v1
{ "authors": [ "Anna Weber", "Aurélien Pélissier", "María Rodríguez Martínez" ], "categories": [ "q-bio.QM", "q-bio.BM", "q-bio.SC" ], "primary_category": "q-bio.QM", "published": "20231227144021", "title": "T cell receptor binding prediction: A machine learning revolution" }
griffinch@psu.edu Applied Research Laboratory, The Pennsylvania State University, University Park, PA 16802
fengl@cafs.ac.cn Fisheries Engineering Institute, Chinese Academy of Fishery Sciences, Beijing 100141, China
ronglingwu@bimsa.cn Beijing Institute of Mathematical Sciences and Applications, Beijing 101408, China; Yau Mathematical Sciences Center, Tsinghua University, Beijing 100084, China

We introduce and study the spatial replicator equation with higher order interactions and both infinite (spatially homogeneous) populations and finite (spatially inhomogeneous) populations. We show that in the special case of three strategies (rock-paper-scissors) higher order interaction terms allow travelling waves to emerge in non-declining finite populations. We show that these travelling waves arise from diffusion stabilisation of an unstable interior equilibrium point that is present in the aspatial dynamics. Based on these observations and prior results, we offer two conjectures whose proofs would fully generalise our results to all odd cyclic games, both with and without higher order interactions, assuming a spatial replicator dynamic. Intriguingly, these generalisations for N ≥ 5 strategies seem to require declining populations, as we show in our discussion.

Spatial Dynamics of Higher Order Rock-Paper-Scissors and Generalisations
Rongling Wu
January 14, 2024 - Preprint
========================================================================

§ INTRODUCTION

Replicator dynamics have been used extensively in theoretical ecology to model ecosystem interactions at a high level <cit.>. Surprisingly, these models intersect those from theoretical physics, with tournament dynamics in ecology <cit.> also occurring in the analysis of the Schrödinger operator <cit.> and in the discrete KdV equation <cit.>. Most biological and ecological models assume pairwise interactions <cit.>, leading naturally to generalized Lotka-Volterra equations or the (equivalent) replicator dynamics in which the interaction matrix and the payoff matrix become synonymous. In this case, the payoff from interactions is used to define species fitness, as discussed in <ref>. This simple assumption is invalidated by strong evidence for the existence of higher order interactions <cit.>. Higher order interactions occur when three or more species (not necessarily distinct) interact with each other simultaneously to produce an additional payoff, which may increase or decrease fitness in the constituent species <cit.>. In particular, higher order interactions have the potential to alter the established relationship between diversity and stability <cit.>. While the replicator equation has been studied extensively <cit.>, the replicator dynamic with higher order interactions has recently been considered by Griffin and Wu <cit.>. In this work, they show that the presence of higher order interactions in rock-paper-scissors can change the well-known dynamics of this game to allow the emergence of a sub-critical Hopf bifurcation as compared to the known degenerate Hopf bifurcation that characterizes the dynamics of rock-paper-scissors under the ordinary (pairwise) replicator dynamics <cit.>. Before this, Gokhale and Traulsen <cit.> studied evolutionary games with multiple (more than two) strategies and multiple players, while Zhang et al. <cit.> study multiplayer evolutionary games with asymmetric payoffs. In related but distinct work, Peixe and Rodrigues <cit.> study strange attractors and super-critical Hopf bifurcations in polymatrix replicators.
Polymatrix games are also discussed in <cit.>. However, to our knowledge, no one has yet studied a spatial replicator with higher order interactions, which is the goal of this paper.

Spatial evolutionary dynamics using partial differential equations have been studied by several authors, with <cit.> providing a small example of the body of work. Most of these models assume an infinite, spatially homogeneous, population in so far as the state variables of the model are the proportions of the population playing a given strategy at a given location and time. Durrett and Levin were the first to point out the fundamental differences between discrete and continuous evolutionary game models and finite and infinite population assumptions <cit.>. These distinctions have been explored further by Griffin et al. <cit.>, where it is shown that finite populations can destroy travelling wave solutions (in rock-paper-scissors) or even reverse the direction of travelling waves (in prisoner's dilemma). Alternative approaches to studying finite populations frequently use discrete (grid) based methods and are based on the early work of Nowak and May <cit.>, with extensions by several authors <cit.>. These models often focus on the interplay between concepts from statistical mechanics and evolutionary games via updating rules that use (among other mechanisms) the Boltzmann distribution. We will not consider these models in this paper. Instead, we will use the models of Vickers <cit.> for infinite (or spatially homogeneous) populations and Griffin, Mummah and Deforest for finite (or spatially inhomogeneous) populations.

In this paper, we study spatial replicator equations with higher order interactions for both infinite (spatially homogeneous) and finite (spatially inhomogeneous) populations. Formal definitions for these cases are provided in <ref>. While Griffin et al. <cit.> show that rock-paper-scissors under the ordinary spatial replicator dynamic can only admit travelling waves if the net population is decreasing, we show that the introduction of higher order interactions allows travelling waves to emerge in spatially homogeneous and inhomogeneous populations with no decline. Interestingly, when we generalise to cyclic games with more strategies (e.g., rock-paper-scissors-Spock-lizard), we see that this property of both the existence of travelling waves and a non-declining population seems to be a property of the three strategy case only. Nevertheless, we use observations made in this paper to pose two general conjectures on travelling waves and cyclic games under both the ordinary spatial replicator and the spatial replicator with higher order dynamics.

The remainder of this paper is organized as follows. In <ref>, we introduce notation needed in the remainder of the paper. We formulate the higher order spatial replicator in <ref>. Our analysis on rock-paper-scissors is carried out in <ref>. We generalise this analysis in <ref>, proposing two conjectures on odd cyclic games. Conclusions and future directions are discussed in <ref>. There is also an appendix (<ref>) that contains a derivation of the first Lyapunov coefficient for the Hopf bifurcation identified in <ref>.

§ BACKGROUND

Let Δ_n-1 be the n-1 dimensional unit simplex embedded in ℝ^n composed of vectors 𝐮 = ⟨u_1,…,u_n⟩ where u_1 + ⋯ + u_n = 1 and u_i ≥ 0 for all i=1,…,n. We assume that an ecosystem supports a total population of size M. Then u_iM is the size of the population of species i, where we allow fractional species counts for simplicity.
Suppose the fitness of species i is given by the function f_i(𝐮). The replicator equation with fitness f is then,

u̇_i = u_i[f_i(𝐮) - f̅(𝐮)],

where,

f̅(𝐮) = ∑_j u_j f_j(𝐮),

is the mean fitness of the population. Assuming a finite population, the dynamics of the whole population are given by,

Ṁ = f̅(𝐮)M.

If 𝐀 ∈ ℝ^n × n is a payoff (or interaction) matrix, then,

f_i(𝐮) = 𝐞_i^T𝐀𝐮

produces the classic replicator from evolutionary game theory,

u̇_i = u_i[𝐞_i^T𝐀𝐮 - 𝐮^T𝐀𝐮].

When 𝐮 is a function of space 𝐱 and time t, Vickers <cit.> (and many others) study the spatial replicator with form,

u̇_i = u_i[f_i(𝐮) - f̅(𝐮)] + D∇^2 u_i,

where D is a diffusion constant. Without loss of generality, we assume that all species share a diffusion constant. Griffin, Mummah and DeForest <cit.> generalised the work of Durrett and Levin <cit.> to show that when the total population M(𝐱,t) is neither homogeneous nor infinite, the species and total population are governed by the system of equations,

{ ∂u_i/∂t = u_i[f_i(𝐮) - f̅(𝐮)] + (2nD/M)∇M · ∇u_i + D∇^2 u_i,
  ∂M/∂t = f̅(𝐮)M + D∇^2 M, }

where n is the spatial dimension. It is straightforward to see that when M(𝐱,t) is homogeneous (or infinite), then <ref> is recovered. As we will discuss these two cases in the remainder of the paper, we will refer to equations of the form given in <ref> as the finite population spatial replicator and equations of the form given in <ref> as the infinite population spatial replicator, even though we may really be considering finite populations that are spatially inhomogeneous vs. homogeneous.

A biased rock-paper-scissors (RPS) payoff matrix is given by,

𝐀 = ([ 0 -1 1+a; 1+a 0 -1; -1 1+a 0 ]),

where we assume that a > -2 to maintain a rock-paper-scissors dynamics. It is well known that with this payoff matrix, the aspatial replicator, <ref>, has a single interior fixed point at u_1 = u_2 = u_3 = 1/3 and this fixed point is stable when a > 0, unstable when a < 0 and elliptic when a = 0 <cit.>. Griffin, Mummah and DeForest <cit.> showed that a travelling wave solution exists for <ref> using the biased RPS matrix just in case a < 0. In a finite population case, this is biologically unrealistic since,

f̅(𝐮) = 𝐮^T𝐀𝐮 = a(u_1u_2 + u_1u_3 + u_2u_3),

which is negative just in case a < 0. Since Ṁ = f̅M < 0 in this case, the population will collapse. Moreover, <cit.> shows numerically that the travelling waves can be destroyed in the finite declining population case. However, we know that spatial travelling waves exist in real, non-declining populations <cit.>. Our objective is to show that higher order interactions lead to the existence of travelling waves in cyclic competition (rock-paper-scissors) under both the finite and non-finite spatial replicator dynamics.

§ HIGHER ORDER INTERACTIONS IN ROCK-PAPER-SCISSORS IN SPACE

In <cit.>, Griffin and Wu introduce a higher order interaction dynamic modelled by,

f_i(𝐮) = 𝐞_i^T𝐀𝐮 + 𝐮^T𝐁_i𝐮,

where 𝐁_i is a quadratic form (matrix) that takes two copies of the population proportion vector 𝐮 and returns a payoff to species i that occurs when one member of species i randomly meets two members of the population. We think of 𝐁_i as being a slice of a (0,3) tensor 𝐁:Δ_n-1×Δ_n-1×Δ_n-1→ℝ. The mean fitness is then given by,

f̅ = ∑_i=1^n u_i f_i(𝐮) = ∑_i=1^n u_i (𝐞_i^T𝐀𝐮 + 𝐮^T𝐁_i𝐮) = 𝐮^T𝐀𝐮 + ∑_i=1^n u_i𝐮^T𝐁_i𝐮.
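To make the higher order fitness above concrete, the short Python sketch below evaluates f_i(𝐮) = 𝐞_i^T𝐀𝐮 + 𝐮^T𝐁_i𝐮 and the corresponding replicator right-hand side u̇_i = u_i(f_i - f̅); the particular numerical values of A and B are illustrative only and are not taken from the paper.

    # Right-hand side of the replicator dynamics with higher order
    # interactions. B is stored as an (n, n, n) array whose i-th slice is
    # the quadratic form B_i.
    import numpy as np

    def replicator_rhs(u, A, B):
        f = A @ u + np.einsum('ijk,j,k->i', B, u, u)   # f_i = e_i^T A u + u^T B_i u
        f_bar = u @ f                                  # mean fitness
        return u * (f - f_bar)

    rng = np.random.default_rng(1)
    n = 3
    A = rng.normal(size=(n, n))        # illustrative payoff matrix
    B = rng.normal(size=(n, n, n))     # illustrative interaction tensor
    u = np.array([0.5, 0.3, 0.2])      # a point on the simplex
    du = replicator_rhs(u, A, B)
    print(du, du.sum())                # components sum to (numerically) zero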
Following Vickers <cit.>, we can construct a spatial model for higher order interactions that assumes a homogeneous population by appending a diffusion term to the replicator to obtain,

∂u_i/∂t = u_i(𝐞_i^T𝐀𝐮 - 𝐮^T𝐀𝐮 + 𝐮^T𝐁_i𝐮 - ∑_i=1^n u_i𝐮^T𝐁_i𝐮) + D∇^2 u_i.

Let 𝐀 be the standard rock-paper-scissors matrix,

𝐀 = ([ 0 -1 1; 1 0 -1; -1 1 0 ]),

obtained by setting a = 0 in <ref>. Now, 𝐮^T𝐀𝐮 = 0. Generalising from Griffin and Wu <cit.>, we assume that the quadratic forms 𝐁_i (i=1,2,3) can be written as,

𝐁_1 = ([ 0 -α β; -α -γ 0; β 0 δ ])   𝐁_2 = ([ δ β 0; β 0 -α; 0 -α -γ ])   𝐁_3 = ([ -γ 0 -α; 0 δ β; -α β 0 ]),

where we assume α, β, γ, δ > 0. As in <cit.>, the tensor 𝐁, composed of slices 𝐁_1, 𝐁_2 and 𝐁_3, has cyclic structure. When we assume that γ = 2β and δ = 2α, then,

∑_i u_i𝐮^T𝐁_i𝐮 = 0,

and consequently, f̅ = 0. Thus, any finite population would be stable assuming these dynamics. The resulting spatial dynamics for a homogeneous (infinite) population are,

∂u_i/∂t = u_i(𝐞_i^T𝐀𝐮 + 𝐮^T𝐁_i𝐮) + D∇^2 u_i.

The corresponding finite population model is then,

{ ∂u_i/∂t = u_i(𝐞_i^T𝐀𝐮 + 𝐮^T𝐁_i𝐮) + (2nD/M)∇M · ∇u_i + D∇^2 u_i,
  ∂M/∂t = D∇^2 M. }

Notice that our assumption on 𝐀 and 𝐁 implies that f̅ = 0 and so the bulk population is governed by the diffusion equation.

§ TRAVELLING WAVE SOLUTIONS EXIST IN ONE DIMENSION

We begin by analysing the aspatial dynamics. Under the assumption that γ = 2β and δ = 2α, the aspatial dynamics are given by,

u̇_1 = u_1(u_3 - u_2) + 2u_1(αu_3^2 + βu_1u_3 - αu_2u_1 - βu_2^2)
u̇_2 = u_2(u_1 - u_3) + 2u_2(αu_1^2 + βu_2u_1 - αu_2u_3 - βu_3^2)
u̇_3 = u_3(u_2 - u_1) + 2u_3(αu_2^2 + βu_2u_3 - αu_1u_3 - βu_1^2)

Just as with ordinary rock-paper-scissors, the three extreme points of Δ_2 are fixed points, as is the interior point u_1 = u_2 = u_3 = 1/3. First order analysis of the Jacobian matrix at the interior fixed point gives eigenvalues,

λ_1 = 0
λ_2,3 = (β-α)/3 ± i(√3/3)(1+α+β).

Thus the interior fixed point is unstable when β > α and stable if β < α. When β = α, the Hartman-Grobman theorem cannot be used. In this case, the dynamics simplify to,

u̇_1 = (1+2α)u_1(u_3 - u_2)
u̇_2 = (1+2α)u_2(u_1 - u_3)
u̇_3 = (1+2α)u_3(u_2 - u_1),

which is just an ordinary rock-paper-scissors dynamic with payoff matrix (1+2α)𝐀. Therefore, the interior fixed point is elliptic in this case. Moreover, we have shown that the higher order dynamics we consider have analogous dynamics to the ordinary RPS system, except that by construction f̅ = 0.

Now consider the spatial replicator with infinite (homogeneous) population. Let z = x + ct, where c is a wave speed to be determined. If we have 𝐮(x,t) = 𝐮(z), then the resulting system becomes,

cu'_i = u_i(𝐞_i^T𝐀𝐮 + 𝐮^T𝐁_i𝐮) + Du_i'',

where u'_i is the derivative in terms of z. Let v_i = u'_i. Then we have the modified system of differential equations,

{ Dv_i' = cv_i - u_i(𝐞_i^T𝐀𝐮 + 𝐮^T𝐁_i𝐮),
  u_i' = v_i. }

This system has a fixed point at v_i = 0, u_i = 1/3 for i=1,2,3.
Computing the eigenvalues of the Jacobian at this point gives,

λ_1 = 0
λ_2 = c/D
λ_3,4 = [3c ± √(9c^2 - 12D(β-α) + 12Di√3(1+α+β))]/(6D)
λ_5,6 = [3c ± √(9c^2 - 12D(β-α) - 12Di√3(1+α+β))]/(6D).

The zero eigenvalue arises because we necessarily have u_r(z) + u_p(z) + u_s(z) = 1 and v_r(z) + v_p(z) + v_s(z) = 0, and thus the dynamics play out on a 4 dimensional manifold and λ_1 and λ_2 can be ignored. Focusing on the term under the outer radical, assume there is some r so that,

(3c ± ri)^2 = 9c^2 - 12D(β-α) ± 12Di√3(1+α+β).

Then we obtain the equations,

9c^2 - r^2 = 9c^2 - 12D(β-α)
6cr = 12D√3(1+α+β).

We can compute the wave speed and the parameter r as,

(r^⋆, c^⋆) = ±( √(12D(β-α)), D(1+α+β)/√(D(β-α)) ).

We conclude that the wave speed is real just in case β > α. That is, a travelling wave can emerge when the interior fixed point of the aspatial dynamics is unstable and hence stabilised by the diffusion term. This is similar to the condition found by Griffin, Mummah and Deforest for the ordinary spatial replicator with rock-paper-scissors <cit.>. We can simplify the eigenvalues λ_3,4 and λ_5,6 using the negative branch of (r^⋆, c^⋆) to obtain,

λ_4,6 = ±(i/√3)√((β-α)/D)
λ_3,5 = -(1+α+β)/√(D(β-α)) ± i√(β-α)/√(3D).

Thus we have three eigenvalues with negative real part indicating a three-dimensional stable manifold with two additional eigenvalues that are pure imaginary. The presence of a stable manifold with imaginary eigenvalues satisfies the first criterion of Hopf's theorem <cit.> (Page 152). We use the negative branch because that will ensure that solutions to the PDE are (locally) attracted to the limit cycle and hence the travelling wave solution.

Now consider the specific eigenvalues,

λ_4,6 = [3c - √(9c^2 - 12D(β-α) ± 12Di√3(1+α+β))]/(6D).

Differentiating with respect to c and evaluating at the identified wave speed yields,

λ'_4,6(c^⋆) = 1/(2D) - √3 c^⋆/(2D√((3c^⋆ ± r^⋆i)^2)) = 1/(2D) - √3 c^⋆(3c^⋆ ∓ r^⋆i)/(2D(9c^⋆^2 + r^⋆^2)).

Then,

Re[λ'_4,6(c^⋆)] = (1/(2D))( 1 - √3(α+β+1)^2/(7α^2 - 2α(β-3) + β(7β+6) + 3) )

To see that this is always non-zero, note that the equation,

1 - √3(α+β+1)^2/(7α^2 - 2α(β-3) + β(7β+6) + 3) = 0,

is quadratic in α and β. Solving for α in terms of β leads to a quadratic equation with discriminant,

𝒟 = 16(√3 - 3)(2β+1)^2 < 0.

Thus, there are no real values of α and β that make this expression zero. As such, the eigenvalues must cross the imaginary axis with non-zero speed, satisfying the second criterion of Hopf's theorem. Thus, we have proved the existence of a Hopf bifurcation at the fixed point, which implies the existence of an isolated attracting periodic orbit (stable limit cycle) just in case the first Lyapunov coefficient of the system's normal form is non-zero and negative <cit.>. The first Lyapunov coefficient can be constructed using techniques in <cit.>, as,

ℓ_1 = -3√3(6Dξω^2 + ξ^2 - 3D^2ω^4) / (4√((ω^2+1)^3(3D^2ω^4+ξ^2)(3D^2ω^2(ω^2+1)+ξ^2))),

where,

ω = √((β-α)/(3D)) and ξ = 1 + α + β.

The details of this construction are provided in <ref> and the SI, where it is also shown that this quantity is always negative. Thus, by Hopf's theorem, we have proved that the travelling wave system <ref> has an attracting periodic solution (because ℓ_1 < 0) and consequently a travelling wave solution must exist for <ref>.

We now consider the one-dimensional finite population model from <ref>.
In one dimension we have,

{ ∂_t u_1 = u_1(u_3 - u_2) + 2u_1(αu_3^2 + βu_1u_3 - αu_2u_1 - βu_2^2) + (2D/M)∂_x M ∂_x u_1 + D∂_xx u_1,
  ∂_t u_2 = u_2(u_1 - u_3) + 2u_2(αu_1^2 + βu_2u_1 - αu_2u_3 - βu_3^2) + (2D/M)∂_x M ∂_x u_2 + D∂_xx u_2,
  ∂_t u_3 = u_3(u_2 - u_1) + 2u_3(αu_2^2 + βu_2u_3 - αu_1u_3 - βu_1^2) + (2D/M)∂_x M ∂_x u_3 + D∂_xx u_3,
  ∂_t M = D∂_xx M. }

Following work by Griffin <cit.>, we have a travelling wave solution for the diffusion equation ∂_t M = D∂_xx M as,

M(x,t) = Aexp[c(x + Dct)] + B,

where A and B are arbitrary constants, Dc ∈ ℝ is the population wave speed, and c ∈ ℝ will be the species wave speed. Assume B = 0. Then (1/M)∂M/∂x = c. Then in the finite population case, the resulting travelling wave equation for the species is,

cu'_i = u_i(𝐞_i^T𝐀𝐮 + 𝐮^T𝐁_i𝐮) + 2Dcu'_i + Du_i'',

leading to the system of equations,

Dv_i' = c(1-2D)v_i - u_i(𝐞_i^T𝐀𝐮 + 𝐮^T𝐁_i𝐮)
u_i' = v_i.

This is identical to <ref> but with a modified wave speed and consequently, our proof of the existence of a travelling wave solution applies mutatis mutandis. Thus, for small diffusion (D < 1/2), the finite and infinite populations will share solutions but travelling at different speeds. We now illustrate this for D = 1/10, β = 3/2 and α = 1, and simultaneously show the existence of the predicted limit cycle solution for <ref>. Consider the following initial conditions for the PDEs <ref> and <ref>,

u_1(x,0) = (1/3)[1 + sin(x)],
u_2(x,0) = (1/3)[1 + sin(x - 2π/3)],
u_3(x,0) = (1/3)[1 + sin(x - 4π/3)],

and assume periodic boundary conditions u_i(-π,t) = u_i(π,t). Then four snapshots of the resulting travelling wave solution are shown in <ref>. We see a perturbation of the initial condition that quickly settles into the travelling wave solution in both the finite and infinite population cases.

We can numerically investigate solutions for <ref>. Let u_i(x,t) be the (numerical) travelling wave solutions to the infinite (finite) population spatial replicator. For u_i(z) and v_i(z) in <ref>, we set,

u_i(0) = u_i(0,T),
v_i(0) = ∂_x u_i(0,T),

where T = 50 is sufficiently large to ensure that the resulting numeric solution is (effectively) on the limit cycle. When we plot u_i(0,t) for i=1,2,3 (in an appropriate projection) we see that the solutions to the partial differential equation(s) approach the limit cycle, as expected. This is shown in <ref>.

Recall that when β > α, the interior fixed point u_i = 1/3 (i=1,2,3) in the aspatial dynamics is unstable. We conclude that the travelling wave solution arises because the diffusion is stabilising the growing oscillations that would arise at all points in space and (under certain initial conditions), allowing the stabilised oscillations to synchronize. We can prove this stabilisation occurs by first order analysis of the infinite population system. Let 𝐉_0 be the Jacobian of the equation system given in eqn:ur through eqn:us evaluated at the interior fixed point. Then we have,

𝐉_0 = [ (2/9)(β-α)  -(1/9)(2α+4β+3)  (1/9)(4α+2β+3);
        (1/9)(4α+2β+3)  (2/9)(β-α)  -(1/9)(2α+4β+3);
        -(1/9)(2α+4β+3)  (1/9)(4α+2β+3)  (2/9)(β-α) ].

Let υ_i = u_i - 1/3 with υ = ⟨υ_1,υ_2,υ_3⟩ and let 𝐃 = D𝐈, where 𝐈 is the identity matrix. Following <cit.>, we analyse the linearised stationary problem with Neumann boundary conditions,

0 = 𝐉_0υ + 𝐃∇^2υ, (𝐧·∇)υ = 0,

by computing the roots of the characteristic polynomial,

det(λ𝐈 + 𝐉_0 + 𝐃k^2).

Here k is a wave number in a Fourier basis of a proposed solution ansatz and λ is an eigenvalue.
We find three eigenvalues,

λ_1 = -Dk^2
λ_2,3 = (1/3)(β - α - 3Dk^2 ± i√3(1+α+β)).

The fact that -Dk^2 appears in the real parts of all three eigenvalues is sufficient to show that the diffusion exerts only a stabilising effect. Moreover,

Re(λ_2,3) = (β - α - 3Dk^2)/3,

is positive only if β > α + 3Dk^2. That is, β > α, which we already knew. Thus, we have not only shown that the diffusion exerts a stabilising effect on the system, but also that Turing patterns cannot emerge in this system as a result of diffusion induced instability. Interestingly, this also seems to explain the occurrence of travelling waves when no higher order interactions are present but a < 0 in the interaction matrix in <ref>, using the infinite population spatial replicator, as shown in <cit.>. We discuss this as a possible framework for generalising these results in future directions.

While it is generally difficult to construct the amplitude of a limit cycle, and thus a travelling wave, we can show numerically that the amplitude of the travelling wave (limit cycle) varies inversely with (a function of) the diffusion constant. Thus, as D increases, we expect to see travelling wave solutions that approach the fixed point solution u_i(x,t) = 1/3, further demonstrating the stabilising effect of the diffusion. This is illustrated in <ref> for D = 1/6 > 1/10 in the infinite population case.

We can prove, by counter-example, that the travelling wave solution is not globally asymptotically stable in the space of solutions for either the finite population equation or the infinite population equation. To see this, consider the initial condition,

u_1(x,0) = (2/5)[cos(x + π/6) + 1],
u_2(x,0) = (1/6)[1 - cos(π/6 - x)],
u_3(x,0) = (1/60)[17sin(x) - 7√3cos(x) + 26].

These expressions do not lead to (numerical) solutions that tend to travelling waves, as shown in <ref>. Instead, these solutions lead to a globally oscillating solution that asymptotically approaches the boundary of the simplex at all spatial positions. For the finite population case, we are using the travelling wave solution used in our prior numerical illustration. Interestingly, the finite population case takes longer to approach the globally oscillating solution than the infinite population case, most likely as a result of the bulk movement of the finite population. This is illustrated in <ref> (bottom). This phenomenon may warrant investigation in future work.

§ GENERALISATION

To generalise the work in this paper, recall that a circulant matrix <cit.> has structure,

𝐀 = [ a_0 a_n-1 a_n-2 ⋯ a_1; a_1 a_0 a_n-1 ⋯ a_2; ⋮ ⋮ ⋮ ⋱ ⋮; a_n-1 a_n-2 a_n-3 ⋯ a_0 ].

That is, the entire matrix structure is determined by the first row. The set of all circulant matrices forms an algebra under addition and (commutative) matrix multiplication. Let N = 2n+1 with n ≥ 1. Consider the N dimensional row vector,

𝐀_N_1 = [ 0, -1, 1+a, -1, 1+a, …, -1, 1+a ],

where a is the biasing term. Then the N×N circulant matrix 𝐀_N defined by 𝐀_N_1 is the payoff matrix for the N strategy generalisation of rock-paper-scissors. Griffin and Fan <cit.> showed that the replicator dynamics <ref> have a unique interior fixed point at 𝐮 = ⟨1/N,…,1/N⟩ that is stable when a > 0 and unstable when a < 0.
Then we have the following conjecture, which is proved for the case N = 3.

For all odd N, the one-dimensional spatial replicator equation,

u̇_i = u_i(𝐞_i^T𝐀_N𝐮 - 𝐮^T𝐀_N𝐮) + D∂^2 u_i/∂x^2,

admits a travelling wave solution when 𝐀_N is defined as above and a < 0.

We hypothesize that, as in the three strategy case (rock-paper-scissors), when a < 0 we have 𝐮^T𝐀_N𝐮 < 0, which implies a globally decreasing population.

Generalising our result to the higher order interaction case produces a surprising result. To generalise the interaction tensor to the case of N strategies, let Σ = {1,…,N} be the strategy set, let w(i) denote the set of strategies that are beaten by strategy i, and let l(i) be the set of strategies that beat strategy i. Then,

𝐁_N_i_jk =
  δ,  if j,k ∈ w(i);
  -γ, if j,k ∈ l(i);
  β,  if j = i and k ∈ w(i), or k = i and j ∈ w(i);
  -α, if j = i and k ∈ l(i), or k = i and j ∈ l(i);
  0,  if j ∈ w(i) and k ∈ w(j) and i ∈ w(k);
  0,  if j ∈ l(i) and k ∈ l(j) and i ∈ l(k);
  0,  if i = j = k.

The last three cases produce the 0 diagonals and the case when the three strategies form a winning/losing cycle (like rock, paper and scissors). As before, α, β, γ, δ > 0. When N = 3, we recover the higher order interaction matrices we have already studied.

Consider N = 5. Then evaluating at γ = 2β and δ = 2α gives,

f̅ = ∑_i u_i𝐮^T𝐁_N_i𝐮 = 4(α-β)(u_1u_2u_4 + u_1u_3u_4 + u_2u_5u_4 + u_1u_3u_5 + u_2u_3u_5).

This value is 0 if and only if α = β, and otherwise its sign is equivalent to sgn(α - β). Notice, the triples are composed of entries of the form u_iu_ju_k where j,k ∈ l(i) and therefore cannot occur in the case when N = 3, which is why f̅ = 0 in our prior analysis. Simple computation shows that the only way for f̅ to be zero is in the case when α = β. Further analysis shows that the eigenvalues of the Jacobian at the (unique) interior fixed point u_i = 1/5 (i=1,…,5) are,

λ_1 = (4/25)(β-α)
λ_2,3 = (1/25)(7(β-α) ± 5i(1+α+β)√(5+2√5))
λ_4,5 = (1/25)(7(β-α) ± 5i(1+α+β)√(5-2√5)).

Thus the interior fixed point is unstable just in case β > α, as in the three strategy case, which we conjecture will lead to a stable travelling wave solution in the infinite population spatial case. We summarize this in the following conjecture.

For all odd N, the one-dimensional higher order spatial replicator equation,

∂u_i/∂t = u_i(𝐞_i^T𝐀_N𝐮 - 𝐮^T𝐀_N𝐮 + 𝐮^T𝐁_N_i𝐮 - ∑_i=1^N u_i𝐮^T𝐁_N_i𝐮) + D∇^2 u_i,

admits a travelling wave solution when 𝐀_N and 𝐁_N_i (i=1,…,N) are defined as above and a = 0, γ = 2β, δ = 2α, and β > α.

We note that the computation of the first Lyapunov coefficient will most likely be the most difficult component of any proofs of the generalised conjectures. What is perhaps the most interesting aspect of this is the fact that higher order interactions as defined by <ref> seem to be able to simultaneously produce travelling wave solutions and maintain a constant population size only for the three strategy rock-paper-scissors game. While we do not rule out the possibility that a more complex interaction mechanism may be able to simultaneously accomplish this, it is surprising that this property seems to hold for only the three strategy case and thus may warrant additional study.

§ CONCLUSIONS AND FUTURE DIRECTIONS

In this paper, we merged the higher order interaction model first discussed by Griffin and Wu <cit.> with the spatial replicator equation model of Vickers <cit.> and the finite population spatial model of Griffin, Mummah and Deforest <cit.>.
For higher order interactions in rock-paper-scissors, we showed that travelling wave solutions exist in both the finite and infinite population cases, with the important model feature that the net population was stable (as opposed to declining). This suggests that if replicators are models of real-world cyclically interacting populations, then travelling waves in such populations can be explained by either a declining population count or the presence of higher order (i.e., non-pairwise) interactions. In discussing a generalisation of this approach, we provided two conjectures on the existence of travelling wave solutions in spatial replicator dynamics with an arbitrary odd number of strategies. Most interestingly, we found that the property of population size preservation and the existence of travelling wave solutions appears to be present only in the rock-paper-scissors game (three strategy case), with higher order interactions. Games with more than three strategies (e.g., rock-paper-scissors-Spock-lizard) seem to admit travelling wave solutions only when the total population is decreasing, and higher order interactions cannot remedy this.

Proving the conjectures raised in this paper is clearly an area of future work. We argue that stable Turing patterns will not be admitted by the infinite population spatial replicator with higher order interactions as defined in this paper. However, Griffin and Wu <cit.> show that a (subcritical) Hopf bifurcation can emerge in the aspatial higher order dynamics using a related but distinct payoff matrix and higher order interaction matrices. If parameter regimes exist where a supercritical Hopf bifurcation exists in the aspatial case, then a diffusion mediated transition from periodic solutions to asymptotically stable solutions may be possible, as in work on the Ginzburg-Landau equation <cit.> or in the work of Dilão <cit.>. Investigating this possibility is of significant interest for future work.

§ ACKNOWLEDGEMENTS

C.G. was supported in part by the National Science Foundation under grant CMMI-1932991. C.G. would also like to thank Andrew Belmonte for a useful discussion on this topic.

§ DATA AND CODE AVAILABILITY

Three Mathematica notebooks are provided as supplementary materials and contain the code needed to reproduce the images and theoretical derivations in this paper.

§ CONSTRUCTING THE FIRST LYAPUNOV COEFFICIENT

The approach outlined here is provided in <cit.> and is distilled from the detailed discussion in <cit.>. We begin by setting u_s = 1 - u_r - u_p and v_s = -v_r - v_p, since we can see that v_r + v_p + v_s = 0. Then <ref> reduces to four linearly independent equations. Let 𝐉_0 be the Jacobian of this reduced dimension system evaluated at the fixed point u_r = u_p = 1/3 and v_r = v_p = 0 and the special wave speed c = c^⋆ using the negative branch. Thus,

𝐉_0 = [ ξ/(√3 Dω)  (ξ - 3Dω^2)/(3D)  0  2ξ/(3D);
        1  0  0  0;
        0  -2ξ/(3D)  ξ/(√3 Dω)  -(3Dω^2 + ξ)/(3D);
        0  0  1  0 ]

As expected, this matrix has two pure imaginary eigenvalues of form ±ωi, where,

ω = √((β-α)/(3D)) and ξ = 1 + α + β.

Let 𝐪 be the normalized eigenvector of 𝐉_0 so that 𝐉_0𝐪 = iω𝐪 and let 𝐩 be the normalized eigenvector of 𝐉_0^T so that 𝐉_0^T𝐩 = -iω𝐩.
The values of these eigenvectors can be computed in terms of ω, ξ and D as,

𝐪 = ⟨ (√3/2 - i/2)ω/(√2√(ω^2+1)), (-1/2 + i√3/2)/(√2√(ω^2+1)), iω/(√2√(ω^2+1)), 1/(√2√(ω^2+1)) ⟩
𝐩 = Q⟨ Dω((1/2)√3(3Dω^2+ξ) + (3/2)i(Dω^2-ξ))/(3D^2ω^4+ξ^2), (1/2)(1 - i√3), Dω(√3ξ + 3iDω^2)/(3D^2ω^4+ξ^2), 1 ⟩,

where Q = √(3D^2ω^4+ξ^2)/(√2√(3D^2ω^2(ω^2+1)+ξ^2)).

For simplicity of notation, write the reduced dimension version of <ref> as,

η̇_i = f_i(η),

where η = ⟨u_1,v_1,u_2,v_2⟩ and for i=1,…,4, f_i is defined from <ref>. Let η_0 = ⟨1/3,0,1/3,0⟩ be the equilibrium point. Define 𝐁:ℝ^4×ℝ^4→ℝ^4 componentwise as,

B_i(𝐫,𝐬) = ∑_j,k [∂^2 f_i/∂η_j∂η_k]|_η=η_0 𝐫_j𝐬_k.

Define 𝐂:ℝ^4×ℝ^4×ℝ^4→ℝ^4 componentwise as,

C_i(𝐫,𝐬,𝐰) = ∑_j,k,l [∂^3 f_i/∂η_j∂η_k∂η_l]|_η=η_0 𝐫_j𝐬_k𝐰_l.

Lastly, define the complex inner product,

⟨𝐫,𝐬⟩ = ∑_k 𝐫̅_k𝐬_k,

where z̅ denotes the complex conjugate of z. Then ℓ_1 is computed as,

ℓ_1 = (1/2ω)Re[ ⟨𝐩,𝐂(𝐪,𝐪,𝐪̅)⟩ + ⟨𝐩,𝐁[𝐪̅,(2iω𝐈 - 𝐉_0)^-1𝐁(𝐪,𝐪)]⟩ - 2⟨𝐩,𝐁[𝐪,𝐉_0^-1𝐁(𝐪,𝐪̅)]⟩ ].

Here, 𝐈 is an identity matrix of appropriate size. Using Mathematica (see SI), it is straightforward to compute that,

⟨𝐩,𝐂(𝐪,𝐪,𝐪̅)⟩ = 0
2⟨𝐩,𝐁[𝐪,𝐉_0^-1𝐁(𝐪,𝐪̅)]⟩ = 0,

leaving only the term,

⟨𝐩,𝐁[𝐪̅,(2iω𝐈 - 𝐉_0)^-1𝐁(𝐪,𝐪)]⟩,

to be evaluated. A human assisted computation with Mathematica (see SI) yields the expression,

ℓ_1 = -3√3(-3D^2ω^4 + 6Dξω^2 + ξ^2)/(4√((ω^2+1)^3(3D^2ω^4+ξ^2)(3D^2ω^2(ω^2+1)+ξ^2))).

To prove this value is always negative, and thus that the limit cycle is always attracting, it suffices to show that,

-3√3(-3D^2ω^4 + 6Dξω^2 + ξ^2) < 0,

for all allowable parameters. Computation is easier in terms of α, β and D at this point. When we substitute in their definitions, we obtain the inequality,

√3(4α^2 - 8αβ - 4β(2β+3) - 3) ≤ 0.

We can rewrite this as,

√3[-3 + 4(β-α)^2 - 12β(1+β)] ≤ 0,

which implies,

(β-α)^2 ≤ (12β(1+β)+3)/4.

Solving the inequality for α yields the requirement that,

β - (√3/2)√(1+4β+4β^2) ≤ α ≤ β + (√3/2)√(1+4β+4β^2).

We know by our assumptions that 0 < α < β. Therefore, it suffices to show that the left-hand side of the inequality is always less than zero. To prove this, note that for all β > 0 we have,

0 < β < β + (√3/2)√(1+4β+4β^2).

Thus multiplying the left and right-hand sides of <ref> yields,

(β - (√3/2)√(1+4β+4β^2))(β + (√3/2)√(1+4β+4β^2)) = -(2β^2 + 3β + 3/4) < 0.

Therefore, it follows that

β - (√3/2)√(1+4β+4β^2) < 0

and for all 0 < α < β, ℓ_1 < 0 and thus the limit cycle is always attracting.
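As a quick numerical sanity check on the closed form above, the following Python sketch evaluates ℓ_1 for the parameter values used in the earlier travelling-wave illustration (D = 1/10, α = 1, β = 3/2); the result is negative, as the proof guarantees.

    # Evaluate the first Lyapunov coefficient from its closed form, with
    # omega = sqrt((beta - alpha)/(3 D)) and xi = 1 + alpha + beta.
    from math import sqrt

    def first_lyapunov(alpha, beta, D):
        w = sqrt((beta - alpha) / (3.0 * D))
        xi = 1.0 + alpha + beta
        num = -3.0 * sqrt(3.0) * (6.0 * D * xi * w**2 + xi**2 - 3.0 * D**2 * w**4)
        den = 4.0 * sqrt((w**2 + 1.0)**3
                         * (3.0 * D**2 * w**4 + xi**2)
                         * (3.0 * D**2 * w**2 * (w**2 + 1.0) + xi**2))
        return num / den

    print(first_lyapunov(alpha=1.0, beta=1.5, D=0.1))  # < 0: attracting cycle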
http://arxiv.org/abs/2312.16722v1
{ "authors": [ "Christopher Griffin", "Li Feng", "Rongling Wu" ], "categories": [ "nlin.PS" ], "primary_category": "nlin.PS", "published": "20231227212009", "title": "Spatial Dynamics of Higher Order Rock-Paper-Scissors and Generalisations" }
Sliding Mode Control for 3-D Uncalibrated and Constrained Vision-based Shape Servoing within Input Saturation

Fangqing Chen^* University of Toronto
Copyright may be transferred without notice, after which this version may no longer be accessible. ^* Corresponding Author.
January 14, 2024
========================================================================

This paper designs a servo control system based on sliding mode control for the shape control of elastic objects. In order to handle the effect of non-smooth and asymmetric control saturation, a Gaussian-based continuously differentiable asymmetric saturation function is used. The proposed approach runs in a highly real-time manner. Meanwhile, sliding mode control is used to couple the estimation stability of the deformation Jacobian matrix with the stability of the controller, which verifies the control stability of the closed-loop system including the estimation. Besides, an integral sliding mode function is designed to avoid the need for second-order derivatives of variables, which enhances the robustness of the system in practical situations. Finally, Lyapunov theory is used to prove the uniform ultimate boundedness of all variables of the system.

Robotics, Shape-servoing, Asymmetric saturation, Sliding Mode Control, Deformable objects

§ INTRODUCTION

Deformable object manipulation (DOM) receives considerable attention in the field of robotics, and its applications can be seen everywhere, such as industrial processing <cit.>, medical surgery <cit.>, furniture services <cit.>, and item packaging <cit.>. However, although a lot of research has been done on DOM, a complete manipulation framework has not yet been formed <cit.>. The biggest difficulty is that the complex and unknown physical characteristics of the deformable linear object (DLO) are hard to obtain in the real application environment <cit.>. Currently, methods targeting DOM are broadly divided into learning-based and cybernetics-based approaches. This article approaches DOM from a control perspective. To the best of our knowledge, this is the first attempt to design an SMC-based manipulation framework for DLOs that considers the non-symmetric saturation control issue <cit.>, which helps to obtain more explainable behaviour in physical applications <cit.>.

The key contributions of this paper are two-fold:

* Construction of the Unified Shape Manipulation: This paper forms a unified manipulation framework from the control-based viewpoint. The core modules are constructed in three parts: detection/extraction, approximation, and shape servoing control. The proposed manipulation runs in a model-free manner and does not need any prior knowledge of the system model.

* Consideration of the Non-symmetric Saturation: This paper considers the common asymmetric and non-smooth control input saturation problems in practical applications and, by introducing a Gaussian saturation function, avoids the control input discontinuity problem caused by the traditional use of hard saturation measures.

§ RELATED WORK

§.§ Input Saturation

In the real application environment, the input saturation issue that occurs in control should be addressed to improve the system performance <cit.>. Much research has been conducted on the nonlinear saturation problem in recent years, and it is well addressed in <cit.>.
For a detailed survey of control input saturation, we refer the readers to <cit.>. Input saturation of plants with uncertain models has been considered in developing anti-windup schemes in <cit.>. Some state-of-the-art methods are introduced in <cit.>, and a unified framework incorporating the various existing anti-windup schemes is presented in <cit.>. Model predictive control (MPC) plays the most important role in handling input, output, or state constraints <cit.>. However, the limitation of MPC is also obvious: it needs many online iterations, so it is not suitable for tasks that require high real-time performance <cit.>. The MPC optimization must be completed between two sampling instants, which increases the computational burden for real-time control <cit.>. Furthermore, it should be noted that one critical assumption made in most of the above research is that the actuator saturation is symmetric <cit.>. The symmetry of the saturation function to a large degree simplifies the analysis of the closed-loop system <cit.>. In this brief, we will investigate a sliding mode control-based manipulation framework that also considers the nonlinear asymmetric saturation issue.

§ PROBLEM FORMULATION

Notation: In this paper, we use the following frequently-used notation: bold small letters, e.g., 𝐦, denote column vectors, while bold capital letters, e.g., 𝐌, denote matrices.

§.§ Robot-Manipulation Model

Consider a kinematic-controlled 6-DOF robot manipulator (i.e., the underlying controller can precisely execute the given speed command); 𝐪 ∈ ℝ^6 and 𝐫 = 𝐟_r(𝐪) ∈ ℝ^6 denote the robot's joint angles and the end-effector's pose, respectively, where 𝐟_r(𝐪) is the forward kinematics of the robot. Therefore, the standard velocity Jacobian matrix 𝐉_r of the manipulator can be obtained as follows:

𝐫̇ = 𝐉_r(𝐪)𝐪̇

where 𝐉_r(𝐪) ∈ ℝ^6×6 is assumed to be exactly known. In addition, we assume that the robot does not collide with the environment or itself during the manipulation process, i.e., collision avoidance is not within the scope of this article <cit.>.

§.§ Visual-Deformation Model

In this article, the centerline configuration of the object is defined as follows:

𝐜̅ = [𝐜_1^T,…,𝐜_N^T]^T ∈ ℝ^2N, 𝐜_i = [c_xi, c_yi]^T ∈ ℝ^2

where N is the number of points comprising the centerline, and 𝐜_i for i=1,…,N are the pixel coordinates of the i-th point represented in the camera frame. In this article, the object is assumed to be tightly attached to the end-effector beforehand, without sliding or falling off during the movement; that is, the object grasping issue is not considered in this work. Slight movements of the end-effector pose 𝐫 cause changes in the shape 𝐜̅ of the object. This complex nonlinear relationship is given as:

𝐜̅ = 𝐟_c(𝐫) = 𝐟_c(𝐟_r(𝐪))

Note that the dimension 2N of the observed centerline 𝐜̅ is generally large, thus it is inefficient to directly use it in a shape controller as it contains redundant information. In this work, we only outline the general concept, i.e., shape feature extraction. This module aims to extract efficient features 𝐬 from the original data space and then map them into a low-dimensional feature space. The relation between the robot's joint angles and such a shape descriptor is modeled as follows:

𝐬 = 𝐟_s(𝐜̅) = 𝐟_s(𝐟_c(𝐟_r(𝐪)))

Taking the derivative of (<ref>) with respect to 𝐪, we obtain the first-order dynamic model:

𝐬̇ = (∂𝐟_s/∂𝐪)𝐪̇ = 𝐉_s(𝐫)𝐪̇

where 𝐉_s(𝐫) ∈ ℝ^p×6 is termed the deformation Jacobian matrix (DJM), which relates the velocity of the joint angles with the shape feature changes.
As the physical knowledge of flexible objects is hard to obtain in a practical environment, traditional analytical methods are not applicable for the calculation of the DJM here. To this end, approximation methods are used to estimate the DJM online for the real-time environment. The quasi-static model (<ref>) holds when the material properties of the objects do not change significantly during manipulations, as 𝐉_s(𝐫) represents the velocity mapping between the objects and the robot motions. The nonlinear kinematic relationship between 𝐬 and 𝐪 can be seen as a special form of the traditional standard robot kinematic Jacobian mapping, which captures the perspective geometry of the object's boundary points. Note that the deformations of elastic objects (not considering rheological objects) are related only to the object's own potential energy and the contact force with the manipulator, and are independent of the manipulation sequence of the manipulator. These conditions guarantee that (<ref>) holds.

§.§ Input-Saturation Model

So far, the system has been treated as free of input saturation, i.e., the maximum velocity of the robot has not been considered. Although this assumption can simplify the design process of the system and reduce the algorithmic complexity, it affects the stability and deformation accuracy of the system, and the existence of DJM estimation error could lead to control failure. Input saturation, as the most critical non-smooth nonlinearity, should be explicitly considered in the control design. Thus, in this section we propose a sliding mode deformation control for the uncertain nonlinear system with estimation error and unknown non-symmetric input saturation. The control input is 𝐮 = [u_1,…,u_q]^T ∈ ℝ^q. For simplicity, we define 𝐮 = 𝐪̇ in the following sections. Most past articles adopted a hard-saturation manner, i.e., the influence of input saturation is not considered in the theoretical analysis. However, in the real environment, the speed of the robot is bounded. Therefore, we need to consider the speed limit of the manipulator in the actual manipulation to improve the stability of the system, which is very important for the manipulation of deformable objects. The input saturation is defined as follows:

u_i = { u_i^max, if v_i ≥ u_i^max; v_i, if u_i^min < v_i < u_i^max; u_i^min, if v_i ≤ u_i^min }, for i=1,…,6

where 𝐮 = [u_1,…,u_6]^T ∈ ℝ^6 is the system input and the saturation output, and 𝐯 = [v_1,…,v_6]^T ∈ ℝ^6 is the actually designed control input. u_i^max, u_i^min are the known joint angular velocity limits.

§.§ Mathematical Properties

Before furthering our control design, some useful properties are given here.

<cit.> The inequality 0 ≤ |x| - x tanh(x/ε) ≤ δε holds for any ε > 0 and for any x ∈ ℝ, where δ = 0.2785 is a constant that satisfies δ = e^-(δ+1).

<cit.> The Gauss error function erf(x) is a nonelementary function of sigmoid shape, which is defined as:

erf(x) = (2/√π)∫_0^x e^(-t^2) dt

where erf(x) is a real-valued and continuously differentiable function; it has no singularities (except that at infinity) and its Taylor expansion always converges.

The DJM is composed of an estimated value 𝐉̂_s(t) and an approximation error 𝐉̃_s(𝐫,t), i.e., 𝐉_s(𝐫) = 𝐉̂_s(t) + 𝐉̃_s(𝐫,t).

The disturbance 𝐝 = 𝐉̂_s𝐮̃ + 𝐉̃_s𝐮 is bounded by an unknown positive constant, ‖𝐝‖ ≤ η_1.

The disturbance 𝐉̇̂̇_s𝐮̃ is bounded by an unknown positive constant, ‖𝐉̇̂̇_s𝐮̃‖ ≤ η_2.

As a slow deformation speed is considered in this paper, it is assumed that 𝐮̇̃̇ = 𝟎_6.
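The lemma above is easy to probe numerically. The following Python sketch checks the bound 0 ≤ |x| - x·tanh(x/ε) ≤ δε on a dense grid for several values of ε, with δ = 0.2785.

    # Numerical check of the lemma 0 <= |x| - x*tanh(x/eps) <= delta*eps,
    # where delta = 0.2785 satisfies delta = e^{-(delta + 1)}.
    import numpy as np

    delta = 0.2785
    for eps in (0.1, 0.5, 2.0):
        x = np.linspace(-10.0, 10.0, 200001)
        gap = np.abs(x) - x * np.tanh(x / eps)
        assert gap.min() >= -1e-12                  # lower bound (round-off aside)
        assert gap.max() <= delta * eps + 1e-9      # upper bound
        print(f"eps={eps}: max gap {gap.max():.6f} <= {delta*eps:.6f}")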
§ CONTROLLER DESIGN

The traditional input-saturation model (<ref>) is a discontinuous function. If this saturation model is adopted directly, it will affect the stability of the system and can easily damage the actuator in practical applications. In this article, we utilize the model in <cit.> to describe the saturation nonlinearity. Referring to Definition <ref>, the asymmetric input-saturation model can be transformed into the following smooth format:

u_i(v_i) = u_mi × erf(√π v_i/(2u_mi)), for i=1,…,6,
u_mi = (u_i^max + u_i^min)/2 + ((u_i^max - u_i^min)/2) × sgn(v_i)

where sgn(·) is the sign function. Fig. <ref> shows the conceptual Gauss-saturation model in the following case:

v(t) = 10sin(2t), u^max = 5, u^min = -6

where v(t) is the original control input without any saturation processing. The saturation error function is constructed as follows:

𝐮̃ = 𝐮 - 𝐯

where 𝐮 and 𝐯 are functions of time t. Considering Assumption <ref>, the input-saturation model (<ref>), and the differential equation (<ref>), the following anti-saturation model is obtained:

𝐬̇ = 𝐉_s𝐮 = (𝐉̂_s + 𝐉̃_s)(𝐯 + 𝐮̃) = 𝐉̂_s𝐯 + 𝐝

where 𝐝 = 𝐉̂_s𝐮̃ + 𝐉̃_s𝐮 is the total disturbance including the approximation error 𝐉̃_s and the saturation error 𝐮̃. Define the deformation errors:

𝐞_1 = 𝐬 - 𝐬_d, 𝐞_2 = 𝐬̇ - 𝐉̂_s𝐮

where 𝐬_d is the desired shape feature. The derivatives of (<ref>) with respect to the time variable t are:

𝐞̇_1 = 𝐉̂_s𝐯 + 𝐝 - 𝐬̇_d
𝐞̇_2 = 𝐬̈ - 𝐉̇̂̇_s𝐯 - 𝐉̂_s𝐯̇ - 𝐉̇̂̇_s𝐮̃ - 𝐉̂_s𝐮̇̃̇

Considering Assumption <ref>, 𝐞̇_2 is transformed into:

𝐞̇_2 = 𝐬̈ - 𝐉̇̂̇_s𝐯 - 𝐉̂_s𝐯̇ - 𝐉̇̂̇_s𝐮̃

The integral sliding surfaces are constructed as follows <cit.>:

σ_1 = 𝐞_1 - 𝐞_1(0) + ∫_0^t 𝐞_1(τ)dτ
σ_2 = 𝐞_2 - 𝐞_2(0) + ∫_0^t 𝐞_2(τ)dτ

Combining with (<ref>) and (<ref>), the time derivatives of σ_1 and σ_2 are:

σ̇_1 = 𝐉̂_s𝐯 + 𝐝 - 𝐬̇_d + 𝐞_1
σ̇_2 = 𝐬̈ - 𝐉̇̂̇_s𝐯 - 𝐉̂_s𝐯̇ - 𝐉̇̂̇_s𝐮̃ + 𝐞_2

and we design the velocity control input as follows:

𝐯 = 𝐉̂_s^+(-σ_1 + 𝐬̇_d - 𝐞_1 + Θ_1)
Θ_1 = -η̂_1 tanh(‖σ_1‖/ε_1)σ_1/‖σ_1‖

where 𝐉̂_s^+ denotes the pseudo-inverse of the matrix 𝐉̂_s. Since there is no power term or sign function, 𝐯 is continuous without chattering. η̂_1 is updated as:

η̇̂̇_1 = tanh(‖σ_1‖/ε_1)‖σ_1‖ - γ_1η̂_1

To quantify the shape deformation error, we introduce the quadratic function V_1(σ_1) = (1/2)σ_1^Tσ_1, whose time-derivative satisfies:

V̇_1 = σ_1^T(𝐉̂_s𝐯 + 𝐝 - 𝐬̇_d + 𝐞_1) = -‖σ_1‖^2 - η̂_1 tanh(‖σ_1‖/ε_1)‖σ_1‖ + σ_1^T𝐝

From Assumption <ref> and the Cauchy-Schwarz inequality, we can obtain the following relation:

σ_1^T𝐝 ≤ ‖σ_1‖‖𝐝‖ ≤ η_1‖σ_1‖

Substitution of (<ref>) into (<ref>) yields:

V̇_1 ≤ -‖σ_1‖^2 - η̂_1 tanh(‖σ_1‖/ε_1)‖σ_1‖ + η_1‖σ_1‖

Following a similar manner, the adaptive update rule of the DJM is computed as:

𝐉̇̂̇_s = (𝐬̈ - 𝐉̂_s𝐯̇ + σ_2 + 𝐞_2 + Θ_2)𝐯^+
Θ_2 = η̂_2 tanh(‖σ_2‖/ε_2)σ_2/‖σ_2‖

where the adaptive update rule of η̂_2 is designed as follows:

η̇̂̇_2 = tanh(‖σ_2‖/ε_2)‖σ_2‖ - γ_2η̂_2

where ε_1, ε_2, γ_1, γ_2 are positive constants. Θ_2 is used to compensate for the effect of the saturation error 𝐮̃ on the estimated value 𝐉̂_s.
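For completeness, the smooth asymmetric saturation above can be evaluated directly. The Python sketch below reproduces the conceptual case from the text, v(t) = 10sin(2t) with u^max = 5 and u^min = -6; it illustrates only the saturation formula, not the full controller.

    # Gauss-error-function saturation: u(v) = u_m * erf(sqrt(pi) v / (2 u_m)),
    # with u_m = (u_max + u_min)/2 + sgn(v) (u_max - u_min)/2.
    import math

    def gauss_sat(v, u_max, u_min):
        sgn = 1.0 if v >= 0.0 else -1.0
        u_m = 0.5 * (u_max + u_min) + 0.5 * (u_max - u_min) * sgn
        return u_m * math.erf(math.sqrt(math.pi) * v / (2.0 * u_m))

    for t in (0.0, 0.4, 0.8, 1.2, 1.6, 2.0):
        v = 10.0 * math.sin(2.0 * t)
        print(f"t={t:.1f}  v={v:+7.3f}  u={gauss_sat(v, 5.0, -6.0):+6.3f}")

Note that the output approaches u^max = 5 for large positive v and u^min = -6 for large negative v, with unit slope near the origin, so the saturation is smooth yet asymmetric.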
The quadratic function V_2(σ_2) = (1/2)σ_2^Tσ_2 has a time-derivative satisfying:

V̇_2 = σ_2^T(𝐬̈ - 𝐉̇̂̇_s𝐯 - 𝐉̂_s𝐯̇ - 𝐉̇̂̇_s𝐮̃ + 𝐞_2) = -‖σ_2‖^2 - η̂_2 tanh(‖σ_2‖/ε_2)‖σ_2‖ - σ_2^T𝐉̇̂̇_s𝐮̃

From Assumption <ref> and the Cauchy-Schwarz inequality, we can obtain the following relation:

σ_2^T𝐉̇̂̇_s𝐮̃ ≤ ‖σ_2‖‖𝐉̇̂̇_s𝐮̃‖ ≤ η_2‖σ_2‖

Substitution of (<ref>) into (<ref>) yields:

V̇_2 ≤ -‖σ_2‖^2 - η̂_2 tanh(‖σ_2‖/ε_2)‖σ_2‖ + η_2‖σ_2‖

Consider the closed-loop shape servoing system (<ref>) with Assumptions <ref>-<ref>, the input-saturation model (<ref>), the velocity controller (<ref>), and the DJM estimation law (<ref>), with adaptation laws (<ref>) (<ref>). For a given desired feature vector 𝐬_d, there exists an appropriate set of control parameters that ensures that:

* All signals in the closed-loop system remain uniformly ultimately bounded (UUB);
* The deformation error 𝐞_1 asymptotically converges to a compact set around zero.

Consider the energy-like function

V(σ_1, σ_2, η̃_1, η̃_2) = V_1(σ_1) + V_2(σ_2) + η̃_1^2/2 + η̃_2^2/2

where η̃_1 = η_1 - η̂_1 and η̃_2 = η_2 - η̂_2 are the approximation errors of the constants η_1 and η_2, respectively. Thus, the time-derivative of (<ref>) is computed as:

V̇ = V̇_1(σ_1) + V̇_2(σ_2) - η̃_1η̇̂̇_1 - η̃_2η̇̂̇_2

Invoking (<ref>) (<ref>), we can show that the time derivative of V satisfies:

V̇ ≤ -‖σ_1‖^2 - ‖σ_2‖^2 - η̃_1η̇̂̇_1 - η̃_2η̇̂̇_2 - η̂_1 tanh(‖σ_1‖/ε_1)‖σ_1‖ + η_1‖σ_1‖ - η̂_2 tanh(‖σ_2‖/ε_2)‖σ_2‖ + η_2‖σ_2‖

Thus, we can obtain:

V̇ ≤ -‖σ_1‖^2 - ‖σ_2‖^2 - η̃_1(η̇̂̇_1 - tanh(‖σ_1‖/ε_1)‖σ_1‖) - η̃_2(η̇̂̇_2 - tanh(‖σ_2‖/ε_2)‖σ_2‖) + η_1(‖σ_1‖ - ‖σ_1‖tanh(‖σ_1‖/ε_1)) + η_2(‖σ_2‖ - ‖σ_2‖tanh(‖σ_2‖/ε_2))

Referring to Lemma <ref>, it yields

V̇ ≤ -‖σ_1‖^2 - ‖σ_2‖^2 + η_1δε_1 + η_2δε_2 + η̃_1(‖σ_1‖tanh(‖σ_1‖/ε_1) - η̇̂̇_1) + η̃_2(‖σ_2‖tanh(‖σ_2‖/ε_2) - η̇̂̇_2)

By the well-known Young's inequality, it holds that

η̃_1η̂_1 ≤ (1/2)η_1^2 - (1/2)η̃_1^2, η̃_2η̂_2 ≤ (1/2)η_2^2 - (1/2)η̃_2^2

Considering the adaptive rules (<ref>) (<ref>), it yields

V̇ ≤ -‖σ_1‖^2 - ‖σ_2‖^2 - (γ_1/2)η̃_1^2 - (γ_2/2)η̃_2^2 + (γ_1/2)η_1^2 + (γ_2/2)η_2^2 + η_1δε_1 + η_2δε_2 ≤ -aV + b

where a = min(2, γ_1, γ_2) and b = (γ_1/2)η_1^2 + (γ_2/2)η_2^2 + δ(η_1ε_1 + η_2ε_2). By selecting appropriate γ_1 and γ_2, which ensure that a > 0, the deformation error 𝐞_1 asymptotically converges to a compact set around zero, and the estimation error 𝐉̃_s remains bounded.
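To indicate how the pieces above could fit together in a digital implementation, the following schematic Python loop discretizes the sliding surface σ_1, the adaptive gain η̂_1, and the velocity controller with a simple Euler scheme. It is a sketch under stated assumptions, not the paper's implementation: the shape measurement and the online DJM estimate are abstracted behind arguments provided by hypothetical external modules, and the commanded velocity would still be passed through the Gauss saturation before execution.

    # Schematic discrete-time outer loop (Euler integration). `s`, `s_d`,
    # `s_d_dot`, and `J_hat` are assumed to be supplied at each sampling
    # instant by the vision pipeline and the online DJM estimator.
    import numpy as np

    def control_step(s, s_d, s_d_dot, J_hat, state, eps1=0.05, gamma1=0.1, dt=0.01):
        e1 = s - s_d
        state["int_e1"] += e1 * dt                      # running integral of e_1
        sigma1 = e1 - state["e1_0"] + state["int_e1"]   # integral sliding surface
        ns = np.linalg.norm(sigma1) + 1e-9              # avoid division by zero
        theta1 = -state["eta1"] * np.tanh(ns / eps1) * sigma1 / ns
        v = np.linalg.pinv(J_hat) @ (-sigma1 + s_d_dot - e1 + theta1)
        # adaptive gain: eta1_dot = tanh(||sigma1||/eps1)*||sigma1|| - gamma1*eta1
        state["eta1"] += dt * (np.tanh(ns / eps1) * ns - gamma1 * state["eta1"])
        return v

    p = 6  # example feature dimension
    state = {"int_e1": np.zeros(p), "e1_0": np.zeros(p), "eta1": 0.0}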
http://arxiv.org/abs/2312.16048v1
{ "authors": [ "Fangqing Chen" ], "categories": [ "cs.RO" ], "primary_category": "cs.RO", "published": "20231226132954", "title": "Sliding Mode Control for 3-D Uncalibrated and Constrained Vision-based Shape Servoing within Input Saturation" }
liangdu@gxnu.edu.cn lihuang.dmft@gmail.com College of Physics and Technology, Guangxi Normal University, Guilin, Guangxi 541004, China

The Bayesian reconstruction entropy is considered an alternative to the Shannon-Jaynes entropy, as it does not exhibit the asymptotic flatness characteristic of the Shannon-Jaynes entropy and obeys scale invariance. It is commonly utilized in conjunction with the maximum entropy method to derive spectral functions from Euclidean time correlators produced by lattice QCD simulations. This study expands the application of the Bayesian reconstruction entropy to the reconstruction of spectral functions for Matsubara or imaginary-time Green's functions in quantum many-body physics. Furthermore, it extends the Bayesian reconstruction entropy to implement the positive-negative entropy algorithm, enabling the analytic continuations of matrix-valued Green's functions in an element-wise manner. Both the diagonal and off-diagonal components of the matrix-valued Green's functions are treated equally. Benchmark results for the analytic continuations of synthetic Green's functions indicate that the Bayesian reconstruction entropy, when combined with the preblur trick, demonstrates comparable performance to the Shannon-Jaynes entropy. Notably, it exhibits greater resilience to noise in the input data, particularly when the noise level is moderate.

Combining Bayesian reconstruction entropy with maximum entropy method for analytic continuations of matrix-valued Green's functions
Li Huang
January 14, 2024
========================================================================

§ INTRODUCTION

In quantum many-body physics, the single-particle Green's function holds significant importance <cit.>. It is typically computed using numerical methods such as quantum Monte Carlo <cit.> and the functional renormalization group <cit.>, or many-body perturbative techniques like the random phase approximation <cit.> and the GW approximation <cit.>, within the framework of the imaginary time or Matsubara frequency formulation. This quantity provides valuable insights into the dynamic response of the system. However, it cannot be directly compared to real-frequency experimental observables. Therefore, it is necessary to convert the calculated values to real frequencies, which presents the fundamental challenge of the analytic continuation problem. In theory, the retarded Green's function G^R(ω) and the spectral function A(ω) can be derived from the imaginary-time Green's function G(τ) or the Matsubara Green's function G(iω_n) by performing an inverse Laplace transformation <cit.>. Due to the ill-conditioned nature of the transformation kernel, solving this problem directly is nearly impossible. In fact, the output of this inverse problem is highly sensitive to the input. Even minor fluctuations or noise in G(τ) or G(iω_n) can result in nonsensical changes in A(ω) or G^R(ω), making analytic continuations extremely unstable.

In recent decades, numerous techniques have been developed to address the challenges associated with analytic continuation problems. These methods include the Padé approximation (PA) <cit.>, the maximum entropy method (MaxEnt) <cit.>, stochastic analytic continuation (SAC) <cit.>, the stochastic optimization method (SOM) <cit.>, the stochastic pole expansion (SPX) <cit.>, sparse modeling (SpM) <cit.>, Nevanlinna analytical continuation (NAC) <cit.>, causal projections <cit.>, Prony fits <cit.>, machine learning assisted approaches <cit.>, and so on.
Each method has its own advantages and disadvantages. There is no doubt that the MaxEnt method is particularly popular due to its ability to maintain a balance between computational efficiency and accuracy <cit.>. Several open source toolkits, such as <cit.>, <cit.>, the toolkit by Kraberger et al. <cit.>, the toolkit by Levy et al. <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>, have been developed to implement this method. The MaxEnt method is rooted in Bayesian inference <cit.>. Initially, the spectral function A(ω) is constrained to be non-negative and is interpreted as a probability distribution. Subsequently, the MaxEnt method endeavors to identify the most probable spectrum by minimizing a functional Q[A], which comprises the misfit functional L[A] and the entropic term S[A] [see Eq. (<ref>)]. Here, L[A] quantifies the difference between the input and reconstructed Green's function, while S[A] is employed to regularize the spectrum <cit.>. The MaxEnt method is well-established for the diagonal components of matrix-valued Green's functions, as the corresponding spectral functions are non-negative (for fermionic systems). However, the non-negativity of the spectral functions cannot be assured for the off-diagonal components of matrix-valued Green's functions. As a result, the conventional MaxEnt method fails in this situation. Currently, there is a growing emphasis on obtaining the full matrix representation of the Green's function (or the corresponding self-energy function) on the real axis <cit.>. This is necessary for calculating lattice quantities of interest in Dyson's equation <cit.>. Therefore, there is a strong demand for a reliable method to perform analytic continuation of the entire Green's function matrix <cit.>. To meet this requirement, several novel approaches have been suggested in recent years as enhancements to the conventional MaxEnt method. The three most significant advancements are as follows: (1) Auxiliary Green's function algorithm. It is possible to create an auxiliary Green's function by combining the off-diagonal and diagonal elements of the matrix-valued Green's function to ensure the positivity of the auxiliary spectrum <cit.>. The analytic continuation of the auxiliary Green's function can then be carried out using the standard MaxEnt method. (2) Positive-negative entropy algorithm. Kraberger et al. proposed that the off-diagonal spectral functions can be seen as a subtraction of two artificial positive functions. They extended the entropic term to relax the non-negativity constraint imposed in the standard MaxEnt method <cit.>. This enables simultaneous treatment of both the diagonal and off-diagonal elements, thereby restoring crucial constraints on the mathematical properties of the resulting spectral functions, including positive semi-definiteness and Hermiticity. (3) Maximum quantum entropy algorithm. Recently, Sim and Han generalized the MaxEnt method by reformulating it with quantum relative entropy, maintaining the Bayesian probabilistic interpretation <cit.>. The matrix-valued Green's function is directly continued as a single object without any further approximation or ad-hoc treatment <cit.>. Although various algorithms have been proposed to enhance the usefulness of the MaxEnt method <cit.>, further improvements in algorithms and implementations are always beneficial. As mentioned above, the essence of the MaxEnt method lies in the definition of the entropic term S[A].
Typically, it takes the form of the Shannon-Jaynes entropy (dubbed S_SJ) in the realm of quantum many-body physics <cit.>. However, alternative forms such as the Tikhonov regularization <cit.>, the positive-negative entropy <cit.>, and the quantum relative entropy <cit.> are also acceptable. It is worth noting that in the context of lattice QCD simulations, the extraction of spectral functions from Euclidean time correlators is of particular importance. The MaxEnt method is also the primary tool for the analytic continuation of lattice QCD data <cit.>. Burnier and Rothkopf introduced an enhanced dimensionless prior distribution, known as the Bayesian reconstruction entropy (dubbed S_BR) <cit.>, which demonstrates superior asymptotic behavior compared to S_SJ <cit.>. They discovered that by using S_BR in conjunction with the MaxEnt method, a significant improvement in the reconstruction of spectral functions from Euclidean time correlators can be achieved. They later extended S_BR to support Bayesian inference of non-positive spectral functions in quantum field theory <cit.>. However, to the best of our knowledge, the application of S_BR has so far been limited to post-processing of lattice QCD data. Hence, some questions naturally arise. How effective is S_BR for solving analytic continuation problems in quantum many-body simulations? Is it truly superior to S_SJ? Keeping these questions in mind, the primary objective of this study is to broaden the application scope of S_BR. We first verify whether S_BR can handle Matsubara (or imaginary-time) Green's functions. Then, we generalize S_BR to implement the positive-negative entropy algorithm <cit.> for analytic continuations of matrix-valued Green's functions. The simulated results indicate that S_BR works quite well in transforming imaginary-time or Matsubara Green's function data to real frequency, regardless of whether the Green's function is matrix-valued or not. We find that S_BR has a tendency to sharpen the peaks in spectral functions, but this shortcoming can be largely overcome by the preblur trick <cit.>. Overall, the performance of S_BR is comparable to that of S_SJ <cit.> for the examples involved. The rest of this paper is organized as follows. Section <ref> introduces the spectral representation of the single-particle Green's function. Furthermore, it explains the fundamental formalisms of the MaxEnt method, the Bayesian reconstruction entropy, and the principle of the positive-negative entropy algorithm. Section <ref> is dedicated to various benchmark examples, including analytic continuation results for synthetic single-band Green's functions and multi-orbital matrix-valued Green's functions. The spectral functions obtained by S_SJ and S_BR are compared with the exact spectra. In Section <ref>, we further examine the preblur trick and the auxiliary Green's function algorithm for S_SJ and S_BR. The robustness of S_BR with respect to noisy Matsubara data is discussed and compared with that of S_SJ. A concise summary is presented in Section <ref>. Finally, Appendix <ref> introduces the goodness-of-fit functional. The detailed mathematical derivations for S_SJ and S_BR are provided in Appendices <ref> and <ref>, respectively. § METHOD §.§ Spectral representation of single-particle Green's function It is well known that the single-particle imaginary-time Green's function G(τ) can be defined as follows: G(τ) = ⟨𝒯_τ d(τ) d^†(0)⟩, where τ is the imaginary time, 𝒯_τ denotes the time-ordering operator, and d (d^†) denotes the annihilation (creation) operator <cit.>.
Given G(τ), the Matsubara Green's function G(iω_n) can be constructed via direct Fourier transformation: G(iω_n) = ∫^β_0 dτ e^-iω_n τ G(τ), where β is the inverse temperature of the system (β ≡ 1/T), and ω_n is the Matsubara frequency. Note that ω_n is equal to (2n+1)π/β for fermions and 2nπ/β for bosons (n is an integer). Let us assume that the spectral function of the single-particle Green's function is A(ω); then the spectral representations of G(τ) and G(iω_n) read: G(τ) = ∫^+∞_-∞ K(τ,ω) A(ω) dω, and G(iω_n) = ∫^+∞_-∞ K(iω_n,ω) A(ω) dω, respectively. Here, K(τ,ω) and K(iω_n,ω) are called the kernel functions. Their definitions are as follows: K(τ,ω) = e^-τω/(1 ± e^-βω), and K(iω_n,ω) = 1/(iω_n - ω). On the right-hand side of Eq. (<ref>), + is for fermionic correlators and - is for bosonic correlators <cit.>. In the subsequent discussion, our focus will be on fermionic correlators. We observe that Eqs. (<ref>) and (<ref>) are classified as Fredholm integral equations of the first kind. When A(ω) is given, it is relatively straightforward to compute the corresponding G(τ) and G(iω_n) by numerical integration methods. However, the inverse problem of deducing A(ω) from G(τ) or G(iω_n) is challenging due to the ill-conditioned nature of the kernel function K. To be more specific, the condition number of K is very large because of the exponential decay of K with ω and τ. Consequently, direct inversion of K becomes impractical from a numerical standpoint <cit.>. Furthermore, the G(τ) or G(iω_n) data obtained from finite temperature quantum Monte Carlo simulations are not free from errors <cit.>, further complicating the solution of Eqs. (<ref>) and (<ref>). §.§ Maximum entropy method The cornerstone of the MaxEnt method is Bayes' theorem. Let us treat the input Green's function G and the spectrum A as two events. According to Bayes' theorem, the posterior probability P[A|G] can be calculated by: P[A|G] = P[G|A] P[A]/P[G], where P[G|A] is the likelihood function, P[A] is the prior probability, and P[G] is the evidence <cit.>. P[G|A] is assumed to be in direct proportion to e^-L[A], where the misfit functional L[A] reads: L[A] = 1/2 χ^2[A] = 1/2 ∑^N_i=1 [(G_i - G̃_i[A])/σ_i]^2. Here, N is the number of input data points, σ denotes the standard deviation of G, G̃[A] is the reconstructed Green's function via Eqs. (<ref>) and (<ref>), and χ^2[A] is called the goodness-of-fit functional (see Appendix <ref> for more details). On the other hand, P[A] is supposed to be in direct proportion to e^α S[A], where α is a regularization parameter and S is the entropy. Since the evidence P[G] can be viewed as a normalization constant, it is ignored in what follows. Thus, P[A|G] ∝ e^Q, where Q is a functional of A <cit.>. It is defined as follows: Q[A] = α S[A] - L[A] = α S[A] - 1/2 χ^2[A]. The basic idea of the MaxEnt method is to identify the optimal spectrum Â that maximizes the posterior probability P[A|G] (or equivalently Q[A]). In essence, our goal is to determine the most favorable Â that satisfies the following equation: ∂ Q/∂ A|_A = Â = 0. Eq. (<ref>) can be easily solved by the standard Newton's algorithm <cit.>. In Appendices <ref>, <ref> and <ref>, all terms on the right-hand side of Eq. (<ref>) are elaborated in detail. We also explain how to solve Eq. (<ref>) efficiently.
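To make the preceding formalism concrete, the following Python sketch discretizes the forward problem and evaluates the MaxEnt objective Q[A]. The uniform frequency mesh, the grid sizes, and the function names are our own illustrative choices, not specifications from any particular toolkit; complex Matsubara data are handled via the modulus in χ².

import numpy as np

# Minimal sketch of the forward problem, assuming a uniform real-frequency
# mesh and the fermionic Matsubara kernel K = 1/(iw_n - w).
beta, nmesh, nfreq = 10.0, 501, 50
omega = np.linspace(-5.0, 5.0, nmesh)          # real-frequency mesh
weights = np.gradient(omega)                   # integration weights Δ_m
wn = (2 * np.arange(nfreq) + 1) * np.pi / beta # fermionic Matsubara frequencies

K = 1.0 / (1j * wn[:, None] - omega[None, :])  # kernel K(iw_n, w)

def forward(A):
    """Reconstructed Green's function G̃_n[A] = Σ_m K_nm A_m Δ_m."""
    return K @ (A * weights)

def chi2(A, G, sigma):
    """Goodness-of-fit χ²[A] between input G and reconstruction."""
    return np.sum(np.abs((G - forward(A)) / sigma) ** 2)

def Q(A, G, sigma, alpha, S):
    """MaxEnt objective Q[A] = α S[A] - χ²[A] / 2 for a given entropy S."""
    return alpha * S(A) - 0.5 * chi2(A, G, sigma)

Any of the entropic terms discussed below can be plugged in as the callable S.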
§.§ Bayesian reconstruction entropy The entropy S is sometimes also referred to as the Kullback-Leibler distance. In principle, there are multiple choices for it. Perhaps S_SJ is the most frequently used in quantum many-body physics and condensed matter physics <cit.>. It reads: S_SJ[A] = ∫ dω[ A(ω) - D(ω) - A(ω) log(A(ω)/D(ω)) ]. Here, D(ω) is called the default model, providing the essential features of the spectra. Usually D(ω) is a constant or a Gaussian function, but it can also be determined by making use of the high-frequency behavior of the input data <cit.>. If both A(ω) and D(ω) have the same normalization, Eq. (<ref>) reduces to: S_SJ[A] = -∫ dω A(ω) log(A(ω)/D(ω)). The S_BR introduced by Burnier and Rothkopf <cit.> is dominant in high-energy physics and particle physics. It reads: S_BR[A] = ∫ dω[ 1 - A(ω)/D(ω) + log(A(ω)/D(ω)) ]. Note that S_SJ is constructed from four axioms <cit.>, while for S_BR two of these axioms are replaced <cit.>. First, scale invariance is incorporated in S_BR. It means that S_BR only depends on the ratio between A and D. The integrand in Eq. (<ref>) is dimensionless, such that the choice of units for A and D will not change the result of the spectral reconstruction. Second, S_BR favors smooth spectra. Spectra that deviate between two adjacent frequencies, ω_1 and ω_2, are penalized. §.§ Positive-negative entropy The formulas discussed above are only correct for positive definite spectra with a finite norm, i.e., ∫ dω A(ω) > 0. However, the norm could be zero for off-diagonal elements of matrix-valued Green's functions due to the fermionic anti-commutation relation <cit.>: ∫ dω A(ω) = 0. This suggests that the spectral function is no longer positive definite. The equations for S_SJ and S_BR [i.e., Eqs. (<ref>) and (<ref>)] should be adapted to this new circumstance. The positive-negative entropy algorithm proposed by Kraberger et al. is a graceful solution to this problem <cit.>. They rewrite the off-diagonal spectral function A(ω) as the subtraction of two positive definite spectra: A(ω) = A^+(ω) - A^-(ω). Here A^+(ω) and A^-(ω) are independent, but have the same norm. Then the resulting entropy can be split into two parts: S^±[A] = S[A^+,A^-] = S[A^+] + S[A^-]. S^±[A] (or S[A^+,A^-]) is called the positive-negative entropy, which was first used for the analysis of NMR spectra <cit.>. The expressions of the positive-negative entropy for S_SJ and S_BR are as follows: S^±_SJ[A] = ∫ dω [ √(A^2 + 4D^2) - 2D - A log((√(A^2 + 4D^2) + A)/(2D)) ], and S^±_BR[A] = ∫ dω [ 2 - (√(A^2 + D^2) + D)/D + log((√(A^2 + D^2) + D)/(2D)) ]. The default models for A^+ and A^- are D^+ and D^-, respectively. To derive Eqs. (<ref>) and (<ref>), D^+ = D^- = D is assumed. For a detailed derivation, please see Appendices <ref> and <ref>. Similar to S_BR[A], S^±_BR[A] also exhibits scale invariance. In other words, it depends on A/D only. §.§ Entropy density The integrand in the expression for the entropy S is referred to as the entropy density s, i.e., S[A] = ∫ dω s[A(ω)]. Next, we would like to make a detailed comparison of the entropy densities of the different entropic terms. Suppose that x = A/D; the expressions for the various entropy densities are collected as follows: s_SJ(x) = x - 1 - x log x, s_BR(x) = 1 - x + log x, s^±_SJ(x) = √(x^2 + 4) - 2 - x log((√(x^2 + 4) + x)/2), s^±_BR(x) = 2 - (√(x^2 + 1) + 1) + log((√(x^2 + 1) + 1)/2). They are visualized in Figure <ref>. Clearly, all the entropy densities are strictly concave and non-positive (i.e., s ≤ 0 and s^± ≤ 0). The ordinary entropy density s(x) is defined only for positive x. s_BR(x) becomes maximal at x = 1, and exhibits a similar quadratic behavior around its maximum as s_SJ(x). In the limit x → 0 (A ≪ D), s_BR(x) is not flattened out (it diverges logarithmically), while s_SJ(x) → -1. Thus, s_BR(x) avoids the asymptotic flatness inherent in s_SJ(x) <cit.>. The positive-negative entropy density s^±(x) is valid for any x (x ∈ ℝ). Both s^±_BR(x) and s^±_SJ(x) are even functions. They also exhibit quadratic behaviors around x = 0, and s^±_BR(x) ≥ s^±_SJ(x). In the limit of α → ∞, the goodness-of-fit functional χ^2[A] has negligible weight, and the entropic term α S[A] becomes dominant <cit.>. Thus, the maximization of Q[A] (equivalently, the maximization of S[A]) yields Â = D (at x = 1) for the conventional entropy, in contrast to Â = 0 (at x = 0) for the positive-negative entropy <cit.>.
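The four entropy densities above are simple enough to be checked numerically. The following short Python snippet, with illustrative grids, verifies that s_SJ and s_BR peak at x = 1, that s^±_BR peaks at x = 0, and that s^±_BR(x) ≥ s^±_SJ(x).

import numpy as np

# The four entropy densities as functions of x = A/D.
def s_sj(x):   return x - 1.0 - x * np.log(x)
def s_br(x):   return 1.0 - x + np.log(x)
def s_sj_pm(x):
    r = np.sqrt(x**2 + 4.0)
    return r - 2.0 - x * np.log((r + x) / 2.0)
def s_br_pm(x):
    r = np.sqrt(x**2 + 1.0)
    return 2.0 - (r + 1.0) + np.log((r + 1.0) / 2.0)

x = np.linspace(0.01, 3.0, 300)
assert np.argmax(s_sj(x)) == np.argmax(s_br(x))      # both peak at x = 1
xs = np.linspace(-3.0, 3.0, 301)
assert np.argmax(s_br_pm(xs)) == len(xs) // 2        # s± peaks at x = 0
assert np.all(s_br_pm(xs) >= s_sj_pm(xs) - 1e-12)    # s±_BR ≥ s±_SJ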
§ RESULTS §.§ Computational setups When the conventional MaxEnt method is combined with S_SJ (or S^±_SJ) <cit.>, it is called the MaxEnt-SJ method. On the other hand, a combination of the MaxEnt method with S_BR (or S^±_BR) <cit.> is called the MaxEnt-BR method. We implemented both methods in the open source analytic continuation toolkit <cit.>. All the benchmark calculations involved in this paper were performed with this toolkit. The simulated results for the MaxEnt-SJ and MaxEnt-BR methods will be presented in this section. Next, we explain the computational details. We always start from an exact spectrum A(ω), which consists of one or more Gaussian-like peaks. Then A(ω) is used to construct the imaginary-time Green's function G(τ) via Eq. (<ref>), or the Matsubara Green's function G(iω_n) via Eq. (<ref>). Here, for the sake of simplicity, the kernel function K is assumed to be fermionic. Since realistic Green's function data from finite temperature quantum Monte Carlo simulations are usually noisy, multiplicative Gaussian noise is manually added to the clean G(τ) or G(iω_n) to imitate this situation. The following formula is adopted: G_noisy = G_clean[1 + δ N_ℂ(0,1)], where δ denotes the noise level of the input data and N_ℂ is the complex-valued normal Gaussian noise <cit.>. Unless stated otherwise, the standard deviation of G is fixed to 10^-4. The numbers of input data for G(τ) and G(iω_n) are 1000 and 50, respectively. During the MaxEnt simulations, the χ^2-kink algorithm <cit.> is used to determine the regularization parameter α. The Bryan algorithm <cit.> is also tested. It always gives similar results, which will not be presented in this paper. Once S_BR or S^±_BR is chosen, the preblur trick <cit.> is used to smooth the spectra. The blur parameter is adjusted case by case. Finally, the reconstructed spectrum is compared with the true solution. We also adopt the following quantity to quantify the deviation of the reconstructed Green's function from the input one: R_i = 100 |Δ G_i/G_i| = 100 |(G_i - G̃_i[Â])/G_i|, where G̃ is evaluated from Â via Eq. (<ref>) or Eq. (<ref>). §.§ Single-band Green's functions First, the analytic continuations of single-band Green's functions are examined. The exact spectral functions are constructed by a superposition of Gaussian peaks. We consider two representative spectra. (i) Single off-centered peak. The spectral function is: A(ω) = exp[-(ω - ϵ)^2/(2Γ^2)], where ϵ = 0.5, Γ = 1.0, δ = 10^-4, σ = 10^-4, and β = 20.0. The input data is the Matsubara Green's function, which contains 50 Matsubara frequencies (N = 50). The blur parameter for the MaxEnt-BR method is 0.45. (ii) Two Gaussian peaks with a gap. The spectral function is: A(ω) = f_1/(√(2π)Γ_1) exp[-(ω - ϵ_1)^2/(2Γ_1^2)] + f_2/(√(2π)Γ_2) exp[-(ω - ϵ_2)^2/(2Γ_2^2)], where f_1 = f_2 = 1.0, ϵ_1 = -ϵ_2 = 2.0, Γ_1 = Γ_2 = 0.5, δ = 10^-4, σ = 10^-3, and β = 5.0. The input data is the imaginary-time Green's function, which contains 1000 imaginary time slices (N = 1000). The blur parameter for the MaxEnt-BR method is 0.30.
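As an illustration of this setup, the following Python sketch generates the synthetic Matsubara data of case (i) and degrades it with multiplicative noise according to the formula above. The mesh, the random seed, and the convention of normalizing the complex Gaussian by √2 are our own assumptions.

import numpy as np

rng = np.random.default_rng(42)

# Case (i): single off-centered Gaussian peak, with the parameters quoted
# in the text; mesh sizes are illustrative.
beta, eps, gamma, delta_noise = 20.0, 0.5, 1.0, 1e-4
omega = np.linspace(-5.0, 5.0, 801)
dw = omega[1] - omega[0]
A = np.exp(-(omega - eps) ** 2 / (2.0 * gamma ** 2))

wn = (2 * np.arange(50) + 1) * np.pi / beta
K = 1.0 / (1j * wn[:, None] - omega[None, :])
G_clean = K @ A * dw                                   # forward map

# Multiplicative complex Gaussian noise, G_noisy = G_clean [1 + δ N_C(0,1)];
# the 1/√2 factor is one common normalization convention for N_C.
noise = rng.normal(size=G_clean.shape) + 1j * rng.normal(size=G_clean.shape)
G_noisy = G_clean * (1.0 + delta_noise * noise / np.sqrt(2.0))

# Relative deviation R_i of a reconstructed Green's function
def deviation(G, G_rec):
    return 100.0 * np.abs((G - G_rec) / G)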
The analytic continuation results are shown in Fig. <ref>. For case (i), it is clear that the exact spectrum and the Matsubara Green's function are well reproduced by both the MaxEnt-SJ method and the MaxEnt-BR method [see Fig. <ref>(a) and (c)]. For case (ii), the MaxEnt-SJ method works quite well. The MaxEnt-BR method can resolve the gap and the positions of the two peaks accurately. However, it slightly overestimates their heights. We also observe that the difference between G(τ) and G̃(τ) becomes relatively apparent in the vicinity of τ = β/2 [see Fig. <ref>(d) and (f)]. Figure <ref>(b) and (e) plot log_10(χ^2) as a function of log_10(α). The plots can be split into three distinct regions <cit.>: (i) Default model region (α → ∞). χ^2 is relatively flat and goes to its global maximum. The entropic term α S is much larger than χ^2. The obtained spectrum A(ω) resembles the default model D(ω). (ii) Noise-fitting region (α → 0). χ^2 approaches its global minimum, but it is larger than α S. In this region, the calculated spectrum A(ω) tends to fit the noise in G(τ) or G(iω_n). (iii) Information-fitting region. χ^2 increases with increasing α and is comparable with α S. The optimal α parameter is located at the crossover between the noise-fitting region and the information-fitting region. For S_BR and S_SJ, their χ^2(α) curves almost overlap in the default model region and the noise-fitting region, but differ in the information-fitting region. This indicates that the optimal α parameters for S_SJ are larger than those for S_BR. §.§ Matrix-valued Green's functions Next, the analytic continuations of matrix-valued Green's functions are examined. We consider a two-band model. The initial spectral function is a diagonal matrix: 𝐀'(ω) = ( [ A'_11(ω), 0; 0, A'_22(ω) ] ). Here A'_11(ω) and A'_22(ω) are constructed by using Eq. (<ref>). For A'_11(ω), the parameters are: f_1 = f_2 = 0.5, ϵ_1 = 1.0, ϵ_2 = 2.0, Γ_1 = 0.20, and Γ_2 = 0.70. For A'_22(ω), the parameters are: f_1 = f_2 = 0.5, ϵ_1 = -1.0, ϵ_2 = -2.1, Γ_1 = 0.25, and Γ_2 = 0.60. The true spectral function is a general matrix with non-zero off-diagonal components. It reads: 𝐀(ω) = ( [ A_11(ω), A_12(ω); A_21(ω), A_22(ω) ] ). The diagonal matrix 𝐀'(ω) is rotated by a rotation matrix 𝐑 to generate 𝐀(ω). The rotation matrix 𝐑 is defined as follows: 𝐑 = ( [ cos θ, sin θ; -sin θ, cos θ ] ), where θ denotes the rotation angle. In the present work, we consider three different rotation angles. They are: (i) θ = 0.1, (ii) θ = 0.5, and (iii) θ = 0.9. Then 𝐀(ω) is used to construct the matrix-valued Green's function 𝐆(iω_n) by using Eq. (<ref>). The essential parameters are: δ = 0.0, σ = 10^-4, and β = 40.0. The input data contains 50 Matsubara frequencies. For analytic continuations of the diagonal elements of 𝐆(iω_n), the default models are Gaussian-like. However, for the off-diagonal elements, the default models are quite different. They are evaluated by D_12(ω) = √(A_11(ω)A_22(ω)), where A_11(ω) and A_22(ω) are the calculated spectral functions for the diagonal elements <cit.>. This means that we have to carry out analytic continuations for the diagonal elements first, and then use the diagonal spectra to prepare the default models for the off-diagonal elements. For the MaxEnt-BR method, the preblur algorithm is used to smooth the spectra.
The blur parameter is fixed to 0.2. The simulated results are illustrated in Fig. <ref>. For the diagonal spectral functions (A_11 and A_22), both the MaxEnt-SJ and MaxEnt-BR methods can accurately resolve the peaks that are close to the Fermi level. However, for the high-energy peaks, the MaxEnt-BR method tends to overestimate their heights and yield sharper peaks [see the peaks around ω = 2.0 in Fig. <ref>(a), (d), and (g)]. For the off-diagonal spectral functions, only A_12 is shown, since A_21 is equivalent to A_12. We observe that the major features of A_12 are well captured by both the MaxEnt-SJ and MaxEnt-BR methods. Undoubtedly, the MaxEnt-SJ method works quite well in all cases. The MaxEnt-BR method exhibits good performance when the rotation angle is small [θ = 0.1, see Fig. <ref>(c)]. However, some wiggles emerge around ω = ± 2.0 when the rotation angle is large [θ = 0.5 and θ = 0.9, see Fig. <ref>(f) and (i)]. We test more rotation angles (0.0 < θ < 2.0). It seems that these wiggles are not enhanced unless the blur parameter b is decreased. § DISCUSSIONS In the previous section, analytic continuations for single-band Green's functions and matrix-valued Green's functions by using the MaxEnt-BR method have been demonstrated. However, there are still some important issues that need to be clarified. In this section, we further discuss the preblur algorithm, the auxiliary Green's function algorithm, and the noise tolerance of the MaxEnt-BR method. §.§ Effect of preblur The benchmark results shown in Section <ref> imply that the MaxEnt-BR method has a tendency to generate sharp peaks or small fluctuations in the high-energy regions of the spectra. The preblur algorithm is helpful to alleviate this phenomenon. In the context of analytic continuation, the preblur algorithm was introduced by Kraberger et al. <cit.>. The kernel function K(iω_n,ω) [see Eq. (<ref>)] is “blurred” by using the following expression: K_b(iω_n,ω) = ∫^∞_-∞ dω' K(iω_n,ω') g_b(ω - ω'). Here, K_b(iω_n,ω) is the blurred kernel, which is then used in Eq. (<ref>) to evaluate the χ^2-term. g_b(ω) is a Gaussian function: g_b(ω) = exp[-ω^2/(2b^2)]/(√(2π) b), where b is the blur parameter. In Figures <ref> and <ref>, we analyze the effects of the preblur trick for two typical scenarios: (i) a positive spectral function with two separate Gaussian peaks, and (ii) a complicated spectral function for an off-diagonal Green's function. Note that the model parameters for generating the exact spectra are taken from Sections <ref> and <ref>, respectively. It is evident that the MaxEnt-BR method without the preblur trick (b = 0) has trouble resolving the spectra accurately. It usually favors sharp peaks (see Fig. <ref>). Sometimes it may lead to undesirable artifacts, such as the side peaks around ω = ± 1.5 in Fig. <ref>. The preblur algorithm can remedy this problem to some extent. The major peaks are smoothed, and the artificial side peaks are suppressed upon increasing b. However, bigger is not always better; there is an optimal b. For case (i), the optimal b is 0.3. A larger b (b > 0.3) will destroy the gap, inducing a metallic state. For case (ii), the optimal b is 0.2. If b is further increased, it is difficult to obtain a stable solution. Here, we should emphasize that these unphysical features seen in the spectra are not unique to the MaxEnt-BR method. Actually, a similar tendency was already observed in the early applications of the MaxEnt-SJ method in image processing tasks. In order to address this problem, Skilling suggested that the spectrum can be expressed as a Gaussian convolution: A = g_b ⋆ h, where h is a “hidden” function <cit.>. The entropy is then evaluated from h(ω), instead of A(ω). In fact, Skilling's approach is equivalent to the preblur trick.
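A minimal implementation of the blurred kernel is straightforward. The following Python sketch convolves each row of a discretized kernel with g_b, assuming a uniform frequency mesh and neglecting edge effects; it is an illustration under these assumptions, not the toolkit's actual implementation.

import numpy as np

def preblur_kernel(K, omega, b):
    """Discretized K_b(iw_n, w) = ∫ dw' K(iw_n, w') g_b(w - w')."""
    dw = omega[1] - omega[0]                 # assume a uniform mesh
    g = np.exp(-omega**2 / (2.0 * b**2)) / (np.sqrt(2.0 * np.pi) * b)
    Kb = np.empty_like(K)
    for n in range(K.shape[0]):
        # Gaussian smearing of each kernel row; edge effects are ignored.
        Kb[n] = np.convolve(K[n], g, mode="same") * dw
    return Kb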
§.§ Auxiliary Green's functions As stated before, in addition to the positive-negative entropy method, the auxiliary Green's function algorithm can be used to continue the off-diagonal elements of Green's functions <cit.>. Its idea is quite simple. First, an auxiliary Green's function is constructed as follows: G_aux(iω_n) = G_11(iω_n) + G_22(iω_n) + 2G_12(iω_n). Suppose that A_aux(ω) is the corresponding spectral function for G_aux(iω_n). It is easy to prove that A_aux(ω) is positive semi-definite. So, it is safe to perform analytic continuation for G_aux(iω_n) by using the traditional MaxEnt-SJ or MaxEnt-BR method. Next, the off-diagonal spectral function A_12(ω) can be evaluated as follows: A_12(ω) = [A_aux(ω) - A_11(ω) - A_22(ω)]/2. This algorithm is not restricted to the particle-hole symmetric case, as assumed in Ref. <cit.>. Here, we employ the two-band example presented in Section <ref> again to test the combination of the auxiliary Green's function algorithm with the MaxEnt-BR method (and the MaxEnt-SJ method). The model and computational parameters are kept, and the analytic continuation results are plotted in Fig. <ref>. Three rotation angles are considered in the simulations. We find that the auxiliary Green's function algorithm is numerically unstable. (i) θ = 0.1. When the rotation angle is small, the amplitude (absolute value) of A_12(ω) is small. Both the MaxEnt-SJ method and the MaxEnt-BR method can resolve the major peaks around ω = ± 1.0, but they produce apparent oscillations at higher energies. Increasing the b parameter further could lead to wrong peaks. It seems that these unphysical features are likely due to the superposition of errors in A_aux(ω), A_11(ω), and A_22(ω). The performance of the MaxEnt-BR method is worse than that of the MaxEnt-SJ method. (ii) θ = 0.5 and θ = 0.9. When the rotation angle is moderate or large, the spectra obtained by the MaxEnt-SJ method agree well with the exact solutions. By using the MaxEnt-BR method, though the fluctuations in the high-energy regions are greatly suppressed, small deviations from the exact spectra still exist. Overall, the auxiliary Green's function algorithm is inferior to the positive-negative entropy method in the examples studied. Especially when the rotation angle is small, the auxiliary Green's function algorithm usually fails, irrespective of which form of entropy is adopted.
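The two steps of this algorithm amount to a pair of one-line transformations. The following Python sketch, with illustrative variable names, summarizes them; G_11, G_22, and G_12 denote the Matsubara data, while A_aux, A_11, and A_22 are the spectra obtained from ordinary (positive-definite) MaxEnt runs.

import numpy as np

def build_aux(G11, G22, G12):
    """G_aux = G11 + G22 + 2 G12; its spectrum is positive semi-definite."""
    return G11 + G22 + 2.0 * G12

def off_diagonal_spectrum(A_aux, A11, A22):
    """Recover A12 = (A_aux - A11 - A22) / 2 from the three continued spectra."""
    return 0.5 * (A_aux - A11 - A22)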
§.§ Robustness with respect to noisy input data Analytic continuation is commonly used on noisy Monte Carlo data. The precision of the Monte Carlo data, which strongly depends on the sampling algorithm and the estimator used, is rarely better than 10^-5 <cit.>. The distributions of the inherent errors in the Monte Carlo data are often Gaussian. This poses a strict requirement on the noise tolerance of analytic continuation methods. Previous studies have demonstrated that the MaxEnt-SJ method is robust to noise. Here, we would like to examine the robustness of the MaxEnt-BR method with respect to noisy Matsubara data. We reconsider the analytic continuation of the off-diagonal elements of matrix-valued Green's functions. As mentioned above, the synthetic Matsubara data for this case is assumed to be noiseless (δ = 0.0). Now the noise is manually added via Eq. (<ref>) and the noise level δ is varied from 10^-8 to 10^-2. The other computational parameters are the same as those used in Section <ref>. In order to estimate the sensitivity of the analytic continuation method to noisy data, a new quantity, namely the integrated real-axis error, is introduced: err(δ) = ∫ dω |A(ω) - Â_δ(ω)|. Here, Â_δ(ω) denotes the reconstructed (optimal) spectral function under the given noise level δ. In Figure <ref>(a)-(g), the convergence of the spectral functions obtained by the MaxEnt-BR method for simulated Gaussian errors of varying magnitude is shown. When δ = 10^-2, the main peaks at ω ≈ ± 1.0 are roughly reproduced, while the side peaks at ω ≈ ± 2.0 are completely smeared out. When δ = 10^-3, the major characteristics of the off-diagonal spectrum are successfully captured. As the simulated errors decrease further (δ < 10^-3), the MaxEnt-BR method rapidly converges to the exact solution. For comparison, the results for the MaxEnt-SJ method are also presented. It is evident that the MaxEnt-SJ method is less robust than the MaxEnt-BR method when the noise level is moderate. It fails to resolve the satellite peaks around ω = ± 2.0. Only when δ ≤ 10^-6 can the MaxEnt-SJ method recover the exact spectrum. Figure <ref>(h) exhibits the integrated error err(δ). When 10^-6 < δ < 10^-3, the MaxEnt-BR method exhibits better robustness with respect to noise than the MaxEnt-SJ method. § CONCLUDING REMARKS In summary, we extend the application scope of S_BR to analytic continuations of imaginary-time Green's functions and Matsubara Green's functions in quantum many-body physics and condensed matter physics. It is further generalized to the form of the positive-negative entropy to support analytic continuation of matrix-valued Green's functions, in which the positive semi-definiteness of the spectral function is broken. We demonstrate that the MaxEnt-BR method, in conjunction with the preblur algorithm and the positive-negative entropy algorithm, is capable of capturing the primary features of the diagonal and off-diagonal spectral functions, even in the presence of moderate levels of noise. Overall, its performance is on par with that of the MaxEnt-SJ method in the examples studied. Possible applications of the MaxEnt-BR method in the future include analytic continuations of anomalous Green's functions and self-energy functions <cit.>, bosonic response functions (such as optical conductivity and spin susceptibility) <cit.>, and frequency-dependent transport coefficients with non-positive spectral weight <cit.>, etc. Further investigations are highly desirable. Finally, the MaxEnt-BR method, together with the MaxEnt-SJ method, has been integrated into the open source software package <cit.>, which may be useful for the analytic continuation community. This work is supported by the National Natural Science Foundation of China (No. 12274380 and No. 11934020), and the Central Government Guidance Funds for Local Scientific and Technological Development (No. GUIKE ZY22096024). § GOODNESS-OF-FIT FUNCTIONAL Given the spectral function A(ω), the Matsubara Green's function G̃[A] can be reconstructed by using the following equation (the tilde symbol is used to distinguish the reconstructed Green's function from the input Green's function): G̃_n[A] = ∫ dω K(iω_n, ω) A(ω), where K(iω_n,ω) denotes the kernel function, and n is the index of the Matsubara frequency. For the sake of simplicity, Eq. (<ref>) is reformulated into its discretization form:
G̃_n[A] = ∑_m K_nm A_m Δ_m, where K_nm ≡ K(iω_n, ω_m), A_i ≡ A(ω_i), and Δ_i denotes the weight of the mesh on the real axis. The goodness-of-fit functional χ^2[A] measures the distance between the input Green's function G and the reconstructed Green's function G̃[A]. Its expression is as follows: χ^2[A] = ∑^N_n=1 (G_n - G̃_n[A])^2/σ^2_n. Here we assume that there are N data points, and σ_n denotes the standard deviation (i.e., the error bar) of G_n. Substituting Eq. (<ref>) into Eq. (<ref>), we arrive at: χ^2[A] = ∑^N_n=1 (G_n - ∑_m K_nm A_m Δ_m)^2/σ^2_n. The first derivative of χ^2[A] with respect to A_i reads: ∂χ^2[A]/∂ A_i = 2 ∑^N_n=1 (K_ni Δ_i/σ^2_n) (∑_m K_nm A_m Δ_m - G_n). § SHANNON-JAYNES ENTROPY In this appendix, the technical details for S_SJ, S^±_SJ, and the MaxEnt-SJ method are reviewed. Most of the equations presented in this appendix have been derived in Refs. <cit.>, <cit.>, and <cit.>. We repeat the mathematical derivation here so that this paper is self-contained. §.§ Entropy The Shannon-Jaynes entropy is defined as follows: S_SJ[A] = ∫ dω[ A(ω) - D(ω) - A(ω) log(A(ω)/D(ω)) ]. The discretization form of Eq. (<ref>) is: S_SJ[A] = ∑_i Δ_i [ A_i - D_i - A_i log(A_i/D_i) ]. The first derivative of S_SJ[A] with respect to A_i reads: ∂ S_SJ[A]/∂ A_i = -Δ_i log(A_i/D_i). §.§ Parameterization of spectral function A singular value decomposition can be readily performed for the kernel function K: K = U ξ V^T, where U and V are column-orthogonal matrices and ξ is the vector of singular values. So, the matrix element of K reads: K_ni = ∑_m U_nm ξ_m V_im. The columns of V can be understood as basis functions for the spectral function. That is to say, they span a so-called singular value space, in which the spectrum can be parameterized as: A_l = D_l exp(∑_m V_lm u_m). It is easy to prove that: ∂ A_l/∂ u_i = D_l exp(∑_m V_lm u_m) V_li = A_l V_li. §.§ Newton's method Next, we derive the equations for {u_m}. Let us recall the stationary condition [see also Eq. (<ref>)]: ∂ Q[A]/∂ A_i = 0. It is actually: α ∂ S_SJ[A]/∂ A_i - (1/2) ∂χ^2[A]/∂ A_i = 0. Substituting Eqs. (<ref>) and (<ref>) into the above equation: α Δ_i log(A_i/D_i) + ∑^N_n=1 (K_ni Δ_i/σ^2_n) (∑_m K_nm A_m Δ_m - G_n) = 0. Eliminating Δ_i: α log(A_i/D_i) + ∑^N_n=1 (K_ni/σ^2_n) (∑_m K_nm A_m Δ_m - G_n) = 0. Substituting Eqs. (<ref>) and (<ref>) into Eq. (<ref>): α ∑_m V_im u_m + ∑^N_n=1 (1/σ^2_n) ∑_m U_nm ξ_m V_im (∑_l K_nl A_l Δ_l - G_n) = 0. Now ∑_m V_im can be removed from the two terms on the left-hand side. Finally, we get: α u_m + ∑^N_n=1 (1/σ^2_n) ξ_m U_nm (∑_l K_nl A_l Δ_l - G_n) = 0. Since A_l depends on {u_m} as well, Eq. (<ref>) is actually a non-linear equation for {u_m}. In the package <cit.>, Newton's method is adopted to solve it. So we have to evaluate the following two variables and pass them to Newton's algorithm <cit.>: f_m = α u_m + ξ_m ∑^N_n=1 (1/σ^2_n) U_nm (∑_l K_nl A_l Δ_l - G_n), J_mi = ∂ f_m/∂ u_i = α δ_mi + ξ_m ∑^N_n=1 (1/σ^2_n) U_nm ∑_l K_nl A_l Δ_l V_li, where J_mi can be considered as a Jacobian matrix. Note that Eq. (<ref>) is applied to derive Eq. (<ref>). Since the calculations for f_m and J_mi are quite complicated and time-consuming, the Einstein summation technique is used to improve the computational efficiency.
The following three variables should be precomputed and stored during the initialization stage: B_m = ∑^N_n=1 (1/σ^2_n) ξ_m U_nm G_n, W_ml = ∑_pn (1/σ^2_n) U_nm ξ_m U_np ξ_p V_lp Δ_l D_l, W_mli = W_ml V_li. Clearly, these three variables only depend on the input Green's function G, the default model D, and the singular value decomposition of the kernel function K. Now f_m and J_mi can be reformulated in terms of them: f_m = α u_m + ∑_l W_ml w_l - B_m, J_mi = α δ_mi + ∑_l W_mli w_l, where w_l = exp(∑_m V_lm u_m). §.§ Positive-negative entropy For matrix-valued Green's functions, the spectral functions of the off-diagonal components could exhibit negative weight. However, the ordinary MaxEnt method is only rigorous for non-negative spectra <cit.>. In order to remedy this problem, one could imagine that the off-diagonal spectral functions originate from a subtraction of two artificial positive functions <cit.>: A = A^+ - A^-, A^- = A^+ - A. Assuming the independence of A^+ and A^-, the resulting entropy S_SJ[A^+,A^-] is the sum of the respective entropies: S_SJ[A^+,A^-] = ∫ dω[A^+ - D - A^+ log(A^+/D)] + ∫ dω[A^- - D - A^- log(A^-/D)]. Thus, S_SJ[A^+,A^-] is called the positive-negative entropy in the literature <cit.>. We first eliminate A^-: S_SJ[A,A^+] = ∫ dω[ A^+ - D - A^+ log(A^+/D) ] + ∫ dω[(A^+ - A) - D - (A^+ - A) log((A^+ - A)/D)], and write: Q[A,A^+] = α S_SJ[A,A^+] - (1/2) χ^2[A]. Since we are searching for a maximum of Q with respect to A and A^+, we also apply: ∂ Q[A,A^+]/∂ A^+ = α ∂ S_SJ[A,A^+]/∂ A^+ = 0, to eliminate A^+. The following equations show some intermediate steps: ∫ dω[ log(A^+/D) + log((A^+ - A)/D) ] = 0, log(A^+/D) + log((A^+ - A)/D) = 0, A^+(A^+ - A)/D^2 = 1. Finally, we obtain: A^+ = [√(A^2 + 4D^2) + A]/2, A^- = [√(A^2 + 4D^2) - A]/2. Substituting Eqs. (<ref>) and (<ref>) into Eq. (<ref>) to eliminate A^+ and A^-, we obtain: S_SJ[A^+,A^-] = ∫ dω[ √(A^2 + 4D^2) - 2D - A log((√(A^2 + 4D^2) + A)/(2D)) ]. Its discretization form becomes: S_SJ[A^+,A^-] = ∑_m [ √(A_m^2 + 4D_m^2) - 2D_m - A_m log((√(A_m^2 + 4D_m^2) + A_m)/(2D_m)) ] Δ_m. Note that Eq. (<ref>) is exactly the same as Eq. (<ref>). S_SJ[A^+,A^-] is just S^±_SJ[A]. The first derivative of S_SJ[A^+,A^-] with respect to A_i is: ∂ S_SJ[A^+,A^-]/∂ A_i = -Δ_i log(A^+_i/D_i). §.§ Parameterization for positive-negative spectral functions According to Eq. (<ref>) and Eq. (<ref>), we find: A^+ A^- = D^2. Inspired by Eq. (<ref>), it is natural to parameterize A^+ and A^- in the singular value space as well: A^+_i = D_i exp(∑_m V_im u_m), A^-_i = D_i exp(-∑_m V_im u_m). Hence, A_i = D_i exp(∑_m V_im u_m) - D_i exp(-∑_m V_im u_m) = D_i (w_i - 1/w_i), with w_i = exp(∑_m V_im u_m). The corresponding derivatives read: ∂ A^+_l/∂ u_i = A^+_l V_li, ∂ A^-_l/∂ u_i = -A^-_l V_li, and ∂ A_l/∂ u_i = (A^+_l + A^-_l) V_li = D_l (w_l + 1/w_l) V_li. By using the Einstein summation notations as defined in Section <ref>, we immediately obtain: f_m = α u_m + ∑_l W_ml (w_l - 1/w_l) - B_m, J_mi = α δ_mi + ∑_l W_mli (w_l + 1/w_l).
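The closed forms for A^± derived above can be verified numerically. The following Python snippet, with illustrative values of A and D, checks that A^+ and A^- are positive, that A^+ - A^- = A, and that A^+ A^- = D^2.

import numpy as np

A = np.linspace(-2.0, 2.0, 9)      # an off-diagonal spectrum (any sign)
D = 0.5                            # default model

Ap = 0.5 * (np.sqrt(A**2 + 4.0 * D**2) + A)
Am = 0.5 * (np.sqrt(A**2 + 4.0 * D**2) - A)

assert np.all(Ap > 0) and np.all(Am > 0)
assert np.allclose(Ap - Am, A)
assert np.allclose(Ap * Am, D**2)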
§ BAYESIAN RECONSTRUCTION ENTROPY In this appendix, the technical details for S_BR, S^±_BR, and the MaxEnt-BR method are discussed. §.§ Entropy The Bayesian reconstruction entropy reads: S_BR[A] = ∫ dω[ 1 - A(ω)/D(ω) + log(A(ω)/D(ω)) ]. Its discretization form is: S_BR[A] = ∑_i Δ_i [ 1 - A_i/D_i + log(A_i/D_i) ]. Its first derivative with respect to A_i is: ∂ S_BR[A]/∂ A_i = -Δ_i (1/D_i - 1/A_i). §.§ Parameterization of spectral function The original parameterization of A [see Eq. (<ref>)] cannot be used here. A new parameterization scheme for A is necessary. Assume that 1/D_i - 1/A_i = ∑_m V_im u_m; then we have: A_l = D_l/(1 - D_l ∑_m V_lm u_m). So, the first derivative of A_l with respect to u_i reads: ∂ A_l/∂ u_i = A_l A_l V_li. §.§ Newton's method Next, we would like to derive f_m and J_mi for the Bayesian reconstruction entropy. Substituting Eq. (<ref>) into Eq. (<ref>): α Δ_i (1/D_i - 1/A_i) + ∑^N_n=1 (K_ni Δ_i/σ^2_n) (∑_m K_nm A_m Δ_m - G_n) = 0. Eliminating Δ_i: α (1/D_i - 1/A_i) + ∑^N_n=1 (K_ni/σ^2_n) (∑_m K_nm A_m Δ_m - G_n) = 0. Substituting Eq. (<ref>) into Eq. (<ref>): α ∑_m V_im u_m + ∑^N_n=1 (1/σ^2_n) ∑_m U_nm ξ_m V_im (∑_l K_nl A_l Δ_l - G_n) = 0. Eliminating ∑_m V_im again: α u_m + ∑^N_n=1 (1/σ^2_n) ξ_m U_nm (∑_l K_nl A_l Δ_l - G_n) = 0. Clearly, Eq. (<ref>) is the same as Eq. (<ref>). Thus, the equations for f_m and J_mi are as follows: f_m = α u_m + ξ_m ∑^N_n=1 (1/σ^2_n) U_nm (∑_l K_nl A_l Δ_l - G_n), J_mi = ∂ f_m/∂ u_i = α δ_mi + ξ_m ∑^N_n=1 (1/σ^2_n) U_nm ∑_l K_nl A_l A_l Δ_l V_li. Note that Eq. (<ref>) is the same as Eq. (<ref>). However, Eq. (<ref>) differs from Eq. (<ref>) by an extra factor of A_l in the second term on the right-hand side. To derive Eq. (<ref>), Eq. (<ref>) is used. Now the Einstein summation notation is adopted to simplify the calculations of f_m and J_mi again: f_m = α u_m + ∑_l W_ml w_l - B_m, J_mi = α δ_mi + ∑_l W_mli D_l w_l w_l, where w_l = 1/(1 - D_l ∑_m V_lm u_m). Here, the definitions of W_ml, W_mli, and B_m can be found in Section <ref>. §.§ Positive-negative entropy Next, we would like to generalize the Bayesian reconstruction entropy to realize the positive-negative entropy algorithm, so as to support the analytic continuation of matrix-valued Green's functions <cit.>. The positive-negative entropy for the Bayesian reconstruction entropy is defined as follows: S_BR[A^+,A^-] = ∫ dω[ 1 - A^+/D + log(A^+/D) ] + ∫ dω[ 1 - A^-/D + log(A^-/D) ]. Since A = A^+ - A^-, Eq. (<ref>) can be transformed into: S_BR[A,A^+] = ∫ dω[ 1 - A^+/D + log(A^+/D) ] + ∫ dω[ 1 - (A^+ - A)/D + log((A^+ - A)/D) ]. Then we should eliminate A^+ in Eq. (<ref>). Because ∂ S_BR[A,A^+]/∂ A^+ = 0 = ∫ dω (1/A^+ + 1/(A^+ - A) - 2/D), we immediately get: A^+ = [√(A^2 + D^2) + D + A]/2, and A^- = [√(A^2 + D^2) + D - A]/2. From the definitions of A^+ and A^-, it is easy to prove: 2A^+ A^- = (A^+ + A^-) D, and A^+ + A^- = √(A^2 + D^2) + D. We would like to express S_BR[A^+,A^-] in terms of A: S_BR[A^+,A^-] = ∫ dω[ 2 - A^+/D - A^-/D + log(A^+ A^-/D^2) ]. Substituting Eq. (<ref>) into Eq. (<ref>): S_BR[A^+,A^-] = ∫ dω[ 2 - (A^+ + A^-)/D + log((A^+ + A^-)/(2D)) ]. Substituting Eq. (<ref>) into Eq. (<ref>): S_BR[A^+,A^-] = ∫ dω[ 2 - (√(A^2 + D^2) + D)/D + log((√(A^2 + D^2) + D)/(2D)) ]. Its discretization form reads: S_BR[A^+,A^-] = ∑_m [ 2 - (√(A_m^2 + D_m^2) + D_m)/D_m + log((√(A_m^2 + D_m^2) + D_m)/(2D_m)) ] Δ_m. Note that Eq. (<ref>) is exactly the same as Eq. (<ref>). S_BR[A^+,A^-] is just S^±_BR[A]. Then the first derivative of S_BR[A^+,A^-] with respect to A_i is: ∂ S[A^+,A^-]/∂ A_i = -Δ_i (1/D_i - 1/A_i^+). This equation is very similar to Eq. (<ref>). §.§ Parameterization for positive-negative spectral functions Now we assume that: A^+_l = D_l/(1 - D_l ∑_m V_lm u_m), and A^-_l = D_l/(1 + D_l ∑_m V_lm u_m). We also introduce: w^+_l = 1/(1 - D_l ∑_m V_lm u_m), and w^-_l = 1/(1 + D_l ∑_m V_lm u_m). So A^+_l = D_l w^+_l, A^-_l = D_l w^-_l, and A_l = D_l (w^+_l - w^-_l). It is easy to verify that the definitions of A^+_l and A^-_l obey Eq. (<ref>).
The first derivatives of A^+_l and A^-_l with respect to u_i are:∂ A^+_l/∂ u_i = A^+_l A^+_l V_li,and∂ A^-_l/∂ u_i = -A^-_l A^-_l V_li.The first derivative of A_l with respect to u_i is:∂ A_l/∂ u_i = A^+_l A^+_l V_li + A^-_l A^-_l V_li.After some simple algebra, we easily get:f_m = α u_m + ∑_l W_ml (w^+_l - w^-_l) - B_m, J_mi = αδ_mi + ∑_l W_mli D_l (w^+_l w^+_l + w^-_l w^-_l).
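As a compact summary of the Newton scheme of the two appendices, the following Python sketch iterates the MaxEnt-BR equations f_m = 0 for the diagonal (positive-definite) case. It assumes a real-valued kernel and data (e.g., the imaginary-time setup), omits the preblur step and the determination of α, and includes no damping or positivity safeguard; it is a pedagogical sketch under these assumptions, not the production implementation.

import numpy as np

def newton_br(K, G, sigma, D, Delta, alpha, niter=200):
    """Solve f_m = 0 for the BR entropy; K is (N x M), D and Delta are (M,)."""
    U, xi, Vt = np.linalg.svd(K, full_matrices=False)
    V = Vt.T
    B = xi * (U.T @ (G / sigma**2))                    # B_m
    T = (U * xi).T @ ((U * xi) / sigma[:, None]**2)    # Σ_n U_nm ξ_m U_np ξ_p / σ_n²
    W = T @ (V * (Delta * D)[:, None]).T               # W_ml
    u = np.zeros(len(xi))
    for _ in range(niter):
        w = 1.0 / (1.0 - D * (V @ u))                  # w_l of the BR parameterization
        f = alpha * u + W @ w - B                      # f_m
        J = alpha * np.eye(len(u)) + (W * (D * w**2)) @ V  # J_mi
        u -= np.linalg.solve(J, f)
        if np.linalg.norm(f) < 1e-10:
            break
    return D / (1.0 - D * (V @ u))                     # optimal spectrum A_l = D_l w_l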
http://arxiv.org/abs/2401.00018v1
{ "authors": [ "Songlin Yang", "Liang Du", "Li Huang" ], "categories": [ "hep-lat", "cond-mat.str-el", "physics.comp-ph" ], "primary_category": "hep-lat", "published": "20231227034712", "title": "Combining Bayesian reconstruction entropy with maximum entropy method for analytic continuations of matrix-valued Green's functions" }
In this paper, we study the problem of generalizable synthetic image detection, aiming to detect forgery images from diverse generative methods, e.g., GANs and diffusion models. Cutting-edge solutions start to explore the benefits of pre-trained models, and mainly follow the fixed paradigm of solely training an attached classifier, e.g., combining frozen CLIP-ViT with a learnable linear layer in UniFD <cit.>. However, our analysis shows that such a fixed paradigm is prone to yield detectors with insufficient learning regarding forgery representations. We attribute the key challenge to the lack of forgery adaptation, and present a novel forgery-aware adaptive transformer approach, namely FatFormer. Based on the pre-trained vision-language spaces of CLIP, FatFormer introduces two core designs for the adaptation to build generalized forgery representations. First, motivated by the fact that both image and frequency analysis are essential for synthetic image detection, we develop a forgery-aware adapter to adapt image features to discern and integrate local forgery traces within image and frequency domains. Second, we find that considering the contrastive objectives between adapted image features and text prompt embeddings, a previously overlooked aspect, results in a nontrivial generalization improvement. Accordingly, we introduce language-guided alignment to supervise the forgery adaptation with image and text prompts in FatFormer. Experiments show that, by coupling these two designs, our approach tuned on 4-class ProGAN data attains a remarkable detection performance, achieving an average of 98% accuracy on unseen GANs, and surprisingly generalizes to unseen diffusion models with 95% accuracy. § INTRODUCTION Recent years have witnessed the emergence and advancement of generative models, such as GANs <cit.> and diffusion models <cit.>. These models enable the creation of hyper-realistic synthetic images, thus raising wide concerns about potential abuse and privacy threats. In response to such security issues, various forgery detection methods <cit.> have been developed, e.g., image-based methods <cit.> focusing on low-level visual artifacts and frequency-based methods <cit.> relying on high-frequency pattern analysis. However, we observe significant performance degradation when applying them to unseen images created by GANs or more recent diffusion models. How to address this problem has therefore attracted significant interest. Recent approaches <cit.> turn to explore the utilization of pre-trained models, following the fixed pre-trained paradigm of solely training an attached classifier, as shown in Figure <ref> (a). A notable example in this field is the UniFD proposed by Ojha et al. <cit.>, where a pre-trained CLIP-ViT <cit.> is employed to encode images into image features without learning. Subsequently, a linear layer is tuned as a classifier to determine the credibility of inputs.
At a very high level, their key to success is the employment of a pre-trained model in a frozen state, thus providing a learned universal representation (from the pre-training) that is not explicitly tuned on the current synthetic image detection task. In this way, such a representation will never be overfitted during training and thus preserves reasonable generalizability. However, we consider that such a frozen operation adopted by UniFD also limits the capability of pre-trained models for learning strong and pertinent forgery features. To verify our assumption, we qualitatively study the forgery discrimination of the fixed pre-trained paradigm by visualizing the logit distributions of UniFD <cit.> across various generative models, as depicted in the top row of Figure <ref>. The distribution reflects the degree of separation between `real' and `fake' during testing, thereby indicating the extent of generalization of the extracted forgery representations. One can see that there is a large overlap of `real' and `fake' regions when facing unseen GANs or diffusion models (Figure <ref> (b)-(d)), mistakenly identifying these forgeries as the `real' class. Moreover, even in the case of ProGAN <cit.> testing samples, which employ the same generative model as the training data, the distinction between `real' and `fake' elements becomes increasingly indistinct (Figure <ref> (a) vs. (e)). We conclude that the fixed pre-trained paradigm is prone to yield detectors with insufficient learning regarding forgery artifacts, and attribute the key challenge to the lack of forgery adaptation that prevents the full unleashing of the potential embedded in pre-trained models. Driven by this analysis, we present a novel Forgery-aware adaptive transFormer approach (Figure <ref> (b)), named FatFormer, for generalizable synthetic image detection. In alignment with UniFD <cit.>, FatFormer investigates CLIP <cit.> as the pre-trained model, which consists of a ViT <cit.> image encoder and a transformer <cit.> text encoder. Based on the pre-trained vision-language spaces of CLIP, our approach achieves the forgery adaptation by incorporating two core designs, ultimately obtaining well-generalized forgery representations with a distinct boundary between real and fake classes (Figure <ref> (e)-(h)). First, motivated by the fact that both image and frequency domains are important for synthetic image detection, a forgery-aware adapter (FAA) is developed, comprising a pair of image and frequency forgery extractors. In the image domain, a lightweight convolution module is employed for extracting low-level forgery artifacts, such as blur textures and color mismatch <cit.>. On the other hand, for the frequency domain, we construct a grouped attention mechanism that dynamically aggregates frequency clues from different frequency bands of the discrete wavelet transform (DWT) <cit.>. By integrating these diverse forgery traces, FAA builds a comprehensive local viewpoint of image features essential for effective forgery adaptation. Second, instead of utilizing the binary cross-entropy loss applied to image features, we consider the contrastive objectives between image and text prompts, a previously overlooked aspect.
This novel direction is inspired by the natural language supervision in CLIP-ViT's pre-training, which is typically more robust to overfitting, as it optimizes the similarity between image features and text prompt embeddings <cit.>. Accordingly, language-guided alignment (LGA) is proposed, which encompasses a patch-based enhancer, designed to enrich the contextual relevance of text prompts by conditioning them on image patch tokens, as well as a text-guided interactor, which serves to align local image patch tokens with global text prompt embeddings, thereby directing the image encoder to concentrate on forgery-related representations. Empirical results show that the forgery adaptation supervised by LGA obtains more generalized forgery representations, thus improving the generalizability of synthetic image detection. Our adaptive approach FatFormer significantly outperforms recent methods with the fixed pre-trained paradigm. Notably, we achieve 98.4% ACC and 99.7% AP on 8 types of GANs, and 95.0% ACC and 98.8% AP on 10 types of unseen diffusion images, using limited ProGAN training data. We hope our findings can facilitate the development of pre-trained paradigms in this field. § RELATED WORK Synthetic image detection. Due to the increasing concerns about generative models, many works have been proposed to address the problem of synthetic image detection, which can be roughly divided into image-based methods <cit.>, frequency-based methods <cit.>, and pre-trained-based methods <cit.>. For instance, Yu et al. <cit.> find that images generated by GANs have unique fingerprints, which can be utilized as forgery traces for detection. Wang et al. <cit.> adopt various data augmentations and large-scale GAN images to improve the generalization to unseen testing data. Qian et al. <cit.> introduce frequency analysis into the detection framework, using local frequency statistics and decomposed high-frequency components for forgery detection. More recently, many works have focused on the fixed pre-trained paradigm of freezing the pre-trained model and adopting an attached classifier for forgery detection. For example, LGrad <cit.> turns the detection problem into a transformation-dependent problem based on a pre-trained model, and utilizes gradient features from the frozen pre-trained model as forgery cues. Furthermore, Ojha et al. <cit.> propose UniFD to explore the potential of the vision-language model, i.e., CLIP <cit.>, for synthetic image detection. They observe that training a deep network fails to detect fake images from new breeds, and employ the frozen CLIP-ViT <cit.> to extract forgery features, followed by a linear classifier. In this paper, our motivation is different from the closely-related approach UniFD <cit.>. UniFD attempts to adopt a frozen pre-trained model to extract forgery representations `without learning'. In contrast, our approach aims to demonstrate that the forgery adaptation of pre-trained models is essential for the generalizability of synthetic image detection. Efficient transfer learning. The latest progress in transfer learning shows the potential for efficient fine-tuning of pre-trained models, especially in the NLP field. Unlike traditional strategies, such as linear probing <cit.> and full fine-tuning <cit.>, efficient transfer learning only adds learnable modules with a few parameters, such as prompt learning <cit.> and adapter-based methods <cit.>. Inspired by this, many efficient transfer learning works have been proposed for vision <cit.> and vision-language models <cit.>.
Unlike UniFD <cit.> with linear probing, this paper investigates efficient transfer learning for generalizable synthetic image detection and first proposes an adaptive transformer with contrastive objectives. § FATFORMER §.§ Overview The overall structure of FatFormer is illustrated in Figure <ref>. FatFormer is composed of two pre-trained encoders for images and text prompts, as well as the proposed forgery-aware adapter (Section <ref>) and language-guided alignment (Section <ref>). This framework predicts the forgery probability by calculating the softmax of cosine similarities between image features and text prompt embeddings. Vanilla CLIP. Following UniFD <cit.>, we adopt CLIP <cit.> as the pre-trained model, with a ViT <cit.> image encoder and a transformer <cit.> text encoder. Given an image x ∈ ℝ^3×H×W, with height H and width W, CLIP converts it into D-dimensional image features f_img ∈ ℝ^(1+N)×D, where 1 represents the image CLS token, N = HW/P^2 denotes the number of image patch tokens, and P is the patch size. Meanwhile, the text encoder takes language text t and generates the text prompt embeddings f_text ∈ ℝ^M×D from the appended EOS tokens, where M denotes the number of classes (in this paper, M = 2). The two encoders are jointly trained to optimize the cosine similarity between the image CLS token and text prompt embeddings using a contrastive loss. After pre-training, we can utilize re-assembled text descriptions for zero-shot testing, e.g., a simple template of `this photo is [CLASS]', where `[CLASS]' is replaced by class names like `real' or `fake'. Given the testing image and text prompts, we have the predicted similarity of class i ∈ {0, 1}, where 0 represents `real' and 1 represents `fake', as follows: S(i) = cos(f_img^(0), f_text^(i)), where cos(·) is the cosine similarity, and f_img^(0) denotes the image CLS token at index 0 of f_img. Further, the corresponding probability can be derived via a softmax function: P(i) = exp(S(i)/τ)/∑_k exp(S(k)/τ), where τ is the temperature parameter. §.§ Forgery-aware adapter (FAA) To adapt the image features for effective forgery adaptation, we insert forgery-aware adapters into the image encoder to bridge adjacent ViT stages, each encompassing multiple ViT layers, as shown in Figure <ref>. These adapters discern and integrate forgery traces within both image and frequency domains, enabling a comprehensive local viewpoint of image features. Image forgery extractor. In the image domain, FAA constructs a lightweight image forgery extractor, comprising two convolution layers and a ReLU layer, for capturing low-level image artifacts, as follows: ĝ_img^(j) = Conv(ReLU(Conv(g_img^(j)))), where ĝ_img^(j) represents the adapted forgery-aware image features from the FAA in the j-th ViT stage, and g_img^(j) denotes the vanilla features from the last multi-head attention module in the j-th ViT stage. Here, we omit the reshape operators.
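A possible realization of this extractor in PyTorch is sketched below, including the reshape operators omitted above. The 3×3 kernels and the bottleneck width are our own assumptions; the paper does not specify them.

import torch
import torch.nn as nn

class ImageForgeryExtractor(nn.Module):
    """Conv-ReLU-Conv adapter applied to the patch tokens of one ViT stage."""
    def __init__(self, dim=768, hidden=192):
        super().__init__()
        self.conv1 = nn.Conv2d(dim, hidden, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(hidden, dim, kernel_size=3, padding=1)

    def forward(self, tokens, hw):
        # tokens: (B, N, D) patch tokens (CLS token excluded); hw: (H/P, W/P)
        B, N, D = tokens.shape
        x = tokens.transpose(1, 2).reshape(B, D, *hw)   # tokens -> feature map
        x = self.conv2(torch.relu(self.conv1(x)))       # Conv(ReLU(Conv(.)))
        return x.flatten(2).transpose(1, 2)             # feature map -> tokens

g_img = torch.randn(2, 196, 768)                        # 14x14 patches, ViT-B sizes
g_img_hat = ImageForgeryExtractor()(g_img, (14, 14))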
Frequency forgery extractor. For the frequency domain, a grouped attention mechanism is proposed to mine forgery traces in the frequency bands of the discrete wavelet transform (DWT) <cit.>. Although previous detection methods <cit.> adopt the fast Fourier transform <cit.> and the discrete cosine transform <cit.>, these transforms destroy the position information <cit.> in the transformed frequency domain, which is crucial in the context of attention modeling <cit.>. Thus, we utilize the DWT as the transform function, which retains the spatial structure of image features and decomposes the inputs into 4 distinct frequency bands, including LL, LH, HL, and HH. Here, combinations of `L' and `H' represent the combined low and high pass filters. Then, two grouped attention modules, i.e., inter-band attention and intra-band attention, are proposed for the extraction of frequency clues. As indicated in Figure <ref>, the inter-band attention explicitly explores the interactions across diverse frequency bands, while the intra-band attention builds interactions within each frequency band. This design achieves dynamic aggregation over different positions and bands, rather than manual weighting as in F3Net <cit.>. In practice, we implement them with multi-head attention modules <cit.>. Finally, an FFN and the inverse discrete wavelet transform (IDWT) are used to obtain forgery-aware frequency features ĝ_freq^(j), which are transformed back into the image domain for further incorporation. To avoid introducing hyper-parameters, we leverage a learnable scale factor λ to control the information from the image and frequency domains in the final adapted image features of the j-th ViT stage, which are sent to the first multi-head attention module in the next (j+1)-th stage: ĝ^(j) = ĝ_img^(j) + λ·ĝ_freq^(j). §.§ Language-guided alignment (LGA) To supervise the forgery adaptation of FatFormer, language-guided alignment is proposed by considering the contrastive objectives between image and text prompts. In a bit more detail, LGA has a patch-based enhancer that enriches the context of text prompts, and a text-guided interactor that aligns the local image patch tokens with global text prompt embeddings. Finally, we implement an augmented contrastive objective for the loss calculation. Patch-based enhancer. Instead of using hand-crafted templates as prompts, FatFormer has a soft prompt design by adopting automatic context embeddings, following <cit.>. Since synthetic image detection relies on local forgery details <cit.>, we develop a patch-based enhancer to enhance the contextual relevance of prompts conditioned on local image patch tokens, deriving forgery-relevant prompt context. Specifically, we first compute the image patch tokens f_img^(1:N) ∈ ℝ^N×D in the image encoder. Then, given C context embeddings p_ctx ∈ ℝ^C×D, we have A_pbe = p_ctx · (f_img^(1:N))^T, where A_pbe ∈ ℝ^C×N is the similarity matrix in the patch-based enhancer. We use A_pbe to represent the intensity of image patch tokens for constructing each context embedding, as follows: p̂_ctx = softmax(A_pbe) · f_img^(1:N) + p_ctx. Finally, we can obtain the set of possible text prompts by combining the enhanced context p̂_ctx and the M [CLASS] embeddings, and send them to the text encoder.
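The two equations of the patch-based enhancer translate directly into code. A minimal PyTorch sketch, in which the number of context embeddings C = 8 is an assumed value, reads:

import torch

def patch_based_enhancer(p_ctx, patches):
    # p_ctx: (C, D) learnable context; patches: (N, D) image patch tokens
    A_pbe = p_ctx @ patches.T                      # (C, N) similarity matrix
    return torch.softmax(A_pbe, dim=-1) @ patches + p_ctx

p_ctx = torch.randn(8, 768)                        # C = 8 context embeddings
patches = torch.randn(196, 768)
p_ctx_hat = patch_based_enhancer(p_ctx, patches)   # enhanced context, (8, 768)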
(<ref>), with A_tgi∈ℝ^N× M, we align the image patch tokens with text prompt embeddings by adaptively augmenting text representations, as follows f̂_img^(1:N) =softmax(A_tgi) · f_text + f_img^(1:N), where f̂_img^(1:N) denotes the aligned image patch tokens. Together with the augmented contrastive objectives, the image encoder is guided to concentrate on forgery-related representations within each distinct image patch.Augmented contrastive objectives. For the loss calculation, we consider augmented contrastive objectives that comprise two elements. The first is the cosine similarity in Eq. (<ref>), the same as in the vanilla CLIP. The second is the similarity between text prompt embeddings and aligned image patch tokens f̂_img^(1:N). With t ∈ [1, N] and i ∈{0, 1}, we have S'(i) = 1/N∑_t cos(f̂_img^(t), f_text^(i)). By merging the similarities from Eq. (<ref>) and Eq. (<ref>), our FatFormer describes an augmented probability P̂(i) by a softmax function, as follows P̂(i)=exp((S(i) + S'(i)) / τ)/∑_k exp((S(k) + S'(k)) / τ). In practice, we apply the cross-entropy function on Eq. (<ref>) with label y ∈{0,1} to calculate the contrastive loss like the original CLIP, as follows ℒ = - y ·logP̂(1)-(1-y) ·log (1-P̂(1)).§ EXPERIMENTS §.§ SettingsDatasets. As new generative methods keep emerging, we follow the standard protocol <cit.> that limits the accessible training data to only one generative model, while testing on unseen data, such as synthetic images from other GANs and diffusion models. Specifically, we train FatFormer on the images generated by ProGAN <cit.> with two different settings, including 2-class (chair, horse) and 4-class (car, cat, chair, horse) data from <cit.>. For evaluation, we collect the testing GANs dataset provided in <cit.> and the diffusion model datasets in <cit.>, which contain synthetic images and the corresponding real images. The testing GANs dataset includes ProGAN <cit.>, StyleGAN <cit.>, StyleGAN2 <cit.>, BigGAN <cit.>, CycleGAN <cit.>, StarGAN <cit.>, GauGAN <cit.> and DeepFake <cit.>. On the other hand, the diffusion part consists of PNDM <cit.>, Guided <cit.>, DALL-E <cit.>, VQ-Diffusion <cit.>, LDM <cit.>, and Glide <cit.>. For LDM and Glide, we also consider their variants with different generating settings. More details can be found in their official papers.Evaluation metric. The accuracy (ACC) and average precision (AP) are reported as the main metrics during evaluation for each generative model, following the standard process <cit.>. To better evaluate the overall model performance over the GANs and diffusion model datasets, we also adopt the mean of ACC and AP on each dataset, denoted as ACC_M and AP_M. Implementation details. Our main training and testing settings follow the previous study <cit.>. The input images are first resized to 256 × 256, and then image cropping is adopted to derive the final resolution of 224 × 224. We apply random cropping and random horizontal flipping during training and center cropping at testing, with no other augmentations. Adam <cit.> is utilized with betas of (0.9, 0.999). We set the initial learning rate as 4 × 10^-4 and the number of training epochs as 25, and adopt a total batch size of 256. Besides, a learning rate schedule is used, decaying every 10 epochs by a factor of 0.9. §.§ Main resultsThis paper aims to build a better paradigm with pre-trained models for synthetic image detection. Therefore, we mainly compare our FatFormer with previous methods that adopt the fixed pre-trained paradigm, such as LGrad <cit.> and UniFD <cit.>.
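Before presenting the comparisons, we pause to summarise the full training objective of Section 3.3 in one self-contained sketch. Tensor names and shapes are illustrative assumptions, features are assumed ℓ2-normalised, and the temperature value is ours, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def fatformer_loss(img_tokens, text_emb, labels, tau=0.07):
    """Sketch of the augmented contrastive objective: S + S', softmax, cross-entropy.

    img_tokens: (B, 1+N, D) image CLS token plus aligned patch tokens
    text_emb:   (M, D) text prompt embeddings, M = 2 ('real', 'fake')
    labels:     (B,) ground-truth classes in {0, 1}
    """
    img_tokens = F.normalize(img_tokens, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    # S(i): cosine similarity between the CLS token and each class embedding
    s_cls = img_tokens[:, 0] @ text_emb.t()                   # (B, M)
    # S'(i): mean cosine similarity over the N aligned patch tokens
    s_patch = (img_tokens[:, 1:] @ text_emb.t()).mean(dim=1)  # (B, M)
    # cross-entropy on the merged logits equals -log P_hat(y)
    logits = (s_cls + s_patch) / tau
    return F.cross_entropy(logits, labels)

# toy usage with 196 patch tokens and D=512
loss = fatformer_loss(torch.randn(4, 197, 512), torch.randn(2, 512),
                      torch.tensor([0, 1, 1, 0]))
```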
In addition, to show the effectiveness of our approach, we also consider comparisons with existing image-based <cit.> and frequency-based methods <cit.>.Comparisons on GANs dataset. Table <ref> reports the comparisons on the GANs dataset <cit.> with two different training data settings. Results show that our FatFormer consistently exceeds the pre-trained-based LGrad <cit.> and UniFD <cit.>. Specifically, under 4-class supervision, FatFormer outperforms the current state-of-the-art method UniFD by a significant 9.3% ACC and 1.4% AP with the same pre-trained CLIP model, achieving 98.4% ACC and 99.7% AP. Besides, for the other 2-class supervision setting, trends similar to those under 4-class supervision are observed when compared with pre-trained-based methods. Moreover, we also compare FatFormer with representative image-based <cit.> and frequency-based methods <cit.> in Table <ref>. Our approach also outperforms all of them by a large margin. The above evidence indicates the necessity of forgery adaptation for pre-trained models. Beyond the impressive performance, more importantly, our FatFormer provides an effective paradigm for how to incorporate pre-trained models in the synthetic image detection task. Comparisons on diffusion model dataset. To further demonstrate the effectiveness of FatFormer, we provide comparisons with existing detection methods on the diffusion model dataset <cit.>. The results are shown in Table <ref>. Note that all the compared methods are trained on 4-class ProGAN data. This test setting is more challenging, as forged images are created by various diffusion models with completely different generating theories and processes from GANs. Surprisingly, FatFormer generalizes well to diffusion models, achieving 95.0% ACC and 98.8% AP. Compared with the pre-trained-based LGrad <cit.> and UniFD <cit.>, FatFormer also works better than both of them when handling diffusion models. For example, our approach surpasses UniFD by 9.6% ACC and 4.2% AP. Moreover, we find that even with the powerful CLIP as the pre-trained model, UniFD only achieves a result (about 85% ACC) similar to PatchFor <cit.>. We argue this is mainly because the fixed pre-trained paradigm is prone to yield detectors with insufficient learning regarding forgery artifacts. Thus, our FatFormer, which presents an adaptive transformer framework with forgery adaptation and reasonable contrastive objectives, can achieve much better results.§.§ Ablation study We conduct several ablation experiments to verify the effectiveness of key elements in our FatFormer. Unless specified, we report the mean of accuracy (ACC_M) and average precision (AP_M) on the GANs dataset under the training setting of 4-class ProGAN data. Forgery-aware adapter implementations. We ablate the effects of considering the image domain and frequency domain in the forgery-aware adapter. The results are shown in Table <ref>. We observe severe performance degradation when removing either of these two domains, especially the frequency domain, with over 3.0% ACC gaps. We conclude that both image and frequency domains are essential in FatFormer for synthetic image detection. The image forgery extractor collects the local low-level forgery artifacts, e.g., blurred textures, while the frequency forgery extractor explores and gathers the forgery clues among different frequency bands, together building a comprehensive local viewpoint for the adaptation of image features.
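To make the input of the frequency branch concrete, the following is a minimal single-level Haar DWT sketch producing the four bands LL, LH, HL, and HH from a feature map. This is a generic orthonormal Haar decomposition written for illustration, not the authors' exact implementation, and the sign convention for the detail bands is one of several in common use.

```python
import torch

def haar_dwt(x: torch.Tensor):
    """Single-level 2-D Haar DWT of a feature map x of shape (B, C, H, W).

    Returns four half-resolution bands (LL, LH, HL, HH); applying the
    matching inverse transform (IDWT) to the four bands recovers x exactly.
    """
    a = x[..., 0::2, 0::2]  # top-left sample of each 2x2 block
    b = x[..., 0::2, 1::2]  # top-right
    c = x[..., 1::2, 0::2]  # bottom-left
    d = x[..., 1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2  # low-low: local average (coarse content)
    lh = (a + b - c - d) / 2  # detail along one axis
    hl = (a - b + c - d) / 2  # detail along the other axis
    hh = (a - b - c + d) / 2  # diagonal detail
    return ll, lh, hl, hh

bands = haar_dwt(torch.randn(2, 768, 14, 14))
print([t.shape for t in bands])  # four tensors of shape (2, 768, 7, 7)
```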
For the frequency forgery extractor, both interactions built by inter-band and intra-band attentions are important in our FatFormer. Table <ref> shows the ablation.Benefits of supervision in vision-language space. Table <ref> provides the comparisons between different supervision strategies for FatFormer, including (i) linear probing with image modality, (ii) vanilla contrastive objectives between the image CLS token and text prompt embeddings, where the text-guided interactor is masked out, and (iii) our augmented contrastive objectives. The results demonstrate that introducing text prompts for contrastive supervision benefits the generalization of detection. We conjecture this is mainly because CLIP provides a stable alignment between real-image and text representations through pre-training, thus yielding a mismatch when a fake image is paired with text prompts. As potential evidence, we find that only adopting LGA can still achieve 91.5% ACC (Table <ref>). Besides, we observe that the proposed augmented contrastive objectives can further boost generalizability by directing the image encoder to concentrate on forgery-related representations, bringing a 2.0% ACC gain over the vanilla implementation.Text prompt designs. Table <ref> gives the results of constructing the text prompt with different prompt designs and image conditions. The results validate that both auto context embeddings and image conditions are important in text prompt designs. Compared with using a fixed hand-crafted template, e.g., `this photo is', the design of auto context embeddings improves by 0.9% ACC, due to its abstract exploration in word embedding spaces. Besides, it is better to adopt image patch tokens, which contain more local context details, as the conditions to enhance these auto context embeddings, rather than the global image CLS token.Model components. Table <ref> gives the ablation of the two proposed model components, i.e., the forgery-aware adapter and language-guided alignment. Large performance drops (-6.9% ACC and -1.6% AP) are observed when adopting the previous fixed pre-trained paradigm by removing the forgery-aware adapter. This demonstrates the necessity of forgery adaptation of pre-trained models. On the other hand, the proposed language-guided alignment, which considers the augmented contrastive objectives in the vision-language space, also provides better supervision for the forgery adaptation than simply adopting binary labels, bringing 3.1% ACC and 0.5% AP gains. As shown in Figure <ref>, using language-guided alignment yields more concentration on semantic foreground patches, where anomalies, e.g., unrealistic objects, textures, or structures, often occur. Therefore, our FatFormer can obtain generalized forgery representations by focusing on local forgery details, improving the generalizability of synthetic image detection. §.§ More analysisHere, we analyze our FatFormer on different architectures and pre-training strategies.Analysis on different architectures. While FatFormer is constructed upon the identical CLIP framework <cit.> as employed in UniFD <cit.>, the proposed forgery adaptation strategy is transferable to alternative architectures. Presented in the upper section of Table <ref> are the ACC_M and AP_M scores for four distinct architectures, including two variations of multi-modal structures pre-trained by CLIP and two variants of the image-based Swin transformer <cit.> pre-trained on ImageNet 22k <cit.>.
The comparisons between models with and without FatFormer verify the efficacy of integrating forgery adaptation across different pre-trained architectures, significantly improving the performance of synthetic image detection.Analysis on different pre-training strategies. We further conduct an assessment of the efficacy of forgery adaptation across models employing different pre-training strategies. Utilizing ViT-L <cit.> as the baseline, we validate two well-known pre-training approaches: MAE <cit.> and CAE <cit.>. The evaluations are shown in the lower segment of Table <ref>. We observe that incorporating the forgery adaptation of our FatFormer leads to a consistent increase in performance across diverse pre-training strategies, demonstrating the robustness and transferability of our approach. § CONCLUSIONIn this paper, we present a novel adaptive transformer, FatFormer, for generalizable synthetic image detection. With two core designs for the forgery adaptation of pre-trained models, namely the forgery-aware adapter and language-guided alignment, the proposed approach outperforms the previous fixed pre-trained paradigm by a large margin. Besides, the forgery adaptation in FatFormer is also flexible and can be applied to various pre-trained architectures with different pre-training strategies. We hope FatFormer can provide insights for exploring better utilization of pre-trained models in the synthetic image detection field.Limitations and future works. FatFormer generalizes well on most generative methods, while there is still room for improvement on diffusion models, e.g., Guided <cit.>. Elucidating the distinctions and associations among images produced by diffusion models and GANs is needed to build stronger forgery detectors. The investigation of this problem is left to future work. Besides, how to construct a better pretext task specialized for synthetic image detection in pre-training is also worth a deeper study. § APPENDIXIn this appendix, we first discuss the potential negative societal impacts (refer to Section <ref>) that may arise in practical scenarios. Then, an in-depth exploration of ablation studies (explicated in Section <ref>) is presented, delineating the influence of hyper-parameters employed within our approach. Lastly, a comprehensive analysis is conducted to assess the efficacy of forgery adaptation in enhancing robustness (outlined in Section <ref>) against image perturbations. §.§ Broader impacts The development of synthetic image detection tools, while aiming to combat misinformation, may lead to unintended consequences in content moderation. Legitimate content that exhibits characteristics similar to forgeries may be mistakenly flagged, impacting normal image-based information sharing. These issues need further research and consideration when deploying this work in practical applications for content moderation. §.§ More Ablations We provide more ablation studies on the hyper-parameters used in our FatFormer. The training and evaluating settings are the same as in Section 4.3.Number of auto context embeddings. FatFormer combines the enhanced context embeddings and [CLASS] embeddings to construct the set of possible text prompts. Here, we ablate how the pre-defined number of context embeddings in text prompts affects the performance in the following table: One can see that 8 auto context embeddings are good enough and achieve better results than 16 embeddings. Thus, we set the number to 8 by default in this paper.
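For reference, a minimal sketch of the patch-based enhancer of Section 3.3, instantiated with the C=8 context embeddings chosen above; the module and variable names are our illustrative assumptions.

```python
import torch
import torch.nn as nn

class PatchBasedEnhancer(nn.Module):
    """Sketch of p_ctx -> softmax(p_ctx f^T) f + p_ctx over image patch tokens."""
    def __init__(self, num_ctx: int = 8, dim: int = 512):
        super().__init__()
        # C learnable context embeddings (the 'soft prompt')
        self.p_ctx = nn.Parameter(torch.randn(num_ctx, dim) * 0.02)

    def forward(self, patch_tokens: torch.Tensor) -> torch.Tensor:
        # patch_tokens: (N, D) image patch tokens f_img^{(1:N)}
        attn = self.p_ctx @ patch_tokens.t()            # A_pbe, shape (C, N)
        enhanced = attn.softmax(dim=-1) @ patch_tokens  # weighted sum of patch tokens
        return enhanced + self.p_ctx                    # residual: enhanced context

# toy usage: enhance 8 context embeddings with 196 patch tokens
p_hat = PatchBasedEnhancer()(torch.randn(196, 512))
print(p_hat.shape)  # torch.Size([8, 512])
```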
Number of forgery-aware adapters. To achieve effective forgery adaptation, FatFormer develops the forgery-aware adapter and integrates it with the ViT image encoder. The number of inserted forgery-aware adapters is to be explored. The following table lists the relevant ablations: We observe that inserting 3 forgery-aware adapters in the image encoder is able to achieve good performance. Therefore, we set 3 as the default number of forgery-aware adapters in our FatFormer. Kernel size of image forgery extractor. To capture low-level image artifacts, we introduce a lightweight image forgery extractor in the proposed forgery-aware adapter, including two convolutional layers and a ReLU. We also explore settings of the kernel size of the convolutional layers, as follows: We find that using a 1× 1 kernel yields superior results in constructing the image forgery extractor. We conjecture that this is mainly because the intermediate image patch tokens in ViT encode high-level semantic information of different image patches, which may not provide useful low-level similarity among adjacent positions like the ones in traditional convolutional networks. Thus, larger kernels, designed to fuse adjacent patch tokens, may introduce disturbance to the modeling process of ViT and damage the performance. §.§ Robustness on image perturbation To evaluate the effects of forgery adaptation in FatFormer on robustness, we apply several common image perturbations to the test images, following [12, 46]. Specifically, we adopt random cropping, Gaussian blurring, JPEG compression, and Gaussian noising, each with a probability of 50%. The detailed perturbation configurations can be found in [12]. Based on the GANs dataset, we compare our FatFormer with UniFD [35] and LGrad [46], which adopt the fixed pre-trained paradigm. The results are shown in the following table: It can be observed that our approach exceeds UniFD by a large margin, e.g., over +12.0% when facing Gaussian blurring. This is mainly because FatFormer obtains well-generalized forgery representations with the proposed forgery adaptation, as analyzed in Section 4.3. Moreover, we also consider a more realistic scenario by combining all four types of perturbation. The results are illustrated in Figure <ref>. Our FatFormer also beats UniFD on all testing GAN methods, further suggesting the robustness improvement brought by forgery adaptation.
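For completeness, a sketch of the four test-time perturbations described above, each applied with probability 0.5. The exact parameter ranges follow [12] and are not reproduced here, so the crop ratio, blur radius, JPEG quality, and noise level below are placeholder assumptions.

```python
import io
import random
from PIL import Image, ImageFilter
import numpy as np

def perturb(img: Image.Image) -> Image.Image:
    """Apply the four common perturbations, each with 50% probability."""
    if random.random() < 0.5:  # random cropping (placeholder ratio), resized back
        w, h = img.size
        img = img.crop((w // 8, h // 8, w - w // 8, h - h // 8)).resize((w, h))
    if random.random() < 0.5:  # Gaussian blurring
        img = img.filter(ImageFilter.GaussianBlur(radius=random.uniform(0.5, 2.0)))
    if random.random() < 0.5:  # JPEG compression via an in-memory re-encode
        buf = io.BytesIO()
        img.save(buf, format='JPEG', quality=random.randint(30, 95))
        img = Image.open(io.BytesIO(buf.getvalue()))
    if random.random() < 0.5:  # Gaussian noising
        arr = np.asarray(img).astype(np.float32)
        arr += np.random.normal(0.0, 5.0, arr.shape)
        img = Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
    return img
```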
http://arxiv.org/abs/2312.16649v1
{ "authors": [ "Huan Liu", "Zichang Tan", "Chuangchuang Tan", "Yunchao Wei", "Yao Zhao", "Jingdong Wang" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231227173632", "title": "Forgery-aware Adaptive Transformer for Generalizable Synthetic Image Detection" }
Cosmological constant Petrov type-N space-time in Ricci-inverse gravity F. Ahmed 0000-0003-2196-9622 [faizuddinahmed15@gmail.com] Department of Physics, University of Science & Technology Meghalaya, Ri-Bhoi, 793101, India J. C. R. de Souza 0000-0002-7684-9540 [jean.carlos@fisica.ufmt.br] and A. F. Santos 0000-0002-2505-5273 [alesandroferreira@fisica.ufmt.br (Corresp. author)] Instituto de Física, Universidade Federal de Mato Grosso, Cuiabá, Mato Grosso 78060-900, Brazil Our focus is on a specific type-N space-time that exhibits closed time-like curves in general relativity, studied here within the framework of the Ricci-inverse gravity model. The matter-energy content is solely composed of a pure radiation field, and it adheres to the energy conditions while featuring a negative cosmological constant. One of the key findings in this investigation is the non-zero determinant of the Ricci tensor (R_μν), which implies the existence of an anti-curvature tensor (A^μν) and, as a consequence, an anti-curvature scalar (A ≠ R^-1). Furthermore, we establish that this type-N space-time serves as a solution within modified gravity theories via the Ricci-inverse model, which involves adjustments to the cosmological constant (Λ) and the energy density (ρ) of the radiation field, expressed in terms of a coupling constant. As a result, our findings suggest that causality violations remain possible within the framework of this Ricci-inverse gravity model, alongside the predictions of general relativity. Keywords: Exact solutions; modified gravity theories; null dust; cosmological constant PACS number(s): 04.20.Jb; 04.50.Kd; 98.80.Es;§ INTRODUCTIONThe solutions of Einstein's field equations that exhibit intriguing features have garnered acceptance within the scientific community. These solutions are systematically categorized using the Petrov classification scheme. Since curvature is a localized property of space-time, the Petrov type provides insight into the local algebraic characteristics of the space-time geometry. Following this scheme <cit.> involves first establishing a set of null tetrad vector fields denoted as (𝐤, 𝐥, 𝐦, 𝐦̅) for a given space-time and subsequently determining the Weyl scalars Ψ_i (i=0,1,2,3,4) <cit.>. In the context of Petrov type-N space-time, the only non-zero Weyl scalar is Ψ_4 ≠ 0, and the Weyl tensor satisfies the condition C_μνρσ k^σ=0, where k^σ is the quadruple principal null direction (PND) along which the gravitational radiation propagates. The Petrov classification of the Weyl scalars and the asymptotic forms of radiative fields originating from spatially bounded sources underscore the fundamental role played by type-N solutions in gravitational radiation theory (for a comprehensive review, see <cit.>). All solutions of the vacuum Einstein equations, including those with a non-zero cosmological constant Λ, falling under type N and featuring a non-twisting null geodesic congruence, are well-established <cit.>. Additional non-vacuum solutions, both without and with a cosmological constant, within the framework of Petrov type-N space-times with non-twisting, non-expanding, and shear-free geodesic null congruence, are constructed in <cit.>. Recent investigations have extended to include twisting type-N vacuum solutions with a nonzero cosmological constant <cit.>. Furthermore, type N universal space-time has also attracted attention in recent times (see refs. <cit.>).
In type-N space-time without twisting, a non-expanding and shear-free geodesic null vector field emerges as the conduit for gravitational wave propagation. These gravitational wave space-times are either plane-fronted gravitational waves with parallel rays (called pp-waves for non-vacuum solutions of the field equations) or plane-wave space-times (vacuum solutions), belonging to the Kundt class. The confirmation of gravitational waves, or ripples in space and time, resulting from the merger of black holes, by the LIGO scientific collaboration in 2015 validated the success of general relativity in contemporary physics. Thus, Petrov type-N space-time holds significant importance in the realm of gravitational wave theory. One of the biggest problems in modern cosmology is the current accelerated expansion of the universe. There is strong observational data confirming this phenomenon <cit.>. One possibility to understand this accelerated phase is to add a new exotic component, called dark energy. This component has a negative pressure that causes gravity to behave repulsively on large cosmological scales <cit.>. The best-known candidate for dark energy is the cosmological constant Λ. Taking this component and assuming the existence of dark matter, the ΛCDM model has been proposed. Although it is a very popular model and can successfully explain the observational results, it is not without problems. The main problem is the famous cosmological constant problem <cit.>. Another way to explain the cosmological observations is to propose modifications to Einstein's general relativity <cit.>. The simplest way to construct alternative theories of general relativity is to include an additional term in the Einstein-Hilbert Lagrangian or to modify the structure of the Lagrangian itself, which implies modifying the Ricci scalar. Among the various modified gravity theories existing in the literature, there is a recent model, called Ricci-inverse gravity, which has attracted attention. In this paper, this model is considered. Ricci-inverse gravity <cit.> is an alternative theory to general relativity that modifies the Einstein-Hilbert action by introducing a geometrical object called the anti-curvature scalar A. The anti-curvature scalar is the trace of the anti-curvature tensor, denoted as A^μν, which is defined as the inverse of the Ricci tensor R^μν. The anti-curvature tensor satisfies the condition A^μσR_σν=δ^μ_ν. It is important to note that the anti-curvature scalar A is not the inverse of the Ricci scalar R. There are two classes that define this model: class I, characterized by the function f(R, A) that depends on the Ricci and anti-curvature scalars, and class II, given by the function f(R, A^2), which depends on the square of the anti-curvature tensor. Here, class I is considered. In recent years, some investigations have been developed using this gravitational theory.
For example, the no-go theorem for inflation has been investigated <cit.>, cosmic structure has been studied <cit.>, the evolution from the matter-dominated epoch to the accelerated expansion epoch has been analyzed <cit.>, the matter-antimatter asymmetry through baryogenesis in the realm of the f(R, A) theory of gravity has been discussed <cit.>, the causality issue has been investigated using an axially symmetric space-time <cit.>, and anisotropic stellar structures have been explored <cit.>, among other applications. In this paper, we aim to investigate a type-N space-time in general relativity, possessing a non-twisting, non-expanding, and shear-free geodesic null congruence, within the framework of Ricci-inverse gravity. For that, we consider a specific example: a pure radiation field space-time characterized by a negative cosmological constant that was investigated in <cit.>. This space-time represents a pp-wave non-vacuum solution of the field equations satisfying the energy conditions. In general relativity, this space-time is an exact solution of the field equations and exhibits closed time-like curves at an instant of time, thus violating the causality condition. Our study reveals that it also serves as a solution within the framework of Ricci-inverse gravity, where the cosmological constant Λ is replaced by a modified value Λ_m, and the energy-density of the radiation field, ρ, is altered to ρ_m, which is connected to a coupling parameter. Consequently, we demonstrate that the violation of causality is permitted in this modified gravity theory as well. It is worthwhile mentioning that this particular type-N space-time is a solution in Ricci-inverse gravity since the determinant of the Ricci tensor is nonzero, det(R_μν) ≠ 0, which ensures the existence of an anti-curvature tensor A^μν that is symmetric in nature, analogous to the metric tensor or the Ricci tensor. This paper is organized as follows. In Section 2, the cosmological constant type-N space-time with causality violation is introduced. Considering pure radiation as matter content, it is shown that this metric is a solution in general relativity, yielding a negative cosmological constant and a positive energy density. Next, this metric is investigated in Ricci-inverse gravity. Some conclusions and remarks are presented in Section 3.§ COSMOLOGICAL CONSTANT TYPE-N SPACE-TIME WITH CAUSALITY VIOLATION IN RICCI-INVERSE GRAVITY In this section, we examine an example of a type-N pure radiation field solution derived from Einstein's field equations in the context of general relativity. The matter content in this scenario consists of a pure radiation field that satisfies the energy conditions and includes a negative cosmological constant. Notably, this type-N space-time permits the existence of closed time-like curves within time-like regions, thus violating the causality condition.
We take this type-N space-time and investigate it within the framework of modified gravity theories using the Ricci-inverse approach. The line-element that describes this type-N space-time in cylindrical coordinates (t, r, ϕ, z) is given by the following expression (assuming c=1 and 8π G=1) <cit.>: ds^2=g_rr dr^2+2 g_tϕ dt dϕ+g_ϕϕ dϕ^2+2 g_zϕ dz dϕ+g_zz dz^2, with the components of the metric tensor g_μν given by g_tϕ=-1/2 cosh t sinh^2 (α r), g_rr=coth^2 (α r), g_ϕϕ=-sinh t sinh^2 (α r), g_zz=sinh^2 (α r), g_zϕ=β z sinh^2 (α r), where α>0, β>0 are arbitrary positive constants. To convert this metric (<ref>) with (<ref>) into a standard form, we perform the transformations t →sinh^-1 (τ), r →1/α sinh^-1 (α ϱ) in the metric (<ref>), and obtain the following line-element in the chart (τ, ϱ, ϕ, z): ds^2=dϱ^2/α^2 ϱ^2+α^2 ϱ^2 (-dτ dϕ-τ dϕ^2+2 β z dz dϕ+dz^2). Our primary objective is to investigate this metric (<ref>) within the context of Ricci-inverse gravity. This choice is motivated by the fact that the determinant of the Ricci tensor for the space-time described by the metric (<ref>) is non-zero, as we will discuss in detail below. Before that, we first review this metric in the context of general relativity and then investigate it within the framework of modified theories of gravity. The covariant metric tensor g_μν and its contravariant form g^μν for the metric (<ref>) are given by g_μν=[ 0 0 -α^2 ϱ^2/2 0; 0 1/α^2 ϱ^2 0 0; -α^2 ϱ^2/2 0 -α^2 ϱ^2 τ β z α^2 ϱ^2; 0 0 β z α^2 ϱ^2 α^2 ϱ^2 ], g^μν=[ 4 (τ+β^2 z^2)/α^2 ϱ^2 0 -2/α^2 ϱ^2 2 β z/α^2 ϱ^2; 0 α^2 ϱ^2 0 0; -2/α^2 ϱ^2 0 0 0; 2 β z/α^2 ϱ^2 0 0 1/α^2 ϱ^2 ]. The covariant Ricci tensor R_μν and its contravariant form for the metric (<ref>) are given by R_μν=[ 0 0 3 α^4 ϱ^2/2 0; 0 -3/ϱ^2 0 0; 3 α^4 ϱ^2/2 0 β+3 τ α^4 ϱ^2 -3 α^4 β z ϱ^2; 0 0 -3 α^4 β z ϱ^2 -3 α^4 ϱ^2 ], R^μν=[ 4/ϱ^4 (β/α^4-3 ϱ^2 (τ+β^2 z^2)) 0 6/ϱ^2 -6 β z/ϱ^2; 0 -3 α^4 ϱ^2 0 0; 6/ϱ^2 0 0 0; -6 β z/ϱ^2 0 0 -3/ϱ^2 ]. Finally, the Ricci scalar for the metric (<ref>) is given by R=g_μν R^μν=-12 α^2. The non-zero components of the Einstein tensor G_μν are G_τϕ=-3 α^4 ϱ^2/2=3 α^2 g_τϕ, G_ϱϱ=3/ϱ^2=3 α^2 g_ϱϱ, G_ϕϕ=-3 τ α^4 ϱ^2+β=3 α^2 g_ϕϕ+β, G_ϕ z=3 α^4 ϱ^2 β z=3 α^2 g_ϕ z, G_zz=3 α^4 ϱ^2=3 α^2 g_zz. The Einstein field equations with a cosmological constant and the energy-momentum tensor 𝒯^μν are given by G_μν+Λ g_μν=𝒯_μν, where the right-hand side is given by 𝒯_μν=ρ k_μ k_ν, 𝒯^μ_μ=0, with ρ being the energy-density of a pure radiation field, and k_μ=δ^ϕ_μ=(0,0,1,0) is a null vector that satisfies the relation k^μ k_μ=0. The non-zero component of the energy-momentum tensor is 𝒯_ϕϕ=ρ. Substituting the metric tensor g_μν given by (<ref>), the Einstein tensor G_μν given by (<ref>), and the energy-momentum tensor (<ref>) into the equations (<ref>), one finds the following physical quantities: Λ=-3 α^2, ρ=β>0. Thus, space-time (<ref>) is an exact solution of the field equations in general relativity whose matter content is a pure radiation field with constant energy-density, together with a negative cosmological constant. To determine the type of the chosen space-time (<ref>) using the Petrov classification scheme, one can use the Newman–Penrose formalism and construct a set of null tetrad vectors (k, l, m, m̅) <cit.>. One can easily show that only the Weyl scalar Ψ_4 ≠ 0 while the rest are all equal to zero, which indicates that the chosen metric is of Petrov type N. The null vector field k satisfies the geodesic null congruence condition, that is, k_μ;ν k^ν=0.
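The curvature quantities above can be checked mechanically. The following SymPy sketch (symbol names are ours; coordinate order τ, ϱ, ϕ, z) recomputes the Ricci tensor and the Ricci scalar from the metric and should reproduce R = -12α²:

```python
import sympy as sp

tau, rho, phi, z = sp.symbols('tau rho phi z', real=True)
alpha, beta = sp.symbols('alpha beta', positive=True)
x = [tau, rho, phi, z]

# metric g_{mu nu} of the line element in the chart (tau, rho, phi, z)
g = sp.Matrix([
    [0, 0, -alpha**2*rho**2/2, 0],
    [0, 1/(alpha**2*rho**2), 0, 0],
    [-alpha**2*rho**2/2, 0, -alpha**2*rho**2*tau, beta*z*alpha**2*rho**2],
    [0, 0, beta*z*alpha**2*rho**2, alpha**2*rho**2],
])
ginv = g.inv()

# Christoffel symbols Gamma^l_{ij}
Gam = [[[sum(ginv[l, m]*(sp.diff(g[m, i], x[j]) + sp.diff(g[m, j], x[i])
             - sp.diff(g[i, j], x[m])) for m in range(4))/2
         for j in range(4)] for i in range(4)] for l in range(4)]

# Ricci tensor R_{ij} = d_l Gamma^l_{ij} - d_j Gamma^l_{il}
#                     + Gamma^l_{lm} Gamma^m_{ij} - Gamma^l_{jm} Gamma^m_{il}
def ricci(i, j):
    return sp.simplify(sum(sp.diff(Gam[l][i][j], x[l]) - sp.diff(Gam[l][i][l], x[j])
                           + sum(Gam[l][l][m]*Gam[m][i][j] - Gam[l][j][m]*Gam[m][i][l]
                                 for m in range(4)) for l in range(4)))

Ric = sp.Matrix(4, 4, ricci)
R = sp.simplify(sum(ginv[i, j]*Ric[i, j] for i in range(4) for j in range(4)))
print(R)  # expected: -12*alpha**2
```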
As noted above, the space-time we have chosen admits a non-expanding, non-twisting, and shear-free geodesic congruence. Now, we show that this space-time admits closed causal curves. For that, let us consider curves defined by ϱ=ϱ_0, z=z_0, where ϱ_0, z_0 are constants. Therefore, the space-time (<ref>) reduces to a 2D Misner-like space given by ds^2_conf-Misner=Ω_0 (-dτ dϕ-τ dϕ^2), where Ω_0=α^2 ϱ^2_0 is the conformal constant factor. The Misner space is a 2D space-time with the metric <cit.> ds^2_Misner=-2 dT dψ-T dψ^2, where -∞ <T < +∞ and the coordinate ψ is periodic, that is, ψ→ψ +2 n π with n=0,± 1,± 2,..... The curves T=const=T_0 are all closed due to the periodicity of ψ. The curves with T_0<0 are spacelike and those with T_0>0 are time-like. It then follows that all points at T_0>0 rest on closed time-like curves (CTCs), but those at T_0<0 do not. Hence, the Misner space admits closed time-like curves at an instant of time, T=const=T_0>0. In our case, for the chosen space-time (<ref>), the closed curves defined by {ϱ, ϕ, z}→{ϱ_0, ϕ+2 n π, z_0} are spacelike for τ<0 and time-like for τ>0. Hence, our four-dimensional space-time (<ref>) admits closed time-like curves at an instant of time, analogous to the two-dimensional Misner space. These closed time-like curves evolve from an initial spacelike hypersurface in a causally well-behaved manner <cit.>. This point is clear from the following discussion. The metric component g^ττ for the metric (<ref>) is given by g^ττ=4 (τ+β^2 z^2)/α^2 ϱ^2. In the constant z-plane defined by z=z_0=0, we obtain g^ττ=4 τ/α^2 ϱ^2. Thus, the hypersurface τ=const is spacelike for τ>0, since g^ττ>0, and time-like for τ<0. The curve τ=0 is null and serves as a chronology horizon, that is, the hypersurface separating the causal and non-causal parts of space-time. Thus, the Petrov type-N space-time (<ref>), which is a non-vacuum solution of the field equations, admits closed time-like curves analogous to the Misner space. Now, we study this line-element (<ref>) in the context of modified theories of gravity via Ricci-inverse gravity, by introducing an anti-curvature tensor into the Lagrangian of the system, as discussed in Refs. <cit.>. The action that describes Ricci-inverse gravity is given as S= ∫ d^4x √(-g)[(R + κ A-2 Λ)+ L_m], where g is the metric determinant, κ is the coupling constant, R=g_μν R^μν is the Ricci scalar, A=g_μν A^μν is the anti-curvature scalar, Λ is the cosmological constant, and L_m is the matter Lagrangian. Varying the action (<ref>) with respect to the metric, the field equations that describe this gravitational theory are <cit.> R^μν - 1/2 R g^μν+ Λ g^μν + M^μν = 𝒯^μν. Using the value of the Ricci scalar given in (<ref>), the last equation becomes R^μν +(6 α^2+Λ) g^μν + M^μν = 𝒯^μν, with 𝒯^μν being the standard energy-momentum tensor and the tensor M^μν defined as M^μν = -κ (A^μν+A/2 g^μν)+ κ/2 {2 g^κμ∇_ι∇_κ (A^ι_σ A^νσ)-∇^2(A^μ_ι A^νι)-g^μν∇_κ∇_ι(A^κ_σ A^ισ)}. The determinants of the metric tensor g_μν and the Ricci tensor R_μν for the space-time (<ref>) are given by det(g_μν)=-α^4 ϱ^4/4, det(R_μν)=-81 α^12 ϱ^4/4. Since the determinant of the Ricci tensor is non-zero, there exists an anti-curvature tensor A^μν defined by A^μν=(R^-1)^μν= adj(R_μν)/det(R_μν).
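Continuing the SymPy sketch above (reusing g, ginv and Ric), the determinants and the anti-curvature quantities can be checked directly; the expected values below match the expressions given next.

```python
# continuation of the previous SymPy sketch
det_g = sp.simplify(g.det())    # expected: -alpha**4*rho**4/4
det_R = sp.simplify(Ric.det())  # expected: -81*alpha**12*rho**4/4, nonzero

# anti-curvature tensor A^{mu nu}: the matrix inverse of R_{mu nu}
A_up = Ric.inv().applyfunc(sp.simplify)

# anti-curvature scalar A = g_{mu nu} A^{mu nu}
A_scalar = sp.simplify(sum(g[i, j]*A_up[i, j] for i in range(4) for j in range(4)))
print(det_g, det_R, A_scalar)   # A_scalar expected: -4/(3*alpha**2)
```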
For the space-time (<ref>), this anti-curvature tensor A^μν and its covariant form A_μν are given by A^μν=[ -4/9 α^8 ϱ^4(β+3 α^4 ϱ^2 (τ+β^2 z^2)) 0 2/3 α^4 ϱ^2 -2 β z/3 α^4 ϱ^2; 0 -ϱ^2/3 0 0; 2/3 α^4 ϱ^2 0 0 0; -2 β z/3 α^4 ϱ^2 0 0 -1/3 α^4 ϱ^2 ], A_μν=[ 0 0 ϱ^2/6 0; 0 -1/(3 α^4 ϱ^2) 0 0; ϱ^2/6 0 τ ϱ^2/3-β/9 α^4 -β z ϱ^2/3; 0 0 -β z ϱ^2/3 -ϱ^2/3 ]. At last, the anti-curvature scalar is given by A=g_μν A^μν=-4/3 α^2. Thereby, substituting the metric tensor g^μν from (<ref>), the anti-curvature tensor A^μν from (<ref>), and the anti-curvature scalar from (<ref>) into the relation (<ref>), we obtain the nonzero components of the symmetric tensor M^μν=M^νμ (<ref>) given by M^ττ = 4 κ (27 α ^4 ϱ^2 (β ^2 z^2+τ)+7 β)/27 α^8 ϱ ^4, M^τϕ =-2 κ/α ^4 ϱ ^2, M^τ z = 2 β κ z/α ^4 ϱ ^2, M^ϱϱ = κ ϱ ^2, M^zz = κ/α ^4 ϱ ^2. The nonzero components of the covariant tensor M_μν are M_τϕ =-1/2κ ϱ ^2, M_ϱϱ = κ/α ^4 ϱ ^2, M_ϕϕ = 7 β κ/27 α ^4-κ ϱ ^2 τ, M_ϕ z = β κ ϱ ^2 z, M_zz = κ ϱ ^2. The trace of the tensor M^μν is given by M=g_μν M^μν=4 κ/α^2. In order to simplify the modified field equations (<ref>), let us define J^μν = R^μν +(6 α^2+Λ) g^μν + M^μν. Then, the non-zero components of the tensor J^μν using the metric tensor (<ref>), the Ricci tensor (<ref>), and the tensor M^μν (<ref>) are given by J^ττ = 4 [27 α^4 ϱ^2 (β^2 z^2+τ) (3 α^4+α^2 Λ+κ)+(27 α ^4+7 κ) β]/27 α ^8ϱ ^4, J^τϕ =-2 (3 α ^4+α ^2Λ +κ)/α ^4ϱ ^2, J^τ z = 2β z (3α ^4+α ^2Λ +κ)/α ^4ϱ ^2, J^ϱϱ = ϱ ^2 (3α ^4+α ^2Λ +κ), J^zz = (3 α ^4+α ^2Λ +κ)/α ^4ϱ ^2. Taking J^μν into account, a possible solution will be checked for Λ with 𝒯^μν = 0 and 𝒯^μν = ρ k^μ k^ν, i.e., for a vacuum and a pure radiation field, respectively. One can see from the above set of equations (<ref>) that for the vacuum case, where 𝒯^μν = 0=J^μν, there is a solution for Λ provided the parameter β in the space-time (<ref>) is zero, that is, β=0. Otherwise, if β>0, there is no such solution for Λ. We now focus on the case 𝒯^μν≠ 0, a non-vacuum solution of the modified theory of gravity. In general relativity, we have shown that the space-time (<ref>) satisfies the field equations with a pure radiation field as matter content and a negative cosmological constant. In the Ricci-inverse gravity, we choose the same pure radiation field as matter content, whose energy-momentum tensor is given in equation (<ref>), that is, 𝒯^μν = ρ k^μ k^ν, where in the chart {τ, ϱ, ϕ, z} we defined the null vector field k_μ = (0, 0, 1, 0), and its contravariant form is k^μ = (-2/(α^2 ϱ^2), 0, 0, 0), such that the vector field satisfies the null condition k^μ k_μ=0. Now, using the relation J^μν=𝒯^μν=ρ k^μ k^ν and substituting the non-zero components (<ref>) leads to the following system of equations: ρ(-2/α ^2 ϱ^2)^2= 4 [27 α^4 ϱ^2 (β^2 z^2+τ){κ+α^2 (3 α^2+Λ)}+(27 α ^4+7 κ) β]/27 α ^8 ϱ ^4, 0 = -2 (3 α ^4+α ^2 Λ +κ)/α ^4 ϱ ^2, 0 = 2 β z (3 α ^4+α ^2 Λ +κ)/α ^4 ϱ ^2, 0 = ϱ ^2 (3 α ^4+α ^2 Λ +κ), 0 = (3 α ^4+α ^2 Λ +κ)/α ^4 ϱ ^2. The solution to the above system of equations gives us the following physical quantities: Λ→Λ_m = -3 α^2-κ/α^2, ρ→ρ_m=β (1+7κ/(27α^4)). Hence, the space-time described by the line-element (<ref>) serves as a solution in modified gravity theories, specifically in Ricci-inverse gravity. In this framework, the matter-energy content consists of a pure radiation field, characterized by a modified energy density ρ_m and a modified cosmological constant Λ_m, as given by equation (<ref>).
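As a quick symbolic check of this solution (continuing the SymPy notation above, with Lam and kappa standing for Λ and κ):

```python
# continuation of the SymPy sketch (alpha, beta, rho as before)
Lam, kappa, rho_m = sp.symbols('Lambda kappa rho_m', real=True)

# every homogeneous equation of the system reduces to 3*alpha**4 + alpha**2*Lambda + kappa = 0
Lam_sol = sp.solve(sp.Eq(3*alpha**4 + alpha**2*Lam + kappa, 0), Lam)[0]
print(Lam_sol)  # -3*alpha**2 - kappa/alpha**2, i.e. Lambda_m

# with that bracket vanishing, the (tau,tau) equation fixes the energy density
eq_tt = sp.Eq(rho_m*4/(alpha**4*rho**4),
              4*(27*alpha**4 + 7*kappa)*beta/(27*alpha**8*rho**4))
print(sp.factor(sp.solve(eq_tt, rho_m)[0]))  # beta*(27*alpha**4 + 7*kappa)/(27*alpha**4)
```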
These modifications are determined by the coupling constant κ along with the other parameters α and β. It is noteworthy that this new theory also allows for the existence of closed time-like curves, which are observed within certain regions where τ > 0. It is worth noting that the modified energy-density ρ_m automatically satisfies the energy condition for a positive coupling constant, κ>0, since β>0. It can be readily demonstrated that when the coupling constant approaches zero, i.e., κ→ 0, the results obtained revert to the original findings in general relativity, as discussed earlier as well as in Ref. <cit.>.§ CONCLUSIONS General relativity is a classical theory of gravity that has been intensively tested since it was proposed by Einstein. Although Einstein's theory has been successfully verified, it has problems in explaining some observational data. Then, based on observational motivations, alternative models of gravity have been proposed. Modified theories of gravity have been a subject of significant research interest for an extended period. Several alternative theories of gravity have been proposed by researchers over time. These include the f(R) theory, f(T) theory, f(R, T) theory, f(R, G) theory, and f(G, T) theory (for references to these theories, see <cit.>). More recently, a novel gravity theory called Ricci-inverse theory has been introduced in <cit.>. The main characteristic of this theory is that the determinant of the Ricci tensor for any space-time geometry must differ from zero. In this study, we have examined an example of a Petrov type-N pure radiation field solution in the backdrop of anti-de Sitter space. This particular solution allows for the formation of closed time-like curves in specific regions, thus serving as a model for a time machine within the framework of general relativity. Subsequently, we have taken this type-N space-time and explored it within the context of modified theories using Ricci-inverse gravity. Our findings reveal that this type-N metric is also a solution in this novel Ricci-inverse gravity theory, and thus allows the formation of closed time-like curves, similar to the previous theory. We have observed that the energy-density of the pure radiation field and the cosmological constant undergo modifications due to the coupling constant defined in equation (<ref>). It is noteworthy that, as long as the parameter β remains positive, the energy-density satisfies the null energy condition for a positive coupling constant. It is worth noting that one can explore matter content other than a pure radiation field, including scalar fields, pressureless perfect fluids, or anisotropic fluids, within the framework of this modified theory of gravity. In addition, Ricci-inverse gravity is a new theory that should be tested in different contexts. Several points have been investigated: for example, there is no Minkowskian limit for Ricci-inverse gravity; the theory cannot explain the cosmic expansion history from the radiation-dominated epoch through the matter-dominated epoch to the dark-energy-dominated epoch; and the existence of ghosts must be carefully analyzed, among others. Some studies on instability and perturbation should be considered for future investigation. Therefore, the study developed here follows this necessary line of testing a theory that is proposed as an alternative theory of gravity.
It is important to emphasize that general relativity allows solutions that present CTCs, which lead to violation of causality. Thus, verifying an exact solution of general relativity with such a characteristic in theories of modified gravity is an important test, like the others mentioned previously. The Einstein tensor of a null dust solution (or null fluid) is expressed as G^μν=Φ k^μ k^ν <cit.>, where k^μ is a null vector field, and Φ is a scalar multiplier. Introducing a stress-energy tensor in the space-time as T^μν=Φ k^μ k^ν satisfies Einstein's field equation, providing a clear physical interpretation in terms of massless radiation. Physically, a null dust solution can describe gravitational radiation or non-gravitational radiation governed by a relativistic classical field theory, such as electromagnetic radiation. Phenomena modeled by null dust solutions include (i) a beam of neutrinos, assumed to be massless and treated using classical physics, (ii) a high-frequency electromagnetic wave, and (iii) a beam of incoherent electromagnetic radiation. Petrov type-N regions are associated with transverse gravitational radiation, the type recently detected by astronomers using the LIGO and Virgo detectors <cit.>. Typically, such radiation decays on the order of 𝒪(r^-1), indicating that the long-range radiation field falls under type N. § DATA AVAILABILITY No data were generated or analyzed in this study.§ CONFLICT OF INTERESTS The author(s) declare no conflict of interest.§ ACKNOWLEDGMENTS We sincerely acknowledge the anonymous referees for their valuable remarks and suggestions. F.A. acknowledges the Inter University Centre for Astronomy and Astrophysics (IUCAA), Pune, India for granting a visiting associateship. This work by A. F. S. is partially supported by the National Council for Scientific and Technological Development - CNPq, project No. 313400/2020-2. J. C. R. S. thanks CAPES for financial support. AZP A. Z. Petrov, “https://zbmath.org/0174.28305Einstein spaces", Translated by R. F. Kelleher and Edited by J. Woodrow, Pergamon (1969). AZP2 A. Z. Petrov, “Classification of spaces defined by gravitational fields", https://doi.org/10.1023/A:1001910908054Gen. Relativ. Gravit. 32, 1665 (2000). HS H. Stephani, D. Kramer, M. A. H. MacCallum, C. Hoenselaers and E. Herlt, https://doi.org/10.1017/CBO9780511535185Exact Solutions of Einstein's Field Equations, Cambridge University Press, Cambridge (2003). JB J. B. Griffiths and J. Podolský, https://doi.org/10.1017/CBO9780511635397Exact Space-Times in Einstein's General Relativity, Cambridge University Press, Cambridge (2009). JB2 J. Bic̆ák, “Exact radiative spacetimes: some recent developments", https://doi.org/10.1002/andp.200051203-504Ann. Phys. (Leipzig) 512, 207 (2000). JB3 J. Bic̆ák, “Radiative Spacetimes", in https://doi.org/10.1007/978-88-470-2101-3_2Recent Developments in General Relativity, Genoa 2000 by M. Cianci et al. (eds.), Springer, Milano, pp. 11-23. AGD A. Garcia-Diaz and J. F. Plebanski, “All nontwisting N's with cosmological constant", https://doi.org/10.1063/1.524843J. Math. Phys. 22, 2655 (1981). JB4 J. Bic̆ák and J. Podolský, “Gravitational waves in vacuum spacetimes with cosmological constant. I. Classification and geometrical properties of nontwisting type N solutions", https://doi.org/10.1063/1.532981J. Math. Phys. 40, 4495 (1999). JB5 J. Bic̆ák and J.
Podolský, “Gravitational waves in vacuum spacetimes with cosmological constant. II. Deviation of geodesics and interpretation of nontwisting type N solutions", https://doi.org/10.1063/1.532982J. Math. Phys. 40, 4506 (1999). JB6 J. Podolský and M. Belnan, “Geodesic motion in Kundt spacetimes and the character of the envelope singularity", https://doi.org/10.1088/0264-9381/21/12/003Class. Quantum Grav. 21, 2811 (2004). JP J. Podolský and M. Ortaggio, “Explicit Kundt type II and N solutions as gravitational waves in various type D and O universes", https://doi.org/10.1088/0264-9381/20/9/307Class. Quantum Grav. 20, 1685 (2003). SBE S. B. Edgar and M. P. M. Ramos, “Obtaining a class of type N pure radiation metrics using invariant operators", http://dx.doi.org/10.1088/0264-9381/22/5/002Class. Quantum Grav. 22, 791 (2005). DS1 D. Sarma, M. Patgiri, F. U. Ahmed, “A vacuum spacetime with closed null geodesics", https://doi.org/10.1016/j.aop.2012.11.004Ann. Phys. (N.Y.) 329, 179 (2013). AOP F. Ahmed, “Type N Einstein space Time Machine spacetime", https://doi.org/10.1016/j.aop.2017.04.012Ann. Phys. (N.Y.) 382, 127 (2017). DS2 D. Sarma, M. Patgiri, F. U. Ahmed, “Pure radiation metric with stable closed timelike curves", https://doi.org/10.1007/s10714-013-1633-7Gen. Rel. Grav. 46, 1633 (2014). PTEP F. Ahmed, “A family of type N space-time with a negative cosmological constant and causality violation", https://doi.org/10.1093/ptep/pty140Prog. Theor. Exp. Phys. 2019, 013E03 (2019). refa10 F. Ahmed, “A type N radiation field solution with Λ < 0 in a curved space-time and closed time-like curves”, 10.1140/epjc/s10052-018-5880-3Eur. Phys. J. C 78, 385 (2018). XZ X. Zhang and D. Finley, “Lower order ODEs to determine new twisting type N Einstein spaces via CR geometry", https://doi.org/10.1088/0264-9381/29/6/065010Class. Quantum Grav. 29, 065010 (2012). XZ2 X. Zhang and D. Finley, “Interpretation of twisting type N vacuum solutions with cosmological constant", https://doi.org/10.1088/0264-9381/30/7/075021Class. Quantum Grav. 30, 075021 (2013). SH S. Hervik, V. Pravda and A. Pravdova, “Type III and N universal spacetimes", http://dx.doi.org/10.1088/0264-9381/31/21/215005Class. Quantum Grav. 31, 215005 (2014). SH2 S. Hervik, V. Pravda and A. Pravdova, “Type N universal spacetimes", https://doi.org/10.1088/1742-6596/600/1/012065J. Phys.: Conf. Series 600, 012065 (2015). SH3 S. Hervik, V. Pravda and A. Pravdova, “On type N and III universal spacetimes", https://doi.org/10.1142/9789813226609_0081Proceedings, 14th Marcel Grossmann Meeting on Recent Developments in Theoretical and Experimental General Relativity, Astrophysics, and Relativistic Field Theories (MG14) by M. Bianchi et al. (eds.): Rome, Italy, pp. 1179-1183 (2017). Riess A. G. Riess et al., “Observational Evidence from Supernovae for an Accelerating Universe and a Cosmological Constant”, 10.1086/300499 Astron. J. 116, 1009 (1998). Perm S. Perlmutter et al., “Measurements of Omega and Lambda from 42 high redshift supernovae”, 10.1086/307221 Astrophys. J. 517, 565 (1999). WMAP WMAP collaboration, G. Hinshaw et al., “Nine-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Cosmological Parameter Results”, 10.1088/0067-0049/208/2/19 Astrophys. J. Suppl. 208, 19 (2013). DE M. Li, X-D. Li, S. Wang and Y. Wang, “Dark Energy”, 10.1088/0253-6102/56/3/24 Commun. Theor. Phys. 56, 525 (2011). Wein S. Weinberg, “The Cosmological Constant Problem”, 10.1103/RevModPhys.61.1 Rev. Mod. Phys. 61, 1 (1989). Mod1 T. Clifton, P. G. Ferreira, A. Padilla and C.
Skordis, “Modified gravity and cosmology”, 10.1016/j.physrep.2012.01.001 Phys. Rep. 513, 1 (2012). Mod2 S. Shankaranarayanan and J. P. Johnson, “Modified theories of gravity: Why, how and what?”, 10.1007/s10714-022-02927-2 Gen. Relativ. Gravit. 54, 44 (2022). Mod3 S. Nojiri and S. D. Odintsov, “Introduction to Modified Gravity and Gravitational Alternative for Dark Energy”, 10.1142/S0219887807001928 Int. J. Geom. Meth. Mod. Phys. 4, 115 (2007). Mod4 G. J. Olmo, “Introduction to Modified Gravity: From the Cosmic Speedup Problem to Quantum Gravity Phenomenology”, 1112.2223 [gr-qc]. Mod5 P. O. Hess, “Alternatives to Einstein's General Relativity Theory”, 10.1016/j.ppnp.2020.103809 Prog. Part. Nucl. Phys. 114, 103809 (2020). RIG L. Amendola, L. Giani and G. Laverda, “Ricci-inverse gravity: a novel alternative gravity, its flaws, and how to cure them”, 10.1016/j.physletb.2020.135923Phys. Lett. B 811, 135923 (2020). Do T. Q. Do, “No-go theorem for inflation in an extended Ricci-inverse gravity model”, 10.1140/epjc/s10052-021-09974-0Eur. Phys. J. C 82, 15 (2022). Do1 T. Q. Do, “No-go theorem for inflation in Ricci-inverse gravity”, 10.1140/epjc/s10052-021-09223-4Eur. Phys. J. C 81, 431 (2021). Cosmic M. Scomparin, “Cosmic structures in Ricci-inverse theories of gravity”, 2102.04676 [gr-qc]. Dasa I. Das, J. P. Johnson and S. Shankaranarayanan, “Can we bypass no-go theorem for Ricci-inverse Gravity?”, 10.1140/epjp/s13360-022-03472-2 Eur. Phys. J. Plus 137, 1265 (2022). Bar A. Jawad and A. M. Sultan, “Analysis of baryon to entropy ratio in Ricci inverse gravity”, 10.1209/0295-5075/ac6977 EPL 138, 29001 (2022). Our J. C. R. de Souza and A. F. Santos, “An axially symmetric spacetime with causality violation in Ricci-inverse gravity”, 10.1140/epjc/s10052-023-12020-w Eur. Phys. J. C 83, 834 (2023). refa6 M. F. Shamir, M. Ahmad, G. Mustafa and A. Rashid, “Ricci inverse anisotropic stellar structures”, 10.1016/j.cjph.2022.11.011Chin. J. Phys. 81, 51 (2023). AM A. Malik, A. Shafaq, M. Koussour and Z. Yousaf, “Development of local density perturbation technique to identify cracking points in f(R, T) gravity”, https://doi.org/10.1140/epjc/s10052-023-11996-9Eur. Phys. J. C 83, 845 (2023). Ori D. Levanony and A. Ori, “Extended time-travelling objects in Misner space", https://doi.org/10.1103/PhysRevD.83.044043Phys. Rev. D 83, 044043 (2011). Ori2 A. Ori, “A Class of Time-Machine Solutions with a Compact Vacuum Core", https://doi.org/10.1103/PhysRevLett.95.021101Phys. Rev. Lett. 95, 021101 (2005). Mis C. W. Misner, “Taub-NUT Space as a Counterexample to Almost Anything", in Relativity Theory and Astrophysics I: Relativity and Cosmology, J. Ehlers (ed.), Lectures in Applied Mathematics Vol. 8, American Mathematical Society (1967). GW1 B. P. Abbott et al. (LIGO Scientific Collaboration and Virgo Collaboration), https://doi.org/10.1103/PhysRevLett.116.061102Phys. Rev. Lett. 116, 061102 (2016). GW2 B. P. Abbott et al. (LIGO Scientific Collaboration and Virgo Collaboration), https://doi.org/10.3847/2041-8205/818/2/L22Astrophys. J. Lett. 818, L22 (2016).
http://arxiv.org/abs/2312.16123v1
{ "authors": [ "F. Ahmed", "J. C. R. de Souza", "A. F. Santos" ], "categories": [ "gr-qc" ], "primary_category": "gr-qc", "published": "20231226171457", "title": "Cosmological constant Petrov type-N space-time in Ricci-inverse gravity" }
Kaletha extended local Langlands conjectures to a certain class of disconnected groups and proved them for disconnected tori. Our first main result is a reinterpretation of the local Langlands correspondence for disconnected tori. Our second, and the central objective of this paper, is to establish an automorphic multiplicity formula for disconnected tori. § INTRODUCTION Let F be a number field and 𝔸 be the adele ring of F. Let G be a connected reductive group defined over F with centre Z_G. Let [G]:= A_GG(F)\ G(𝔸) be its adelic quotient, where A_G is the identity component of the ℝ-points of the largest ℚ-split subtorus of Res_F/ℚZ_G. Then there is a right regular representation of G(𝔸) on the Hilbert space L^2([G]). In the theory of automorphic representations, it is a crucial question to understand the decomposition of the discrete part L_disc^2([G]) and the multiplicities of its irreducible constituents. The work of Labesse and Langlands (<cit.>,<cit.>) and the work of Kottwitz (<cit.>) suggest a conjectural answer for tempered automorphic representations. They expect that an admissible tempered discrete homomorphism ϕ:L_F→^LG gives rise to an adelic L-packet Π_ϕ of tempered representations of G(𝔸) and, conversely, that each tempered discrete automorphic representation belongs to some L-packet. The topological group L_F above is usually called the (hypothetical) Langlands group of F, and it should have the Weil group W_F as a quotient. Then it is expected that the multiplicity of π (which can be 0) in L_disc^2([G]) is the sum of m_π,ϕ with ϕ running over the equivalence classes of such parameters, where we have denoted by m_π,ϕ the ϕ-contribution towards the total multiplicity of π. Moreover, it is conjectured that m_π,ϕ is given by m_π,ϕ = 1/|𝒮_ϕ|∑_x∈𝒮_ϕ⟨ x, π⟩, where 𝒮_ϕ is a finite group related to the centraliser of ϕ in Ĝ, and ⟨·,·⟩:𝒮_ϕ×Π_ϕ→ℂ^× is a complex-valued pairing relying on the local Langlands conjectures. In <cit.>, after introducing the global Galois gerbes, Kaletha makes the definitions of 𝒮_ϕ and the pairing ⟨·,·⟩ precise for general connected reductive groups G. In <cit.>, Kaletha initiated an extension of the (refined) local Langlands conjectures to disconnected groups. To be precise, he treats the inner forms of “quasi-split” disconnected reductive groups, which are of the form G⋊ A, where G is a connected quasi-split reductive group and A acts on G by preserving an F-pinning. Kaletha extends the (refined) local Langlands conjectures to this setting and proves the conjectures in the case of disconnected tori. §.§ Main resultsBased on Kaletha's pioneering work on the local aspects, the principal goal of this paper is to explore the global aspects for disconnected tori and establish a multiplicity formula. It turns out that the formula we obtain in this setting takes a form similar to (<ref>). First, we introduce the disconnected tori we treat, the main players of this paper, and state the multiplicity formula on the automorphic side. Let T̃ be a quasi-split disconnected torus defined over F with identity component a torus T; that is, T̃ = T⋊ A is a semidirect product of T with some finite group A defined over F.
Given z∈ Z^1(F, T), we twist the rational structure of T̃ via z through inner automorphisms, and obtain a pure inner form T̃_z. Regarding its rational points, there is a short exact sequence 1 → T(F) →T̃_z(F) → A(F)^[z]→ 1, where A(F)^[z] is the stabiliser of [z] in A(F). Moreover, we see that A(F) acts on the set of Hecke characters of T by conjugation. When χ is a Hecke character of T, we denote the stabiliser of χ in A(F)^[z] by A(F)^[z],χ. To ease the exposition, in this introduction, we assume that T is anisotropic. Under this assumption, [T̃_z] := T̃_z(F)\T̃_z(𝔸) is compact. Let η be an irreducible constituent of L^2([T̃_z]). By an argument involving Clifford theory, the multiplicity of η in L^2([T̃_z]) can be determined by eventually passing η to a finite group and calculating its character. To be precise, we have m_η = ∑_χ1/|A(F)^[z],χ|∑_a∈ A(F)^[z],χtrη̅|_A(F)^[z],χ(a), where χ runs over the T̃_z(F)-orbits (or equivalently, A^[z]-orbits) of Hecke characters of T. Before entering the dual side, we first review the local aspects. In the pioneering work <cit.>, Kaletha extends the (refined) local Langlands conjectures to the disconnected setting and proves the conjectures in the case of disconnected tori. Now, we briefly recall the local Langlands correspondence for disconnected tori in terms of pure inner forms. Let v be a place of F; then, after a base change, T̃ is a quasi-split disconnected torus over F_v, and one can consider the local pure inner form T̃_z_v. It is noteworthy that the rational points of T̃_z can be related to certain 1-hypercocycles in an obvious manner. Let ϕ_v: W_F_v→^LT be a local L-parameter for T. By suitably enlarging the L-group, one can extend the group of self-equivalences of ϕ_v from S_ϕ_v to S̃_ϕ_v, and define Irr(π_0(S̃_ϕ_v), [z_v]) as the set of irreducible representations of π_0(S̃_ϕ_v) whose restrictions to π_0(S_ϕ_v) contain [z_v] (which is a character of π_0(S_ϕ_v) in Kottwitz's sense). On the other hand, by the local Langlands correspondence for tori, ϕ_v gives rise to a character χ_v of T(F_v). One can define Irr(T̃_z(F_v), χ_v) as the set of irreducible representations of T̃_z(F_v) whose restrictions to T(F_v) contain χ_v. Now the local Langlands correspondence for disconnected tori asserts that there is a natural bijection between Irr(T̃_z(F_v), χ_v) and Irr(π_0(S̃_ϕ_v), [z_v]) which satisfies the character identities. Kaletha constructs the LLC and verifies the character identities. One of the main ingredients in his construction is the Tate-Nakayama pairing (or duality) for hypercohomology introduced in <cit.>, which generalises the (classical) Tate-Nakayama duality and the local Langlands correspondence for (connected) tori simultaneously. After passing both the group side and the dual side to two extensions of A^[ϕ_v],[z_v] by ℂ^×, respectively, Kaletha shows that there is a canonical isomorphism between them, which induces the desired bijection. The slight drawback of this approach is that the isomorphism is given in a less transparent manner, due to the need to choose sections for the extensions. The first main result of this work is an intrinsic reinterpretation of Kaletha's LLC for disconnected tori. To be precise, we are able to define a simple relation (<ref>) that characterises the 1-1 correspondence between Irr(T̃_z(F_v), χ_v) and Irr(π_0(S̃_ϕ_v), [z_v]) intrinsically, since there is no choice of “coordinate systems” involved in this approach to the construction. Now we return to the global aspect.
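Before doing so, we note that both the multiplicity formula above and the conjectural formula from the introduction are averages of character values over finite groups. The following toy computation (entirely hypothetical, with S_3 standing in for the finite groups appearing here) illustrates this averaging pattern in the simplest case:

```latex
% Toy illustration (ours, hypothetical) of the averaging pattern: for the
% finite group S_3 acting on \mathbb{C}^3 by permutation matrices, the
% multiplicity of the trivial character is
\[
  \frac{1}{|S_3|}\sum_{g\in S_3}\operatorname{tr}(g)
  = \frac{1}{6}\Bigl(\underbrace{3}_{e}
    + \underbrace{3\cdot 1}_{\text{transpositions}}
    + \underbrace{2\cdot 0}_{\text{3-cycles}}\Bigr) = 1,
\]
% so the permutation representation contains the trivial one exactly once.
```

In the body of the paper, the same pattern appears with the finite group A(F)^[z],χ in place of S_3 and the pairing ⟨a,η⟩ in place of tr(g).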
Let ϕ: W_F→^LT be a global L-parameter and let χ = ⊗_vχ_v be the Hecke character of T given by the global Langlands correspondence. One can define the adelic L-packet Π_ϕ associated to ϕ. Essentially, it consists of irreducible representations of T̃_z(𝔸) that are unramified almost everywhere and whose local component at a place v contains χ_v. Then we define a complex-valued pairing (<ref>) ⟨·,·⟩: A(F)^[z],χ×Π_ϕ→ℂ given by ⟨ a,η⟩ := ∏_v⟨ (ϕ_v^-1, a^-1(s_v^-1)), (z_v^-1, t)⟩^-1_TN·tr[ι_v(η̅_v)(s_v,a)], where (s_v,a) ∈S̃_ϕ_v^[z_v] and (t,a)∈T̃_z(F)^χ are arbitrarily chosen. Although the local data appearing in the above pairing can be chosen with great flexibility, we show that the pairing is well defined and independent of these choices. At this point, a key input is our reinterpretation of the LLC for disconnected tori, which enables us to find that this pairing has its counterpart on the automorphic side. In fact, we have ⟨ a, η⟩ = tr[η̅|_A(F)^[z],χ(a)]. After a comparison with the (actual) multiplicity obtained from the automorphic side, we reach the desired multiplicity formula m_η = ∑_[[ϕ]]1/|A(F)^[z],χ|∑_a∈ A(F)^[z],χ⟨ a,η⟩, where [[ϕ]] runs over the near A^[z]-equivalence classes of global L-parameters, which are in 1-1 correspondence with the A^[z]-orbits (or equivalently T̃_z(F)-orbits) of Hecke characters of T. §.§ Structure of the paper In Chapter 2, we review the (local and global) Tate-Nakayama duality for hypercohomology introduced in <cit.>. We aim to explicitly elucidate the isomorphisms on the chain level. In Chapter 3, we set conventions (following <cit.>) on the disconnected groups considered in this work, and we give some examples of disconnected tori. Chapter 4 and Chapter 5 are devoted to the local aspects. In Chapter 4, we closely follow <cit.> and review the local Langlands correspondence for disconnected tori in terms of pure inner forms. In Chapter 5, we give a new construction of the LLC, which we eventually prove to coincide with Kaletha's construction. The rest of the chapters are focused on establishing the multiplicity formula. In Chapter 6, we lay the necessary foundation for the discussion of global aspects. In Chapter 7, we aim to calculate the multiplicity on the automorphic side (which means the actual multiplicity of an automorphic representation). For this purpose, we first extract a dense smooth subspace 𝒜(T̃_z) from the L^2-space. After a preliminary decomposition, we are able to break the calculation of the multiplicity into a sum of χ-contributions, with χ running over the T̃_z(F)-orbits of Hecke characters. Eventually, the multiplicity can be calculated through some arguments in Clifford theory. In Chapter 8, we enter the dual side. After defining the pairing ⟨·,·⟩: A(F)^[z],χ×Π_ϕ→ℂ, we apply our reinterpretation of the LLC to conclude that the pairing has an incarnation on the automorphic side. Finally, the multiplicity formula is established after a straightforward comparison. Acknowledgements. The author is grateful to Wee Teck Gan for his invaluable insights, as well as for his guidance and proof-reading. The author also appreciates Tasho Kaletha for his interest in this project and helpful suggestions. The author gratefully acknowledges the support of an MOE Graduate Research Scholarship. § THE TATE-NAKAYAMA DUALITY FOR HYPERCOHOMOLOGY In this section, we review the local and global duality between the hypercohomology of the Galois group and that of the Weil group. These results are due to Kottwitz-Shelstad <cit.>.
We will see that the duality is able to encompass the LLC for (connected) tori and the Tate-Nakayama duality simultaneously. We refer the reader to Appendix <ref> for the notion and basic properties of group hypercohomology, Appendix <ref> for the (local and global) Langlands correspondences for connected tori, and Appendix <ref> for the (local and global) Tate-Nakayama dualities. §.§ Convention Whenever we speak of hypercohomology of profinite groups in a (complex of) discrete module(s), we always refer to the continuous version. Whenever we consider (hyper-)cochains/cocycles/cohomology of the Weil group W_F or relative Weil groups W_K/F, we always assume continuity, unless otherwise stated or indicated as the abstract version by the subscript "abs". In contrast, in this work, all the (hyper-)chains/cycles/homology of (relative) Weil groups are always in the abstract sense. §.§ Local Tate-Nakayama Duality for Hypercohomology Let F be a p-adic field. Let T and U be F-tori with cocharacter groups X and Y. Let f:T → U be a morphism defined over F and f_*:X→ Y be the map between cocharacter groups induced by f. We aim to define a pairing between H^1(W_F, Û→T̂) and H^1(F, T→ U). For convenience, we fix a finite Galois extension K of F such that both T and U split over K. Throughout this chapter, once and for all, we fix a section s: Gal(K/F)→ W_K/F with s(1) = 1, so that we have maps ϕ and ψ on the chain level with explicit formulae (<ref>) and (<ref>), respectively. We will eventually show that the choice of section s has no effect on the (co)homology level (see Proposition <ref>). §.§.§ Step 1 We quickly remind the reader that, whenever talking about (hyper)homology of (relative) Weil groups, we ignore the topology and regard them as abstract groups. First, we note that the image of the differential ∂: C_1(W_K/F,X) → C_0(W_K/F,X) lies in C_0(W_K/F,X)_0, the subgroup of norm-0 elements in C_0(W_K/F,X) = X. Indeed, given x∈ C_1(W_K/F,X), we have N_K/F(∂ x) = ∑_σ∈Gal(K/F)σ(∂ x) = ∑_σ∈Gal(K/F)σ[ ∑_w∈ W_K/F(w^-1x_w-x_w)] = ∑_w∈ W_K/F[∑_σ∈Gal(K/F)(σ w^-1x_w-σ x_w)]. Now for each fixed w ∈ W_K/F, the inner sum vanishes, and it follows that ∂ x has norm 0. Next, we define a modified (norm-0) hyperhomology group H_0(W_K/F, X → Y)_0 as a subgroup of H_0(W_K/F, X → Y). Given the complex X →^f_* Y, we consider the following complex used in defining group hyperhomology: ⋯→ C_1(W_K/F,X)⊕ C_2(W_K/F,Y) →^α C_0(W_K/F,X)⊕ C_1(W_K/F,Y) →^β C_0(W_K/F,Y), where α(x,y) = (∂ x, f_*(x)-∂ y) and β(x,y) = f_*(x) - ∂ y. We recall from Appendix <ref> that H_0(W_K/F, X → Y) is defined as the quotient ker(β)/im(α). We define (kerβ)_0 to be the subgroup of pairs (x,y) in kerβ with x ∈ C_0(W_K/F,X)_0. According to the observation made above, we have imα⊆ (kerβ)_0. Now we define the modified hyperhomology group H_0(W_K/F, X → Y)_0 := (kerβ)_0/ imα. Analogous to Fact <ref>, we have an exact sequence H_1(W_K/F,X) → H_1(W_K/F,Y)→ H_0(W_K/F, X → Y)_0→ H_0(W_K/F,X)_0→ H_0(W_K/F,Y)_0. The main goal of Step 1 is to define a natural isomorphism H_0(W_K/F, X → Y)_0 ≅ H^1(Gal(K/F), T(K) → U(K)). For this purpose, we recall from (<ref>) and (<ref>) that we have defined ϕ: C_1(W_K/F, X) → C^0(K/F, T) and ψ: C_0(W_K/F, X)_0→ Z^1(K/F, T). The key observation is that ϕ and ψ sit in the following diagram and make it commute:

C_2(X) →^∂ C_1(X) →^∂ C_0(X)_0
  ↓            ↓ ϕ          ↓ ψ
  0    →    C^0(T)  →^∂  Z^1(T)

For brevity, we dropped the groups concerned in each term in the above diagram, but remind the reader that the chains are of W_K/F, while the cochains and cocycles are of Gal(K/F). The diagram (<ref>) commutes.
We recall that, by definition, ϕ = ∼_D∘Res. The restriction is a chain map (see (<ref>)), thus we have ϕ∘∂ = ∼_D∘Res∘∂ = ∼_D∘∂∘Res. Now we notice that the map ∼_D (on the chain level), ∼_D: C_1(K^×,X) → T(K), x ↦∏_a∈ K^×x_a(a)^-1, is trivial on 1-boundaries, which implies ∼_D∘∂ = 0. Thus, the commutativity of the first square follows. Now we check the commutativity of the second square. Let x ∈ C_1(W_K/F,X). On the one hand, we find that ψ∘∂(x) := ψ(∑_w(w^-1x_w-x_w)) is an element in Z^1(K/F,T) sending ρ∈Gal(K/F) to ∏_σ∈Gal(K/F)⟨ρσ(∑_w(w^-1x_w-x_w)) , a_ρ,σ⟩. We let w = as(τ) and alter the domain over which the sum is taken accordingly. Then the above expression becomes ∏_σ,τ,a⟨ρσ(τ^-1x_as(τ)-x_as(τ)) , a_ρ,σ⟩ = ∏_σ,τ,a⟨ρστ^-1(x_as(τ)) , a_ρ,σ⟩∏_σ,τ,a⟨ρσ(x_as(τ)) , a_ρ,σ⟩^-1. On the other hand, through an elementary computation involving certain changes of variables, we find that ∂∘ϕ(x) ∈ Z^1(K/F,T) sends ρ∈Gal(K/F) to ∏_σ,τ, a⟨σ(x_as(τ)), a_σ,τσ(a)⟩ρ(∏_σ,τ, a⟨σ(x_as(τ)), a^-1_σ,τσ(a)^-1⟩) = ∏_σ,τ, a⟨σ(x_as(τ)), a_σ,τ⟩∏_σ,τ, a⟨ρσ(x_as(τ)), ρ(a^-1_σ,τ)⟩ = ∏_σ,τ, a⟨σ(x_as(τ)), a_σ,τ⟩∏_σ,τ, a⟨ρσ(x_as(τ)), a_ρσ,τ^-1a_ρ,στa_ρ,σ^-1⟩ = ∏_σ,τ, a⟨ρσ(x_as(τ)), a_ρ,στa_ρ,σ^-1⟩, where we have used the 2-cocycle relation ρ(a_σ,τ)^-1 = a_ρσ,τ^-1a_ρ,στa_ρ,σ^-1 in the second to last equality. Finally, we can see immediately that (<ref>) coincides with (<ref>), after a change of variable. Hence the commutativity of the second square follows. We consider the following diagram, whose first row is a modification of (<ref>) (with 0th-chains replaced by their norm-0 subgroups) and whose second row mixes cochains and cocycles of Gal(K/F):

C_1(X)⊕C_2(Y) →^α C_0(X)_0⊕C_1(Y) →^β C_0(Y)_0
   ↓ ϕ⊕0              ↓ ψ⊕ϕ              ↓ ψ
C^0(T)⊕0   →^γ   Z^1(T)⊕C^0(U)   →^δ   Z^1(U)

where γ(t) = (∂ t, f(t)) and δ(t,u) = f(t)-∂ u; hence the cohomology of the second row at Z^1(T)⊕ C^0(U) is nothing but H^1(K/F, T→ U). The vertical maps are indicated in the diagram. Using the commutativity of the diagram (<ref>), one can immediately check: The diagram (<ref>) commutes. At this point, the commutativity of diagram (<ref>) immediately suggests that the middle vertical map ψ⊕ϕ actually passes to (co)homology: ℋ: H_0(W_K/F, X → Y)_0→ H^1(Gal(K/F), T(K) → U(K)). We quickly check the well-definedness of ℋ: ℋ does not depend on the choice of section s: Gal(K/F)→ W_K/F. Suppose s' is another section, a' is the corresponding 2-cocycle given by (<ref>), and ϕ' and ψ' are the chain-level maps (<ref>) and (<ref>) defined in terms of s' and a'. Let (x, y) ∈ (kerβ)_0. We wish to show that (ψ(x),ϕ(y)) and (ψ'(x),ϕ'(y)) differ by a 1-hypercoboundary. We consider the element t := ∏_σ∈Gal(K/F)⟨σ(x), s'(σ)s(σ)^-1⟩ in T(K). Then elementary computations show that, for any ρ∈Gal(K/F), we have t^-1ρ(t) = ∏_σ⟨ρσ(x),s(ρσ)s'(ρσ)^-1ρ(s'(σ)s(σ)^-1) ⟩ = ∏_σ⟨ρσ(x),s(ρσ)s'(ρσ)^-1s'(ρ)s'(σ)s(σ)^-1s'(ρ)^-1⟩ = ∏_σ⟨ρσ(x), a'_ρ,σa_ρ,σ^-1⟩∏_σ⟨ρσ(x), s(ρ)s'(ρ)^-1⟩ = ψ'ψ^-1(x)(ρ). We note that we have used the fact that x has trivial norm in the last equality above. Since (x, y) lies in kerβ, we have f_*(x) = ∂ y. Using this and some elementary manipulations, one can find that f(t) = ∏_σ,τ,a⟨στ^-1(y_as(τ))-σ(y_as(τ)),s'(σ)s(σ)^-1⟩ = ∏_σ,τ,a⟨σ(y_as(τ)), s'(στ)s(στ)^-1s(σ)s'(σ)^-1⟩ coincides with ϕ'(y)ϕ(y)^-1 = ∏_σ,τ,a⟨σ(y_as'(τ)),a'_σ,τ^-1σ(a)^-1⟩∏_σ,τ,a⟨σ(y_as(τ)),a_σ,τ^-1σ(a)^-1⟩^-1 = ∏_σ,τ,a⟨σ y_as(τ), a'_σ,τ^-1a_σ,τσ(s'(τ)s(τ)^-1)⟩ = ∏_σ,τ,a⟨σ y_as(τ), a'_σ,τ^-1a_σ,τ s'(σ)s'(τ)s(τ)^-1s'(σ)^-1⟩. It is clear that the difference between (ψ(x),ϕ(y)) and (ψ'(x),ϕ'(y)) is a 1-hypercoboundary.
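Before proceeding, it may help to record the two degenerate cases of this construction (they can be read off from the five-lemma diagram appearing in the next proof): if U = 1 (so Y = 0), then

H_0(W_K/F, X → Y)_0 = C_0(X)_0/∂ C_1(X)

and ℋ reduces to the Tate-Nakayama isomorphism induced by ψ, while if T = 1 (so X = 0), then H_0(W_K/F, X → Y)_0 = H_1(W_K/F, Y) and ℋ reduces to the isomorphism induced by ϕ. This is the precise sense in which the duality encompasses both classical ingredients at once.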
One can further show: The natural map ℋ: H_0(W_K/F, X → Y)_0→ H^1(Gal(K/F), T(K) → U(K)) is an isomorphism. We consider the following diagram, whose rows are the long exact sequences associated to hyper(co)homology (see <ref>):

H_1(X) → H_1(Y) → H_0(X→Y)_0 → H_0(X)_0 → H_0(Y)_0
  ↓ 𝒟       ↓ 𝒟        ↓ ℋ           ↓ TN        ↓ TN
H^0(T) → H^0(U) → H^1(T→U)  →  H^1(T)  →  H^1(U)

In the above diagram, 𝒟 is the key isomorphism (<ref>) induced by ϕ in Deligne's convention, TN is the Tate-Nakayama isomorphism (<ref>) induced by ψ, and ℋ is the natural map induced by ψ⊕ϕ. We can see that the diagram (<ref>) commutes. Indeed, on the one hand, we recall from <ref> that the two arrows in the middle of each row are induced by an inclusion and a projection on the chain level. Hence the commutativity of the two squares in the middle is clear. On the other hand, the first and the last squares commute due to the functoriality of 𝒟 and the Tate-Nakayama isomorphism. It follows that ℋ is an isomorphism by the five lemma. §.§.§ Step 2 In this part, we will proceed as in Section <ref> to produce a pairing between certain hypercohomology and hyperhomology groups. Let B_∙ be the bar resolution of ℤ. Then the defining complex of H_0(W_K/F, X→ Y): ⋯→ C_1(X)⊕ C_2(Y) → C_0(X)⊕ C_1(Y) → C_0(Y) is nothing but ⋯→ B_1⊗_ℤW_K/FX ⊕ B_2⊗_ℤW_K/FY → B_0⊗_ℤW_K/FX ⊕ B_1⊗_ℤW_K/FY → B_0⊗_ℤW_K/FY, which we denote by 𝒫_∙. Similar to the argument in Section <ref>, we note that Hom(B_i⊗_ℤW_K/FX, ℂ^×) = Hom_ℤW_K/F(B_i, Hom(X,ℂ^×)), which also holds with X replaced by Y. We note that ℂ^× is an injective abelian group, hence after applying the functor Hom(-,ℂ^×) to the complex 𝒫_∙ (<ref>) and taking cohomology, we obtain Hom(H_∙(𝒫_∙),ℂ^×) = H^∙(Hom(𝒫_∙, ℂ^×)). Hence we have Hom(H_∙(W_K/F,X),ℂ^×) = H^∙_abs(W_K/F, T̂); the right-hand side is the abstract cohomology group, ignoring the topology on W_K/F and on Hom(X, ℂ^×) ≅T̂. Similarly, for hyper(co)homology we have a canonical isomorphism Hom(H_0(W_K/F,X → Y),ℂ^×) ≅ H^1_abs(W_K/F, Û→T̂). Explicitly, let (x, y_w) be a 0-hypercycle in H_0(W_K/F,X→ Y), i.e. f_*(x) = ∂ y_w = ∑_w (w^-1y_w-y_w), and let (u_w,t) be a 1-hypercocycle with u_w∈ Z^1_abs(W_K/F,Û) and t ∈T̂ such that f̂(u_w) = ∂ t = t^-1w(t). The pairing between (x,y_w) and (u_w,t) is ⟨(x,y_w), (u_w,t)⟩ = ⟨ x, t ⟩∏_w ∈ W_K/F⟨ y(w), u(w) ⟩^-1. We need to point out a subtlety here. On the one hand, we have regarded X→ Y as a complex concentrated in degrees 0 and -1. Hence the cohomology of the dual complex Hom(H_∙(𝒫_∙),ℂ^×) actually gives the hypercohomology of Û→T̂ with Û and T̂ placed in degrees -1 and 0, respectively. However, whenever we write H^1_abs(W_K/F, Û→T̂), we are always placing Û and T̂ in degrees 0 and 1. Hence, the inverse we add in (<ref>) is due to this discrepancy. So far, we have obtained a pairing between H_0(W_K/F,X → Y) and H^1_abs(W_K/F, Û→T̂). We recall from (<ref>) that we defined a subgroup H_0(W_K/F,X → Y)_0 of H_0(W_K/F,X → Y). And we can restrict the pairing to the subgroups H_0(W_K/F,X → Y)_0 ⊆ H_0(W_K/F,X → Y) and H^1(W_K/F, Û→T̂)⊆ H^1_abs(W_K/F, Û→T̂). Here, we recall that H^1(W_K/F, Û→T̂) denotes the continuous cohomology group. Furthermore, in view of the isomorphism H_0(W_K/F,X → Y)_0≅ H^1(K/F, T → U) we established in Step 1, we have obtained a pairing H^1(K/F, T → U) × H^1(W_K/F, Û→T̂) →ℂ^×. Suppose K' ⊇ K is another Galois extension of F. Then again, we have a pairing with K replaced by K' above.
More precisely, we have the following diagram:

H^1(K/F, T → U)   ×   H^1(W_K/F, Û→T̂)   →  ℂ^×
      ↓ inf                   ↓ inf
H^1(K'/F, T → U)  ×   H^1(W_K'/F, Û→T̂)  →  ℂ^×

The pairing (<ref>) is compatible with inflations. The desired compatibility follows from the compatibility of

H_0(W_K/F, X → Y)_0    ×   H^1(W_K/F, Û→T̂)   →  ℂ^×
      ↑ def                    ↓ inf
H_0(W_K'/F, X → Y)_0   ×   H^1(W_K'/F, Û→T̂)  →  ℂ^×

and the commutativity of

H^1(K/F, T→ U)   ←   H_0(W_K/F, X → Y)_0
      ↓ inf                 ↑ def
H^1(K'/F, T→ U)  ←   H_0(W_K'/F, X → Y)_0

The compatibility in (<ref>) is obvious, since the deflation between homology is dual to the inflation between cohomology. The commutativity of the diagram (<ref>) can be shown in the same manner as the proof of Proposition <ref>. Details can be found on pp. 136-137 of <cit.>. Therefore, after passing to limits, we have the desired functorial pairing H^1(F, T → U) × H^1(W_F, Û→T̂) →ℂ^×. The pairing (<ref>) is compatible with the Langlands pairing (<ref>) (in Langlands' convention) and the Tate-Nakayama pairing (<ref>). Precisely speaking, if we let the long exact sequences on the group side and the dual side pair with each other,

⋯ → H^0(F, U) →^i H^1(F, T→ U) →^p H^1(F, T) → ⋯
⋯ → H^0(W_F, T̂) →^î H^1(W_F, Û→T̂) →^p̂ H^1(W_F, Û) → ⋯

(where H^0(F,U) pairs with H^1(W_F,Û), H^1(F, T→ U) with H^1(W_F, Û→T̂), and H^1(F,T) with H^0(W_F,T̂)), then we have ⟨ i(u), ẑ⟩ = ⟨ u, p̂(ẑ) ⟩ and ⟨ z, î(t̂) ⟩ = ⟨ p(z), t̂⟩, for each u∈ H^0(F,U), ẑ∈ H^1(W_F,Û→T̂), z∈ H^1(F, T → U) and t̂∈ H^0(W_F,T̂). Both compatibilities follow from the commutativity of Diagram (<ref>) and the definition of the pairing (<ref>). Compatibility with the Tate-Nakayama pairing is immediate. As for compatibility with the LLC in Langlands' convention, we first notice that ℋ is compatible with 𝒟, which differs from ℒ by a sign. Then we compare the pairing (<ref>) with the pairing (<ref>), and find that they differ by a sign as well. Therefore, the two minus signs cancel and the desired compatibility follows. §.§.§ Step 3 In this final step, we take continuity into account. But first we need to endow H^1(F, T→ U) with a topology. Recall that we have the long exact sequence ⋯→ T(F) → U(F) →^i H^1(F, T→ U) → H^1(F,T) → H^1(F, U) →⋯. Then we topologise H^1(F, T→ U) by stipulating that i: U(F) → H^1(F, T→ U) is a continuous open map. We quickly note that i induces an isomorphism of topological groups between the quotient group U(F)/f(T(F)) and the image of i. Since H^1(F, T) is finite, i(U(F)) is an open subgroup of finite index. In particular, a character of H^1(F, T→ U) is continuous if and only if it is continuous on the image i(U(F)). According to Fact <ref>, the pairing (<ref>) is indeed continuous on i(U(F)), giving us a map H^1(W_F, Û→T̂) →Hom_cts(H^1(F, T→ U),ℂ^×). In virtue of Proposition <ref>, the following diagram commutes:

H^1(U)'  →  H^1(T)'  →  H^1(F, T→ U)'  →  U(F)'  →  T(F)'
   ↑             ↑               ↑                ↑           ↑
Û^W   →   T̂^W   →^î   H^1(Û→T̂)   →   H^1(Û)  →  H^1(T̂)

In the above diagram, the first row consists of the (hyper)cohomology groups of Gal_F to which Hom_cts(-,ℂ^×) has been applied, written with the superscript ' for brevity, while the second row consists of continuous (hyper)cohomology groups of W_F. The two vertical maps to the right are given by the LLC for tori, and the two vertical maps to the left are induced by the Tate-Nakayama pairing (<ref>). The isomorphism (<ref>) implies that the two vertical maps to the left are surjective, with kernels (Û^Γ)^∘ and (T̂^Γ)^∘, respectively.
We define the quotient H^1(W_F, Û→T̂)_red := H^1(W_F, Û→T̂)/î((T̂^Γ)^∘), and modify the above commutative diagram to

H^1(U)'  →  H^1(T)'  →  H^1(F, T→ U)'  →  U(F)'  →  T(F)'
   ↑             ↑                ↑               ↑           ↑
π_0(Û^W) → π_0(T̂^W) → H^1(Û→T̂)_red → H^1(Û) → H^1(T̂)

By the five lemma, the vertical map in the middle must be an isomorphism, because the other four are so. We have thus shown: The pairing (<ref>) induces a functorial isomorphism H^1(W_F, Û→T̂)_red≅Hom_cts(H^1(F, T→ U),ℂ^×). §.§ A useful result For later convenience, we record an interesting result. This will play an important role in our reinterpretation of the local Langlands correspondence for disconnected tori (Theorem <ref>) and its proof (especially the proof of Lemma <ref>). Let T, U and V be F-tori, and consider F-morphisms f and g: T →^f U →^g V. Let f̂ and ĝ be the morphisms between the dual tori induced by f and g, respectively: T̂ ←^f̂ Û ←^ĝ V̂. We consider the natural map induced by g, g_*: H^1(F, T→ U) → H^1(F, T→ V), sending the class of (z,t) to that of (z, g(t)), and the natural map induced by f̂, f̂_*: H^1(W_F, V̂→ Û) → H^1(W_F, V̂→ T̂), sending the class of (ϕ, s) to that of (ϕ, f̂(s)). Then the Tate-Nakayama pairing between the image of g_* and the image of f̂_* vanishes: ⟨ g_*(z,t), f̂_*(ϕ,s) ⟩ = 1. We go back to the definition of the Tate-Nakayama pairing. First we fix a finite Galois extension K/F such that T, U and V split over K. Under the isomorphism, we let (λ, μ) ∈ Z_0(W_K/F, X→ Y)_0 correspond to (z,t). Then (λ, g_*(μ_w)) corresponds to (z, g(t)). Now the Tate-Nakayama pairing reads as ⟨ g_*(z,t), f̂_*(ϕ,s) ⟩ = ⟨λ, f̂(s)⟩∏_w⟨ g_*(μ(w)), ϕ(w)⟩^-1. Since (λ, μ_w) ∈ Z_0(W_K/F, X→ Y)_0 and (ϕ, s)∈ Z^1(W_K/F, V̂→ Û), we have f_*(λ) = ∑_w(w^-1μ(w)-μ(w)), and ĝ(ϕ(w)) = s^-1w(s), for each w ∈ W_K/F. Now we can plug (<ref>) and (<ref>) into (<ref>): ⟨ g_*(z,t), f̂_*(ϕ,s) ⟩ = ⟨ f_*(λ), s⟩∏_w⟨μ(w), ĝ(ϕ(w))⟩^-1 = ⟨∑_w(w^-1μ(w)-μ(w)), s⟩∏_w⟨μ(w), s^-1w(s)⟩^-1 = ∏_w⟨ w^-1μ(w), s⟩·∏_w⟨μ(w),s^-1⟩·∏_w⟨μ(w), sw(s)^-1⟩ = ∏_w⟨μ(w), w(s)⟩·∏_w⟨μ(w),s^-1⟩·∏_w⟨μ(w), sw(s)^-1⟩ = 1. §.§ Global Tate-Nakayama Duality for Hypercohomology In this section, we turn to the global Tate-Nakayama duality for hypercohomology. As with its local analogue, this combines the global Langlands correspondence and the global Tate-Nakayama duality. Since the constructions are carried out in the same way as in the local case, details are omitted. Let F be a number field, and let C_F = F^×\𝔸_F^× be its idele class group. Idele class groups will play the same role as multiplicative groups have played in the local setting. Let W_F be the global Weil group of F. Let T and U be tori defined over F with cocharacter groups X and Y, respectively, and let f:T→ U be an F-morphism. We fix a finite Galois extension K of F over which both T and U split. First, we introduce some Galois hypercohomology groups: H^i(F, T→ U) := H^i(F, T(F̅)→ U(F̅)), H^i(𝔸, T→ U) := H^i(F, T(𝔸̅)→ U(𝔸̅)), H^i(𝔸/F, T→ U) := H^i(F, T(𝔸̅)/T(F̅)→ U(𝔸̅)/U(F̅)). It is the last group that will appear in the duality, and it can be topologised in the same manner as in Step 3 of Section <ref> (see also Kottwitz-Shelstad <cit.> for details). We note an elementary long exact sequence involving all three groups above: ⋯→ H^i(F, T→ U)→ H^i(𝔸, T→ U) → H^i(𝔸/F, T→ U) →⋯. On the dual side, we can define the continuous hypercohomology H^1(W_F, Û→T̂) and its reduced version H^1(W_F, Û→T̂)_red as in the local setting.
Now we state the global Tate-Nakayama duality for hypercohomology: There is a natural functorial isomorphism H^1(W_F, Û→T̂)_red≅Hom_cts(H^1(𝔸/F, T→ U), ℂ^×) that is compatible with the global Langlands correspondence (or more precisely, the isomorphism (<ref>)) and the global Tate-Nakayama pairing (<ref>). Moreover, this isomorphism is compatible with the local Tate-Nakayama duality for hypercohomology, in the sense that the following diagram commutes:

H^1(W_F, Û→T̂)_red          →^∼   Hom_cts(H^1(𝔸/F, T→ U),ℂ^×)
        ↓                                        ↓
∏_v H^1(W_F_v, Û→T̂)_red   →^∼   ∏_v Hom_cts(H^1(F_v, T→ U),ℂ^×)

The construction is the same as in the local case, and so is the compatibility with the GLC and the Tate-Nakayama pairing. Meanwhile, the local-global compatibility can be readily checked from the construction. § DISCONNECTED REDUCTIVE GROUPS In this chapter, we closely follow <cit.> and introduce a certain class of disconnected groups, for which we reserve the term "disconnected reductive groups" in the rest of this work. Due to our emphasis on disconnected tori, we will present some rank-1 examples. §.§ Convention Let F be a field of characteristic 0 with absolute Galois group Γ = Gal(F̅/F). In this work, we say an affine algebraic group G̃ is a disconnected reductive group if it satisfies the following conditions: * There is an F̅-isomorphism G̃→ G ⋊ A, where G is a connected reductive group and A is a (nontrivial) finite group. * The action of A on G preserves some fixed F̅-pinning of G. The normaliser of the diagonal torus in SL_2 does not split as a semidirect product (even over F̅). Indeed, there does not exist any element of order two in the nonidentity component of the F̅-points. §.§ Classification In this part, we will recall from <cit.> the classification of disconnected reductive groups. It is a well-known fact that each connected reductive group has a unique split form, and moreover has a unique quasi-split inner form. The same notions can be extended to the disconnected setting, although there turns out to be an additional type, the "translation form", in the terminology of <cit.>. We start by extending the notions of "quasi-split" and "split" groups: We call G̃ a split disconnected reductive group, if there is an F-isomorphism G̃→ G⋊ A, where G is a split connected reductive group, and A is a (nontrivial) constant group scheme acting on G by preserving some F-pinning of it. We call G̃ a quasi-split disconnected reductive group, if there is an F-isomorphism G̃→ G⋊ A, where G is a quasi-split connected reductive group, and A is a (not necessarily constant) finite group scheme acting on G (the action is defined over F) and preserving some F-pinning of it. We note that split disconnected reductive groups can be classified by the root datum of G and the action of A on the root datum. It can be immediately seen from the definitions that every disconnected reductive group is a form of a unique split disconnected reductive group. Now, to obtain a classification of all disconnected reductive groups, it suffices to classify all the forms of a given split disconnected reductive group G⋊ A. Equivalently, it suffices to understand the Galois cohomology H^1(Γ, Aut_F̅(G ⋊ A)). This leads us first to a close examination of the automorphism group Aut_F̅(G⋊ A). From now on, we omit the subscript F̅ for brevity. We highlight three subgroups of Aut(G ⋊ A): * G/Z(G)^A, the group of inner automorphisms. * Z^1(A, Z(G)), the group of translation automorphisms.
Each 1-cocycle z ∈ Z^1(A, Z(G)) induces an automorphism (g,a) ↦ (z(a)g,a). * Aut_pin(G ⋊ A), the group of pinned automorphisms over F̅, by which we mean the automorphisms preserving the pinning of G and the subgroup 1 ⋊ A. We note that G/Z(G)^A intersects nontrivially with Z^1(A, Z(G)). The intersection is Z(G)/Z(G)^A (as a subgroup of G/Z(G)^A) or, equivalently, B^1(A, Z(G)) (as a subgroup of Z^1(A, Z(G))). Now, one can characterise the structure of Aut(G⋊ A) (see <cit.>): There is a semidirect product decomposition: Aut(G ⋊ A) = (G/Z(G)^A· Z^1(A, Z(G))) ⋊Aut_pin(G ⋊ A). In view of this result, we resume discussing the classification of disconnected reductive groups. Let G⋊ A be a split disconnected reductive group. One observes that twisting the rational structure of G⋊ A by a cocycle in Z^1(Γ, Aut_pin(G⋊ A)) yields a quasi-split disconnected reductive group G̃. And it is also clear that each quasi-split disconnected reductive group arises in this way. Then, one can twist the rational structure on the quasi-split group G̃ by a 1-cocycle z ∈ Z^1(Γ, G/Z(G)^A) to obtain G̃_z, an inner form of the quasi-split group. And furthermore, twisting G̃_z by some z'∈ Z^1(Γ, Z^1(A,Z(G))) yields a translation form (G̃_z)_z' of the inner form G̃_z. Now Fact <ref> implies that each disconnected reductive group can be obtained in this manner. To put it concisely, each disconnected reductive group is a translation form of an inner form of some quasi-split disconnected reductive group. §.§ Examples of some simple disconnected tori In this work, the objects that we will focus on are disconnected tori. As the name suggests, a disconnected torus is a group that has a torus as its identity component and satisfies Convention <ref>. In order to provide the reader with a glimpse into disconnected tori and their rational points, we present instances of the most elementary (nontrivial) case, when the identity component is 𝔾_m and the component group is ℤ/2ℤ. In this case, there are two split forms: 𝔾_m×ℤ/2ℤ and 𝔾_m⋊ℤ/2ℤ. We classify their forms and also exhibit the F-rational points of all the forms. §.§.§ Forms of 𝔾_m×ℤ/2ℤ Let T̃ = 𝔾_m×ℤ/2ℤ. We write ℤ/2ℤ = {1,-1}. The automorphism group of 𝔾_m×ℤ/2ℤ can be worked out as Aut(T̃) = Z^1(ℤ/2ℤ,F̅^×) ⋊Aut_pin(𝔾_m×ℤ/2ℤ) ≅ℤ/2ℤ×ℤ/2ℤ. We write the nontrivial element in the first (resp. second) ℤ/2ℤ as μ (resp. ω). As an automorphism of T̃, μ fixes the identity component pointwise and sends any (x,-1) in the non-identity component to (-x,-1). And ω is the automorphism sending (x,±1) to (x^-1,±1). Clearly, the Galois action on Aut(T̃) is trivial. Thus, the forms of 𝔾_m×ℤ/2ℤ are classified by the Galois cohomology H^1(Γ,ℤ/2ℤ×ℤ/2ℤ) = Z^1(Γ,ℤ/2ℤ×ℤ/2ℤ) = Hom_cts(Γ,ℤ/2ℤ×ℤ/2ℤ). Any nontrivial cocycle z ∈ Z^1(Γ,ℤ/2ℤ×ℤ/2ℤ) falls into one of the four categories described below, and we denote by T̃_z the group obtained from twisting T̃ by z. * Case 1: z factors through a quadratic extension E/F and maps onto the first ℤ/2ℤ-factor: Γ↠Gal(E/F) ≅{1,μ}. Let Gal(E/F) = {1,σ}. Then after passing to E, the Galois action twisted by z (which we denote by adding a subscript z) is given by σ_z(x,1) = (σ(x),1) and σ_z(x,-1) = (-σ(x),-1). Then the group of rational points is T̃_z(F) = {(x,1)|x ∈ F^×}∪{(x,-1)|x ∈ E^×, σ(x) = -x}. * Case 2: z factors through a quadratic extension E/F and maps onto the second ℤ/2ℤ-factor: Γ↠Gal(E/F) ≅{1,ω}. Let Gal(E/F) = {1,σ}. Then after passing to E, the Galois action twisted by z is given by σ_z(x,±1) = (σ(x)^-1,±1).
Thus we have T̃_z(F) = {(x,±1)|x∈ E^×, Nm_E/F(x) = 1}. * Case 3: z factors through a quadratic extension E/F and maps into ℤ/2ℤ×ℤ/2ℤ diagonally: Γ↠Gal(E/F) ≅{1,μω}. Let Gal(E/F) = {1,σ}. Then after passing to E, the Galois action twisted by z is given by σ_z(x,1) = (σ(x)^-1, 1) and σ_z(x,-1) = (-σ(x)^-1, -1). So we have T̃_z(F) = {(x,1)|x∈ E^×, Nm_E/F(x) = 1}∪{(x,-1)|x∈ E^×, x+σ(x)^-1 = 0}. * Case 4: z factors through a biquadratic extension K/F and maps isomorphically to ℤ/2ℤ×ℤ/2ℤ: Γ↠Gal(K/F) ≅⟨μ⟩×⟨ω⟩. We write Gal(K/F) = ⟨σ⟩×⟨τ⟩ and assume σ↦μ and τ↦ω. After passing to K, the twisted Galois action thus obtained is σ_z(x,1) = (σ(x),1), σ_z(x,-1) = (-σ(x),-1), and τ_z (x,±1) = (τ(x)^-1,±1). Then we have T̃_z(K^σ) = {(x,1)|x ∈ (K^σ)^×}∪{(x,-1)|x ∈ K^×, σ(x) = -x}, T̃_z(K^τ) = {(x,±1)|x∈ K^×, Nm_K/K^τ(x) = 1}, T̃_z(F) = {(x,1)|x ∈ (K^σ)^×, Nm_K^σ/F(x) = 1}∪{(x,-1)|x ∈ K^×, σ(x) = -x, Nm_K/K^τ(x) = 1}. §.§.§ Forms of 𝔾_m⋊ℤ/2ℤ Let T̃ = 𝔾_m⋊ℤ/2ℤ. The semidirect product is given by the inverting action of ℤ/2ℤ on 𝔾_m. An elementary calculation shows that the inner automorphisms and translation automorphisms coincide over F̅. For convenience, we consider them as translation automorphisms, and one can check Aut(T̃) = Z^1(ℤ/2ℤ,F̅^×) ⋊Aut_pin(𝔾_m⋊ℤ/2ℤ) ≅F̅^×⋊ℤ/2ℤ, in which ℤ/2ℤ acts on F̅^× by inversion. For y ∈F̅^×, we denote by μ_y the automorphism that fixes 𝔾_m and sends (x,-1)↦ (xy,-1). And we denote the nontrivial element in ℤ/2ℤ by ω. Again, ω is the automorphism sending (x,±1)↦ (x^-1,±1). One can further check that the Galois group Γ acts on Aut(T̃) = F̅^×⋊ℤ/2ℤ by σ(y, ±1) = (σ(y), ±1). The forms of 𝔾_m⋊ℤ/2ℤ are classified by the Galois cohomology H^1(Γ, Aut(T̃)) = H^1(Γ, F̅^×⋊ℤ/2ℤ). To compute the Galois cohomology H^1(Γ,F̅^×⋊ℤ/2ℤ), we consider the following long exact sequence ⋯→ℤ/2ℤ→ H^1(Γ, F̅^×) → H^1(Γ,F̅^×⋊ℤ/2ℤ) → H^1(Γ,ℤ/2ℤ) → 0, where the surjectivity of the last map follows from the existence of a splitting due to the semidirect product. In view of Hilbert 90, we have H^1(Γ, F̅^×) = 0, and hence we conclude that the fibre over the trivial element in H^1(F,ℤ/2ℤ) is a singleton, which is nothing but the split form. Now, it suffices to understand the fibre ℱ_E of any nontrivial element [E] ∈ H^1(F,ℤ/2ℤ) (which corresponds to a quadratic extension E/F). To this end, we consider the cocycle z^[E]: Γ↠Gal(E/F) = {1,σ}↪F̅^×⋊ℤ/2ℤ sending σ to ω. Clearly, we have [z^[E]]∈ℱ_E. * Case 1: After passing to E, the Galois action twisted by z^[E] is given by σ_z^[E](x, ±1) = (σ(x)^-1, ±1). Hence we have T̃_z^[E](F) = {(x,±1)|x∈ E^×, Nm_E/F(x) = 1}. It remains to investigate whether the fibre ℱ_E contains any element other than [z^[E]]. From now on, we fix the quadratic extension E/F. If we twist the Galois action on the short exact sequence by z^[E] (abbreviated as z below) and take the long exact sequence, then we obtain ⋯→ (F̅^×⋊ℤ/2ℤ)_z^Γ→ℤ/2ℤ→ H^1(Γ, F̅^×_z) → H^1(Γ,(F̅^×⋊ℤ/2ℤ)_z) → H^1(Γ,ℤ/2ℤ). We note that there are identifications ℱ_E ≅ker(H^1(Γ,(F̅^×⋊ℤ/2ℤ)_z) → H^1(Γ,ℤ/2ℤ)) ≅ H^1(Γ,F̅_z^×)/ (ℤ/2ℤ). More generally speaking, given a short exact sequence of (not necessarily abelian) G-modules 0 → A → B → C → 0, the kernel of the map H^1(G,B) → H^1(G,C) (in the associated long exact sequence) is in 1-1 correspondence with the orbit space H^1(G,A)/C^G, where the action of C^G on H^1(G,A) is given by: c· z(σ) := bz(σ)σ(b)^-1 (b is any lift of c). When the G-modules are abelian, this recovers the usual quotient. See <cit.> for more details on nonabelian cohomology. The action of ℤ/2ℤ on H^1(Γ,F̅_z^×) is trivial.
Due to the exactness at ℤ/2ℤ, it suffices to show the surjectivity of the projection (F̅^×⋊ℤ/2ℤ)_z^Γ→ℤ/2ℤ. It is clear that (1,-1) lies in (F̅^×⋊ℤ/2ℤ)_z^Γ according to the definition of z. So the surjectivity follows. Therefore, ℱ_E is in bijection with H^1(Γ,F̅_z^×). It is clear from the construction that F̅_z^× (with the Galois action twisted by z) coincides with the F̅-points of the norm torus Res_E/F^1𝔾_m determined by the quadratic extension E/F. We have H^1(Γ, F̅_z^×) ≅ H^1(E/F, E^×_z) ≅ F^×/NmE^×. Explicitly, the isomorphism is given by sending a cocycle c ∈ Z^1(E/F, E_z^×) to c(σ), where σ∈Gal(E/F) is the nontrivial element. Since |F^×/NmE^×| = 2, we may arbitrarily fix some y ∈ F^×\NmE^×. Then y determines [z'] ∈ H^1(Γ,(F̅^×⋊ℤ/2ℤ)_z), where we set z'(σ) = (y,1) after passing to the quotient Gal(E/F). After composing z' with z, we are able to obtain a 1-cocycle in the original (untwisted) sense. Let z”∈ Z^1(Γ,F̅^×⋊ℤ/2ℤ) be defined by (after passing to E) z”(σ) = z'(σ)z(σ) = (y,ω). Then T̃_z” is the other isomorphism class associated to E: * Case 2: After passing to E, the Galois action twisted by z” is given by σ_z”(x, 1) = (σ(x)^-1, 1) and σ_z”(x, -1) = (yσ(x)^-1, -1). And the F-points are given by T̃_z”(F) = {(x,1)|x∈ E^×, Nm_E/F(x) = 1}. We note that, for (x, -1) to be an F-point, xσ(x) = y must hold. However, y is chosen from the complement of Nm_E/FE^×. Therefore, there are no rational points on the non-identity component. §.§ Rational points on inner forms of quasi-split disconnected groups In this work, we focus on inner forms of quasi-split groups and do not treat translation forms (that do not fall into the former category). Let G̃ = G⋊ A be a quasi-split disconnected reductive group. Although the semidirect product is defined over F and A is not necessarily constant as a finite group scheme, we will still abbreviate A(F̅) as A. Let z̅∈ Z^1(Γ, G/Z(G)^A). We obtain G̃_z̅ by twisting the rational structure of G̃ via z̅. After this twisting process, there is still a short exact sequence of Γ-groups: 1 → G_z̅(F̅) →G̃_z̅(F̅) → A → 1, where the twisted Galois action on G̃_z̅(F̅) is given by σ_z̅(g,a) = z̅(σ)(σ(g),σ(a)) z̅(σ)^-1 = (z̅(σ)σ(g)σ(a)[z̅(σ)^-1], σ(a)), for σ∈Γ, while the Galois action on A is unchanged. After taking Γ-fixed points, we have 1 → G_z̅(F) →G̃_z̅(F) → A(F). The last projection is not always surjective. In fact, one can see that (g,a) ∈G̃_z̅(F) if and only if a ∈ A(F) and z̅(σ)σ(g)a(z̅(σ)^-1) = g for all σ∈Γ. We define A(F)^[z̅] as the subgroup of A(F) comprising elements a for which there exists some g ∈ G(F̅) such that (<ref>) is satisfied. Then we have a short exact sequence 1 → G_z̅(F) →G̃_z̅(F) → A(F)^[z̅]→ 1. § THE LLC FOR DISCONNECTED TORI IN TERMS OF PURE INNER FORMS Let F be a p-adic field. §.§ Pure inner forms We start with a quasi-split disconnected torus T̃ = T⋊ A defined over F. To be precise, T is a (not necessarily split) torus over F, A is a (not necessarily constant) finite group scheme defined over F acting on T, and the action of A on T is also defined over F. Let z ∈ Z^1(Γ, T). Under the natural map Z^1(Γ, T) → Z^1(Γ,T/T^A), z is brought to a 1-cocycle z̅ taking values in the group of inner automorphisms T/T^A.
We twist the rational structure of T̃ by z (or more precisely, by z̅) and obtain an inner form T̃_z, which we call a pure inner form. According to the discussion made in Section <ref>, any element (t,a)∈T̃_z(F) satisfies z(σ)σ(t)a(z(σ)^-1) = t for all σ∈Γ. If we rewrite (<ref>) as t^-1z(σ)σ(t) = a(z(σ)), then we see that, for any given a∈ A(F), there exists some (t,a)∈T̃_z(F) if and only if a· z is cohomologous to z. We denote by A(F)^[z] the stabiliser of the cohomology class [z] in A(F), and obtain the short exact sequence 1 → T(F) →T̃_z(F) → A(F)^[z]→ 1. §.§ Local Langlands correspondence We fix a pure inner form T̃_z. Now we set out to lay the foundation needed to state the LLC precisely in this disconnected setting. §.§.§ Dual side We start by recalling some notions on the dual side from Section <ref>. The L-group of the (connected) torus T is defined to be the semi-direct product ^LT = T̂⋊ W_F. An L-parameter ϕ: W_F→^LT is a continuous morphism such that the projection of ϕ(w) to W_F is w. Two L-parameters are said to be equivalent if they are conjugate under T̂. Moreover, we remind the reader that we have identified the set of L-parameters with the set of continuous 1-cocycles Z^1(W_F, T̂), and the equivalence classes of L-parameters with the continuous cohomology group H^1(W_F,T̂). By abuse of notation, ϕ might refer to either an L-parameter or a continuous 1-cocycle, and [ϕ] might refer to one of the following three: (i) an equivalence class of L-parameters, (ii) an element in H^1(W_F,T̂), and (iii) the character of T(F) determined by ϕ under the LLC. In the disconnected setting, we will consider the same L-parameters as above, and to avoid unnecessary confusion, we retain the notations ϕ and [ϕ] which were introduced in Chapter <ref> and recalled above. Now we notice that the action of A on T gives rise to a natural action of A on X_*(T) given by ⟨ a· x , y⟩ = a⟨ x, y⟩, where x∈ X_*(T) and y ∈𝔾_m. Since X_*(T) can be identified canonically with X^*(T̂), the above action further gives rise to an action of A on T̂ = Hom(X^*(T̂),ℂ^×) = Hom(X_*(T),ℂ^×) by (a· t)(x) = t(a^-1· x) for t∈T̂ and x∈ X_*(T). Then we can enlarge the dual group by considering the semi-direct product T̂⋊ A. In fact, it is more convenient to consider a smaller group T̂⋊ A(F)^[z]. Based on this, we introduce the "enlarged L-group" (T̂⋊ A(F)^[z])⋊ W_F, where the action of W_F on T̂⋊ A(F)^[z] is self-evident. We note that we can view an L-parameter ϕ as taking values in (T̂⋊ A(F)^[z])⋊ W_F, after composing it with the natural embedding ^LT ↪ (T̂⋊ A(F)^[z])⋊ W_F. Next, in view of the disconnected setting, we need to introduce a weaker equivalence relation between the L-parameters. Recall that we have fixed a disconnected pure inner form T̃_z. We say that two L-parameters ϕ_1, ϕ_2: W_F→^LT are A(F)^[z]-equivalent, if there exists some (t,a)∈T̂⋊ A(F)^[z] such that (t,a)ϕ_1(w)(t,a)^-1 = ϕ_2(w) for all w∈ W_F. Clearly, the A(F)^[z]-equivalence classes are in 1-1 correspondence with the orbit space H^1(W_F,T̂)/A(F)^[z]. From now on, we work with a fixed L-parameter ϕ. First, we note that the centraliser of ϕ in T̂ is nothing but the Galois-fixed points of T̂: S_ϕ := Cent(ϕ, T̂) = T̂^W_F = T̂^Γ. We denote the centraliser of ϕ in T̂⋊ A(F)^[z] by S̃_ϕ^[z] := Cent(ϕ, T̂⋊ A(F)^[z]). Then we have the following short exact sequence 1 → S_ϕ→S̃_ϕ^[z]→ A(F)^[ϕ],[z]→ 1, where A(F)^[ϕ],[z] is defined to be the subgroup of A(F) comprising elements that fix both [ϕ] and [z]. Indeed, any (s,a) ∈S̃^[z]_ϕ⊂T̂⋊ A(F)^[z] should satisfy (s,a)ϕ(w) = ϕ(w)(s,a) for each w ∈ W_F.
If we write ϕ(w) = (ϕ_0(w),w), then we have (s· a(ϕ_0(w)), a) = (ϕ_0(w)w(s), w(a)). This implies that a·ϕ_0 is cohomologous to ϕ_0, and hence S̃^[z]_ϕ maps onto A(F)^[ϕ],[z] with kernel T̂^W_F. After taking component groups, we further obtain 1 →π_0(S_ϕ) →π_0(S̃^[z]_ϕ) → A(F)^[ϕ],[z]→ 1. We recall Kottwitz's isomorphism H^1(Γ, T) ≅π_0(T̂^Γ)^*, and find that [z] ∈ H^1(Γ,T) can be regarded as a character of π_0(S_ϕ) = π_0(T̂^Γ). In <cit.>, Kaletha has pointed out the following observation: Suppose ρ is an irreducible representation of π_0(S̃_ϕ^[z]). Then the restriction of ρ to π_0(S_ϕ) either does not contain [z], or is [z]-isotypic. Suppose the restriction of ρ to π_0(S_ϕ) contains [z]. Then by Frobenius reciprocity, we have ρ↪Ind_π_0(S_ϕ)^π_0(S̃_ϕ^[z]) [z]. According to Mackey's Theorem, Res(Ind_π_0(S_ϕ)^π_0(S̃_ϕ^[z]) [z]) is a direct sum of π_0(S̃_ϕ^[z])-conjugates of [z]. Since the conjugation action of π_0(S̃_ϕ^[z]) on [z] is trivial, Res(Ind_π_0(S_ϕ)^π_0(S̃_ϕ^[z]) [z]) is [z]-isotypic. We define Irr(π_0(S̃_ϕ^[z]),[z]) := {ρ∈Irr(π_0(S̃_ϕ^[z]))| Res_π_0(S_ϕ)^π_0(S̃_ϕ^[z])ρ↩ [z]}. We remark that one can also consider the larger group S̃_ϕ := Cent(ϕ, T̂⋊ A), which gives rise to short exact sequences similar to (<ref>) and (<ref>), and we can define Irr(π_0(S̃_ϕ),[z]) analogously. In fact, one can see that π_0(S̃_ϕ^[z]) is the subgroup of π_0(S̃_ϕ) fixing [z] when acting on Irr(π_0(S_ϕ)) by conjugation. As pointed out in <cit.>, one can consider either group, thanks to the following canonical bijection: Induction gives a bijection from Irr(π_0(S̃_ϕ^[z]),[z]) to Irr(π_0(S̃_ϕ),[z]). We first show that given ρ∈Irr(π_0(S̃_ϕ^[z]),[z]), Indρ is irreducible. By Mackey's irreducibility criterion, it reduces to showing that, for any x ∈π_0(S̃_ϕ)\π_0(S̃_ϕ^[z]), we have Hom_π_0(S̃_ϕ^[z])(ρ, Ind_xπ_0(S̃_ϕ^[z])x^-1∩π_0(S̃_ϕ^[z])^π_0(S̃_ϕ^[z])^xρ) = 0. By Frobenius reciprocity, equivalently, we need to show Hom_xπ_0(S̃_ϕ^[z])x^-1∩π_0(S̃_ϕ^[z])(ρ, ^xρ) = 0. This holds because the following (stronger) assertion holds: Hom_π_0(T̂^Γ)(ρ, ^xρ) = 0. Indeed, Claim <ref> guarantees that the restriction of ρ to π_0(T̂^Γ) is [z]-isotypic, while the restriction of ^xρ to π_0(T̂^Γ) is x[z]-isotypic. Our choice of x ensures x[z] ≠ [z]. Hence there is no nontrivial π_0(T̂^Γ)-intertwining map between them. Now the well-definedness of the map is clear. For any ρ̃∈Irr(π_0(S̃_ϕ),[z]), there exists ρ∈Irr(π_0(S̃_ϕ^[z]),[z]) such that ρ is contained in the restriction of ρ̃. Since Indρ is guaranteed to be irreducible, Indρ is forced to coincide with ρ̃. Hence the map is surjective. Suppose ρ_1, ρ_2∈Irr(π_0(S̃_ϕ^[z]),[z]) satisfy Indρ_1 = Indρ_2 = ρ̃; then by Frobenius reciprocity, both ρ_1 and ρ_2 are contained in Resρ̃. According to Mackey's Theorem, Resρ̃ = ResIndρ_1 is a direct sum of ρ_1 and some g-conjugates with g ∈π_0(S̃_ϕ)\π_0(S̃_ϕ^[z]). These g-conjugates cannot lie in Irr(π_0(S̃_ϕ^[z]),[z]) at all, hence ρ_2 coincides with ρ_1. Injectivity is now clear.
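To illustrate these sets in the simplest possible situation, consider the following toy example (a hypothetical configuration, not tied to any particular parameter): suppose π_0(S_ϕ) = ⟨ s⟩≅ℤ/2ℤ, A(F)^[ϕ],[z] = ⟨ a⟩≅ℤ/2ℤ, and the extension (<ref>) splits with trivial conjugation action, so that π_0(S̃_ϕ^[z]) ≅ℤ/2ℤ×ℤ/2ℤ. If [z] is the nontrivial character of ⟨ s⟩, then

Irr(π_0(S̃_ϕ^[z]),[z]) = {ρ_+, ρ_-}, ρ_±(s) = -1, ρ_±(a) = ±1,

i.e. exactly the two characters of π_0(S̃_ϕ^[z]) extending [z]; all members are one-dimensional here because the group is abelian.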
§.§.§ Group side We continue with the setting where a pure inner form T̃_z and an L-parameter ϕ: W_F→^LT are fixed. We define the L-packet associated with ϕ and T̃_z to be Π_ϕ,z = Irr(T̃_z(F),[ϕ]) := {η∈Irr(T̃_z(F))| Res_T(F)^T̃_z(F)η↩ [ϕ]}. Using an easy induction argument, one can see that Π_ϕ,z is a finite set. Moreover, given two L-parameters ϕ_1 and ϕ_2, their associated L-packets Π_ϕ_1, z and Π_ϕ_2, z have nonempty intersection if and only if they are A(F)^[z]-equivalent (in the sense of Definition <ref>), in which case we actually have Π_ϕ_1, z = Π_ϕ_2, z. We let T̃_z(F)^[ϕ] be the subgroup of T̃_z(F) fixing the character [ϕ]: T(F)→ℂ^× when acting on T(F) by conjugation. T̃_z(F)^[ϕ] sits in the following short exact sequence: 1 → T(F) →T̃_z(F)^[ϕ]→ A(F)^[ϕ],[z]→ 1. We can consider the set Irr(T̃_z(F)^[ϕ],[ϕ]). Since T(F) is a normal subgroup of T̃_z(F) (or T̃_z(F)^[ϕ]) with finite index, any irreducible representation η of T̃_z(F) (or T̃_z(F)^[ϕ]) must be finite-dimensional. Moreover, we have the following claims, similar to those on the dual side, with essentially the same proofs, which we omit: Suppose η is an irreducible representation of T̃_z(F)^[ϕ]. Then the restriction of η to T(F) either does not contain [ϕ], or is [ϕ]-isotypic. Induction gives a bijection from Irr(T̃_z(F)^[ϕ], [ϕ]) to Irr(T̃_z(F), [ϕ]). Due to the last claim, we can identify Π_ϕ,z = Irr(T̃_z(F),[ϕ]) with Irr(T̃_z(F)^[ϕ],[ϕ]), and work with the latter for convenience. §.§.§ Local Langlands Correspondence for disconnected tori In <cit.>, Kaletha formulates the conjectural local Langlands correspondence for general disconnected reductive groups in the framework of pure inner forms. In this work, we refrain from delving into the general setting or introducing the character identities (<cit.>). In <cit.>, Kaletha has constructed the LLC for disconnected tori: There is a natural bijection Π_ϕ, z⟷Irr(π_0(S̃_ϕ^[z]), [z]) such that the character identities hold. In view of the observations following Definition <ref>, we have Irr(T̃_z(F)) = ∐_[ϕ] ∈ H^1(W_F,T̂)/A(F)^[z]Π_ϕ, z⟷∐_[ϕ] ∈ H^1(W_F,T̂)/A(F)^[z]Irr(π_0(S̃_ϕ^[z]), [z]). § CONSTRUCTION OF LLC FOR DISCONNECTED TORI Instead of replicating Kaletha's construction of the LLC for disconnected tori in <cit.>, we present another construction in this chapter. For this purpose, we first need to relate elements in T̃_z(F)^[ϕ] and S̃_ϕ^[z] to their corresponding 1-hypercocycles, so that we can appeal to the Tate-Nakayama pairing we have discussed earlier. We fix an L-parameter ϕ: W_F→^LT and a 1-cocycle z ∈ Z^1(Γ, T). §.§ Comparison with hypercohomology groups: group side We recall that, as a subgroup of T(F̅)⋊ A, T̃_z(F)^[ϕ] consists of points (t,a) that satisfy * a ∈ A(F)^[ϕ],[z], and * z(σ)σ(t)a(z(σ)^-1) = t for any σ∈Γ. If we fix a ∈ A^[ϕ],[z] and consider the complex of length two T(F̅) →^1-a T(F̅), sending t to ta(t)^-1, then we find that z(σ)σ(t)a(z(σ)^-1) = t holds for any σ∈Γ if and only if (z^-1,t) ∈ Z^1(Γ, T →^1-a T). This observation will be used crucially later. In fact, one can push the comparison between the rational points and the hypercohomology groups further, and eventually obtain an isomorphism of twisted spaces between certain fibres of them over a∈ A (or a^-1). The remaining discussions of this section will not be used elsewhere in this work, but we include them for their own interest. First, we briefly recall the notion of twisted spaces and refer the reader to <cit.> for more details. The definition we introduce below is equivalent to, but simpler (at least in this specific scenario) than, the one in loc.
cit. Let G be a group and let L be a space on which G acts from both the left and the right. We write both actions multiplicatively: (x,δ)↦ xδ and (δ, x)↦δ x, for x ∈ G and δ∈ L. We say that L is a G-twisted space if * L becomes a G-torsor under both the left and right actions, and * the two actions commute: (xδ)y = x(δ y) for x, y ∈ G and δ∈ L. We resume our discussion of the comparison. Fix a ∈ A(F)^[ϕ],[z]. We recall from (<ref>) that we have the following short exact sequence 1 → T(F) →T̃_z(F)^[ϕ]→ A(F)^[ϕ],[z]→ 1. We consider (1-a)T(F)\T̃_z(F)^[ϕ], the space of right (1-a)T(F)-cosets in T̃_z(F)^[ϕ]. Then the projection of T̃_z(F)^[ϕ] onto A(F)^[ϕ],[z] gives rise to the natural map q_z: (1-a)T(F)\T̃_z(F)^[ϕ]→ A(F)^[ϕ],[z]. Now we define the left (resp. right) action of T(F)/(1-a)T(F) on q_z^-1(a) to be the one given by left (resp. right) multiplication in the group T̃_z(F)^[ϕ]. One can easily see that q_z^-1(a) hence becomes a T(F)/(1-a)T(F)-twisted space. On the other hand, we recall that the hypercohomology group H^1(F, T →^1-a T) lies in the following long exact sequence: ⋯→ T(F) →^1-a T(F) → H^1(F, T →^1-a T) → H^1(F,T) →^1-a H^1(F,T) →⋯. After truncations, the hypercohomology group in the middle lies in the following short exact sequence: 1 →T(F)/(1-a)T(F)→ H^1(F, T →^1-a T) →^p_a H^1(F,T)[1-a]→ 1, where H^1(F,T)[1-a] denotes the kernel of 1-a, and p_a denotes the projection. Again, we let the left (resp. right) action of T(F)/(1-a)T(F) on p_a^-1([z]^-1) be the left (resp. right) multiplication. Then we see that p_a^-1([z]^-1) becomes a T(F)/(1-a)T(F)-twisted space in this way. We define a natural map Φ: q_z^-1(a)→ p_a^-1([z]^-1), [(t,a)]↦ [(z^-1,t)]. We have seen from the defining relations of T̃_z(F)^[ϕ] and Z^1(F, T →^1-a T) that (t,a) ∈T̃_z(F)^[ϕ] if and only if (z^-1,t)∈ Z^1(F, T →^1-a T). And we note that (0,(1-a)t_0)∈ B^1(F, T →^1-a T) holds if and only if t_0∈ T(F), so that Φ is well-defined and bijective. After checking the equivariance of Φ with respect to both the left and the right action (the proof of which we leave to the reader), we reach the conclusion that Φ is an isomorphism: The natural map Φ is an isomorphism of T(F)/(1-a)T(F)-twisted spaces. §.§ Comparison with hypercohomology groups: dual side Under the same setting (with an L-parameter ϕ, z ∈ Z^1(Γ,T) and a ∈ A(F)^[ϕ],[z] fixed), we make the comparison on the dual side in this section. We recall that (s,a) ∈S̃_ϕ^[z] if and only if ϕ(w)w(s)a(ϕ(w)^-1) = s for any w ∈ W_F. This holds if and only if (ϕ^-1, s) is a continuous 1-hypercocycle lying in Z^1(W_F, T̂→^1-a T̂). Furthermore, as on the group side, we can upgrade this comparison to an isomorphism of twisted spaces between certain fibres. The projection of π_0(S̃_ϕ^[z]) onto A(F)^[ϕ],[z] gives rise to the map q'_ϕ: (1-a)π_0(T̂^Γ)\π_0(S̃_ϕ^[z]) → A(F)^[ϕ],[z]. On the other hand, by truncating the long exact sequence associated to hypercohomology, we obtain 1 →π_0(T̂^Γ)/(1-a)π_0(T̂^Γ)→ H^1(W_F, T̂→^1-a T̂)_red →^p'_a H^1(W_F,T̂)[1-a] → 1. Analogous to the scenario concerning the group side, we can endow both q'_ϕ^-1(a) and p'_a^-1([ϕ]^-1) with the structure of π_0(T̂^Γ)/(1-a)π_0(T̂^Γ)-twisted spaces. We define Ψ: q'_ϕ^-1(a)→ p'_a^-1([ϕ]^-1), [(s,a)]↦ [(ϕ^-1,s)], and after verifying its well-definedness, we can show: The natural map Ψ is an isomorphism of π_0(T̂^Γ)/(1-a)π_0(T̂^Γ)-twisted spaces. §.§ The local Langlands correspondence for disconnected tori The goal of this section is to describe our reinterpretation of the LLC for disconnected tori.
In view of the canonical bijection (see Claim <ref>) given by induction between Π_ϕ,z and Irr(T̃_z(F)^[ϕ],[ϕ]), it suffices to give a 1-1 correspondence Irr(T̃_z(F)^[ϕ],[ϕ]) ⟷Irr(π_0(S̃_ϕ^[z]),[z]). Two representations η∈Irr(T̃_z(F)^[ϕ]) and ρ∈Irr(π_0(S̃_ϕ^[z])) are said to be related if the following conditions hold: * dimη = dimρ, so that we can write (η, V) and (ρ,V) for some common underlying space V, and moreover * after conjugating η or ρ (or both) by an automorphism of V (i.e. replacing them by equivalent representations on V) if needed, we have the following identity in GL(V): ρ(s,a)η(t,a^-1) = ⟨ (ϕ^-1,s), (z^-1,t)⟩_TN^-1, for each (s,a) ∈π_0(S̃_ϕ^[z]) and each (t,a^-1) ∈T̃_z(F)^[ϕ]. The right-hand side is understood as a scalar in GL(V), given by the Tate-Nakayama pairing between the groups H^1(W_F, T̂→T̂)_red and H^1(F, T→ T). In the definition's second condition, the need to replace both η and ρ can always be reduced to merely replacing any one of them, since the right-hand side of the relation (<ref>) is a scalar. The rest of this section aims to show: Relatedness defined above restricts to a 1-1 correspondence between Irr(T̃_z(F)^[ϕ],[ϕ]) and Irr(π_0(S̃_ϕ^[z]),[z]). The theorem follows from the following three lemmas. For each η∈Irr(T̃_z(F)^[ϕ]), there is at most one ρ∈Irr(π_0(S̃_ϕ^[z])) related to it. Conversely, for each ρ∈Irr(π_0(S̃_ϕ^[z])), there is at most one η∈Irr(T̃_z(F)^[ϕ]) related to it. Let η∈Irr(T̃_z(F)^[ϕ]). In view of Remark <ref>, without loss of generality, we may fix η as a homomorphism η: T̃_z(F)^[ϕ]→ GL(V) and only allow the replacement of ρ. We note that there is a unique map ρ: π_0(S̃_ϕ^[z]) → GL(V) which satisfies the relation (<ref>). Therefore, there is at most one ρ∈Irr(π_0(S̃_ϕ^[z])) related to η. The converse can be shown similarly. Given η∈Irr(T̃_z(F)^[ϕ], [ϕ]), there exists some ρ∈Irr(π_0(S̃_ϕ^[z])) related to it. Conversely, given ρ∈Irr(π_0(S̃_ϕ^[z]), [z]), there exists some η∈Irr(T̃_z(F)^[ϕ]) related to it. The proof we present here is modeled after the proof of <cit.>. However, in our setting, we have greater flexibility, so the calculation can be significantly simplified. Let η:T̃_z(F)^[ϕ]→GL(V) and ρ: π_0(S̃_ϕ^[z])→GL(V) be two maps satisfying the relation (<ref>) for each (s,a) ∈π_0(S̃_ϕ^[z]) and each (t,a^-1) ∈T̃_z(F)^[ϕ]. We will show that η is a homomorphism if and only if ρ is a homomorphism. For arbitrarily fixed (s_1,a_1),(s_2,a_2) ∈π_0(S̃_ϕ^[z]) and (t_1,a_1^-1),(t_2,a_2^-1) ∈T̃_z(F)^[ϕ], we have (s_1a_1(s_2),a_1a_2) ∈π_0(S̃_ϕ^[z]) and (t_2a_2^-1(t_1), a_2^-1a_1^-1) ∈T̃_z(F)^[ϕ]. Applying the relation (<ref>) to these elements, we obtain three equations: ρ(s_1,a_1)η(t_1,a_1^-1) = ⟨ (ϕ^-1,s_1), (z^-1,t_1)⟩^-1, ρ(s_2,a_2)η(t_2,a_2^-1) = ⟨ (ϕ^-1,s_2), (z^-1,t_2)⟩^-1, ρ(s_1a_1(s_2),a_1a_2)η(t_2a_2^-1(t_1),a_2^-1a_1^-1) = ⟨ (ϕ^-1,s_1a_1(s_2)), (z^-1,t_2a_2^-1(t_1))⟩^-1. Therefore, after regrouping the terms, we have ρ(s_1,a_1)ρ(s_2,a_2)ρ(s_1a_1(s_2),a_1a_2)^-1 = (η(t_2a_2^-1(t_1),a_2^-1a_1^-1)^-1η(t_2,a_2^-1)η(t_1,a_1^-1))^-1 · ⟨ (ϕ^-1,s_1), (z^-1,t_1)⟩^-1⟨ (ϕ^-1,s_2), (z^-1,t_2)⟩^-1⟨ (ϕ^-1,s_1a_1(s_2)), (z^-1,t_2a_2^-1(t_1))⟩. To show that η is a homomorphism if and only if ρ is a homomorphism, it suffices to show ⟨ (ϕ^-1,s_1),(z^-1,t_1)⟩·⟨(ϕ^-1,s_2),(z^-1,t_2)⟩ = ⟨(ϕ^-1,s_1a_1(s_2)), (z^-1,t_2a_2^-1(t_1))⟩, for all (s_1,a_1),(s_2,a_2) ∈π_0(S̃_ϕ^[z]) and (t_1,a_1^-1),(t_2,a_2^-1) ∈T̃_z(F)^[ϕ].
Now we consider a finite Galois extension K of F over which T splits and T̃_z is isomorphic to a semi-direct product. In view of the isomorphism ℋ (<ref>) and the diagram (<ref>) defining ℋ, there is [(λ_i, μ_i)] ∈ H_0(W_K/F, X → X)_0 corresponding to [(z^-1, t_i)]∈ H^1(K/F, T→ T), for i = 1,2. Since both [λ_1] and [λ_2] correspond to [z^-1] under the map induced by ψ (<ref>), we may assume λ_1 = λ_2 = λ after replacing (λ_2,μ_2) by a cohomologous element. Now, according to the explicit expression (<ref>) of the map ϕ on the chain level, we can see that ϕ is A^[ϕ],[z]-equivariant. [Unfortunately, a clash of notations occurs since we denoted both the map (<ref>) and the L-parameter by ϕ.] Therefore, we find that the isomorphism ℋ (induced by ψ⊕ϕ) sends [(λ, μ_2+a_2^-1(μ_1))]∈ H_0(W_K/F, X → X)_0 to [(z^-1, t_2a_2^-1(t_1))] ∈ H^1(K/F, T→ T). By the definition of the Tate-Nakayama duality (see Step 2 in Section <ref>), the pairings in (<ref>) can be expanded into ⟨(ϕ^-1,s_1),(z^-1,t_1)⟩ = ⟨λ, s_1⟩·∏_w⟨μ_1(w), ϕ(w)⟩, ⟨ (ϕ^-1,s_2),(z^-1,t_2)⟩ = ⟨λ, s_2⟩·∏_w⟨μ_2(w), ϕ(w)⟩, ⟨(ϕ^-1,s_1a_1(s_2)),(z^-1,t_2a_2^-1(t_1))⟩ = ⟨λ, s_1a_1(s_2)⟩·∏_w⟨[μ_2+a_2^-1(μ_1)](w), ϕ(w)⟩, where the w's run over W_K/F. After multiplying the first two formulae together and then dividing by the third, it remains to show ⟨λ,(1-a_1)s_2⟩·∏_w⟨(1-a_2^-1)μ_1(w),ϕ(w)⟩ = 1. At this point, we observe that the left-hand side equals the pairing between [(z^-1,(1-a_2^-1)t_1)] ∈ H^1(K/F, T→ T) and [(ϕ^-1, s_2)]∈ H^1(W_K/F, T̂→T̂). We can now apply Theorem <ref>. Indeed, in our case, only one torus T is involved, and if we let f = 1-a_1^-1 and g = 1-a_2^-1, the pairing vanishes according to Theorem <ref>. So far, we have completed the proof of the claim that η is a homomorphism if and only if ρ is a homomorphism. We still need to show that η is irreducible if and only if ρ is irreducible. But this is immediate, since according to the relation (<ref>), ρ and η have the same image in PGL(V). Suppose η∈Irr(T̃_z(F)^[ϕ]) and ρ∈Irr(π_0(S̃_ϕ^[z])) are related. Then η∈Irr(T̃_z(F)^[ϕ],[ϕ]) if and only if ρ∈Irr(π_0(S̃_ϕ^[z]),[z]). We let a = 1. Then the hypercohomology groups involved in (<ref>) are H^1(W_F, T̂→T̂)_red and H^1(F, T→ T), which are actually the direct products H^1(W_F,T̂)×π_0(T̂^Γ) and H^1(F,T)× T(F), respectively. And in this degenerate case, the pairing between them is merely the product of the Kottwitz pairing and the Langlands pairing: ⟨ (ϕ^-1, s), (z^-1, t) ⟩ = [z]^-1(s)[ϕ]^-1(t). And the defining relation (<ref>) becomes ρ(s,1)η(t,1) = [z](s)[ϕ](t) for any s ∈π_0(S̃_ϕ^[z]) and any t∈ T(F). Thus, it follows that ρ(s,1) = [z](s) if and only if η(t,1) = [ϕ](t). Combining Lemma <ref>, Lemma <ref> and Lemma <ref>, we have proved Theorem <ref>. The following proposition will be used later in the global context. Suppose z ∈ Z^1(F,T) is cohomologous to the trivial element; then there exists some d ∈ T(F̅) such that z(σ) = d^-1σ(d) for any σ∈Γ. There is an isomorphism of topological groups γ_d (depending on the choice of d): γ_d: T(F)⋊ A^[ϕ],[z]→T̃_z(F)^[ϕ], (t,a)↦ (td^-1a(d), a). Suppose [z] = 1. Let d be any element in T(F̅) such that z(σ) = d^-1σ(d), and let γ_d be the isomorphism defined as above. Let η_1∈Irr(T̃_z(F)^[ϕ], [ϕ]) be the representation corresponding to the trivial character 1∈Irr(π_0(S̃^[z]_ϕ), 1). Then we have η_1∘γ_d = [ϕ]^*, where [ϕ]^*: T(F)⋊ A(F)^[ϕ],[z]→ℂ^× is the unique character extending [ϕ] trivially: [ϕ]^*(a)=1 for any a ∈ A^[ϕ],[z]. One can immediately check that, as elements in Z^1(F, T→ T), the group of 1-hypercocycles, (z^-1, td^-1a(d)) and (1, t) are cohomologous.
Therefore, by our reinterpretation of the LLC (<ref>), we have η_1∘γ_d (t,a) = ⟨ (ϕ^-1, s) , (z^-1, td^-1a(d))⟩^-1 = ⟨(ϕ^-1,s), (1, t) ⟩^-1 = [ϕ](t), for any (t,a)∈ T(F)⋊ A(F)^[ϕ],[z]. By abuse of notation, we may often simply write η_1 as [ϕ]^*. Finally, we remark that we are able to give a simpler proof (than that in <cit.>) of the character identities for our construction of the LLC, which we will not include in this work. Instead, we directly compare our construction with Kaletha's. §.§ Comparison with Kaletha's construction Given η∈Irr(T̃_z(F)^[ϕ]), we denote by ρ_η the unique element in Irr(π_0(S̃_ϕ^[z])) related to it. And we denote by ρ_η^Kal∈Irr(π_0(S̃_ϕ^[z])) the element given by Kaletha's construction of the LLC for disconnected tori in <cit.>. This section aims to show ρ_η = ρ_η^Kal. §.§.§ Construction of ρ_η^Kal We start by recalling Kaletha's construction. Kaletha constructs an isomorphism between certain push-outs (or quotients) of T̃_z(F)^[ϕ] and π_0(S̃_ϕ^[z]), which induces a bijection between Irr_[ϕ](T̃_z(F)^[ϕ]) and Irr_[z](π_0(S̃_ϕ^[z])). To be precise, on the group side, after quotienting by ker[ϕ], we obtain the following push-out diagram:

1 → T(F) → T̃_z(F)^[ϕ] → A^[ϕ],[z] → 1
      ↓ [ϕ]        ↓               ∥
1 → ℂ^×  →   ℰ^z_[ϕ]  →  A^[ϕ],[z] → 1

Let Irr_id(ℰ^z_[ϕ]) be the set of irreducible representations of ℰ^z_[ϕ] whose restriction to ℂ^× is the identity. Then it is straightforward to see that there is a canonical bijection Irr_id(ℰ_[ϕ]^z) →^∼ Irr_[ϕ](T̃_z(F)^[ϕ]). And on the dual side, we can obtain a similar push-out diagram:

1 → π_0(T̂^Γ) → π_0(S̃_ϕ^[z]) → A^[ϕ],[z] → 1
        ↓ [z]            ↓               ∥
1 →    ℂ^×     →    ℰ^ϕ_[z]   →  A^[ϕ],[z] → 1

Let Irr_id(ℰ^ϕ_[z]) be the set of irreducible representations of ℰ^ϕ_[z] whose restriction to ℂ^× is the identity. Then there is a canonical bijection Irr_id(ℰ_[z]^ϕ) →^∼ Irr_[z](π_0(S̃_ϕ^[z])). By abuse of notation, we still denote by the same symbols the inverse images of η∈Irr_[ϕ](T̃_z(F)^[ϕ]) and ρ∈Irr_[z](π_0(S̃_ϕ^[z])) under the above isomorphisms. At this point, we note that an isomorphism ℰ_[ϕ]^z≅ℰ_[z]^ϕ will induce a bijection Irr_[ϕ](T̃_z(F)^[ϕ]) ↔Irr_[z](π_0(S̃_ϕ^[z])). Since we need to exhibit the elements more explicitly, we shall regard T̃_z(F)^[ϕ] as a subgroup of T(F̅) ⋊ A^[ϕ],[z] and π_0(S̃_ϕ^[z]) as a subquotient of T̂⋊ A^[ϕ],[z]. For each a ≠id∈ A^[ϕ],[z], we fix, once and for all, some t_a⋊ a ∈T̃_z(F)^[ϕ] and some s_a⋊ a ∈π_0(S̃_ϕ^[z]). When a = id, we let t_id = 1 and s_id = 1. Now the two extensions 1 → T(F)→T̃_z(F)^[ϕ]→ A^[ϕ],[z]→ 1 and 1 →π_0(T̂^Γ)→π_0(S̃_ϕ^[z]) → A^[ϕ],[z]→ 1 can be characterised by 2-cocycles (factor sets determined by the sections t_a⋊ a and s_a⋊ a): α(a,b) = t_aa(t_b)t_ab^-1 and β(a,b) = s_aa(s_b)s_ab^-1. Let α̅ = [ϕ]∘α and β̅ = [z]∘β; then we have realised the extensions ℰ_[ϕ]^z and ℰ_[z]^ϕ as twisted products via α̅ and β̅, respectively: ℰ_[ϕ]^z = ℂ^×⊠_α̅ A^[ϕ],[z] and ℰ_[z]^ϕ = ℂ^×⊠_β̅ A^[ϕ],[z].
Then Kaletha constructs a natural map ℐ from ℰ^ϕ_[z] to ℰ^z_[ϕ]:

ℐ: ℂ^×⊠_β̅ A^[ϕ],[z] → ℂ^×⊠_α̅ A^[ϕ],[z], x ⊠ a ↦ xh(a)^-1⊠ a,

where h(a) is defined by means of the Tate-Nakayama pairing:

h(a) := α̅(a^-1, a)·⟨(z^-1, t_a^-1),(ϕ^-1, s_a)⟩.

Kaletha then shows that the definition of ℐ does not depend on the choices of the sections and that it is indeed an isomorphism. Eventually, by composing the isomorphisms concerned, for each η∈Irr_[ϕ](T̃_z(F)^[ϕ]), one can associate an element ρ_η^Kal∈Irr_[z](π_0(S̃_ϕ^[z])):

Irr_[ϕ](T̃_z(F)^[ϕ]) ≅ Irr_id(ℰ_[ϕ]^z) → Irr_id(ℰ_[z]^ϕ) ≅ Irr_[z](π_0(S̃_ϕ^[z])), η ↦ ρ_η^Kal,

where the middle map is ℐ^*.

§.§.§ Comparison

We show that our construction of the LLC coincides with Kaletha's. Recall that we have fixed sections t_a⋊ a and s_a⋊ a for each a ∈ A^[ϕ],[z]. ρ_η coincides with ρ_η^Kal. Suppose s ∈π_0(S̃_ϕ^[z]) lies over a. Then we write it as s_0⋊ a = s_0s_a^-1(s_a⋊ a). Thus, s_0⋊ a will be pushed to [z](s_0s_a^-1)⊠ a in ℰ_[z]^ϕ = ℂ^×⊠_β̅ A^[ϕ],[z], and further brought to [z](s_0s_a^-1)h(a)^-1⊠ a under ℐ. As a result, we have

ρ_η^Kal(s) = η([z](s_0s_a^-1)h(a)^-1⊠ a) = [z](s_0s_a^-1)h(a)^-1η(t_a⋊ a) = [z](s_0s_a^-1)α̅(a^-1, a)^-1·⟨(z^-1, t_a^-1),(ϕ^-1, s_a)⟩^-1·η(t_a⋊ a).

On the other hand, our construction yields

ρ_η(s) = ⟨(z^-1,t_a^-1), (ϕ^-1, s_0)⟩^-1η(t_a^-1⋊ a^-1)^-1 = ⟨(z^-1,t_a^-1), (ϕ^-1, s_a)⟩^-1⟨(z^-1,t_a^-1), (0, s_0s_a^-1)⟩^-1η(t_a^-1⋊ a^-1)^-1 = ⟨(z^-1,t_a^-1), (ϕ^-1, s_a)⟩^-1·[z](s_0s_a^-1)·η(t_a^-1⋊ a^-1)^-1.

While some terms in the two expressions are now found to be the same, we still need to match the rest. For this purpose, we recall that α̅ = [ϕ]∘α and hence obtain

α̅(a^-1,a) := [ϕ](t_a^-1a^-1(t_a)) = η(t_a^-1⋊ a^-1)η(t_a⋊ a),

so that η(t_a^-1⋊ a^-1)^-1 = α̅(a^-1,a)^-1η(t_a⋊ a). It follows that ρ_η(s) and ρ_η^Kal(s) agree.

§ DISCRETE AUTOMORPHIC REPRESENTATIONS

Let F be a number field with absolute Galois group Γ = Gal(F̅/F). Suppose T is a torus defined over F and A is a (not necessarily constant) finite group scheme defined over F. Let T̃ = T⋊ A be a quasi-split disconnected torus defined over F. We recall that this means that A is a (not necessarily constant) finite group scheme acting on a (not necessarily split) torus T, and that the action is defined over F. In other words, the action of A on T is Galois-equivariant: σ(a(t)) = σ(a)(σ(t)) for any σ∈Γ.

§.§ Pure inner forms

Let z ∈ Z^1(Γ,T). Then we can twist the rational structure on T̃ by z via inner automorphisms, and obtain a pure inner form T̃_z. We fix, once and for all, a pure inner form T̃_z.

§.§.§ Rational points

As in the local case, we have a short exact sequence concerning the rational points

1 → T(F) → T̃_z(F) → A(F)^[z] → 1,

where A(F)^[z] is the subgroup of A(F) stabilising the cohomology class [z]. In forthcoming sections, we will consider the action of T̃_z(F) on the set of Hecke characters of T by conjugation: t·χ(x) = χ(t^-1xt). Due to the commutativity of T(F), one can immediately see that this action descends to an action of A(F)^[z] on the Hecke characters. Let χ: T(F)\ T(𝔸) → ℂ^× be a Hecke character. We denote the stabilisers of χ in T̃_z(F) and A(F)^[z] by T̃_z(F)^χ and A(F)^[z],χ, respectively. They sit in the following short exact sequence:

1 → T(F) → T̃_z(F)^χ → A(F)^[z],χ → 1.

§.§.§ Adelic points

Before entering the discussion of adelic points, we need to introduce some background. Let K be a finite Galois extension of F such that z factors through Gal(K/F) and T̃_z becomes a split form over K, that is, it splits as a semidirect product T⋊ A over K with T split and A constant.
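Before proceeding, a toy example may help fix the objects just defined; the following illustration is our own and not part of the main construction.

% Toy example (ours): T = G_m and A = Z/2 = {1, a} constant, with a acting by
% inversion t -> t^{-1} (essentially the split form of O(2) = SO(2) \rtimes Z/2).
% For a Hecke character \chi of T(F)\T(A) = F^x \ A^x, the conjugation action gives
\[
  (a\cdot\chi)(x) \;=\; \chi\big(a^{-1}(x)\big) \;=\; \chi(x)^{-1},
\]
% so a stabilises \chi precisely when \chi^2 = 1.  Thus A(F)^{[z],\chi} is all of
% Z/2 for quadratic \chi and trivial otherwise -- the basic dichotomy feeding into
% the stabiliser groups appearing in the short exact sequences above.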
We still denote by z the image of z under the natural map Z^1(Gal(K/F), T(K)) → Z^1(Gal(K/F), T(𝔸_K)). In view of the decomposition T(𝔸_K) = '∏_v T(K_v), we can also write z = (z_v), where z_v is the image of z under the natural map Z^1(Gal(K/F), T(K)) → Z^1(Gal(K_v/F_v), T(K_v)). One can regard T̃ as an F_v-scheme after a base change and consider the pure inner form T̃_z_v. We quickly observe T̃_z(F_v) = T̃_z_v(F_v); therefore, while we will always use the former notation when referencing it, it is important to bear in mind that it also represents the rational points of a local pure inner form. Now we can start the discussion of adelic points. We can see that, after taking Gal(K/F)-fixed points of T̃_z(𝔸_K), the set of adelic points is given by

T̃_z(𝔸_F) = {(t, a) | t ∈ T(𝔸_K), a ∈ A(𝔸_F), and a(z(σ)) = z(σ)t^-1σ(t) for ∀σ∈Γ},

which clearly sits in the following short exact sequence:

1 → T(𝔸_F) → T̃_z(𝔸_F) → A(𝔸_F)^[z] → 1,

where A(𝔸_F)^[z] is the stabiliser of [z] ∈ H^1(Gal(K/F), T(𝔸_K)) in A(𝔸_F).

§.§.§ Model

We fix an 𝒪_F-model 𝒯̃_z of T̃_z. According to <cit.>, for almost all v, T̃_z(F_v) = T(F_v)𝒯̃_z(𝒪_v) holds. In other words, for almost all places v, there exist integral points on each connected component of T̃_z(F_v). We fix S, a finite set of places, including the infinite places, the places that ramify in the extension K/F, the places where T̃_z(F_v) = T(F_v)𝒯̃_z(𝒪_v) fails, and the (finitely many) places where 𝒯(𝒪_v) fails to be the unique maximal compact subgroup of T(F_v). For v ∉ S, we see that K_v := 𝒯̃_z(𝒪_v) is the unique maximal compact subgroup of T̃_z(F_v). In later sections, we will write T̃_z(𝒪_v) to refer to 𝒯̃_z(𝒪_v) for simplicity. Once the model is chosen, the adelic points of an algebraic group can be written as a restricted direct product (with restriction in integral points). Thus, the short exact sequence (<ref>) can be rewritten as

1 → '∏_v T(F_v) → '∏_v T̃_z_v(F_v) → ∏_v A(F_v)^[z_v] → 1.

Indeed, <cit.> implies that the restricted direct product '∏_v A(F_v)^[z_v] is actually a direct product, and then we find that (a_v)_v fixes [z] = ([z_v])_v if and only if a_v fixes [z_v] for each v. We note that, endowed with the product topology, A(𝔸)^[z] is a compact topological group. As a consequence, it follows that T̃_z(𝔸) is unimodular.

§.§ Discrete automorphic representations

From now on, we drop the subscript and simply write 𝔸 for the adele ring of F. We note that T̃_z(F) is diagonally embedded as a discrete closed subgroup of T̃_z(𝔸). Suppose T_0 is the largest ℚ-split torus of Res_F/ℚT. We define A_T := T_0(ℝ)^∘, the identity component of the ℝ-points, then embed A_T into T(𝔸) and T̃_z(𝔸). It is noteworthy that [T] := A_TT(F)\ T(𝔸) is compact. We fix any section s: A(𝔸)^[z] = ∏_v A(F_v)^[z_v] → T̃_z(𝔸) and note that s is automatically continuous in virtue of <cit.>. Then, we consider the composition

[T] × A(𝔸)^[z] → [T] × T̃_z(𝔸) ↠ A_TT̃_z(F)\T̃_z(𝔸),

where the second map is induced by multiplication between T(𝔸) and T̃_z(𝔸), the former embedded into the latter. Combining this with the compactness of [T], it follows that A_TT̃_z(F)\T̃_z(𝔸) is a compact space. We fix a right Haar measure on A_T\T̃_z(𝔸), which (along with the counting measure on T̃_z(F)) induces an A_T\T̃_z(𝔸)-invariant Radon measure on A_TT̃_z(F)\T̃_z(𝔸). Then we have a Hilbert space L^2(A_TT̃_z(F)\T̃_z(𝔸)) and the right regular representation on L^2(A_TT̃_z(F)\T̃_z(𝔸)) as a unitary representation of T̃_z(𝔸).
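To keep the compactness statement about [T] concrete, here is a standard illustration (ours, not from the text) in the simplest case.

% Standard illustration (ours): F = Q and T = G_m, so T_0 = G_m and A_T = R_{>0}
% inside the archimedean ideles.  Since Q has class number one and units {±1},
\[
  [T] \;=\; \mathbb{R}_{>0}\,\mathbb{Q}^{\times}\backslash \mathbb{A}^{\times}
  \;\cong\; \widehat{\mathbb{Z}}^{\times} \;=\; \varprojlim_{n}\,(\mathbb{Z}/n)^{\times},
\]
% a compact group whose unitary characters are the finite-order Dirichlet
% characters -- matching the general fact used below that L^2([T]) decomposes
% over the Hecke characters in H(T).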
Our goal is to eventually understand how L^2(A_TT̃_z(F)\T̃_z(𝔸)) can be decomposed into a Hilbert direct sum of irreducible representations, which we call discrete automorphic representations (or simply automorphic representations). Slightly more generally, if we let ω: A_T → S^1 be a unitary character, then we can consider the Hilbert space L^2(T̃_z(F)\T̃_z(𝔸), ω) consisting of T̃_z(F)-left-invariant functions that satisfy f(tx) = ω(t)f(x) for t ∈ A_T and x ∈ T̃_z(𝔸), and ask how it decomposes. In fact, most of our results in the following chapters carry over to this twisted case verbatim. Indeed, if we let H(T) be the set of Hecke characters of T trivial on A_T, then what we will find in the following chapters is that only Hecke characters in H(T) account for the irreducible constituents of L^2(A_TT̃_z(F)\T̃_z(𝔸)). On the other hand, for the twisted case, the irreducible constituents of L^2(T̃_z(F)\T̃_z(𝔸), ω) are accounted for by H(T, ω), consisting of the Hecke characters of T that restrict to ω on A_T. After replacing H(T) by H(T, ω), our arguments in the following chapters remain valid.

§ MULTIPLICITY ON THE AUTOMORPHIC SIDE

§.§ Smooth inductions

We write 𝔸 = F_∞×𝔸^∞, where F_∞ is the product of the F_v's with v ranging over the archimedean places, and 𝔸^∞ is the restricted direct product of the F_v's with v ranging over the finite places. Let G be an affine algebraic group defined over F. Then the above splitting induces a splitting of the adelic points of G: G(𝔸) = G(F_∞) × G(𝔸^∞). We assume that G(𝔸) is unimodular. First, we recall the notion of the smooth part of a representation. Let π be a continuous representation of G(𝔸) on a locally convex topological vector space V. Then a vector v ∈ V is termed smooth if v is a smooth vector with respect to G(F_∞) and is K^∞-invariant for some open compact subgroup K^∞ ≤ G(𝔸^∞). We call V_sm, which consists of all smooth vectors in V, the smooth part of V. It is worth noting that V_sm is stable under the action of G(𝔸) and dense in V. When V is a Fréchet space, V_sm can be endowed with a natural LF-topology as follows. First, we denote by V_sm,∞ the subspace of smooth vectors under G(F_∞), which is a Fréchet space. Then, we let K_n be a decreasing cofinal sequence of compact open subgroups of G(𝔸^∞). Now, we rewrite V_sm as an inductive limit (union): V_sm = ∪_n (V_sm,∞)^K_n, and find that V_sm naturally becomes an LF-space equipped with the inductive topology. Next, we introduce the notion of smooth induction to an adelic group from a closed subgroup. Let H ≤ G(𝔸) be a unimodular closed subgroup, and let (ρ, W) be a unitary representation of H. We consider the representation

U := {f: G(𝔸) → W | f(hg) = ρ(h)f(g) for ∀ h ∈ H and ∀ g ∈ G(𝔸)}

with G(𝔸) acting by right translation. Then we define the induction of (ρ, W) to G(𝔸), denoted by Ind_H^G(𝔸)ρ, to be its smooth part U_sm. In our later practical applications, compactness of the quotient G(𝔸)/H is always satisfied. In this scenario, the space U in the above definition can be naturally embedded into the unitary induced representation (see <cit.> for its definition). In this case, we define the topology on U to be the one inherited from this embedding. Moreover, we endow its smooth part U_sm =: Ind_H^G(𝔸)ρ with the LF-topology.
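Since several of the subgroups occurring later (e.g. T(𝔸) inside T̃_z(𝔸)^χ) have finite index in the ambient group, it may help to record what induction looks like in that degenerate case; the following sketch is ours, valid under the stated finite-index assumption.

% Sketch (ours), assuming H <= G is a closed subgroup of FINITE index with right
% coset decomposition G = \bigsqcup_{i=1}^{n} H g_i.  Evaluation at the g_i
% identifies the induced space with n copies of W:
\[
  \operatorname{Ind}_H^G W \;\xrightarrow{\ \sim\ }\; W^{\oplus n},
  \qquad f \;\longmapsto\; \big(f(g_1),\dots,f(g_n)\big),
\]
% since f(h g_i) = \rho(h) f(g_i) determines f on all of G.  The G-action permutes
% the n factors, twisted by \rho via g_i g = h(g,i)\, g_{\sigma_g(i)}.  In particular,
% induction through a finite-index subgroup is a "finite" operation; this underlies
% the later claim that such inductions commute with tensor products over a finite
% set of places.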
In the rest of this section, we discuss some properties of the smooth inductions Ind_H^Gρ when G, H and ρ satisfy the following working assumptions:
* G = '∏_v G_v is a restricted direct product satisfying T(F_v) ⊆ G_v ⊆ T̃_z(F_v);
* H satisfies one of the following:
* H is a restricted direct product '∏_v H_v with T(F_v) ⊆ H_v ⊆ G_v,
* H ⊆ G contains T(𝔸) as a subgroup of finite index, e.g. H = T(𝔸)T̃_z(F);
* (ρ, W) is smooth when restricted to T(𝔸) ⊂ H.
The notation Ind_H^Gρ is self-evident for G and H satisfying these assumptions, in the same way as Definition <ref> (even if it may not be covered by the latter when G is not the adelic points of some algebraic group). Now, for the sake of clarity in our exposition, we set G = T̃_z(𝔸) for the rest of this section, but bear in mind that the forthcoming arguments are applicable to other G's. We recall that, in Definition <ref>, Ind_H^T̃_z(𝔸)ρ is defined to be U_sm. Rather than consider the smooth vectors under the whole of T̃_z(𝔸), we may disregard smoothness with respect to the archimedean places and solely consider the subspace of smooth vectors in U under the action of T̃_z(𝔸^∞), which we denote by U^∞_sm. In fact, we notice that these two spaces actually coincide: a T̃_z(𝔸^∞)-smooth vector in U is automatically T̃_z(F_∞)-smooth. Hence we have U_sm = U_sm^∞. According to the assumption, any v ∈ U is smooth under T(F_∞). Since T(F_∞) is a Lie subgroup of finite index in T̃_z(F_∞), v is also smooth under T̃_z(F_∞). In fact, let S be a finite set of places of F, and let 𝔸^S := '∏_v∉S F_v. By the same reasoning, smoothness of v ∈ V under T̃_z(𝔸^S) implies smoothness of v under T̃_z(𝔸). Now Frobenius reciprocity and induction in steps can be easily deduced as in the p-adic case. For completeness, we include their proofs here. We maintain our previous assumptions on H and (ρ, W). Let (π, V) be a smooth representation of T̃_z(𝔸). Then there is a natural isomorphism

Φ: Hom_T̃_z(𝔸)(V, Ind_H^T̃_z(𝔸)W) → Hom_H(Res V, W)

defined by sending a T̃_z(𝔸)-morphism f: V → Ind_H^T̃_z(𝔸)W to Φ(f): v ↦ f(v)(1). One can immediately check that Φ(f) thus defined is indeed an H-morphism. Now we define Ψ: Hom_H(Res V, W) → Hom_T̃_z(𝔸)(V, Ind_H^T̃_z(𝔸)W) as follows. Let g: V → W be an H-morphism. We define Ψ(g)(v) to be the map T̃_z(𝔸) → W which sends t ∈ T̃_z(𝔸) to g(π(t)v). Now we need to show that Ψ(g)(v) falls in Ind_H^T̃_z(𝔸)W. First we let h ∈ H; then we find that Ψ(g)(v) sends ht to g(π(ht)v) = ρ(h)g(π(t)v), since g intertwines Res π with ρ. According to Proposition <ref>, it remains to show that Ψ(g)(v) is smooth under the action of T̃_z(𝔸^∞). Since (π,V) is a smooth representation of T̃_z(𝔸), there exists a compact open subgroup K^∞ ≤ T̃_z(𝔸^∞) such that π(K^∞)v = v. Now, by letting K^∞ act via right translation, we have K^∞·[Ψ(g)(v)](t) = g(π(tK^∞)v) = g(π(t)v) = [Ψ(g)(v)](t). In other words, K^∞ also fixes Ψ(g)(v). One can immediately check that Ψ(g) is T̃_z(𝔸)-intertwining, and finally, that Φ and Ψ are inverses of each other. We maintain our previous assumptions on H and an H-representation (ρ, W). Let H' be a larger subgroup satisfying the assumptions on H, such that T(𝔸) ⊆ H ⊆ H' ⊆ T̃_z(𝔸). Then there is a natural isomorphism of T̃_z(𝔸)-modules

ϕ: Ind_H'^T̃_z(𝔸)(Ind_H^H'W) → Ind_H^T̃_z(𝔸)W,

defined by sending f: T̃_z(𝔸) → Ind_H^H'W to the map T̃_z(𝔸) ∋ t ↦ f(t)(1) ∈ W. We construct ψ: Ind_H^T̃_z(𝔸)W → Ind_H'^T̃_z(𝔸)(Ind_H^H'W) as follows. Let g ∈ Ind_H^T̃_z(𝔸)W; we define ψ(g) as the map T̃_z(𝔸) → Ind_H^H'W which sends t to (H' ∋ h_2 ↦ g(h_2t) ∈ W).
As in the proof of the last proposition, in view of Proposition <ref>, one can show that both ϕ and ψ are well-defined, i.e. the images of the maps described above indeed fall within the corresponding induction spaces. Verification that ϕ and ψ are T̃_z(𝔸)-morphisms and inverses of each other is immediate.

§.§ A preliminary decomposition

For the sake of obtaining a spectral decomposition of L^2(A_TT̃_z(F)\T̃_z(𝔸)), we will extract a suitable dense subspace 𝒜(T̃_z), on which T̃_z(𝔸) acts smoothly. First we consider an intermediate closed subgroup T(𝔸)T̃_z(F) of T̃_z(𝔸), which contains T(𝔸) as a subgroup of finite index:

1 → T(𝔸) → T(𝔸)T̃_z(F) → A(F)^[z] → 1.

The coset space A_TT̃_z(F)\ T(𝔸)T̃_z(F), after being endowed with the quotient topology, is naturally homeomorphic to A_TT(F)\ T(𝔸) via

p: A_TT(F)\ T(𝔸) → A_TT̃_z(F)\ T(𝔸)T̃_z(F)

given by sending the A_TT(F)-coset of x to the A_TT̃_z(F)-coset of x, for any x ∈ T(𝔸). Indeed, one can see that its inverse p^-1 is given by sending the A_TT̃_z(F)-coset of xt to the A_TT(F)-coset of t^-1xt, for any x ∈ T(𝔸) and t ∈ T̃_z(F). Using the fact that T(𝔸) is of finite index in T(𝔸)T̃_z(F), the continuity of p and p^-1 can be checked immediately. The homeomorphism p gives rise to the following identification:

L^2(A_TT(F)\ T(𝔸)) ≅ L^2(A_TT̃_z(F)\ T(𝔸)T̃_z(F)).

We let H(T) be the set of all Hecke characters of T trivial on A_T (in particular, they are unitary). Given a χ ∈ H(T), we define χ̃ := (p^*)^-1(χ) as its lifting to T(𝔸)T̃_z(F). Explicitly, we have χ̃(xt) = χ(t^-1xt) for x ∈ T(𝔸) and t ∈ T̃_z(F). We introduce the conjugation action of T̃_z(F) on H(T): (t_0·χ)(x) := χ(t_0^-1xt_0) for x ∈ T(𝔸) and t_0 ∈ T̃_z(F). On the other hand, there is the right regular representation of T(𝔸)T̃_z(F) on L^2(A_TT̃_z(F)\ T(𝔸)T̃_z(F)). It turns out that, under this action, T(𝔸) acts on χ̃ by scalars and T̃_z(F) permutes the χ̃'s: under the action of right translation, we have

x_0·χ̃ = χ(x_0)χ̃ for each x_0 ∈ T(𝔸), and t_0·χ̃ = t_0·χ for each t_0 ∈ T̃_z(F).

We note that functions in L^2(A_TT̃_z(F)\ T(𝔸)T̃_z(F)) are completely determined by their values on T(𝔸). Given x_0 ∈ T(𝔸), we have (x_0·χ̃)(x) = χ̃(xx_0) = χ(xx_0) = χ(x_0)χ̃(x) for any x ∈ T(𝔸), hence the first statement follows. Given t_0 ∈ T̃_z(F), we have t_0·χ̃(x) = χ̃(xt_0) = χ̃(t_0·t_0^-1xt_0) = χ(t_0^-1xt_0) for any x ∈ T(𝔸), thus the second statement follows. Next, we note that the Peter-Weyl Theorem ensures the density of the following subspace:

⊕_χ∈ H(T)ℂ·χ ⊂ L^2(A_TT(F)\ T(𝔸)).

Transferring it under p^*, we define the following dense subspace:

W := ⊕_χ∈ H(T)ℂ·χ̃ ⊂ L^2(A_TT̃_z(F)\ T(𝔸)T̃_z(F)),

which is moreover stable under right translation by T(𝔸)T̃_z(F), in virtue of Fact <ref>. From now on, to ease the notation, we simply write χ̃ as χ when no ambiguity occurs. Finally, we define

𝒜(T̃_z) := Ind_T(𝔸)T̃_z(F)^T̃_z(𝔸) W.

Via a series of inclusions (where each space is dense in the one following it):

𝒜(T̃_z) = Ind_T(𝔸)T̃_z(F)^T̃_z(𝔸) W ⊂ Ind_T(𝔸)T̃_z(F)^T̃_z(𝔸)[Ind_A_TT̃_z(F)^T(𝔸)T̃_z(F)1] ⊂ L^2(A_TT̃_z(F)\T̃_z(𝔸)),

we see that 𝒜(T̃_z) is dense in L^2(A_TT̃_z(F)\T̃_z(𝔸)). Next, we hope to rewrite W based on a partition of H(T) into T̃_z(F)-orbits. Given χ_0 ∈ H(T), denote its orbit by

𝒪_χ_0 := {t·χ_0 | t ∈ T̃_z(F)}.

There is a natural isomorphism of irreducible representations of T(𝔸)T̃_z(F):

⊕_χ∈𝒪_χ_0ℂ·χ ≅ Ind_T(𝔸)T̃_z(F)^χ_0^T(𝔸)T̃_z(F)χ_0^*,

where χ_0^* is the character on T(𝔸)T̃_z(F)^χ_0 extending χ_0 trivially: χ_0^*(xt) = χ_0(x) for x ∈ T(𝔸) and t ∈ T̃_z(F)^χ_0. Since T̃_z(F)^χ_0 is of finite index in T̃_z(F), we may fix a set of left (resp. right) coset representatives t_i's (resp.
t_i^-1's). Then the isomorphism above can be written explicitly as

∑_j c_jt_j·χ_0 ↦ f,

where f: T(𝔸)T̃_z(F) → ℂ is the element of the induced space with f(t_j^-1) = c_j, with inverse given by sending f: T(𝔸)T̃_z(F) → ℂ to ∑_j f(t_j^-1)t_j·χ_0. Now we reassemble the summands appearing in W. We start by rewriting W as a sum over a set of representatives {χ_0} from each T̃_z(F)-orbit of H(T):

W = ⊕_χℂ·χ ≅ ⊕_χ_0∈ H(T)/T̃_z(F)Ind_T(𝔸)T̃_z(F)^χ_0^T(𝔸)T̃_z(F)χ_0^*,

which is an isomorphism of T(𝔸)T̃_z(F)-modules. Then, in view of induction by steps (see Proposition <ref>), we obtain

𝒜(T̃_z) = Ind_T(𝔸)T̃_z(F)^T̃_z(𝔸)W ≅ ⊕_χ_0∈ H(T)/T̃_z(F)Ind_T(𝔸)T̃_z(F)^T̃_z(𝔸)(Ind_T(𝔸)T̃_z(F)^χ_0^T(𝔸)T̃_z(F)χ_0^*) ≅ ⊕_χ_0∈ H(T)/T̃_z(F)Ind_T(𝔸)T̃_z(F)^χ_0^T̃_z(𝔸)χ_0^*.

We denote

U_χ_0 := Ind_T(𝔸)T̃_z(F)^χ_0^T̃_z(𝔸)χ_0^*,

which turns out to be semisimple according to our discussion in the next section. By the density of 𝒜(T̃_z) in L^2(A_TT̃_z(F)\T̃_z(𝔸)), we conclude that the latter can be decomposed as a Hilbert direct sum

L^2(A_TT̃_z(F)\T̃_z(𝔸)) = ⊕_χ_0∈ H(T)/T̃_z(F)U_χ_0.

§.§ Preparation for multiplicity calculation

In this part, we lay the foundation for the calculation of the multiplicity on the automorphic side. For this, we need to investigate the interaction between restricted tensor products and inductions. As a result, we show the semisimplicity of 𝒜(T̃_z) as a T̃_z(𝔸)-module. We fix some χ ∈ H(T). To start with, we hope to explore the induction Ind_T(𝔸)^T̃_z(𝔸)χ. Now we need to factor the induction of χ from T(𝔸) to T̃_z(𝔸) into a restricted tensor product of local inductions. Let S' be the union of the set of ramified places of χ and S, the set of places fixed in Section <ref>. Certainly, χ can be understood as a restricted tensor product of local characters χ_v:

χ ≅ '⊗_vχ_v,

where the restriction is with respect to 1 ∈ V_χ_v for all v ∉ S'. Suppose v ∉ S'. Then there exists a unique irreducible T̃_z(𝒪_v)-unramified constituent η^0_v of Ind_T(F_v)^T̃_z(F_v)χ_v. Moreover, the subspace of T̃_z(𝒪_v)-spherical vectors of η_v^0 is spanned by the distinguished element f^0_v ∈ Ind_T(F_v)^T̃_z(F_v)χ_v, characterised by taking the value 1 on T̃_z(𝒪_v). We recall that for v ∉ S', we have T̃_z(F_v) = T(F_v)T̃_z(𝒪_v). On the one hand, since χ_v is unramified, the element f^0_v is well-defined, and it is T̃_z(𝒪_v)-spherical. On the other hand, whenever f_v ∈ Ind_T(F_v)^T̃_z(F_v)χ_v is T̃_z(𝒪_v)-spherical, we find that f_v is determined by f_v(1), and we have f_v = f_v(1)f^0_v. In particular, the subspace of T̃_z(𝒪_v)-spherical vectors in Ind_T(F_v)^T̃_z(F_v)χ_v is one-dimensional. It follows that there is a unique irreducible T̃_z(𝒪_v)-unramified constituent η^0_v of Ind_T(F_v)^T̃_z(F_v)χ_v. Now we can form a restricted tensor product of local inductions

'⊗_vInd_T(F_v)^T̃_z(F_v)χ_v,

whose restriction is with respect to the f^0_v's introduced in the previous proposition. There is a natural embedding

J: '⊗_vInd_T(F_v)^T̃_z(F_v)χ_v ↪ Ind_T(𝔸)^T̃_z(𝔸)χ

given by J(⊗_v f_v)(t) = ∏_v f_v(t_v). The natural embedding J is also surjective. We recall that there is a natural projection map p: T̃_z(𝔸) → A(𝔸)^[z] = ∏_v A(F_v)^[z_v]. We pick a section s of p (as merely a section of a map) such that s_v takes values in T̃_z(𝒪_v) for v ∉ S'. Such a section exists since S' contains all the exceptional places where T̃_z(F_v) = T(F_v)T̃_z(𝒪_v) fails. Let f ∈ Ind_T(𝔸)^T̃_z(𝔸)χ; then by definition, f is smooth on T̃_z(𝔸). Thus, one can easily check that the composition f∘s: A(𝔸)^[z] → ℂ is again smooth (locally constant). Here we recall that A(𝔸)^[z] is endowed with the product topology, hence compact. Therefore, f∘s has a finite image {c_1,…, c_n}.
Now we note that A(𝔸)^[z] becomes a finite disjoint union of the preimages, which are open sets. This immediately implies that each preimage (f∘s)^-1(c_i) must also be closed, hence compact. This further implies that each (f∘s)^-1(c_i) can be written as a finite union of basic open sets of the form (<ref>). To ease notation and without loss of generality, we assume each preimage is a single basic open set:

(f∘s)^-1(c_i) = U_i×∏_v∉ S_i A_v^[z_v],

where S_i is a finite set of places containing S', and U_i ⊆ ∏_v∈ S_i A_v^[z_v]. By the finiteness of the image, the union Σ := ∪_i S_i is a finite set. Therefore, we can reach a factorisation f∘s = g_1⊗g_2, where

g_2 ≡ 1: ∏_v∉Σ A_v^[z_v] → ℂ and g_1: ∏_v∈Σ A_v^[z_v] → ℂ.

This further induces a factorisation of f:

f = f_Σ⊗(⊗_v∉Σ f^0_v),

where f^0_v is the distinguished vector defined at the beginning of this section, and f_Σ lies in Ind_∏_v∈ΣT(F_v)^∏_v∈ΣT̃_z(F_v)⊗_v∈Σχ_v. Due to the fact that Σ is finite and T(F_v) is of finite index in T̃_z(F_v) for each v, the induction commutes with the tensor product, hence the above space is naturally isomorphic to ⊗_v∈ΣInd_T(F_v)^T̃_z(F_v)χ_v. Therefore, we conclude that f lies in

⊗_v∈ΣInd_T(F_v)^T̃_z(F_v)χ_v⊗(⊗_v∉Σ f^0_v) ⊂ '⊗_vInd_T(F_v)^T̃_z(F_v)χ_v.

Ind_T(𝔸)^T̃_z(𝔸)χ is semisimple as a T̃_z(𝔸)-module. Locally, Ind_T(F_v)^T̃_z(F_v)χ_v is semisimple for each place v. Now we consider a collection of certain irreducible subrepresentations of Ind_T(𝔸)^T̃_z(𝔸)χ:

𝒮_χ = {⊗'_vη_v : η_v is an irreducible constituent of Ind_T(F_v)^T̃_z(F_v)χ_v and η_v = η^0_v for almost all v},

where the restriction is with respect to the f^0_v's. One can see that

Ind_T(𝔸)^T̃_z(𝔸)χ = '⊗_vInd_T(F_v)^T̃_z(F_v)χ_v = '⊗_v(⊕_i=0^n_vη_v^i)

is a direct sum of irreducible representations from 𝒮_χ (with certain multiplicities). U_χ and 𝒜(T̃_z) are semisimple T̃_z(𝔸)-modules. In view of the decomposition (<ref>), it suffices to show that U_χ is semisimple. We note that U_χ = Ind_T(𝔸)T̃_z(F)^χ^T̃_z(𝔸)χ^* is a submodule of Ind_T(𝔸)^T̃_z(𝔸)χ, hence the semisimplicity follows. We note that, after replacing T̃_z(𝔸) with T̃_z(𝔸)^χ, analogues of Claim <ref>, Corollary <ref>, and Corollary <ref> still hold, with essentially the same proofs. Similar to the collection 𝒮_χ in the proof of <ref>, we define a collection of irreducible subrepresentations of Ind_T(𝔸)^T̃_z(𝔸)^χχ:

𝒮̅_χ = {⊗'_vη̅_v : η̅_v is an irreducible constituent of Ind_T(F_v)^T̃_z(F_v)^χ_vχ_v and η̅_v = η̅^0_v for almost all v},

where the restriction is also with respect to the spherical vectors. Then Ind_T(𝔸)^T̃_z(𝔸)^χχ is a direct sum of subrepresentations from 𝒮̅_χ. And as a submodule, Ind_T(𝔸)T̃_z(F)^χ^T̃_z(𝔸)^χχ^* is also semisimple with constituents in 𝒮̅_χ. We make the following elementary observation. Each element in 𝒮̅_χ is finite-dimensional. For almost all v, [z_v] is trivial, hence we have T̃_z(F_v)^χ_v ≅ T(F_v)⋊A^[z_v],χ_v. Besides, for almost all v, χ_v is unramified and, furthermore, the unique irreducible unramified constituent η̅_v^0 of Ind_T(F_v)^T̃_z(F_v)^χ_vχ_v is nothing but χ_v^*, the unique character that extends χ_v to T(F_v)⋊A^[z_v],χ_v by taking the value 1 on A^[z_v],χ_v. In particular, given any ⊗'_vη̅_v ∈ 𝒮̅_χ, η̅_v is 1-dimensional for almost all v. Hence ⊗'_vη̅_v is finite-dimensional. As in the local case, the reduction from T̃_z(𝔸) to T̃_z(𝔸)^χ turns out to be convenient without loss of information.
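The extensions occurring in 𝒮̅_χ are very constrained; the following toy computation (ours, under the stated index-2 hypotheses) illustrates what the local sets Irr(T̃_z(F_v)^χ_v, χ_v) can look like.

% Toy computation (ours): suppose at a place v we have A^{[z_v],\chi_v} = Z/2 = {1,a}
% and T~_z(F_v)^{\chi_v} = T(F_v) \rtimes Z/2, with (1,a)^2 = (1,1).  A character
% \bar\eta_v extending \chi_v must satisfy
\[
  \bar\eta_v(1,a)^2 = 1,
  \qquad
  \bar\eta_v\big((1,a)(t,1)(1,a)^{-1}\big) = \chi_v(a(t)) = \chi_v(t),
\]
% the second condition holding automatically because a fixes \chi_v.  Hence there
% are exactly two extensions, \bar\eta_v(1,a) = +1 or -1, differing by the sign
% character of Z/2 -- the familiar Clifford-theory picture for a subgroup of
% finite index.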
The following propositions are the global analogues of Claim <ref> and Claim <ref>. Any representation η̅ = ⊗'_vη̅_v from 𝒮̅_χ becomes χ-isotypic after being restricted to T(𝔸). According to Claim <ref>, at each place v, η̅_v is χ_v-isotypic after being restricted to T(F_v). Thus the statement follows. The induction Ind_T̃_z(𝔸)^χ^T̃_z(𝔸): 𝒮̅_χ → 𝒮_χ is bijective. And its inverse is given by sending ⊗'η_v to the unique irreducible constituent of Res_T̃_z(𝔸)^χ^T̃_z(𝔸)(⊗'η_v) which lies in 𝒮̅_χ. Similar to Claim <ref>, we have

Ind_T̃_z(𝔸)^χ^T̃_z(𝔸)(⊗'_vη̅_v) ≅ '⊗_vInd_T̃_z(F_v)^χ_v^T̃_z(F_v)η̅_v.

Now we recall from Claim <ref> that at each place v, the local induction Ind_T̃_z(F_v)^χ_v^T̃_z(F_v) gives a bijection between the irreducible constituents of Ind_T(F_v)^T̃_z(F_v)^χ_vχ_v and those of Ind_T(F_v)^T̃_z(F_v)χ_v. Therefore, Ind_T̃_z(𝔸)^χ^T̃_z(𝔸) is also bijective. At each place v, the inverse of Ind_T̃_z(F_v)^χ_v^T̃_z(F_v): Irr(T̃_z(F_v)^χ_v, χ_v) → Irr(T̃_z(F_v), χ_v) is given by sending η_v ∈ Irr(T̃_z(F_v), χ_v) to the unique irreducible constituent of Res η_v which lies in Irr(T̃_z(F_v)^χ_v, χ_v). Hence the global statement follows.

§.§ Multiplicity on the automorphic side

Let η = ⊗'_vη_v be an irreducible admissible smooth representation of T̃_z(𝔸). In this section, we determine the multiplicity of η in 𝒜(T̃_z):

m_η = dim Hom_T̃_z(𝔸)(η, 𝒜(T̃_z)).

In view of the decomposition (<ref>), we have

m_η = ∑_χ∈ H(T)/T̃_z(F)m_η,χ,

where m_η,χ is the contribution of χ to m_η:

m_η,χ := dim Hom_T̃_z(𝔸)(η, U_χ).

Hence it reduces to determining m_η,χ. We fix χ ∈ H(T) and aim to determine the multiplicity m_η,χ in the rest of this section. Furthermore, from now on, we assume η lies in 𝒮_χ (see (<ref>) for its definition), since otherwise we have m_η,χ = 0. Now, Proposition <ref> ensures that there exists a unique η̅ ∈ 𝒮̅_χ such that Ind_T̃_z(𝔸)^χ^T̃_z(𝔸)η̅ = η. We remind the reader that η̅ is finite-dimensional by Fact <ref>. According to Proposition <ref>, we are able to write U_χ via an induction in steps:

U_χ = Ind_T(𝔸)T̃_z(F)^χ^T̃_z(𝔸)χ^* = Ind_T̃_z(𝔸)^χ^T̃_z(𝔸)[Ind_T(𝔸)T̃_z(F)^χ^T̃_z(𝔸)^χχ^*].

As we have pointed out in Remark <ref>, the inner induction Ind_T(𝔸)T̃_z(F)^χ^T̃_z(𝔸)^χχ^* is semisimple with irreducible constituents from 𝒮̅_χ. According to Proposition <ref>, the outer induction preserves multiplicity, that is, the following holds:

m_η,χ = dim Hom_T̃_z(𝔸)^χ(η̅, Ind_T(𝔸)T̃_z(F)^χ^T̃_z(𝔸)^χχ^*).

Then by Frobenius reciprocity (Proposition <ref>), this can be simplified further as

m_η,χ = dim Hom_T(𝔸)T̃_z(F)^χ(Res_T(𝔸)T̃_z(F)^χ^T̃_z(𝔸)^χη̅, χ^*).

To determine the right-hand side, we take the T(𝔸)-actions and the T̃_z(F)^χ-actions into consideration separately. For the T(𝔸)-actions, according to Proposition <ref>, the restriction of η̅ to T(𝔸) is χ-isotypic, which agrees with χ^*. Therefore, the multiplicity can be further simplified by focusing on the T̃_z(F)^χ-actions:

m_η,χ = dim Hom_T̃_z(F)^χ(Res_T̃_z(F)^χ^T̃_z(𝔸)^χη̅, 1).

Finally, the restriction of η̅ to T(𝔸) is χ-isotypic, hence its restriction to T(F) is trivial. We may pass Res_T̃_z(F)^χ^T̃_z(𝔸)^χη̅ to the quotient of T̃_z(F)^χ by T(F), i.e. A(F)^[z],χ. We simply denote the representation passed to A(F)^[z],χ by η̅|_A(F)^[z],χ.
Precisely speaking, for t_a ∈ T̃_z(F)^χ lying above a ∈ A(F)^[z],χ, we define η̅|_A(F)^[z],χ(a) := η̅(t_a). Now the above multiplicity becomes

m_η,χ = dim Hom_A(F)^[z],χ(η̅|_A(F)^[z],χ, 1) = 1/|A(F)^[z],χ|∑_a∈ A(F)^[z],χtrη̅|_A(F)^[z],χ(a).

We have established: Let η be an irreducible constituent of U_χ and η̅ be the unique irreducible constituent of Ind_T(𝔸)T̃_z(F)^χ^T̃_z(𝔸)^χχ^* such that Ind_T̃_z(𝔸)^χ^T̃_z(𝔸)η̅ = η. Then the multiplicity of η in U_χ is given by

dim Hom_T̃_z(𝔸)(η, U_χ) = 1/|A(F)^[z],χ|∑_a∈ A(F)^[z],χtrη̅|_A(F)^[z],χ(a).

And the multiplicity of η in 𝒜(T̃_z) is given by

dim Hom_T̃_z(𝔸)(η, 𝒜(T̃_z)) = ∑_χ∈ H(T)/T̃_z(F)1/|A(F)^[z],χ|∑_a∈ A(F)^[z],χtrη̅|_A(F)^[z],χ(a).

§ MULTIPLICITY FORMULA FOR DISCONNECTED TORI

We retain the notations from the previous chapter. A pure inner form T̃_z has been fixed.

§.§ Settings on the dual side

Let W_F be the global Weil group of F. The reader may refer to Definition <ref> and Definition <ref> for the definitions of global L-parameters and of the equivalences and near equivalences between them. Moreover, we recall that, as we have summarised at the end of Section <ref>, the global Langlands correspondence gives a natural bijection between the near equivalence classes of global L-parameters and the set of Hecke characters. Throughout this chapter, we restrict ourselves to L-parameters that correspond to H(T), the set of Hecke characters trivial on A_T. Now we enter the disconnected setting. The finite group A acts on T, hence also naturally acts on X^*(T) and, furthermore, on T̂ = X^*(T)⊗_ℤℂ^× and the set of global L-parameters. In the disconnected scenario, we need to take the action of A into consideration (see Definition <ref> for the local counterpart) and weaken the notion of near equivalence further. For the fixed pure inner form T̃_z, we make the following definition: Let ϕ_1,ϕ_2: W_F → ^LT be two global L-parameters. We say ϕ_1 is nearly A(F)^[z]-equivalent to ϕ_2 if there exists a ∈ A(F)^[z] such that (a·ϕ_1)_v is equivalent to ϕ_2v as local L-parameters for each place v. Equivalently, we have a·χ_1 = χ_2, where χ_i is the Hecke character of T determined by ϕ_i (i = 1,2). We denote the near A^[z]-equivalence class of ϕ by [[ϕ]]. Let ϕ be a global L-parameter, and χ be the Hecke character of T determined by ϕ. After localisation, the local Langlands correspondence for disconnected tori provides a bijection

ι_v: Irr(T̃_z(F_v)^χ_v, χ_v) → Irr(π_0(S̃_ϕ_v^[z_v]), [z_v])

at each place v. We define the adelic L-packet associated to ϕ as

Π_ϕ = {⊗'_vInd_T̃_z(F_v)^χ_v^T̃_z(F_v)η̅_v | η̅_v ∈ Irr(T̃_z(F_v)^χ_v,χ_v), ι_v(η̅_v) = 1 for almost all v}.

Π_ϕ thus defined consists of smooth irreducible admissible representations of T̃_z(𝔸). According to Proposition <ref>, η̅_v = χ_v^* for almost all places v. Combining this with the fact that χ_v is unramified almost everywhere, we see that η̅_v is also unramified almost everywhere (with a one-dimensional subspace of spherical vectors). Now, using the same arguments as in the proof of Proposition <ref>, we can immediately see that the subspace of T̃_z(𝒪_v)-fixed vectors in Ind_T̃_z(F_v)^χ_v^T̃_z(F_v)η̅_v is one-dimensional for almost all places v. Therefore, the restricted tensor product is well-defined and is an irreducible admissible smooth representation of T̃_z(𝔸). Later, we will see in Proposition <ref> that Π_ϕ actually coincides with 𝒮_χ. Similarly, we may define

Π̅_ϕ = {⊗'_vη̅_v | η̅_v ∈ Irr(T̃_z(F_v)^χ_v,χ_v), ι_v(η̅_v) = 1 for almost all v},

and it is clear that Π̅_ϕ consists of smooth irreducible representations of T̃_z(𝔸)^χ.
Moreover, since we have seen in the proof of the previous lemma that η̅_v is 1-dimensional for almost all v, any element of Π̅_ϕ is finite-dimensional.

§.§ The pairing

Let ϕ be a global L-parameter and χ be the Hecke character determined by ϕ. We hope to define a pairing

⟨·,·⟩: A(F)^[z],χ×Π_ϕ → ℂ.

We recall that A(F)^[z],χ is a subgroup of A(F_v)^[z_v],χ_v for any v. Then, according to the short exact sequence (<ref>), for any v, there exists some (s_v,a) ∈ S̃_ϕ_v^[z_v] = Cent(ϕ_v, T̂⋊A(F_v)^[z_v]) lying above a, hence its inverse (a^-1(s_v^-1),a^-1) ∈ S̃_ϕ_v^[z_v] lies above a^-1. And according to the short exact sequence (<ref>), there exists some (t,a) ∈ T̃_z(F)^χ lying above a. We note that this further implies (t,a) ∈ T̃_z(F_v)^χ_v for any place v. According to the discussions in Section <ref> and Section <ref>, we have

(ϕ_v^-1, a^-1(s_v^-1)) ∈ Z^1(W_F_v, T̂ → T̂) and (z_v^-1, t) ∈ Z^1(F_v, T → T)

for any v. Now we define the pairing between a ∈ A(F)^[z],χ and η = ⊗'_vInd_T̃_z(F_v)^χ_v^T̃_z(F_v)η̅_v ∈ Π_ϕ as

⟨a,η⟩ := ∏_v⟨(ϕ_v^-1, a^-1(s_v^-1)), (z_v^-1, t)⟩^-1_TN·tr[ι_v(η̅_v)(s_v,a)],

where ⟨·,·⟩_TN is the Tate-Nakayama pairing of hypercohomology groups (see Section <ref>). We still need to show that ⟨a,η⟩ is well-defined. We let ρ_v := ι_v(η̅_v). For almost all v,

⟨(ϕ_v^-1, a^-1(s_v^-1)), (z_v^-1, t)⟩_TN^-1·tr[ρ_v(s_v,a)] = 1.

First, we note that, by the definition of the adelic L-packet Π_ϕ, ρ_v = 1 holds for almost all v's. To show that the first factor is trivial almost everywhere, we need some preparation. Suppose K is a finite Galois extension of F over which T̃_z splits as a semi-direct product T⋊A. Then almost all v's are unramified in the extension K/F, and for almost all v's, the image of z_v^-1 ∈ Z^1(K_v/F_v, T(K_v)) lies in T(𝒪_K_v). Due to the fact that T(𝒪_K_v) is cohomologically trivial when K_v/F_v is unramified, we see that, for almost all v's, there exists some y_v ∈ T(𝒪_K_v) such that

z_v^-1(σ_v) = y_v^-1σ_v(y_v)

for all σ_v ∈ Gal(K_v/F_v). Since (t,a) ∈ T̃_z(F)^χ, we have (t,a) ∈ T̃_z(𝒪_v) for almost all v's. Combining this with the facts that y_v ∈ T(𝒪_K_v) and y_v^-1(t,a)y_v = (ty_v^-1a(y_v),a), we find that ty_v^-1a(y_v) ∈ T(𝒪_v) for almost all v's. On the other hand, according to (<ref>), we see that (z_v^-1, y_va(y_v)^-1) is a 1-hypercoboundary, hence we can replace (z_v^-1, t) by (0, ty_v^-1a(y_v)) without changing the value of the Tate-Nakayama pairing. Due to the compatibility stated in Proposition <ref>, we have

⟨(ϕ_v^-1, a^-1(s_v^-1)), (z_v^-1, t)⟩_TN = ⟨(ϕ_v^-1, a^-1(s_v^-1)), (0, ty_v^-1a(y_v))⟩_TN = [ϕ_v]^-1(ty_v^-1a(y_v)) = χ_v(ty_v^-1a(y_v))^-1.

As we have pointed out in the previous paragraph, ty_v^-1a(y_v) ∈ T(𝒪_v) holds almost everywhere. Finally, χ_v is unramified almost everywhere, hence it follows that ⟨(ϕ_v^-1, a^-1(s_v^-1)), (z_v^-1, t)⟩_TN = 1 for almost all v's. ⟨a,η⟩ is independent of the choices of (s_v, a) ∈ S̃_ϕ_v^[z_v] = Cent(ϕ_v, T̂⋊A^[z_v]) for each v and of (t,a) ∈ T̃_z(F)^χ. Suppose we have another set of choices (s'_v,a) and (t', a). Then we have

s'_v = y_vs_v and t' = t_0t,

where y_v ∈ T̂^Γ_v and t_0 ∈ T(F). Then, using Fact <ref> and the fact that ρ_v restricts to [z_v] on π_0(T̂^Γ_v), we have

⟨(ϕ_v^-1, a^-1(s'_v^-1)), (z_v^-1, t')⟩_TN^-1·tr[ρ_v(s'_v,a)] = ⟨(ϕ_v^-1,a^-1(s_v^-1))+(0,a^-1(y_v^-1)), (z_v^-1, t)+(0,t_0)⟩_TN^-1·tr[ρ_v(y_v)ρ_v(s_v,a)] = ⟨(ϕ_v^-1, a^-1(s_v^-1)), (z_v^-1, t)⟩_TN^-1·[ϕ_v](t_0)·[z_v](a^-1(y_v^-1))·[z_v](y_v)tr[ρ_v(s_v,a)] = ⟨(ϕ_v^-1, a^-1(s_v^-1)), (z_v^-1, t)⟩_TN^-1·χ_v(t_0)·tr[ρ_v(s_v,a)].

The cancellation indicated above is due to a fixing [z_v].
Finally, we notice that t_0 ∈ T(F) implies χ(t_0) = 1, i.e. ∏_vχ_v(t_0) = 1. Therefore, after taking products, we obtain

∏_v⟨(ϕ_v^-1, a^-1(s'_v^-1)), (z_v^-1, t')⟩_TN^-1·tr[ρ_v(s'_v,a)] = ∏_v⟨(ϕ_v^-1, a^-1(s_v^-1)), (z_v^-1, t)⟩_TN^-1·tr[ρ_v(s_v,a)].

Thus we have constructed a pairing

⟨·,·⟩: A(F)^[z],χ×Π_ϕ → ℂ

and established its well-definedness.

§.§ The multiplicity formula

In this part, we hope to establish the multiplicity formula for disconnected tori. For this purpose, we need to relate objects on the dual side to those on the group side. First we recall that on the group side, we have defined 𝒮_χ (resp. 𝒮̅_χ) as the set of isomorphism classes of irreducible constituents of the induced representation Ind_T(𝔸)^T̃_z(𝔸)χ (resp. Ind_T(𝔸)^T̃_z(𝔸)^χχ), while on the dual side, we have defined the adelic L-packet Π_ϕ (resp. Π̅_ϕ), consisting of smooth irreducible representations of T̃_z(𝔸) (resp. T̃_z(𝔸)^χ). Now, a quick observation is the coincidence of the packets on the group side and the dual side: 𝒮_χ (resp. 𝒮̅_χ) coincides with Π_ϕ (resp. Π̅_ϕ). It suffices to show 𝒮̅_χ = Π̅_ϕ. First we note that the irreducible constituents of Ind_T(F_v)^T̃_z(F_v)^χ_vχ_v are exactly the representations in Irr(T̃_z(F_v)^χ_v,χ_v). And we note that η̅_v is T̃_z(𝒪_v)^χ_v-unramified for almost all v if and only if ι_v(η̅_v) = 1 for almost all v, according to the proofs of Fact <ref> and Lemma <ref>. Next, we establish the crucial fact that the pairing A(F)^[z],χ×Π_ϕ → ℂ defined in the previous section has an incarnation on the automorphic side. Given η = ⊗'_vInd_T̃_z(F_v)^χ_v^T̃_z(F_v)η̅_v ∈ Π_ϕ, the function a ↦ ⟨a, η⟩ is the character of η̅|_A(F)^[z],χ defined in Section <ref>, which is the finite-dimensional representation η̅ = ⊗'_vη̅_v passed to A(F)^[z],χ. We note that (s_v,a)^-1 = (a^-1(s_v^-1),a^-1) and rewrite

⟨a,η⟩ = ∏_v⟨(ϕ_v^-1, a^-1(s_v^-1)), (z_v^-1, t)⟩^-1_TN·tr[ρ_v(s_v,a)] = ∏_v⟨(ϕ_v^-1, a^-1(s_v^-1)), (z_v^-1, t)⟩^-1_TN·tr[ρ_v(a^-1(s_v^-1), a^-1)^-1].

On the other hand, according to the construction of the LLC for disconnected tori (<ref>), we have

⟨(ϕ_v^-1, a^-1(s_v^-1)), (z_v^-1, t)⟩^-1_TN·ρ_v(a^-1(s_v^-1), a^-1)^-1 = η̅_v(t,a).

Now it is clear that we have

⟨a, η⟩ = ∏_vtr[η̅_v(t,a)] = tr[⊗'_vη̅_v(t,a)] = tr[η̅|_A(F)^[z],χ(a)].

Combining the above proposition with Proposition <ref>, we have obtained the automorphic multiplicity formula: Let η be an irreducible constituent of 𝒜(T̃_z). Then the multiplicity of η in 𝒜(T̃_z) is given by

m_η = ∑_[[ϕ]]m_η,ϕ,

where [[ϕ]] runs over the near A^[z]-equivalence classes of global L-parameters (which correspond to Hecke characters of T trivial on A_T). Let χ be the Hecke character determined by ϕ; then the contribution of [[ϕ]] to the multiplicity is

m_η,ϕ = 1/|A(F)^[z],χ|∑_a∈ A(F)^[z],χ⟨a,η⟩.

Hence we have the following decomposition into a Hilbert direct sum:

L^2(A_TT̃_z(F)\T̃_z(𝔸)) = ⊕_η η^⊕ m_η.

§.§ Simplification for tori satisfying the Hasse principle

The pairing (<ref>) can be simplified when T satisfies the Hasse principle. We recall from Section <ref> that, in this case, H^1(W_F,T̂) → Hom_cts(T(𝔸)/T(F),ℂ^×) is an isomorphism. In other words, the equivalence classes of L-parameters (see Definition <ref>) coincide with the near equivalence classes of L-parameters (see Definition <ref>). Moreover, we find A(F)^[z],χ = A(F)^[ϕ],[z] and T̃_z(F)^χ = T̃_z(F)^[ϕ]. We consider S̃_ϕ^[z] := Cent(ϕ, T̂⋊A^[z]), which sits in a short exact sequence similar to (<ref>). Then for any a ∈ A(F)^[z],χ = A(F)^[ϕ],[z], there exists some (s,a) ∈ S̃_ϕ^[z] lying above a with inverse (a^-1(s^-1), a^-1) lying above a^-1.
We note that (a^-1(s^-1), a^-1) also lies in S̃_ϕ_v^[z_v] for each v. We arbitrarily choose (t,a) ∈ T̃_z(F)^χ = T̃_z(F)^[ϕ]. Then we have

(ϕ_v^-1, a^-1(s^-1)) ∈ Z^1(W_F_v, T̂ → T̂) for each v, and (z^-1, t) ∈ Z^1(F, T → T).

If we recall the long exact sequence (<ref>), then we find that the image of [(z^-1,t)] under the composition H^1(F, T → T) → H^1(𝔸, T → T) → H^1(𝔸/F, T → T) vanishes. By the compatibility between the local and global Tate-Nakayama dualities for hypercohomology (Theorem <ref>), we obtain

∏_v⟨(ϕ_v^-1, a^-1(s^-1)), (z_v^-1, t)⟩^-1_TN,local = ⟨(ϕ^-1, a^-1(s^-1)), (z^-1, t)⟩^-1_TN,global = ⟨(ϕ^-1, a^-1(s^-1)), 1⟩^-1_TN,global = 1.

Then the pairing ⟨·,·⟩: A(F)^[z],χ×Π_ϕ → ℂ can now be simplified into

⟨a,η⟩ = ∏_vtr[ι_v(η̅_v)(s,a)] = tr[⊗_vι_v(η̅_v)(s,a)],

where ⊗_vι_v(η̅_v) is a finite-dimensional representation of π_0(S̃_ϕ^[z]). Eventually, the decomposition reads as

L^2(A_TT̃_z(F)\T̃_z(𝔸)) = ⊕_η η^⊕ m_η, where m_η,ϕ = 1/|A(F)^[z],χ|∑_a∈ A(F)^[z],χtr[⊗_vι_v(η̅_v)(s,a)].

§ APPENDICES

§ HOMOLOGICAL ALGEBRA

§.§ Definitions of group hypercohomology and hyperhomology

Let G be a (not necessarily finite) group. First we recall the (unnormalised, in the terminology of <cit.>) bar resolution of ℤ (as a trivial G-module). We set B_0 = ℤG. When n ⩾ 1, we let B_n be the free ℤG-module on the set of all symbols [g_1⊗g_2⊗…⊗g_n] with g_i ∈ G. Then we define the following free resolution:

0 ← ℤ ← B_0 ← B_1 ← B_2 ← ⋯

where the augmentation map ϵ: B_0 → ℤ is the unique ℤG-morphism sending 1 ∈ ℤG to 1, and, for n ⩾ 1, ∂: B_n → B_n-1 is defined to be ∂ := ∑_i=0^n(-1)^i∂_i with

∂_0([g_1⊗⋯⊗g_n]) = g_1[g_2⊗⋯⊗g_n];
∂_i([g_1⊗⋯⊗g_n]) = [g_1⊗⋯⊗g_ig_i+1⊗⋯⊗g_n] for i = 1,2,…,n-1;
∂_n([g_1⊗⋯⊗g_n]) = [g_1⊗⋯⊗g_n-1].

Let A^∙ be a bounded complex of G-modules with differentials f^k: A^k → A^k+1. Now we consider a cochain complex C^∙(G,A^∙) whose n-th term is

C^n(G,A^∙) = ⊕_kHom_ℤG(B_n-k,A^k),

with differential d^n: C^n(G,A^∙) → C^n+1(G,A^∙) whose restriction to a factor Hom_ℤG(B_n-k,A^k) is the ℤ-linear map

Hom_ℤG(B_n-k, A^k) → Hom_ℤG(B_n-k, A^k+1) ⊕ Hom_ℤG(B_n+1-k, A^k)

defined by d^nc^k = f^k∘c^k + (-1)^kc^k∘∂_n+1-k. Then we define the group hypercohomology H^∙(G,A^∙) to be the cohomology of C^∙(G,A^∙). Similarly, we can define group hyperhomology. Let M_∙ be a bounded complex of G-modules with differentials g_k: M_k → M_k-1. Then we define a chain complex C_∙(G,M_∙) whose n-th term is

C_n(G,M_∙) = ⊕_kB_n-k⊗_ℤG M_k,

with differential d_n: C_n(G,M_∙) → C_n-1(G,M_∙) whose restriction to a factor B_n-k⊗_ℤG M_k is the ℤ-linear map

B_n-k⊗_ℤG M_k → B_n-k⊗_ℤG M_k-1 ⊕ B_n-k-1⊗_ℤG M_k

defined by d_n(b⊗m) = b⊗g_k(m) + (-1)^k∂(b)⊗m. And we define the group hyperhomology H_∙(G,M_∙) to be the homology of C_∙(G,M_∙). The notions of r-th hyper(co)cycle and hyper(co)boundary are self-evident.

§.§ Complexes of length 2

Group hyper(co)homology used in this work concerns complexes of length 2.

§.§.§ Hypercohomology

Let A be a G-module, and a ∈ A. We write ∂a to denote the 1-coboundary sending g ∈ G to a^-1g(a). Whenever we consider the hypercohomology of a complex of G-modules of length 2, we implicitly regard it as a complex concentrated at degrees 0 and 1: A^0 → A^1. We exhibit the hypercohomology groups in degrees 0 and 1. By definition,

H^0(G, A^0 → A^1) = ker(a ↦ (f(a),∂a)) = ker f ∩ (A^0)^G,

and

H^1(G, A^0 → A^1) = {(z,a_1): f(z)-∂a_1 = 0, z ∈ Z^1(G,A^0), a_1 ∈ A^1}/{(∂a_0, f(a_0)): a_0 ∈ A^0}.
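Two degenerate cases, which can be read off at once from the definition of C^n(G,A^∙) above, may serve as a sanity check (ours) for the indexing conventions.

% Sanity checks (ours), read off from C^n(G, A^0 -> A^1) = Hom(B_n, A^0) + Hom(B_{n-1}, A^1):
% (1) if A^1 = 0, hypercohomology reduces to ordinary group cohomology,
\[
  H^r(G,\, A^0 \to 0) \;\cong\; H^r(G, A^0);
\]
% (2) if A^0 = 0, the grading shifts by one,
\[
  H^r(G,\, 0 \to A^1) \;\cong\; H^{r-1}(G, A^1).
\]
% Both degenerations are consistent with the long exact sequence recorded next.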
Moreover, we have an important long exact sequence relating hypercohomology to the cohomology groups of A^0 and A^1. For any r ⩾ 1, the following sequence is exact:

⋯ → H^r-1(G,A^0) → H^r-1(G,A^1) → H^r(G, A^0 → A^1) → H^r(G, A^0) → ⋯,

where the first map is induced by f, the second map i is defined by sending any (r-1)-cocycle c_1 to the r-hypercocycle (0,c_1), and the third map p by sending any r-hypercocycle (c_0,c_1) to the r-cocycle c_0. One can apply Grothendieck's spectral sequence for hypercohomology to this special case. Alternatively, we can prove it directly as below. Exactness at H^r-1(G,A^1): i(c_1) is an r-hypercoboundary if and only if f(c_0) - ∂c_1^* = c_1 for some (r-1)-cocycle c_0 and (r-2)-cochain c_1^*. And this happens if and only if c_1 and f(c_0) are cohomologous. Exactness at H^r(G, A^0 → A^1): p∘i = 0 is straightforward by definition. For the other inclusion, if p(c_0,c_1) is a coboundary, then c_0 = ∂c_0^* for some (r-1)-cochain c_0^*, and then (c_0,c_1) = (∂c_0^*, c_1) differs from (0, c_1-f(c_0^*)) by the hypercoboundary (∂c_0^*, f(c_0^*)). Exactness at H^r(G,A^0): Given an r-cocycle c_0 in A^0, by definition, there exists an r-hypercocycle (c_0,c_1) for some c_1 if and only if f(c_0) = ∂c_1 for some (r-1)-cochain c_1 in A^1.

§.§.§ Hyperhomology

When considering the hyperhomology of a complex of length 2, we implicitly see it as a complex concentrated at degrees 0 and -1: A → B. Then we readily find that

H_0(G, A → B) = {(a_0,b_1): g(a_0) = ∂b_1, a_0 ∈ A, b_1 ∈ C_1(G,B)}/{(∂a_1, g(a_1)-∂b_2): a_1 ∈ C_1(G,A), b_2 ∈ C_2(G,B)}.

And similar to hypercohomology, we have a long exact sequence. For any r ⩾ 1, the following sequence is exact:

⋯ → H_r(G,A) → H_r(G,B) → H_r-1(G, A → B) → H_r-1(G, A) → ⋯,

where the first map is induced by g, the second map i is defined by sending any r-cycle b_r to the (r-1)-hypercycle (0,b_r), and the third map p by sending any (r-1)-hypercycle (a_r-1,b_r) to the (r-1)-cycle a_r-1.

§.§ On restrictions of homology groups

In this part, we remind the readers how the restriction maps between homology groups are constructed. In fact, the notion of restriction between homology groups is exactly dual to the more familiar notion of corestriction between cohomology groups. We refer the readers to <cit.> for more details. Let H be a subgroup of G of finite index, and X be a G-module. We consider the norm map between the 0-th homology groups of G and H, i.e. the coinvariants with respect to G and H:

Nm: X_G → X_H, x ↦ ∑_σ∈ H\G σ̅x,

where σ̅ is a representative of σ. One can see immediately that the choices of the σ̅'s do not affect the image of the sum in X_H, hence the map Nm is well-defined. Now, the restriction is defined to be the unique map between the δ-functors H_*(G,-) and H_*(H,-) that extends Nm. Alternatively, we can define the restriction in more down-to-earth terms. Let B^G_∙ be the bar resolution of ℤ as a ℤ[G]-module. We note that B^G_∙ is also a projective resolution of ℤ as a ℤ[H]-module. Since we have B^G_n⊗_ℤ[G]X = (B^G_n⊗_ℤX)_G and B^G_n⊗_ℤ[H]X = (B^G_n⊗_ℤX)_H for each n ⩾ 0, we can consider the norm map between them:

Nm: B^G_n⊗_ℤ[G]X → B^G_n⊗_ℤ[H]X.

In view of the following immediate observation, Nm induces maps on the level of homology, which we call the restrictions. Nm thus defined is a chain map. In other words, we have Nm∘∂ = ∂∘Nm. For applications, we still hope to transfer the above construction to the standard chain complex of H, so that the restriction on the level of homology is induced by a chain map between the standard chain complexes B^G_∙⊗_ℤ[G]X = C_∙(G,X) → C_∙(H,X) = B^H_∙⊗_ℤ[H]X, which we will also call Res. Certainly, such a chain map is far from unique.
And our construction below will rely on the choice of a section s: H\G → G. Let B^H_∙ be the bar resolution of ℤ as a ℤ[H]-module. We set out to find a homotopy equivalence between B^G_∙ and B^H_∙ as complexes of ℤ[H]-modules. We arbitrarily fix a section s: H\G → G of the projection to right H-cosets p: G → H\G, such that s(H) = 1. And we define t: G → H by t(g) = g[s∘p(g)]^-1. Now we can define a chain map f_*: B^G_∙ → B^H_∙ by setting

f_n(g_0(g_1⊗g_2⊗⋯⊗g_n)) := t(g_0)[t(g_0)^-1t(g_0g_1)⊗t(g_0g_1)^-1t(g_0g_1g_2)⊗⋯⊗t(g_0…g_n-1)^-1t(g_0…g_n)].

One can straightforwardly check that this indeed defines a chain map, that is, the f_n's commute with the differentials of the complexes. Moreover, f_∙ is compatible with the augmentations, in the sense that the square with rows ϵ: B^G_∙ → ℤ and ϵ: B^H_∙ → ℤ, left vertical map f_∙ and right vertical map the identity on ℤ, commutes. Then by basic properties of projective resolutions <cit.>, f_∙ thus constructed is a homotopy equivalence. Finally, the restriction map on the chain level is defined as the composition

Res := (-⊗_ℤ[H]X)(f_n)∘Nm: C_n(G,X) → C_n(H,X).

Evidently, Res is a chain map. In this setting, we have the following explicit formula in degree 1. The restriction map in degree 1, Res: C_1(G,X) → C_1(H,X), sends x ∈ C_1(G,X) to Res(x) =: x̂ given by

x̂_h = ∑_(g,σ)s(σ)(x_g),

where the sum is taken over all the pairs (g,σ) ∈ G×H\G satisfying s(σ)g = hs(σp(g)). We understand x as an element ∑_g∈G g⊗x_g ∈ B^G_1⊗_ℤ[G]X. In view of (<ref>), x̂ := Res(x) is the following element in C_1(H,X) = B^H_1⊗_ℤ[H]X:

f_1[Nm(∑_g∈G g⊗_ℤ[G]x_g)] = f_1[∑_g∈G∑_σ∈H\G s(σ)(g)⊗_ℤ[H]s(σ)(x_g)] = ∑_g∈G∑_σ∈H\G t[s(σ)g]⊗_ℤ[H]s(σ)(x_g),

where we have used the fact that the map Nm is independent of the choices of coset representatives. Thus, in particular, we have used the fixed section s: H\G → G when calculating the map Nm. We also note that the fact t[s(σ)] = 1 has been applied. Then we have

x̂_h = ∑_(g,σ)s(σ)(x_g),

where the sum is taken over all the pairs (g,σ) ∈ G×H\G satisfying t[s(σ)g] = h. Now the formula in the desired form follows from the definition of the map t: G → H, that is, t(g) = gs(p(g))^-1. As a final remark, we note that, although a different choice of section s': H\G → G leads to a different restriction map on the chain level, the induced restriction map between the homology groups remains the same. Indeed, s' gives rise to another chain map f'_∙. Again, by well-known facts about projective resolutions <cit.>, f_∙ and f'_∙ are chain homotopic, hence induce the same map on the level of homology.

§ THE LANGLANDS CORRESPONDENCE FOR CONNECTED TORI

§.§ The local Langlands correspondence

In this subsection, we briefly recall how Langlands established the Local Langlands Correspondence (abbreviated as LLC) for (connected) tori. Most proofs in this section are omitted. Let F be a p-adic field. Let T be an F-torus whose cocharacter group is X = X_*(T). Then the absolute Galois group Gal(F̅/F) acts on X by

⟨σ·x, a⟩ := σ⟨x, σ^-1(a)⟩

for σ ∈ Gal(F̅/F), x ∈ X and a ∈ F̅. In particular, there is a natural action of the absolute Weil group W_F (as a subgroup of Gal(F̅/F)) on X, and furthermore on the dual torus T̂ = Hom(X,ℂ^×). The goal of this section is to establish an isomorphism functorial in T:

H^1(W_F,T̂) ≅ Hom_cts(T(F),ℂ^×),

which we call the LLC for tori. And sometimes it is convenient to write the LLC in the form of a pairing:

H^1(W_F,T̂) × T(F) → ℂ^×,

which we call the Langlands pairing. We consider a finite Galois extension K of F over which T splits.
We recall that the relative Weil group W_K/F is the quotient W_F/W_K^c, where W_K^c denotes the closure of the commutator subgroup of W_K. The natural map induced by the quotient

H^1(W_K/F,T̂) → H^1(W_F,T̂)

is bijective. Let z: W_F → T̂ be a continuous 1-cocycle. After restricting it to a 1-cocycle on W_K, we notice that it becomes a continuous homomorphism due to the trivial Galois action. By continuity, the kernel of this homomorphism must be closed. Now, the commutativity of T̂ implies that z is trivial on W_K^c, the closure of the commutator subgroup of W_K. We call the continuous 1-cocycle W_K/F = W_F/W_K^c → T̂ thus determined z̅. It is immediate to check that z ↦ z̅ actually gives the inverse of the natural homomorphism. Therefore, it suffices to work with H^1(W_K/F, T̂). Now, we construct the isomorphism

H^1(W_K/F,T̂) → Hom_cts(T(F),ℂ^×)

in two steps.

§.§.§ Step 1

The goal of our first step is to construct a functorial isomorphism

ℒ: H_1(W_K/F,X) ≅ T(F).

Since W_K/F acts on X via the projection W_K/F ↠ Gal(K/F), the kernel K^× acts on X trivially. This implies that the first homology group H_1(K^×,X) is canonically isomorphic to K^×⊗_ℤX, which in turn is isomorphic to the set of K-points of T:

H_1(K^×, X) ≅ K^×⊗_ℤX ≅ T(K).

It is convenient to write elements of K^×⊗_ℤX as maps from K^× to X with finite support. And the isomorphism above can be written explicitly:

H_1(K^×,X) ∋ [x] ↦ ∏_a∈K^×x_a(a) ∈ T(K),

and we denote this identification by ∼_L. Since K^× is of finite index in W_K/F, we can consider the natural restriction map between the homology groups, and then compose the restriction with the identification made above:

H_1(W_K/F,X) → H_1(K^×,X) ≅ T(K).

We denote the composition by ℒ. In <cit.>, Langlands shows that ℒ is injective with image T(K)^Gal(K/F) = T(F). This finishes the construction of the isomorphism (<ref>). We note that ℒ is functorial in T, since both Res and ∼_L are so.

§.§.§ Step 2

The second step uses the natural pairing between the cohomology group and the homology group. Let B_∙ be the bar resolution (<ref>) of ℤ as an abstract ℤ[W_K/F]-module. By definition, we have

H_1(W_K/F, X) = H_1(B_∙⊗_ℤ[W_K/F]X).

Using the fact that ℂ^× is an injective abelian group, we obtain

Hom(H_1(W_K/F, X), ℂ^×) = Hom(H_1(B_∙⊗_ℤ[W_K/F]X), ℂ^×) = H^1(Hom(B_∙⊗_ℤ[W_K/F]X, ℂ^×)) = H^1(Hom_ℤ[W_K/F](B_∙, Hom(X,ℂ^×))) = H^1_abs(W_K/F,T̂).

In other words, we have obtained a natural pairing

H^1_abs(W_K/F,T̂) × H_1(W_K/F,X) → ℂ^×.

Explicitly, given an abstract 1-cocycle t_w and a 1-cycle x_w, their pairing is given by

⟨t,x⟩ := ∏_w∈W_K/Fx_w(t_w),

where we have made the identification X = X_*(T) ≅ X^*(T̂) = Hom_alg(T̂,ℂ^×). So far, we have established the following functorial isomorphisms:

H^1_abs(W_K/F,T̂) ≅ Hom(H_1(W_K/F,X),ℂ^×) ≅ Hom(T(F),ℂ^×).

Finally, one can take continuity into consideration. Langlands shows in <cit.>: Let z be a 1-cocycle W_K/F → T̂. Then z is continuous if and only if its image under the above isomorphism is a continuous character. In view of this, for the continuous cohomology, we have the following functorial isomorphism:

H^1(W_F,T̂) ≅ H^1(W_K/F,T̂) ≅ Hom_cts(T(F),ℂ^×).

This is the isomorphism constructed by Langlands in <cit.>, and we call it Langlands' convention. In this work, unless otherwise specified, whenever we say LLC for tori, we are obeying Langlands' convention.

§.§.§ Reformulation in terms of L-groups

The L-group of T is defined to be the semi-direct product ^LT := T̂⋊W_F. And an L-parameter for T is a continuous morphism ϕ: W_F → ^LT = T̂⋊W_F such that, for any w ∈ W_F, the projection of ϕ(w) ∈ ^LT onto W_F is w.
From this definition, we may write ϕ(w) = (ϕ_0(w),w) ∈ T̂⋊W_F. And one immediately observes that ϕ is an L-parameter if and only if ϕ_0: W_F → T̂ lies in Z^1(W_F,T̂), the set of continuous 1-cocycles. By abuse of notation, we usually drop the subscript “0” and refer to the corresponding 1-cocycle also as ϕ when no ambiguity occurs. Two L-parameters ϕ_1 and ϕ_2 (as maps W_F → ^LT) are said to be equivalent if they are conjugate under T̂, that is, there exists some t ∈ T̂ such that ϕ_1(w) = t^-1ϕ_2(w)t for any w ∈ W_F. Clearly, this happens if and only if their corresponding 1-cocycles are cohomologous. Indeed, when seen as 1-cocycles, ϕ_1 differs from ϕ_2 by w ↦ t^-1w(t), which is a 1-coboundary. Let Φ(T) be the set of equivalence classes of L-parameters for T. Then the discussions in the previous paragraphs point to the fact that Φ(T) is in natural 1-1 correspondence with the continuous cohomology group H^1(W_F,T̂). Now the LLC for tori can be reformulated as a bijection

Φ(T) ⟷ Π(T),

where Π(T) is the set of continuous characters of T(F). By abuse of notation, [ϕ] may be used to refer to one of the following three: (i) the equivalence class of ϕ as an L-parameter, (ii) the cohomology class of ϕ as a 1-cocycle, and (iii) the character of T(F) determined by ϕ under the LLC.

§.§.§ Deligne's convention

We remark that there are two normalisations of the LLC for tori. Indeed, if [ϕ] ↦ χ_[ϕ] is the LLC for tori under Langlands' convention, then the map [ϕ] ↦ χ_[ϕ]^-1 is clearly an isomorphism from H^1(W_K/F,T̂) to Hom_cts(T(F),ℂ^×) as well. We shall call this “the LLC under Deligne's convention”. Accordingly, in line with Deligne's convention, we shall modify the key isomorphism H_1(W_K/F,X) ≅ T(F), and use 𝒟 to denote the modified version:

𝒟: H_1(W_K/F,X) → H_1(K^×, X) → T(K), [x] ↦ ∏_a∈K^×x_a(a)^-1,

where the first map is Res. In comparison with (<ref>), here we introduce an inverse when identifying H_1(K^×, X) with T(K), and we call this modified identification ∼_D.
After composing Res defined above with Deligne's identification ∼_D, we have the following chain-level map ϕwhich induces 𝒟: ϕ:C_1(W_K/F,X) → T(K).It is given by sending x ∈ C_1(W_K/F, X) to the following element in T(K)∏_a∈ K^×x̂_a(a^-1) = ∏_σ,τ,a⟨σ(x_σ^-1(a_σ,τ^-1a)s(τ)),a^-1⟩= ∏_σ,τ,a⟨σ(x_as(τ)),a_σ,τ^-1σ(a)^-1⟩,where the product is taken over triples(σ,τ,a) ∈Gal(K/F) ×Gal(K/F) × K^×.The last equality above is obtained by changing the variable σ^-1(a_σ,τ^-1a) ↦ a for fixed σ and τ. In view of Remark <ref>, we note that although a different choice of the section s':Gal(K/F)→ W_K/F leads to another ϕ', both ϕ and ϕ' induce the map 𝒟. §.§ The global Langlands correspondence Let F be a number field,and C_F = F^×\𝔸_F be its idele class group. Let W_F be the global Weil group of F. Let T be a torus defined over F with cocharacter groups X. We fix a finite Galois extension K of F over which both T splits.The global Langlands correspondence (abbreviated as GLC) for tori reads as:There is a natural surjective homomorphism H^1(W_F,T̂) ↠Hom_cts(T(𝔸)/T(F),ℂ^×),whose kernel is finite and consists of locally trivial cohomology classes(defined as lying in the kernel of(<ref>) below). Moreover, this iscompatible with the LLC (in the sense that the diagramH^1(W_F,T̂)[rr, two heads] [d] Hom_cts(T(F)\ T(𝔸_F),ℂ^×)[d]∏_vH^1(W_F_v,T̂)[rr, "LLC","∼"'] ∏_vHom_cts(T(F_v),ℂ^×)commutes). We sketch the construction and refer the reader to <cit.> for details (despite the fact that local and global are handled simultaneously in loc. cit.). The construction of the GLC is completely the same as that of the LLC, and we can divide it into two steps. Step 1. Recall from our setting that K is a Galois extension of F over which T splits. We consider the compositionH_1(W_K/F,X)H_1(C_K,X)C_K⊗_ℤX=T(K)\ T(𝔸_K).Analogous to Proposition <ref>, one can show that the composition above is injective with image (T(K)\ T(𝔸_K))^Gal(K/F). Step 2. We consider the perfect homology-cohomology pairing between the (abstract) homology and cohomology groups:H^1_abs(W_K/F,T̂) × H_1(W_K/F,X) →ℂ^×.After substituting results from Step 1 and taking care of continuity, we have an isomorphismH^1(W_K/F,T̂) Hom_cts((T(K)\ T(𝔸_K))^Gal(K/F),ℂ^×). Finally, in order to relate this to T(F)\ T(𝔸_F), we note that the short exact sequence 1 → T(K) → T(𝔸_K) → T(K)\ T(𝔸_K) → 1gives rise to a long exact sequence1 → T(F) → T(𝔸_F) → (T(K)\ T(𝔸_K))^Gal(K/F)→ H^1(F,T) → H^1(𝔸,T). It is a well-known fact <cit.> that the kernel ofH^1(F,T) → H^1(𝔸,T) = ⊕_vH^1(F_v,T) is finite. The kernel is usually denoted by ^1(F,T) and classes in it are said to be locally trivial. Now, we note that T(F)\ T(𝔸_F) is a subgroup of (T(K)\ T(𝔸_K))^Gal(K/F) with finite index. In turn, this gives us a surjective morphism with finite kernelHom_cts((T(K)\ T(𝔸_K))^Gal(K/F),ℂ^×) ↠Hom_cts( T(F)\ T(𝔸_F),ℂ^×).The composition of (<ref>) with (<ref>) gives the desired surjection in Theorem <ref>. One can check the commutativity of the diagram (<ref>)by inspecting the local-global compatibility in Step 1 and Step 2. The verification is immediate. Now, in view of the commutative diagram (<ref>), one can readily see that the kernel of the map (<ref>) comprises the locally trivial classes. When the map (<ref>) is injective, we say that T satisfies Hasse principle. In this particular case, we have T(F)\ T(𝔸_F) = (T(K)\ T(𝔸_K))^Gal(K/F), hence the global Langlands correspondence is actually an isomophismH^1(W_F,T̂) Hom_cts(T(𝔸)/T(F),ℂ^×). 
Analogous to the local Langlands correspondence, we can reformulate global Langlands correspondence in terms of L-parameters. The rational structure of T induces an action of the absolute Galois group Gal(F̅/F) on X^*(T), hence on T = X^*(T)⊗_ℤℂ^×. Let the global Weil group of F be W_F. Then after composing with the surjection W_F→Gal(F̅/F) (see <cit.>), we have obtained an action of W_F on T. And we can define the global L-group to be the semidirect product via the action indicated above: ^LT := T̂⋊ W_F. Then global L-parameters along with equivalences between them can be defined in a similar way:A global L-parameter is a continuous L-morphism W_F→^LT. Two global L-parameters ϕ_1,ϕ_2: W_F→^LT are said to be equivalent if they are conjugate by some s ∈T. And again, a global L-parameter can also be regarded as a continuous 1-cocycle W_F→T̂ and the equivalence classes are in 1-1 correspondence to the continuous cohomology group H^1(W_F,T̂). We will use these two interpretations of global L-parameters interchangeably. Let v be a place of F. Note that given a global L-parameter ϕ∈ Z^1(W_F,T̂), after composing the natural map W_F_v→ W_F (see <cit.>), we obtain a local L-parameter ϕ_v: W_F_v→T̂. And we call ϕ_v the localisation of ϕ at v. We can now reformulate the global Langlands correspondence as a natural surjection from the equivalence classes of global L-parameters to the set of Hecke characters of T. To obtain a bijection, we need to weaken the equivalence to “near equivalence”, which we define below: Let ϕ_1,ϕ_2: W_F→^LT be two global L-parameters. We say ϕ_1 is nearly equivalent with ϕ_2 if the localisations ϕ_1v and ϕ_2v are equivalent as local L-parameters at each place v. Let the Hecke character determined by ϕ_i be χ_i, and we can factorise χ_i = ⊗_vχ_iv (i = 1,2). Then from the local-global compatibility between the local and global Langlands correspondences, we immediately see that ϕ_1 is nearly equivalent with ϕ_2 if and only if [ϕ_1v] = [ϕ_2v], in other words, χ_1v = χ_2v for each v. And this happens if and only if χ_1 = χ_2. Thus the global Langlands correspondence becomes a bijection between the near equivalence classes of global L-parameters and the set of Hecke characters.§ THE TATE-NAKAYAMA DUALITY §.§ The Tate-Nakayama TheoremIn this part, we recall a theorem concerning group cohomology of finite groups.Let G be a finite group. We call a G-module C a class module if for all subgroups H of G:(i)H^1(H,C) = 0;(ii)H^2(H,C) is cyclic of order #H. (We call a generator γ a fundamental class.)The motivation as well as the most important examples for this definition originates from class field theory. Let F be a p-adic field and K/F a finite Galois extension. Then as a Gal(K/F)-module, the multiplicative group K^× is a class module.Let F be a number field and K/F a finite Galois extension. Then as a Gal(K/F)-module, the idele class group C_K = I_K/K^× is a class module. Now we can state the Tate-Nakayama Theorem, which provides an isomorphism which we will call the Tate-Nakayama isomorphism. Let G be a finite group, and C a G-class module with fundamental class γ. Let M be a G-module that satisfies Tor_1^ℤ(M,C) = 0. Then the map defined by taking cup product with γĤ^n(G,M)→Ĥ^n+2(G,M⊗_ℤ C)x↦ x∪γis an isomorphism, for each n ∈ℤ. See <cit.>.In some sense, the converse of the above theorem is also true <cit.>. §.§ The local Tate-Nakayama duality Although the Tate-Nakayama Theorem ispurely algebraic in nature, it has abundant implications in arithmetic. 
Let K/F be a finite Galois extension of p-adic fields. Then one sees that G = Gal(K/F), C = K^× and M = ℤ satisfy conditions in the Tate-Nakayama Theorem. If we let n=-2, then one obtainsGal(K/F)^ab= H_1(K/F,ℤ) = Ĥ^-2(K/F,ℤ)Ĥ^0(K/F, K^×) = F^×/N_K/FK^×,which recovers the local Artin map. More generally, suppose that F is a p-adic field, T is a F-torus and K/F is a Galois extension over which T splits. We let X = X_*(T) be the cocharacter group of T. Then the Tate-Nakayama theorem yields duality results.When n = -2, the isomorphism reads as Ĥ^-2(K/F,X) ≅Ĥ^0(K/F,T(K)) = T(F)/N_K/F(T(K)).We recallan elementary fact from <cit.> that the cup product induces an isomorphism:Ĥ^-2(K/F,X) ≅Ĥ^2(K/F,X^*(T))^*.Hence we have a perfect pairing between H^2(K/F, X^*(T)) and T(F)/Nm T(K). After taking limits, we obtain the duality between H^2(F,X^*(T)) and T(F)^∧, where T(F)^∧ is the completion of T(F) with respect to open subgroups of finite index (i.e. the norm subgroups). We can rephrase this if we consider the short exact sequence given by exponential map1 → X^*(T) → X^*(T)⊗ℂ=Lie(T̂)X^*(T)⊗ℂ^×=T̂→ 1.Indeed, the middle term is uniquely divisible and hence has trivial cohomology. Then the associated long exact sequence of cohomology groups gives an isomorphism H^1(F,T̂) ≅ H^2(F,X_*(T)). To summarise, we recover the finite-order part of LLC: H^1(F,T̂) ≅Hom_cts(T(F)^∧,ℚ/ℤ) = Hom_cts(T(F),ℂ^×)_fin,where the subscript “fin” refers to finite-order characters.When n = 0, the Tate-Nakayama isomorphism reads asĤ^0(K/F,X)≅ H^2(K/F,T(K)).And similarly, after taking limits, we have the duality between H^0(F,X)^∧ and H^2(F,T).We are particularly interested in the case of n = -1. We have the isomorphism Ĥ^-1(K/F,X) ≅ H^1(K/F, T(K)).Again, according to <cit.>, we have an isomorphism induced by the cup product:Ĥ^-1(K/F,X) ≅Ĥ^1(K/F, X^*(T))^*.Hence we have a perfect pairing between finite groups H^1(K/F,X^*(T)) and H^1(K/F,T(K)). The inflations H^1(K/F, T(K))→ H^1(F, T) and H^1(K/F, X) → H^1(F,X) are isomorphisms. By the Hochschild-Serre spectral sequence, we have an exact sequence1 → H^1(K/F, T(K)) → H^1(F,T) → H^1(K,T)^Gal(K/F).By Hilbert 90, we have H^1(K,T) = 1. Hence H^1(K/F, T(K))→ H^1(F, T) is bijective. Since T splits over K, Gal(K̅/K) acts trivially on X. And similarly, combining the Hochschild-Serre exact sequence and H^1(K,X) = 1, we conclude that H^1(K/F, X) → H^1(F,X) is bijective. Taking this fact into consideration, we have obtained the duality between H^1(F,X^*(T)) and H^1(F,T), i.e. a perfect pairingH^1(F,X^*(T)) × H^1(F,T) →ℂ^×.Let Γ := Gal(F̅/F) be the absolute Galois group of F. In <cit.>, Kottwitz reformulates this asH^1(F, T) ≅π_0(T̂^Γ)^*.Indeed, we consider the long exact sequence of cohomology groups in(<ref>), and find H^1(F,X^*(T))≅T̂^Γ/exp(LieT̂) =T̂^Γ/(T̂^Γ)^∘ = π_0(T̂^Γ).In <cit.>, Kottwitz generalises (<ref>) to general connected reductive groups and establish a functorial isomorphism H^1(F, G) ≅π_0(Z(Ĝ)^Γ)^*.To be precise, we let 𝒞 be the category of connected reductive groups over F as objects with normal homomorphisms (see Section 1.8 of <cit.> for the definition) as morphisms. Now, G ↦ H^1(F,G) and G↦π_0(Z(Ĝ)^Γ)^* are isomorphic as functors from 𝒞 to the category of sets.In virtue of Kottwitz's reformulation, there is a natural pairingT̂^Γ× H^1(F,T) →ℂ^×,which we call the Tate-Nakayama pairing. §.§.§ An explicit map of Tate-Nakayama duality on the level of cocyclesFor the case of n = -1, we explicitly exhibit the Tate-Nakayama isomorphism on the level of cocycles. 
We fix a section s:Gal(K/F) → W_K/F such that s(1)=1. As in (<ref>), this gives rise to a normalised 2-cocyle a ∈ Z^2(Gal(K/F), K^×), which represents the fundamental class in H^2(Gal(K/F), K^×). Meanwhile, we recall that Ĥ^-1(K/F,X) is the subgroup of the Gal(K/F)-coinvariants consisting of norm 0 elements. Now taking cup-product with a gives the Tate-Nakayama isomorphism H_0(W_K/F,X)_0 H^1(K/F,T(K)).According to <cit.>, it is induced by the following map on the level of cocycles:∪ a: C_0(K/F,X)_0 → Z^1(K/F, T) μ ↦(ρ↦∏_σ∈Gal(K/F)⟨σ(μ), σ a_σ^-1,ρ⟩). Wemodify the map (on the level of cocycles) ∪ a for later use. Consider the following element in T(K)β := ∏_σ∈Gal(K/F)⟨σ(μ), a_σ, σ^-1⟩,and the 1-coboundary ∂β∈ B^1(Gal(K/F), T(K)) sending ρ to β^-1ρ(β). We define a ψ∈ Z^1(Gal(K/F), T(K)) by letting ψ(μ) := ∪ a(μ) + ∂β. We note that ∪ a and ψ differ by a 1-coboundary, hence induce the same isomorphism (<ref>) on the level of cohomology.Some elementary simplifications yield:The map ψ is given byψ: C_0(K/F,X)_0 → Z^1(K/F, T)μ ↦(ρ↦∏_σ∈Gal(K/F)⟨ρσ(μ),a_ρ,σ⟩).The 2-cocycle relation for a gives σ a_σ^-1,ρ = a_σ, σ^-1ρ^-1a_σ,σ^-1andρ(a_σ,σ^-1) = a_ρσ,σ^-1a_ρ, σ.∪ a (μ)(ρ) +∂β= ∏_σ∈Gal(K/F)⟨σ(μ), σ a_σ^-1,ρ⟩(∏_σ∈Gal(K/F)⟨σ(μ), a_σ, σ^-1⟩)^-1ρ(∏_σ∈Gal(K/F)⟨σ(μ), a_σ, σ^-1⟩)= ∏_σ∈Gal(K/F)⟨σ(μ), a_σ,σ^-1ρ^-1 a_σ,σ^-1a_σ, σ^-1^-1⟩∏_σ∈Gal(K/F)⟨ρσ(μ),ρ(a_σ,σ^-1) ⟩= ∏_σ∈Gal(K/F)⟨σ(μ), a_σ,σ^-1ρ^-1⟩∏_σ∈Gal(K/F)⟨ρσ(μ), a_ρσ,σ^-1a_ρ,σ⟩.If we apply the change of variable σ↦ρσ to the first product, then we have∏_σ∈Gal(K/F)⟨σ(μ), a_σ,σ^-1ρ^-1⟩ = ∏_σ∈Gal(K/F)⟨ρσ(μ), a_ρσ,σ^-1^-1⟩.After substituting this into the above formula, we obtain∪ a (μ)(ρ) +∂β = ∏_σ∈Gal(K/F)⟨ρσ(μ), a_ρσ,σ^-1^-1⟩∏_σ∈Gal(K/F)⟨ρσ(μ), a_ρσ,σ^-1 a_ρ,σ⟩= ∏_σ∈Gal(K/F)⟨ρσ(μ),a_ρ,σ⟩. We remind the readers that a different choice of the section s':Gal(K/F)→ W_K/F yields another 2-cocycle a' cohomologous to a. Hence, the map ψ' defined in terms of a' induces the same map (cup-product) on the level of cohomology as ψ. §.§ The global Tate-Nakayama dualityLet F be a number field,and C_F = F^×\𝔸_F be its idele class group. Let T be a torus defined over F with cocharacter group X. We fix a finite Galois extension K of F over which T splits. We apply the Tate-Nakayama isomorphism (Theorem <ref>) to Gal(K/F) and C_K. when n = -1, the isomorphism reads as Ĥ^-1(K/F, X) ≅ H^1(K/F, T(𝔸_K)/T(K)).According to an argument similar to that made in Section <ref>, the isomorphism above can be reformulated as a duality (perfect pairing) H^1(K/F,X^*(T)) × H^1(K/F, T(𝔸_K)/T(K)) →ℂ^×.Let Γ be the absolute Galois group of F. Let 𝔸̅ be the direct limit of 𝔸_K, where K ranges over finite Galois extensions of F. After passing to limits, we have an isomorphism à la Kottwitz:H^1(Γ, T(𝔸̅)/T(F̅)) =: H^1(𝔸/F, T) ≅π_0(T̂^Γ)^*,and a global Tate-Nakayama pairing:T̂^Γ× H^1(𝔸/F,T) →ℂ^×.One can easily check its compatibility with the local Tate-Nakayama pairing (<ref>).
http://arxiv.org/abs/2312.16389v1
{ "authors": [ "Yi Luo" ], "categories": [ "math.RT", "math.NT" ], "primary_category": "math.RT", "published": "20231227033042", "title": "On the multiplicity formula for discrete automorphic representations of disconnected tori" }
arrows.meta,bending,decorations.markings,intersections arc arrow/.style args= to pos #1 with length #2 decoration= markings,mark=at position 0 with #2/(), mark=at position #1- with (@1);, mark=at position #1-2*/3 with (@2);, mark=at position #1-/3 with (@3);, mark=at position #1 with (@4); [-Stealth[length=#2,bend]](@1) .. controls (@2) and (@3) .. (@4);, ,postaction=decorate,,arr/.style=arc arrow=to pos #1 with length 2.3mm ϵ dn sn cn am ns1/2 #1#2#1/#2 sech diagH̋ 𝕋 Δ łλ σ øω γ Γ 1 am Tr T BCδ̣ ρ̊ þθ Łℒ e k̃ w̃ →ø̃Soliton condensates for the focusing Nonlinear Schrödinger Equation:a non-bound state case Alexander Tovbis and Fudong Wang============================================================================================ In this paper westudy the spectral theory of soliton condensates - a special limit of soliton gases -for the focusing NLS (fNLS).In particular, we analyze the kinetic equation for the fNLS circular condensate, which represents the first example of an explicitly solvable fNLS condensate withnontrivial large scale space-time dynamics.Solution of the kinetic equation was obtained by reducing it to Whitham type equations for the endpoints of spectral arcs. We also study the rarefaction and dispersive shock waves for circular condensates, as well as calculate the corresponding average conserved quantities and the kurtosis.We want to note that one of the main objectof thespectral theory- the Nonlinear Dispersion Relations -is introducedin the paper assome special large genus (thermodynamic) limitthe Riemann Bilinear Identities that involve the quasimomentum and the quasienergy meromorphic differentials. § INTRODUCTION §.§ Soliton gases for integrable equationsMany nonlinear integrable equations have special solutions representing solitary waves - localized traveling wave solutions also known as solitons.Solitons have some very peculiar and well studied properties. One of the most celebrated of them is the property that two solitons with different velocitiesretain theirshapes and velocities after the interaction, so that the only result of this interaction is a phase change (e.g, a shift in the position of the center) of the above solitons. There are also more complicated N-soliton solutions which can be considered as ensembles of N interacting solitons.The spectral theory of soliton gases is based on the natural idea to interpretsolitons asparticles of some gas with elastic two-particle interactions. Assuming that a givensoliton(a particle) undergoes consistent interactions (phase shifts) with other solitons (particles) in the gas, it is clear that its actual velocitywill be affected by the persistent interactions, namely, by the density and the characteristics of other solitons in the gas interacting with the given one. This idea goes back to the paper of V. E. Zakharov <cit.>, where the formula for the effective velocity of a soliton in a rarefied Korteweg - De Vries (KdV)soliton gas was firstpresented.However, the case of a dense soliton gas requireda different approach, whichwas suggested by G. El in <cit.> for KdV soliton gases and by G. El and A. Tovbis in <cit.> for the focusing Nonlinear Schrödinger Equation (fNLS) soliton gases. 
This approach is based on considering a certain large N limit ofspecialN-phase nonlinear wave (finite gap) solutions to an integrable system, called the thermodynamic limit.The subject of our interest in this paper will be not the large N limit of finite gapsolutionsto integrable equations, but ratherthe scaled continuum limit of wavenumbers k_j and of frequencies ø_j, associated with these solutions. These limitswill be interpreted as the density of states (DOS) u(z) andthe density of fluxes (DOF) v(z)respectively ofthe corresponding soliton gas, where z denotes the spectral parameter.Since finite gap solutions can be conveniently represented in terms of the Riemann Theta functions, it is convenient for us to describesoliton gases starting fromon the correspondingRiemann surfaces. In particular, genus N finite gap solutions to the fNLS, considered in this paper, can be defined in terms of a genus N Schwarz symmetrical hyperellipticRiemann surface _N. Additional information in the form of N initial phases is required to define a particular finite gap solution.However, thewavenumbers k_j and frequencies ø_j of any finite gap solution to the fNLS, associated with_N, are defined in terms of_N only.In particular, let dp_N, dq_N be second kind real normalized meromorphic differentials on_N with poles only at infinity (both sheets) and the principle parts ± 1 for dp and± 2z for dq there respectively, where the spectral parameter z∈_N. Here real normalized mean that all the periods of dp_N, dq_N on _N are real. Note that the above conditions uniquely definedp_N, dq_N on_N. Then the vectors k⃗, ø⃗ of the (real) periods of dp_N, dq_N with respect to a fixed homology basis (A and B cycles) of _N are vectors of the wavenumbers and frequencies of finite gap solutions on_N respectively, i.e., k_j=∮_A_jdp_N,   _j= ∮_B_jdp_N,ø_j=∮_A_jdq_N,   →_j= ∮_B_jdq_N,where k⃗=(k_1,…,k_N,_1,…,_N), ø⃗=(ø_1,…,ø_N,→_1,…,→_N). The differentialsdp_N, dq_Nare known as quasimomentum and quasienergy differentials respectively (<cit.>, <cit.>).Denote by w_j,N=w_j the j-th normalized holomorphic differential on _N, j=1,…,N, that are defined by the condition ∫_A_kw_j =_̣k,j, k=1,…,N, where _̣k,j is the Kronecker delta. The well known Riemann Bilinear Relations ∑_j[∮_A_jw_m∮_B_jdp_N-∮_A_jdp_N∮_B_jw_m]=2πi ∑Res(∫w_m dp_N), ∑_j[∮_A_jw_m∮_B_jdq_N-∮_A_jdp_N∮_B_jw_m]=2πi ∑Res(∫w_m dp_N),m=1,…,N, form systems of linear equations for k⃗, ø⃗respectively.Indeed,taking real and imaginary parts of (<ref>), one gets ∑_j k_j ∮_B_jw_m=-2π≤(∑Res(∫w_m dp_N) ), _m-∑_j k_j ∮_B_jw_m=-2π≤(∑Res(∫w_m dp_N) ),m=1,…,N. The matrixof the n× n system of linear equationsis positive definite, since it is the imaginary part τ of the Riemann period matrix τ= ∮_B_jw_m.Once k_j are known, the values of _j can be calculated from (<ref>). Thus, the systems(<ref>)- (<ref>) always have a unique solution. Similar results are true for (<ref>). So, the wavenumbers and frequencies vectors k⃗, ø⃗ are connected via theunderlying Riemann surface _N. Thus, the Riemann Bilinear Relations for the quasimomentum and quasienergy differentials dp_N, dq_N, connecting the wavenumbers and frequencies, form theNonlinear dispersion relations (NDR) for the finite gap solutions to the fNLS, defined by_N. 
One of themain subjects of the spectral theory of soliton gases is the thermodynamic limit of scaled vectorsk⃗, ø⃗, i.e., the thermodynamic limitof Riemann Bilinear Relations (<ref>)-(<ref>), which leads to the continuum version of the NDR established in <cit.> in the form of two Fredholm integral equations (<ref>)-(<ref>). Before considering the thermodynamic limit, we want to address the choice of the homology basis. Namely, the A cycles are chosen asclockwise loops (on the main sheet) around each of N branch cuts of_N,which contains only one branch cut inside. That leavesone remaining branch cutout of theN+1 branch cuts of _N. A B cycleB_j is represented by a loop going throughthe branch cut A_j and through the remaining branch cutof _N, see Figure <ref>.If one starts shrinking just onebranch cut of _N, corresponding to A_j(and, generically, its Schwarz symmetrical) to a point while leaving the other branch cuts unchanged, the system (<ref>)-(<ref>) implies that thecorresponding k_j,ø_j 0, which corresponds toa solitonic limit. Indeed, an fNLS soliton is spectrally represented by a pair of Schwarz symmetrical points and a complex norming constant, the latter can be viewed as an analog of initial phases. For example, an elliptic solution to the fNLSψ_m = e^it(2-m)(x,m) in thelimit m 1 goes into a soliton solutionψ_1 = e^it(x).Plots/RSupper.tex Considering the thermodynamic limit, we assumethat the number N+1 of the branch cuts of _N is growing so that the centers of each branch cut (spectral band) are accumulated on some (Schwarz symmetrical) compactinwith a positive continuous probability density function ϕ(z)on ^+=∩^+. Simultaneously, all the bandwidth are shrinking at the order e^-ν(z)N, where ν(z) is a continuous nonnegative function on ^+, in such a way that the distance between any two bands should be of the order at least O(1/N).The wavenumbers k_j and frequencies ø_j are called solitonic wavenumbers and frequencies respectively, because in the thermodynamic limit they go to zero. The function (z)= 2ν(z)/ϕ(z) is called spectral scaling function. The thermodynamic limit (<ref>)-(<ref>) for the scaled limits u(z),v(z) ofk_j and ø_j respectivelyforthe fNLS soliton gas was derived on <cit.>. So far, this derivation was made rigorous (<cit.>)in the case when ^+ is a curve and >0 on ^+. In the case (z)≡ 0 on ^+, i.e., inthe case of sub-exponential decay of the bandwidth, a soliton gas is called soliton condensate. §.§ Soliton condensate for the KdV and bound state soliton condensate for fNLSIn this paper we consider soliton gases for the fNLSiψ_t +ψ_xx +2 |ψ|^2 ψ=0,where x,t∈ are the space-time variables andψ:^2 is the unknown complex -valued function. The nonlinear dispersion relations (NDR) for the fNLS soliton gasareintegral equations: ∫_^+log≤|z-w̅/z-w|u(w)dł(w)+σ(z)u(z) =z, z∈^+, ∫_^+log≤|z-w̅/z-w|v(w)|dł(w)+σ(z)v(z) =-4zz, z∈^+,where ^+⊂^+ is a compact, ł(w) is a reference measure (for example, the arclength if ^+ is a curve) and σ(z): ^+↦ [0,∞) is a continuous function. As it was mentioned above, the uniqueness and existence of solutions tothe discrete NDR((<ref>)and its analog for dq_N) follow from the positive-definiteness of the imaginary part of the Riemann period matrix Similar results for thecontinuousNDR (<ref>)-(<ref>) were obtained in <cit.>, where each of the equations was considered as a variational equations for theminimizer of the Green's energy functional. Here is one of the results from <cit.>. Let S be a subset of ℂ and let z_0 ∈ℂ. 
Then S is thick (or non-thin) at z_0 if z_0 ∈S ∖{z_0} and if, for every superharmonic function u defined on a neighborhood of z_0, lim inf_zz_0z ∈ S ∖{ z_0} u(z) = u(z_0), Otherwise, S is thin at z_0. A connected set with more than one point (for example a contour) is thick at all of its points. On the other hand, a countable set is thin at every point.We consider ^+ to be a finite collection of arc or closed regions in . Let σ be continuous on Γ^+, and S_0 = { z ∈Γ^+ |σ(z) = 0 }. Suppose S_0 is either empty or thick at each z_0 ∈ S_0 (see Definition <ref>). Then the solution u(z) to (<ref>) exists and is unique; moreover, u(z)≥ 0 on ^+.Similar results can be proven for (<ref>) with the exception of thenon-negativityof v(z). In particular,the statement of Theorem <ref> is valid in thecase ≡ 0, i.e., in the caseof a soliton condensate.The ratio s(z)=v(z)/u(z)represents the effective velocity of a tracer soliton (elements of the gas with the spectral characteristic z) in the soliton gas. Soliton gases (and condensates) discussed so far were equilibrium soliton gases, that is, the spectral properties of these gases are uniformall over the physical (x,t)-plane. One can also consider non-equilibrium soliton gases, where the parameters of the NDR (<ref>)-(<ref>)are slowly varying with(x,t) on large(x,t) scales. That is, we assume that=(z;x,t), ^+=^+(x,t) and so, the DOS u=u(z;x,t) and DOF v=v(z;x,t).In this case, the NDR (<ref>)-(<ref>) should be supplemented by the “continuity equation"∂_t u(z;x,t)+∂_x v(z;x,t)=0,thusforming the system known as kinetic equation see <cit.>(to be precise,often the NDR(<ref>)-(<ref>)in the kinetic equation is replaced by the equation of state, which represents an integral equation for the unknown s(z) in terms of ^+ and u(z)).Equation (<ref>) can be used to illustrate 3 different scales naturally appearingin our approach to soliton gases: the large number of micro-scale soliton-soliton interactions allows one to derive the NDR descibing meso-scale DOS u(z) and DOF v(z), assuming that atthese relatively large space-time scales the compact ^+ and the spectral scaling function (z), determining u,v,are virtually independent on x,t. Finally, equation (<ref>) describes themacro-scale dynamics of u(z;x,t) and v(z;x,t) for non equilibrium soliton gases, where the large scale x,t dependence of ^+ andhas to be taken into account.Explicit solutions to the NDR(<ref>)-(<ref>)are known in certain special cases, in particular,in the case of a soliton condensate ≡ 0 with ^+ on the imaginary axis (or on any other vertical ray). fNLS soliton gases with such ^+ are called bound state gases. This name reflects the fact that, as can be easily seen from(<ref>)-(<ref>), the effective speed (<ref>)for bound state gases is s=-4 z for all z∈^+. It was observed in <cit.> that, for a bound state condensate, the DOS u(z) is proportional to the dp/dz, where dp is the quasimomentum differential on the hyperelliptic Riemann surfacedefinedby . Similar results are valid for KdV soliton condensates with u(z),v(z)proportional to dp/dz, dq/dz, where dq is the quasienergy differential on . 
For non-equilibrium KdV soliton condensates, it was proved in <cit.> that the integro-differential kinetic equation for the DOS u=u(z;x,t) and DOF v=v(z;x,t)reduces to the multi-phase KdV-Whitham modulation equations for the endpoints ofderived by Flaschka, Forest and McLaughlin <cit.> and Lax and Levermore <cit.>.Riemann problems for soliton condensates andexplicit solutions for the kinetic equation describing generalized rarefaction and dispersive shock waves were recently considered in <cit.>.Extension of these results of <cit.> to thefNLS bound states condensate is trivial since, as it was mentioned earlier,s(z;x,t) is constant. In the presentpaper we study a class of non bound state fNLS condensates, namely, circular condensates, which exhibits non trivial Whitham equations and for which we solve the NDR (<ref>)-(<ref>), obtain the corresponding Whitham equations as well as some explicit solutions for the kinetic equation describing generalized rarefaction and dispersive shock waves. We also analyze the kurtosis κ for equilibrium and non equilibrium circular fNLS gases.Here the kurtosis is the fourth normalized moment κ=⟨|ψ|^4|⟩/⟨|ψ|^2|^⟩2.of the probability density function (PDF) of the random wave amplitude ψ. The fNLSevolution ofthe so-called partially coherent waves, whose amplitude isgiven by a slowly varying random function with a given(e.g. Gaussian) statistics,was studied in <cit.>.In particular, it was shown there that long time fNLS evolution of partially coherent waves leads to thedoubling of the initial κ, so that the initial κ_0=2,corresponding to the Gaussian distribution, eventually becomesκ_∞=4,indicating “fat tail" distribution and, thus, potential presence ofrogue waves. In Section <ref> we calculate the kurtosis κ for genus one and zero circular condensates and show that κ=2 for a rarefaction wave and κ>2 for a dispersive shock wave.Considering special families ofgenus one condensatesin the limit of diminishing bands, we show that any valueκ>2 can be approached along certain trajectories in the correspondingparameter space. §.§ Main results In this paper, we studythe fNLS soliton condensate supported on a compact ^+⊂ S^+, where S^+⊂^+ is the centeredat z=0 semicircleof the radius ρ>0. Let ^+ consists of n∈ closed nondegeneratearcs (bands) of S^+.The bands are interlaced with n+1 gaps lying on S^+, where the arcs from z=±ρ to the nearest band are also considered as gaps, see Figure <ref>.Any of the latter gaps are considered to be collapsed if the corresponding ±ρ∈^+. We also take the reference measure ł(w) in (<ref>)-(<ref>)to be simply the arclength. The conformal map p(z)=≤(z/ρ+ρ/z) maps ^+ ontowith two branch cuts from ± 1 to ±∞ respectively, where p(S^+)=[-1,1]. Let ℜ_n denote the hyperelliptic Riemann surface with the branch cuts on (-∞,-1], [1,∞) and on^+=p(^+)⊂[-1,1].Plots/conformalmap.texDenote by û(p), v̂(p) solutions of singular integral equationsπH[û](p):=∫_^+û(q)dq/q-p= -p, πH[v̂](p)=4ρ(2p^2-1) on ^+ ; here H denotes the Finite Hilbert Transform (FHT) on ^+. It is straightforward to show (see, for example, <cit.>) that û(p)= P(p)/R(p), v̂(p)=Q(p)/ R(p), where P,Q are polynomials of degree n+1 and n+2 respectively and R(p)=∏_j(p-p(z_j))^ taken over all the endpoints of ^+ and normalized by R(p)∼ p^n as p∞ in . However, P,Q are not uniquely defined by(<ref>), since the FHT H has an n-dimensional kernel. 
The following theorem establishes solutions of the NDR (<ref>)- (<ref>) in terms of û, v̂.The NDR (<ref>)- (<ref>) for fNLS circular soliton condensate (i.e., with σ≡ 0) have solutionsu(z)=û(p(z))= P(p(z))/ R(p(z)),where  π H[û]=-p on  ^+=p(^+), v(z)=v̂(p(z))= Q(p(z))/ R(p(z)), where  π H[v̂]=4ρ(2p^2-1)on  ^+=p(^+).HereP(p),Q(p) arepolynomials of degrees n+1, n+2 respectively, p∈, û(p)dp/√(1-p^2), v̂(p)dp/√(1-p^2) are second kind meromorphic differentials on ℛ_n and u,v satisfy the conditions∫_c_j u(z)dz=0, ∫_c_j v(z)dz=0 forallj=0,…,n, where c_j denote the gaps on S^+∖^+ where S^+={|z|=ρ,0≤ z≤π}. In the case when an endpoint ±ρ∈^+ and so the corresponding gap c_j collapses,the corresponding integral conditionsin (<ref>) should be replaced by u(±ρ)=0, v(±ρ)=0. Also,u(z),v(z)∈ on ^+ and u>0 on^+∖. We now state the second main theorem about non-uniform circular soliton condensate for the fNLS, where the endpoints z_j=ρ e^i_j of thebands on S^+ can depend on x,t.It is governed by the equation (<ref>)and reflect large scale changes of u,v. Let (x,t)=(_0(x,t),…,_2n-1(x,t)) denote the vector of x,t dependent endpoints of ^+. Assuming smoothnessofu(z;x,t),v(z;x,t) and (x,t), and using Theorem <ref> , we follow the approach of <cit.> to show thatthe continuity equation (<ref>) can be written as a Whitham type equations on the evolution of(x,t).A similar result for the KdV soliton condensates was obtained in <cit.>. If the fNLS circular soliton condensate,described in Theorem <ref>, with ^+∩=∅, isnon equilibrium, then(<ref>)is equivalent tothe system of modulation (Whitham) equations given by∂_t_j+V_j()∂_x_j=0, , j=0,…, 2n-1, whereV_j()=Q(cos_j)/P(cos_j) are bounded velocities, with P,Q as inTheorem <ref>. Let us first obtain (<ref>) from (<ref>).According toTheorem <ref>,(<ref>) can be written as ≤(P(p)/R(p))_t+≤(Q(p)/R(p))_x=0, which should be valid for all p∈^+. Denoting T=R^2, equation (<ref>) can be written as 2T(P_t+Q_x)=PT_t+QT_x or2(P_t+Q_x)=P(logT)_t+Q(logT)_x. Since (logT)_r=-∑_j=0^2n-1(a_j)_r/p-a_j, the second equation (<ref>) yields P(a_j)(a_j)_t+Q(a_j)(a_j)_x=0, j=0,1,…,2n-1, if we take limit p a_j.Given a_j=cos_j, the latter equation implies (<ref>)-(<ref>). Note that all the zeros of P(p) are on the gaps and, thus, the velocities V_j defined by (<ref>) are bounded. Assume now that the evolution of the endpoints satisfies (<ref>) or, equivalently, ∂_t a_j+V_j(a⃗)∂_x a_j=0, j=0,…, 2n-1, where a⃗=(a_0,…, a_2n-1). Then, following <cit.>, we observe that the only poles of thedifferential Ω=∂_tûdp+∂_xv̂dpon ℜ_n are thesecond order poles at each a_j. But the modulation equations (<ref>) show that the principal parts of Ω at each p=a_j is zero, that is,Ω is a holomorphic differential. Since all the gap integrals (and, thus,the B-periods) of a holomorphic differential Ω are zeros, we obtain the kinetic equation ∂_tû+∂_xv̂=0, which is equivalent (<ref>). Modulation equations (<ref>) - (<ref>) form a strictly hyperbolic system of first order quasilinear PDEs provided that all the branchpoints are distinct (otherwise it will just hyperbolic).This system is in the diagonal(Riemann) form with all the coefficients (velocities) being real.Cauchy data for this system consists of (x,0).The system has a unique local (classic) real solution provided (x,0) is of C^1 class,see <cit.>, Theorem 7.8.1, and real and the velocities V_j() are smooth and real. Thus, the fNLS circular gas is (at least locally) preserved under the evolution described by the kinetic equation. 
As it is well known, systems of hyperbolic equations may develop singularities in the x,t plane, which, in the case of modulation equations (<ref>) - (<ref>), lead to collapse of a band or a gap, or to appearance of a new“double point" that will open into a band or gap. In any case,at a point of singularity (also known as a breaking point), two or more endpoints form a⃗(x,t) collide or a new pair(s) of collapseddouble points appear, so that the Riemann surface ℜ_n develop a singularity. In this paper we do not intend to discuss details of transition of the circular condensate between regions of different genera while passing through a breaking point.However, we would like to mention that since the differentialsû(p)dp, v̂(p)dpare imaginary normalized differentials, they undergo a continuous transition through breaking points, see <cit.>. Thus,fNLS circular condensate is preserved under the kinetic equation evolution through breaking points (change of genus). Some examples of such evolution can be found in Section <ref>.§ PROOF OF THEOREM <REF>In the particular case of a genus zero circular condensate where the point z=ρ is on the band, the NDR where solved in <cit.>. The proof of Theorem <ref> presented below in a sense resembles theproof of the solution to the NDR for a bound state fNLS soliton condensatefrom <cit.>. The conformal map (<ref>) has the inverse z=ρ(p+√(p^2-1)) or z=ρ e^iξ, where p=cosξ. In the variables ξ,þ, where w=ρ e^iþ, each NDR equation in (<ref>) with ≡ 0can be written as-∫̊_^+log≤|sin-þ/2/sin+þ/2|ψ_j(þ)dþ= ϕ_j(ξ),j=1,2,where ψ_1(ξ)=u(e̊^iξ), ψ_2(ξ)=v(e̊^iξ), ^+ is the preimage of ^+ under the map z=ρ e^iξ,ϕ_1(ξ)=s̊i̊n̊, ϕ_2(ξ)=-4ρ^2 sinξcosξ, and the integration in ^+⊂ goes in the negative direction. Here and henceforth, we always assume that the integration over ^+ goes in the positive direction, and, therefore, change the sign in the left hand side of (<ref>).Sinced/dlog≤|sin-þ/2/sin+þ/2|=≤[ -þ/2-+þ/2 ]= sinþ/cosþ-cos, differentiation inof (<ref>)yields ∫̊_^+ψ_j(þ)sinþ/cosþ-cosdþ= -d/dξϕ_j(ξ),j=1,2,orπ H[û](p)=-p,      π H[v̂](p)=4ρ^ (2p^2-1), where û(q)=ψ_1(þ), v̂(q)=ψ_2(þ) and q=cosþ. Inversion of the firstFHT in (<ref>) has the form (<ref>), where thedegree n+1 polynomial P(p) is defined up to a kernel of H acting on ^+. As it is well known (and can be easily verified), this kernel is an n dimensional space that consists of functions K(p)/R(p), where K(p) is an arbitrarypolynomial of degree n-1.We now prove that the n unknown coefficients of K(p) are uniquely defined by conditions (<ref>) for û. We start with proving that û(p)dp/√(1-p^2) has zero residue at p=∞, i.e., it is a second kind meromorphic differential on ℜ_n. Indeed, substituting û(p)=P(p)/R(p) into (<ref>) and calculating the H[û] through the residue at p=∞, we obtaina=-i,    b=i/2∑_j=0^2n-1a_j, where P(p)=1/π≤(ap^n+1+bp^n+…) and a_j=cos_j are the endpoints of ^+. Thus û(p)dp/√(1-p^2) =1-∑_ja_j/2p+…/√(1-p^-2)∏_j(1-a_j/p)^dp=(1+ O(p^-2))dp, which complete the argument.Now equations (<ref>) for u can be written as∫_a_2j-1^a_2jû(p)dp/√(1-p^2)=0, j=0,…,n, with a_-1=1, a_2n=-1 and the corresponding integral in (<ref>) should be replaced by û(± 1)=0 if ± 1∈^+ respectively.The fact that Ω_1=û(p)dp/√(1-p^2) is a second kind differential implies that one of the equations (<ref>) is a tautology and so the remaining n conditions simply define a normalization of Ω_1. In particular, if allexcept onegaps are A-cycles, then(<ref>)implies that Ω_1 is an A-normalized meromorphic differential. 
Thus, equations(<ref>) always have a unique solution. The cases ± 1∈^+ can be treated as limits of small closing gaps from ± 1 to the nearest endpoint. To complete our arguments for u(z), we needto showthat conditions (<ref>) for umust be satisfied. In this proof we follow the arguments of Th. 6.1, <cit.>, where similar conditions were derived for the bound state fNLS soliton condensate, i.e., when all the bands were situated on the imaginary axis. The idea of the proof is related to the fact that the solution u(z) to (<ref>) is the density of the equilibrium measure for the corresponding Green's energy (<cit.>). If u is such an equilibrium density then the Green's potential G[u]:=∫_^+log≤|z-w̅/z-w|u(w)|dw| of u should be continuous at every regular point of ^+, see <cit.>. In our case, all points of ^+ areregular and so G[u] must be continuous in . Equations (<ref>) implythat there exists at least one zero in each gaps (intervals). Since there are n+1 gaps and the degree of P(p) is n+1, we conclude that each gap has exactly one root of P(p) and, so, all the roots of the polynomial P(p) are real. Thus, according to (<ref>), P(p) is purely imaginary onand, so, u>0 on ^+.Similar arguments hold for the solution v(z) of the second NDR (<ref>).For example, representing Q(p)=1/π≤(ap^n+2+bp^n+1+cp^n+…),it follows from (<ref>)that a=-8i$̊,b=-a/2 ∑_j a_jandc=4i+̊i(̊∑_ja_j^2-6∑_j<ka_ja_k). So,a,b,c∈i.Equations (<ref>) imply thatn+1roots are real. Thus it follows that all the roots ofQare real. § GENUS 0 AND GENUS 1 CIRCULAR CONDENSATESIn this section the results of Theorem <ref> are appliedto two simple cases: the genusn=0,1ofthe condensate.We remind that bygenus we understand the genus of the Riemann surfaceℜ_n, which is equalthe number of gaps on Figure <ref> minus one. For genusn,the corresponding DOS will be denoted byu_n(z;α⃗_n),n=0,1, andα⃗_n=(α_0,α_1,⋯,α_2n+3)with0=α_0≤α_1≤⋯≤α_2n+3= π. In general, the genus of the Riemann surface with branch pointsα⃗_nisn+2. However, sinceα_0=0andα_2n+3=π, two gaps are collapsed and the genus of the Riemann surface is reduced ton. Denotea_j=p(e̊^i_j)=cos(α_j),j=1,2,3,4. The support^+for the genus one circular condensate is illustrated by Fig.<ref>. In what follows, we will notmention_0and_2n+3since they are alwaysfixed.In both cases, the exact solutions to the NDR equations can be explicitly represented with the help of complete elliptic integrals. For higher genus situation, the solution to the NDR equations can be represented using hyperelliptic integrals. Given the contour^+as shown in Fig.<ref>, the solution to the NDR(<ref>)- (<ref>)is given byu(z) =u_1(p;a_1,a_2,a_3,a_4)=√(1-p^2)/π(-p^2+1/2l_1p+A/R(p)), v(z) =v_1(p;a_1,a_2,a_3,a_4)=ρ√(1-p^2)/π(8p^3-4l_1p^2+(4l_2-l_1^2)p+B/R(p)),wherep=p(z)is given by (<ref>),R(p) =√(∏_j=1^4(p-a_j)),     l_1=∑_j=1^4a_j,    l_2=∑_1≤ i<j≤ 4 a_ia_j, A =E(m)/2K(m)(a_2-a_4)(a_1-a_3)-1/2(a_1a_2+a_3a_4), B =-E(m)/K(m)(a_2-a_4)(a_1-a_3)l_1+(a_1a_2-a_3a_4)(a_1+a_2-a_3-a_4), m =(a_1-a_2)(a_3-a_4)/(a_1-a_3)(a_2-a_4),and, as inTheorem <ref>, the branch ofR(p)is chosen so thatR(p)∼p^2asp∞. The effective velocity is then given bys(z)=-≤(8p^3-4l_1p^2+(4l_2-l_1^2)p+B/p^2-1/2l_1p-A)ρ. Based on Theorem <ref>, the general solution to the first NDR reads u(z)=u(p;a_1,a_2,a_3,a_4)=1/π√(1-p^2)(-p^2+c_1p+c_0)/√((p-a_1)(p-a_2)(p-a_3)(p-a_4)), where c_1=1/2l_1,c_0=A are determined by the (gap-vanishing) normalization conditions: ∫_a_4^a_3u(p)dp/√(1-p^2)=0, ∫_a_2^a_1u(p)dp/√(1-p^2)=0. The computation of v is similar and thus omitted. 
The effective velocity can be computed directly from equation (<ref>). In the case of_3=_4one of the gaps disappears and we arein thegenus 0 situation, defined only by_1, _2. In the caseα_4=α_3(genus zero, seeFig.<ref>) the solution to the NDR(<ref>)-(<ref>)is given byu(z) =u_0(p;a_1,a_2)=1/π√(1-p^2)(p-1/2(a_2+a_1))/√((p-a_2)(p-a_1)), v(z) =v_0(p;a_1,a_2)=-8/πρ√(1-p^2)(p^2-1/2(a_2+a_1)p-1/8(a_1-a_2)^2)/√((p-a_2)(p-a_1)), wherep=p(z)is given by (<ref>),a_j=cos(α_j), j=1,2, and the square-root function takes the principal branch.The effective velocity is then given bys(z)=s_0(p;a_1,a_2)=≤(-8p+(a_2-a_1)^2/p-(a_2+a_1)/2)ρ. Using the conditions ofTheorem <ref> and having in mind that û(± 1)= v̂(± 1)=0, we obtain (<ref>) and v(z)=-8ρ/π√(1-p^2/(p-a_1)(p-a_2))(p^2-a_1+a_2/2p+c_0), where c_0 is the constant that is determined by the gap vanishing condition ∫_a_2^a_1v(z(p))(1-p^2)^-1/2dp=0. Solving the latter equation we obtain c_0=-1/8(a_1-a_2)^2. Given solutions u,v, we obtain (<ref>) forthe effective velocity s. As it was mentioned in Remark <ref>, imaginary normalized differentials have a continuous transition through breaking points, i.e., throughpointsof collapse of a band or a gap. In particular,genus zero solutions ( see equations (<ref>),(<ref>) and (<ref>)) can be directly obtainedfrom the genus one solutions (see equations (<ref>),(<ref>) and (<ref>)) by taking the limita_3a_4^+. That is to say,u_0(p;a_1,a_2)=lim_a_3 a_4^+u_1(p;a_1,a_2,a_3,a_4).Similar limits work for the solutions to the density of fluxes and the corresponding effective velocities.As it was shown in the proof of Theorem <ref>, the DOSu>0on^+; however, the DOFvmay have a zeroz_0on^+, which also coincides with the zero of the effective velocitys. In Figure <ref> below we show different cases of thelocation ofz_0, defined bythe branch points(α_1,α_2). If in the conditions ofCorollary<ref>we further assumeα_2=π, then the solutions to the NDR reduce to u(z) =u_0(p;a_1,-1)=1/π(1- p)( p+1-a_1/2)/√(( p-a_1)(1- p)), v( z) =v_0(p;a_1,-1)=1/πρ(1- p)≤(-8 p^2+4(a_1-1) p+(a_1+1)^2)/√((1- p)( p-a_1)),wherep=p(z)is given by (<ref>), so that s(z)=≤(-8p+(a_1+1)^2/p+(1-a_1)/2)ρ.Expressions (<ref>) and (<ref>) were calculated in <cit.>, see equations (72),(73) there. § MODULATIONAL DYNAMICS FOR THE CIRCULAR CONDENSATESolutions to the kinetic equations for the fNLS circular condensate, obtained in Theorems <ref>-<ref>, look very similar tothat for the KdV soliton condensate, obtained in <cit.>.Following the ideas of<cit.>, in this section we consider the evolution of step function initial conditions for the circular gas modulation equations, i.e. the Riemann problem, that, as in the KdV case, producerarefaction and dispersive shock wave solutions. Consider the step function initial data forthe modulation equations (<ref>) a_1(x,t=0)≡cos(α_1(x,t=0))= q_-,x<0 q_+,x>0 , q_+≠ q_-.that corresponds to the genus zero circular condensate described in Remark <ref>, i.e., the initial datefor thebranch pointa_1∈^+. Thatdefines (see(<ref>)) to the DOS:u(z;x,t=0)=u_0(p(z);q_-,-1),x<0, u_0(p(z);q_+,-1),x>0,and a similar expression for the DOFv, see (<ref>), whereq_±∈(-1,1).In what follows, we will consider self-similar solutions to the modulation system (<ref>), i.e., we considerx/t=ξ=const.It is well-known that the behaviorof such solutions, including the genus of the corresponding hyperelliptic Riemann surfaceℜ=ℜ(x,t)(in the variablep) foru,vdepends on whetherq_->q_+orq_-<q_+. 
The first case, the genus of ℜ(x,t)stays zero and thedynamics of the DOSu(z;x,t)is characterized by the rarefaction wave solutionof the modulation equation (<ref>). The latter case, however, implies immediate wave-breaking, which can be regularized by introducing a genus one ℜ_1(x,t)and the corresponding dispersive shock wave DOSu(z;x,t)that connectsu=u_0(p;q_-,-1)for large negative andu=u_0(p;q_+,-1)for large positivex. Such regularization is well-known for describing the dispersive shock wave modulations(see <cit.>) of the KdV equation with step initial data. The two types of behavior of the modulational dynamics of the DOS are shown as in Fig.<ref>. The following theoremsummarizesmain results of the section. Supposeu(z;x,0)is given by (<ref>)witha_1(x,0)given by (<ref>). Then for any(x,t)∈×_+we have:(i) if q_->q_+, a_1(x,t)= q_-,x<V_1-t -1/6x/ρ t+1/3,V_1-t<x<V_1+t, q_+,x>V_1+t ,andV_1±=-6ρ(q_±-1/3), so that the DOS is given by u(z;x,t)=u_0(p;a_1(x,t),-1)=1/π(1-p)(p+1-a_1(x,t)/2)/√((p-a_1(x,t))(1-p)), where p=p(z) is given by (<ref>) and the expression for the DOF is given by v(z;x,t) =v_0(p;a_1(x,t),-1)=-8/πρ√(1-p^2)(p^2-1/2(a_1(x,t)-1)p-1/8(a_1(x,t)+1)^2)/√((p+1)(p-a_1(x,t))).(ii) ifq_-<q_+, then a_2(x,t) is uniquely and implicitly determined by the following equation: x/ρ t=-2(q_++a_2+q_–1)-4(a_2-q_-)(q_+-a_2)/(q_+-q_-)μ(m) +q_–a_2, V_2-t<x<V_2+t,where m =(1+q_-)(q_+-a_2)/(q_+-q_-)(1+a_2),μ(m)=E(m)/K(m), V_2- =≤(-16a_1^2+8a_1a_3+2a_3^2-8a_1+4a_3+2/2a_1-a_3+1)ρ, V_2+ =(-2a_1-4a_3+2)ρ. so that u(z;x,t)= u_0(p;q_-,-1),x<V_2-t, u_1(p;q_+,a_2(x,t),q_-,-1),V_2-t<x<V_2+t, u_0(p;q_+,-1),x>V_2+t,where p=p(z) is given by (<ref>); and the expression for the DOF is given byv(z;x,t)= v_0(p;q_-,-1),x<V_2-t, v_1(p;q_+,a_2(x,t),q_-,-1),V_2-t<x<V_2+t, v_0(p;q_+,-1),x>V_2+t.§.§ Proof of Theorem <ref> Proof of the first part (i): Applying Theorem <ref>, we obtain the modulation equation for moving the branch pointα_1:∂_tα_1+V_1∂_xα_1=0,whereV_1 = -6ρ(cos(α_1)-1/3).Sincea_1=cos(α_1), the modulation equation is equivalent to∂_t a_1(x,t)-6ρ(a_1-1/3)∂_x a_1(x,t)=0.Considering the initial data (<ref>), the self-similar solution to the modulation equation is given bya_1(x,t)= q_-,x<V_1-t -1/6x/ρ t+1/3,V_1-t<x<V_1+t, q_+,x>V_1+twhere V_1- =V_1|_a_1=q_-=1=-6ρ(q_–1/3), V_1+ =V_1|_a_1=q_-+=-6ρ(q_+-1/3).Apparently,0>V_1+>V_1-, which generates a rarefaction wave. In this case, the DOS is given byu(z;x,t)=u_0(p;a_1(x,t),-1)=1/π(1-p)(p+1-a_1(x,t)/2)/√((p-a_1(x,t))(1-p)). Proof of the second part (ii): The previously derived solution (the rarefaction wave) is not well-defined forq_-<q_+since a wave breaking occurs immediately. To resolve the issue, it is necessary to introduce higher genus DOS that connectsu_0(p;q_-,-1)andu_0(p;q_+,-1). This can be done by using the genus one DOS (see equation (<ref>)):u(z;x,t)=u_1(p;a_1=q_+,a_2(x,t),a_3=q_-,-1). Following Theorem <ref>, the motion ofa_2(x,t)is governed by the following modulation equation:∂_t a_2+V_2∂_x a_2=0,where V_2 =V_2(a_1,a_2,a_3,-1)= -2ρ(a_1+a_2+a_3-1)-4ρ(a_2-a_3)(a_1-a_2)/(a_1-a_3)μ(m) +a_3-a_2withm=(1+a_3)(a_1-a_2)/(a_1-a_3)(1+a_2),μ(m)=E(m)/K(m).By a direct computation, we obtainV_2- =lim_a_2 q_-V_2=≤(-16a_1^2+8a_1a_3+2a_3^2-8a_1+4a_3+2/2a_1-a_3+1)ρ, V_2+ =lim_a_2 1V_2=(-2a_1-4a_3+2)ρ.Moreover, sinceV_2+-V_2-=2ρ(6a_1-a_3+5)(a_1-a_3)/2a_1-a_3+1,it is evident thatV_2+>V_2-.Then the solution to the modulation equation fora_2is defined implicitly by the following system:V_2(a_1=q_+,a_2,a_3=q_-,-1)=x/t, V_2-t<x<V_2+t. 
Since the solutiona_2(x,t)is defined implicitly, we need to prove that the functionV_2as a function ofa_2is invertible. In our case, it is equivalent to show that for any fixeda_1,a_3, the functionV_2(a_1,a_2,a_3,-1)is monotonic with respect toa_2. So, tocomplete the proof, we need to prove the following proposition:V_2(a_1,a_2,a_3,-1), as defined by equation (<ref>), is monotonic as a function ofa_2on the interval(a_3,a_1)for any-1<a_3<a_1<1being fixed. Before we prove the proposition, we need the following lemma.Letμ(m)=E(m)/K(m), then1-m<μ(m)<1-m/2, m∈ (0,1). It is well-known that (see Byrd-Friedman <cit.> 710.00 and 710.02) E'(m)=E-K/2m, K'(m)=E-(1-m)K/2m(1-m), where prime means differentiation with respect to m. Then E-(1-m)K=2m(1-m)K'(m)>0, where we have used the fact that K'(m)>0 for m∈ (0,1). Thus, the first inequality in (<ref>) is proven. Using both equations (<ref>), we obtain d/dm(E-(1-m/2)K)=-m/2K'(m),so that E-(1-m/2)K<0. Thus, μ(m)<1-m/2 for m∈ (0,1). We prove the statement by contradiction. Suppose there exists a_2∈ (a_3,a_1) such that ∂_a_2V_2=0. This leads to μ(m)=(a_2-a_3)(a_1^2-a_1a_3-3a_2^2+3a_2a_3+a_1-3a_2+2a_3)/2(a_1-a_3)(2a_1a_2-a_1a_3-3a_2^2+2a_2a_3+a_1-2a_2+a_3).Using (<ref>) to express a_2 in terms ofa_1,a_3 and m,we obtain μ(m)=1/2 +(a_1+1)(a_3+1)/2(m(a_1-a_3)+a_3+1)(a_1-a_3)+[(a_1+1)^2+(a_3+1)^2]m-(a_3+1)^2/2(a_1-a_3)((a_1-a_3)m^2-2(a_1+1)m+a_3+1). Since a_1>a_3 and m∈ (0,1), the first two terms has no singularities. The denominator of the third term processes two zeros (a_1+1)±√(a_1^2-a_1a_3+a_3^2+a_1+a_3+1)/a_1-a_3, but since m<1, we see that the right hand side of (<ref>) has a unique simple pole atm_cr=a_1+1-√(a_1^2-a_1a_3+a_3^2+a_1+a_3+1)/a_1-a_3. On the one hand, since (a_1+1)^2-(√(a_1^2-a_1a_3+a_3^2+a_1+a_3+1))^2=(a_3+1)(a_1-a_3)>0, m_cr>0; On the other hand, since m_cr-1/2=(a_1-a_3)^-1((a_1+a_3)/2+1-√(a_1^2-a_1a_3+a_3^2+a_1+a_3+1)) and ((a_1+a_3)/2+1)^2-(√(a_1^2-a_1a_3+a_3^2+a_1+a_3+1))^2=-(a_1-a_3)^2<0, we obtain m_cr<1/2. Since the singularity m_cr∈ (0,1/2), we consider two cases:(1) m∈ (0,m_cr); (2) m∈ (m_cr,1). In each case, we will construct a contradiction using Lemma <ref>. In the first case, subtract RHS of (<ref>) by 1-m/2, we get m^2 F(m)/2(m(a_1-a_3)+a_3+1)((a_1-a_3)m^2-2(a_1+1)m+a_3+1), whereF(m)=(a_1-a_3)^2m^2-(a_1-a_3)(3a_1-2a_3+1)m +a_3^2-(3a_1+1)a_3+3a_1^2+3a_1+1. Since a_1>a_3, the denominator of (<ref>) is positive for any m∈ (0,m_cr). From the expression of the quadratic function F(m), we see the axis of symmetry is 3a_1-2a_3+1/2(a_1-a_3), which is obviously strictly greater than 1. This implies F(m) is a decreasing function for m∈ (0,1). Thus, F(m)≥ F(1)=(a_1+1)^2>0. And we have shown that RHS of (<ref>)>1-m/2. However, due to Lemma <ref>, this contradicts the inequality satisfied by μ(m). In the second case, subtract RHS of (<ref>) by 1-m, we get m(m-1) G(m)/2(m(a_1-a_3)+a_3+1)((a_1-a_3)m^2-2(a_1+1)m+a_3+1), whereG(m)=(a_1-a_3)^2m^2-(a_1-a_3)(3a_1-a_3+2)m/2-(a_3+1)^2/2. In this case, the denominator is negative since m>m_cr. As for the quadratic function G(m), it is easy to check G(0)<0 and G(1)<0, together with the fact that the leading coefficient of G is positive, we have G(m)<0 for any m∈ (0,1). This implies the expression (<ref>) is negative and we have shown RHS of (<ref>)<1-m. Again, due to Lemma <ref>, this contradicts the inequality satisfied by μ(m). Hence, we have shown ∂_a_2V_2≠ 0,∀ a_2∈ (a_3,a_1). And this means V_2(a_1,a_2,a_3) is a monotonic function of a_2 for a_2∈ (a_3,a_1). 
Based on Proposition <ref>, we have shown that the solutiona_2defined by the equation (<ref>) is well-defined. Now, let's check the boundary behaviors asxV_2+tandxV_2-t. A direct calculation showslim_x V_2-t u_1(p;a_1=q_+,a_2(x,t),a_3=q_-,-1)=u_0(p;q_-,-1), lim_x V_2+t u_1(p;a_1=q_+,a_2(x,t),a_3=q_-,-1)=u_0(p;q_+,-1). Thus, the genus one DOS, as given byu(z;x,t)= u_0(p;q_-,-1),x<V_2-t, u_1(p;q_+,a_2(x,t),q_-,-1),V_2-t<x<V_2+t, u_0(p;q_+,-1),x>V_2+t.connects two genus zero DOS:u_0(p;q_-,-1)andu_0(p;q_+,-1). Similarly, using the expressions for the DOF (namely, equation (<ref>) and equation (<ref>), we get the modulated DOF as given by equation (<ref>) and equation (<ref>) respectively. This completes the proof for Theorem <ref>. § KURTOSIS IN GENUS 0 AND GENUS 1 CIRCULAR CONDENSATE In this section, we will compute the fourth normalized momentκ=⟨|ψ|^4|/⟩⟨|ψ|^2|^⟩2of the fNLS circular condensate|ψ|-the kurtosis. In the genus 0 case, we obtain that the kurtosis for the condensate is always 2, while in the genus 1 case, the kurtosis is greater 2 but finite. Below we will give the explicit formulae for computing kurtosis in genus 0 and genus 1 case. The main tool is to use the formulae of computing the averaged conserved quantities for the fNLS soliton gas, which are recently developed by the authors<cit.>. It's well-known that the fNLS has infinite many conservation laws ((f_j)_t=(g_j)_x, j≥1), where the densities and the currents,f_jandg_j,can be determined recursively (see for example, Wadati's paper <cit.> ). In order to compute the kurtosis for the circular condensate, we will need the first few densities and currents:f_1 =|ψ|^2, f_3 =|ψ|^4+ψψ_xx, g_2 =|ψ_x|^2-|ψ|^4-ψψ_xx. According to <cit.>, the averaged conserved quantities are given by ⟨f_1| ⟩=2I_1:=4∫_^+(z)u(z)|dz|, ⟨f_3| ⟩=-8/3I_3:=-16/3∫_^+(z^3)u(z)|dz|, ⟨g_2| ⟩=-2J_2:=-4∫_^+(z^2)v(z)|dz|,whereI_j =2∫_Γ_+u(ξ)ξ^j|dξ|, J_j =2∫_Γ_+v(ξ)ξ^j|dξ|, and^+is the support of the circular condensates.Since the total derivatives do not contribute to the average, through integration by parts, the following identities follow⟨f_3| ⟩=⟨|ψ|^4-|ψ_x|^2|,⟩ ⟨g_2| ⟩=⟨2|ψ_x|^2-|ψ^4||⟩ Then by the definition of kurtosis, we haveκ=⟨|ψ|^4|⟩/⟨|ψ|^2|^⟩2=-4/3I_3+1/2J_2/I_1^2. Below, we use the formula (<ref>) to derive formulas for computing the kurtosis for the genus 0 and genus 1 condensate. The main ingredient of the computation is computing the averaged densities (I_j) and averaged fluxes (J_j). The following proposition provides a fairly simple way to compute these quantities. Let^+be the contour for the circular condensate associating with the hyperelliptic Riemann surfaceℜ_n, then the averaged densities (I_j) and the averaged fluxes (J_j) can be computed by the following formulae: forj∈ℤ_+,I_j =2π i ρ^j+1≤{P(z(p))U_j-1(p)/R(z(p)),p=∞}, J_j =2π i ρ^j+1≤{Q(z(p))U_j-1(p)/R(z(p)),p=∞},whereP,Q,Rare defined in Theorem <ref> andU_j(p)=sin[(j+1)arccos(p)]/sin[arccos(p)].is thej-th Chebyshev polynomial of the second kind. Using(<ref>), (<ref>) and changes of variables ξ=ρ e^iθ, p=cosθ, we calculate I_j=2∫_^+u(ξ)ξ^j|dξ| =2ρ^j+1∫ u(ρ e^i θ)sin(jθ) dθ =-2ρ^j+1∫_^+u(z(p))sin(jθ)/sinθdp =-ρ^j+1∮_γ̂P(z(p))U_j-1(p)/R(z(p))dp, =2π i ρ^j+1≤{P(z(p))U_j-1(p)/R(z(p)),p=∞}, where U_j-1(p) is the (j-1)-th Chebyshev polynomial of the second kind. The equality (<ref>) comes from an application of the Cauchy's theorem on the Riemann surface ℜ_n, the loop γ̂ encloses ^+ counterclockwisely. 
Then a direct application of the residues theorem leads to the equality (<ref>), which is exact the formula (<ref>) for computing the averaged densities. By replacing u, P by v, Q respectively, one get the formula (<ref>) for computing the averaged fluxes.Plots/g1symmetrix.tex Based on the above proposition, in order to compute the kurtosis, we just need the first few averaged quantities, namely,I_1, I_3andJ_2. The following proposition gives an explicit formula for computing the kurtosis for the genus one circular condensate. Let^+be the contour on Fig.<ref>, then the kurtosis for the genus one condensate is given byκ=κ_num/κ_den,whereκ_num=6a_2^4+24(a_1-a_3+1)a_2^3+4≤[(a_1-a_3)^2-(8a_3-10)(a_1-a_3)-3]a_2^2+8≤[(a_1-a_3)^3+5(a_1-a_3)^2+(2a_1^2+2a_3^2-7)(a_1-a_3)-9]a_2+2≤[3(a_1-a_3)^4+4(a_1-a_3)^3-6(a_1-a_3)^2]+8(4a_3+2a_1^2+2a_3^2-9)(a_1-a_3)+54-16(a_1-a_3)(a_2+1)≤(3≤(a_1+a_2+a_3+1/3)^2-2(a_1a_2+a_1a_3+a_2a_3)-28/3)μ,κ_den=3[(a_1+a_2-a_3+3)(a_1+a_2-a_3-1)-4(a_2+1)(a_1-a_3)μ]^2,μ=E(m)/K(m), m=(1+a_3)(a_1-a_2)/(a_1-a_3)(a_2+1). First we use Proposition <ref> to compute I_1, I_3 and J_2, and then substitute them into the formula (<ref>) to compute the kurtosis. After some algebra, we get the kurtosis formula as stated. Notice that the kurtosis for the circular condensate is independent of the radiusρof the semicircleS^+.The kurtosis for the genus 0 condensate is always2for anyα_1,α_2∈(0,π)andα_1<α_2. This case can be degenerated from Proposition <ref> by taking the limit a_3 -1+. In fact, we have lim_a_3 -1+κ_num =6(a_1-a_2+2)^2(a_1-a_2-2)^2, lim_a_3 -1+κ_den =3(a_1-a_2+2)^2(a_1-a_2-2)^2, which immediately imply κ = 2.From the last section, we know that the genus one DOS/DOF(see equations (<ref>), (<ref>)) actually connects two genus zero DOS/DOF (see equations (<ref>), (<ref>)) asa_2a_1ora_2a_3.Also, we have already shown that the kurtosis for the genus zero condensate is always 2, it would be interesting to study the dynamic of the kurtosis as the branch pointa_2moving froma_1toa_3. A careful analysis to the formula (<ref>), we obtain the following theorem. Leta_1,a_3be fixed and satisfy-1<a_3<a_1<1. Then for anya_2∈[a_3,a_1]the genus 1 kurtosisκ, given by formula (<ref>),is greater or equal to 2 and is finite. Moreover,κ=2if and only ifa_2=a_1ora_2=a_3. Using the explicit formula for the genus one kurtosis (<ref>), we define a new function H(μ̃)=κ_num-2κ_den by replacing μ with μ̃ and consider μ̃ as a new variable not depending on a_1,a_2,a_3.To show κ≥ 2, it suffices to show H(μ̃) is positive for any μ̃∈ (1-m,1-m/2), where m=(1+a_3)(a_1-a_2)/(a_1-a_3)(a_2+1). A direct computation shows H(0)=-32(a_1+1)(a_2+1)(a_1-a_3)(a_2-a_3)<0,H(1)=-32(a_2+1)(a_3+1)(a_1-a_3)(a_1-a_2)<0,H(1-m)=32(a_1+1)(a_3+1)(a_2-a_3)(a_1-a_2)>0,H(1-m/2)=8(a_3+1)^2(a_1-a_2)^2>0. Note that the function H(μ̃) is a quadratic function of μ̃. The above observation (as visualized on Fig.<ref>) shows there is a zero in the interval (0,1-m) and another zero in (1-m/2,1), which implies H(μ̃)>0 for any μ̃∈ (1-m,1-m/2) andfor all a_2∈ (a_3,a_1), see inequality (<ref>). This proves that κ>2 for all a_2∈ (a_3,a_1). On the one hand, as a_2 a_1 or a_2 a_3, a direct computation shows κ=2 in both cases. On the other hand, if there exists some μ̃ such that H(μ̃)=0, then either m=0 or m=1 (otherwise we already show H(μ̃)>0). Since a_1>a_3, m=0 or m=1 implies a_2=a_1 or a_2=a_3 respectively. Thus, we have shown that κ≥ 2 and κ=2 if and only if a_2=a_3 or a_1=a_2. 
Plots/Hmudiag.tex To show that the kurtosis is finite, it suffices to show that, according to the definition of the kurtosis, I_1 is positive and I_3, J_2 are finite. Since I_1=1/2⟨|ψ|^2|$⟩, it is obvious positive. Sinceu,vare integrable with respect to the arc-length measure, we have |I_3|≤ z^3_L^∞(|dz|)u(z)_L^1(|dz|)<∞, |J_2|≤ z^2_L^∞(|dz|)v(z)_L^1(|dz|)<∞. Together with definition of the kurtosis, we conclude thatκ<∞. The proof is done. §.§ The kurtosis for DSWIn this subsection, we study the modulation dynamics of the kurtosis. As discussed in Section 4, there are two types of wave phenomenon, the rarefaction wave and the dispersive shock wave. According to Theorem <ref>, the kurtosis for the rarefaction wave is always2. As for the dispersive shock wave, we follow the same setting in section 4 for the dispersive shock wave. Replacinga_2in equation (<ref>) bya_2(x,t)as implicitly defined by equation (<ref>), we obtain the kurtosis for the dispersive shock wave. Denote the modulated kurtosis byκ_mod=κ_mod(a_2(x,t)). Recall for each fixeda_1,a_3such that-1<a_3<a_1<1, we have definedV_2- =-16a_1^2+8a_1a_3+2a_3^2-8a_1+4a_3+2/2a_1-a_3+1, V_2+ =-2a_1-4a_3+2,which in turn define two rays in thex-tplane:L_±:={(x,t):x-V_2±t=0,t>0}.These two rays split thex-tplane witht>0into three regions:D_1 :={(x,t): x-V_2+t>0}, D_2 :={(x,t): x-V_2+t<0, x-V_2-t>0}, D_3 :={(x,t): x-V_2-t<0}.In regionsD_1andD_3, the kurtosis is2and in the regionD_2, the kurtosis, according to Theorem <ref>, is strictly great than2and finite as long asa_1<1. As an illustrative example, we takea_1=0.9,a_3=-0.4and plot the modulated kurtosisκ_mod(a_2(x,t))in thex-tplane as well as the plot of the kurtosis near the rayL_-in Fig.<ref>. §.§ Scaling limit of the kurtosis of certain genus one circular condensateIn this subsection, we consider certain type of limiting configurations of the circular condensate and study the corresponding kurtosis.Specifically, we seta_3=-a_2≤ 0and study thelimitlim_(a_1,a_2) (1^-,0^+)κ(a_1,a_2,-a_2)along acertain pathL.In the next theorem, we show that for any givens>2,one can always find a pathL_ssuch that the limit in (<ref>) iss.For anys>2thelimitlim_c 0^+κ(a_1,a_2,-a_2)taken along the curve a_1=1-c, a_2=4exp≤{8/3(2-s)c^2}in the parameter plane(a_1,a_2)is equals. The kurtosis (<ref>) can be written in the following form κ(a_1,a_2,-a_2)=P_1+P_2μ/(Q_1+Q_2μ)^2, where P_1,P_2,Q_1,Q_2 are all polynomials of a_1,a_2. Replacing a_1=1-c and we consider the Taylor approximation of those polynomials near (c,a_2)=(0,0), which are given as follows: P_1 = -128/3a_2+32c^2+(a_2c,a_2^2), P_2 =64/3+(a_2,c), Q_1 = -4c+8a_2+(a_2^2, c^2, a_2c), Q_2 = -4+(a_2,c), where ({A_j}_j=1^n) means the correction term is bounded by some linear combination of {A_j}_j=1^n.Using the asymptotic approximations of complete elliptic integral of the first and the second kind (see formula 900.05 and 900.07 in Byrd-Friedman <cit.>), it is straightforward to show μ(m)=1/log4/√(1-m)+≤((m-1)log(1-m)),as m 1^-. Notice, as (c,a_2) (0,0), we have m=1-4a_2+(a_2^2,a_2c), which implies μ(m)=1/log2/√(a_2)+≤(-a_2/log(a_2)),as(a_2,c) (0,0). Then, after some algebraic manipulations, we arrive at the following leading behavior of the kurtosis κ = -128/3a_2+32c^2+64/31/log2/√(a_2)/≤(4c-8a_2+4/log2/√(a_2))^2+o(1). Since a_2 is obviously dominated by -1/log(a_2) as a_2 0^+, we can further simplify the kurtosis to κ = 32c^2+64/31/log2/√(a_2)/≤(4c+4/log2/√(a_2))^2+o(1). Let's denote S=1/log2/√(a_2) and κ_0=2c^2+4/3S/(c+S)^2, then κ=κ_0+o(1) as c 0+. 
The kurtosis (<ref>) can be written in the following form:

κ(a_1,a_2,-a_2) = (P_1 + P_2μ)/(Q_1 + Q_2μ)^2,

where P_1, P_2, Q_1, Q_2 are all polynomials in a_1, a_2. Replacing a_1 = 1-c, we consider the Taylor approximations of those polynomials near (c,a_2) = (0,0), which are given as follows:

P_1 = -128/3 a_2 + 32c^2 + O(a_2c, a_2^2),
P_2 = 64/3 + O(a_2, c),
Q_1 = -4c + 8a_2 + O(a_2^2, c^2, a_2c),
Q_2 = -4 + O(a_2, c),

where O({A_j}_{j=1}^n) means the correction term is bounded by some linear combination of {A_j}_{j=1}^n. Using the asymptotic approximations of the complete elliptic integrals of the first and second kind (see formulas 900.05 and 900.07 in Byrd-Friedman <cit.>), it is straightforward to show

μ(m) = 1/log(4/√(1-m)) + O((m-1)log(1-m)), as m → 1^-.

Notice that, as (c,a_2) → (0,0), we have m = 1 - 4a_2 + O(a_2^2, a_2c), which implies

μ(m) = 1/log(2/√a_2) + O(-a_2/log(a_2)), as (a_2,c) → (0,0).

Then, after some algebraic manipulations, we arrive at the following leading behavior of the kurtosis:

κ = (-128/3 a_2 + 32c^2 + (64/3)·1/log(2/√a_2)) / (4c - 8a_2 + 4/log(2/√a_2))^2 + o(1).

Since a_2 is dominated by -1/log(a_2) as a_2 → 0^+, we can further simplify the kurtosis to

κ = (32c^2 + (64/3)·1/log(2/√a_2)) / (4c + 4/log(2/√a_2))^2 + o(1).

Let us denote S = 1/log(2/√a_2) and κ_0 = (2c^2 + (4/3)S)/(c+S)^2; then κ = κ_0 + o(1) as c → 0^+. Since

κ_0 = 2 + ((4/3)S - 4cS - 2S^2)/(c+S)^2,

for c, S sufficiently close to 0 it is evident that κ_0 ≥ 2. Moreover, S dominates cS + S^2 as (a_2,c) gets sufficiently close to (0,0); thus, we have

κ_0 = 2 + (4/3)S/(c+S)^2 + o(1).

Note that along the curve of the theorem S = -(3/4)(2-s)c^2 = (3/4)(s-2)c^2, so, as c → 0^+, c dominates S and the denominator (c+S)^2 is dominated by c^2. We eventually get

κ = 2 + (4/3)S/c^2 + o(1),

whose limit as c → 0^+ equals 2 + (s-2) = s. This completes the proof.

Since the kurtosis is a continuous function of a_1, a_2, a_3 as long as a_1 > a_2 > a_3, Theorem <ref> also implies that for any given number greater than 2, there exists a certain configuration (a genus one circular condensate) such that the kurtosis equals the given number.

References

[BFbook] P. F. Byrd and M. D. Friedman, Handbook of Elliptic Integrals for Engineers and Scientists, Springer-Verlag, Berlin, Heidelberg (1971).
[BT14] M. Bertola and A. Tovbis, Meromorphic differentials with imaginary periods on degenerating hyperelliptic curves, Analysis and Mathematical Physics, 5, no. 1, pp. 1-22 (2015).
[CERT] T. Congy, G. A. El, G. Roberti and A. Tovbis, Dispersive hydrodynamics of soliton condensates for the Korteweg-de Vries equation, J. Nonl. Sci., 33, 104, https://doi.org/10.1007/s00332-023-09940-y (2023) (arXiv:2208.04472).
[CERTRS] T. Congy, G. A. El, G. Roberti, A. Tovbis, S. Randoux and P. Suret, Statistics of extreme events in integrable turbulence (arXiv:2307.08884).
[DafBook] C. M. Dafermos, Hyperbolic Conservation Laws in Continuum Physics (3rd Edition), Springer-Verlag, Berlin, Heidelberg (2010).
[El2003] G. A. El, The thermodynamic limit of the Whitham equations, Phys. Lett. A 311, 374-383 (2003).
[ET2020] G. A. El and A. Tovbis, Spectral theory of soliton and breather gases for the focusing nonlinear Schrödinger equation, Phys. Rev. E 101, 052207 (2020).
[FFM] H. Flaschka, M. G. Forest and D. W. McLaughlin, Multiphase averaging and the inverse spectral solution of the Korteweg-de Vries equation, Comm. Pure Appl. Math., 33, pp. 739-784 (1980).
[ForLee] M. G. Forest and J.-E. Lee, Geometry and modulation theory for the periodic nonlinear Schrödinger equation, in Oscillation Theory, Computation, and Methods of Compensated Compactness, edited by C. Dafermos, J. L. Ericksen, D. Kinderlehrer, and M. Slemrod (Springer, New York, NY, 1986), pp. 35-70.
[GP] A. V. Gurevich and L. P. Pitaevskii, Nonstationary structure of a collisionless shock wave, Sov. Phys. JETP 38 (2), 291-297 (1974).
[KT2021] A. Kuijlaars and A. Tovbis, On minimal energy solutions to certain classes of integral equations related to soliton gases for integrable systems, Nonlinearity 34, no. 10, 7227 (2021) (arXiv:2101.03964).
[LL83] P. Lax and D. Levermore, The small dispersion limit of the Korteweg-de Vries equation. II, Comm. Pure Appl. Math. 36, 571-593 (1983).
[ST] E. B. Saff and V. Totik, Logarithmic Potentials with External Fields, Springer-Verlag, Berlin (1997).
[TW] A. Tovbis and F. Wang, Recent developments in spectral theory of the focusing NLS soliton and breather gases: the thermodynamic limit of average densities, fluxes and certain meromorphic differentials; periodic gases, J. Phys. A: Math. Theor. 55, 424006 (2022).
[Wadati] M. Wadati, H. Sanuki and K. Konno, Relationships among inverse method, Bäcklund transformation and an infinite number of conservation laws, Prog. Theor. Phys. 53, 2, 419-436 (1975).
[Za71] V. E. Zakharov, Kinetic equation for solitons, Sov. Phys. JETP 33, 538 (1971).
[ZS] V. E. Zakharov and A. B. Shabat, Exact theory of two-dimensional self-focusing and one-dimensional self-modulation of waves in nonlinear media, Sov. Phys. JETP 34, 62 (1972).
http://arxiv.org/abs/2312.16406v1
{ "authors": [ "Alexander Tovbis", "Fudong Wang" ], "categories": [ "nlin.PS", "math-ph", "math.MP" ], "primary_category": "nlin.PS", "published": "20231227042759", "title": "Soliton condensates for the focusing Nonlinear Schrodinger Equation: a non-bound state case" }
http://arxiv.org/abs/2312.16655v1
{ "authors": [ "Sourav Ghosh" ], "categories": [ "math.GT", "math.DG", "53-xx, 37-xx" ], "primary_category": "math.GT", "published": "20231227180316", "title": "Deformation of Fuchsian representations and proper affine actions" }
LLM Factoscope: Uncovering LLMs' Factual Discernment through Inner States Analysis

Jinwen He^1,2, Yujia Gong^1,2, Kai Chen^1,2, Zijin Lin^1,2, Chengan Wei^1,2, Yue Zhao^1,2
^1SKLOIS, Institute of Information Engineering, Chinese Academy of Sciences
^2School of Cyber Security, University of Chinese Academy of Sciences
{hejinwen, gongyujia, linzijin, weichengan, zhaoyue, chenkai}@iie.ac.cn

Finding min s-t cuts in graphs is a basic algorithmic tool with applications in image segmentation, community detection, reinforcement learning, and data clustering. In this problem, we are given two nodes as terminals, and the goal is to remove the smallest number of edges from the graph so that these two terminals are disconnected. We study the complexity of differential privacy for the min s-t cut problem and show nearly tight lower and upper bounds where we achieve privacy at no cost for running time efficiency. We also develop a differentially private algorithm for the multiway k-cut problem, in which we are given k nodes as terminals that we would like to disconnect. As a function of k, we obtain privacy guarantees that are exponentially more efficient than applying the advanced composition theorem to known algorithms for multiway k-cut. Finally, we empirically evaluate the approximation of our differentially private min s-t cut algorithm and show that it almost matches the quality of the output of non-private ones.

§ INTRODUCTION

Min s-t cut, or more generally multiway k-cut, is a fundamental problem in graph theory and occupies a central place in combinatorial optimization. Given a weighted graph and k terminals, the multiway k-cut problem asks to divide the nodes of the graph into k partitions such that (1) each partition has exactly one terminal, and (2) the sum of the weights of edges between partitions, known as the cut value, is minimized <cit.>. Multiway k-cut is a clustering method used in a large variety of applications, including energy minimization <cit.> and image segmentation <cit.> in vision, reinforcement learning <cit.>, community detection <cit.>, and many other learning and clustering tasks <cit.>.

The above applications are carried out on large data sets, and, more and more frequently, those applications are executed on sensitive data. Hence, designing algorithms that preserve data privacy has been given substantial attention recently. A widely used and conservative standard of data privacy is differential privacy (DP), developed by Dwork <cit.>, which requires that an algorithm produce statistically indistinguishable outputs on any two neighboring databases. When DP is applied to graph data, two variants of DP were introduced <cit.>: in edge DP, two neighboring graphs differ in only one edge, and in node DP, two graphs are neighboring if one graph is obtained by removing one node and all the edges incident to that node from the other graph. In this work, we focus on edge DP. In a significant fraction of cases, the nodes of a network are public, but the edge attributes (the relationships between nodes) are private.
For many clustering algorithms, such as k-means, k-median, and correlation clustering, tight or near-tight differentially private algorithms have already been developed <cit.>. In particular, tight algorithms for differentially private min-cut have been long known <cit.>, where the min-cut problem asks to divide the graph into two partitions such that the cut value is minimized; in the min-cut problem, there is no restriction that two given nodes must be on different sides of the cut. Even though min-cut and min s-t cut might seem similar, the algorithmic techniques for min-cut do not extend to min s-t cut. A glimpse of this difference can be seen in the fact that there exist polynomially many min-cuts in any graph, but there can be exponentially many min s-t cuts for fixed terminals s and t[Take n-2 nodes v_1,...,v_{n-2} in addition to s and t, and connect each v_i to both s and t. Suppose the weight of all edges is 1. There are 2^{n-2} min s-t cuts, as each min s-t cut should remove exactly one edge from each s-v_i-t path; either (s, v_i) or (v_i, t) is fine. In this example, there are n-2 min-cuts: for each i, remove edges (v_i,s) and (v_i,t), which is a cut of size 2. Moreover, one can show that the number of min-cuts is polynomial for any graph; there are at most n(n-1)/2 of them, please see <cit.>.].

Our Results and Technical Overview

In this paper, we provide an edge-differentially private algorithm for min s-t cut and prove that it is almost tight. To the best of our knowledge, this is the first DP algorithm for the min s-t cut problem. Our first result is an ϵ-private algorithm for min s-t cut with O(n/ϵ) additive error.

Theorem 1. For any ϵ>0 and for weighted undirected graphs, there is an algorithm for min s-t cut that is (ϵ,0)-private and with high probability returns a solution with additive error O(n/ϵ). Moreover, the running time of this private algorithm is the same as the running time of the fastest non-private one.

Moreover, our proof of <ref> extends to the case when an edge-weight changes by at most τ between two neighboring graphs. In that case, our algorithm is (ϵ, 0)-private with O(n·τ/ϵ) additive error.

Furthermore, our approach uses existing min s-t cut algorithms in a black-box way. More precisely, our method changes the input graph in a specific way, i.e., it adds 2(n-1) edges with specially chosen weights, and an existing algorithm does the rest of the processing. In other words, our approach automatically transfers to any computation setting (centralized, distributed, parallel, streaming) as long as there is a non-private min s-t cut algorithm for that specific setting. The main challenge with this approach is showing that it actually yields privacy. Perhaps surprisingly, our approach is almost optimal in terms of the approximation it achieves. Specifically, we complement our upper bound by the following lower-bound result.

Theorem 2. Any (ϵ,δ)-differentially private algorithm for min s-t cut on n-node graphs requires expected additive error of at least n/20 for any ϵ≤ 1 and δ≤ 0.1.

Note that <ref> holds regardless of the graph being weighted or unweighted. Given an unweighted graph G, let the cut C_s(G) be the s-t cut where node s is in one partition and all the other nodes are in the other partition. The algorithm that always outputs C_s(G) is private and has additive error O(n) since the value of C_s(G) is at most n-1 in unweighted graphs. Thus one might conclude that we do not need <ref> for a private algorithm, as one cannot hope to do better than O(n) additive error by <ref>.
However, this argument fails for weighted graphs, when C_s(G) has a very large value due to heavy edges incident to s, or when a graph has parallel unweighted edges. In <ref>, we show an application in which one computes a weighted min s-t cut while receiving an unweighted graph as input. We further evaluate our theoretical results on private min s-t cut in <ref> and show that, despite our additive approximation, our approach outputs a cut with value fairly close to the min s-t cut. Moreover, we show that it is, in fact, more accurate than natural heuristics (such as outputting C_s(G)) for a wide range of the privacy parameter ϵ.

<ref> and <ref> depict an interesting comparison to differentially private min-cut algorithms. Gupta et al. <cit.> provide an ϵ-private algorithm with O(log n/ϵ) additive error for min-cut and show that any ϵ-private algorithm requires Ω(log n) additive error. The large gap between ϵ-private algorithms for min-cut and min s-t cut provides another indication that one cannot easily extend algorithms for one problem to the other.

While finding a (non-private) min s-t cut is polynomially solvable using max-flow algorithms <cit.>, the (non-private) multiway k-cut problem is NP-hard for k≥ 3 <cit.>, and finding the best approximation algorithm for it has been an active area of research <cit.>. The best approximation factor for the multiway cut problem is 1.2965 <cit.>, and the largest lower bound is 1.20016 <cit.>. From a more practical point of view, there are a few simple 2-approximation algorithms for multiway k-cut, such as greedy splitting <cit.>, which splits one partition into two partitions in a sequence of k-1 steps. Another simple 2-approximation algorithm by <cit.> is the following: (1) if the terminals of the graph are s_1,…,s_k, for each i, contract all the terminals except s_i into one node, t_i; (2) then run min s-t cut on this graph with terminals s_i, t_i; (3) finally, output the union of all these k cuts. This algorithm reduces multiway k-cut to k instances of min s-t cut. Hence, it is easy to see that if we use an ϵ-private min s-t cut with additive error r, we get a kϵ-private multiway k-cut algorithm with additive error kr and multiplicative error 2. In a similar way, one can make the greedy splitting algorithm private with the same error guarantees. Using advanced composition <cit.> can further reduce this dependency polynomially at the expense of the privacy parameters.

We design an algorithm that reduces the dependency on k of the private multiway cut algorithm to log k, which is an exponentially smaller dependence than applying the advanced composition theorem. We obtain this result by developing a novel 2-approximation algorithm for multiway k-cut that reduces this problem to log k instances of min s-t cut. Our result is presented in <ref>. We note that the running time of the algorithm of <ref> is at most O(log k) times the running time of the private min s-t cut algorithm of <ref>.

Theorem 3. For any ϵ>0, there exists an (ϵ,0)-private algorithm for multiway k-cut on weighted undirected graphs that with probability at least 1 - 1/n returns a solution with value 2·OPT + O(n log k/ϵ), where OPT is the value of the optimal non-private multiway k-cut.

Finally, we emphasize that all our private algorithms output cuts instead of only their values. For outputting a number x privately, the standard approach is to add Laplace noise Lap(1/ϵ), which gives an ϵ-private algorithm with additive error at most O(1/ϵ) with high probability.

Additional implications of our results.
First, the algorithm for <ref>, i.e., <ref>, uses a non-private min s-t cut algorithm in a black-box manner. Hence, <ref> essentially translates to any computation setting without asymptotically increasing complexities compared to non-private algorithms. Second, <ref> creates (recursive) graph partitions such that each vertex and edge appears in O(log k) partitions. To see how it differs from other methods, observe that the standard splitting algorithm performs k-1 splits, where some vertices appear in each of the split computations. There are also LP-based algorithms for the multiway k-cut problem, but to the best of our knowledge, they are computationally more demanding than the 2-approximate greedy splitting approach. Consequently, in the centralized and parallel settings such as PRAM, in terms of total work, <ref> depends only logarithmically on k while the popular greedy splitting algorithm has a linear dependence on k. To the best of our knowledge, no existing method matches these guarantees of <ref>.

§ PRELIMINARIES

In this section, we provide definitions used in the paper. We say that an algorithm 𝒜 outputs an (α,β)-approximation to a minimization problem P, for α ≥ 1 and β ≥ 0, if on any input instance I we have 𝒜(I) ≤ α·OPT(I) + β, where 𝒜(I) is the value of the output of 𝒜 on input I and OPT(I) is the value of the optimal solution to problem P on input I. We refer to α and β as the multiplicative and additive errors, respectively.

Graph Cuts. We use uv to refer to the edge between nodes u and v. Let G=(V,E,w_G) be a weighted graph with node set V, edge set E, and weight function w_G : V^2 → ℝ_{≥0}, where for all edges uv ∈ E we have w_G(uv) > 0, and for all uv ∉ E we have w_G(uv) = 0. For any set of edges C, let w_G(C) = ∑_{e∈C} w_G(e). If removing C disconnects the graph, we refer to C as a cut set. When C is a cut set, we refer to w_G(C) as the weight or value of C. We drop the subscript G when it is clear from the context. For subsets A, B ⊆ V, let E(A,B) be the set of edges uv = e ∈ E such that u ∈ A and v ∈ B.

Given k terminals s_1,…,s_k, a multiway k-cut is a set of edges C ⊆ E such that removing the edges in C disconnects all the terminals. More formally, there exist k disjoint node subsets V_1,…,V_k ⊆ V such that s_i ∈ V_i for all i, ∪_{i=1}^k V_i = V, and C = ∪_{i≠j} E(V_i,V_j). The multiway k-cut problem asks for a multiway k-cut with the lowest value.

In our algorithms, we use the notion of node contraction, which we formally define here. Let Z ⊆ V be a subset of nodes of the graph G=(V,E,w_G). By contracting the nodes in Z into a node ẑ, we add the node ẑ to the graph and remove all the nodes in Z from the graph. For every v ∈ V∖Z, the weight of the edge ẑv is equal to ∑_{z∈Z} w_G(vz). Note that if none of the nodes in Z has an edge to v, then there is no edge from ẑ to v.
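The node-contraction operation defined above is used repeatedly by our algorithms. The following is a minimal sketch of it, assuming networkx with edge weights stored in a 'weight' attribute; the helper name contract_set is ours:

```python
import networkx as nx

def contract_set(G, Z, new_node):
    """Contract the node set Z into new_node, summing the weights of
    resulting parallel edges, as in the definition above (sketch)."""
    H = G.copy()
    H.add_node(new_node)
    for z in Z:
        for v, data in list(H[z].items()):
            if v in Z or v == new_node:
                continue  # edges inside Z disappear after contraction
            w = data.get("weight", 1)
            if H.has_edge(new_node, v):
                H[new_node][v]["weight"] += w
            else:
                H.add_edge(new_node, v, weight=w)
        H.remove_node(z)
    return H
```

Note that, per the definition, if no node of Z has an edge to v, then contract_set creates no edge between new_node and v.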
In our proofs, we use the following fact. Let X ∼(λ) and z ≥ 0. Then X ≥ z = exp-λ z.Laplace distribution. We use X ∼(b) to denote that X is a random variable is sampled from the Laplace distribution with parameter b. Let X ∼(b) and z > 0. ThenX > z = 1/2exp-z/band |X| > z = exp-z/b. Let X, Y ∼(). Then, the distribution X - Y follows (1 / ). We use ^b to denote the cumulative distribution function of (b), i.e.,^b(x) = 1/2 bexp-x/b. We use ^b to denote the cumulative distribution function of (b), i.e., ^b(x) = 1/2expx/b if x ≤ 01 - 1/2exp-x/b otherwiseWhen it is clear from the context what b is, we will drop it from the superscript. For any t ∈ and τ≥ 0 it holds^1/(t+τ)/^1/(t)≤ e^τϵ and ^1/(t+τ)/^1/(t)≤ e^τϵ. The proof of this claim is standard, and for completeness, we provide it below.By definition, we have(t+τ)/(t) = /2exp-t+τ//2exp-t = exp-t+τ + t≤expτ.Also by definition, it holds (t + τ) = ∫_-∞^t + τ(x)dx. Using <ref> we derive(t + τ) ≤expτ∫_-∞^t + τ(x - τ)dx = exp(τ) ∫_-∞^t(x)dx = exp(τ) (t). § DP ALGORITHM FOR MIN S-T CUTIn this section, we prove <ref>. Our DP min s-t cut algorithm is quite simple and is provided as <ref>. The approach simply adds an edge between s and every other node, and an edge between t and every other node. These edges have weight drawn from (). The challenging part is showing that this simple algorithm actually preserves privacy. Moreover, when we couple the approximation guarantee of <ref> with our lower-bound shown in <ref>, then up to a 1/ factor <ref> yields optimal approximation guarantee.Remark: Technically, there might be multiple min s-t cuts that can be returned on <ref> of <ref>. We discuss that, without loss of generality, it can be assumed that there is a unique one. We elaborate on this in <ref>. §.§ Differential Privacy AnalysisBefore we dive into DP guarantees of <ref>, we state the following claim that we use in our analysis. Its proof is given in <ref>. Suppose x and y are two independent random variables drawn from (1/). Let α, β,γ be three fixed real numbers, and let τ≥ 0. DefineP(α,β,γ) x<α,y<β,x+y<γ.Then, it holds that1 ≤P(α+τ,β+τ,γ)/P(α,β,γ)≤ e^4τϵ.Let τ be the edge-weight difference between two neighboring graphs. <ref> is (4 τ, 0)-DP. Let G and G' be two neighboring graphs. Let edge e = uv be the one for which w_e in G and G' differ; recall that it differs by at most τ. To analyze the probability with which <ref> outputs the same cut for input G as it outputs for input G', we first sample all the X_s, x and X_t, x for x ∈ V ∖{s, t, u, v}. Intuitively, the outcomes of those random variables are not crucial for the difference between outputs of <ref> invoked on G and G'. We now elaborate on that.Consider all the s-t cuts – not only the minimum ones – in G before X_s, u, X_s, v, X_t, u and X_t, v are sampled; for instance, assume for a moment that those four random variables equal 0.Let (G) be all those cuts sorted in a non-decreasing order with respect to their weight. Observe that (G) = 2^n - 2. As we will show next, although there are exponentially many cuts, we will point to only four of them as crucial for our analysis. This will come in very handy in the rest of this proof. We now partition (G) into four groups based on which u and v are on the same side of the cut as s. Let _u(G) be the subset of (G) for which u is on the same side of a cut as s while v is on the same side of the cut as t. Analogously we define _v(G). We use _u, v(G) to represent the subset of (G) for which s, u, and v are all on the same side. 
§.§ Differential Privacy Analysis

Before we dive into the DP guarantees of <ref>, we state the following claim that we use in our analysis. Its proof is given in <ref>. Suppose x and y are two independent random variables drawn from Lap(1/ϵ). Let α, β, γ be three fixed real numbers, and let τ ≥ 0. Define

P(α,β,γ) := Pr[x<α, y<β, x+y<γ].

Then, it holds that

1 ≤ P(α+τ, β+τ, γ)/P(α,β,γ) ≤ e^{4τϵ}.

Let τ be the edge-weight difference between two neighboring graphs. Then <ref> is (4τϵ, 0)-DP.

Let G and G' be two neighboring graphs, and let edge e = uv be the one whose weight differs in G and G'; recall that it differs by at most τ. To analyze the probability with which <ref> outputs the same cut for input G as it outputs for input G', we first sample all the X_{s,x} and X_{t,x} for x ∈ V∖{s,t,u,v}. Intuitively, the outcomes of those random variables are not crucial for the difference between the outputs of <ref> invoked on G and G'. We now elaborate on that.

Consider all the s-t cuts (not only the minimum ones) in G before X_{s,u}, X_{s,v}, X_{t,u} and X_{t,v} are sampled; for instance, assume for a moment that those four random variables equal 0. Let 𝒞(G) be all those cuts sorted in a non-decreasing order with respect to their weight. Observe that |𝒞(G)| = 2^{n-2}. As we will show next, although there are exponentially many cuts, we will point to only four of them as crucial for our analysis. This will come in very handy in the rest of this proof.

We now partition 𝒞(G) into four groups based on whether u and v are on the same side of the cut as s. Let 𝒞_u(G) be the subset of 𝒞(G) for which u is on the same side of a cut as s while v is on the same side of the cut as t. Analogously we define 𝒞_v(G). We use 𝒞_{u,v}(G) to represent the subset of 𝒞(G) for which s, u, and v are all on the same side. Finally, by 𝒞_∅(G) we refer to the subset of 𝒞(G) for which t, u, and v are all on the same side.

Let C_u, C_v, C_{u,v}, and C_∅ be the minimum-weight cuts in 𝒞_u(G), 𝒞_v(G), 𝒞_{u,v}(G), and 𝒞_∅(G), respectively, before X_{s,u}, X_{s,v}, X_{t,u} and X_{t,v} are sampled. It is an easy observation that sampling X_{s,u}, X_{s,v}, X_{t,u} and X_{t,v} and altering the weight of the edge e changes the weight of all the cuts in 𝒞_u(G) by the same amount. The same observation also holds for 𝒞_v(G), 𝒞_{u,v}(G) and 𝒞_∅(G). This further implies that the min s-t cut of G after X_{s,u}, X_{s,v}, X_{t,u} and X_{t,v} are sampled will be among C_u, C_v, C_{u,v}, and C_∅. Observe that these four cuts are also the minimum-weight cuts in 𝒞_u(G'), 𝒞_v(G'), 𝒞_{u,v}(G'), and 𝒞_∅(G'), respectively. In other words, the min s-t cuts in G and in G' are among C_u, C_v, C_{u,v} and C_∅, but not necessarily the same; this holds both before and after sampling X_{s,u}, X_{s,v}, X_{t,u} and X_{t,v}. In the rest of this proof, we show that sampling X_{s,u}, X_{s,v}, X_{t,u} and X_{t,v} makes it likely that the min s-t cuts in G and G' are the same cut.

The ratio of probabilities in G' and G that C_u is the min s-t cut. Our goal is to bound

Pr[w_{G'}(C_u) < w_{G'}(C_v) ∧ w_{G'}(C_u) < w_{G'}(C_{u,v}) ∧ w_{G'}(C_u) < w_{G'}(C_∅)] / Pr[w_G(C_u) < w_G(C_v) ∧ w_G(C_u) < w_G(C_{u,v}) ∧ w_G(C_u) < w_G(C_∅)].

First consider the case w_G(e) = w_{G'}(e) + τ, for some τ > 0. For ease of calculation, for x ∈ {∅, {u,v}} define Δ_x = w_G(C_u) - w_G(C_x) - τ, and define Δ_v = w_G(C_u) - w_G(C_v), where the weights are taken before X_{s,u}, X_{s,v}, X_{t,u} and X_{t,v} are sampled. In the rest of this proof, we analyze the behavior of the cuts once X_{s,u}, X_{s,v}, X_{t,u} and X_{t,v} are sampled. Then, we have

Pr[w_G(C_u) < w_G(C_v) ∧ w_G(C_u) < w_G(C_{u,v}) ∧ w_G(C_u) < w_G(C_∅)]
= Pr[Δ_v + X_{s,v} + X_{t,u} < X_{t,v} + X_{s,u} ∧ Δ_{u,v} + X_{s,v} + X_{t,u} + τ < X_{t,v} + X_{t,u} ∧ Δ_∅ + X_{s,v} + X_{t,u} + τ < X_{s,v} + X_{s,u}]
= Pr[Δ_v + X_{s,v} + X_{t,u} < X_{t,v} + X_{s,u} ∧ Δ_{u,v} + X_{s,v} + τ < X_{t,v} ∧ Δ_∅ + X_{t,u} + τ < X_{s,u}].

Now, we replace X_{s,v} - X_{t,v} by x and X_{t,u} - X_{s,u} by y. Then, <ref> can be rewritten as

Pr[w_G(C_u) < w_G(C_v) ∧ w_G(C_u) < w_G(C_{u,v}) ∧ w_G(C_u) < w_G(C_∅)] = Pr[x + y < -Δ_v ∧ x < -Δ_{u,v} - τ ∧ y < -Δ_∅ - τ].

Applying the same analysis for G' we derive

Pr[w_{G'}(C_u) < w_{G'}(C_v) ∧ w_{G'}(C_u) < w_{G'}(C_{u,v}) ∧ w_{G'}(C_u) < w_{G'}(C_∅)]
= Pr[Δ_v + X_{s,v} + X_{t,u} < X_{t,v} + X_{s,u} ∧ Δ_{u,v} + X_{s,v} + X_{t,u} < X_{t,v} + X_{t,u} ∧ Δ_∅ + X_{s,v} + X_{t,u} < X_{s,v} + X_{s,u}]
= Pr[Δ_v + X_{s,v} + X_{t,u} < X_{t,v} + X_{s,u} ∧ Δ_{u,v} + X_{s,v} < X_{t,v} ∧ Δ_∅ + X_{t,u} < X_{s,u}].

Replacing X_{s,v} - X_{t,v} by x and X_{t,u} - X_{s,u} by y yields

Pr[w_{G'}(C_u) < w_{G'}(C_v) ∧ w_{G'}(C_u) < w_{G'}(C_{u,v}) ∧ w_{G'}(C_u) < w_{G'}(C_∅)] = Pr[x + y < -Δ_v ∧ x < -Δ_{u,v} ∧ y < -Δ_∅].

Note that by <ref> we have that x and y follow Lap(1/ϵ). Moreover, the random variables x and y are independent by definition. Therefore, by invoking <ref> with α = -Δ_{u,v} - τ, β = -Δ_∅ - τ, and γ = -Δ_v, we derive

Pr[w_{G'}(C_u) < w_{G'}(C_v) ∧ w_{G'}(C_u) < w_{G'}(C_{u,v}) ∧ w_{G'}(C_u) < w_{G'}(C_∅)] / Pr[w_G(C_u) < w_G(C_v) ∧ w_G(C_u) < w_G(C_{u,v}) ∧ w_G(C_u) < w_G(C_∅)] ≤ e^{4τϵ}.

Considering the second case, w_G(e) + τ = w_{G'}(e) for some τ > 0, we derive that the ratio <ref> is upper-bounded by 1 ≤ e^{4τϵ}.

The remaining cases. The proof for the remaining cases, i.e., the analysis when C_v, C_{u,v} or C_∅ is the minimum, follows the same steps as the case we have just analyzed.

Finalizing the proof. It remains to discuss two properties that are needed to complete the proof. First, our analysis above was carried out with all but four of the random variables X fixed.
Nevertheless, our analysis does not depend on how those random variables are fixed. Therefore, for any fixed cut C, summing/integrating over all the possible outcomes of those variables yields

Pr[<ref> outputs C given G] / Pr[<ref> outputs C given G'] ≤ e^{4τϵ}.

Second, our proof bounds the ratio of the probabilities of a single cut C being the minimum one in G and in G'. However, the DP definition applies to any set of cuts. It is folklore that in the case of pure DP, i.e., when δ = 0, these two cases are equivalent. This completes the analysis.

§.§ Approximation Analysis

Observe that showing that <ref> has O(n log n/ϵ) additive error is straightforward, as a random variable drawn from Exp(b) is upper-bounded by O(log n/b) whp. <ref> proves the O(n/ϵ) bound. On a very high level, our proof relies on the fact that for a sufficiently large constant c, only a small fraction of the random variables X_{i,j} exceeds c/ϵ; this fraction is e^{-c}.

With probability at least 1 - n^{-2}, <ref> outputs a min s-t cut with additive error O(n/ϵ).

Let G be an input graph to <ref> and let Ĝ be the graph after the edges on <ref> are added. Ĝ contains 2(n-1) more edges than G. This proof shows that with probability at least 1 - n^{-2}, the total sum of the weights of all these edges is O(n/ϵ).

We first provide a brief intuition. For the sake of it, assume ϵ = 1. Whp, each X_{i,j} weighs at most 5 log n. Also, consider only those X_{i,j} such that X_{i,j} ≥ 2; the total sum of those random variables having weight less than 2 is O(n). It is instructive to think of the interval [2, 5 log n] as being partitioned into buckets of the form [2^i, 2^{i+1}). Then, the value of each edge added by <ref> falls into one of the buckets. Now, the task becomes upper-bounding the number c_i of edges in bucket i. That is, we let Y_{s,u}^i = 1 iff X_{s,u} ∈ [2^i, 2^{i+1}), which results in c_i = ∑_{u∈V} (Y_{s,u}^i + Y_{t,u}^i). Hence, c_i is a sum of 0/1 independent random variables, and we can use the Chernoff bound to argue about its concentration. There are two cases. If 𝔼[c_i] is more than Ω(log n), then c_i ∈ O(𝔼[c_i]) whp by the Chernoff bound. If 𝔼[c_i] ∈ o(log n), e.g., 𝔼[c_i] = O(1), we cannot say that with high probability c_i ∈ O(𝔼[c_i]). Nevertheless, it still holds that c_i ∈ O(log n) whp.

Edges in E(Ĝ)∖E(G) with weights more than 5 log n/ϵ. Let Y ∼ Exp(ϵ). By <ref>, Pr[Y > 5 log n/ϵ] = exp(-5 log n) = n^{-5}. Since each X_{s,v} and X_{t,v} is drawn from Exp(ϵ), we have that for n ≥ 2, with probability at least 1 - n^{-3}, each of the edges in E(Ĝ)∖E(G) has weight at most 5 log n/ϵ.

Edges in E(Ĝ)∖E(G) with weights at most 5 log n/ϵ. Observe that the sum of the weights of all the edges in E(Ĝ)∖E(G) having weight at most 2/ϵ is O(n/ϵ). Hence, we focus on the edge weights in the interval [2/ϵ, 5 log n/ϵ]. We partition this interval into O(log log n) subintervals, each, except potentially the last one, of the form [2^i/ϵ, 2^{i+1}/ϵ). We have

Pr[Y ∈ [2^i/ϵ, 2^{i+1}/ϵ)] ≤ Pr[Y ≥ 2^i/ϵ] = e^{-2^i}.

Let c_i be the number of random variables among X_{s,v} and X_{t,v} whose values belong to [2^i/ϵ, 2^{i+1}/ϵ). Then, we derive

𝔼[c_i] ≤ 2(n-1)·e^{-2^i} ≤ 2n/2^{2^i} ≤ 2n/2^{2i},

where to obtain the inequalities we use that i ≥ 1. By the Chernoff bound, for an appropriately set constant b > 0, it holds that c_i ≤ b·max{log n, 2n/2^{2i}} with probability at least 1 - n^{-5}. By the union bound, this claim holds for all the O(log log n) partitions simultaneously with probability at least 1 - n^{-4}.
Hence, with probability at least 1 - n^{-4}, the sum of the edge-weights in E(Ĝ)∖E(G) across all the O(log log n) partitions is at most

∑_{i=1}^{⌈log(5 log n)⌉} b·max{log n, 2n/2^{2i}}·(2^{i+1}/ϵ) ≤ O(log^2 n)/ϵ + 2b·∑_{i≥0} 2n/(2^i ϵ) = O(log^2 n)/ϵ + 8bn/ϵ ∈ O(n/ϵ),

where we used max(log n, 2n/2^{2i}) ≤ log n + 2n/2^{2i}. By taking the union bound over both cases, we have that with probability at least 1 - n^{-2}, the sum of the weights of all edges added to G is O(n/ϵ). Hence, with probability at least 1 - n^{-2}, the min s-t cut in Ĝ has weight at most the min s-t cut in G plus O(n/ϵ). This completes our analysis.

§.§ Proof of <ref>

To prove the lower bound, we observe that if x < α and y < β, then x < α+τ and y < β+τ as well. Hence, it trivially holds that P(α+τ,β+τ,γ) ≥ P(α,β,γ), and so P(α+τ,β+τ,γ)/P(α,β,γ) ≥ 1. We now analyze the upper bound. For the sake of brevity, in the rest of this proof, we use F to denote F^{1/ϵ} and f to denote f^{1/ϵ}. We consider three cases depending on the parameters α, β, γ.

Case γ ≥ α+β+2τ. We have Pr[x+y<γ | x<α, y<β] = 1 = Pr[x+y<γ | x<α+τ, y<β+τ]. So, it holds that

P(α,β,γ) = Pr[x+y<γ | x<α, y<β]·Pr[x<α, y<β] = Pr[x<α, y<β] = F(α)F(β).

Similarly, P(α+τ,β+τ,γ) = F(α+τ)F(β+τ). Now using <ref>, we obtain that P(α+τ,β+τ,γ)/P(α,β,γ) ≤ e^{2τϵ}.

Case γ < α+β. Since x and y are independent, we write P(α,β,γ) as follows:

P(α,β,γ) = ∫_{-∞}^{β} ∫_{-∞}^{min(α,γ-y)} f(x)dx f(y)dy = ∫_{-∞}^{β} F(min(α,γ-y)) f(y)dy = F(α)·∫_{-∞}^{γ-α} f(y)dy + ∫_{γ-α}^{β} F(γ-y)f(y)dy = F(α)F(γ-α) + ∫_{γ-α}^{β} F(γ-y)f(y)dy.

Similarly to <ref> we have

P(α+τ,β+τ,γ) = F(α+τ)F(γ-α-τ) + ∫_{γ-α-τ}^{β+τ} F(γ-y)f(y)dy.

We rewrite <ref> as follows to obtain a lower bound on P(α,β,γ):

P(α,β,γ) = F(α)F(γ-α-2τ) + ∫_{γ-α-2τ}^{γ-α} F(α)f(y)dy + ∫_{γ-α}^{β} F(γ-y)f(y)dy ≥ F(α)F(γ-α-2τ) + e^{-2τϵ}·∫_{γ-α-2τ}^{β} F(γ-y)f(y)dy.

In obtaining the inequality, we used the fact that if y ∈ [γ-α-2τ, γ-α] then 0 ≤ (γ-y)-α ≤ 2τ, and so by <ref> we have F(α) ≥ e^{-2τϵ}F(γ-y). Now we compare the two terms of <ref> with <ref>. By <ref> we have that F(α)F(γ-α-2τ) ≥ e^{-2τϵ}F(α+τ)F(γ-α-τ) and

∫_{γ-α-2τ}^{β} F(γ-y)f(y)dy ≥ e^{-τϵ}·∫_{γ-α-τ}^{β+τ} F(γ-y)f(y)dy.

So we have P(α,β,γ) ≥ e^{-3τϵ}·P(α+τ,β+τ,γ).

Case α+β ≤ γ < α+β+2τ. Then

P(α,β,γ) = ∫_{-∞}^{β} ∫_{-∞}^{min(α,γ-y)} f(x)dx f(y)dy = ∫_{-∞}^{β} F(min(α,γ-y)) f(y)dy = F(α)F(β) ≥ e^{-2τϵ}·F(α+τ)F(β+τ) = e^{-2τϵ}·(F(α+τ)F(β-τ) + F(α+τ)(F(β+τ) - F(β-τ))) ≥ e^{-4τϵ}·(F(α+τ)F(β+τ) + F(α+τ)(F(β+τ) - F(β-τ))).

Note that <ref> is obtained since for any y ≤ β we have α ≤ γ-y. <ref> and <ref> are both obtained using <ref>. One can easily verify that the expression <ref> for P(α+τ,β+τ,γ) holds in this case as well. Using the fact that γ-α-τ ≤ β+τ and that F is a non-decreasing function, we upper-bound P(α+τ,β+τ,γ) as

P(α+τ,β+τ,γ) ≤ F(α+τ)F(β+τ) + ∫_{γ-α-τ}^{β+τ} F(γ-y)f(y)dy ≤ F(α+τ)F(β+τ) + F(α+τ)·∫_{γ-α-τ}^{β+τ} f(y)dy = F(α+τ)F(β+τ) + F(α+τ)(F(β+τ) - F(γ-α-τ)) ≤ F(α+τ)F(β+τ) + F(α+τ)(F(β+τ) - F(β-τ)).

Combining <ref> and <ref> concludes the analysis of this case as well.

§.§ Multiple min s-t cuts

Let G and G' be two neighboring graphs, and let Ĝ and Ĝ', respectively, be their modified versions constructed by <ref>. <ref> outputs a min s-t cut in Ĝ. However, what happens if there are multiple min s-t cuts in Ĝ and the algorithm invoked on <ref> breaks ties in a way that depends on whether a specific edge e appears in G or not? If it happens that e is the edge difference between G and G', then such a tie-breaking rule might reveal additional information about G and G'. We now outline how this can be bypassed. Observe that if the random variables X_{s,u} and X_{t,u} were sampled with infinite bit precision, then with probability 1 no two cuts would have the same value.
So, consider a more realistic situation where edge-weights are represented by O(log n) bits, and assume that the least significant bit corresponds to the value 2^{-t}, for an integer t ≥ 0. We show how to extend edge-weights by additional O(log n) bits that have extremely small values but help obtain a unique min s-t cut. Our modification consists of two steps.

First step. All the bits corresponding to values from 2^{-t-1} to 2^{-t-2 log n} remain 0, while those corresponding to larger values remain unchanged. This is done so that, even summing across all (at most n(n-1)/2) edges, no matter what the bits corresponding to values 2^{-t-2 log n - 1} and less are, their total value is less than 2^{-t}. Hence, if the weight of cut C_1 is smaller than the weight of cut C_2 before the modifications we undertake, then C_1 has a smaller weight than C_2 after the modifications as well.

Second step. We first recall the celebrated Isolation Lemma. Let T and N be positive integers, and let ℱ be an arbitrary nonempty family of subsets of the universe {1,…,T}. Suppose each element x ∈ {1,…,T} in the universe receives an integer weight g(x), each of which is chosen independently and uniformly at random from {1,…,N}. The weight of a set S ∈ ℱ is defined as g(S) = ∑_{x∈S} g(x). Then, with probability at least 1 - T/N, there is a unique set in ℱ that has the minimum weight among all sets of ℱ.

We now apply <ref> to conclude our modification of the edge weights in Ĝ. We let the universe {1,…,T} from that lemma be the following 2(n-2) elements: {(s,v) | v ∈ V(G)∖{s,t}} ∪ {(t,v) | v ∈ V(G)∖{s,t}}. Then, we let ℱ represent all min s-t cuts in Ĝ, i.e., a subset S of the universe belongs to ℱ iff there is a min s-t cut C in Ĝ such that for each (a,b) ∈ S the cut C contains the edge corresponding to X_{a,b}. So, by letting N = 2n^2, we derive that with probability at least 1 - 1/n, no two cuts represented by ℱ have the same minimum value with respect to g as defined in <ref>. To implement g in our modification of the weights, we set the bits of each X_{s,v} and X_{t,v} corresponding to values from 2^{-t-2 log n - 1} to 2^{-t-2 log n - log N} to an integer between 1 and N chosen uniformly at random. Only after these modifications do we invoke <ref> of <ref>. Note that the family of cuts ℱ is defined only for the sake of analysis; it is not needed algorithmically.

§ LOWER BOUND FOR MIN S-T CUT ERROR

In this section, we prove our lower bound. Our high-level idea is similar to that of <cit.> for proving a lower bound for private algorithms for correlation clustering.

Theorem 2 (restated). Any (ϵ,δ)-differentially private algorithm for min s-t cut on n-node graphs requires expected additive error of at least n/20 for any ϵ ≤ 1 and δ ≤ 0.1.

For the sake of contradiction, let 𝒜 be an (ϵ,δ)-differentially private algorithm for min s-t cut that on any input n-node graph outputs an s-t cut with expected additive error less than n/20. We construct a set of 2^n graphs S and show that 𝒜 cannot have low expected cost on all of the graphs in this set while preserving privacy. The node sets of all the graphs in S are the same and consist of V = {s, t, v_1,…,v_n}, where s and t are the terminals of the graph and n > 30. For any τ ∈ {0,1}^n, let G_τ be the graph on node set V with the following edges: for any 1 ≤ i ≤ n, if τ_i = 1, then there is an edge between s and v_i, and if τ_i = 0, then there is an edge between t and v_i. Note that v_i is attached to exactly one of the terminals s and t. Moreover, the min s-t cut value of each graph G_τ is zero. Algorithm 𝒜 determines for each i whether v_i is on the s-side of the output cut or the t-side. The contribution of each node v_i to the total error is the number of edges attached to v_i that are in the cut.
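For concreteness, the lower-bound instances G_τ can be generated as follows (a sketch, assuming networkx):

```python
import networkx as nx

def G_tau(tau):
    """Build the lower-bound instance: v_i is attached to s if tau_i = 1
    and to t if tau_i = 0, so the min s-t cut value is zero (sketch)."""
    G = nx.Graph()
    G.add_nodes_from(["s", "t"])
    for i, bit in enumerate(tau):
        G.add_edge("s" if bit else "t", f"v{i}", weight=1)
    return G
```

With this construction in hand, we return to accounting for the per-node error of 𝒜 on G_τ.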
We denote this random variable in graph G_τ by e_τ(v_i). Since there are no edges between any two non-terminal nodes in any of the graphs G_τ, the total error of the output is the sum of these individual errors, i.e., ∑_{i=1}^{n} e_τ(v_i). Let e̅_τ(v_i) be the expected value of e_τ(v_i) over the outputs of 𝒜 given G_τ. Let p_τ^{(i)} be the marginal probability that v_i is on the s-side of the output s-t cut in G_τ. If τ_i = 0, then v_i is connected to t and so e̅_τ(v_i) = p_τ^{(i)}. If τ_i = 1, then v_i is connected to s and so e̅_τ(v_i) = 1 - p_τ^{(i)}. By the assumption that 𝒜 has a low expected error on every input, we have that for any τ ∈ {0,1}^n,

(n+2)/20 > ∑_{i: τ_i=0} p_τ^{(i)} + ∑_{i: τ_i=1} (1 - p_τ^{(i)}).

Let S_i be the set of τ ∈ {0,1}^n such that τ_i = 1, and let S̅_i be the complement of S_i, so that τ ∈ S̅_i if τ_i = 0. Note that |S_i| = |S̅_i| = 2^{n-1}. Fix some i, and for any τ ∈ {0,1}^n, let τ' be the same as τ except for the i-th entry being different, i.e., for all j ≠ i, τ_j = τ'_j, and τ_i ≠ τ'_i. Since G_τ and G_{τ'} only differ in two edges, from 𝒜 being (ϵ,δ)-differentially private, for any j we have p_τ^{(j)} ≤ e^{2ϵ}·p_{τ'}^{(j)} + δ. So for any i, j we have

∑_{τ∈S_i} p_τ^{(j)} ≤ ∑_{τ∈S̅_i} (e^{2ϵ}·p_τ^{(j)} + δ).

From <ref> we have

2^n·0.05(n+2) > ∑_{τ∈{0,1}^n} ∑_{i: τ_i=1} (1 - p_τ^{(i)}) = ∑_{i=1}^{n} ∑_{τ∈S_i} (1 - p_τ^{(i)}) ≥ ∑_{i=1}^{n} ∑_{τ∈S̅_i} (1 - [e^{2ϵ}·p_τ^{(i)} + δ]) = n·2^{n-1}(1-δ) - e^{2ϵ}·∑_{i=1}^{n} ∑_{τ∈S̅_i} p_τ^{(i)},

where the last inequality comes from <ref>. Using <ref> again, we have that ∑_{i=1}^{n} ∑_{τ∈S̅_i} p_τ^{(i)} < 2^n·0.05(n+2), so we have that

2^n·0.05(n+2) > n·2^{n-1}(1-δ) - e^{2ϵ}·(2^n·0.05(n+2)).

Dividing by 2^n, we have

0.05(n+2)(1 + e^{2ϵ}) > n(1-δ)/2.

Now, since ϵ ≤ 1, δ ≤ 0.1, and e^2 < 7.4, we get that

0.05·8.4·(n+2) > 0.45n.

Hence, we have n < 28, which contradicts n > 30.

§ DP ALGORITHM FOR MULTIWAY CUT

In this section, we show our approach for proving <ref>, restated below.

Theorem 3 (restated). For any ϵ>0, there exists an (ϵ,0)-private algorithm for multiway k-cut on weighted undirected graphs that with probability at least 1 - 1/n returns a solution with value 2·OPT + O(n log k/ϵ), where OPT is the value of the optimal non-private multiway k-cut.

We first develop an algorithm for the non-private multiway k-cut problem and then use it to prove <ref>. Our algorithm invokes a min s-t cut procedure O(log k) times.

§.§ Solving Multiway Cut in log k Rounds of Min s-t Cut

Our new multiway k-cut algorithm is presented in <ref>. <ref> first finds a cut that separates s_1,…,s_{k'} from s_{k'+1},…,s_k, where k' = ⌈k/2⌉. This separation is obtained by contracting s_1,…,s_{k'} into a single node called s, contracting s_{k'+1},…,s_k into a single node called t, and then running min s-t cut on this new graph. Afterward, each of the two partitions is processed separately by recursion, and the algorithm outputs the union of the outputs on the two partitions. We first show that in each recursion level of the algorithm, we can run the min s-t cut step (<ref>) of all the instances in that level together as one min s-t cut, so that we only run O(log k) many min s-t cuts.

<ref> is a reduction of multiway k-cut on n-node graphs to O(log k) many instances of min s-t cut on O(n)-node graphs. Moreover, if T(𝒜,n,m) is the running time of 𝒜 on n-node m-edge graphs, <ref> runs in O(log k·[T(𝒜,n,m)+m]).

Let G be the input graph; suppose it has n nodes. If we consider the recursion tree of the algorithm on G, we can perform the min s-t cut step (<ref>) of all of the subproblems on one level of the recursion tree by a single min s-t cut invocation. To see this, assume that G_1^{r'},…,G_r^{r'} are the subproblems on level r' of the recursion tree, for r = 2^{r'-1}, and for the sake of simplicity assume that k is a power of 2. Let s_i, t_i be the two terminals in Ĝ_i^{r'} (defined in <ref>) for i ∈ {1,…,r}. We can think of the collection of graphs Ĝ_1^{r'},…,Ĝ_r^{r'} as one graph G^{r'}.
Contract s_1,…,s_r into s, and contract t_1,…,t_r into t, to obtain the graph Ĝ^{r'} from G^{r'}. Then performing a min s-t cut algorithm on Ĝ^{r'} is equivalent to performing min s-t cut on each of the Ĝ_i^{r'}, as there is no edge between Ĝ_i^{r'} and Ĝ_j^{r'} for any i ≠ j. Note that since G_1^{r'},…,G_r^{r'} are disjoint and are subgraphs of G, Ĝ^{r'} has at most n nodes. Moreover, the node contraction processes in each recursion level take at most O(m) time, as each edge is scanned at most once.

To prove that <ref> outputs a multiway k-cut that is a 2-approximation of the optimal multiway k-cut, we first present a few definitions. Given a graph G with node set V, a partial multiway k-cut is a set of disjoint node subsets V_1,…,V_k such that for each i, terminal s_i ∈ V_i. Note that V_1∪…∪V_k is not necessarily the whole V. For a set of nodes S ⊆ V, let δ_G(S) be the sum of the weights of the edges with exactly one endpoint in S, i.e., the sum of the weights of the edges in E(S, V∖S). Let w(E(S)) be the sum of the weights of the edges with both endpoints in S.

<ref> is the main result of this section, and we state its proof in <ref>.

Let e(n) = cn/ϵ, for constants ϵ > 0 and c ≥ 0, be a convex function, and let G=(V,E,w) be a weighted graph. Let 𝒜 be an algorithm that with probability at least 1 - α outputs a (1, e(n))-approximate min s-t cut. Then, for any graph G with terminals s_1,…,s_k and any partial multiway k-cut V_1,…,V_k of it, <ref> outputs a multiway cut that with probability at least 1 - α·O(log k) has value at most ∑_{i=1}^{k} δ(V_i) + w(E(V̅)) + O(log(k)·e(n)), where V̅ = V∖[∪_{i=1}^{k} V_i].

A direct corollary of <ref> is a 2-approximation algorithm for the optimal multiway k-cut. Recall that the novelty of this algorithm is in using O(log k) many min s-t cut runs as opposed to O(k), as depicted in <ref>.

Given a graph G, an integer k ≥ 2, and any exact min s-t cut algorithm, <ref> returns a multiway k-cut that is a 2-approximation of the optimal multiway k-cut.

Let 𝒜 be an exact min s-t cut algorithm that is part of the input to <ref>; then we have e(n) = 0. Let C_ALG be the output of <ref>. By <ref>, C_ALG is a multiway k-cut. Let C_OPT be the optimal multiway k-cut with partitions V_1,…,V_k. Note that there are no nodes that are not partitioned, and hence w(E(V̅)) = 0. So by <ref>, we have that w(C_ALG) ≤ ∑_{i=1}^{k} δ_G(V_i) = 2w(C_OPT).

§.§ Proof of <ref>

We use induction on k to prove the lemma. Suppose that <ref> outputs C_ALG. We show that C_ALG is a multiway k-cut and that the value of C_ALG is at most w(E(V̅)) + ∑_{i=1}^{k} δ(V_i) + 2 log(k)·e(n). We first perform the approximation analysis assuming that 𝒜 provides the stated approximation deterministically; at the end of this proof, we account for the fact that the approximation guarantee holds with probability at least 1 - α.

Base case: k = 1. If k = 1, then C_ALG = ∅, so it is a multiway 1-cut and w(C_ALG) = 0 ≤ δ(V_1) + w(E(V̅)).

Inductive step: k ≥ 2. Suppose that k ≥ 2. Hence, k' ≥ 1 and k-k' ≥ 1, where k' is defined in <ref>. Let (A,B) be the s-t cut obtained in <ref>, where G_1 is the graph induced on A and G_2 is the graph induced on B. Since the only terminals in G_1 are s_1,…,s_{k'}, we have that V_1∩A,…,V_{k'}∩A is a partial multiway k'-cut on G_1. By the induction hypothesis, the cost of the multiway cut that <ref> finds on G_1 is at most w(E(V̅∩A)) + ∑_{i=1}^{k'} δ_{G_1}(V_i∩A) + 2 log(k')·e(|A|). Similarly, by considering the partial multiway (k-k')-cut V_{k'+1}∩B,…,V_k∩B on G_2, the cost of the multiway cut that <ref> finds on G_2 is at most w(E(V̅∩B)) + ∑_{i=k'+1}^{k} δ_{G_2}(V_i∩B) + 2 log(k-k')·e(|B|).
So the total cost w(C_ALG) of the multiway cut that <ref> outputs is at most

w(C_ALG) ≤ w(E(V̅∩A)) + w(E(V̅∩B)) + ∑_{i=1}^{k'} δ_{G_1}(V_i∩A) + ∑_{i=k'+1}^{k} δ_{G_2}(V_i∩B) + w(E(A,B)) + 2 log(k')·e(|A|) + 2 log(k-k')·e(|B|).

First note that C_ALG is a multiway k-cut: by induction, the output of the algorithm on G_1 is a multiway k'-cut and the output of the algorithm on G_2 is a multiway (k-k')-cut. Moreover, E(A,B) ⊆ C_ALG. So the union of these cuts and E(A,B) is a k-cut, and since each terminal is in exactly one partition, it is a multiway k-cut.

Now, we prove the value guarantees. Let U_1 = V_1∪…∪V_{k'} and U_2 = V_{k'+1}∪…∪V_k. So U = U_1∪U_2 = V_1∪…∪V_k is the set of nodes that are in at least one partition. Recall that V̅ = V∖U is the set of nodes that are not in any partition. Consider the following cut that separates {s_1,…,s_{k'}} from {s_{k'+1},…,s_k}: let A' = [U_1∩A] ∪ [U_1∩B] ∪ [V̅∩A] and B' = [U_2∩B] ∪ [U_2∩A] ∪ [V̅∩B]. Since (A,B) is a min cut that separates {s_1,…,s_{k'}} from {s_{k'+1},…,s_k} with additive error e(n), we have w(E(A,B)) ≤ w(E(A',B')) + e(n). Note that A = [U_1∩A] ∪ [U_2∩A] ∪ [V̅∩A] and B = [U_1∩B] ∪ [U_2∩B] ∪ [V̅∩B]. So turning (A,B) into (A',B') is equivalent to switching U_2∩A and U_1∩B between A and B. So we have that

w(E(U_2∩A, U_2∩B)) + w(E(U_2∩A, V̅∩B)) + w(E(U_1∩B, U_1∩A)) + w(E(U_1∩B, V̅∩A)) ≤ w(E(U_2∩A, U_1∩A)) + w(E(U_2∩A, V̅∩A)) + w(E(U_1∩B, U_2∩B)) + w(E(U_1∩B, V̅∩B)) + e(n).

<ref> is illustrated in <ref>. Using <ref>, we obtain that

w(E(A,B)) = w(E(U_2∩A, U_2∩B)) + w(E(U_2∩A, V̅∩B)) + w(E(U_1∩B, U_1∩A)) + w(E(U_1∩B, V̅∩A)) + w(E(U_2∩A, U_1∩B)) + w(E([U_1∩A]∪[V̅∩A], [U_2∩B]∪[V̅∩B]))
≤ w(E(U_2∩A, U_1∩A)) + w(E(U_2∩A, V̅∩A)) + w(E(U_1∩B, U_2∩B)) + w(E(U_1∩B, V̅∩B)) + w(E(U_2∩A, U_1∩B)) + w(E([U_1∩A]∪[V̅∩A], [U_2∩B]∪[V̅∩B])) + e(n).

So, we conclude that

w(E(A,B)) ≤ w(E(U_1∩B, [U_2∩B]∪[V̅∩B]∪[U_2∩A])) + w(E(U_1∩A, [U_2∩B]∪[V̅∩B])) + w(E(U_2∩A, [U_1∩A]∪[V̅∩A])) + w(E(U_2∩B, V̅∩A)) + w(E(V̅∩A, V̅∩B)) + e(n).
§.§ Differentially Private Multiway CutOur Differentially Private Multiway cut algorithm is essentially <ref> where we use a differentially private min s-t cut algorithm (such as <ref>) in <ref> to compute a differentially private multiway k-cut. This concludes the approach for <ref>, which is a direct corollary of <ref> below.Let ϵ>0 bea privacy parameter, G an n-node graph, andan ϵ-DP algorithm that with probability at least 1 - 1/n^2 computes a min s-t cut with additive error O(e(n)/ϵ) on any n-node graph.Then, <ref> is a (ϵlogk)-DP algorithm that with probability at least 1 - O(log k) · n^-2 finds a multiway k-cut with multiplicative error 2 and additive error O(logk· e(n)/ϵ). The error guarantees directly result from <ref> and <ref>. Hence, here we prove the privacy guarantee. Consider the recursion tree of <ref>, which has depth O(log k). As proved by <ref>, each level of the recursion tree consists of running algorithm 𝒜 on disjoint graphs, and so each level is ϵ-DP. By basic composition (see <ref>), since <ref> is a composition of O(log k) many ϵ-DP mechanisms, it is (ϵlog k)-DP. § EMPIRICAL EVALUATIONWe perform an empirical evaluation on the additive error of <ref> and validate our theoretical bounds.Set-up. As our base graph, we use email-Eu-core network <cit.>which is an undirected unweighted 1,005-node 25,571-edge graph. This graph represents email exchanges, where two nodes u and v are adjacent if the person representing u has sent at least one email to the person representing v or vice-versa. However, this graph does not account for multiple emails sent between two individuals. To make the graph more realistic, we add random weights to the edges from the exponential distribution with a mean of 40 (rounded to an integer) to denote the number of emails sent between two nodes. We take 10 percent of the nodes and contract them into a terminal node s, and take another 10 percent of the nodes and contract them into another terminal node t. We make different instances of the problem by choosing the nodes that contract into s or t uniformly at random. Note that node contraction into terminals is a standard practice in real-world min s-t cut instances, as there are often nodes with predetermined partitions and to make a s-t cut (or multiway k-cut) instance one needs to contract these nodes into one terminal node for each partition. We take the floor of the values X_s,u and X_t,u to obtain integral weights. Baseline. Let C_s be the cut where one partition consists of only s, and let C_t be the cut where one partition consists of only t.We compare the cut value output by <ref> against min(w(C_s),w(C_t)). Particularly, if C_0 is the min s-t cut of an instance and <ref> outputs C_alg, then we compare relative errors w(C_alg)-w(C_0)/w(C_0) and min(w(C_s),w(C_t))-w(C_0)/w(C_0)[As another standard heuristic one could consider a random s-t cut. We do not include this baseline here as it often has a very high error and the terminal cut does much better than a random cut.]. We refer to the former value as the private cut relative error, and to the later value as the terminal cut relative error.Results. We first set ϵ=0.5.We then consider50 graph instances, and for each of them, we evaluate the average private error over 50 rounds of randomness used in <ref>, see <ref>(left) for the results. For almost all instances, the private cut error (including the error bars) is less than the terminal cut error. Moreover, the private cut error is quite stable, and far below the theoretical bound of n/ϵ. 
We next change ϵ and repeat the set-up above for each ϵ∈{1/15,1/14,…,1/2,1}. For each ϵ in this set we produce 50 graph instances, measure the average private error over 100 rounds of randomness for each, and then average the terminal cut error and mean private cut error for each graph instance. In this way, we obtain an average value for terminal cut and private cut errors for each of the ϵ values; we refer to <ref>(right) for the results. The linear relationship between the private cut error and 1/ϵ can be observed in <ref>(right). § ACKNOWLEDGEMENTSWe are grateful to anonymous reviewers for the valuable feedback. S. Mitrović was supported by the Google Research Scholar Program. alpha
http://arxiv.org/abs/2312.16370v1
{ "authors": [ "Mina Dalirrooyfard", "Slobodan Mitrović", "Yuriy Nevmyvaka" ], "categories": [ "cs.DS" ], "primary_category": "cs.DS", "published": "20231227011308", "title": "Nearly Tight Bounds For Differentially Private Min $s$-$t$ and Multiway Cut" }
Modeling and Analysis of GEO Satellite Networks

Dong-Hyun Jung, Hongjae Nam, Junil Choi, and David J. Love

D.-H. Jung is with the Satellite Communication Research Division, Electronics and Telecommunications Research Institute, Daejeon, 34129, South Korea (e-mail: dhjung@etri.re.kr). H. Nam and D. J. Love are with the School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47907 USA (e-mail: nam86@purdue.edu; djlove@purdue.edu). J. Choi is with the School of Electrical Engineering, KAIST, Daejeon, 34141, South Korea (e-mail: junil@kaist.ac.kr).

January 14, 2024

The extensive coverage offered by satellites makes them effective in enhancing service continuity for users on dynamic airborne and maritime platforms, such as airplanes and ships. In particular, geosynchronous Earth orbit (GEO) satellites ensure stable connectivity for terrestrial users due to their stationary characteristics when observed from Earth. This paper introduces a novel approach to model and analyze GEO satellite networks using stochastic geometry. We model the distribution of GEO satellites in the geostationary orbit according to a binomial point process (BPP) and examine satellite visibility depending on the terminal's latitude. Then, we identify potential distribution cases for GEO satellites and derive case probabilities based on the properties of the BPP. We also obtain the distance distributions between the terminal and GEO satellites and derive the coverage probability of the network. We further approximate the derived expressions using the Poisson limit theorem. Monte Carlo simulations are performed to validate the analytical findings, demonstrating a strong alignment between the analyses and simulations. The simplified analytical results can be used to estimate the coverage performance of GEO satellite networks by effectively modeling the positions of GEO satellites.

Index terms: Satellite communications, coverage analysis, stochastic geometry, GEO satellite networks.

§ INTRODUCTION

Satellite communications have recently been utilized to offer worldwide internet services by taking advantage of their extensive coverage. In this regard, the 3rd Generation Partnership Project (3GPP) has been working toward the integration of terrestrial networks (TNs) and non-terrestrial networks (NTNs) since Release 15 [<ref>], [<ref>]. The utilization of non-terrestrial entities, such as geosynchronous Earth orbit (GEO) satellites, low Earth orbit (LEO) satellites, and high-altitude platforms, presents an opportunity to extend communication services beyond terrestrial boundaries. With this advancement, aerial users like drones, airplanes, and vehicles involved in urban air mobility could benefit from enhanced connectivity. To facilitate this integration, 3GPP has been adding features to the standard to support NTNs alongside existing TNs [<ref>].
The differences in altitude and stationarity between GEO and LEO satellites result in distinct characteristics in terms of communication services and orbital configurations. In general, LEO satellites could offer greater throughput due to their lower path loss compared to that of GEO satellites, whereas the high-speed movement of LEO satellites leads to frequent inter-satellite handovers. On the contrary, GEO satellites, being viewed as stationary, could maintain stable connections with ground users at the cost of relatively lower throughput. In the design of LEO constellations, Walker Delta constellations with various inclinations are utilized to achieve uniform coverage near the equator, while Walker Star constellations are employed to provide services to the polar regions [<ref>]. In contrast to LEO satellites, GEO satellites are positioned exclusively in the geostationary orbit on the equatorial plane to maintain a stationary view from Earth.

§.§ Related Works

The system-level performance of LEO satellite networks has recently been evaluated using stochastic geometry, where the positions of LEO satellites are effectively modeled by spatial point processes. Binomial point processes (BPPs) have been widely used to model the distributions of LEO satellites because the total number of LEO satellites is deterministic [<ref>]-[<ref>]. The initial work [<ref>] provided BPP-based coverage and rate analyses and showed that the BPP satisfactorily models deterministic Walker constellations. The distance distribution between the nearest points on different concentric spheres was obtained in [<ref>]. With this distribution, the coverage probability of LEO satellite communication networks was derived in [<ref>], specifically examining the role of gateways as relays between the satellites and users. The ergodic capacity and coverage probability of cluster-based LEO satellite networks were evaluated in [<ref>] under two different types of satellite clusters. However, the BPP-based analytical results include highly complex terms, which makes them less tractable for evaluating network performance.

Instead of BPPs, Poisson point processes (PPPs) can be used to approximately model LEO satellite constellations via the Poisson limit theorem when a large number of satellites exists [<ref>]-[<ref>]. Both BPP- and PPP-based performance analyses were carried out in [<ref>] under shadowed-Rician fading in terms of the outage probability and system throughput. The downlink coverage probability of LEO satellite networks was derived in [<ref>] considering a recent satellite-to-ground path loss model and an elevation angle-dependent line-of-sight (LOS) probability. The altitude of satellite constellations was optimized in [<ref>] to maximize the downlink coverage probability. In [<ref>], a non-homogeneous PPP was used to model the non-uniform distribution of LEO satellites across different latitudes. More tractable results for the coverage probability were provided in [<ref>], where the density of LEO satellites was also optimized.

The link-level performance of GEO satellite systems has also been analyzed from various perspectives [<ref>]-[<ref>]. The earlier studies [<ref>], [<ref>] introduced a flexible resource allocation design for GEO satellite systems to maximize spectrum utilization. The primary focus was on minimizing the number of frequency carriers and the transmit power required to meet the demands of multi-beam scenarios.
The coverage of GEO satellite systems was enhanced in [<ref>] using weighted cooperative spectrum sensing among multiple GEO satellites via inter-satellite links. In [<ref>], reflecting intelligent surfaces were integrated into a downlink GEO scenario where the joint power allocation and phase shift design problem was efficiently solved. The interference analyses between a GEO satellite and a LEO satellite were conducted in [<ref>] to assess the effectiveness of an exclusive angle strategy in mitigating in-line interference. In [<ref>], the interference analysis between a single GEO satellite and multiple LEO satellites was introduced based on the probability density functions (PDFs) of the LEO satellites' positions. Although the above works [<ref>]-[<ref>] have successfully investigated diverse aspects of satellite communication systems, they have primarily focused on satellites in non-geostationary orbits or on a single GEO satellite. §.§ Motivation and Contributions GEO satellites appear stationary when observed from Earth because they move in the same direction as the Earth's rotation, and their orbits are positioned on the equatorial plane, i.e., an inclination of zero degrees.[ While GEO satellites could have inclinations greater than or equal to zero degrees, our paper focuses specifically on geostationary orbit satellites for communication purposes, which are characterized by inclinations close to zero degrees. ] Due to this inherent property of the geostationary orbit, modeling the positions of multiple GEO satellites differs from the techniques used for multiple LEO satellites and, to the best of our knowledge, has not been addressed before. To fill this gap, we investigate a fundamental framework for GEO satellite networks by leveraging the orbital characteristics of the GEO satellites. The key contributions of this paper are summarized as follows. * Modeling and analysis of GEO satellite networks: The orbital characteristics of the GEO satellites lead to a dissimilar dimensional geometry: the terminals are on the surface of Earth (a 3D distribution), while the GEO satellites are located on the equatorial plane (a 2D distribution). This creates critical differences from the works [<ref>]-[<ref>] that considered LEO constellations. Hence, terminals at different latitudes experience unequal satellite visibility, resulting in performance gaps. Considering these characteristics, we provide a novel approach to model and analyze GEO satellite networks based on stochastic geometry. Specifically, we distribute GEO satellites in the geostationary orbit according to a BPP and then analyze satellite visibility depending on the terminal's latitude. We identify the possible distribution cases for the GEO satellites and derive the probabilities of these cases based on the properties of the BPP. We also obtain the distance distributions between the terminal and GEO satellites and then derive the coverage probability using these distributions. * Poisson limit theorem-based approximation: We approximate the satellite distribution as a PPP using the Poisson limit theorem. With this approach, the derived satellite-visible probability, distance distributions, and coverage probability are further simplified. Using the two-line element[A two-line element set is a data format encoding the list of orbital elements of a satellite at a given epoch time, which is publicly provided by the North American Aerospace Defense Command.
Based on the two-line element set, the position and velocity of the satellite can be predicted by using a simplified general perturbation model, e.g., SGP4 [<ref>].] dataset of the currently active GEO satellites, we compare the average number of visible satellites between the actual distribution and the BPP model. Additionally, we explore the performance gap between the BPP- and PPP-based satellite distributions, thereby identifying the conditions under which the Poisson limit theorem applies to the distribution of GEO satellites. The rest of this paper is organized as follows. In Section <ref>, the network model for a GEO satellite communication network is described. In Section <ref>, the orbit visibility and distance distributions are analyzed. In Section <ref>, the analytical expressions of the coverage probability are derived. In Section <ref>, simulation results are provided to validate our analysis, and conclusions are drawn in Section <ref>. Notation: ℙ[·] indicates the probability measure, and 𝔼[·] denotes the expectation operator. The complement of a set 𝒳 is 𝒳^c. C(n,k) denotes the binomial coefficient. Bin(n,p) denotes the binomial distribution with the number of trials n and the success probability p. The cumulative distribution function (CDF) and the PDF of a random variable X are F_X(x) and f_X(x), respectively. Γ(·) is the Gamma function, and the Pochhammer symbol is defined as (x)_n = Γ(x+n)/Γ(x). The Euclidean norm of a vector 𝐱 is ||𝐱||. The Lebesgue measure of a region 𝒳 is |𝒳|, which represents the volume of 𝒳. The unit step function is u(·), and the Dirac delta function is δ(·). § NETWORK MODEL We consider a downlink GEO satellite network where N GEO satellites at altitude a serve ground terminals. It is notable that for the geostationary orbit, unlike LEOs, the altitude is fixed at a = 35,786 kilometers. We assume that the positions of the GEO satellites 𝐱_n, n∈{0,1,⋯,N-1}, are randomly determined according to a homogeneous BPP Φ = {𝐱_0, 𝐱_1, ⋯, 𝐱_N-1} in the circular geostationary orbit 𝒜 as shown in Fig. <ref>. This orbit can be expressed in spherical coordinates as 𝒜 = {ρ = R_E + a, ψ = π/2, 0 ≤ φ ≤ 2π}, where ρ, ψ, and φ are the radial distance, polar angle, and azimuthal angle, respectively, and R_E is the Earth's radius. We focus on a typical terminal located at arbitrary latitude ϕ and longitude θ, whose position is given by 𝐭 = [t_x; t_y; t_z] = R_E [cosϕ cosθ; cosϕ sinθ; sinϕ]. Because only the satellites above the horizontal plane can be observed from Earth, just a part of the geostationary orbit is visible to the terminal; we call this part the visible arc and denote it by 𝒜_V. We assume that the terminal is served by the nearest satellite in 𝒜_V, and the other satellites in 𝒜_V become interfering nodes.
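As an illustrative sketch of this construction, the following Python snippet samples a BPP of N satellites uniformly on the geostationary circle and counts those above a terminal's local horizon, i.e., the satellites 𝐱 with (𝐱 − 𝐭)·𝐭 > 0. The constants and function names are ours; the snippet is a numerical illustration, not part of the analysis.

import numpy as np

R_E = 6378.0     # Earth radius [km]
a = 35786.0      # GEO altitude [km]

def sample_bpp(n_sat, rng):
    # BPP: n_sat points uniformly distributed on the geostationary circle
    az = rng.uniform(0.0, 2.0 * np.pi, n_sat)
    r = R_E + a
    return np.stack([r * np.cos(az), r * np.sin(az), np.zeros(n_sat)], axis=1)

def terminal(lat_deg, lon_deg=0.0):
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    return R_E * np.array([np.cos(lat) * np.cos(lon),
                           np.cos(lat) * np.sin(lon),
                           np.sin(lat)])

def num_visible(sats, t):
    # a satellite is above the horizon iff (x - t) . t > 0
    return int(np.sum((sats - t) @ t > 0.0))

rng = np.random.default_rng(0)
t = terminal(37.0)
counts = [num_visible(sample_bpp(100, rng), t) for _ in range(10000)]
print("empirical E[N(A_V)]:", np.mean(counts))

The empirical mean can be checked against the analytical average number of visible satellites derived in the next section.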
Let N_Φ(𝒳) denote the number of satellites distributed in region 𝒳 according to the BPP Φ. Then N_Φ(𝒜_V) is the number of visible satellites positioned in the visible arc. For notational simplicity, we let the index n=0 denote the serving satellite and the indices n=1,⋯,N_Φ(𝒜_V)-1 represent the interfering satellites, while the remaining indices are for the invisible satellites, which are irrelevant to the typical terminal. The satellites adopt directional beamforming to compensate for large path losses at the receivers. For analytical tractability, we assume that the boresight of the serving satellite's beam is directed toward the target terminal, while the beams of other satellites are fairly misaligned [<ref>], [<ref>]. This assumption is well motivated in GEO scenarios because GEO satellites appear stationary from Earth, meaning that their beams remain fixed within a specific ground area. With this property, the beams of GEO satellites are carefully designed to avoid overlapping the beams of the existing GEO satellites, thereby mitigating inter-satellite interference. Hence, the effective antenna gain G_n, n∈{0,1,⋯,N-1}, is given by G_n = G_t G_r for n = 0, and G_n = G̅ otherwise, where G_t is the transmit antenna gain of the satellites, G_r is the receive antenna gain of the terminal, and G̅ is the effective antenna gain of an interfering link. The path loss between the terminal and the satellite at 𝐱_n is given by ℓ(𝐱_n) = (c/(4π f_c))^2 R_n^-α, where R_n = ||𝐱_n - 𝐭|| is the distance, c is the speed of light, f_c is the carrier frequency, and α is the path-loss exponent. We assume the satellite channels experience Nakagami-m fading, which effectively captures the LOS property of satellite channels [<ref>], [<ref>]. The Nakagami-m fading model can reflect various channel circumstances by varying the Nakagami parameter m. For example, the Nakagami-m distribution becomes the Rayleigh distribution when m=1, and the Rician-K distribution when m=(K+1)^2/(2K+1). The CDF of the channel gain of the Nakagami-m fading model is given by F_h_n(x) = 1 - e^-mx ∑_k=0^m-1 (mx)^k/k!. Since the satellites are distributed according to the BPP, there can be three distribution cases for the visible satellites as follows. * Case 1: N_Φ(𝒜_V) = 0, i.e., no visible satellite exists. There are no serving and interfering satellites. * Case 2: N_Φ(𝒜_V) = 1, i.e., one visible satellite exists. The only visible satellite functions as the serving satellite without any interfering satellite. * Case 3: N_Φ(𝒜_V) > 1, i.e., more than one visible satellite exists. Both a serving satellite and at least one interfering satellite exist. Considering these cases, the received signal-to-interference-plus-noise ratio (SINR) at the typical terminal is given by SINR = 0 if N_Φ(𝒜_V) = 0, SINR = P_t G_0 h_0 ℓ(𝐱_0)/(N_0 W) if N_Φ(𝒜_V) = 1, and SINR = P_t G_0 h_0 ℓ(𝐱_0)/(I + N_0 W) if N_Φ(𝒜_V) > 1, where P_t is the transmit power, assuming all satellites transmit with the same power, and I = ∑_n=1^N_Φ(𝒜_V)-1 P_t G_n h_n ℓ(𝐱_n) is the aggregated inter-satellite interference. § MATHEMATICAL PRELIMINARIES §.§ Satellite Visibility Analyses It is worth noting that the length of the visible arc 𝒜_V(ϕ) in the geostationary orbit highly depends on the terminal's latitude, as shown in Fig. <ref>. For example, when the terminal is placed on the equator, i.e., ϕ=0, the visible arc, depicted as the red curve, is the longest. As the latitude increases, the visible arc shrinks and finally vanishes at the latitude of ϕ_max = cos^-1(R_E/(R_E+a)) ≈ 81.3 degrees. Based on this observation, the length of the visible arc is obtained in the following lemma. The length of the visible arc, i.e., |𝒜_V(ϕ)|, is given by |𝒜_V(ϕ)| = 2(R_E+a) sin^-1(√(1 - sec^2ϕ/(1+a/R_E)^2)) for |ϕ| < ϕ_max, and |𝒜_V(ϕ)| = 0 otherwise. As shown in Fig. <ref>, if ϕ=0, the geostationary orbit from the terminal's horizontal view becomes the circle with the radius of R_E + a.
However, if ϕ increases, the orbit is seen as an ellipse whose semi-major and semi-minor axes are R_E + a and (R_E + a)cosϕ, respectively. Hence, we can obtain the equation of the ellipse as x̅^2/(R_E+a)^2 + y̅^2/((R_E+a)^2 cos^2ϕ) = 1, where x̅ and y̅ are the projected axes observed from the horizontal view. By substituting y̅ = R_E into (<ref>), we can obtain the length between E and M on the xy-plane as EM = √((R_E+a)^2 - R_E^2 sec^2ϕ). Since EO = R_E + a, we have ∠EOM = sin^-1(EM/EO) = sin^-1(√(1 - sec^2ϕ/(1 + a/R_E)^2)). Using this angle, we finally obtain the length of the visible arc as |𝒜_V(ϕ)| = 2(R_E+a)∠EOM for ϕ > 0. For ϕ < 0, the length can be readily obtained because the geometry is symmetric about the xy-plane, which completes the proof. The length of the visible arc |𝒜_V(ϕ)| is monotonically decreasing in |ϕ| for |ϕ| ≤ ϕ_max ≈ 81.3 degrees. This is because as |ϕ| increases from 0 to ϕ_max, secϕ increases from 1 to 1 + a/R_E, resulting in a decrease in |𝒜_V(ϕ)|. Hence, the maximum length of the visible arc is achieved when the terminal is located at the equator, i.e., ϕ=0, and has a value of 2(R_E+a)cos^-1(R_E/(R_E+a)) ≈ 119,657 km. Based on the properties of the BPP, the number of satellites in the visible arc follows the binomial distribution with the success probability [<ref>] p_v = |𝒜_V(ϕ)|/|𝒜| = (1/π) sin^-1(√(1 - sec^2ϕ/(1 + a/R_E)^2)) for |ϕ| < ϕ_max, and p_v = 0 otherwise. Thus, the average number of visible satellites is given by 𝔼[N_Φ(𝒜_V)] = N p_v. Fig. <ref> shows the length of the visible arc depending on the latitude ϕ. As expected in Remark <ref>, the length has its maximum of 119,657 km at ϕ=0 degrees, decreases with |ϕ|, and vanishes at |ϕ| ≈ 81.3 degrees. This tendency implies that terminals near the equator are more likely to see many satellites, resulting in better satellite visibility, while those in the polar regions, especially |ϕ| > 81.3 degrees, are rarely in the coverage of GEO satellites. The probabilities for the visible satellite distribution of Cases 1, 2, and 3 are given by ℙ[N_Φ(𝒜_V)=0] = (1-p_v)^N, ℙ[N_Φ(𝒜_V)=1] = N p_v (1-p_v)^N-1, and ℙ[N_Φ(𝒜_V)>1] = 1 - (1-p_v)^N - N p_v (1-p_v)^N-1, respectively. For more details, see Appendix <ref>. We remark that the probability of Case 1 is called the satellite-invisible probability, which is the probability that all satellites are invisible, while the sum of the probabilities of Cases 2 and 3 is the satellite-visible probability, which is the probability that at least one satellite is visible. For the northern hemisphere, i.e., ϕ > 0, the satellite-invisible probability increases with the latitude ϕ and becomes one at ϕ = ϕ_max. This can be proved by taking the derivative of (<ref>) with respect to ϕ as d(1-p_v)^N/dϕ = N(1-p_v)^N-1 tanϕ/(π√((1+a/R_E)^2/sec^2ϕ - 1)) > 0. When ϕ > ϕ_max, p_v = 0, which results in the satellite-invisible probability being equal to one. When N → ∞ for |ϕ| < ϕ_max, the terminal can see at least one serving satellite and one interfering satellite because ℙ[N_Φ(𝒜_V)=0] → 0, ℙ[N_Φ(𝒜_V)=1] → 0, and ℙ[N_Φ(𝒜_V)>1] → 1, which is intuitively true. Fig. <ref> shows the case probabilities for various numbers of satellites N = {2, 10, 100}. This figure verifies Remark <ref> in that the satellite-invisible probability corresponding to Case 1 increases with ϕ ∈ [0, ϕ_max] and then reaches one at ϕ = ϕ_max. As expected in Remark <ref>, the satellite-visible probability corresponding to Cases 2 and 3 increases with N. Even with a hundred GEO satellites, ℙ[N_Φ(𝒜_V)>1] is almost one for any latitude less than ϕ_max. Thus, Case 3 becomes the most probable case for terminals located at |ϕ| < ϕ_max when a fairly large number of GEO satellites are in orbit. In contrast, only Case 1 happens for the terminals whose latitude is above ϕ_max.
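The closed-form quantities above are straightforward to evaluate numerically. The sketch below implements the visible-arc length, the success probability p_v, and the case probabilities under the same notation; its outputs can be checked against the Monte Carlo counts from the earlier snippet.

import numpy as np
from scipy.stats import binom

R_E, a = 6378.0, 35786.0
PHI_MAX = np.degrees(np.arccos(R_E / (R_E + a)))    # ~ 81.3 degrees

def p_v(lat_deg):
    # success probability that a uniformly placed GEO satellite is visible
    if abs(lat_deg) >= PHI_MAX:
        return 0.0
    sec = 1.0 / np.cos(np.radians(lat_deg))
    return np.arcsin(np.sqrt(1.0 - sec**2 / (1.0 + a / R_E)**2)) / np.pi

def visible_arc_length(lat_deg):
    # length of the visible arc; equivalently |A_V| = 2*pi*(R_E + a) * p_v
    return 2.0 * np.pi * (R_E + a) * p_v(lat_deg)

def case_probs(n_sat, lat_deg):
    # probabilities of Case 1, Case 2, and Case 3
    p = p_v(lat_deg)
    c1 = binom.pmf(0, n_sat, p)       # (1 - p)^N
    c2 = binom.pmf(1, n_sat, p)       # N p (1 - p)^(N-1)
    return c1, c2, 1.0 - c1 - c2

print(f"|A_V(0)| = {visible_arc_length(0.0):,.0f} km")   # ~ 119,657 km
print("E[N(A_V)] at 37 deg, N = 100:", 100 * p_v(37.0))
print("case probabilities:", case_probs(100, 37.0))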
§.§ Distance Distributions Next, we characterize the distributions of the following three distances from the typical terminal located at the latitude of ϕ. * R ∈ [d_min, d_max]: the distance to the nearest satellite * R_0 ∈ [d_min, d_v]: the distance to the serving satellite * R_n ∈ [r_0, d_v]: the distances to the interfering satellites given R_0 = r_0, n ∈ {1,⋯,N_Φ(𝒜_V)-1}. The distances to the nearest and farthest points of the geostationary orbit are denoted by d_min and d_max and can be obtained by applying the Pythagorean theorem to the triangles S_N T T^' and S_F T T^' shown in Fig. <ref>, i.e., S_N T^2 = S_N T^'^2 + T T^'^2 = (R_E + a - √(t_x^2 + t_y^2))^2 + t_z^2 and S_F T^2 = S_F T^'^2 + T T^'^2 = (R_E + a + √(t_x^2 + t_y^2))^2 + t_z^2. After some manipulation using the facts that t_x^2 + t_y^2 = R_E^2 cos^2ϕ and t_z^2 = R_E^2 sin^2ϕ, we can obtain d_min and d_max as d_min = S_N T = √((R_E + a - R_E cosϕ)^2 + R_E^2 sin^2ϕ) and d_max = S_F T = √((R_E + a + R_E cosϕ)^2 + R_E^2 sin^2ϕ). Moreover, the maximum distance to a visible satellite, denoted by d_v, can be obtained as d_v = TE = (TM^2 + EM^2)^1/2 according to the geometry in Fig. <ref>. From EM given in (<ref>) with the fact that TM = R_E tanϕ, we have d_v = √(R_E^2 tan^2ϕ + (R_E+a)^2 - R_E^2 sec^2ϕ) = √(a^2 + 2aR_E). The largest possible distance from the terminal to any visible satellite is therefore always d_v = √(a^2 + 2aR_E) ≈ 41,679 km regardless of ϕ. This result is intuitive because the endpoints of the visible arc can be seen as the intersection between the horizontal plane and the sphere with the radius of R_E + a. Before deriving the distance distributions, we let 𝒜(r), r ∈ [d_min, d_max], denote the arc of the geostationary orbit whose maximum distance to the terminal is r, which is shown in Fig. <ref> with r = r_0. Using the law of cosines, the length of the arc 𝒜(r) is given by |𝒜(r)| = 2(R_E+a)∠EOT^' = 2(R_E+a) cos^-1((OE^2 + OT^'^2 - ET^'^2)/(2 OE · OT^')) = 2π(R_E+a) · Ψ(r,ϕ), where Ψ(r,ϕ) ≜ (1/π) cos^-1(((R_E+a)^2 + R_E^2 - r^2)/(2(R_E+a)R_E cosϕ)), OE = R_E + a, OT^' = R_E cosϕ, and ET^' = √(r^2 - t_z^2) = √(r^2 - R_E^2 sin^2ϕ). Please note that we define the new function Ψ(r,ϕ) for the simplicity of notation, which will be used to efficiently express the distance distributions in the following lemmas. The CDF and PDF of R are respectively given by F_R(r) = 0 for r < d_min, F_R(r) = 1 - (1 - Ψ(r,ϕ))^N for d_min ≤ r < d_max, and F_R(r) = 1 otherwise, and f_R(r) = 2Nr(1 - Ψ(r,ϕ))^N-1/(π√(v_1 - (v_2 - r^2)^2)) for d_min ≤ r < d_max, and f_R(r) = 0 otherwise, where v_1 = 4(R_E+a)^2 R_E^2 cos^2ϕ and v_2 = (R_E+a)^2 + R_E^2. For more details, see Appendix <ref>. The CDF and PDF of R_0 are respectively given by F_R_0(r) = 0 for r < d_min, F_R_0(r) = F_R(r)/F_R(d_v) for d_min ≤ r < d_v, and F_R_0(r) = 1 otherwise, and f_R_0(r) = f_R(r)/F_R(d_v) for d_min ≤ r < d_v, and f_R_0(r) = 0 otherwise. For more details, see Appendix <ref>. When N → ∞, the CDF and PDF of the distance R_0 asymptotically become F_R_0(r) → u(r - d_min) and f_R_0(r) → δ(r - d_min), respectively. This means that the distance to the serving satellite is deterministic and has a value of d_min, i.e., the minimum possible distance to satellites given the latitude ϕ. Given R_0 = r_0, the CDF and PDF of R_n, n ∈ {1,2,⋯,N_Φ(𝒜_V)-1}, are, respectively, given by F_R_n|r_0(r) = 0 for r < r_0, F_R_n|r_0(r) = (Ψ(r,ϕ) - Ψ(r_0,ϕ))/(Ψ(d_v,ϕ) - Ψ(r_0,ϕ)) for r_0 ≤ r < d_v, and F_R_n|r_0(r) = 1 otherwise, and f_R_n|r_0(r) = (2r/(π√(v_1 - (v_2 - r^2)^2)))/(Ψ(d_v,ϕ) - Ψ(r_0,ϕ)) for r_0 ≤ r < d_v, and f_R_n|r_0(r) = 0 otherwise. For more details, see Appendix <ref>. Fig. <ref> verifies the analytical expressions given in Lemmas <ref>, <ref>, and <ref> for ϕ = 30 degrees. It is shown that our analyses are in good agreement with the simulation results. The case probabilities and the distance distributions obtained in this section are the key instruments, which will be used for deriving the stochastic geometry-based performance in the next section.
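The distance bounds and distributions derived in this section can be implemented directly. The following sketch (in our own notation, with clipping added for numerical safety at the interval endpoints) reproduces Ψ(r,ϕ) and the CDFs of R and R_0, making the Monte Carlo verification easy to repeat.

import numpy as np

R_E, a = 6378.0, 35786.0

def dist_bounds(lat_deg):
    # d_min, d_max, d_v for a terminal at the given latitude [km]
    lat = np.radians(lat_deg)
    d_min = np.hypot(R_E + a - R_E * np.cos(lat), R_E * np.sin(lat))
    d_max = np.hypot(R_E + a + R_E * np.cos(lat), R_E * np.sin(lat))
    d_v = np.sqrt(a**2 + 2.0 * a * R_E)      # latitude-independent
    return d_min, d_max, d_v

def Psi(r, lat_deg):
    # normalized arc measure Psi(r, phi) = |A(r)| / |A|
    num = (R_E + a)**2 + R_E**2 - np.asarray(r, float)**2
    den = 2.0 * (R_E + a) * R_E * np.cos(np.radians(lat_deg))
    return np.arccos(np.clip(num / den, -1.0, 1.0)) / np.pi

def cdf_R(r, n_sat, lat_deg):
    # CDF of the distance to the nearest satellite
    d_min, d_max, _ = dist_bounds(lat_deg)
    r = np.asarray(r, float)
    mid = 1.0 - (1.0 - Psi(np.clip(r, d_min, d_max), lat_deg))**n_sat
    return np.where(r < d_min, 0.0, np.where(r < d_max, mid, 1.0))

def cdf_R0(r, n_sat, lat_deg):
    # CDF of the serving-satellite distance, conditioned on visibility
    d_v = dist_bounds(lat_deg)[2]
    return np.minimum(cdf_R(r, n_sat, lat_deg) / cdf_R(d_v, n_sat, lat_deg), 1.0)

print(cdf_R0(40000.0, 20, 30.0))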
§ COVERAGE ANALYSIS In this section, we first derive the coverage probability of the GEO satellite network with the BPP-based satellite distribution. Then, we further simplify the expression using the Poisson limit theorem, which states that the binomial distribution can be approximated by the Poisson distribution as N → ∞ [<ref>]. §.§ Binomial Distribution-Based Analysis Before analyzing the coverage probability, we derive the Laplace transform of the aggregated interference power using the following two lemmas. Given that the serving satellite is at a distance of r_0 from the terminal, the number of possible interfering satellites is N_Φ(𝒜 ∩ 𝒜(r_0)^c) = N - 1 (all satellites except the serving one), and the region where interfering satellites can be located is 𝒜_V ∩ 𝒜(r_0)^c. The number of interfering satellites in this region is a binomial random variable with the success probability p_I, i.e., N_Φ(𝒜_V ∩ 𝒜(r_0)^c) ∼ Bin(N-1, p_I), where p_I = |𝒜_V ∩ 𝒜(r_0)^c|/|𝒜 ∩ 𝒜(r_0)^c| = (Ψ(d_v,ϕ) - Ψ(r_0,ϕ))/(1 - Ψ(r_0,ϕ)). This result comes directly from the definition of the BPP. For more details, see [<ref>]. Given R_0 = r_0, the Laplace transform of the aggregated interference power I = ∑_n=1^N_Φ(𝒜_V ∩ 𝒜(r_0)^c) P_t G̅ h_n ℓ(𝐱_n) is given by ℒ_I|r_0(s) = ∑_n_I=0^N-1 C(N-1, n_I) p_I^n_I (1 - p_I)^N-1-n_I × ∏_n=1^n_I ∫_r_0^d_v (mω r_n^α/(s + mω r_n^α))^m f_R_n|r_0(r_n) dr_n, where ω = 16π^2 f_c^2/(P_t G̅ c^2). For more details, see Appendix <ref>. The coverage probability is the probability that the SINR at the typical terminal defined in (<ref>) is greater than or equal to a threshold τ, i.e., ℙ[SINR ≥ τ]. Using the result in Lemma <ref>, the coverage probability is given in the following theorem. The coverage probability for GEO satellite networks is approximated as P_cov(τ;m) ≈ (1 - (1-p_v)^N) ∑_i=1^m C(m,i) (-1)^i+1 × ∫_d_min^d_v e^-ν i ω_0 N_0 W τ r^α ℒ_I|r_0=r(ν i ω_0 τ r^α) f_R_0(r) dr, where ν = m(m!)^-1/m and ω_0 = 16π^2 f_c^2/(P_t G_t G_r c^2). For more details, see Appendix <ref>. This approximated coverage probability in Theorem <ref> becomes exact when m=1, which is given in the following corollary. Under Rayleigh fading, i.e., m=1, the coverage probability for GEO satellite networks in Theorem <ref> becomes exact and is given by P_cov(τ;1) = (1 - (1-p_v)^N) × ∫_d_min^d_v e^-ω_0 N_0 W τ r^α ℒ_I|r_0=r(ω_0 τ r^α) f_R_0(r) dr. This result is obtained by directly setting m=1 in (<ref>). The expression for the coverage probability in (<ref>) includes integrals that appear to be analytically unsolvable, but they can be evaluated numerically. To further simplify the expression, we conduct the Poisson limit theorem-based approximation in the following section.
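As a sanity check, the Rayleigh-fading expression in the corollary can be evaluated with standard quadrature. The sketch below reuses dist_bounds, Psi, and cdf_R from the previous snippet; ALPHA, OMEGA, OMEGA0, and N0W are placeholder values standing in for α, ω, ω_0, and the noise power, not calibrated system parameters. Since the interferer distances are i.i.d. given R_0 = r_0, the product of identical integrals in the Laplace transform collapses to a single integral raised to the power n_I.

import numpy as np
from scipy.integrate import quad
from scipy.stats import binom

ALPHA, OMEGA, OMEGA0 = 3.0, 1e-14, 1e-16   # placeholder constants
N0W = 1.0                                  # normalized noise power (example)

def sqrt_term(r, lat_deg):
    lat = np.radians(lat_deg)
    v1 = 4.0 * (R_E + a)**2 * R_E**2 * np.cos(lat)**2
    v2 = (R_E + a)**2 + R_E**2
    return np.sqrt(max(v1 - (v2 - r**2)**2, 1e-9))

def laplace_I(s, r0, n_sat, lat_deg):
    # Laplace transform of the aggregated interference for m = 1 (Rayleigh)
    d_v = dist_bounds(lat_deg)[2]
    norm = Psi(d_v, lat_deg) - Psi(r0, lat_deg)
    p_I = float(np.clip(norm / (1.0 - Psi(r0, lat_deg)), 0.0, 1.0))
    inner, _ = quad(lambda r: (OMEGA * r**ALPHA / (s + OMEGA * r**ALPHA))
                    * (2.0 * r / np.pi) / sqrt_term(r, lat_deg) / norm, r0, d_v)
    k = np.arange(n_sat)
    return float(np.sum(binom.pmf(k, n_sat - 1, p_I) * inner**k))

def coverage_rayleigh(tau, n_sat, lat_deg):
    d_min, _, d_v = dist_bounds(lat_deg)
    p = float(Psi(d_v, lat_deg))                 # equals p_v
    f_R0 = lambda r: (2.0 * n_sat * r / np.pi
                      * (1.0 - Psi(r, lat_deg))**(n_sat - 1)
                      / sqrt_term(r, lat_deg)) / cdf_R(d_v, n_sat, lat_deg)
    integrand = lambda r: (np.exp(-OMEGA0 * N0W * tau * r**ALPHA)
                           * laplace_I(OMEGA0 * tau * r**ALPHA, r, n_sat, lat_deg)
                           * f_R0(r))
    val, _ = quad(integrand, d_min, d_v, limit=200)
    return (1.0 - (1.0 - p)**n_sat) * val

print(coverage_rayleigh(tau=1.0, n_sat=20, lat_deg=37.0))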
§.§ Poisson Limit Theorem-Based Approximation When sufficiently many GEO satellites are in orbit, e.g., N → ∞, the BPP can be interpreted as a PPP Φ_P with the density of λ = N/|𝒜| = N/(2π(R_E+a)) according to the Poisson limit theorem [<ref>]. The void probability of the PPP Φ_P for the arc 𝒜(r) is the probability that there is no satellite in 𝒜(r), which is given by e^-λ|𝒜(r)|. Using this property, we obtain the satellite-visible probability as ℙ[N_Φ_P(𝒜_V) > 0] = 1 - ℙ[N_Φ_P(𝒜_V) = 0] ≈ 1 - e^-λ|𝒜_V| = 1 - e^-NΨ(d_v,ϕ) and approximate the CDF and PDF of R and R_0 in the next two lemmas. The approximated CDF and PDF of R, denoted by F̃_R(r) and f̃_R(r), are respectively given by F̃_R(r) = 0 for r < d_min, F̃_R(r) = 1 - e^-NΨ(r,ϕ) for d_min ≤ r < d_max, and F̃_R(r) = 1 otherwise, and f̃_R(r) = 2rN e^-NΨ(r,ϕ)/(π√(v_1 - (v_2 - r^2)^2)) for d_min ≤ r < d_max, and f̃_R(r) = 0 otherwise. With the PPP approximation of GEO satellite positions, we approximate the CDF of R as F̃_R(r) = ℙ[R ≤ r] = 1 - ℙ[R > r] = 1 - e^-λ|𝒜(r)|. By substituting (<ref>) and λ = N/(2π(R_E+a)) here, the approximated CDF is obtained. The PDF is directly given by taking the derivative of the CDF, which completes the proof. The approximated CDF and PDF of R_0, denoted by F̃_R_0(r) and f̃_R_0(r), are obtained by substituting the CDF and PDF of R in Lemma <ref> into (<ref>) and (<ref>) as F̃_R_0(r) = 0 for r < d_min, F̃_R_0(r) = (1 - e^-NΨ(r,ϕ))/(1 - e^-NΨ(d_v,ϕ)) for d_min ≤ r < d_v, and F̃_R_0(r) = 1 otherwise, and f̃_R_0(r) = (2N/π)/(1 - e^-NΨ(d_v,ϕ)) · r e^-NΨ(r,ϕ)/√(v_1 - (v_2 - r^2)^2) for d_min ≤ r < d_v, and f̃_R_0(r) = 0 otherwise. The proof of this lemma is complete by following the same steps as in the proof of Lemma <ref>. These distance distributions are much simpler than the BPP-based results obtained in the previous section. Thus, the simplified CDFs and PDFs will play a crucial role in reducing the computational complexity of evaluating the coverage probability. Also, the simplified Laplace transform is provided next. When the satellites are distributed according to the PPP Φ_P with the density of λ = N/(2π(R_E+a)), and the effective antenna gains of the interfering links are equal, i.e., G_n = G̅ ∀ n ≠ 0 for an arbitrary constant G̅, the Laplace transform of the aggregated interference power is derived as ℒ̃_I|r_0(s) = e^-(2N/π)(Ω_1(d_v) - Ω_1(r_0) - Ω_2(s,r_0)), where Ω_1(r) = -(1/2) tan^-1((v_2 - r^2)/√(v_1 - (v_2 - r^2)^2)) and Ω_2(s,r_0) = ∫_r_0^d_v (s r^-α/(mω) + 1)^-m r dr/√(v_1 - (v_2 - r^2)^2) with ω = 16π^2 f_c^2/(P_t G̅ c^2). See Appendix <ref>. Unlike the Laplace transform of the aggregated interference power for the binomially distributed satellites, given in Lemma <ref>, the approximated Laplace transform in Lemma <ref> does not rely on any distance distribution thanks to the stochastic properties of the PPP Φ_P. Using the simplified distance distributions and Laplace transform, the coverage probability is obtained in the following theorem. When GEO satellites are distributed according to a PPP Φ_P with a density of λ = N/(2π(R_E+a)), the coverage probability is given by P̃_cov(τ;m) ≈ (2N/π)/(1 - e^-NΨ(d_v,ϕ)) ∑_i=1^m C(m,i) (-1)^i+1 Ξ_i(τ;m), where Ξ_i(τ;m) = ∫_d_min^d_v r e^-Θ_i(r,τ;m)/√(v_1 - (v_2 - r^2)^2) dr with Θ_i(r,τ;m) = NΨ(r,ϕ) + ν i ω_0 N_0 W τ r^α + (2N/π)(Ω_1(d_v) - Ω_1(r) - Ω_2(ν i ω_0 τ r^α, r)). The proof is complete by following the proof of Theorem <ref> with the approximated results (<ref>), (<ref>), and (<ref>). Although the expression in Theorem <ref> has the integral term in Ξ_i(τ;m), it is much easier to calculate than that in Theorem <ref> due to the simplified Laplace transform. § SIMULATION RESULTS In this section, we numerically verify the derived results based on the simulation parameters listed in Table <ref> unless otherwise stated. Handheld terminals are considered for the S-band as in the 3GPP standardization [<ref>]. With the assumed effective isotropically radiated power (EIRP) density, which is calculated as P_t G_t/W, we obtain the transmit power of the satellites as P_t = 52.77 dBm. The typical terminal is located in Seoul, South Korea, i.e., {ϕ, θ} = {37, 127} deg. Fig. <ref> compares our analysis with the simulation results considering the actual GEO satellites. The positions of the actual GEO satellites, depicted in Fig. <ref>, are calculated from the two-line element dataset given at https://celestrak.org/ on October 21, 2023. In this dataset, we consider the satellites with an inclination of less than 1 degree among all the actual satellites in geosynchronous orbits. As a result, the number of considered satellites is N = 391. In Fig.
<ref>, we compare the average number of visible satellites for the actual and BPP-based satellite distributions. For the actual distribution, the average number is calculated by averaging the number of visible satellites over all longitudes at each latitude. For the BPP-based distribution, the average number comes from 𝔼[N_Φ(𝒜_V)] = N p_v given in Remark <ref>. It is shown that our approach to modeling GEO satellites is fairly reasonable because the average number of actually visible satellites at a terminal is almost the same as our analysis. This alignment is mainly achieved because adjacent GEO satellites are displaced at a certain distance from one another, making them almost evenly distributed in the geostationary orbit in order to minimize interference with other satellites. In Fig. <ref>, a small number of GEO satellites are positioned in the areas with longitudes from 180 to 220 degrees (140°W-180°W). These areas encompass Alaska and the Pacific region, where the demand for communication services is scarce. In the future, additional GEO satellites may be deployed in these areas to accommodate potential service needs. Fig. <ref> shows the numerical results for the coverage probability. The BPP-based analytical results are given by Theorem <ref>, while the PPP-based analysis comes from Theorem <ref>. In Fig. <ref>, the coverage probability in Theorem <ref> provides a fairly close performance to the simulation results for various path-loss exponents α = {3, 3.7}, verifying the effectiveness of the BPP-based modeling of the GEO distribution. Fig. <ref> shows the coverage probability versus the number of GEO satellites N. As N increases, the coverage probability first increases until N reaches a certain value, and then decreases. When the number of GEO satellites is small, deploying additional satellites enhances system performance by increasing the satellite visibility and the received signal-to-noise ratio from the serving satellite. However, for a large number of satellites, the presence of more satellites can lead to increased interference, resulting in a degradation of coverage performance. The approximated coverage probability in Theorem 2 is fairly similar to the one in Theorem 1, especially for high N, which verifies the Poisson limit theorem-based approximation. Fig. <ref> shows the coverage probability versus the latitude of the terminal. As expected, the coverage probability depends heavily on the terminal's latitude. This is because the geostationary orbit lies on the equatorial plane, resulting in unequal satellite visibility from different latitudes. This phenomenon explains why GEO satellites cannot provide any coverage to polar regions due to their inherent orbital characteristics. Furthermore, unlike the case with a relatively small number of satellites, e.g., N=10, when there are a large number of satellites, e.g., N=100 or 200, the regions with high latitudes, e.g., |ϕ| > 60 degrees, have higher coverage performance compared to those near the equator. This is mainly because when many GEO satellites interfere with each other, the region that sees a shorter visible arc of the geostationary orbit has better performance due to less interference. Despite this fact, when the sidelobe of the satellites' beam patterns, the main factor causing interference, is designed to be sufficiently low, e.g., G_t G_r/G̅ = 30 dB, the coverage performance around all latitudes is enhanced, and simultaneously the performance gap between different latitudes decreases. In addition, the terminals at the equator, i.e., ϕ = 0 degrees, achieve slightly higher coverage performance than those near the equator because of smaller path losses and better satellite visibility.
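The selection of the actual GEO satellites described above can be reproduced along the following lines. This sketch assumes the Python sgp4 package and a CelesTrak TLE file in the common three-line format saved locally as geo.tle (the file name is hypothetical); it keeps the satellites with inclinations below 1 degree and propagates them to a common epoch.

import numpy as np
from sgp4.api import Satrec, jday   # pip install sgp4

def load_geo_sats(tle_path, max_incl_deg=1.0):
    # keep satellites with inclination < max_incl_deg (N = 391 in the text)
    with open(tle_path) as f:
        lines = [ln.rstrip() for ln in f if ln.strip()]
    sats = []
    for i in range(0, len(lines) - 2, 3):        # (name, line 1, line 2)
        sat = Satrec.twoline2rv(lines[i + 1], lines[i + 2])
        if np.degrees(sat.inclo) < max_incl_deg:
            sats.append((lines[i].strip(), sat))
    return sats

jd, fr = jday(2023, 10, 21, 0, 0, 0.0)           # epoch used in the text
positions = []
for name, sat in load_geo_sats("geo.tle"):
    err, r, v = sat.sgp4(jd, fr)                 # position/velocity [km], TEME
    if err == 0:
        positions.append(r)
print(len(positions), "GEO satellites propagated")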
§ CONCLUSIONS In this paper, we investigated a novel approach to model the distribution of geosynchronous Earth orbit (GEO) satellites according to a binomial point process. We analyzed the distance distributions and the probabilities of the distribution cases for the serving satellite. We also derived the coverage probability, and an approximated expression was obtained by using the Poisson limit theorem. Simulation results matched the derived expressions well, and the approximate performance was fairly close to the actual system performance. The impacts of the signal-to-interference-plus-noise ratio threshold, the number of GEO satellites, and the latitude of the terminal were discussed in terms of coverage probabilities. The analytical results are expected to provide a fundamental framework for understanding GEO satellite networks and offer guidance when designing practical techniques for heterogeneous satellite communication systems. § PROOF OF LEMMA <REF> Using the finite-dimensional distribution of the BPP Φ, the probability that q satellites are positioned in the visible arc 𝒜_V is given by [<ref>] ℙ[N_Φ(𝒜_V) = q, N_Φ(𝒜_V^c) = N - q] = C(N,q) p_v^q (1 - p_v)^N-q, where 𝒜_V^c is the invisible arc, i.e., the arc under the horizontal plane, whose length is |𝒜_V^c| = |𝒜| - |𝒜_V|. By substituting q = 0 and 1 in (<ref>), we can obtain ℙ[N_Φ(𝒜_V) = 0] and ℙ[N_Φ(𝒜_V) = 1], respectively. The probability of Case 3 can be given by ℙ[N_Φ(𝒜_V) > 1] = 1 - ℙ[N_Φ(𝒜_V) = 0] - ℙ[N_Φ(𝒜_V) = 1], which completes the proof. § PROOF OF LEMMA <REF> Let D denote the distance from the terminal to an arbitrary satellite. Then, the probability that D is less than or equal to r is equivalent to the probability that the satellite is located within 𝒜(r), i.e., the success probability for 𝒜(r), which is given by ℙ[D ≤ r] = |𝒜(r)|/|𝒜| = Ψ(r,ϕ). Since the satellites in Φ are independent and identically distributed (i.i.d.), the CDF of R is given by F_R(r) = 1 - ℙ[R > r] =^(a) 1 - (1 - ℙ[D ≤ r])^N, where (a) follows from the independence of the distances to the satellites. The CDF is obtained by substituting (<ref>) into (<ref>), and the PDF is derived by differentiating the CDF, which completes the proof. § PROOF OF LEMMA <REF> Note that the maximum distance between the terminal and a visible satellite is defined as d_v. With this definition, the CDF of R_0 is given by F_R_0(r) = ℙ[R ≤ r | N_Φ(𝒜_V) > 0] = ℙ[R ≤ r, R ≤ d_v]/ℙ[R ≤ d_v]. Using the CDF of R given in Lemma <ref>, (<ref>) is expressed as (<ref>). The PDF is directly obtained by differentiating (<ref>), which completes the proof. § PROOF OF LEMMA <REF> We now explore a specific case where, given R_0 = r_0, the distance to a satellite is larger than r_0 and less than or equal to r. In this case, the satellite is positioned in 𝒜(r) ∩ 𝒜(r_0)^c because the distance to the nearest satellite is already fixed to R_0 = r_0. The probability of this case is interpreted as the success probability for the arc 𝒜(r) ∩ 𝒜(r_0)^c, which can be computed as the ratio of |𝒜(r) ∩ 𝒜(r_0)^c| to |𝒜 ∩ 𝒜(r_0)^c|, i.e., ℙ[r_0 < D ≤ r | R_0 = r_0] = |𝒜(r) ∩ 𝒜(r_0)^c|/|𝒜 ∩ 𝒜(r_0)^c| = (|𝒜(r)| - |𝒜(r_0)|)/(|𝒜| - |𝒜(r_0)|) = (Ψ(r,ϕ) - Ψ(r_0,ϕ))/(1 - Ψ(r_0,ϕ)). For a given R_0 = r_0, the CDF of R_n is given by F_R_n|r_0(r) = ℙ[R_n ≤ r | R_0 = r_0] = ℙ[D ≤ r | R_0 = r_0, r_0 < D ≤ d_v] = ℙ[D ≤ r, r_0 < D ≤ d_v | R_0 = r_0]/ℙ[r_0 < D ≤ d_v | R_0 = r_0], which equals 0 for r < r_0, ℙ[r_0 < D ≤ r | R_0 = r_0]/ℙ[r_0 < D ≤ d_v | R_0 = r_0] for r_0 ≤ r < d_v, and 1 otherwise. From (<ref>) and (<ref>), we can obtain the CDF of R_n given R_0 = r_0 as in (<ref>).
The PDF is directly obtained by using the derivative of Ψ(r,ϕ), given by dΨ(r,ϕ)/dr = (2r/π)/√(v_1 - (v_2 - r^2)^2). This completes the proof. § PROOF OF LEMMA <REF> The Laplace transform is derived as ℒ_I|r_0(s) = 𝔼_Φ,{h_n}[exp(-s ∑_n=1^N_Φ(𝒜_V ∩ 𝒜(r_0)^c) P_t G̅ h_n ℓ(𝐱_n))] = 𝔼_Φ,{h_n}[∏_n=1^N_Φ(𝒜_V ∩ 𝒜(r_0)^c) exp(-s P_t G̅ h_n ℓ(𝐱_n))] =^(a) 𝔼_Φ,{R_n}[∏_n=1^N_Φ(𝒜_V ∩ 𝒜(r_0)^c) ℒ_h_n(s/(ω R_n^α))] =^(b) 𝔼_Φ[∏_n=1^N_Φ(𝒜_V ∩ 𝒜(r_0)^c) ∫_r_0^d_v ℒ_h_n(s/(ω r_n^α)) f_R_n|r_0(r_n) dr_n] =^(c) 𝔼_Φ[∏_n=1^N_Φ(𝒜_V ∩ 𝒜(r_0)^c) ∫_r_0^d_v (mω r_n^α/(s + mω r_n^α))^m f_R_n|r_0(r_n) dr_n] =^(d) ∑_n_I=0^N-1 ℙ[N_Φ(𝒜_V ∩ 𝒜(r_0)^c) = n_I] × ∏_n=1^n_I ∫_r_0^d_v (mω r_n^α/(s + mω r_n^α))^m f_R_n|r_0(r_n) dr_n, where (a) follows from the i.i.d. distribution of the channel gains h_n, (b) follows from the i.i.d. distribution of the distances R_n, (c) follows because ℒ_h_n(s) = (m/(s + m))^m, and (d) follows from the law of total expectation, i.e., 𝔼[X] = ∑_i ℙ[A_i] 𝔼[X|A_i]. According to Lemma <ref>, the probability ℙ[N_Φ(𝒜_V ∩ 𝒜(r_0)^c) = n_I] in (<ref>) is derived as C(N-1, n_I) p_I^n_I (1 - p_I)^N-1-n_I, which completes the proof. § PROOF OF THEOREM <REF> Considering the satellite-visible probability identified in Section <ref>, the coverage probability is given by P_cov = ℙ[N_Φ(𝒜_V) = 0] ℙ[SINR ≥ τ | N_Φ(𝒜_V) = 0] + ℙ[N_Φ(𝒜_V) > 0] ℙ[SINR ≥ τ | N_Φ(𝒜_V) > 0] =^(a) ℙ[N_Φ(𝒜_V) > 0] ℙ[SINR ≥ τ | N_Φ(𝒜_V) > 0], where (a) follows because when there is no visible satellite, i.e., N_Φ(𝒜_V) = 0, the corresponding coverage probability ℙ[SINR ≥ τ | N_Φ(𝒜_V) = 0] is zero. When N_Φ(𝒜_V) > 0, the coverage probability is derived as ℙ[SINR ≥ τ | N_Φ(𝒜_V) > 0] = 𝔼_R_0[ℙ[P_t G_0 h_0 ℓ(𝐱_0)/(N_0 W + I) ≥ τ | R_0 = r]] = ∫_d_min^d_v ℙ[h_0 ≥ ω_0 (I + N_0 W) τ r^α | R_0 = r] f_R_0(r) dr = ∫_d_min^d_v 𝔼_I[ℙ[h_0 ≥ ω_0 (I + N_0 W) τ r^α | R_0 = r, I]] f_R_0(r) dr ≈^(a) ∫_d_min^d_v 𝔼_I[∑_i=1^m C(m,i) (-1)^i+1 e^-ν i ω_0 (I + N_0 W) τ r^α] f_R_0(r) dr = ∑_i=1^m C(m,i) (-1)^i+1 ∫_d_min^d_v e^-ν i ω_0 N_0 W τ r^α 𝔼_I[e^-ν i ω_0 I τ r^α] f_R_0(r) dr, where (a) follows from the approximated CDF of the channel gain F_h_n(x) ≈ 1 - ∑_i=1^m C(m,i) (-1)^i+1 e^-ν i x [<ref>]. From the definition of the Laplace transform, i.e., ℒ_X(s) = 𝔼_X[e^-sX], we obtain (<ref>), which completes the proof. § PROOF OF LEMMA <REF> The Laplace transform is derived as ℒ̃_I|r_0(s) = 𝔼[e^-sI | R_0 = r_0] =^(a) 𝔼_Φ_P[∏_n=1^N_Φ_P(𝒜_V ∩ 𝒜(r_0)^c) 𝔼_h_n[e^-s P_t G̅ h_n ℓ(𝐱_n)]] =^(b) exp(-λ ∫_𝐱_n ∈ 𝒜_V ∩ 𝒜(r_0)^c (1 - 𝔼_h_n[e^-s P_t G̅ h_n ℓ(𝐱_n)]) d𝐱_n) =^(c) exp(-λ ∫_𝐱_n ∈ 𝒜_V ∩ 𝒜(r_0)^c (1 - 1/(s r^-α/(mω) + 1)^m) d𝐱_n) =^(d) exp(-(2N/π) ∫_r_0^d_v (1 - 1/(s r^-α/(mω) + 1)^m) r dr/√(v_1 - (v_2 - r^2)^2)) = exp(-(2N/π) ∫_r_0^d_v r dr/√(v_1 - (v_2 - r^2)^2)_= Ω_1(d_v) - Ω_1(r_0) + (2N/π) ∫_r_0^d_v (1/(s r^-α/(mω) + 1)^m) r dr/√(v_1 - (v_2 - r^2)^2)_= Ω_2(s, r_0)), where (a) follows from the independence of the channel gains h_n, (b) follows from Campbell's theorem for the PPP Φ_P, (c) follows from the Laplace transform 𝔼[e^-s h_n] = ℒ_h_n(s) = (m/(s + m))^m, and (d) comes from d|𝒜(r)|/dr = 4r(R_E+a)/√(v_1 - (v_2 - r^2)^2). Using the derivative of Ω_1(r), dΩ_1(r)/dr = r/√(v_1 - (v_2 - r^2)^2), and the definition of Ω_2(s, r_0) given in Lemma <ref>, the proof is complete. bib:3GPP_38.811 3GPP TR 38.811 v15.4.0, "Study on NR to support non-terrestrial networks," Sep. 2020. bib:3GPP_38.821 3GPP TR 38.821 v16.0.0, "Solutions for NR to support non-terrestrial networks (NTN)," Dec. 2019. bib:Su-22 Y. Su, Y. Liu, Y. Zhou, J. Yuan, H. Cao, and J. Shi, "Broadband LEO satellite communications: Architectures and key technologies," IEEE Wireless Commun., vol. 26, no. 2, pp. 55-61, Apr. 2019. bib:Okati1 N. Okati, T. Riihonen, D. Korpi, I. Angervuori, and R. Wichman, "Downlink coverage and rate analysis of low Earth orbit satellite constellations using stochastic geometry," IEEE Trans. Commun., vol. 68, no. 8, pp. 5120-5134, Aug. 2020. bib:Talgat1 A. Talgat, M. A. Kishk, and M.-S. Alouini, "Nearest neighbor and contact distance distribution for binomial point process on spherical surfaces," IEEE Commun. Lett., vol. 24, no. 12, pp. 2659-2663, Dec.
2020. bib:Talgat2 ——, "Stochastic geometry-based analysis of LEO satellite communication systems," IEEE Commun. Lett., vol. 25, no. 8, pp. 2458-2462, Aug. 2021. bib:Jung1 D.-H. Jung, G. Im, J.-G. Ryu, S. Park, H. Yu, and J. Choi, "Satellite clustering for non-terrestrial networks: Concept, architectures, and applications," IEEE Veh. Technol. Mag., vol. 18, no. 3, pp. 29-37, Sep. 2023. bib:Jung2 D.-H. Jung, J.-G. Ryu, W.-J. Byun, and J. Choi, "Performance analysis of satellite communication system under the shadowed-Rician fading: A stochastic geometry approach," IEEE Trans. Commun., vol. 70, no. 4, pp. 2707-2721, Apr. 2022. bib:Al-Hourani1 A. Al-Hourani, "An analytic approach for modeling the coverage performance of dense satellite networks," IEEE Wireless Commun. Lett., vol. 10, no. 4, pp. 897-901, Apr. 2021. bib:Al-Hourani2 ——, "Optimal satellite constellation altitude for maximal coverage," IEEE Wireless Commun. Lett., vol. 10, no. 7, pp. 1444-1448, Jul. 2021. bib:Okati2 N. Okati and T. Riihonen, "Nonhomogeneous stochastic geometry analysis of massive LEO communication constellations," IEEE Trans. Commun., vol. 70, no. 3, pp. 1848-1860, Mar. 2022. bib:Park J. Park, J. Choi, and N. Lee, "A tractable approach to coverage analysis in downlink satellite networks," IEEE Trans. Wireless Commun., vol. 22, no. 2, pp. 793-807, Feb. 2023. bib:Abdu T. S. Abdu, S. Kisseleff, E. Lagunas, S. Chatzinotas, and B. Ottersten, "Joint carrier allocation and precoding optimization for interference-limited GEO satellite," in Proc. 39th International Communications Satellite Systems Conference (ICSSC 2022), 2022, pp. 128-132. bib:Abdu2 T. S. Abdu, S. Kisseleff, E. Lagunas, and S. Chatzinotas, "Flexible resource optimization for GEO multibeam satellite communication system," IEEE Trans. Wireless Commun., vol. 20, no. 12, pp. 7888-7902, Dec. 2021. bib:Jia M. Jia, X. Liu, X. Gu, and Q. Guo, "Joint cooperative spectrum sensing and channel selection optimization for satellite communication systems based on cognitive radio," Int. J. Satell. Commun. Network., vol. 35, no. 2, pp. 139-150, Dec. 2015. bib:Khan W. U. Khan, E. Lagunas, A. Mahmood, B. M. ElHalawany, S. Chatzinotas, and B. Ottersten, "When RIS meets GEO satellite communications: A new sustainable optimization framework in 6G," in Proc. IEEE 95th Veh. Technol. Conference (VTC-Spring), Jun. 2022, pp. 1-6. bib:CS.Park C.-S. Park, C.-G. Kang, Y.-S. Choi, and C.-H. Oh, "Interference analysis of geostationary satellite networks in the presence of moving non-geostationary satellites," in Proc. 2nd Int. Conf. Inf. Technol. Converg. Services, 2010, pp. 1-5. bib:Fortes J. M. P. Fortes, R. Sampaio-Neto, and J. E. A. Maldonado, "An analytical method for assessing interference in interference environments involving NGSO satellite networks," Int. J. Satell. Commun., vol. 17, no. 6, pp. 399-419, Dec. 1999. bib:SGP4 D. Vallado and P. Crawford, "SGP4 orbit determination," in Proc. AIAA/AAS Astrodynamics Specialist Conference and Exhibit, Aug. 2008, pp. 6770-6799. bib:Jung3 D.-H. Jung, J.-G. Ryu, and J. Choi, "Satellite clusters flying in formation: Orbital configuration-dependent performance analyses," arXiv preprint arXiv:2305.01955. bib:Chiu S. N. Chiu, D. Stoyan, W. S. Kendall, and J. Mecke, Stochastic Geometry and Its Applications, 3rd ed. New York, NY: Wiley, 2013. bib:Jung4 D.-H. Jung, J.-G. Ryu, and J. Choi, "When satellites work as eavesdroppers," IEEE Trans. Inf. Forensics Security, vol. 17, pp. 2784-2799, 2022. bib:Andrews J. G. Andrews, T. Bai, M. N. Kulkarni, A. Alkhateeb, A. K.
Gupta, and R. W. Heath, “Modeling and analyzing millimeter wave cellular systems," IEEE Trans. Commun., vol. 65, no. 1, pp. 403-430, Jan. 2017.
http://arxiv.org/abs/2312.15924v1
{ "authors": [ "Dong-Hyun Jung", "Hongjae Nam", "Junil Choi", "David J. Love" ], "categories": [ "cs.IT", "eess.SP", "math.IT" ], "primary_category": "cs.IT", "published": "20231226073640", "title": "Modeling and Analysis of GEO Satellite Networks" }
Principled Instructions Are All You Need for Questioning LLaMA-1/2, GPT-3.5/4 Sondos Mahmoud Bsharat^*, Aidar Myrzakhan^*, Zhiqiang Shen^* ^*joint first author & equal contribution 1. VILA Lab, Mohamed bin Zayed University of AI  ===============================================================================================================================================================^* Equal contribution as the first authors. Accurate human trajectory prediction is crucial for applications such as autonomous vehicles, robotics, and surveillance systems. Yet, existing models often fail to fully leverage the non-verbal social cues humans subconsciously communicate when navigating their surroundings. To address this, we introduce Social-Transmotion, a generic model that exploits the power of transformers to handle diverse and numerous visual cues, capturing the multi-modal nature of human behavior. We translate the idea of a prompt from Natural Language Processing (NLP) to the task of human trajectory prediction, where a prompt can be a sequence of x-y coordinates on the ground, bounding boxes, or body poses. This, in turn, augments trajectory data, leading to enhanced human trajectory prediction. Our model exhibits flexibility and adaptability by capturing spatiotemporal interactions between pedestrians based on the available visual cues, whether they are poses, bounding boxes, or a combination thereof. Through a masking technique, we ensure our model's effectiveness even when certain visual cues are unavailable, although performance is further boosted by the presence of comprehensive visual data. We delve into the merits of using 2d versus 3d poses, and a limited set of poses. Additionally, we investigate the spatial and temporal attention maps to identify which keypoints and frames of poses are vital for optimizing human trajectory prediction. Our approach is validated on multiple datasets, including JTA, JRDB, Pedestrians and Cyclists in Road Traffic, and ETH-UCY. The code is publicly available: https://github.com/vita-epfl/social-transmotion. § INTRODUCTION Predicting future events is often considered an essential aspect of intelligence <cit.>. This capability becomes critical in autonomous vehicles, where accurate predictions can help avoid accidents involving humans. For instance, consider a scenario where a pedestrian is about to cross the street. A non-predictive agent may only detect the pedestrian when it is directly in front of it, attempting to avoid a collision at the last moment. In contrast, a predictive agent can anticipate the pedestrian's actions several seconds ahead of time, making informed decisions on when to stop or proceed. Trajectory prediction models aim to forecast the future positions of objects or people based on a sequence of observed 3d positions in the past. These models have substantial implications for various fields such as autonomous driving <cit.>, socially-aware robotics <cit.>, and security <cit.>.
While the inherent stochasticity arising from human free will places a limit on predictability, traditional predictors also have limited performance because they typically rely on a single data point per person (i.e., their x-y coordinates on the ground) as input. This singular focus neglects a wealth of additional signals, such as body language, fine-grained social interactions, and gaze directions, that humans naturally exhibit to communicate their intended trajectories. In this study, we explore the signals that humans consciously or subconsciously use to convey their mobility patterns. For example, individuals may turn their heads and shoulders before altering their walking direction, a visual cue that cannot be captured using a sequence of spatial locations over time. Similarly, social interactions may be anticipated through gestures like hand waves or changes in head direction. Our goal is to propose a generic architecture for human trajectory prediction that leverages additional information whenever it is available (e.g., the body poses). We incorporate the sequence of observed cues as input, along with the observed trajectories, to predict future trajectories, as depicted in <Ref>. We translate the idea of a prompt from Natural Language Processing (NLP) to the task of human trajectory prediction, where a prompt can be a sequence of x-y coordinates on the ground, bounding boxes, or body poses. We refer to our task as promptable human trajectory prediction. We embrace the multi-modal nature of human behavior by accommodating various visual cues to better capture the intricacies and nuances of human motion, leading to more accurate trajectory predictions. The challenge lies in effectively encoding and integrating all these visual cues into the prediction model. We introduce Social-Transmotion, a generic and adaptable transformer-based model for human trajectory prediction. This model seamlessly integrates various types and quantities of visual cues, thus enhancing adaptability to diverse data modalities and exploiting rich information for improved prediction performance. Its dual-transformer architecture dynamically assesses the significance of distinct visual cues of both the primary and neighboring pedestrians, effectively capturing relevant social interactions and body language cues. To ensure the generality of our network, we employ a training strategy that includes selective masking of different types and quantities of visual cues. In other words, our model exhibits robustness even in the absence of certain visual cues: it can make predictions without relying on bounding boxes when pose information is unavailable, or it can use trajectory inputs alone when no visual cues are accessible. Our experimental results demonstrate that Social-Transmotion outperforms previous models. Additionally, we provide a comprehensive analysis of the usefulness of different visual representations, including 2d and 3d body pose keypoints and bounding boxes, for trajectory prediction. We show that 3d pose keypoints more effectively capture social interactions, while 2d pose keypoints can be a good alternative when 3d pose information is unavailable. We also consider the requirements of using poses from all humans at all times and the necessity of 3d versus 2d poses or even just bounding boxes. In some applications, only the latter may be available.
We provide an in-depth analysis of these factors in Section <ref>. In summary, our contributions are twofold. First, we present Social-Transmotion, the pioneering generic Transformer-based model for promptable human trajectory prediction, designed to flexibly utilize various visual cues for improved accuracy, even in the absence of certain cues. Second, we provide an in-depth analysis of the usefulness of different visual representations for trajectory prediction. § RELATED WORKS §.§ Human trajectory prediction Human trajectory prediction has evolved significantly over the years. Early models, such as the Social Force model, focused on the attractive and repulsive forces among pedestrians <cit.>. Later, Bayesian inference was employed to model human-environment interactions for trajectory prediction <cit.>. As the field progressed, data-driven methods gained prominence <cit.>, with many studies modeling human-human interactions <cit.> to improve predictions. For example, Social-LSTM <cit.> used hidden states to model observed neighbor interactions, while <cit.> proposed the directional grid for better social interaction modeling. In recent years, researchers have expanded the scope of social interactions to encompass human-context interactions <cit.> and human-vehicle interactions <cit.>. Various architectures have been used, ranging from recurrent neural networks (RNNs) <cit.> to generative adversarial networks (GANs) <cit.> and diffusion models <cit.>. The introduction of Transformers and positional encoding <cit.> has led to their adoption in sequence modeling, owing to their capacity to capture long-range dependencies. This approach has recently been widely utilized in trajectory prediction <cit.>, showing state-of-the-art performance <cit.>. Despite advancements in social-interaction modeling, previous works have predominantly relied on sequences of pedestrian x-y coordinates as input features. With the advent of datasets providing more visual cues <cit.>, more detailed information about pedestrian motion is now available. Therefore, we design a generic transformer that can benefit from incorporating visual cues in a promptable manner. §.§ Visual Cues for Trajectory Prediction Multi-task learning has emerged as an effective approach for sharing representations and leveraging complementary information across related tasks. Numerous pioneering studies have demonstrated the potential benefits of incorporating additional associated tasks into human trajectory prediction, such as intention prediction <cit.>, 2d/3d bounding-box prediction <cit.>, and action recognition <cit.>. The human pose serves as a potent indicator of human intentions. Owing to the advancements in pose estimation <cit.>, 2d poses can now be readily extracted from images. In recent years, a couple of studies have explored the use of 2d body poses as visual cues for trajectory prediction in image/pixel space <cit.>. However, our work concentrates on trajectory prediction in camera/world coordinates, which offers more extensive practical applications. Employing 2d body poses presents limitations, such as the loss of depth information, making it difficult to capture the spatial distance between agents. In contrast, 3d poses circumvent this issue and have been widely used in pose estimation <cit.>, pose forecasting <cit.>, and pose tracking <cit.>. Nevertheless, 3d pose data may not always be available in real-world scenarios.
Inspired by a recent work in intention prediction, which demonstrated enhanced performance when employing bounding boxes <cit.>, we have also included this visual cue in our exploration. Our goal is to investigate the effects of various visual cues, including but not limited to 3d human poses, on trajectory prediction. A study with close ties to our research is that of <cit.>, which highlighted the utility of an individual pedestrian's 3d body pose for predicting their trajectory. However, our research incorporates social interactions among poses, a feature overlooked in their study. Also, unlike <cit.>, which proposed head orientation as a feature, we explore more granular representations. Our work considers not only the effect of social interactions between 3d poses but also other visual cues, amplifying trajectory prediction precision. Moreover, our adaptable network is capable of harnessing any available visual cues. § METHOD Our main objective is to tackle the task of predicting future trajectories. To achieve this, we have developed an adaptable model that effectively utilizes various visual cues alongside historical trajectory data. We also recognize that different scenarios may present varying sets of visual cues. To address this, our model is trained to be flexible enough to handle different types and quantities of cues. As illustrated in <Ref>, our model comprises two transformers. The Cross-Modality Transformer (CMT) takes as input the agent's previous 2d coordinates and can incorporate additional cues like the agent's 2d or 3d pose information and bounding boxes from past time-steps. By incorporating these diverse cues, the CMT generates a more informative representation of the agent's behavior. Additionally, the Social Transformer (ST) is responsible for merging the outputs of the cross-modality transformers of different agents. By combining these individual representations, the social transformer captures interactions between agents, enabling the model to analyze their interplay and dependencies. §.§ Problem Formulation We denote the trajectory sequence of pedestrian i as 𝐱_𝐢^𝐓, the 3d and 2d local pose coordinates as 𝐱_𝐢^3𝐝𝐏 and 𝐱_𝐢^2𝐝𝐏, respectively, and the 3d and 2d bounding box coordinates as 𝐱_𝐢^3𝐝𝐁 and 𝐱_𝐢^2𝐝𝐁, respectively. We also label the observed time-steps as t=1, ..., T_obs and the prediction time-steps as t = T_obs+1, ..., T_pred. In a scene with N pedestrians, the network input is 𝐗 = [X_1, X_2, X_3, ..., X_N], where X_i = {𝐱_𝐢^𝐜, 𝐜∈{ 𝐓, 3𝐝𝐏, 2𝐝𝐏, 3𝐝𝐁, 2𝐝𝐁 }} depending on the availability of different cues. The tensor 𝐱_𝐢^𝐜 has a shape of (T_obs, e^c, f^c), where e^c represents the number of elements in a specific cue (for example, the number of keypoints) and f^c denotes the number of features for each element. Without loss of generality, we consider X_1 as the primary agent. The network's output, 𝐘 = Y_1, contains the predicted future trajectory of the primary pedestrian, following the standard notation. §.§ Input Cues Embeddings To effectively incorporate the visual cues into our model, we employ a cue-specific embedding layer to embed the coordinates of the trajectory and all visual cues for each past time-step. In addition, we utilize positional encoding techniques to represent the temporal order of the input cues. We also need to encode the identity of the person associated with each cue and the keypoint type for keypoint-related cues (e.g., neck, hip, shoulder).
To tackle this, we introduce three distinct embeddings: one for temporal order, one for person identity, and one for keypoint type. The temporal order embedding facilitates the understanding of the sequence of cues, enabling the model to capture temporal dependencies and patterns. The person identity embedding allows the model to distinguish between different individuals within the input data. Lastly, the keypoint type embedding enhances the model's ability to extract relevant features and characteristics associated with the movement of different keypoint types. These embeddings are randomly initialized and learned during the training process. H_i^c = MLP^c(𝐱_𝐢^𝐜) + P. The resulting tensor H_i^c has a shape of (T_obs, e^c, D), where D represents the embedding dimension, MLP^c refers to cue-specific Multi-Layer Perceptron (MLP) embedding layers, and the tensor P contains the positional encoding information. §.§ Latent Input Queries We equip each agent with a set of latent queries, labeled Q_i, of shape (T_pred - T_obs, D). Since the number of input modalities is large and variable, we employ these latent queries to aggregate the motion information of each agent across the multitude of modalities. The latent queries encoded by the CMT, together with the 2d coordinate representations of past motion, are then fed into the second transformer, ST. In the final layers of the network, each latent query associated with the primary agent is mapped to one of the potential future positions. §.§ Cross-Modality Transformer (CMT) The CMT in our model is designed to process various input embedding vectors. By incorporating these different cues, the CMT is capable of encoding a more comprehensive and informative representation of the agent's motion dynamics. Furthermore, the CMT employs shared parameters to process the various modalities and ensure efficient information encoding across different inputs. mQ_i, mH_i^c = 𝐂𝐌𝐓(Q_i, H_i^c, c∈{T, 3dP, 2dP, 3dB, 2dB }). The CMT transforms the latent representation of agent motion, concat(H_i^T, Q_i), into a motion cross-modal tensor mH_i^M with shape (T_pred, D), where mH_i^M = concat(mH_i^T, mQ_i). Similarly, each cue embedding tensor H_i^c is mapped to mH_i^c with shape (T_obs, e^c, D). It is important to note that while our CMT receives inputs from various cues, only the motion cross-modal tensor mH_i^M is passed to the ST. Therefore, the number of input vectors to the ST is independent of the number of input cues. This decision is based on the assumption that the motion cross-modal features capture and encode information from the different cues. §.§ Social Transformer (ST) The ST in our model integrates the motion tensors from the CMT across all agents. By combining the individual representations from different agents, the ST creates a comprehensive representation of the collective behavior, considering the influence and interactions among the agents. This enables the model to better understand and predict the complex dynamics in multi-agent scenarios. SM_i = ST(mH_i^M, i∈[1,N]). The ST transforms the motion cross-modal tensor of each agent mH_i^M into a socially aware encoding tensor SM_i with shape (T_pred, e^T, D). We denote SM_i = concat(SM_i^T, SM_i^Q), where SM_i^T and SM_i^Q are respectively the mappings of mH_i^T and mQ_i. Finally, SM_1^Q undergoes a projection layer that transforms it into the 2d coordinate predictions of the future positions.
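A minimal PyTorch sketch of this data flow is given below. It is a schematic stand-in rather than the released implementation: vanilla transformer encoders replace the actual CMT and ST blocks, the three auxiliary embeddings are collapsed into a single module, and all names, shapes, and hyper-parameters are illustrative.

import torch
import torch.nn as nn

class CueEmbedding(nn.Module):
    # embeds one cue tensor (T_obs, e_c, f_c) -> (T_obs, e_c, D) and adds
    # learned temporal-order, person-identity, and keypoint-type embeddings
    def __init__(self, f_c, d, max_t=32, max_id=64, n_kp=25):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(f_c, d), nn.ReLU(), nn.Linear(d, d))
        self.t_emb = nn.Embedding(max_t, d)
        self.id_emb = nn.Embedding(max_id, d)
        self.kp_emb = nn.Embedding(n_kp, d)

    def forward(self, x, person_id, kp_ids):
        t = torch.arange(x.shape[0])
        return (self.mlp(x) + self.t_emb(t)[:, None, :]
                + self.id_emb(person_id)[None, None, :]
                + self.kp_emb(kp_ids)[None, :, :])

class SocialTransmotionSketch(nn.Module):
    def __init__(self, d=128, t_obs=9, n_pred=12):
        super().__init__()
        enc = lambda: nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d, nhead=8, batch_first=True), 2)
        self.cmt, self.st = enc(), enc()
        self.queries = nn.Parameter(torch.randn(n_pred, d))
        self.head = nn.Linear(d, 2)
        self.t_obs, self.n_pred = t_obs, n_pred

    def forward(self, agent_tokens):
        # agent_tokens: list over agents of (n_tokens_i, d); by convention in
        # this sketch, the first t_obs tokens of each agent are its trajectory
        motion = []
        for tok in agent_tokens:
            out = self.cmt(torch.cat([tok, self.queries], 0)[None])[0]
            # motion cross-modal tensor: trajectory tokens + latent queries
            motion.append(torch.cat([out[:self.t_obs], out[-self.n_pred:]], 0))
        social = self.st(torch.cat(motion, 0)[None])[0]
        q0 = social[self.t_obs:self.t_obs + self.n_pred]  # primary agent's queries
        return self.head(q0)                              # (n_pred, 2) future x-y

emb = CueEmbedding(f_c=3, d=128)
pose_tok = emb(torch.randn(9, 17, 3), torch.tensor(0), torch.arange(17))
traj_tok = torch.randn(9, 1, 128)
tokens = torch.cat([traj_tok.squeeze(1), pose_tok.flatten(0, 1)], dim=0)
model = SocialTransmotionSketch()
print(model([tokens, tokens.clone()]).shape)   # torch.Size([12, 2])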
§.§ Input Masking To ensure the generality and adaptability of our network, we employ a training approach that involves masking different types and quantities of visual cues. Each sample in the training dataset is augmented with a variable combination of cues, including trajectories, 2d or 3d human pose information, and bounding boxes. This masking technique enables our network to learn and adapt to various cue configurations during training. Subsequently, we conduct testing to evaluate the model's performance across different combinations of visual cues. By systematically varying the presence or absence of specific cues in the input, we assess the model's ability to leverage different cues for accurate trajectory prediction. Our model is trained with the Mean Square Error (MSE) loss between 𝐘 and the ground truth 𝐘̂. § EXPERIMENTS In this section, we present the datasets used, the metrics and baselines, and an extensive analysis of the results in both quantitative and qualitative aspects, followed by a discussion. The implementation details can be found in <Ref>. §.§ Datasets We evaluate on three publicly available datasets providing visual cues: JTA <cit.>, JRDB <cit.>, and Pedestrians and Cyclists in Road Traffic <cit.>. Furthermore, we report on the well-known ETH-UCY dataset <cit.>, which does not contain visual cues. JTA dataset: a large-scale synthetic dataset containing 256 training sequences, 128 validation sequences, and 128 test sequences, with a total of approximately 10 million 3d keypoint annotations. The abundance of data and multi-agent scenarios in this dataset enables a thorough exploration of our models' potential performance; we therefore consider it our main dataset. We predict the locations of the future 12 time-steps given the previous 9 time-steps. JRDB dataset: a real-world dataset that provides a diverse set of pedestrian trajectories and 2d bounding boxes, allowing for a comprehensive evaluation of our models in both indoor and outdoor scenarios. We used `gates-ai-lab' for validation, the indoor scenario `packard-poster-session' and the outdoor scenario `tressider' for testing, and the other scenarios for training. We predict the future 12 time-steps given the past 9 time-steps at 2.5 frames per second (fps). Pedestrians and Cyclists in Road Traffic dataset: gathered from real-world urban traffic settings, it comprises more than 2,000 pedestrian trajectories paired with their corresponding 3d body poses. It contains 50,000 test samples. For evaluations on this dataset, the models observe one second and predict the next 2.52 seconds at 25 fps. §.§ Metrics and Baselines We evaluate the models in terms of Average Displacement Error (ADE), Final Displacement Error (FDE), and Average Specific Weighted Average Euclidean Error (ASWAEE) <cit.>: - ADE: the average displacement error between the predicted locations and the real locations of the pedestrian across all prediction time-steps; - FDE: the displacement error between the final predicted location and the real location; - ASWAEE: the average displacement error per second for specific time-steps; following <cit.>, we compute it for these five time-steps: [t=0.44s, t=0.96s, t=1.48s, t=2.00s, t=2.52s]. We selected the best-performing trajectory prediction models <cit.> from the Trajnet++ leaderboard <cit.>. In addition, we compare with the recent state-of-the-art models EqMotion <cit.>, Autobots <cit.>, and Trajectron++ <cit.>, as well as a pose-based trajectory prediction model <cit.>. Note that in this paper, we concentrate on deterministic prediction, and thus all models generate a single trajectory per agent.
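For reference, ADE and FDE are direct transcriptions of their definitions; the ASWAEE sketch below maps the five time instants to frame indices at 25 fps, which is our reading of the definition rather than the official evaluation code.

import numpy as np

def ade(pred, gt):
    # Average Displacement Error over all prediction time-steps; (T_pred, 2)
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

def fde(pred, gt):
    # Final Displacement Error at the last prediction time-step
    return float(np.linalg.norm(pred[-1] - gt[-1]))

def aswaee(pred, gt, fps=25.0, times=(0.44, 0.96, 1.48, 2.00, 2.52)):
    # average displacement error at the five specific time instants
    idx = [min(int(round(t * fps)) - 1, len(pred) - 1) for t in times]
    return float(np.mean([np.linalg.norm(pred[i] - gt[i]) for i in idx]))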
Note that in this paper, we concentrate on deterministic prediction, and thus, all models generate a single trajectory per agent.

§.§ Results Quantitative results <Ref> compares the previous models with our proposed visual-cues-based model on two datasets. Our model, even when provided with only past trajectory information at inference time, surpasses previous models in terms of ADE/FDE. Moreover, the integration of pose information into our model leads to a significant enhancement. This improvement stems from the ability of pose-based models to capture body rotation patterns before changes in walking direction occur. 3d pose yields larger improvements than 2d pose. This can be attributed to the fact that modeling social interactions requires more spatial information, and 3d pose provides the advantage of depth perception compared to 2d pose.

The absence of pose information in the JRDB dataset led us to rely on bounding boxes as the visual cue. The results show that incorporating bounding boxes outperforms trajectory-only prediction. Additionally, we conducted a similar experiment on the JTA dataset and observed that the inclusion of 2d bounding boxes, in addition to trajectories, improved the FDE metric. However, it is important to note that the performance was still lower compared to utilizing 3d pose cues. Furthermore, we conducted an experiment taking as input trajectory, 3d pose and 3d bounding box. The findings show that the performance of this combination was similar to using only trajectories and 3d poses. This suggests that, on average, incorporating 3d bounding boxes does not provide additional information beyond what is already captured by 3d poses. Lastly, we assessed the model's performance using all accessible cues: trajectory, 3d and 2d poses, and 3d and 2d bounding boxes, and it yielded the best outcomes.

Qualitative results Figure <ref> provides a visual comparison between Social-Transmotion, which uses only trajectory inputs, and its pose-based counterpart. The inclusion of pose information helps the model predict when the agent changes its direction and avoid collisions with neighbors. For instance, in the right figure, adding pose enables the model to understand body rotation and collision avoidance simultaneously, resulting in a prediction closer to the ground truth. Predicting sudden turns presents a significant challenge for trajectory prediction models. However, the addition of pose information can help overcome this. As demonstrated in the middle figure, the pose-based model excels in scenarios involving sudden turns, leveraging pose to anticipate forthcoming changes in walking state, an aspect the conventional model fails to capture. We also provide some failure cases of the model in <Ref>.

§.§ Discussions What if we have imperfect input? In real-world situations, obtaining complete trajectories and body poses can be challenging due to obstructions or errors in pose estimation. Therefore, we conducted an experiment where we evaluated the model using randomly masked trajectories and pose keypoints in the observation. We compared the performance of the generic model (trained on all visual cues with masking) and the specific model (trained on trajectory and pose), as presented in <Ref>. The results demonstrate that our proposed generic model exhibits significantly greater robustness against both reduced quantities and lower quality of input data.
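As an illustration, here is a minimal sketch of the kind of random masking used in this robustness test. The keep ratios mirror the "50% T + 10% 3d P" notation used below; zeroing-out as the masking convention is an assumption of this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_cues(traj, pose, keep_traj=0.5, keep_pose=0.1):
    """Randomly keep a fraction of observed time-steps per cue, e.g. 50% T + 10% 3d P.
    Masked time-steps are zeroed out here; other conventions are possible."""
    t_keep = rng.random(traj.shape[0]) < keep_traj
    p_keep = rng.random(pose.shape[0]) < keep_pose
    return traj * t_keep[:, None], pose * p_keep[:, None, None]

traj = rng.normal(size=(9, 2))       # 9 observed 2d positions
pose = rng.normal(size=(9, 17, 3))   # 9 observed 3d poses (17 keypoints each)
masked_traj, masked_pose = mask_cues(traj, pose)
```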
By utilizing modality masking and meta-masking, our generic model reduces its reliance on a single modality and enhances robustness. For instance, when both models encounter challenging incomplete trajectory and pose input (50% T + 10% 3d P), the ADE/FDE drop of the generic model (-19.6% / -19.3%) is substantially smaller than that of the specific model (-232.6% / -188.4%). Additionally, the generic model proves to be more adept at handling noisy pose keypoints than the specific model.

What if we use different variations of 3d pose? Previously, we observed that the 3d pose-based model achieves the best performance. To delve deeper into the contribution of pose information in improving ADE and FDE, we conducted an ablation study on different variations of pose. To investigate the impact of neighboring poses, we assessed whether the performance boost resulted from the primary pedestrian's pose alone or whether interactions contributed. <Ref> shows that relying solely on the primary pedestrian's pose significantly improves performance over the purely trajectory-based Social-Transmotion. However, incorporating all pedestrian poses further enhances performance, underscoring the significance of considering pose interactions in trajectory prediction. Then, we utilized the last observed pose as the only visual cue for all agents in the scene. <Ref> shows that performance with just the last observed frame is similar to that with all observed frames, highlighting the importance of the last frame for trajectory prediction. Our investigation also extended to the exclusive use of head pose as the visual cue, i.e., all non-head pose keypoints were excluded. <Ref> demonstrates that the performance with only head pose is similar to the trajectory-only model. This suggests the importance of including other keypoints for improved model performance. In <Ref>, we provide the spatial and temporal attention maps for further investigation.

What if we use other architecture designs instead of CMT–ST? Our Social-Transmotion architecture employs two transformers: one for individual pedestrian feature extraction and another for capturing pedestrian interactions. Here, we conduct a comparative analysis of this dual-transformer setup against three alternative designs in <Ref>. In the MLP–ST design, we adopt a unified single-transformer model. Trajectory and pose information is extracted using a Multi-Layer Perceptron (MLP), and the resultant tokens representing various pedestrian features are aggregated. This allows for simultaneous attention to the diverse features of all pedestrians. The observed performance decline in <Ref> underscores the advantages of utilizing the CMT for extracting useful features. We also tested the impact of swapping the order of CMT and ST, involving the extraction of all pedestrians' features at a specific time-step, followed by the second transformer attending to all time-steps. <Ref> shows the resulting increase in errors. Our hypothesis is that the relationships between an individual's keypoints across different time-steps are more significant than the interactions among keypoints of multiple individuals within a specific time-step. The ST-first approach challenges the network by requiring it to extract useful information from numerous irrelevant connections. To assess the influence of social interaction modeling, we conducted an experiment where we removed the ST while retaining a CMT-only configuration. As outlined in Table <ref>, we observe a significant performance drop.
This underscores the effectiveness of the dual-transformer design.

§.§ Experiment on Pedestrians and Cyclists in Road Traffic Dataset <Ref> compares our model to the previous work <cit.> that used 3d body pose to predict human trajectories on this dataset. Here, the notations 'c' and 'd' represent two variations of their model using a continuous or discrete approach, respectively. The results indicate the effectiveness of our dual transformer and its proficiency in utilizing pose information owing to the masking strategy.

§.§ Experiment on the ETH-UCY dataset In the paper, we have presented our proposed generic model, highlighting its adaptability to various visual modalities. We have also conducted a comparative analysis with prior models using datasets that incorporate visual cues. In this section, our aim is to assess the model's performance on the widely recognized ETH-UCY dataset, which has only trajectory labels. The ETH-UCY dataset <cit.> is a real-world dataset that provides 2d pedestrian trajectory labels in bird's-eye view. It has five different subsets named ETH, Hotel, Univ, Zara1 and Zara2. Following established conventions in previous research, we employ the task of predicting 12 future time-steps based on 8 preceding time-steps, all observed at a frame rate of 2.5 fps. <Ref> illustrates the deterministic prediction performance of our model and previous works. Notably, our model shows commendable performance, particularly on the challenging ETH subset. This is attributed to the efficacy of our dual-transformer architecture, enabling our model to attain superior results on the ETH subset and competitive performance on the other subsets when using solely trajectory as input.

§ CONCLUSIONS In this work, we introduced Social-Transmotion, the first generic promptable Transformer-based model adept at managing diverse visual cues in varying quantities, thereby augmenting trajectory data for enhanced human trajectory prediction. Social-Transmotion, engineered for adaptability, highlights that with an efficient masking strategy and a powerful network, integrating visual cues is never harmful and, in most cases, helpful (a free win). By embracing the multi-modal aspects of human behavior, our approach pushes the limits of conventional trajectory prediction performance.

Limitations: While our generic model can work with any visual cue, we have examined a limited set of visual cues and noted instances in the appendix where they did not consistently enhance trajectory prediction performance. In the future, one can study the potential of alternative visual cues such as gaze direction, actions, and other attributes, taking into account their presence in datasets. Moreover, although our model demonstrates strong performance even without visual cues, it is important to note that we rely on estimation methods to derive these cues. An intriguing avenue for research involves benefiting directly from images through the development of efficient feature extraction networks. These networks could facilitate the transformation of images into optimized prompts, enabling the direct utilization of visual information.

§ APPENDIX §.§ Attention Maps To explore the impact of different keypoints/frames on the trajectory prediction task, we display the attention maps in <Ref>. The first map illustrates temporal attention, and the second map represents spatial attention. The attention weights assigned to earlier frames are comparatively lower, indicating that later frames contain more valuable information for trajectory prediction.
In simpler scenarios, the last observed frame may be sufficient, as demonstrated in our previous ablation study. However, in more complex scenarios, a larger number of observation frames may be required. We also observed that specific keypoints, such as the ankles, wrists, and knees, play a significant role in determining direction and movement. Generally, there is symmetry across different body points, with a slight tendency towards the right; we hypothesize that this may be attributable to data bias. These findings open up opportunities for further research, particularly in identifying a sparse set of essential keypoints that can offer advantages in specific applications. In addition, <Ref> depicts two examples involving turns. For the simpler scenario (<Ref>), a single frame capturing body rotation is adequate. Conversely, for the more complex situation (<Ref>), several frames prove to be more informative in facilitating accurate trajectory prediction.

§.§ Failure Cases We have also incorporated illustrative instances where the model's performance falls short. These examples serve as valuable insights, pinpointing potential avenues for enhancement. For instance, as portrayed in <Ref> and <Ref>, it becomes apparent that relying solely on poses may not always yield optimal outcomes. The integration of supplementary visual cues like gaze or the original scene image could potentially offer advantageous improvements.

§.§ Implementation Details Our model uses a dimension of 128 for both transformers: the Cross-Modality Transformer (CMT) comprises 6 layers with 4 heads, while the Social Transformer (ST) comprises 3 layers with 4 heads. The Adam optimizer <cit.> was employed, starting with a learning rate of 1e-4, which decreased by a factor of 0.1 after completing 80% of a total of 50 epochs. We implemented a 30% modality-mask and a 10% meta-mask. All training was executed on an NVIDIA V100 GPU with 32GB of memory.
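A minimal sketch of the optimizer and schedule stated above; the one-layer stand-in model and the loop structure are placeholders, not the actual training code:

```python
import torch

model = torch.nn.Linear(128, 2)   # stand-in for the full network
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
# Decay the learning rate by a factor of 0.1 after 80% of the 50 epochs.
sched = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[int(0.8 * 50)], gamma=0.1)

for epoch in range(50):
    # ... one training epoch with 30% modality-masking and 10% meta-masking ...
    sched.step()
```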
Traditional computer vision generally solves each single task independently by a dedicated model with the task instruction implicitly designed in the model architecture, giving rise to two limitations: (1) it leads to task-specific models, which require multiple models for different tasks and restrict the potential synergies from diverse tasks; (2) it leads to a pre-defined and fixed model interface that has limited interactivity and adaptability in following users' task instructions. To address them, Visual Instruction Tuning (VIT) has been intensively studied recently, which finetunes a large vision model with language as task instructions, aiming to learn, from a wide range of vision tasks described by language instructions, a general-purpose multimodal model that can follow arbitrary instructions and thus solve arbitrary tasks specified by the user. This work aims to provide a systematic review of visual instruction tuning, covering (1) the background that presents computer vision task paradigms and the development of VIT; (2) the foundations of VIT that introduce commonly used network architectures, visual instruction tuning frameworks and objectives, and evaluation setups and tasks; (3) the commonly used datasets in visual instruction tuning and evaluation; (4) the review of existing VIT methods that categorizes them with a taxonomy according to both the studied vision task and the method design and highlights their major contributions, strengths, and shortcomings; (5) the comparison and discussion of VIT methods over various instruction-following benchmarks; (6) several challenges, open directions and possible future works in visual instruction tuning research.

Visual instruction tuning, general-purpose multimodal model, general-purpose vision-language model, deep neural network, deep learning, computer vision, visual recognition, visual generation, visual assistant

Visual Instruction Tuning towards General-Purpose Multimodal Model: A Survey Jiaxing Huang^†, Jingyi Zhang^†, Kai Jiang, Han Qiu and Shijian Lu^* All authors are with the School of Computer Science and Engineering, Nanyang Technological University, Singapore. † denotes equal contribution; * denotes corresponding author. January 14, 2024 =========================================================================================================================================================================================================================================================

§ INTRODUCTION Computer vision has been a long-standing challenge in artificial intelligence, which aims to enable computers, machines or systems to perceive, analyze, comprehend and interact with the visual world like human beings <cit.>. With the development of deep neural networks <cit.>, computer vision research has achieved great successes in a spectrum of tasks, such as discriminative vision tasks (e.g., image classification and segmentation, object detection, etc.) and generative vision tasks (e.g., image generation, image editing, etc.). Nevertheless, in this line of research, each vision task is generally solved independently by a dedicated vision model, where the task instruction is implicitly considered and designed in the model architecture, such as segmentation heads for mask prediction, detection heads for box prediction, image captioning heads for descriptive text generation and image generation decoders for generating RGB images.
This gives rise to two inherent limitations: (1) it leads to vision models that are task-specific, which requires training and using multiple models for different tasks and restricts the potential synergies from diverse tasks; (2) it results in vision models that typically have a pre-defined and fixed interface, leading to limited interactivity and adaptability in following users' task instructions.

Recently, instruction tuning has demonstrated great effectiveness in fine-tuning large language models (LLMs) towards general-purpose LLMs. In instruction tuning, natural languages are used to explicitly represent various task instructions and guide the end-to-end trainable model to understand and switch to the task of interest. In this way, the model can be fine-tuned with a broad range of tasks described by natural language instructions, ultimately leading to a general-purpose model that can follow arbitrary instructions and solve arbitrary tasks specified by the user <cit.>. Inspired by the success in natural language processing, visual instruction tuning has been proposed, which fine-tunes large vision models with language as task instructions, aiming to build a general-purpose multimodal model (or general-purpose vision-language model). Specifically, visual instruction tuning constructs a universal interface that takes both visual and language inputs, where the language input works as the task instruction which guides the model to understand the task of interest, process the visual input accordingly and return the expected output. With this universal interface, the model can be fine-tuned with a wide range of vision tasks using visual instruction tuning data (i.e., triplets of data consisting of a visual input, a language instruction input and the corresponding output), resulting in a general-purpose multimodal model that accepts arbitrary language instruction inputs and visual inputs and can thus solve arbitrary vision tasks.
For example, given a natural image as the visual input, the output of the general-purpose multimodal model could be a detailed image description, a set of bounding boxes, or a modified image if the language instruction input asks to “describe the image”, “locate objects in the image”, or “modify the style of the image”.

The benefits of visual instruction tuning are threefold: (1) it constructs a universal vision task interface with language as task instructions, which allows the model to learn and solve a wide range of vision tasks, benefiting from the synergies from diverse tasks; (2) it enables the model to accept arbitrary task instructions from the user, ultimately forming an intelligent model with strong interactivity and adaptability in following the user's intent; (3) it is computationally efficient as it can leverage off-the-shelf pre-trained large vision models and large language models, combining and fine-tuning them to ultimately construct a general-purpose multimodal model.

Despite the significant interest in visual instruction tuning for constructing general-purpose multimodal models, as evidenced by the numerous recent publications illustrated in Figure <ref>, the research community lacks a systematic survey that can help comprehensively organize existing visual instruction tuning approaches, the current research challenges, and potential directions for future studies. We strive to address this void by conducting a comprehensive survey of visual instruction tuning studies over a diverse range of vision tasks, ranging from discriminative image tasks (e.g., image classification and segmentation) to generative image tasks (e.g., image generation and editing), complex image reasoning tasks (e.g., visual question answering and visual assistant), video tasks, medical vision tasks, 3D vision tasks, etc. The survey is performed from different perspectives, ranging from background to foundations, datasets, methodology, benchmarks, and current research challenges and open research directions. We hope this effort will offer a comprehensive overview of what accomplishments have been achieved, what challenges we currently face, and what could be further achieved in visual instruction tuning research.

We summarize the main contributions of this work in three aspects. First, it provides a systematic review of visual instruction tuning. We develop a taxonomy according to both the studied vision task and the method design, and highlight the major contributions, strengths, and shortcomings of existing visual instruction tuning methods. Unlike other literature reviews that primarily concentrate on the NLP field or delve into vision-language pre-training, our survey centers on the newly emerging research direction of visual instruction tuning, and systematically organizes the recent methods according to the investigated vision task and the instruction tuning design, offering a comprehensive overview of this promising research direction.
Second, it investigates and analyzes the up-to-date advancements of visual instruction tuning, comprising a thorough benchmarking and discussion of existing methods over various instruction-following evaluation datasets. Third, it identifies and discusses several challenges, along with potential directions for future studies in visual instruction tuning research.

The remaining sections of this work are organized as follows. Section <ref> introduces the task paradigms in computer vision, the development of visual instruction tuning and several relevant surveys. Section <ref> investigates the foundations of visual instruction tuning, encompassing commonly used network architectures, visual instruction tuning frameworks and objectives, and evaluation setups and tasks for instruction-tuned general-purpose multimodal models. Section <ref> provides an overview of widely adopted datasets in visual instruction tuning and the evaluation of instruction-tuned models. Section <ref> categorizes and reviews various visual instruction tuning methods.

§ BACKGROUND In this section, we present the development of the computer vision task paradigm and how it evolves from the “traditional task paradigm” towards the new “instruction-based task paradigm”. In addition, we also summarize the development of visual instruction tuning.

§.§ Task Paradigms for Computer Vision The development of the computer vision task paradigm can be roughly categorized into two stages: (1) the “traditional task paradigm” characterized by a pre-defined and fixed task interface, and (2) the “instruction-based task paradigm” featuring an interactive, adaptive and flexible instruction-following task interface. Subsequently, we delve into a detailed introduction, comparison, and analysis of these two task paradigms.

§.§.§ Traditional Task Paradigm for Computer Vision In the traditional computer vision task paradigm, each vision task is generally solved independently by a dedicated vision model, where the task instruction is implicitly considered and designed in the model architecture. Specifically, upon a feature extraction backbone like ResNet or ViT, the traditional paradigm generally achieves different vision tasks by designing various task-specific prediction heads, where each prediction head takes the extracted features as input and generates outputs in a pre-defined and fixed format for the given task. For example, semantic segmentation is generally achieved by a segmentation head that takes image features as input and returns a segmentation mask in a pre-defined format, i.e., Height×Width×Number of Categories. Object detection is typically accomplished through a detection head that predicts, based on the input image features, a set of bounding boxes in the pre-defined format {N, x1, y1, x2, y2}, where the first term denotes the number of predicted boxes and the last four terms stand for box coordinates. Image generation is commonly achieved via an image decoding head, which decodes image features into an image in the RGB format.
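To make these fixed interfaces concrete, here is a minimal PyTorch sketch of such task-specific heads; the layer shapes are illustrative, and real heads are considerably more elaborate:

```python
import torch
import torch.nn as nn

C, H, W, NUM_CLASSES, MAX_BOXES = 256, 64, 64, 21, 100
feats = torch.randn(1, C, H, W)   # backbone features (e.g., from a ResNet)

# Segmentation head: output format fixed to Number of Categories x Height x Width.
seg_head = nn.Conv2d(C, NUM_CLASSES, kernel_size=1)
seg_mask = seg_head(feats)                       # (1, NUM_CLASSES, H, W)

# Detection head: output format fixed to N boxes of (x1, y1, x2, y2).
pooled = feats.mean(dim=(2, 3))                  # (1, C) global feature
det_head = nn.Linear(C, MAX_BOXES * 4)
boxes = det_head(pooled).view(MAX_BOXES, 4)      # (N, 4)

print(seg_mask.shape, boxes.shape)
```

The point of the sketch is that each head bakes one task and one output format into the architecture, which is exactly the rigidity the instruction-based paradigm removes.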
In summary, the traditional computer vision task paradigm implicitly considers the task instruction in the model design. Therefore, this paradigm generally solves each vision task independently with a dedicated vision model, and most existing studies in this line of research consequently focus on developing effective model architectures for each of the various vision tasks respectively. As a result, the traditional computer vision task paradigm often suffers from two inherent limitations: (1) it leads to vision models that are task-specific, which requires training and using multiple models for different tasks and restricts the potential synergies from diverse tasks, and (2) it results in vision models that typically have a pre-defined and fixed task interface, leading to limited interactivity and adaptability in following users' task instructions, as shown in Figure <ref>.

§.§.§ Instruction-based Task Paradigm for Computer Vision Driven by the successes in natural language processing, a new instruction-based task paradigm has been proposed, which introduces visual instruction tuning that fine-tunes large vision models with language as task instructions, ultimately building a general-purpose multimodal model (or general-purpose vision-language model), as shown in Figure <ref>. Visual instruction tuning first constructs a universal interface that takes both visual and language inputs, where the language input works as the task instruction which guides the model to understand the task of interest, process the visual input accordingly and return the expected output. With such a universal interface, the model can learn a wide range of vision tasks described by natural language instructions, ultimately forming a general-purpose multimodal model that accepts arbitrary language instruction inputs and visual inputs and can thus solve arbitrary vision tasks. Compared with the traditional computer vision task paradigm that considers and designs the task instruction implicitly in the model architecture, this new paradigm explicitly represents various vision task instructions in natural language, enabling the model to understand and learn a wide range of vision tasks and ultimately to accept arbitrary language instruction inputs and visual inputs and solve arbitrary vision tasks.

§.§ Development of Visual Instruction Tuning Visual instruction tuning studies have made great progress since the pioneering work of LLaVA. We summarize the development of visual instruction tuning from three aspects: (1) Task instructions: from “unilingual instructions” to “multilingual instructions”. (2) Visual inputs: from “a single type of visual input” to “multiple types of visual input”. (3) Task difficulty: from simple to complex tasks.

§ VISUAL INSTRUCTION TUNING FOUNDATIONS Visual instruction tuning <cit.> aims to fine-tune large vision models with visual instruction-following data, targeting a general-purpose multimodal model (GPMM). The pipeline of visual instruction tuning generally consists of two stages, i.e., visual instruction-following data construction and visual instruction tuning, as illustrated in Figure <ref>. This section introduces the foundations of visual instruction tuning, including common ways of constructing visual instruction-following data, network architectures for encoding image and text data, the visual instruction tuning framework and objective, and downstream tasks for evaluation.
§.§ Visual Instruction-Following Data Construction Visual instruction-following data typically take the form of triplets {X_q, X_v, X_a}, where X_q denotes instruction questions, X_v denotes the input image and text pair (i.e., X_v = {X_image, X_text}) and X_a denotes the response following the given instruction. Visual instruction-following data are generally expanded from public multimodal data, such as image-text pairs <cit.>, augmented via the application of large language models <cit.>. Specifically, given an image and its associated text {X_image, X_text}, several questions are created aimed at guiding the model to describe the image's content, as illustrated in Figure <ref>. The accumulation of such instructions is generally achieved through two primary methods: first, through manual composition <cit.>; and second, by employing large language models to generate instructions based on a set of initial seed prompts <cit.>. Then, the created instructions are fed to LLMs with the image-text pair to obtain the visual instruction-following data: Human: X_q X_v, Assistant: X_a.

To enhance the diversity and improve the quality of both instructions and responses, recent studies have focused on two strategies: firstly, integrating additional contextual information, such as location data and bounding boxes, to facilitate detailed image comprehension; secondly, designing multiple types of instruction-following data, such as single-turn descriptions and multi-turn conversations. Specifically, single-turn descriptions are typically generated by prompting large language models (LLMs) with a series of questions, as illustrated in Figure <ref> (a). Different from single-turn descriptions, multi-turn conversations require the Human to keep asking questions about the given image, such as object categories, object locations and object actions, while the Assistant answers the questions over several iterations, as in Figure <ref> (b); fine-tuning the model with multi-turn conversations largely equips it with strong chat capability.

§.§ Network Architectures Visual instruction tuning utilizes a multimodal model to extract features from the image and text components of visual instruction-following data. This model generally includes a vision encoder and a large language model as its core components. This section introduces the deep neural networks that are commonly employed in the field of visual instruction tuning.

§.§.§ Architectures for Vision Learning Transformers have gained considerable attention in vision learning due to their effectiveness and versatility. The Vision Transformer (ViT) is commonly employed for image feature extraction, employing a sequence of Transformer blocks, each consisting of a multi-head self-attention layer and a feed-forward network. In practical application, different pre-trained versions of ViT are utilized. For instance, CLIP-pre-trained ViT is used for broad image understanding <cit.>, while SAM-pre-trained ViT is favored for more detailed, fine-grained image analysis <cit.>. In video feature learning, ViT is extended with additional temporal encoders to effectively model time-related information. For example, Valley <cit.> introduces a temporal modeling component to capture the dynamic aspects of input videos. For 3D image feature learning, as in the case of PointCloud data, specialized models like Point-BERT <cit.> and PointNet <cit.> are employed.
These models are designed to effectively extract features from PointCloud data, facilitating a deeper understanding of 3D spaces.

§.§.§ Architectures for Language Learning For text feature learning, transformer-based large language models (LLMs) are prevalent. Specifically, the Transformer <cit.> adopts an encoder-decoder architecture. The encoder comprises 6 blocks, each incorporating a multi-head self-attention layer and a multi-layer perceptron (MLP). Similarly, the decoder consists of 6 blocks, each including a multi-head attention layer, a masked multi-head attention layer, and an MLP. Building upon the standard Transformer architecture, LLaMA <cit.> has emerged as a prominent choice for text feature extraction due to its proficiency across a range of language tasks. Based on LLaMA <cit.>, several instruction-tuned LLMs, such as Vicuna <cit.> and Guanaco <cit.>, are also leveraged for extracting text features.

§.§.§ Architectures for Audio Learning For extracting audio features, transformer-based architectures have been adopted. For example, Whisper <cit.>, a general-purpose speech recognition model, has been adopted for learning audio features.

§.§ Visual Instruction Tuning Framework The widely adopted framework for visual instruction tuning is illustrated in Figure <ref>; it generally consists of a vision encoder, a large language model (LLM) and an adapter. In this framework, the vision encoder is adopted for extracting features from images. The adapter then serves as a bridge, translating these image features into the word embedding space, thereby facilitating the LLM's interpretation of the vision encoder's outputs. The adapter is often designed to be lightweight and cost-effective, such as a few linear layers <cit.>, to ensure efficient multimodal integration. Subsequently, the LLM processes the combined text and image embeddings to generate the expected language response.

§.§ Visual Instruction Tuning Objective Given the constructed visual instruction-following data, the multimodal model is fine-tuned in a fully supervised manner. Specifically, the multimodal model is trained to predict each token of the output sequentially, conditioned on the instruction and the input image.

§.§ Evaluation Setups and Tasks In this section, we present commonly used setups and tasks in general-purpose multimodal model evaluation. The setups include human evaluation, GPT-4 evaluation and traditional quantitative evaluation; the tasks used for traditional quantitative evaluation include discriminative tasks (e.g., image classification, object detection), generative tasks (e.g., image generation) and complex image reasoning tasks (e.g., VQA, image captioning and visual assistant).

§.§.§ Human Evaluation Since the objective of visual instruction tuning is to enhance the capability of multimodal models to understand human instructions effectively and accurately, human evaluation is vital for assessing the tuned multimodal models, especially for tasks that require a high level of understanding and cannot be easily quantified by traditional metrics. Specifically, human evaluation enables assessing the tuned model from various aspects, such as relevance (whether the model's response is relevant to the given instruction), coherence (whether the text is logically consistent and well-organized) and fluency (whether the generated response is natural and correctly follows grammatical rules).

§.§.§ GPT-4 Evaluation Although human evaluation is beneficial and helpful, it is time-consuming and costly.
Inspired by the strong capability of GPT-4 <cit.> in understanding human instructions, some studies adopt GPT-4 as an alternative for measuring the quality of the model's generated responses. Specifically, GPT-4 evaluates the model from various aspects, including helpfulness, relevance, accuracy, and level of detail, and then assigns an overall score ranging from 1 to 10, where higher scores reflect better performance. In addition, GPT-4 is required to give a detailed explanation for its evaluations, enabling a better understanding of the capability of the tuned model.

§.§.§ Quantitative Metric Evaluation In addition to human and GPT-4 evaluation, various downstream tasks are adopted for quantitative evaluation, including discriminative tasks, generative tasks and complex image reasoning tasks. For evaluating the discrimination capability of the model, several image recognition tasks are adopted, such as image classification <cit.>, object detection <cit.>, segmentation <cit.> and visual grounding <cit.>. For evaluating the model's capability in generating images or videos, image generation <cit.>, pointcloud generation <cit.> and video generation <cit.> tasks are adopted. Besides, various tasks including visual question answering <cit.> and image captioning <cit.> are leveraged for assessing the model's capability in complex image reasoning. Recently, several visual assistant benchmarks <cit.> have been proposed for comprehensively assessing instruction-tuned models. For example, MMBench <cit.> is designed for robustly and accurately evaluating the various abilities of multimodal models by assessing the model from 20 different aspects, such as logical reasoning and fine-grained perception. SeedBench <cit.> enables comprehensive assessment by incorporating 12 evaluation tasks spanning from image understanding to video comprehension.

§ DATASETS This section summarizes the widely adopted datasets for visual instruction tuning and evaluations.

§.§ Datasets for Visual Instruction Tuning For visual instruction tuning, multiple multimodal instruction-following datasets have been collected. According to the data type, instruction-following datasets can be categorized into single-turn and multi-turn datasets, as detailed in Table <ref>.

§.§.§ Single-turn
* MiniGPT-4 <cit.> curates an image description dataset that contains 3439 image-text pairs for instruction fine-tuning. MiniGPT-4 randomly selects 5000 images from the Conceptual Caption dataset <cit.> and prompts its pre-trained VLM model to generate detailed descriptions for each image. The generated descriptions are then refined and filtered both manually and by using ChatGPT, resulting in 3439 high-quality image-text pairs.
* Clotho-Detail <cit.> is an audio-text instruction dataset that contains 3938 audio-text pairs with an average description length of 52.7 words. Clotho-Detail is extended from Clotho <cit.> by using GPT-4 to aggregate its original short captions into long descriptions.
* VGGSS-Instruction <cit.> is an image-audio-text triple-modality instruction dataset. It adopts a group of fixed templates to wrap the original labels of VGGSS <cit.> into descriptions. The dataset contains 5158 image-audio-text pairs where the audio is only related to a certain region in the image.
* DetGPT <cit.> curates an instruction tuning dataset for reasoning-based object detection.
Using short captions and category names of existing objects for each image as prompts, DetGPT uses ChatGPT to generate a long description, as well as several query-answer pairs, for each image. The resulting instruction tuning dataset contains 5000 images and around 30000 query-answer pairs.
* MultiInstruct <cit.> builds a comprehensive instruction dataset that covers 62 diverse multimodal tasks from 10 broad categories, such as VQA, image-text matching, grounded generation, and so on. These tasks include 34 existing tasks derived from 21 public datasets and 28 new tasks extended from them. Each task is equipped with 5 instruction templates to prompt the model to perform the specific task.
* Shikra-RD <cit.> is an instruction-tuning dataset for the task of referential dialogue, which contains 5922 question-answer pairs. It resorts to GPT-4 to generate referential question-answer pairs based on the bounding box and description annotations of the Flickr30K dataset, where object coordinates may appear in both the questions and the answers for referential region understanding.
* MGVLID <cit.> is a multi-grained vision-language instruction-following dataset, involving both image-level and region-level instruction data. For the image-level instruction data, MGVLID collects commonly used question-answering, image captioning, and object detection datasets, and converts their annotations into a unified instruction format. For the region-level instruction data, MGVLID uses various instruction templates to refine region-text pairs, collected from existing region-level tasks such as object detection and OCR, into question-answer pairs.
* AS-1B <cit.> is a large region-text dataset that contains 1.2 billion region-text pairs extracted from 11 million images. Each region is annotated with a semantic tag, several question-answer pairs, and a detailed caption for comprehensive description, resulting in a total of 3.5 million distinct semantic tags for the entire dataset.
* MM-IT <cit.> is a multimodal instruction-tuning dataset that contains 60k manually annotated data and 150k synthetic data for diverse modalities including image, video, and audio.
* LRV-Instruction <cit.> is a large-scale robust visual-instruction dataset that contains 400K instructions generated by GPT-4, involving 16 vision-language tasks. In addition to positive question-answer pairs, LRV-Instruction introduces negative instructions, which may involve manipulations or incorrect content, to improve the robustness of LLMs.
* VisIT-Bench <cit.> is a visual instruction benchmark that contains 592 test instances, covering tasks from basic recognition to game playing and creative generation.
* T2M <cit.> is a text-to-multimodal instruction dataset that contains 14.7k instances. The target is to generate corresponding multimodal content given text captions.
* ChiMed-VL-Instruction <cit.> is a Chinese medical vision-language instruction dataset that contains 479k question-answer pairs.
* Valley-Instruct-73k <cit.> is a video instruction dataset that contains 73k instruction data, including 37k conversation pairs, 26k reasoning QA pairs and 10k description pairs.
* MACAW-LLM <cit.> is a multimodal instruction dataset that consists of 69K image instruction pairs generated from COCO image captions <cit.> and 50K video instruction pairs generated from Charades <cit.> and AVSD <cit.>.
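To summarize the single-turn format shared by the datasets above, a minimal illustrative sample is shown below; the field names, file name and wording are hypothetical, as the actual serialization varies per dataset:

```python
# One single-turn sample in the {instruction question, visual input, response} form.
sample = {
    "instruction": "Describe the image in detail.",
    "input": {"image": "COCO_train2014_000000123456.jpg"},  # hypothetical file
    "response": "A man in a red jacket rides a bicycle down a busy street ...",
}
```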
§.§.§ Multi-turn
* LLaVA-Instruct-158k <cit.> contains 158k image-text instruction data, including 58k conversation data asking about the visual content of the image, 23k description data, and 77k complex reasoning data where the question may involve a multi-step reasoning process.
* GPT4RoI <cit.> converts Visual Genome region caption annotations <cit.>, RefCOCOg <cit.>, Flickr30k <cit.>, and Visual Commonsense Reasoning <cit.> into instruction data for single/multiple region understanding, and leverages LLaVA-Instruct-158k <cit.> supplemented with bounding box annotations to improve the capability of multi-round conversation.
* MultiModal-GPT <cit.> employs a unified instruction template to construct instruction data for both language-only data such as Dolly 15k and Alpaca GPT4 <cit.> and language-vision data including LLaVA <cit.>, Mini-GPT4 <cit.>, A-OKVQA <cit.>, COCO Caption <cit.>, and OCR VQA <cit.>.
* MIMIC-IT <cit.> is an instruction dataset that contains 2.8 million multimodal instruction-response pairs for language, image, and video understanding. It contains 502k video clips and 8.1 million images, supporting eight languages including English, Chinese, Spanish, Japanese, French, German, Korean, and Arabic.
* SVIT <cit.> is a large instruction dataset that contains 4.2 million visual instruction data. It comprises 1.6 million conversation QA pairs, 1.6 million complex reasoning QA pairs, 1.0 million referring QA pairs, and 106k image description data, supporting comprehensive capabilities of visual understanding and reasoning.
* PF-1M <cit.> contains 975k instruction-response data. It collects 37 image captioning and VQA datasets, then uses its pre-trained Polite Flamingo <cit.> to rewrite their original annotations into a unified instruction-answer format, and cleans the data with both rule-based and model-based filters, obtaining 975k high-quality instruction data.
* ILuvUI <cit.> is an instruction dataset for UI tasks, i.e., UI element detection or multi-step UI navigation and planning. It contains 224K conversations, 32K concise description data, 32k detailed description data, 32k logical reasoning data, 32k potential actions, and 1k UI transition data.
* StableLLaVA <cit.> is a synthetic image-dialogue dataset. It uses ChatGPT to generate image prompts, then cooperates with StableDiffusion <cit.> to generate the corresponding images, and additionally employs ChatGPT to generate descriptions based on the same image prompts, resulting in 126K image-dialogue pairs.
* X-LLM <cit.> constructs a multimodal instruction dataset including about 10k samples that are selected and transformed from MiniGPT-4 <cit.>, AISHELL-2 <cit.>, VSDial-CN, and ActivityNet Caps <cit.>.
* GPT4Tools <cit.> curates an instruction dataset to enable LLMs to use multimodal tools. It contains 71.4K instruction-following data involving 23 tools for image generation and image understanding as the training set, 1170 data which share the same tools as the training data as the validation set, and 652 samples involving 8 new tools as the test set.
* LLaVAR <cit.> constructs 16K multi-turn conversation data for text-rich image understanding by prompting GPT-4 with OCR data and image captions.
* PVIT <cit.> builds an image-region-language instruction dataset.
It contains 146k single-turn instruction data converted from VQA datasets, 86k instruction data for five specific tasks (i.e., small object recognition, same-category object discrimination, object-relationship-based reasoning, object-attribute-based reasoning, and optical character recognition) on object understanding, and 22k general instruction data generated by prompting ChatGPT with image descriptions and in-context examples.
* SparklesDialogue <cit.> is an instruction dataset for conversations involving multiple images. It comprises two parts, SparklesDialogueCC and SparklesDialogueVG. SparklesDialogueCC, generated based on Conceptual Captions <cit.>, contains 4.5k dialogues, each of which consists of at least two images and two rounds of conversation. SparklesDialogueVG is built from Visual Genome <cit.> and includes 2k dialogues, each of which contains at least three images across two turns.
* GRIT <cit.> is a large instruction dataset for referring and grounding tasks. It contains 1.1 million instruction data for image reasoning and understanding, which are converted from public datasets or generated via ChatGPT and GPT-4, and 130k negative data to improve model robustness and reduce object hallucination.
* VIGC-LLaVA <cit.> is an instruction dataset autonomously generated by VLLMs through the Visual Instruction Generation and Correction (VIGC) framework <cit.>. It contains 36.7k instruction data generated from the COCO dataset <cit.> and 1.8 million instruction data from Objects365 <cit.>.
* M^3IT <cit.> is a multimodal, multilingual instruction tuning dataset that contains 2.4 million instances. It involves 40 visual-language tasks and 400 manually written instruction templates, with seven tasks translated into 80 languages.
* LLaVA-Med <cit.> curates a biomedical instruction dataset by prompting GPT-4 to generate multi-round conversations. It contains 60,000 image-text pairs covering 5 medical image modalities, including CXR (chest X-ray), CT (computed tomography), MRI (magnetic resonance imaging), histopathology, and gross (i.e., macroscopic) pathology.
* Mosit <cit.> is a modality-switching instruction tuning dataset that supports complex multimodal inputs and outputs for multi-round instruction conversation. Each conversation in Mosit consists of 3-7 rounds (question-answer pairs), where multimodal content (text, image, audio, and video) may appear in either the question or the answer. It contains a total of 5k dialogues.
* PointLLM <cit.> constructs a large point-text instruction dataset that contains 660k description data and 70k complex instructions. It leverages GPT-4 to convert the 3D object captioning dataset Cap3D <cit.> into an instruction-following dataset.
* TEXTBIND <cit.> curates an instruction dataset containing 25.6k conversations for image understanding, which is achieved by applying its proposed TEXTBIND, an annotation-free framework for improving the multi-turn instruction-following capability of LLMs, to GPT-4 and the CC3M <cit.> dataset.
* MULTIS <cit.> is a multimodal instruction-tuning dataset that contains 4.4 million task-specific samples that are converted from public question-answering and captioning datasets using ChatGPT, and 209k multimodal chat samples involving conversations, descriptions and complex reasoning on multiple modalities including text, image, audio, and video.
* LAMM <cit.> includes 186k text-image instruction pairs and 10k text-pointcloud instruction pairs.
It contains four types of instruction data, including daily conversation, factual knowledge dialogue for knowledge and content reasoning, detailed description, and visual task dialogues. The task dialogues involve most vision tasks for both 2D and 3D vision, such as captioning, scene graph recognition, classification, detection, counting and OCR.
* VideoChat <cit.> is a video-centric instruction dataset built from WebVid-10M <cit.> using ChatGPT. It contains 7K video descriptions and 4k video conversations.
* Video-ChatGPT <cit.> is a video-based instruction dataset containing 100k video-instruction pairs annotated by human annotators, off-the-shelf models, and GPT-3.5. It covers various data types such as detailed descriptions, summarizations, question-answer pairs, conversations, etc.
* OphGLM <cit.> is an ophthalmic instruction dataset comprising 20k dialogs related to ophthalmic diseases.

§.§ Datasets for Instruction-tuned Model Evaluation With visual instruction tuning, we can build general-purpose multimodal models that can solve various vision tasks according to users' instructions. Various datasets have been adopted in instruction-tuned model evaluations, including datasets for discriminative image tasks (e.g., image classification <cit.>, object detection <cit.>, image segmentation <cit.>, visual grounding <cit.>), generative image tasks (e.g., image generation <cit.>), complex image reasoning tasks (e.g., visual question answering <cit.>, image captioning <cit.>, visual assistant <cit.>), video tasks (e.g., video generation <cit.>, video captioning <cit.>, video VQA <cit.>), medical vision tasks (e.g., medical VQA <cit.>, medical classification <cit.>, medical segmentation <cit.>), document vision tasks (e.g., document VQA <cit.>) and 3D vision tasks (e.g., pointcloud classification <cit.>, pointcloud generation <cit.>, pointcloud VQA <cit.>, pointcloud detection <cit.>).

§ VISUAL INSTRUCTION TUNING Visual instruction tuning towards general-purpose multimodal models has been explored for various vision tasks, including discriminative tasks, generative tasks, complex image reasoning tasks, video tasks, medical vision tasks, document vision tasks, and 3D vision tasks, as illustrated in Table <ref>. This section reviews them according to the above-mentioned tasks, as listed in Tables <ref> and <ref>.

§.§ Instruction-based Image Learning for Discriminative Tasks Instruction-based image learning for discriminative tasks has been widely explored for general-purpose multimodal models, with methods that construct instruction datasets and tuning schemes for learning discriminative multimodal features.

§.§.§ Image Classification In this task, visual instruction tuning <cit.> aims to learn multimodal category information for image classification via specifically designed instruction tuning methods and datasets. For example, Instruction-ViT introduces instruction tuning into the vision transformer (ViT) by employing and fusing multimodal prompts (in text and images) that carry class-related information to guide model fine-tuning, as shown in Figure <ref>.
Specifically, Instruction-ViT leverages the self-attention mechanisms of the transformer to combine the multimodal prompts and the input image. It then uses a learnable [CLS] token to represent global image features and a series of prompt tokens to represent prompt features to complete the downstream classification task, where the similarity between the [CLS] token and the prompt tokens is utilized to guide model fine-tuning. This instruction tuning method of fusing multimodal prompts improves accuracy and domain adaptation ability for image classification networks.

§.§.§ Image Segmentation Image segmentation aims to partition a digital image into multiple segments or regions to simplify or change the representation of an image into something that is more semantic and easier to analyze. In general-purpose multimodal models with visual instruction tuning, image segmentation involves using multimodal instructions and expressions to guide the model to reason about and comprehend users' intents, segmenting regions in images. The Large Language Instructed Segmentation Assistant (LISA) <cit.> first proposed a new “reasoning segmentation” task, which aims to generate a segmentation prediction according to a free-form query text that involves complex reasoning. Unlike the vanilla referring segmentation task, query texts in reasoning segmentation are more intricate and may involve complex vision and language reasoning or world knowledge. This task requires the model to possess the ability to reason about the user-specified text queries and the image jointly and produce the expected segmentation predictions. As shown in Figure <ref>, LISA is a multimodal Large Language Model (LLM) designed to produce segmentation masks based on complex and implicit query texts. LISA incorporates a new token, represented as <SEG>, to signify the request for the segmentation output. Using the embedding-as-mask paradigm, LISA is empowered with segmentation abilities and gains advantages through end-to-end training. Thus, the model can handle various scenarios, such as complex reasoning, explanatory answers, and multi-round conversations. In addition, LISA has demonstrated strong zero-shot segmentation ability when trained exclusively with reasoning-free data and can be further enhanced via fine-tuning over reasoning segmentation image-instruction pairs.

§.§.§ Object Detection Object detection aims to identify and locate the objects in a given image or video frame. In general-purpose multimodal models with visual instruction tuning, object detection involves using visual instructions to guide the model in identifying and localizing objects within an image. In VisionLLM <cit.>, object detection is one of the vision-centric tasks that the framework is designed to address. It leverages LLMs to handle object detection in an instruction-based way that is open-ended and customizable, allowing for the flexible definition and management of object detection tasks using language instructions. As shown in Figure <ref>, VisionLLM consists of 3 core designs. The first is the language instructions that unify a diverse range of vision tasks and enable flexible task configuration. The second is the Instruction-Aware Image Tokenizer that extracts the required visual information according to the provided language instructions for effective comprehension and parsing of the visual input. The third is the LLM-based open-task decoder.
It takes as inputs the extracted visual embeddings and language instruction embeddings and generates the expected results for various vision tasks. VisionLLM enables instruction-based task configuration, such as fine-grained object detection and coarse-grained object detection, and achieves an mAP of over 60% on the COCO dataset, which places it on par with detection-specific models.

DetGPT <cit.> introduced a new paradigm for object detection called reasoning-based object detection, which enables the system to reason about users' task instructions and visual inputs jointly, allowing it to understand and follow users' intents and conduct object detection accordingly, even if the user's task instruction does not explicitly mention the object. This paradigm aims to address the limitations of conventional object detection systems by allowing users to express their intents in natural language, with the model reasoning about users' intents and detecting the objects of interest. DetGPT involves a two-stage approach for reasoning-based object detection. In the first stage, a multimodal model is used for comprehending the input image, which predicts the related object descriptions that fit the detection instructions specified by users. In the second stage, based on the predicted object descriptions, an open-vocabulary detector is then employed to generate the detection predictions. As shown in Figure <ref>, DetGPT consists of an image encoder for visual feature extraction, and a cross-modal mapping module that maps visual features to the aligned image-text feature space. Additionally, it employs a pre-trained large language model to comprehend and reason about the visual features and the language instructions jointly, ultimately determining which of the objects could fulfill users' instructions. The open-vocabulary object detector then locates the target objects among the results from the multimodal model.

Shikra <cit.> focuses on addressing the absence of natural referential ability in current Multimodal Large Language Models (MLLMs) by introducing a unified model capable of handling inputs and outputs of spatial coordinates in natural language form. Shikra aims to enable referential dialogue, which is an essential component of everyday human communication and possesses extensive practical applications. It is designed to handle tasks related to spatial coordinates, such as REC, PointQA, VQA, and image captioning, without the need for extra vocabularies, position encoders, or external plug-in models. Shikra's architecture comprises a vision encoder, an alignment layer, and a Large Language Model (LLM). It uses a pre-trained Vision Transformer as the visual encoder, an alignment layer to align visual and language information, and a large language model to process natural language inputs and generate responses. The design is intentionally simple, without the need for additional vocabularies, position encoders, or external plug-in models.

ChatSpot <cit.> proposes precise referring instruction tuning, which aims to enable multimodal large language models (MLLMs) to support fine-grained interaction. It focuses on utilizing a diverse range of prompts, like points and bounding boxes, as location prompts to indicate the specific regions of interest (RoIs) in images. Precise referring instruction tuning improves the flexibility and user-friendliness of the interaction with MLLMs, particularly in the context of vision-language tasks.
As illustrated in Figure <ref>, the proposed unified end-to-end multimodal large language model, ChatSpot, comprises three main components: an image encoder, a decoder-only large language model (LLM), and a modality alignment block. The image encoder processes visual inputs, while the LLM handles language understanding and generation. The modality alignment block aligns visual tokens with the language semantic space, enabling seamless integration of vision and language modalities for diverse forms of interaction, including mouse-clicking, drawing boxes, and native language input. ChatSpot exhibits promising performance on a series of designed evaluation tasks.

The All-Seeing (AS) Project <cit.> contributes a large-scale dataset, named AS-1B, for open-world panoptic visual perception, as well as the All-Seeing Model, a universal vision-language model capable of recognizing and understanding context in arbitrary regions. As shown in Figure <ref>, the All-Seeing Model (ASM) consists of two modules: a position-aware image tokenizer and an LLM-based decoder. The first module encodes the image conditioned on location information represented as bounding boxes, masks, and points, which endows ASM with location awareness. As the second module inherits world knowledge and reasoning ability from pre-trained LLMs, it provides a robust foundation for visual perception. Additionally, ASM designs a special prompt that enables the model to switch between generative and discriminative vision tasks as needed. The ASM model demonstrates remarkable zero-shot performance in various vision and language tasks, including regional retrieval, recognition, captioning, and question-answering, and is evaluated on representative vision and vision-language tasks.

PVIT <cit.> introduces Position-enhanced Visual Instruction Tuning (PVIT), which extends the capabilities of Multimodal Large Language Models (MLLMs) by integrating an additional region-level vision encoder. The proposed method also includes a region-level instruction data construction scheme and an evaluation dataset to facilitate the training and evaluation of PVIT. The model architecture of PVIT is illustrated in Figure <ref>; it consists of three primary components: a vision encoder, a region encoder, and a large language model (LLM). The model processes an input image together with instructions containing embedded regions and generates corresponding responses. The region encoder is responsible for extracting region-level features from the image and regions, which are then integrated into the large language model for fine-grained multimodal instruction tuning. The two-stage training strategy of PVIT involves an initial stage in which a linear projection layer is trained to align region features with the embedding space of the LLM. In the second stage, the model is fine-tuned with region-level instruction data to adapt to complex fine-grained instructions. This approach allows the model to first learn to understand region features and then enhance its capability to follow instructions that contain regions.

§.§.§ Visual Grounding

Object detection involves identifying and locating objects within an image by classifying and localizing them.
In contrast, visual grounding goes a step further by linking specific regions or objects within an image to textual descriptions, requiring comprehension of the context and semantics of both the visual scene and the associated language. Visual instruction tuning for visual grounding aims to enable the system to understand finer-grained context, attributes, and the relationships between objects as described in the text, effectively bridging the gap between visual perception and linguistic representation.

BuboGPT <cit.>, a multimodal language model with visual grounding capabilities, enables fine-grained understanding of visual objects and other modalities. It proposes an off-the-shelf visual grounding pipeline and a two-stage training scheme for joint multimodal understanding. Additionally, the paper constructs a high-quality multimodal instruction-tuning dataset, facilitating the model's ability to recognize and respond to arbitrary combinations of input modalities. As shown in Figure <ref>, the model architecture of BuboGPT consists of a multimodal language model that integrates visual grounding capabilities. It employs a visual grounding pipeline with tagging, grounding, and entity-matching modules to establish fine-grained relations between visual objects and other modalities. Additionally, BuboGPT uses a two-stage training scheme to align vision and audio features with language and performs multimodal instruction tuning on a high-quality dataset to enable joint multimodal understanding.

Ferret <cit.> introduces a Multimodal Large Language Model (MLLM) that can understand spatial referring and accurately ground open-vocabulary descriptions within an image. Ferret proposes a novel hybrid region representation that combines discrete coordinates with continuous visual features to refer to regions of various shapes and formats within an image. This representation allows Ferret to flexibly handle inputs that mix referred regions with free-form texts and to accurately ground the mentioned objects in its outputs. As shown in Figure <ref>, the model architecture of Ferret consists of an image encoder to extract image embeddings, a spatial-aware visual sampler to extract regional continuous features, and a Large Language Model (LLM) to jointly model image, text, and region features. This architecture enables Ferret to process diverse region inputs, such as points, bounding boxes, and free-form shapes, and to accurately ground open-vocabulary descriptions. Ferret demonstrates superior performance in various tasks and reduces object hallucination.

GLaMM <cit.> introduces a new task called Grounded Conversation Generation (GCG), which aims to generate natural language responses seamlessly integrated with object segmentation masks.
It requires generating image descriptions with phrases or words linked to the corresponding segmentation masks, thereby bridging the gap between textual and visual understanding. Moreover, the paper proposes GLaMM, the first model able to generate natural language responses interleaved with segmentation masks. As shown in Figure <ref>, GLaMM consists of five core components: a global image encoder, a regional image encoder, a large language model (LLM), a grounding image encoder, and a pixel decoder. These modules enable the model to accept text and visual inputs, interact at multiple levels of granularity, and generate grounded textual outputs accordingly. In summary, this architecture enables image-level, region-level, and pixel-level understanding and perception. GLaMM demonstrates superior performance on its newly created Grounding-anything Dataset (GranD) and the designed evaluation protocol.

§.§ Instruction-based Image Learning for Generative Tasks

Instruction-based learning for generative tasks in multimodal models has gained significant attention. This approach involves constructing high-quality instruction-following datasets and designing instruction-tuning methods to enhance large language models. These models acquire multi-turn, interleaved multimodal instruction-following capabilities, enabling them to perform advanced multimodal tasks, including image generation and editing.

§.§.§ Image Generation

GPT4Tools <cit.> enables open-source language models to effectively use multimodal tools. It constructs a tool-related instruction dataset from advanced language models and utilizes Low-Rank Adaptation (LoRA) optimization to enhance the language models' tool-usage capabilities (a minimal sketch of the LoRA mechanism is given at the end of this subsection). Additionally, it proposes a benchmark to evaluate the accuracy of language models in using tools, demonstrating significant improvements in tool usage across various visual tasks. As shown in Figure <ref>, the GPT4Tools framework involves constructing a tool-related instruction dataset by prompting an advanced language model with various multimodal contexts. This dataset is then used to fine-tune open-source language models using LoRA optimization, enabling them to effectively use tools for visual tasks such as comprehension and image generation. Additionally, the framework includes a benchmark to evaluate the language models' ability to use tools, showcasing significant improvements in tool-usage accuracy.

TextBind <cit.> enhances large language models with multi-turn interleaved multimodal instruction-following capabilities. It significantly reduces the need for high-quality exemplar data, making the approach more accessible and scalable for real-world tasks. The proposed model, MIM, trained on TextBind, outperforms recent baselines in open-world multimodal conversations, demonstrating remarkable performance in textual response generation, image generation, and overall multimodal instruction-following. As shown in Figure <ref>, MIM seamlessly integrates image encoder and decoder models to accommodate interleaved image-text inputs and outputs. It supplements large language models with visual input and output modules, enabling the model to process multi-turn interleaved multimodal instructions and generate coherent responses. The architecture is trained in two stages, focusing on aligning the feature spaces of vision and language models and further improving instruction-following capabilities.
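Since several of the methods above, GPT4Tools in particular, rely on LoRA to adapt a frozen language model cheaply, a minimal sketch of the mechanism may be helpful. This is an illustrative PyTorch implementation, not GPT4Tools' actual code; the rank, scaling, and layer dimensions are placeholder choices.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    # Wraps a frozen pretrained linear layer with a trainable low-rank
    # update: y = W x + (alpha / r) * B A x. B is zero-initialized so
    # training starts from the pretrained behaviour.
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

# Example: wrap one projection of a language model; during tool-related
# instruction tuning, only the tiny adapter parameters receive gradients.
layer = LoRALinear(nn.Linear(4096, 4096))
print(layer(torch.randn(2, 4096)).shape)  # torch.Size([2, 4096])

Because only the low-rank factors are trained, the number of updated parameters is a small fraction of the full model, which is what makes this style of tool-usage tuning affordable for open-source models.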
§.§.§ Image Editing

LLaVA-Interactive makes significant contributions to the field of multimodal human-AI interaction by providing a cost-efficient and versatile system for multi-turn dialogues with human users. It combines visual and language prompts, enabling sophisticated multimodal tasks such as image editing, segmentation, and generation. Additionally, LLaVA-Interactive addresses technical challenges in system development and demonstrates its capabilities across a wide range of real-world application scenarios, showcasing its potential for performing new, complex tasks in various domains. As shown in Figure <ref>, the workflow of LLaVA-Interactive involves several key steps for visual creation. It begins with image input, where users can upload an image or generate one by providing a language caption and drawing bounding boxes to establish the spatial arrangement of objects. Users can then engage in visual chat, interactive segmentation, and grounded editing to iteratively refine their visual creations. This multi-turn interaction allows users to ask questions, create object masks, place new objects in the image, and make adjustments to achieve their intended visual outcomes.

§.§ Instruction-based Image Learning for Complex Reasoning Tasks

§.§.§ Image Captioning

Image captioning involves training models to understand the content of an image and generate a natural language description that accurately represents the visual content. This task requires integrating computer vision techniques for image understanding with natural language processing methods for language generation. The goal is to enable machines to describe the visual content of an image in a human-like manner, allowing for better understanding and interpretation of visual information. Visual instruction tuning improves image captioning by providing a fine-tuning process with specifically devised, fine-grained multimodal instruction sets. This allows the model to associate system instructions and text queries with input multimodal contexts, enhancing its ability to generate accurate and relevant captions for images.

GPT4RoI introduces spatial instruction tuning for large language models on regions of interest (RoIs) in image-text pairs. This model allows users to interact through both language and drawn bounding boxes to adjust referring granularity, and it can mine a variety of attribute information within each RoI. GPT4RoI is trained on seven region-text pair datasets and brings an unprecedented interactive and conversational experience compared to previous image-level models, enhancing fine-grained multimodal understanding. As shown in Figure <ref>, GPT4RoI assists image captioning by allowing models to incorporate references to specific RoIs in the image. This enables the models to generate captions that are more detailed and specific to particular regions within the image. By aligning language instructions with RoI features, visual instruction tuning enhances the model's ability to understand and describe fine-grained visual details, leading to more accurate and informative image captions.

MiniGPT-4 is a model that aligns a visual embedding space with a popular LLM, Vicuna, to achieve advanced vision-language abilities. The model demonstrates the ability to generate detailed image descriptions, create websites from hand-drawn drafts, write stories and poems inspired by images, and provide cooking recipes from food photos.
MiniGPT-4 also highlights the importance of fine-tuning the model with a detailed image-description dataset to enhance the naturalness and usability of the generated language.

Clever Flamingo proposes a novel method to curate raw vision-language datasets into visual instruction tuning data, reducing the "multimodal alignment tax". It constructs a large-scale visual instruction tuning dataset based on response rewriting and introduces a U-shaped multi-stage visual instruction tuning approach. It also demonstrates the advantages of the resulting model in terms of both multimodal understanding and response politeness. As shown in Figure <ref>, the U-shaped multi-stage visual instruction tuning approach involves three stages. In Stage 1, the focus is on improving the instruction-following ability by tuning only the language model. Stage 2 shifts to improving the visual understanding capability by exclusively tuning the connector. Finally, in Stage 3, the model is fine-tuned to recover the optimal politeness of the responses. This approach aims to enhance the model's multimodal understanding and response politeness efficiently.

DreamLLM is a learning framework that introduces a versatile Multimodal Large Language Model (MLLM) capable of generating free-form interleaved content and excelling at zero-shot and in-context vision-language comprehension and synthesis tasks. It operates on the principles of generative modeling of language and image posteriors and fosters the generation of raw, interleaved documents, allowing it to learn all conditional, marginal, and joint multimodal distributions effectively. The contribution of DreamLLM lies in demonstrating the learning synergy between multimodal content understanding and creation, paving the way for further research in the multimodal machine learning field.

AnyMAL is a unified model designed to reason over diverse input modality signals and generate textual responses. It presents an efficient and scalable solution for building multimodal LLMs, fine-tuning the model with a multimodal instruction set covering diverse tasks and achieving strong zero-shot performance in both automatic and human evaluations on various multimodal tasks. Additionally, AnyMAL extends previous approaches by allowing for diverse input modalities beyond vision signals and by scaling the LLM parameters to 70B via an efficient pre-training approach.

§.§.§ Visual Question Answering

Visual Question Answering (VQA) combines image understanding and natural language processing to answer questions about the content of a given image. In this task, a user presents an image along with a natural-language question that refers to some aspect of the image. The VQA model then analyzes the visual data to understand the scene, identifies relevant components, and processes the text of the question. Finally, it generates an accurate and relevant answer based on the synthesis of these two streams of information. The challenge in VQA is to correctly interpret the visual cues and the context of the question, which requires a deep understanding of both the visual elements in the image and the semantics of the question. Visual instruction tuning improves the performance of VQA models by enabling efficient adaptation of large language models to effectively process and integrate visual instructions, leading to enhanced reasoning ability and accurate responses in VQA tasks.
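Before turning to individual models, a schematic example may help make the notion of a visual-instruction VQA training sample concrete. The record below is purely illustrative; the field names and content are invented for this survey, not taken from any released dataset.

# A schematic visual-instruction sample for VQA-style tuning. The
# "<image>" placeholder marks where visual tokens are spliced into
# the text sequence; schemas vary across the datasets discussed below.
vqa_sample = {
    "image": "kitchen_scene.jpg",
    "conversations": [
        {"role": "user",
         "content": "<image>\nHow many mugs are on the counter, and what color are they?"},
        {"role": "assistant",
         "content": "There are two mugs on the counter; both are blue."},
    ],
}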
LaVIN proposes a novel and efficient solution for vision-language instruction tuning called Mixture-of-Modality Adaptation (MMA). This approach enables the joint optimization of multimodal large language models (LLMs) with a small number of parameters, significantly reducing training costs. The proposed MMA equips LLMs with lightweight adapters and a routing scheme to dynamically choose adaptation paths for different modalities, resulting in a large vision-language instructed model called LaVIN. As shown in Figure <ref>, LaVIN employs a simplified and lightweight architecture that incorporates Mixture-of-Modality Adapters (MM-Adapters) to process instructions from different modalities. These MM-Adapters connect the large language model (LLM) with the image encoder, enabling efficient adaptation to vision-language tasks. The architecture is optimized through Mixture-of-Modality Training (MMT) in an end-to-end manner, allowing LaVIN to effectively execute input instructions from various modalities while demonstrating superior performance in vision-language tasks.

SciTune focuses on aligning large language models (LLMs) with scientific disciplines, concepts, and goals. The framework includes two stages: scientific concept alignment and scientific instruction tuning. By training LLaMA-SciTune models on science-focused multimodal tasks, the paper demonstrates improved performance in visually grounded language understanding and multimodal reasoning, surpassing human performance on the ScienceQA benchmark. Additionally, the paper emphasizes the use of human-generated scientific multimodal instructions to align LLMs with natural scientific concepts and true human intent.

MultiInstruct leverages instruction tuning to improve the generalizability of vision-language pretrained models on multimodal and vision tasks. It introduces new metrics such as Sensitivity to measure the model's capability to produce consistent results regardless of slight variations in instructions. MultiInstruct demonstrates strong zero-shot performance on various unseen multimodal tasks and highlights the potential benefits of larger text-only instruction datasets for multimodal instruction tuning.

LMEye is a human-like eye with a play-and-plug interactive perception network designed to enable dynamic interaction between Large Language Models (LLMs) and external vision information. LMEye significantly improves zero-shot multimodal performance for various scales and types of LLMs, demonstrating superior performance on evaluation benchmarks for multimodal LLMs, visual question answering, in-detail image description, and multimodal reasoning tasks. Additionally, LMEye addresses the limitations and challenges associated with MLLMs, such as generating toxic or biased content, and proposes potential improvements.

VPG-C aims to enhance the ability of Multimodal Large Language Models (MLLMs) to comprehend demonstrative instructions with interleaved multimodal context. The proposed VPG-C module infers and completes missing visual details, and the work also introduces a synthetic discriminative training strategy to fine-tune VPG-C without the need for supervised demonstrative instruction data. Additionally, it introduces a comprehensive benchmark called DEMON for evaluating MLLMs on 31 tasks with complex vision-language demonstrative context.
The results show that VPG-C achieves notable zero-shot performance on the DEMON benchmark and demonstrates superior performance on established benchmarks like MME and OwlEval.

BLIVA is a multimodal Large Language Model that leverages learned query embeddings and encoded patch embeddings to enhance text-image visual perception and understanding. BLIVA demonstrates superior performance on both general and text-rich Visual Question Answering (VQA) benchmarks, showcasing exceptional OCR capabilities and robust localization ability. The model's innovative design bolsters performance on academic benchmarks and real-world examples, highlighting its effectiveness in handling text-rich visual questions.

MiniGPT-v2 is a unified interface for vision-language multi-task learning. It is designed to effectively handle various vision-language tasks, such as image description, visual question answering, and visual grounding, using a single architecture. Its key innovations include the use of unique identifiers for different tasks during training, enabling the model to distinguish and learn multiple tasks efficiently, and achieving state-of-the-art results on diverse vision-language benchmarks.

mPLUG-Owl2 is a multimodal foundation model that revolutionizes large language models by incorporating modality collaboration and interference mitigation. It features a modularized network design, a modality-adaptive module, and a two-stage training paradigm to effectively manage multimodal signals. It achieves state-of-the-art performance on vision-language benchmarks, demonstrates adaptability in zero-shot multimodal tasks, and also excels on pure-text benchmarks. It further provides in-depth analysis and validation of the impact of modality collaboration and offers insights into the effectiveness of the proposed training paradigm for future multimodal foundation models.

InstructBLIP <cit.> is a visual instruction tuning pipeline that helps construct a general-purpose multimodal model able to handle a broad range of vision tasks via a universal task interface with language as task instructions. As shown in Figure <ref>, InstructBLIP consists of a Query Transformer (Q-Former) that extracts instruction-aware visual features from the output embeddings of a frozen image encoder. These visual features are then fed as soft prompt input to a frozen Language Model (LLM). During instruction tuning, the Q-Former is fine-tuned while the image encoder and LLM remain frozen. This architecture allows for the extraction of task-relevant visual features based on the given instructions, enhancing the model's ability to follow instructions and generate responses. With a comprehensive study on vision-language instruction tuning, the work demonstrates the effectiveness of InstructBLIP in zero-shot generalization to unseen tasks. The framework achieves state-of-the-art performance on a diverse set of vision-language tasks and provides novel techniques for instruction-aware visual feature extraction and balanced dataset sampling.

InternLM-XComposer is a vision-language large model that excels in advanced image-text comprehension and composition. Its key innovations lie in three main areas: 1) interleaved text-image composition, allowing seamless integration of images into coherent articles; 2) comprehension with rich multilingual knowledge, enabling deep understanding of visual content across diverse domains; and 3) state-of-the-art performance, consistently achieving top results on various vision-language benchmarks.
Additionally, it introduces a novel evaluation procedure for assessing the quality of interleaved text-image articles.

§.§.§ Visual Assistant

A visual assistant typically refers to a system or application that uses computer vision and machine learning algorithms to understand and process visual information, such as images, in conjunction with language. It is capable of interpreting visual content and responding to queries or instructions related to the visual input. Instruction-based image learning enhances the ability of visual assistants to understand and follow multimodal vision-and-language instructions, improving their adaptability to user instructions and ultimately leading to performance improvements in multimodal tasks and instruction-following capabilities. This process contributes to the development of a more capable and adaptable visual assistant, enabling it to effectively process and respond to both visual and language-based instructions.

LLaVA first introduced visual instruction tuning, extending the concept of instruction tuning to the language-image multimodal space. It presents the LLaVA model, which demonstrates impressive multimodal chat abilities and achieves state-of-the-art accuracy when fine-tuned on Science QA. Additionally, the paper constructs two evaluation benchmarks for visual instruction following and makes the model, data, and code publicly available, contributing to the research community. The LLaVA architecture is shown in Figure <ref>; it leverages a vision encoder, specifically the CLIP visual encoder ViT-L/14, to provide visual features for input images. These visual features are then processed by the Vicuna language model. Training consists of two stages, visual feature alignment and end-to-end fine-tuning, in which the visual encoder weights are frozen while the pre-trained weights of the projection layer and LLM in LLaVA are updated. This architecture enables the model to effectively leverage the capabilities of both the pre-trained language model and the visual encoder for general-purpose visual and language understanding.

Otter introduces the Multimodal In-Context Instruction Tuning (MIMIC-IT) dataset, which consists of instruction-image-answer triplets and in-context examples. Otter itself is a multimodal model with in-context instruction tuning based on OpenFlamingo, showcasing improved instruction-following ability and in-context learning. Additionally, it optimizes OpenFlamingo's implementation, reducing the training requirements and integrating it into Hugging Face Transformers for easier use by researchers.

LLaVA-1.5 improves baselines for large multimodal models (LMMs) with visual instruction tuning. The authors demonstrate that simple modifications to the LLaVA framework, such as using an MLP cross-modal connector, incorporating academic task-related data, introducing response-formatting prompts to balance short- and long-form VQA, scaling up the input image resolution, and including additional visual knowledge sources, result in stronger and more feasible baselines. These improvements lead to state-of-the-art performance across 11 benchmarks while using significantly less training data and compute than existing methods. The work provides a fully reproducible and affordable baseline for future research in open-source LMMs.

SVIT introduces a large-scale dataset called SVIT, containing 4.2 million instruction-tuning samples generated by prompting GPT-4 with manual annotations of images.
The dataset aims to enhance visual instruction tuning for multimodal models, leading to better performance in visual perception, reasoning, and planning tasks. The experiments demonstrate that training multimodal models on the SVIT dataset achieves superior performance compared to training on smaller datasets.

As shown in Figure <ref>, ILuvUI introduces a Vision-Language Model (VLM) specifically tailored for understanding and interacting with user interfaces (UIs). The model is trained using a dataset of image-instruction pairs generated from UI screenshots, and it demonstrates the ability to describe UI elements, provide contextual help, and plan multi-step interactions. The paper also benchmarks ILuvUI against existing models, highlighting its effectiveness in UI understanding tasks and its potential for enhancing UI accessibility. Additionally, the paper discusses the need for standardized benchmarks to evaluate VLMs in the context of UI tasks.

AssistGPT is a multimodal AI assistant system that integrates multiple models to handle complex visual tasks. It utilizes an interleaved language and code reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) and consists of four core modules: Planner, Executor, Inspector, and Learner. The Planner controls the reasoning process, the Executor executes external tools, the Inspector manages input and intermediate results, and the Learner assesses system performance and records successful trials as in-context examples. The system showcases its capabilities in processing complex images and videos, understanding high-level queries, and handling flexible inputs, demonstrating its effectiveness beyond benchmark results.

StableLLaVA introduces a novel data collection methodology for enhancing visual instruction tuning in multimodal Large Language Models (LLMs). The proposed approach synthesizes both images and associated dialogues, addressing limitations of benchmark datasets such as noise and domain bias. The research showcases the flexibility of the pipeline by generating a large-scale dataset covering more than ten useful capabilities and demonstrates significant improvements in model performance across these capabilities.

X-LLM is a Multimodal Large Language Model that integrates multiple modalities, such as images, speech, and videos, into a large language model through X2L interfaces. The framework demonstrates impressive capabilities in tasks like visual spoken question answering and multimodal machine translation. Additionally, the paper introduces a three-stage training method for X-LLM and constructs a high-quality multimodal instruction dataset to further enhance its performance. Overall, the contributions include the development of a powerful multimodal language model and the exploration of joint multimodal instruction data to improve its capabilities. As shown in Figure <ref>, X-LLM's network architecture consists of multiple frozen single-modal encoders, including image, video, and speech encoders, aligned with a large language model (ChatGLM) through X2L interfaces. These interfaces, namely the image, video, and speech interfaces, convert multimodal information into "foreign languages" using Q-Formers and adapter modules. The training process involves three stages, focusing on converting multimodal information, aligning representations with the LLM, and integrating multiple modalities.
Overall, the architecture enables the integration of diverse modalities into a large language model for multimodal understanding and response generation.

PandaGPT is a model that integrates multimodal encoders from ImageBind and language models from Vicuna to perform instruction-following tasks across six modalities: image/video, text, audio, depth, thermal, and IMU. It demonstrates the ability to connect information from different modalities and compose their semantics naturally, enabling tasks such as image description generation, story writing inspired by videos, and answering questions about audio. PandaGPT's training on aligned image-text pairs allows it to display emergent cross-modal capabilities for data other than image and text, paving the way for a holistic understanding of inputs across different modalities.

LAMM introduces the Language-Assisted Multimodal (LAMM) dataset, framework, and benchmark, aiming to facilitate the training and evaluation of multimodal large language models (MLLMs). The main contributions include a comprehensive dataset and benchmark covering a wide range of 2D and 3D vision tasks, a detailed methodology for constructing multimodal instruction tuning datasets, and a primary MLLM training framework optimized for modality extension. Additionally, the paper provides baseline models, extensive experimental observations, and analysis to accelerate future research in the field of multimodal language models.

LLaVAR is an enhanced visual instruction-tuned model for text-rich image understanding. It collects noisy and high-quality instruction-following data to augment visual instruction tuning, significantly improving text understanding within images. The model's enhanced capability allows for end-to-end interactions based on various forms of online content combining text and images, and the authors open-source the training and evaluation data together with the model checkpoints.

Qwen-VL is a versatile vision-language model that integrates image understanding, text reading, localization, and multi-round dialogue capabilities. It addresses the limitations of large language models by incorporating visual signals and demonstrates superior performance in tasks such as image captioning, visual question answering, referring expression comprehension, and text-oriented tasks. The model's multi-task pre-training data and its ability to handle tasks of diverse styles make it a valuable contribution to multimodal research.

CogVLM is a powerful open-source visual language model that excels in a broad range of multimodal tasks such as image captioning, visual question answering, and visual grounding. The model's superior performance and robust generalization are rigorously validated through quantitative evaluations on various benchmarks, showcasing its remarkable capability and robustness. Additionally, the paper presents qualitative examples generated by CogVLM, demonstrating its effectiveness in real-world applications. As shown in Figure <ref>, CogVLM comprises four fundamental components: a vision transformer (ViT) encoder, an MLP adapter, a pretrained large language model (GPT), and a visual expert module. The ViT encoder processes the image, the MLP adapter maps the output of the ViT into the same space as the text features, and the pretrained large language model forms the base for further training. The visual expert module, consisting of a QKV matrix and an MLP in each layer, is added to every layer to enable deep visual-language feature alignment.
This architecture allows for deep fusion of vision and language information, resulting in state-of-the-art performance on multimodal tasks.

SEED-LLaMA introduces SEED, a discrete image tokenizer designed to enable Large Language Models (LLMs) to process and generate text and images interchangeably. SEED-LLaMA, a multimodal AI assistant, is produced by pretraining and instruction tuning on interleaved visual and textual data with the SEED tokenizer. It demonstrates impressive performance in multimodal comprehension and generation tasks, as well as compositional emergent abilities such as multi-turn in-context multimodal generation. The key contribution lies in enabling LLMs to perform scalable multimodal autoregression under their original training recipe, thus advancing the potential of multimodality in AI. As shown in Figure <ref>, SEED is a discrete image tokenizer that converts 2D raster-ordered features into a sequence of causal semantic embeddings, which are further discretized into quantized visual codes with causal dependency. These visual codes are then decoded into generation embeddings aligned with the latent space of a pre-trained model, allowing for the generation of realistic images. SEED enables Large Language Models to perform scalable multimodal autoregression on interleaved visual and textual data, thus unifying multimodal comprehension and generation tasks within a single framework.

OtterHD introduces the OtterHD-8B model, which addresses the limitations of fixed-resolution inputs in Large Multimodal Models (LMMs). It leverages the Fuyu-8B architecture to process images of varying resolutions, demonstrating enhanced performance in discerning fine details in complex scenes. The model's contribution lies in its ability to effectively handle high-resolution images and its performance on the MagnifierBench benchmark, highlighting the importance of resolution flexibility in contemporary LMMs.

ImageBind-LLM is a model that enhances multi-modality instruction tuning with cache-enhanced inference. It revisits prior works such as ImageBind and LLaMA-Adapter and evaluates the proposed ImageBind-LLM on a new benchmark, MME. The model demonstrates strong performance in perception tasks and showcases its multimodal instruction capabilities through qualitative analysis. Overall, the paper contributes to the development of robust and versatile language models with enhanced multimodal understanding and performance. As shown in Figure <ref>, the training paradigm of ImageBind-LLM involves a two-stage pipeline. In the first stage, the model is pre-trained on large-scale image-caption data to learn image-conditioned response capacity. This stage aligns the joint embedding space of ImageBind with LLaMA using a learnable bind network and an attention-free, zero-initialized mechanism for visual knowledge injection. In the second stage, the model is fine-tuned on a mixture of language instruction data and visual instruction data to equip it with both language and visual instruction-following abilities. Additionally, a training-free visual cache model is proposed to mitigate the modality discrepancy between training and inference.
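A design that recurs across the assistants in this subsection, LLaVA's linear projection and LLaVA-1.5's MLP cross-modal connector in particular, is a small trainable projector that maps features from a frozen vision encoder into the LLM's token-embedding space. The following PyTorch-style sketch is illustrative only; the dimensions and the two-layer MLP are placeholder choices, not any specific model's configuration.

import torch
import torch.nn as nn

class VisionToLLMProjector(nn.Module):
    # Maps patch features from a frozen vision encoder into the LLM
    # token-embedding space. LLaVA used a single linear layer; LLaVA-1.5
    # replaced it with an MLP, as sketched here.
    def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, patch_feats: torch.Tensor) -> torch.Tensor:
        # patch_feats: (batch, num_patches, vision_dim) from the encoder
        return self.proj(patch_feats)

# The projected "visual tokens" are concatenated with text token
# embeddings and consumed by the (frozen or fine-tuned) language model.
visual_tokens = VisionToLLMProjector()(torch.randn(1, 256, 1024))
print(visual_tokens.shape)  # torch.Size([1, 256, 4096])

Training typically proceeds in the two stages described above: first only the projector is updated to align the feature spaces, and then the projector and LLM are fine-tuned end-to-end on instruction data.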
§.§ Instruction-based Video Learning

Instruction-based video learning improves video comprehension by enabling efficient adaptation of large language models (LLMs) to carefully devised, processed, and integrated video-centric instruction tuning datasets, leading to enhanced spatiotemporal reasoning, causal-inference ability, and accurate responses in visual question answering tasks.

§.§.§ Visual Assistant

EmbodiedGPT is an end-to-end multimodal foundation model for embodied AI with a "chain-of-thought" capability, enabling embodied agents to interact with the physical world more naturally. It also develops two datasets, EgoCOT and EgoVQA, and proposes a cost-effective training approach for extracting task-relevant features from planning queries. The approach demonstrates state-of-the-art or comparable performance on multiple embodied tasks, including embodied control, planning, video captioning, and video QA, outperforming existing models on benchmark tasks.

ChatBridge is a novel multimodal language model that leverages large language models to bridge the gap between various modalities. It proposes a two-stage training approach to align different modalities with language and introduces a new multimodal instruction tuning dataset called MULTIS.

VideoChat is a chat-centric video understanding system that integrates video foundation models and large language models. It proposes a video-centric instruction dataset emphasizing spatiotemporal reasoning and causal relationships, providing a valuable asset for training chat-centric video understanding systems. It also presents qualitative experiments showcasing the system's potential across various video applications and sets a standard for future research in the field of video understanding. As shown in Figure <ref>, the framework of VideoChat consists of two main components: VideoChat-Text and VideoChat-Embed. VideoChat-Text textualizes videos in stream by converting visual data into textual format using various vision models and prompts, allowing a pretrained large language model to address user-specified tasks based on the video text descriptions. VideoChat-Embed, on the other hand, encodes videos as embeddings and combines video and language foundation models with a Video-Language Token Interface (VLTF) to optimize cross-modality learning, enabling the model to communicate effectively with users through a large language model.

Video-ChatGPT is a multimodal model that merges a pretrained visual encoder with a Large Language Model (LLM) to understand and generate detailed conversations about videos. It presents a new dataset of 100,000 video-instruction pairs and develops a quantitative evaluation framework for video-based dialogue models. The model's architecture, training process, and evaluation results are thoroughly described, showcasing its competence in video understanding and conversation generation. Additionally, the paper proposes a novel human-assisted and semi-automatic annotation framework for generating high-quality video instruction data. As shown in Figure <ref>, the architecture of Video-ChatGPT leverages a pretrained visual encoder, CLIP ViT-L/14, to extract both spatial and temporal video features. These features are then projected into the input space of the LLM using a learnable linear layer.
The resulting model is capable of understanding and generating detailed conversations about videos, showcasing proficiency in video reasoning, creativity, spatial understanding, action recognition, and temporal understanding.

Video-LLaMA focuses on empowering Large Language Models (LLMs) with the capability to understand both visual and auditory content in videos. It aims to enable LLMs to comprehend and generate meaningful responses grounded in the visual and auditory information presented in the videos. As shown in Figure <ref>, the architecture of Video-LLaMA consists of two main branches: the Vision-Language Branch and the Audio-Language Branch. The Vision-Language Branch includes a frozen pre-trained image encoder, a position embedding layer, a video Q-Former, and a linear layer to transform video representations into the same dimension as the text embeddings of LLMs. The Audio-Language Branch includes a pre-trained audio encoder, a position embedding layer, an audio Q-Former, and a linear layer to map audio features to the embedding space of LLMs. These branches enable Video-LLaMA to process both visual and auditory content within a single framework.

Valley aims to develop a multimodal foundation model capable of comprehending video, image, and language within a general framework, functioning as a highly effective video assistant that makes complex video-understanding scenarios easy. It focuses on creating seamless interaction between humans and machines, enabling natural and intuitive conversations while engaging in various tasks related to video understanding.

MACAW-LLM is a novel architecture for multimodal language modeling that integrates image, audio, video, and text data. It also introduces the MACAW-LLM instruction dataset, which covers diverse instructional tasks and modalities. MACAW-LLM involves a simplified one-step instruction fine-tuning process, a multimodal dataset for instruction-tuned language models, and an architecture that aligns multimodal features with textual features for generating output sequences.

§.§ Instruction-based 3D Vision Learning

3D vision tasks involve the analysis and interpretation of visual data to reconstruct and understand the three-dimensional structure of the environment, including depth estimation, 3D reconstruction, object recognition, and scene comprehension. These tasks enable machines to interact with the physical world in a more human-like manner, supporting applications in robotics, augmented reality, autonomous vehicles, and more. The increasing demand for natural language interaction with 3D content includes scenarios such as verbally commanding robots to manipulate objects and interactively creating and editing 3D content through natural language. Existing efforts with 2D images face challenges such as depth ambiguity and viewpoint dependency, making it essential to empower LLMs to comprehend 3D structures accurately and effectively. This capability opens up new avenues for natural language interaction with 3D objects and environments.

§.§.§ Visual Assistant

PointLLM is a large language model specifically designed for understanding 3D object point clouds. It provides a comprehensive evaluation suite, including benchmarks and a large-scale dataset, which will be open-sourced for community use. It also addresses the limitations of traditional metrics in evaluating language models and emphasizes the need for more comprehensive and reliable measures.
Additionally, it explores the potential of PointLLM in tasks such as text-to-3D generation, demonstrating its capacity to generate detailed and accurate captions for 3D models. As shown in Figure <ref>, the architecture of PointLLM consists of three main components: a pre-trained point cloud encoder, a large language model (LLM) backbone, and a multimodal projection layer. The point cloud encoder encodes point clouds into tokens, which are then combined with text tokens and fed into the LLM backbone. The LLM backbone, based on the transformer architecture, processes the combined sequence of tokens to generate responses. The model is trained using a two-stage strategy: aligning the latent spaces of the encoder and the LLM, followed by instruction-based fine-tuning.

LAMM is an open-source endeavor focused on multimodal Large Language Models (MLLMs). Its main contributions include a comprehensive dataset and benchmark covering a wide range of 2D and 3D vision tasks, a methodology for constructing multimodal instruction tuning datasets and benchmarks for MLLMs, and a primary but promising MLLM training framework optimized for modality extension. Additionally, it provides baseline models, extensive experimental observations, and analysis to accelerate future research in the field of MLLMs. As shown in Figure <ref>, the LAMM framework encodes each modality, such as images or point clouds, using corresponding pre-trained encoders. The encoded features are then projected into the same feature space as the text embeddings by a trainable projection layer. Instructions are tokenized and concatenated with vision and text tokens to feed into the MLLM. The model is trained in a one-stage, end-to-end fashion with trainable projection layers and LoRA modules, allowing for extension to more modalities and tasks, such as video understanding and image synthesis.

§.§ Instruction-based Medical Vision Learning

§.§.§ Medical Visual Question Answering

Medical Visual Question Answering (MedVQA) tasks involve answering natural language questions about medical visual content, where the goal is to aid the interpretation of medical images with vital, clinically relevant information. For example, PMC-VQA introduces a generative model, MedVInT, for MedVQA and establishes a scalable pipeline to construct a large-scale MedVQA dataset that covers various modalities and diseases. Additionally, it proposes a more challenging benchmark for evaluating VQA methods in the medical domain.

§.§.§ Visual Assistant

A medical visual assistant is a vision-language conversational assistant specifically designed for biomedical applications, trained to understand and converse about biomedical images and to provide open-ended responses to inquiries about their content. Instruction-based medical vision learning aims to turn a general-purpose multimodal large language model into a medical visual assistant through carefully curated large-scale medical visual instruction datasets and specifically designed visual instruction tuning methods. The medical visual assistant is capable of following diverse instructions and completing tasks in a conversational manner, making it a valuable tool for biomedical visual question answering and for providing informed advice in biomedical-related fields.
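To make this transfer recipe concrete, the following is a minimal, assumption-laden sketch of such domain adaptation: a general-purpose multimodal model, with most weights frozen, fine-tuned on medical image-instruction pairs by standard next-token prediction. All names and the data format are placeholders, not any published system's code.

import torch
import torch.nn.functional as F

def finetune_medical_assistant(model, medical_loader, epochs=3, lr=2e-5):
    # Update only the parameters left unfrozen (for example, a projector
    # and/or adapter weights) on medical image-instruction pairs.
    optimizer = torch.optim.AdamW(
        [p for p in model.parameters() if p.requires_grad], lr=lr)
    for _ in range(epochs):
        for images, instructions, targets in medical_loader:
            logits = model(images, instructions)   # (batch, seq, vocab)
            loss = F.cross_entropy(                # next-token prediction
                logits.flatten(0, 1), targets.flatten())
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()

The design choice is the same as in the general-domain assistants above: the domain shift is absorbed by the instruction data and a small set of trainable parameters, rather than by retraining the full model.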
OphGLM is an ophthalmology large language-and-vision assistant that integrates visual models with large language models for ophthalmology. It constructs a fine-tuning dataset for ophthalmic diseases, develops disease-diagnosis models based on fundus images, and builds a novel ophthalmology large language-and-vision assistant. The experimental results demonstrate the potential of OphGLM for clinical applications in ophthalmology. As shown in Figure <ref>, OphGLM consists of two main modules: the fundus diagnosis pipeline and the OphGLM pipeline. The fundus diagnosis pipeline includes disease-diagnosis and lesion-segmentation models based on fundus images. The OphGLM pipeline integrates the fundus-image diagnostic report with the fundus dialogue, ultimately generating high-quality responses. This architecture allows OphGLM to accept fundus images as input and provide accurate and detailed medical information.

§.§ Instruction-based Document Vision Learning

Document understanding models are designed to automatically extract, analyze, and comprehend information from various types of digital documents. They aim to understand and interpret complex relationships between visual text and objects in diverse types of images, such as diagrams, documents, and webpages. Instruction tuning for document learning enhances a general-purpose model's ability to comprehend and interpret visual information in various types of documents through visual instruction tuning strategies designed for visual-text understanding tasks, together with datasets that facilitate multimodal document understanding.

§.§.§ Visual Assistant

mPLUG-DocOwl is a modularized Multimodal Large Language Model designed for OCR-free document understanding. It proposes a unified instruction tuning strategy to balance language-only, general vision-and-language, and document understanding. As shown in Figure <ref>, the instruction tuning paradigm of mPLUG-DocOwl integrates diverse document understanding tasks into a unified format for training, including visual question answering, information extraction, natural language inference, and image captioning. It outperforms existing multimodal models in document understanding and demonstrates strong generalization on various downstream tasks without task-specific fine-tuning. It also provides a carefully constructed evaluation set, LLMDoc, for assessing diverse document understanding capabilities, and conducts human evaluation to compare the performance of mPLUG-DocOwl with other models.

mPLUG-PaperOwl focuses on strengthening the multimodal diagram-analysis ability of Multimodal Large Language Models (MLLMs) to assist in academic paper writing. It introduces the M-Paper dataset, which supports the joint comprehension of multiple scientific diagrams, including figures and tables in the form of images or LaTeX code. It also proposes three multimodal tasks and a GPT-based metric to measure paragraph-analysis quality, and it validates the effectiveness of multimodal inputs and training strategies through comprehensive experiments. As shown in Figure <ref>, the overall architecture of mPLUG-PaperOwl follows a three-module framework consisting of a vision encoder, a vision abstractor, and a Large Language Model as the language decoder. The vision encoder is fine-tuned to better filter useful visual diagram information for generating analysis, while the vision abstractor is fine-tuned to improve the model's ability to understand and describe diagrams.
The model is trained on an ensemble of training data from three multimodal tasks to enhance its performance.

§ CONCLUSION

Visual instruction tuning fine-tunes a large vision model with language as task instructions, ultimately learning, from a wide range of vision tasks described by language instructions, a general-purpose multimodal model that can follow arbitrary instructions and thus solve arbitrary tasks specified by the user. In this survey, we extensively review visual instruction tuning studies from different perspectives, ranging from background to foundations, datasets, methodology, benchmarks, and current research challenges and open research directions. We summarize visual instruction tuning datasets, methods, and performances in tabular form, aiming to offer a comprehensive overview of what has been accomplished, what challenges remain, and what could be further achieved in visual instruction tuning research.
http://arxiv.org/abs/2312.16602v1
{ "authors": [ "Jiaxing Huang", "Jingyi Zhang", "Kai Jiang", "Han Qiu", "Shijian Lu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231227145437", "title": "Visual Instruction Tuning towards General-Purpose Multimodal Model: A Survey" }
rajnandinisharma.rs.mst18@itbhu.ac.in School of Materials Science and Technology, Indian Institute of Technology (Banaras Hindu University), Varanasi-221 005, India School of Materials Science and Technology, Indian Institute of Technology (Banaras Hindu University), Varanasi-221 005, India School of Materials Science and Technology, Indian Institute of Technology (Banaras Hindu University), Varanasi-221 005, India Surface Physics and Material Science Division, Saha Institute of Nuclear Physics Kolkata, 1/AF Bidhannagar, Sector 1, Kolkata 700 064, India shrawan.mst@iitbhu.ac.in School of Materials Science and Technology, Indian Institute of Technology (Banaras Hindu University), Varanasi-221 005, India

Magnonics has shown immense potential for compatibility with CMOS devices and for use in futuristic quantum computing. Therefore, magnonic crystals, both metallic and insulating, are under extensive exploration. The high spin-orbit interaction induced by the presence of a rare-earth element in thulium iron garnet (TmIG) increases its potential for magnonic applications. Previously, TmIG thin films were grown using ultra-high-vacuum-based techniques. Here, we present a cost-effective, solution-based approach that yields excellent interface quality and low surface roughness in epitaxial TmIG/GGG. The physical and spin-dynamic properties of the deposited TmIG (12.2 nm) thin film are investigated in detail. Epitaxy is confirmed using X-ray diffraction in ϕ-scan geometry, while X-ray reflectivity and atomic force microscopy are employed for thickness, roughness, and topography analysis. The epitaxial TmIG/GGG film exhibits perpendicular magnetic anisotropy, confirmed using the polar magneto-optic Kerr effect. Analysis of the ferromagnetic resonance of the TmIG/GGG thin film provides the anisotropy constant K_U = 20.6×10^3 ± 0.2×10^3 N/m^2 and the Gilbert damping parameter α = 0.0216 ± 0.0028. The experimental findings suggest that solution-processed TmIG/GGG thin films have the potential to be utilized in device applications.

All solution grown epitaxial magnonic crystal of thulium iron garnet thin film Shrawan Kumar Mishra January 14, 2024 ==============================================================================

Magnonics is the study of spin-wave-based information processing and transmission <cit.>. Magnons have the potential to be utilized in denser logic gates, processing and transporting information simultaneously <cit.>. The superposition ability of magnons makes them potential candidates for use as qubits in quantum computing <cit.>. There are various magnon-carrier systems; some are conducting, and others insulating <cit.>. Conducting magnonic crystals include CoFeB <cit.>, NiFe (permalloy) <cit.>, and Heusler compounds <cit.>. Iron garnets are one class of insulating magnonic crystals <cit.>. Early on, these ferrimagnetic insulators were used to establish fundamental understanding of magnon behaviour, such as magnon-magnon scattering and magnetic resonance <cit.>. Recently, heterostructures of yttrium iron garnet (YIG) have found application in spin pumping. Both exchange and dipolar spin waves appear as higher-order spin waves in single-crystal YIG thin films <cit.>. Soon after its discovery, experimental studies confirmed that the system has the lowest dissipation (lowest ferromagnetic resonance linewidth), making it a promising system for various applications <cit.>.
A recent study shows that in pulsed laser deposition (PLD) grown Pt/YIG, the interfacial spin Hall angle (θ_SH) is 0.33 <cit.>. Further advances in processing, however, can be achieved with perpendicular magnetic anisotropy (PMA) in the system <cit.>. YIG has a low anisotropy constant, K_U = 1×10^3 N/m^2, when deposited on a Gd_3Ga_5O_12 (GGG) substrate <cit.>. The high spin-orbit coupling in rare-earth iron garnets has the potential to resolve this <cit.>. The complete rare-earth series can form iron garnets <cit.>. The rare-earth elements have their own unique magnetic ordering, so they contribute to the ferrimagnetic coupling. This contribution gives rise to a compensation temperature, at which the net magnetization is lowest. Thulium iron garnet (TmIG) is a rare-earth garnet with a Curie temperature T_C ≈ 550 K, the lowest compensation temperature of ≈15 K, and moderate room-temperature saturation magnetization. Recently, the Pt/TmIG heterostructure with PMA has shown magnetic switching and spin magnetoresistance <cit.>, and TmIG/Au/TmIG has shown spin-valve properties <cit.>.

Iron garnet thin films are typically grown using ultra-high-vacuum setups and require expensive facilities such as PLD, off-centred rf-sputtering, and liquid phase epitaxy (LPE). A few studies have produced polycrystalline iron garnets using solution methods like spin-coating <cit.>. However, epitaxial thin-film growth using spin-coating has not been reported to date. This article presents a cost-effective, all-solution-based spin-coating method that uses the substrate's crystal structure as a reference and grows an epitaxial TmIG thin film on GGG. The epitaxial TmIG/GGG has been studied using synchrotron grazing-incidence X-ray diffraction (GIXRD), and confirmation of the epitaxy is presented using the GIXRD ϕ-scan. The topography and elemental composition of the TmIG magnonic crystal are studied in detail, and the magnetic properties of the good-interface-quality epitaxial TmIG thin film are reported in the present article.

A TmIG thin film was deposited on a single-crystal gadolinium gallium garnet (GGG) substrate of (111) orientation using all-solution-based spin-coating. To prepare the solution, iron nitrate (Fe(NO_3)_3·9H_2O, 98% purity) and thulium nitrate (Tm(NO_3)_3·5H_2O, 99.9% purity) in a 3:2 ratio were dissolved in 2-methoxyethanol at 400 mM concentration. The solution was stirred and aged for three days to make it uniform with a gel-like consistency. The surface quality of the substrate must be excellent for thin-film deposition; to clean it, the GGG substrate was ultrasonicated in de-ionized water, acetone, and 2-propanol for 30 minutes each. Further, the substrate was plasma-cleaned for 10 min at 10 W in an oxygen atmosphere. The uniformly stirred solution was statically spin-coated on the cleaned substrate at 4000 rpm for 30 s. Excellent interface and film quality were achieved by heating the spin-coated film in three stages. Excess solvent was first evaporated at 363 K for 2 hours in air on a hot plate. Organic residues were then decomposed by heating the film at 623 K for 30 minutes in a muffle furnace (in air). The final phase formation was achieved by annealing the prepared film at 1223 K for 3 hours in a tubular furnace under an oxygen environment. The crystal structures of GGG and TmIG are analogous; therefore, epitaxial growth of TmIG is favourable. The structural confirmation was done using synchrotron grazing-incidence X-ray diffraction (GIXRD) with 10 keV photons at INDUS-2 (BL-13), RRCAT, Indore.
The film thickness was estimated from X-ray reflectivity (XRR) measured on a Bruker D8 diffractometer. The XRR data were fitted with the Parratt32 software utilizing Parratt's formalism <cit.>. The morphology of the thin films was observed by atomic force microscopy (AFM) using a Bruker nano IR microscope. The elemental composition was studied by X-ray photoelectron spectroscopy (XPS) on a Thermo Fisher Scientific K-Alpha system with aluminum K-alpha radiation. The magnetic study uses a white-LED-based magneto-optical Kerr effect (MOKE) microscope in polar mode, along with room-temperature broadband ferromagnetic resonance (FMR) on a Quantum Design Phase FMR system. The phase formation of the TmIG thin film deposited using sol-gel-based spin coating is examined using GIXRD. As the mismatch between the substrate and the thin film is less than one percent, highly monochromatic 10 keV synchrotron X-rays have been utilized. Figure <ref> (a) presents the out-of-plane XRD of the TmIG (444) reflection together with the highest-intensity GGG (444) substrate reflection. The inset of Figure <ref>(a) shows the intensity on a logarithmic scale; the presence of Laue oscillations confirms the excellent interface quality and high crystallinity <cit.>. The interplanar distances of GGG (444) and TmIG (444) are 1.7938 ± 0.0085 Å and 1.7778 ± 0.0084 Å, respectively. The lattice constants are 12.4281 ± 0.0116 Å and 12.3145 ± 0.0116 Å for GGG and TmIG, respectively. The experimental data confirm the smoothness of the interface and the epitaxial growth between the substrate and thin film, as shown in Figure <ref> (b). The strain due to the mismatch between the two, Δϵ = (a_GGG - a_TmIG)/a_GGG, is 0.88 %, which indicates tensile strain in the TmIG layer. This tensile strain is the cause of the PMA in the sample (discussed in further sections). Figure <ref> (b) represents the ϕ-scan of the TmIG thin film. The ϕ-scan is measured along the (008) Bragg reflection, which is ψ = 54.7^∘ away from the (111) Bragg reflection <cit.>. The three-fold symmetry in the ϕ-scan can be observed in Figure <ref> (c). The angular separation of 120^∘ between the three-fold-symmetric peaks is experimentally observed in the ϕ-scan, which confirms the epitaxy of the deposited thin film <cit.>. The stress (σ) at the interface is calculated using the following equation <cit.>: σ = [Y/(1-ν)] Δϵ, where Y is Young's modulus (2.00×10^11 N/m^2) and ν is Poisson's ratio (0.29), as reported in the literature <cit.>. The calculated σ is 2.573×10^9 N/m^2. The substrate-film interface quality is essential for magnonic applications. Figure <ref> illustrates the topography and structural quality of the deposited thin film using AFM and XRR. Figure <ref> (a) shows the AFM image with smooth topography; the estimated mean roughness is ≈0.8 nm. Figure <ref> (b) shows the XRR of the TmIG thin film fitted using Parratt's formalism, which gives a thickness of ≈12.2 nm and an interfacial roughness of ≈0.2 nm, which is excellent. The degree of crystallinity can also be assessed from the presence of Laue oscillations in the inset of Figure <ref>(a). The surface roughness estimated from XRR is ≈0.4 nm, of the same order as the AFM estimate. The topography of the deposited thin film is smooth and suggests homogeneous growth on the substrate.
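The misfit strain and interfacial stress quoted above follow from simple arithmetic on the fitted lattice constants; a minimal sketch is given below, assuming the quoted literature values of Young's modulus and Poisson's ratio (the variable names are ours, not from the original analysis).

```python
# Sketch of the misfit-strain and interfacial-stress estimate discussed
# above; lattice constants from the fitted GIXRD data, elastic constants
# from the literature values quoted in the text.

a_GGG = 12.4281   # GGG lattice constant (Angstrom)
a_TmIG = 12.3145  # TmIG lattice constant (Angstrom)
Y = 2.00e11       # Young's modulus of TmIG (N/m^2)
nu = 0.29         # Poisson's ratio

# Tensile misfit strain: Delta_eps = (a_GGG - a_TmIG) / a_GGG
strain = (a_GGG - a_TmIG) / a_GGG

# Biaxial interfacial stress: sigma = Y / (1 - nu) * Delta_eps
sigma = Y / (1.0 - nu) * strain

print(f"misfit strain = {100 * strain:.2f} %")  # ~0.9 %
print(f"stress sigma  = {sigma:.3e} N/m^2")     # ~2.6e9 N/m^2
```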
The low surface and interface roughness demonstrate the potential of the sol-gel-based spin-coating method for further application studies. The elemental composition is probed using XPS. Figure <ref> depicts the survey scan and the high-resolution XPS of the TmIG thin film. Figure <ref> (a) plots the survey scan, which shows the presence of O, C, Fe, Tm, and N in the thin film. The sample is constituted of O, Fe, and Tm; the C and N arise from environmental exposure. The binding energies are calibrated using the carbon peak at 284.2 ± 0.1 eV. Figure <ref> (b) illustrates the high-resolution spectra of the thulium 4d_5/2 core electrons. The Tm 4d_5/2 peak is observed at 175.7 ± 0.1 eV, which is supported by the literature <cit.>. Thulium can be present as Tm^2+ and Tm^3+, but the most stable valence state is Tm^3+. The presence of a satellite peak at 179.2 ± 0.1 eV is consistent with the literature and confirms the Tm^3+ charge state <cit.>. Figure <ref> (c) illustrates the high-resolution spectra of the oxygen 1s core electrons. The main peak at binding energy 529.4 ± 0.1 eV arises from the O 1s electrons bound in TmIG; along with this, a surface contribution of oxygen is also present at 530.8 ± 0.1 eV. Figure <ref> (d) illustrates the high-resolution spectra of the iron 2p core electrons. Iron is present in a 2:3 ratio of octahedral and tetrahedral coordination in TmIG (space group Ia3̅d). Therefore, the Fe 2p_3/2 and 2p_1/2 peaks each comprise two components. The Fe octahedral (Fe_oct) peaks are 2p_3/2 at binding energy 709.8 ± 0.1 eV and 2p_1/2 at binding energy 723.1 ± 0.1 eV. The Fe tetrahedral (Fe_tetra) peaks are 2p_3/2 at binding energy 711.2 ± 0.1 eV and 2p_1/2 at binding energy 724.5 ± 0.1 eV. The theoretical ratio between the areas of octahedral and tetrahedral Fe is 2:3. The experimental area ratio of octahedral to tetrahedral Fe is 0.72 for the 2p_3/2 peak and 0.67 for the 2p_1/2 peak <cit.>. These ratios are very close to the theoretical ratio 2:3, confirming the excellent quality of the sample <cit.>. The separation between the core electron peak and the satellite peak is large, about 8 eV, which further establishes that Fe is in the 3+ valence state and the stoichiometry is balanced <cit.>. The atomic percentages of the constituents Tm^3+, Fe^3+ and O^2- are 16%, 26%, and 58%, respectively. The atomic percentages are calculated using the CasaXPS software <cit.>. As TmIG has applications in magnonics, the magnetic properties of the deposited sample determine its application potential. Figure <ref> presents the magnetic behaviour of the deposited all-solution-based epitaxial TmIG thin film. Figure <ref> (a) illustrates the polar MOKE measurements. The out-of-plane uniaxial anisotropy gives the MOKE signal, which confirms the existence of perpendicular magnetic anisotropy (PMA) at room temperature in the thin film <cit.>. The uniaxial anisotropy (K_U) in the thin film is a combination of various components, such as stress-induced anisotropy (K_σ), magneto-crystalline anisotropy (K_M), and shape anisotropy (K_S) <cit.>. Figure <ref> (b) illustrates the schematics of the PMA in the film. The stress-induced anisotropy is calculated using the formula as follows: K_σ = -(3/2) λ_111 σ, where λ_111 is the magnetostriction constant (-5.2 ×10^-6) of TmIG, as reported in the literature <cit.>. The value of the estimated stress-induced anisotropy (K_σ) is 20.07×10^3 N/m^2. The estimated shape anisotropy (K_S) is 0.49 × 10^3 N/m^2.
The cubic anisotropy constant (K_1), taken from the literature <cit.>, is -1.1× 10^3 N/m^2. The final uniaxial anisotropy value is estimated as follows: K_U = -K_1/12 + K_σ + K_S, where K_1/12 corresponds to K_M. The K_U estimated from the strain is 20.11 × 10^3 N/m^2. The magnetic study of the TmIG is also performed using FMR. FMR probes the precession of the moments about the external field; this precession resonates with the applied microwave frequency. The absorption of that frequency at a particular magnetic field gives the resonance magnetic field and the linewidth of the absorption, which signify the moment's precession and the energy dissipation, respectively. Figure <ref> depicts the FMR results: (a) plots the in-plane resonance magnetic field (H_res) as a function of frequency, fitted using the Kittel equation, and (b) illustrates the linewidth as a function of the frequency. The Kittel equation <cit.> is as presented below: f = (γ/2π) √(H(H+μ_0M_eff)). The estimated effective magnetization (μ_0M_eff) is -0.292 ± 0.003 T. As the PMA is confirmed with the polar MOKE, the negative value of μ_0M_eff shows that the anisotropy dominates the saturation magnetization. μ_0M_eff is composed of the saturation magnetization (μ_0M_S) and the anisotropy field (H_U) through the following equation <cit.>: μ_0M_eff = μ_0M_S - H_U. In the literature, the value of the saturation magnetization (μ_0M_S) of bulk TmIG is 0.1244 T <cit.>. The anisotropy field (H_U) is estimated to be 0.4167 T. The anisotropy constant (K_U) is calculated by substituting the anisotropy field and saturation magnetization into the following equation: K_U = H_U × M_S/2. Substituting these values, K_U is estimated to be 20.6×10^3 ± 0.2×10^3 N/m^2. The K_U estimated from FMR is equivalent to the K_U calculated from the strain in GIXRD. This value is higher than in the literature due to the self-organized growth of TmIG/GGG, which forms a better interface. The value of the gyromagnetic ratio is 19.46 ± 0.09 GHz/T, which is lower than the free-electron value because of the high spin-orbit coupling of the thulium ions. The Landé g-factor estimated from the gyromagnetic ratio is 1.391 ± 0.006. This is smaller than the free-electron value as well as the value reported for TmIG in the literature <cit.>. The low Landé g-factor can be attributed to the high anisotropy present in the TmIG/GGG thin film. The uniform mode is generated while the moment precession relaxes by dissipating energy through extrinsic and intrinsic factors; extrinsic factors include defects and two-magnon scattering, while the intrinsic dissipation arises from the high spin-orbit interaction. These factors can be obtained from a linear fit of the magnetic linewidth as a function of the applied frequency. Figure <ref> (b) depicts the linewidth (Δ H) as a function of the applied frequency. Yellow dots are obtained by analyzing the experimental FMR data as a function of the applied field (the intensity of dP/dH is low, which causes the scatter), and the green line is a linear fit. The relation between the linewidth and the intrinsic and extrinsic damping parameters is as follows: Δ H = Δ H_0 + (4πα/γ) f. Δ H_0 is the extrinsic part of the TmIG energy dissipation, and its fitted value is 17.69 ± 1.08 mT. The intrinsic dissipation is quantified by the Gilbert damping parameter α = 0.0216 ± 0.0028.
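The two fits above (the Kittel relation and the linear linewidth law) are routine least-squares problems; the sketch below shows one way to carry them out with scipy, using illustrative data arrays generated for this example rather than the measured ones, and our own variable names.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative stand-ins for the measured FMR data (frequency in GHz,
# resonance field and linewidth in T); replace with the experimental arrays.
f_GHz = np.array([4.0, 6.0, 8.0, 10.0, 12.0])
H_res = np.array([0.398, 0.487, 0.582, 0.680, 0.780])
dH    = np.array([0.0266, 0.0310, 0.0354, 0.0399, 0.0443])

def kittel(H, g2pi, mu0_Meff):
    """In-plane Kittel relation: f = (gamma/2pi) sqrt(H (H + mu0 M_eff))."""
    return g2pi * np.sqrt(H * (H + mu0_Meff))

# g2pi in GHz/T; mu0_Meff in T (negative for PMA-dominated films)
(g2pi, mu0_Meff), _ = curve_fit(kittel, H_res, f_GHz, p0=[20.0, -0.3])

def linewidth(f, dH0, alpha):
    """dH = dH0 + (4 pi alpha / gamma) f = dH0 + 2 alpha f / (gamma/2pi)."""
    return dH0 + 2.0 * alpha * f / g2pi

(dH0, alpha), _ = curve_fit(linewidth, f_GHz, dH, p0=[0.017, 0.02])

# Anisotropy field and uniaxial constant via mu0 M_eff = mu0 M_S - H_U
mu0_MS = 0.1244                   # bulk TmIG value (T), from literature
H_U = mu0_MS - mu0_Meff           # anisotropy field (T)
M_S = mu0_MS / (4.0e-7 * np.pi)   # saturation magnetization (A/m)
K_U = H_U * M_S / 2.0             # uniaxial anisotropy (J/m^3 = N/m^2)

print(g2pi, mu0_Meff, dH0, alpha, K_U)  # ~19.5 GHz/T, ~-0.29 T, ..., ~2.1e4
```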
The Gilbert damping parameter is of the same order as values reported in the literature for samples prepared with sophisticated methods <cit.>. To put this cost-effective method in context, the literature is compared with the present experimental observations in Table <ref>. All the samples are grown on GGG (111) substrates with different thicknesses, yet the Gilbert damping parameter is of the same order. The estimated K_U is higher compared to the literature, which is supported by the lower value of the Landé g-factor in the present work. In conclusion, sol-gel-based spin coating is utilized to deposit an epitaxial thulium iron garnet (TmIG) thin film on a GGG substrate. The elemental analysis confirms stoichiometric deposition with low interface and surface roughness. The perpendicular magnetic anisotropy of the all-solution-deposited TmIG arises from the stress-induced anisotropy. The presence of high spin-orbit coupling gives rise to the lower gyromagnetic ratio and Landé g-factor, which is well matched with the literature. The intrinsic and extrinsic dissipation factors of TmIG demonstrate the potential of the deposition method. With further improvements, this cost-effective deposition method has the potential to be used for magnonic applications.§ CREDIT AUTHORSHIP CONTRIBUTION STATEMENTRajnandini Sharma: Conceptualization (lead); Synthesis (equal); Data curation (lead); Formal analysis (lead); Project administration (supporting); Validation (equal); Visualization (lead); Writing – original draft (lead); Writing – review & editing (supporting). Pawan Kumar Ojha: Synthesis (equal); Formal analysis (supporting). Simran Sahoo: Formal analysis (supporting); Writing – review & editing (supporting). Rijul Roychowdhury: Experimental contribution (GIXRD). Shrawan K. Mishra: Funding acquisition (lead); Project administration (lead); Supervision (lead); Validation (equal); Visualization (supporting); Writing – original draft (supporting); Writing – review & editing (lead).§ DATA AVAILABILITY STATEMENTThe data that support the findings of this study are available from the corresponding author upon reasonable request.§ DECLARATION OF COMPETING INTERESTThe authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.§ ACKNOWLEDGEMENTThis work is financially supported by the Nano Mission program, DST, India, project No. IIT(BHU)/R&D/SMST/18-19/09. RS acknowledges DST INSPIRE for the INSPIRE fellowship. The authors are thankful to the Saha Institute of Nuclear Physics, Kolkata, for facilitating the experiments at the GIXS Beamline (BL-13), Indus-2, RRCAT, Indore, and for the technical support received during the beamtime. The authors are thankful to Dr. V. R. Reddy and UGC-DAE CSR, Indore, for the MOKE experiment.
Properties of relativistic hot accretion flow around rotating black hole with radially varying viscosity
monu18@iitg.ac.in These authors contributed equally to this work.
Santabrata Das sbdas@iitg.ac.in These authors contributed equally to this work.
Department of Physics, Indian Institute of Technology Guwahati, Guwahati, 781039, Assam, India
January 14, 2024
====================
We examine the effect of a variable viscosity parameter (α) in relativistic, low angular momentum advective accretion flow around rotating black holes. Following recent simulation studies of magnetohydrodynamic disks that reveal the radial variation of α(r), we theoretically investigate the properties of the global transonic accretion flow considering a one-dimensional power-law prescription of the viscosity parameter as α(r) ∝ r^θ, where the viscosity exponent θ is a constant. In doing so, we adopt the relativistic equation of state and solve the fluid equations that govern the flow motion inside the disk. We find that depending on the flow parameters, the accretion flow experiences centrifugally supported shock transition, and such shocked accretion solutions continue to exist for wide ranges of the flow energy, angular momentum, accretion rate, and viscosity exponent. Due to shock compression, the hot and dense post-shock corona (hereafter PSC) can produce high energy radiation after reprocessing the soft photons from the pre-shock flow via inverse Comptonization. Since the PSC is usually described using the shock radius (r_s), compression ratio (R), and shock strength (S), we study the role of θ in deciding r_s, R and S, respectively. Moreover, we obtain the parameter space for shock and find that the possibility of shock formation diminishes as θ is increased. Finally, we compute the limiting value of θ (i.e., θ^max) that admits shock and find that the flow can sustain more viscosity when it accretes onto a rapidly rotating (a_k→ 1) black hole in comparison to a weakly rotating (a_k→ 0) black hole.§ INTRODUCTION Accretion of matter onto a compact object is considered to be the most efficient energy release process. However, in the context of accretion disk theory, the underlying mechanisms responsible for transporting angular momentum through the disk are not yet well understood and remain an intriguing unresolved problem due to the disagreement between the findings of numerical simulations <cit.> and observational results <cit.>. In particular, <cit.> found an apparent discrepancy of a factor of ∼ 10 between observational and theoretical estimates of the viscosity parameter in accretion flow around a black hole (BH). In a seminal work, <cit.> (hereafter SS73) introduced the dimensionless viscosity parameter α, defined as the ratio of the viscous stress to the pressure of the accretion flow. In the absence of a detailed understanding of the viscous mechanism, SS73 considered α to be a global constant all throughout, typically in the range 0.001-0.1 <cit.>. Afterwards, considering the effective shear viscosity driven by the magneto-rotational instability, <cit.> suggested that α may not be constant throughout the flow; instead, it possibly varies both spatially and temporally in an accretion flow. Similar findings were also reported by numerous groups of researchers while examining the overall characteristics of α using magneto-hydrodynamical simulations <cit.>.
Very recently, <cit.> computed the profile of the ratio of the Maxwell stress to the gas pressure and found that, unlike the standard SS73 viscosity parameter, it varies with radial distance as well. Needless to mention, the measure of α from both local and global simulations is undoubtedly challenging, as it depends on several factors, namely the initial magnetic field geometry and strength, grid resolutions, etc. <cit.>. Accordingly, it appears that the range of α values is not well constrained and hence remains inconclusive. Indeed, it is the viscous stress that generally varies inside the disk, and hence α is often considered radially varying. Adopting these ideas, we investigate the properties of relativistic viscous advective accretion flow around rotating BHs. During accretion, matter starts accreting with subsonic speed from a large distance and plunges into the BH supersonically to meet the horizon condition. Because of this, inflowing matter experiences a smooth transition from the sub- to the super-sonic domain at least once while accreting onto the BH. However, the flow can encounter such sonic transitions multiple times depending on the flow energy and angular momentum, and solutions of this kind are especially encouraging because the centrifugal barrier may trigger shock transitions <cit.>. Such shock transitions are possible provided the Rankine-Hugoniot conditions (RHCs) are favourable <cit.>. At the shock, accreting matter jumps from supersonic to subsonic speed, and this renders the convergent post-shock matter hot, dense, and puffed-up, commonly referred to as the post-shock corona (PSC). After the shock, accreting matter continues moving towards the horizon and gradually gains radial velocity. This process continues and, finally, matter crosses the event horizon (r_g) with supersonic speed after passing through a critical point located close to r_g. In reality, the PSC comprises a swarm of hot electrons. These electrons interact with the soft photons from the pre-shock matter via the inverse-Comptonization process and produce hard X-ray radiation <cit.>. When the RHCs are not satisfied but the flow still possesses more than one critical point, the PSC is expected to exhibit time-varying modulation that may give rise to the quasi-periodic variations of emitted photons commonly observed from Galactic BH sources <cit.>. Motivated by this, we examine the structure of the steady, viscous, advective flow that accretes onto a rotating BH. We adopt a viscosity parameter that is radially varying as α(r) = α_0 (r/r_g)^θ, where r is the radial distance, r_g is the gravitational radius, α_0 is the proportionality constant, and θ is the viscosity exponent. We consider the relativistic equation of state (REoS) that satisfactorily accounts for the thermal properties of the low angular momentum accreting matter <cit.>. Further, we use a recently developed pseudo-potential to mimic the BH gravity <cit.> for spin values ranging from the weakly rotating (a_k→ 0) to the rapidly rotating (a_k→ 1) limit. Considering all these, we calculate the global transonic accretion solutions (GTAS) by solving the fluid equations using the accretion model parameters. Moreover, we identify the requisite GTAS that admit standing shock transitions, and render the dependencies of the dynamical as well as thermodynamical flow variables on the model parameters.
In addition, we investigate various shock properties, such as the shock radius (r_s), compression ratio (R), and shock strength (S), and study how r_s, R and S depend on the viscosity (α) and accretion rate (ṁ). We also determine the range of flow energy (E) and angular momentum (λ) that renders shock-induced GTAS, and ascertain the domain of the shock parameter space in the λ-E plane. We find that such a parameter space is altered as the viscosity exponent (θ) is varied. Since θ plays a pivotal role in deciding the accretion disc structure, it is important to explore the limiting value of the viscosity exponent (θ^max). Accordingly, we estimate θ^max and find that it depends strongly on both the BH spin (a_k) and α_0. This paper is organized in the following manner. In Section 2, we describe the model considerations and basic equations. We obtain GTAS both in the absence and presence of shock and discuss the shock properties in Section 3. In Section 4, we discuss how the shock parameter space alters with the change of viscosity and calculate the maximum limit of the viscosity exponent for shock. Finally, in Section 5, we present conclusions.§ BASIC CONSIDERATIONS AND MODEL EQUATIONS We begin with an axisymmetric, steady-state, height-averaged viscous advective accretion disk around a rotating BH in the presence of synchrotron cooling. We also consider that such a disk remains confined around the disk equatorial plane. We approximate the effect of gravity by adopting an effective pseudo-potential <cit.> that successfully delineates the spacetime warping due to a rotating BH. The accretion process inside the disk is driven by the viscous stress (W_rϕ), and we consider W_rϕ = -αΠ, where Π denotes the vertically integrated pressure including ram pressure, and α, the viscosity parameter, is a dimensionless quantity that is assumed to vary with the radial coordinate. Needless to mention, α absorbs all the detailed microphysics of the viscous processes. With these considerations, we express all the governing equations using M_BH=G=c=1, where M_BH, G and c denote the BH mass, gravitational constant and light speed, respectively. With this, we write length in units of r_g=GM_BH/c^2, and accordingly, time and angular momentum are written in units of r_g/c and r_g c.§.§ Governing Equations The basic fluid equations that describe the motion of the accreting matter inside the disc around a rotating BH are as follows:(a) Conservation equation for radial momentum: υ (dυ/dr) + (1/hρ)(dP/dr) + dΦ_e^eff/dr = 0, where υ, P, ρ and h denote the flow velocity, gas pressure, mass density and specific enthalpy, respectively. In addition, Φ_e^eff refers to the effective potential of a rotating BH that mimics the spacetime geometry at the disc equatorial plane and is given by <cit.>, Φ_e^eff = (1/2) ln[r Δ/(a_k^2 (r+2) - 4 a_kλ + r^3 - λ^2(r-2))], (1a) where λ is the specific angular momentum of the accreting matter, a_k denotes the BH spin, and Δ = r^2 - 2 r + a_k^2. (b) Mass conservation equation: Ṁ = 2πυΣ√(Δ), where Ṁ is the mass accretion rate. In this work, we do not consider the ejection of matter in the form of outflows/jets and hence, Ṁ is treated as a global constant in the absence of any mass loss from the disk. Moreover, for convenience, we express the accretion rate in units of the Eddington accretion rate as ṁ = Ṁ/Ṁ_Edd, where Ṁ_Edd = 1.44 × 10^17 (M_BH/M_⊙) g s^-1.
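As a quick aid to intuition, equation (1a) is straightforward to evaluate numerically; the sketch below is a minimal transcription in our own variable names, working in the unit system G = M_BH = c = 1 stated above.

```python
import numpy as np

def phi_eff(r, a_k, lam):
    """Effective pseudo-potential of eq. (1a) on the equatorial plane;
    r is in units of r_g, lam is the specific angular momentum,
    a_k the BH spin, and Delta = r^2 - 2r + a_k^2."""
    Delta = r**2 - 2.0 * r + a_k**2
    denom = a_k**2 * (r + 2.0) - 4.0 * a_k * lam + r**3 - lam**2 * (r - 2.0)
    return 0.5 * np.log(r * Delta / denom)

# Example: potential felt by matter with lam = 2.44 around a rapidly
# rotating BH (a_k = 0.99) at r = 10 r_g.
print(phi_eff(10.0, 0.99, 2.44))
```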
Further, Σ is the vertically integrated surface mass density of the accreting matter <cit.>, written as Σ = 2 ρ H, where H refers to the local disc half-thickness expressed as <cit.>, H^2 = P r^3/(ρ F), with F = γ_ϕ^2 [(r^2 + a_k^2)^2 + 2 Δ a_k^2]/[(r^2 + a_k^2)^2 - 2 Δ a_k^2] and γ_ϕ^2 = 1/(1-λΩ). Here, Ω [= (2 a_k+λ(r-2))/(a_k^2(r + 2) - 2 a_kλ + r^3)] denotes the angular velocity of the accreting matter.(c) Conservation equation for azimuthal momentum: υ (dλ/dr) + (1/Σ r) d(r^2 W_rϕ)/dr = 0, where we consider the rϕ component of the viscous stress as W_rϕ = - αΠ = - α (W + Συ^2) <cit.>. In equation <ref>, W denotes the vertically integrated pressure and Σ represents the vertically integrated mass density. In this work, we consider a radially varying viscosity parameter resembling a power-law distribution as α = A (r/r_g)^θ = α_0 r^θ, (3a) where A, θ and α_0 are regarded as constants all throughout the flow. Similar findings on the radial variation of the viscosity parameter have also recently been reported by several groups of workers <cit.>. Note that when θ→ 0, we obtain the globally constant viscosity parameter α = α_0, as in the case of the `α model' prescription <cit.>.(d) Equation for energy balance: Συ T (ds/dr) = [υ H/(Γ -1)] (dP/dr - (Γ P/ρ) dρ/dr) = Q^- - Q^+. In equation (<ref>), T is the flow temperature, s is the specific entropy, and Γ is the adiabatic index. Moreover, during accretion, the heat gained and lost by the flow are denoted by Q^+ and Q^-, respectively. Following <cit.>, we adopt the mixed shear stress prescription to compute the viscous heating of the flow, which is given by Q^+ = -αρ H r (P/ρ+υ^2) dΩ/dr. In general, the bremsstrahlung cooling process is regarded as an inefficient cooling process <cit.>. Hence, in this work, we consider energy loss due to synchrotron cooling only. Accordingly, the synchrotron emissivity of the convergent accretion flow is obtained as <cit.>, Q^- = Q^syn = (16/3)(e^2/c)(e B/m_e c)^2 (k_B T/m_e c^2)^2 n_e erg cm^-3 s^-1, where e, m_e, and n_e are the charge, mass, and number density of the electrons, respectively, k_B is the Boltzmann constant, and B is the magnetic field. In the astrophysical context, the presence of magnetic fields is ubiquitous inside the disc and hence, the ionized flow should emit synchrotron photons, causing the accreting flow to cool down significantly. Indeed, the characteristics of structured magnetic fields inside the disc still remain unclear, and hence, we rely on a random or stochastic magnetic field. For the purpose of simplicity, we use equipartition to estimate the magnetic field, obtained as B = √(8 πβ P), where β is a dimensionless constant. Evidently, β ≲ 1 confirms that the magnetic fields remain confined within the accretion disc <cit.>. For the purpose of representation, in this work, we choose β = 0.1. In order to close the governing equations (<ref> - <ref>), one requires an equation of state (EoS) that relates P, ρ and the internal energy (ϵ) of the flow. Hence, we consider an EoS for relativistic flow, which is given by <cit.>, ϵ = ρ f/(1+m_p/m_e), with f = [1+Θ((9Θ + 3)/(3Θ +2))] + [m_p/m_e + Θ((9Θ m_e + 3 m_p)/(3Θ m_e + 2 m_p))], where Θ (=k_B T/m_e c^2) is the dimensionless temperature of the flow. Utilizing the relativistic EoS, we express the polytropic index as N = (1/2) df/dΘ, the adiabatic index as Γ = 1 + 1/N, and the sound speed as C_s^2 = Γ P/(e + P) = 2ΓΘ/(f + 2Θ), respectively <cit.>.
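To make the Θ-dependence of the REoS concrete, the sketch below evaluates f(Θ) and the derived quantities N, Γ and C_s numerically; the finite-difference derivative and the variable names are our own choices, not part of the original formalism.

```python
import numpy as np

MP_ME = 1836.152672  # proton-to-electron mass ratio m_p/m_e

def f_REoS(Theta):
    """f(Theta) of the REoS quoted above, Theta = k_B T / (m_e c^2);
    first bracket: electron part, second bracket: proton part."""
    f_e = 1.0 + Theta * (9.0 * Theta + 3.0) / (3.0 * Theta + 2.0)
    f_p = MP_ME + Theta * (9.0 * Theta + 3.0 * MP_ME) / (3.0 * Theta + 2.0 * MP_ME)
    return f_e + f_p

def thermo(Theta, h=1e-6):
    """Polytropic index N = (1/2) df/dTheta (centered difference),
    adiabatic index Gamma = 1 + 1/N, and sound speed squared
    Cs^2 = 2 Gamma Theta / (f + 2 Theta)."""
    N = 0.5 * (f_REoS(Theta + h) - f_REoS(Theta - h)) / (2.0 * h)
    Gamma = 1.0 + 1.0 / N
    Cs2 = 2.0 * Gamma * Theta / (f_REoS(Theta) + 2.0 * Theta)
    return N, Gamma, Cs2

# Sanity checks: Gamma -> 5/3 for Theta << 1 and Gamma -> 4/3 for Theta >> 1.
print(thermo(1e-4)[1], thermo(1e3)[1])
```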
Further, following <cit.> and with the help of equation (<ref>), we compute the entropy accretion rate as ℳ̇ = υ H √(Δ) [Θ^2(2+3Θ)(3Θ + 2m_p/m_e)]^3/4 exp(k_1), where k_1 = 0.5 × [f/Θ - (1+ m_p/m_e)/Θ]. Using equations (<ref>-<ref>), we get the radial velocity gradient in the form of the wind equation as dυ/dr = N(r, υ, Θ, λ, α)/D(r, υ, Θ, λ, α), where both N and D depend on r, υ, Θ, λ, α, and their explicit mathematical expressions are given in Appendix A. Using equation (<ref>), we obtain the radial derivatives of the angular momentum (λ) and dimensionless temperature (Θ) as dλ/dr = λ_1 + λ_2 (dυ/dr), and dΘ/dr = Θ_1 + Θ_2 (dυ/dr), where the mathematical forms of the coefficients λ_1, λ_2, Θ_1, and Θ_2 are described in Appendix A. Indeed, during accretion, subsonic flow commences accreting towards the BH from a faraway distance r_edge (hereafter the disc outer edge) and crosses the BH horizon (r_h) supersonically. Therefore, the flow ought to pass through a critical point (r_c) where it smoothly transits from the subsonic to the supersonic domain. Note that for r > r_h, the flow may possess multiple critical points depending on the flow parameters. Following <cit.>, we carry out the critical point analysis, where (dυ/dr) at r_c takes the 0/0 form, since one gets N = D = 0 at the critical point. Because of this, we make use of l'Hospital's rule while computing (dυ/dr) at r_c. For physically acceptable solutions around a BH, we consider saddle-type critical points only, where (dυ/dr) yields two distinct real values at the critical point <cit.>. When r_c forms near r_h, we refer to it as the inner critical point (r_in); otherwise, it is termed the outer critical point (r_out) <cit.>.§ GLOBAL TRANSONIC ACCRETION SOLUTIONS (GTAS) The global transonic accretion solutions (GTAS) are obtained by solving the coupled differential equations (<ref> - <ref>) for a set of model parameters. Some of these parameters, namely α_0, θ, ṁ and a_k, remain constant all throughout, while the others, i.e., the critical point r_c and the angular momentum λ_c at r_c, are treated as local parameters. Using the model parameters, we first integrate equations (<ref>-<ref>) starting from r_c up to r_h and again from r_c to r_edge (∼ 1500). Thereafter, a complete GTAS around the BH is obtained by joining both parts of the solution. Figure <ref> shows typical sets of global transonic accretion solutions (GTAS) for flows injected from r_edge=1500 with various θ values. Following the procedure mentioned in <cit.>, we calculate the accretion solution containing the inner critical point r_in=5.50, where we choose λ_in=3.15, α_0=0.01, θ = 0.0, a_k=0.0, and ṁ=0.01. This renders a global accretion solution as it successfully connects the BH horizon r_h with r_edge. We note the flow variables at r_edge as λ_edge=21.10, υ_edge=8.9 × 10^-4, and Θ_edge=0.98115. In reality, we can get the same accretion solution once the flow equations are integrated towards the BH horizon using these noted boundary values. Here, the black solid curve denotes the radial velocity υ(r), whereas the black dashed curve represents the sound speed C_s(r) of the flow for θ=0.0. Next, we increase θ to 0.04 while keeping the other flow variables unchanged at r_edge and calculate the GTAS by suitably tuning υ_edge=1.18 × 10^-3 and Θ_edge=0.98106. Here, we additionally require the boundary values of υ_edge and Θ_edge to integrate the fluid equations from r_edge, as the critical point remains unknown.
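For reference, the entropy accretion rate quoted above is simple to evaluate once the local flow variables are known; the sketch below (our own variable names) reuses f_REoS and MP_ME from the previous snippet.

```python
import numpy as np

def entropy_accretion_rate(r, v, Theta, a_k, H):
    """Entropy accretion rate (arbitrary normalization) following the
    expression quoted in the text; f_REoS and MP_ME are the function
    and constant defined in the previous sketch."""
    Delta = r**2 - 2.0 * r + a_k**2
    k1 = 0.5 * (f_REoS(Theta) - (1.0 + MP_ME)) / Theta
    bracket = Theta**2 * (2.0 + 3.0 * Theta) * (3.0 * Theta + 2.0 * MP_ME)
    return v * H * np.sqrt(Delta) * bracket**0.75 * np.exp(k1)
```

Since ℳ̇ increases across a dissipative shock, comparing its pre- and post-shock values provides the entropy criterion invoked later to prefer shocked solutions over shock-free ones.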
The solution is depicted using the blue color, where we find that the inner critical point is shifted outwards to r_in=6.019 for θ = 0.04. Similarly, for θ=0.0596, the flow solution (red) continues to maintain a similar character as in the cases of θ=0.0 and 0.04, having its inner critical point at r_in=6.464. Solutions of this kind that pass through r_in are similar to ADAF-type accretion solutions <cit.>. When θ is increased further, the flow solution (in green) changes its character and becomes transonic at the outer critical point r_out=83.488 instead of the inner critical point. Usually, the solutions containing r_out are of Bondi type <cit.>. For the purpose of clarity, the regions around the critical points (r_in and r_out) are zoomed and shown using filled circles in the insets. We tabulate the flow variables at r_edge, r_out, and r_in in Table <ref>. Overall, we observe that the role of θ is pivotal in deciding the characteristics of GTAS around BHs. Next, we present the accretion flow solutions in figure <ref>a, where the radial variation of the Mach number (M=υ/C_s) is demonstrated. Here, all the solutions become transonic at r_in=6.17 with λ_in=3.01, α_0=0.01, ṁ = 0.01, and a_k=0.0, respectively. For θ = 0.6, we obtain a GTAS that smoothly connects the BH horizon with r_edge, where the flow angular momentum matches its Keplerian value, as shown by the dotted curve. We gradually decrease θ and find that below the limiting value θ = 0.591, the accretion solution becomes closed, as shown using the solid curve. The result plotted using the dashed curve corresponds to θ = 0.3. The closed accretion solutions passing through r_in are noteworthy as they can join with another solution passing through r_out via centrifugally supported shocks. Indeed, the existence of shocks in advective accretion flows has important implications because solutions of this kind satisfactorily explain the temporal and spectral properties of BH sources <cit.>. Accordingly, in the subsequent sections, we investigate the shock-induced GTAS around BHs. In figure <ref>b, we present the variation of angular momentum for the solutions presented in figure <ref>a, where the big-dashed curve denotes the Keplerian angular momentum profile. In figure <ref>a, we depict an example of a shock-induced global accretion solution, which passes through both r_out=395.32 and r_in=5.808 while accreting onto a stationary BH (a_k=0.0). Here, matter is injected sub-sonically from r_edge=1500 with λ_edge=4.68, υ_edge=8.35× 10^-3, Θ_edge=0.336, ṁ=0.01, α_0 = 0.01, and θ=0.1. As the subsonic matter proceeds towards the BH, it becomes supersonic at r_out=395.32 and continues further inwards. Indeed, accreting matter can seamlessly cross the BH horizon after passing r_out, as shown using the dotted curve. Interestingly, the supersonic accreting matter sees an alternative possibility of a discontinuous shock transition of the flow variables to the subsonic branch, as the Rankine-Hugoniot conditions (RHCs) <cit.> for a standing shock are satisfied at the shock radius (r_s). We determine the standing shock location r_s=42.07 for a vertically integrated flow by employing the shock conditions (RHCs), which are (a) continuity of energy flux: [E]=0, (b) continuity of mass flux: [Ṁ]=0, and (c) continuity of momentum flux: [W + Συ^2] =0 across the shock.
Here, we express the local energy of the flow as E=υ^2/2+log h + Φ^eff_e, and the quantities within the square brackets denote their differences across the shock transition location. We show the shock transition using a vertical arrow. Immediately after the shock transition, the radial velocity of the matter decelerates; however, it progressively increases as the matter proceeds towards the horizon. Eventually, matter crosses the BH horizon at supersonic speed after passing through the inner critical point at r_in=5.808. In the figure, arrows show how the matter moves towards the BH. Note that the post-shock branch of the shocked solution is similar in nature to the solution for θ = 0.3 in figure <ref>. Further, we calculate the shock-induced GTAS around a rotating BH of a_k=0.99 for flows injected with λ_edge=3.68, υ_edge=8.33 × 10^-3, Θ_edge=0.342, ṁ=0.01, α_0 = 0.01, and θ=0.1 from r_edge=1500. The shock is formed at r_s = 14.33, in between r_out=405.46 and r_in=1.492. Here, we observe that for a chosen set of (α_0,θ), shock exists around a rapidly rotating BH when λ_edge assumes a relatively smaller value, and vice versa. Indeed, this is expected because accreting matter crosses the BH horizon with angular momentum lower than the marginally stable angular momentum (λ_ms), and λ_ms evidently decreases with the increase of a_k <cit.>. In Table <ref>, we present the flow variables at the critical points for the shock-induced GTAS presented in Figure <ref>. In figure <ref>, we present the profiles of the different flow variables for the shocked accretion solution presented in figure <ref>. We depict the variation of the radial velocity (υ) in Figure <ref>(a), where a discontinuous transition of υ is observed at the shock radius (r_s). We show the radial variation of the mass density (ρ) of the accreting matter in figure <ref>(b) and find that ρ increases monotonically with the decrease of r in the pre-shock branch, although a sudden jump of ρ occurs across the shock front. Such a density jump at r_s is inevitable in order to maintain the conservation of mass flux (see equation 2). Because of this, the PSC experiences density compression, which is eventually quantified in terms of the compression ratio defined as R=Σ_+/Σ_-, where Σ (=2 ρ H) is the vertically integrated mass density of the accretion flow at a given radial coordinate. We obtain R=2.31. In figure <ref>(c), the variation of the temperature (T in Kelvin) with r is shown. Indeed, the temperature of the PSC shoots up as the kinetic energy of the upstream (pre-shock) flow is transformed into thermal energy in the downstream (post-shock) flow. Usually, the temperature jump at r_s is quantified by means of the shock strength (S), defined as S=M_-/M_+, with M_- (M_+) being the pre-shock (post-shock) Mach number. We obtain S=2.92. We present the entropy accretion rate (ℳ̇) in figure <ref>(d) and show that ℳ̇ at the PSC is larger compared to the pre-shock region. This discernibly indicates that the shock-induced GTAS are favourable over the shock-free GTAS according to the second law of thermodynamics <cit.>. We demonstrate the angular momentum (λ) variation in figure <ref>(e) and find that the transport of λ remains feeble within several hundred gravitational radii, although it increases rapidly towards the outer edge of the disk (r_edge). This possibly happens as the viscous time-scale becomes larger than the infall time-scale of the accretion flow around the BH.
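The two jump diagnostics R and S defined above are trivial to evaluate once the flow variables on both sides of the front are known; a minimal sketch follows, with illustrative pre-/post-shock states (chosen to satisfy mass-flux continuity and to reproduce the quoted R = 2.31 and S = 2.92) rather than actual model output.

```python
def shock_diagnostics(pre, post):
    """Compression ratio R = Sigma_+ / Sigma_- and shock strength
    S = M_- / M_+, with '-' the pre-shock and '+' the post-shock side.
    Each state is a dict with vertically integrated density Sigma,
    radial velocity v and sound speed Cs (code units)."""
    R = post["Sigma"] / pre["Sigma"]
    S = (pre["v"] / pre["Cs"]) / (post["v"] / post["Cs"])
    return R, S

# Illustrative states obeying Sigma_+ v_+ = Sigma_- v_- (mass-flux continuity).
pre  = {"Sigma": 1.000, "v": 0.3000, "Cs": 0.1450}   # M_- ~ 2.07 (supersonic)
post = {"Sigma": 2.310, "v": 0.1299, "Cs": 0.1833}   # M_+ ~ 0.71 (subsonic)

R, S = shock_diagnostics(pre, post)
print(f"R = {R:.2f}, S = {S:.2f}")  # R = 2.31, S = 2.92
```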
We show the disc thickness scaled with the radial coordinate in figure <ref>(f) and find that H/r < 1 is maintained all throughout (r_h ≲ r ≤ r_edge) in the presence of shock. Furthermore, we display the variation of the scattering optical depth τ in figure <ref>g. In this work, τ is given by τ = κρ H, where κ =0.38 cm^2 g^-1. Since τ < 1, particularly at r < r_s, the disc continues to remain optically thin there. Hence, the hard X-ray radiation originating from the PSC would escape with ease. In Fig. <ref>h, we present the synchrotron emissivity (in erg cm^-3 s^-1) with r. From the figure, it is evident that the net energy loss from the PSC is far more profound in comparison with the pre-shock flow. In a similar way, we depict the different flow variables, such as υ, ρ, T, ℳ̇, λ, H/r, τ and Q^syn, in figure <ref>, corresponding to the shocked accretion flow around a spinning BH of a_k=0.99 presented in figure <ref>b. The figure evidently indicates that the overall radial variations of these quantities are qualitatively similar to the results for a_k=0.0, except in the region at the vicinity of the BH horizon (r_h). In particular, we find that τ continues to increase as the flow approaches the horizon (r → r_h), although it is seen to decrease for the weakly rotating BH (see Fig. <ref>). This happens because τ is broadly regulated by the density (ρ), and ρ increases significantly at the vicinity of a BH having a_k=0.99. We skip the detailed descriptions of the other quantities to avoid repetition. In figure <ref>, we display how the shock radius changes with θ for flows injected with fixed outer boundary values at r_edge. Here, we choose ṁ = 0.01 and α_0=0.01. In the top panel, we set the energy E_edge = 1.0004 and angular momentum λ_edge = 4.01 at r_edge = 1500 and allow the flow to accrete onto a non-rotating BH of a_k=0.0. We note that for θ=0.0, the subsonic flow becomes supersonic at r_out=386.573 and the shock is formed at r_s=129.30, as the RH conditions are satisfied there. We also calculate the compression ratio as well as the shock strength for this solution and obtain R=1.65 and S=1.88. This solution is shown with the solid curve, whereas the solid vertical arrow denotes the shock radius. As θ is increased to θ=0.01, the shock front moves inwards to r_s = 69.57. This happens due to the fact that the increase of θ enhances the viscous effect in the accretion flow, and hence, the outward transport of λ becomes more intense. This effectively weakens the centrifugal repulsion, resulting in the shock moving closer to the BH horizon. Evidently, this finding suggests that shock formation in the accretion flow is centrifugally driven. Here, we obtain R=2.07 and S=2.51. We plot this solution using the dotted curve. For the purpose of representation, we plot another solution for θ = 0.02 using the dashed curve. Indeed, the value of θ cannot be increased indefinitely, and we find that beyond a limiting value of θ, which is θ_c=0.033, the RHCs for shock are not favourable and hence, the shock does not form. Interestingly, a time-varying shock may still be possible; however, its investigation is beyond the scope of this paper. Note that θ_c does not possess a universal value, as it depends on the other flow variables. The accretion solution for θ_c=0.033 is depicted using the dot-dashed curve. In the bottom panel, we present the shocked accretion solutions for flows accreting onto a rotating BH of a_k=0.99. Here, we choose energy E_edge = 1.00023 and angular momentum λ_edge = 2.67 at r_edge = 1500.
The solutions depicted with solid, dotted, dashed and dot-dashed curves correspond to θ = 0, 0.01, 0.015, and θ_c=0.021, respectively. In Table <ref>, we tabulate the flow quantities corresponding to these accretion solutions harbouring shock waves. In figure <ref>, the variations of the shock properties, namely the shock radius r_s (upper panel), compression ratio R (middle panel), and shock strength S (lower panel), are depicted with θ. In the left panels, we display the results for the stationary BH of a_k=0.0, where flows are injected from r_edge=1500 with identical energy (E_edge=1.0004) and angular momentum (λ_edge=3.98). Here, we set ṁ=0.01 and obtain the results for α_0 = 0.01 (solid), 0.011 (dashed) and 0.012 (dotted), respectively. Figure <ref>a clearly shows that stable shocks exist for an ample range of θ values. As already anticipated, for a fixed α_0, r_s decreases with θ, as it weakens the centrifugal repulsion against the gravitational attraction. Moreover, for a given θ, when α_0 is higher, the angular momentum transport becomes more efficient, weakening the centrifugal barrier. Because of this, the shock front proceeds inwards. Notice that for a fixed α_0, when θ > θ_c, the shock disappears as the RH conditions are not satisfied. As indicated earlier, θ_c strictly depends on the other flow variables (see Section 4). Indeed, the radiative cooling processes that primarily determine the flux of the high energy radiation from the disk are strongly dependent on both the ρ and T distributions across the shock front <cit.>. Keeping this in mind, in figure <ref>b, we depict the variation of the compression ratio (R, a measure of the density compression across the shock) as a function of θ, corresponding to the shock-induced GTAS presented in figure <ref>a. We observe that when θ is increased, the shock is generally pushed towards the BH. Due to this, the PSC becomes further compressed, causing the overall increase of R. A similar trend is generally observed in the variation of R irrespective of the α_0 values, provided shock exists. Similarly, in figure <ref>c, we display how the shock strength (S, a measure of the temperature jump across the shock front) varies with θ for the solutions presented in figure <ref>a. It is clear that for a fixed α_0, the shock strength S monotonically increases as θ is increased and ultimately shifts from the weaker to the stronger regime. We continue the analyses and present the outcome for a_k=0.99 in the right-side panels of figure <ref>, where flows are injected from r_edge=1500 with identical E_edge=1.0004, λ_edge=2.44, and ṁ=0.001. In order to keep the θ range intact, here we choose a relatively smaller ṁ compared to that used for flows around the weakly spinning BH. In figures <ref>d-f, results are obtained for α_0 = 0.005 (solid), 0.0055 (dashed) and 0.006 (dotted), respectively. Note that the overall variations of r_s, R, and S with θ for a_k=0.99 appear qualitatively similar to those delineated in the left panels for a_k=0.0. In figure <ref>, we investigate the effect of the accretion rate (ṁ) on shock triggering in a convergent accretion flow. Such an exercise is very useful, as the radiative cooling processes are regulated by ṁ. While doing so, we inject matter onto a non-rotating BH (a_k=0.0) from r_edge=1500 with E_edge=1.0004, λ_edge=3.98 and α_0=0.01. We display the obtained results in the left panels, where solid, dashed, and dotted curves correspond to ṁ = 0.01, 0.1 and 0.2, respectively.
Similarly, for a_k=0.99, we choose r_edge=1500, E_edge=1.0004, λ_edge=2.44, α_0=0.005, and the results are drawn in the right panels. The spin values are marked at the top of the figure. In panels (a) and (d), we present the variation of r_s with θ, where shocks are seen to proceed further inward, close to the BH horizon, as θ increases. This feature is commonly observed irrespective of the ṁ values, provided the shock is formed. Moreover, for a given θ, when ṁ is increased, the shock front moves inward. This is not surprising because higher ṁ eventually increases the effect of cooling in the PSC, and accordingly, the thermal pressure decreases. Hence, the shock settles down at a location closer to the horizon to maintain pressure balance on both sides of the discontinuity. In panels (b) and (e), we compare the compression ratio (R) and notice that R increases with θ. This evidently indicates that for a convergent flow, the accretion shock becomes stronger as r_s decreases. We further find that the shock strength S increases monotonically with θ, and for a given θ, when shocks form closer to the BH, S is enhanced, and vice versa (see panels (c) and (f)). Next, we investigate the effect of BH rotation (a_k) on r_s and present the obtained results in figure <ref>, where the variation of r_s with a_k for different θ is depicted. For this analysis, we inject matter from r_edge = 1500 with λ_edge = 3.78, E_edge = 1.0004, ṁ = 0.01 and α_0 = 0.01. In the figure, solid, dashed, dotted, dot-dashed and big-dashed curves are used to indicate the results corresponding to θ=0.0, 0.03, 0.05, 0.07, and 0.095, respectively. It is clear from the figure that for a fixed θ, r_s moves outwards from the BH horizon as a_k increases for flows with fixed outer boundary conditions. Accordingly, the effective size of the PSC is increased, and hence, the possibility of up-scattering the soft photons from the pre-shock disk at the PSC in producing high energy radiation is increased. We further notice that for a given θ, shocks form for a particular range of a_k, and as θ is increased, the range of a_k is shifted to the higher side. This is not surprising because, for a fixed λ_edge, higher θ increases the angular momentum transport, causing the overall reduction of λ(r) close to the BH. Indeed, it is evident that for higher a_k, shock exists when λ is relatively low <cit.>, and this happens because of the spin-orbit coupling present in the effective potential (see equation (1a)) describing the spacetime geometry around the BH. These findings are consistent with the results of <cit.>. On the contrary, we observe that r_s decreases due to the increase of θ for flows accreting onto a BH having a fixed spin (a_k) value. Moreover, we observe that the lower limit of r_s is gradually reduced when the flow with fixed outer boundary accretes onto BHs of increasing spin (a_k) values.§ SHOCK PARAMETER SPACE In this section, we proceed further to identify the region of parameter space that admits stationary shock solutions for viscous advective accretion flow around BHs. It is evident from figures <ref>-<ref> that shock-induced GTAS are obtained for a range of angular momentum and θ values. Hence, we examine how the shock properties alter with θ in a viscous flow, and classify the effective domain of the parameter space in terms of θ in the λ_in-E_in plane, where λ_in and E_in refer to the angular momentum and energy of the flow at r_in <cit.>.
We choose E_in and λ_in in defining the shock parameter space, as the flow is expected to advect into the BH with energy and angular momentum resembling these values. The results are presented in figure <ref>, where the top panel is for a_k=0.0 and the effective regions bounded by solid, dashed, dot-dashed, and dotted curves are for θ = 0.0, 0.1, 0.3, and 0.35, respectively. Here, we set ṁ=0.01. Similarly, in the bottom panel, we illustrate the results for a_k=0.99, where solid, dashed, dot-dashed, and dotted curves are used to separate the regions for θ = 0.0, 0.3, 0.5, and 0.7, respectively. Here, we choose ṁ=10^-4. In each panel, the a_k and θ values are marked. We observe that in both panels, the effective domain of the λ_in-E_in space for standing shock is reduced as θ increases, and accordingly, the possibility of shock formation is also diminished <cit.>. Indeed, when θ exceeds its limiting value (i.e., θ > θ^max), the parameter space for standing shock disappears. Note that for θ > θ^max, the flow angular momentum at the vicinity of the BH is reduced to such a limit that the centrifugal barrier becomes very weak and cannot trigger the shock transition. Hence, the standing shock ceases to exist. Nevertheless, time-dependent shocked accretion solutions may exist for θ > θ^max, as examined by numerical simulations studying the oscillatory behaviour of shock solutions <cit.>. Interestingly, solutions of this kind satisfactorily account for the quasi-periodic oscillation (QPO) phenomenon that is commonly observed in BH-XRBs <cit.>. However, we indicate that the study of time-dependent shock solutions is beyond the scope of this framework, and we plan to consider it as future work. We continue our study to examine the ranges of λ_in and θ in terms of α_0 that admit shock-induced GTAS. In order to do that, we set ṁ=0.01 and scan the range of θ for a given set of (λ_in,α_0) by freely varying r_in (equivalently E_in). The obtained results for a_k=0.0 and 0.99 are shown in figure <ref>. In panel (a), the solid, dashed, and dotted curves, obtained for α_0=0.01, 0.02, and 0.03, separate the region of shocked accretion solutions from the shock-free solutions. Similarly, in panel (b), the solid, dashed, and dotted curves are obtained for α_0=0.02, 0.04, and 0.06. We observe that the permissible region for shock in the λ_in-θ plane gradually diminishes with the increase of α_0 for both slowly and rapidly rotating BHs. In addition, we find that for a given α_0, θ attains its maximum value, namely θ^max, at a fixed λ_in. We further observe that as α_0 is increased, the value of θ^max decreases and is obtained at smaller λ_in values. In figure <ref>, we demonstrate how θ^max varies with α_0. Open squares represent the results for a_k=0.0, while open circles are for a_k=0.99. These data points are further fitted empirically as θ^max = δ α_0^-1/2 - η e^-ξα_0, where δ, η, and ξ are constants whose values strictly depend on a_k and are presented in Table <ref>. In the figure, solid curves denote the best-fit representations of the fitted function described above for a_k=0.0 and 0.99, respectively. The figure clearly indicates that accretion flows with relatively higher viscosity continue to harbour shock waves around highly spinning BHs as compared to weakly rotating BHs.
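Fitting the empirical form above to the (α_0, θ^max) points is a routine nonlinear least-squares task; the sketch below uses scipy's curve_fit on synthetic data generated from assumed constants (δ, η, ξ), since the actual scan results are those shown in the figure and table.

```python
import numpy as np
from scipy.optimize import curve_fit

def theta_max_model(alpha0, delta, eta, xi):
    """Empirical form quoted in the text:
    theta_max = delta * alpha0**(-1/2) - eta * exp(-xi * alpha0)."""
    return delta / np.sqrt(alpha0) - eta * np.exp(-xi * alpha0)

# Synthetic (alpha0, theta_max) points generated from assumed constants;
# in practice these would be the open symbols of the figure.
alpha0 = np.array([0.005, 0.01, 0.02, 0.04, 0.08])
tmax = theta_max_model(alpha0, 0.05, 0.35, 20.0)

popt, pcov = curve_fit(theta_max_model, alpha0, tmax, p0=[0.1, 0.5, 10.0])
delta, eta, xi = popt
print(f"delta = {delta:.3f}, eta = {eta:.3f}, xi = {xi:.1f}")
```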
This happens mostly because the outer critical points turn into nodal type <cit.> at higher viscosity for a_k→ 0, and hence, the shock-induced GTAS ceases to exist.§ CONCLUSIONS In this study, we examine the structure of the viscous accretion flow with a more general viscosity prescription than those usually discussed in the literature <cit.>. In particular, we consider the viscosity parameter to vary with the radial coordinate as α(r) and observe that GTAS continue to exist around rotating BHs. Depending on the input parameters, i.e., viscosity, angular momentum and accretion rate, the accretion flow may harbour shock waves. Indeed, the shock-induced GTAS are promising in the sense that they have the potential to explain the spectro-temporal properties of BH sources <cit.>. Our important findings are presented below. * There exist global transonic accretion solutions that pass through either inner critical points (r_in, ADAF-type) or outer critical points (r_out, Bondi-type) for low angular momentum flow. We find that when the viscosity is appropriately chosen by tuning the viscosity exponent θ, keeping the other flow parameters fixed at the outer edge (r_edge), ADAF-type solutions change their character to become Bondi type (see figure <ref>). Further, when θ is decreased for an ADAF-type solution, the global accretion solution eventually becomes closed, as it cannot extend up to the disk outer edge (see figure <ref>), although it can join with a Bondi-type solution via a Rankine-Hugoniot shock transition (see figure <ref>). Note that these findings are seen for both weakly rotating (a_k→ 0) and rapidly rotating (a_k→ 1) BHs. Since it is generally perceived that BHs may accrete low angular momentum matter from their surrounding stars, shock seems to be an indispensable component of the accretion flow. * We observe that because of the shock transition, the convergent accretion flow is compressed, yielding a hot and dense PSC (see figures <ref>-<ref>). Thus, the PSC contains a swarm of hot electrons, which are likely to reprocess the low energy photons from the pre-shock flow via inverse Comptonization and generate hard X-ray radiation <cit.>. Such a signature of excess high energy radiation is often observed from Galactic X-ray binaries harbouring BH sources <cit.>. With this, we infer that the shock radius (r_s), which coarsely measures the size of the PSC, seems to play a viable role in emitting hard X-ray radiation from the accretion disc. * When the viscosity is enhanced, the efficiency of the angular momentum transport increases, which evidently weakens the centrifugal repulsion against gravity. Because of this, for higher θ, the size of the PSC is reduced as the shock front moves inward to satisfy the pressure equilibrium on both sides of the discontinuity (see figure <ref>). Accordingly, by suitably changing θ, one can regulate the accretion dynamics, including the PSC, while explaining the disk emission. * We determine the limiting range of flow parameters that admit shock transition in viscous accretion flow around both slowly and rapidly rotating BHs. We find that shock-induced GTAS are not discrete solutions. In fact, solutions of this kind are obtained for an ample range of the flow parameters (see figure <ref>). However, the possibility of shock formation diminishes as we increase the viscosity, and beyond a critical limit θ > θ^max, the shock disappears. Indeed, θ^max does not possess a universal value, as it depends on the other flow variables.
* We quantify θ^max as a function of α_0 for a_k=0 and 0.99, and find that it decreases sharply at lower α_0 and ultimately settles down to its asymptotic limit (see figure <ref>). It is noteworthy that results of shock-induced transonic accretion flows obtained from simulation studies exist in the literature <cit.>. Indeed, in all these works, the viscosity parameter (α) was treated as a global constant throughout the disk. On the contrary, adopting variable viscosity prescriptions, numerical simulation results of accretion flows around BHs have also been reported. In particular, <cit.> examined the dynamical behaviour of the azimuthally and time averaged α, which typically ranges between ∼ 0.01 and ∼ 0.1 throughout most of the disk. <cit.> reported a feeble variation of α (∼ 0.01-0.3) across the disk length scale, with a peak around 2-3r_g. In studying a truncated accretion disk, <cit.> obtained α∼ 0.07 at ∼ 600r_g in the quasi-steady state, although α settles down to ≈ 0.02 at the inner edge of the disk. Evidently, in these variable-α studies, the formation of shock is not observed simply because these simulations were performed with Keplerian or quasi-Keplerian flows, which are subsonic in nature and hence incapable of triggering shock transitions <cit.>. Accordingly, it remains infeasible to compare the results obtained from the present formalism with the existing simulations. Nevertheless, we infer that with suitable choices of the input parameters, an accretion flow having variable α would possibly be capable of possessing shocks, as corroborated in <cit.>. We further indicate that, based on the above findings, the quantitative description of the viscosity profile adopted in the present formalism seems to be fairly consistent with the results of the simulation works. We further mention that in an accreting system, the PSC seems to play a vital role in deciphering the observational signatures commonly observed in BH X-ray binary sources. As indicated earlier, the PSC can reprocess the soft photons via inverse Comptonization to produce hard X-ray radiation, which eventually contributes in generating the high energy tail of the energy spectrum <cit.>. Occasionally, Galactic X-ray binaries do show spectral state transitions, which possibly result when the PSC geometry alters <cit.>. When the PSC demonstrates time-varying modulation, it gives rise to the striking phenomenon known as quasi-periodic oscillations (QPOs) of hard X-ray radiation <cit.>. Moreover, it has been reported that the PSC can deflect a part of the accreting matter in the form of jets/outflows <cit.>. Considering all these, we argue that the present formalism for examining the PSC characteristics is highly relevant in the astrophysical context. Finally, we indicate the limitations of this formalism, as it is developed considering several assumptions. We use an effective potential to describe the spacetime geometry around the rotating BH, avoiding a rigorous general relativistic approach. We neglect structured large-scale magnetic fields and use a stochastic magnetic field configuration. We also consider the flow to remain in the single-temperature domain, although the flow is expected to maintain two-temperature (ion and electron) profiles. The implementation of all such issues is beyond the scope of this work, and we intend to take up these relevant issues in future projects.
§ DATA AVAILABILITY The data underlying this article will be made available upon reasonable request.§ ACKNOWLEDGEMENTS This work was supported by the Science and Engineering Research Board (SERB) of India through grant MTR/2020/000331. [Abramowicz and Chakrabarti(1990)]Abramowicz-etal1990 Abramowicz MA, Chakrabarti SK (1990) Standing Shocks in Adiabatic Black Hole Accretion of Rotating Matter. The Astrophysical Journal 350:281. 10.1086/168380 [Aktar et al(2017)Aktar, Das, Nandi, and Sreehari]Aktar-etal2017 Aktar R, Das S, Nandi A, et al (2017) Estimation of mass outflow rates from dissipative accretion disc around rotating black holes. 471(4):4806–4819. 10.1093/mnras/stx1893, https://arxiv.org/abs/1707.07511https://arxiv.org/abs/arXiv:1707.07511 [astro-ph.HE] [Aktar et al(2018)Aktar, Das, Nandi, and Sreehari]Aktar-etal2018 Aktar R, Das S, Nandi A, et al (2018) Advective accretion flow properties around rotating black holes - application to GRO J1655-40. Journal of Astrophysics and Astronomy 39(1):17. 10.1007/s12036-017-9507-0, https://arxiv.org/abs/1801.04116https://arxiv.org/abs/arXiv:1801.04116 [astro-ph.HE] [Aktar et al(2019)Aktar, Nandi, and Das]Aktar-etal2019 Aktar R, Nandi A, Das S (2019) Accretion-ejection in rotating black holes: a model for `outliers' track of radio-X-ray correlation in X-ray binaries. 364(2):22. 10.1007/s10509-019-3509-0, https://arxiv.org/abs/1901.10091https://arxiv.org/abs/arXiv:1901.10091 [astro-ph.HE] [Baby et al(2020)Baby, Agrawal, Ramadevi, Katoch, Antia, Mandal, and Nandi]Baby-etal2020 Baby BE, Agrawal VK, Ramadevi MC, et al (2020) AstroSat and MAXI view of the black hole binary 4U 1630-472 during 2016 and 2018 outbursts. 497(1):1197–1211. 10.1093/mnras/staa1965, https://arxiv.org/abs/2007.00928https://arxiv.org/abs/arXiv:2007.00928 [astro-ph.HE] [Balbus and Hawley(1991)]Balbus-Hawley1991 Balbus SA, Hawley JF (1991) A Powerful Local Shear Instability in Weakly Magnetized Disks. I. Linear Analysis. 376:214. 10.1086/170270 [Balbus and Hawley(1998)]Balbus-Hawley1998 Balbus SA, Hawley JF (1998) Instability, turbulence, and enhanced transport in accretion disks. Reviews of Modern Physics 70(1):1–53. 10.1103/RevModPhys.70.1 [Becker and Kazanas(2001)]Becker-Kazanas2001 Becker PA, Kazanas D (2001) Exact Expressions for the Critical Mach Numbers in the Two-Fluid Model of Cosmic-Ray-modified Shocks. The Astrophysical Journal 546(1):429–446. 10.1086/318257, https://arxiv.org/abs/astro-ph/0101020https://arxiv.org/abs/arXiv:astro-ph/0101020 [astro-ph] [Bondi(1952)]Bondi1952 Bondi H (1952) On spherically symmetrical accretion. 112:195. 10.1093/mnras/112.2.195 [Chakrabarti and Titarchuk(1995)]Chakrabarti-Titarchuk1995 Chakrabarti S, Titarchuk LG (1995) Spectral Properties of Accretion Disks around Galactic and Extragalactic Black Holes. 455:623. 10.1086/176610, https://arxiv.org/abs/astro-ph/9510005https://arxiv.org/abs/arXiv:astro-ph/9510005 [astro-ph] [Chakrabarti(1989)]Chakrabarti-1989 Chakrabarti SK (1989) Standing Rankine-Hugoniot Shocks in the Hybrid Model Flows of the Black Hole Accretion and Winds. 347:365. 10.1086/168125 [Chakrabarti(1996)]Chakrabarti1996 Chakrabarti SK (1996) Grand Unification of Solutions of Accretion and Winds around Black Holes and Neutron Stars. 464:664. 10.1086/177354, https://arxiv.org/abs/astro-ph/9606145https://arxiv.org/abs/arXiv:astro-ph/9606145 [astro-ph] [Chakrabarti and Das(2004)]Chakrabarti-Das2004 Chakrabarti SK, Das S (2004) Properties of accretion shock waves in viscous flows around black holes. 349(2):649–664. 
10.1111/j.1365-2966.2004.07536.x, https://arxiv.org/abs/astro-ph/0402561https://arxiv.org/abs/arXiv:astro-ph/0402561 [astro-ph] [Chakrabarti and Manickam(2000)]Chakrabarti-Manickam2000 Chakrabarti SK, Manickam SG (2000) Correlation among Quasi-Periodic Oscillation Frequencies and Quiescent-State Duration in Black Hole Candidate GRS 1915+105. 531(1):L41–L44. 10.1086/312512, https://arxiv.org/abs/astro-ph/9910012https://arxiv.org/abs/arXiv:astro-ph/9910012 [astro-ph] [Chakrabarti and Molteni(1995)]Chakrabarti-Molteni1995 Chakrabarti SK, Molteni D (1995) Viscosity prescriptions in accretion discs with shock waves. 272(1):80–88. 10.1093/mnras/272.1.80 [Chattopadhyay and Chakrabarti(2000)]Chattopadhyay-Chakrabarti2000 Chattopadhyay I, Chakrabarti SK (2000) A Comparative Study of Bondi-Type and Radiative Outflows Around Compact Objects. International Journal of Modern Physics D 9(6):717–731. 10.1142/S0218271800000670 [Chattopadhyay and Kumar(2016)]Chattopadhyay-Kumar2016 Chattopadhyay I, Kumar R (2016) Estimation of mass outflow rates from viscous relativistic accretion discs around black holes. 459(4):3792–3811. 10.1093/mnras/stw876, https://arxiv.org/abs/1605.00752https://arxiv.org/abs/arXiv:1605.00752 [astro-ph.HE] [Chattopadhyay and Ryu(2009)]Chattopadhyay-Ryu2009 Chattopadhyay I, Ryu D (2009) Effects of Fluid Composition on Spherical Flows Around Black Holes. 694(1):492–501. 10.1088/0004-637X/694/1/492, https://arxiv.org/abs/0812.2607https://arxiv.org/abs/arXiv:0812.2607 [astro-ph] [Das(2007)]Das-2007 Das S (2007) Behaviour of dissipative accretion flows around black holes. 376(4):1659–1670. 10.1111/j.1365-2966.2007.11501.x, https://arxiv.org/abs/astro-ph/0610651https://arxiv.org/abs/arXiv:astro-ph/0610651 [astro-ph] [Das and Chakrabarti(2008)]Das-Chakrabarti2008 Das S, Chakrabarti SK (2008) Dissipative accretion flows around a rotating black hole. Monthly Notices of the Royal Astronomical Society 389(1):371–378. 10.1111/j.1365-2966.2008.13564.x, https://arxiv.org/abs/0806.1985https://arxiv.org/abs/arXiv:0806.1985 [astro-ph] [Das and Sarkar(2018)]Das-Sarkar2018 Das S, Sarkar B (2018) Standing shocks in magnetized advection accretion flows onto a rotating black hole. 480(3):3446–3456. 10.1093/mnras/sty2071, https://arxiv.org/abs/1807.11417https://arxiv.org/abs/arXiv:1807.11417 [astro-ph.HE] [Das et al(2001a)Das, Chattopadhyay, and Chakrabarti]Das-etal2001a Das S, Chattopadhyay I, Chakrabarti SK (2001a) Standing Shocks around Black Holes: An Analytical Study. 557(2):983–989. 10.1086/321692, https://arxiv.org/abs/astro-ph/0107046https://arxiv.org/abs/arXiv:astro-ph/0107046 [astro-ph] [Das et al(2001b)Das, Chattopadhyay, Nandi, and Chakrabarti]Das-etal2001b Das S, Chattopadhyay I, Nandi A, et al (2001b) Computation of outflow rates from accretion disks around black holes. 379:683–689. 10.1051/0004-6361:20011307, https://arxiv.org/abs/astro-ph/0402555https://arxiv.org/abs/arXiv:astro-ph/0402555 [astro-ph] [Das et al(2009)Das, Becker, and Le]Das-etal-2009 Das S, Becker PA, Le T (2009) Dynamical Structure of Viscous Accretion Disks with Shocks. 702(1):649–659. 10.1088/0004-637X/702/1/649, https://arxiv.org/abs/0907.0875https://arxiv.org/abs/arXiv:0907.0875 [astro-ph.HE] [Das et al(2014)Das, Chattopadhyay, Nandi, and Molteni]Das-etal2014 Das S, Chattopadhyay I, Nandi A, et al (2014) Periodic mass loss from viscous accretion flows around black holes. 442(1):251–258. 
10.1093/mnras/stu864, https://arxiv.org/abs/1405.4415https://arxiv.org/abs/arXiv:1405.4415 [astro-ph.HE] [Das et al(2021)Das, Nandi, Agrawal, Dihingia, and Majumder]Das-etal2021 Das S, Nandi A, Agrawal VK, et al (2021) Relativistic viscous accretion flow model for ULX sources: a case study for IC 342 X-1. 507(2):2777–2781. 10.1093/mnras/stab2307, https://arxiv.org/abs/2108.02973https://arxiv.org/abs/arXiv:2108.02973 [astro-ph.HE] [Dihingia et al(2018)Dihingia, Das, Maity, and Chakrabarti]Dihingia-etal2018 Dihingia IK, Das S, Maity D, et al (2018) Limitations of the pseudo-newtonian approach in studying the accretion flow around a kerr black hole. Phys Rev D 98:083,004. 10.1103/PhysRevD.98.083004, <https://link.aps.org/doi/10.1103/PhysRevD.98.083004> [Dihingia et al(2019a)Dihingia, Das, Maity, and Nandi]Dihingia-etal2019 Dihingia IK, Das S, Maity D, et al (2019a) Shocks in relativistic viscous accretion flows around Kerr black holes. 488(2):2412–2422. 10.1093/mnras/stz1933, https://arxiv.org/abs/1903.02856https://arxiv.org/abs/arXiv:1903.02856 [astro-ph.HE] [Dihingia et al(2019b)Dihingia, Das, and Nandi]Dihingia-etal2019a Dihingia IK, Das S, Nandi A (2019b) Low angular momentum relativistic hot accretion flow around Kerr black holes with variable adiabatic index. 484(3):3209–3218. 10.1093/mnras/stz168, https://arxiv.org/abs/1901.04293https://arxiv.org/abs/arXiv:1901.04293 [astro-ph.HE] [Fragile et al(2007)Fragile, Blaes, Anninos, and Salmonson]Fragile-etal2007 Fragile PC, Blaes OM, Anninos P, et al (2007) Global General Relativistic Magnetohydrodynamic Simulation of a Tilted Black Hole Accretion Disk. 668(1):417–429. 10.1086/521092, https://arxiv.org/abs/0706.4303https://arxiv.org/abs/arXiv:0706.4303 [astro-ph] [Fukue(1987)]Fukue-1987 Fukue J (1987) Transonic disk accretion revisited. 39(2):309–327 [Giri and Chakrabarti(2012)]Giri-Chakrabarti2012 Giri K, Chakrabarti SK (2012) Hydrodynamic simulations of viscous accretion flows around black holes. 421(1):666–678. 10.1111/j.1365-2966.2011.20343.x, https://arxiv.org/abs/1112.1500https://arxiv.org/abs/arXiv:1112.1500 [astro-ph.HE] [Giri and Chakrabarti(2013)]Giri-Chakrabarti2013 Giri K, Chakrabarti SK (2013) Hydrodynamic simulation of two-component advective flows around black holes. 430(4):2836–2843. 10.1093/mnras/stt087, https://arxiv.org/abs/1212.6493https://arxiv.org/abs/arXiv:1212.6493 [astro-ph.HE] [Hawley and Krolik(2001)]Hawley-Krolik2001 Hawley JF, Krolik JH (2001) Global MHD Simulation of the Inner Accretion Disk in a Pseudo-Newtonian Potential. 548(1):348–367. 10.1086/318678, https://arxiv.org/abs/astro-ph/0006456https://arxiv.org/abs/arXiv:astro-ph/0006456 [astro-ph] [Hawley and Krolik(2002)]Hawley-Krolik2002 Hawley JF, Krolik JH (2002) High-Resolution Simulations of the Plunging Region in a Pseudo-Newtonian Potential: Dependence on Numerical Resolution and Field Topology. 566(1):164–180. 10.1086/338059, https://arxiv.org/abs/astro-ph/0110118https://arxiv.org/abs/arXiv:astro-ph/0110118 [astro-ph] [Hawley et al(1995)Hawley, Gammie, and Balbus]Hawley-etal1995 Hawley JF, Gammie CF, Balbus SA (1995) Local Three-dimensional Magnetohydrodynamic Simulations of Accretion Disks. 440:742. 10.1086/175311 [Hawley et al(1996)Hawley, Gammie, and Balbus]Hawley-etal1996 Hawley JF, Gammie CF, Balbus SA (1996) Local Three-dimensional Simulations of an Accretion Disk Hydromagnetic Dynamo. 464:690. 10.1086/177356 [Hogg and Reynolds(2018)]Hogg-Reynolds2018 Hogg JD, Reynolds CS (2018) The Dynamics of Truncated Black Hole Accretion Disks. II. 
Magnetohydrodynamic Case. 854(1):6. 10.3847/1538-4357/aaa6c6, https://arxiv.org/abs/1801.05836https://arxiv.org/abs/arXiv:1801.05836 [astro-ph.HE] [Iyer et al(2015)Iyer, Nandi, and Mandal]Iyer-etal2015 Iyer N, Nandi A, Mandal S (2015) Determination of the Mass of IGR J17091-3624 from “Spectro-temporal” Variations during the Onset Phase of the 2011 Outburst. 807(1):108. 10.1088/0004-637X/807/1/108, https://arxiv.org/abs/1505.02529https://arxiv.org/abs/arXiv:1505.02529 [astro-ph.HE] [King et al(2007)King, Pringle, and Livio]King-etal2007 King AR, Pringle JE, Livio M (2007) Accretion disc viscosity: how big is alpha? 376(4):1740–1746. 10.1111/j.1365-2966.2007.11556.x, https://arxiv.org/abs/astro-ph/0701803https://arxiv.org/abs/arXiv:astro-ph/0701803 [astro-ph] [Kumar and Chattopadhyay(2014)]Kumar-Chattopadhyay2014 Kumar R, Chattopadhyay I (2014) Dissipative advective accretion disc solutions with variable adiabatic index around black holes. 443(4):3444–3462. 10.1093/mnras/stu1389, https://arxiv.org/abs/1407.2130https://arxiv.org/abs/arXiv:1407.2130 [astro-ph.HE] [Landau and Lifshitz(1959)]Landau-Lifshitz1959 Landau LD, Lifshitz EM (1959) Fluid mechanics. Pergamon Press, Oxford (1959) [Lanzafame et al(1998)Lanzafame, Molteni, and Chakrabarti]Lanzafame-etal1998 Lanzafame G, Molteni D, Chakrabarti SK (1998) Smoothed particle hydrodynamic simulations of viscous accretion discs around black holes. 299(3):799–804. 10.1046/j.1365-8711.1998.01816.x, https://arxiv.org/abs/astro-ph/9706248https://arxiv.org/abs/arXiv:astro-ph/9706248 [astro-ph] [Lee et al(2016)Lee, Chattopadhyay, Kumar, Hyung, and Ryu]Lee-etal2016 Lee SJ, Chattopadhyay I, Kumar R, et al (2016) Simulations of Viscous Accretion Flow around Black Holes in a Two-dimensional Cylindrical Geometry. 831(1):33. 10.3847/0004-637X/831/1/33, https://arxiv.org/abs/1608.03997https://arxiv.org/abs/arXiv:1608.03997 [astro-ph.HE] [Lu et al(1999)Lu, Gu, and Yuan]Lu-etal1999 Lu JF, Gu WM, Yuan F (1999) Global Dynamics of Advection-dominated Accretion Revisited. 523(1):340–349. 10.1086/307725, https://arxiv.org/abs/astro-ph/9905099https://arxiv.org/abs/arXiv:astro-ph/9905099 [astro-ph] [Lyubarskii(1997)]Lyubarskii-1997 Lyubarskii YE (1997) Flicker noise in accretion discs. 292(3):679–685. 10.1093/mnras/292.3.679 [Majumder et al(2022)Majumder, Sreehari, Aftab, Katoch, Das, and Nandi]Majumder-etal2022 Majumder S, Sreehari H, Aftab N, et al (2022) Wide-band view of high-frequency quasi-periodic oscillations of GRS 1915+105 in 'softer' variability classes observed with AstroSat. 512(2):2508–2524. 10.1093/mnras/stac615, https://arxiv.org/abs/2203.02710https://arxiv.org/abs/arXiv:2203.02710 [astro-ph.HE] [Mandal and Chakrabarti(2005)]Mandal-Chakrabarti2005 Mandal S, Chakrabarti SK (2005) Accretion shock signatures in the spectrum of two-temperature advective flows around black holes. 434(3):839–848. 10.1051/0004-6361:20041235 [Matsumoto and Tajima(1995)]Matsumoto-Tajima1995 Matsumoto R, Tajima T (1995) Magnetic Viscosity by Localized Shear Flow Instability in Magnetized Accretion Disks. 445:767. 10.1086/175739 [Matsumoto et al(1984)Matsumoto, Kato, Fukue, and Okazaki]Matsumoto-etal1984 Matsumoto R, Kato S, Fukue J, et al (1984) Viscous transonic flow around the inner edge of geometrically thin accretion disks. 36(1):71–85 [Mitra et al(2022)Mitra, Maity, Dihingia, and Das]Mitra-etal2022 Mitra S, Maity D, Dihingia IK, et al (2022) Study of general relativistic magnetohydrodynamic accretion flow around black holes. 516(4):5092–5109. 
10.1093/mnras/stac2431, https://arxiv.org/abs/2204.01412https://arxiv.org/abs/arXiv:2204.01412 [astro-ph.HE] [Molteni et al(1994)Molteni, Lanzafame, and Chakrabarti]Molteni-etal1994 Molteni D, Lanzafame G, Chakrabarti SK (1994) Simulation of Thick Accretion Disks with Standing Shocks by Smoothed Particle Hydrodynamics. 425:161. 10.1086/173972, https://arxiv.org/abs/astro-ph/9310047https://arxiv.org/abs/arXiv:astro-ph/9310047 [astro-ph] [Molteni et al(1996)Molteni, Sponholz, and Chakrabarti]Molteni-etal1996 Molteni D, Sponholz H, Chakrabarti SK (1996) Resonance Oscillation of Radiative Shock Waves in Accretion Disks around Compact Objects. 457:805. 10.1086/176775, https://arxiv.org/abs/astro-ph/9508022https://arxiv.org/abs/arXiv:astro-ph/9508022 [astro-ph] [Nagakura and Yamada(2009)]Nagakura-Yamada2009 Nagakura H, Yamada S (2009) The Standing Accretion Shock Instability in the Disk Around the Kerr Black Hole. 696(2):2026–2035. 10.1088/0004-637X/696/2/2026, https://arxiv.org/abs/0901.4053https://arxiv.org/abs/arXiv:0901.4053 [astro-ph.HE] [Nandi et al(2001)Nandi, Manickam, Rao, and Chakrabarti]Nandi-etal2001 Nandi A, Manickam SG, Rao AR, et al (2001) On the source of quasi-periodic oscillations of the black hole candidate GRS 1915+105: some new observations and their interpretation. 324(1):267–272. 10.1046/j.1365-8711.2001.04339.x, https://arxiv.org/abs/astro-ph/0012527https://arxiv.org/abs/arXiv:astro-ph/0012527 [astro-ph] [Nandi et al(2012)Nandi, Debnath, Mandal, and Chakrabarti]Nandi-etal2012 Nandi A, Debnath D, Mandal S, et al (2012) Accretion flow dynamics during the evolution of timing and spectral properties of GX 339-4 during its 2010-11 outburst. 542:A56. 10.1051/0004-6361/201117844, https://arxiv.org/abs/1204.5044https://arxiv.org/abs/arXiv:1204.5044 [astro-ph.HE] [Nandi et al(2018)Nandi, Mandal, Sreehari, Radhika, Das, Chattopadhyay, Iyer, Agrawal, and Aktar]Nandi-etal2018 Nandi A, Mandal S, Sreehari H, et al (2018) Accretion flow dynamics during 1999 outburst of XTE J1859+226—modeling of broadband spectra and constraining the source mass. 363(5):90. 10.1007/s10509-018-3314-1, https://arxiv.org/abs/1803.08638https://arxiv.org/abs/arXiv:1803.08638 [astro-ph.HE] [Narayan and Yi(1994)]Narayan-Yi1994 Narayan R, Yi I (1994) Advection-dominated Accretion: A Self-similar Solution. 428:L13. 10.1086/187381, https://arxiv.org/abs/astro-ph/9403052https://arxiv.org/abs/arXiv:astro-ph/9403052 [astro-ph] [Nayakshin(1999)]Nayakshin-1999 Nayakshin S (1999) Corona Energy Budget in AGN and GBHC's. In: Ferland G, Baldwin J (eds) Quasars and Cosmology, p 43, 10.48550/arXiv.astro-ph/9812109, astro-ph/9812109 [Okuda and Das(2015)]Okuda-Das2015 Okuda T, Das S (2015) Unstable mass-outflows in geometrically thick accretion flows around black holes. 453(1):147–156. 10.1093/mnras/stv1626, https://arxiv.org/abs/1507.04326https://arxiv.org/abs/arXiv:1507.04326 [astro-ph.HE] [Patra et al(2022)Patra, Majhi, and Das]Patra-etal2022 Patra S, Majhi BR, Das S (2022) Properties of accretion flow in deformed Kerr spacetime. Physics of the Dark Universe 37:101120. 10.1016/j.dark.2022.101120, https://arxiv.org/abs/2202.10863https://arxiv.org/abs/arXiv:2202.10863 [astro-ph.HE] [Peitz and Appl(1997)]Peitz-Appl1997 Peitz J, Appl S (1997) Viscous accretion discs around rotating black holes. 286(3):681–695. 
10.1093/mnras/286.3.681, https://arxiv.org/abs/astro-ph/9612205https://arxiv.org/abs/arXiv:astro-ph/9612205 [astro-ph] [Penna et al(2010)Penna, McKinney, Narayan, Tchekhovskoy, Shafee, and McClintock]Penna-etal2010 Penna RF, McKinney JC, Narayan R, et al (2010) Simulations of magnetized discs around black holes: effects of black hole spin, disc thickness and magnetic field geometry. 408(2):752–782. 10.1111/j.1365-2966.2010.17170.x, https://arxiv.org/abs/1003.0966https://arxiv.org/abs/arXiv:1003.0966 [astro-ph.HE] [Penna et al(2012)Penna, Sadowski, and McKinney]Penna-etal2012 Penna RF, Sadowski A, McKinney JC (2012) Thin-disc theory with a non-zero-torque boundary condition and comparisons with simulations. 420(1):684–698. 10.1111/j.1365-2966.2011.20084.x, https://arxiv.org/abs/1110.6556https://arxiv.org/abs/arXiv:1110.6556 [astro-ph.HE] [Penna et al(2013)Penna, Sadowski, Kulkarni, and Narayan]Penna-etal2013 Penna RF, Sadowski A, Kulkarni AK, et al (2013) The Shakura-Sunyaev viscosity prescription with variable(r). 428(3):2255–2274. 10.1093/mnras/sts185, https://arxiv.org/abs/1211.0526https://arxiv.org/abs/arXiv:1211.0526 [astro-ph.HE] [Porth et al(2019)Porth, Chatterjee, Narayan, Gammie, Mizuno, Anninos, Baker, Bugli, Chan, Davelaar, Del Zanna, Etienne, Fragile, Kelly, Liska, Markoff, McKinney, Mishra, Noble, Olivares, Prather, Rezzolla, Ryan, Stone, Tomei, White, Younsi, Akiyama, Alberdi, Alef, Asada, Azulay, Baczko, Ball, Baloković, Barrett, Bintley, Blackburn, Boland, Bouman, Bower, Bremer, Brinkerink, Brissenden, Britzen, Broderick, Broguiere, Bronzwaer, Byun, Carlstrom, Chael, Chatterjee, Chen, Chen, Cho, Christian, Conway, Cordes, Geoffrey, Crew, Cui, De Laurentis, Deane, Dempsey, Desvignes, Doeleman, Eatough, Falcke, Fish, Fomalont, Fraga-Encinas, Freeman, Friberg, Fromm, Gómez, Galison, García, Gentaz, Georgiev, Goddi, Gold, Gu, Gurwell, Hada, Hecht, Hesper, Ho, Ho, Honma, Huang, Huang, Hughes, Ikeda, Inoue, Issaoun, James, Jannuzi, Janssen, Jeter, Jiang, Johnson, Jorstad, Jung, Karami, Karuppusamy, Kawashima, Keating, Kettenis, Kim, Kim, Kim, Kino, Koay, Patrick, Koch, Koyama, Kramer, Kramer, Krichbaum, Kuo, Lauer, Lee, Li, Li, Lindqvist, Liu, Liuzzo, Lo, Lobanov, Loinard, Lonsdale, Lu, MacDonald, Mao, Marrone, Marscher, Martí-Vidal, Matsushita, Matthews, Medeiros, Menten, Mizuno, Moran, Moriyama, Moscibrodzka, Müller, Nagai, Nagar, Nakamura, Narayanan, Natarajan, Neri, Ni, Noutsos, Okino, Oyama, Özel, Palumbo, Patel, Pen, Pesce, Piétu, Plambeck, PopStefanija, Preciado-López, Psaltis, Pu, Ramakrishnan, Rao, Rawlings, Raymond, Ripperda, Roelofs, Rogers, Ros, Rose, Roshanineshat, Rottmann, Roy, Ruszczyk, Rygl, Sánchez, Sánchez-Arguelles, Sasada, Savolainen, Schloerb, Schuster, Shao, Shen, Small, Sohn, SooHoo, Tazaki, Tiede, Tilanus, Titus, Toma, Torne, Trent, Trippe, Tsuda, van Bemmel, van Langevelde, van Rossum, Wagner, Wardle, Weintroub, Wex, Wharton, Wielgus, Wong, Wu, Young, Young, Yuan, Yuan, Zensus, Zhao, Zhao, Zhu, and Event Horizon Telescope Collaboration]Porth-etal2019 Porth O, Chatterjee K, Narayan R, et al (2019) The Event Horizon General Relativistic Magnetohydrodynamic Code Comparison Project. 243(2):26. 10.3847/1538-4365/ab29fd, https://arxiv.org/abs/1904.04923https://arxiv.org/abs/arXiv:1904.04923 [astro-ph.HE] [Riffert and Herold(1995)]Riffert-Herold1995 Riffert H, Herold H (1995) Relativistic Accretion Disk Structure Revisited. 450:508. 
10.1086/176161 [Sano et al(2004)Sano, Inutsuka, Turner, and Stone]Sano-etal2004 Sano T, Inutsuka S, Turner NJ, et al (2004) Angular Momentum Transport by Magnetohydrodynamic Turbulence in Accretion Disks: Gas Pressure Dependence of the Saturation Level of the Magnetorotational Instability. 605(1):321–339. 10.1086/382184, https://arxiv.org/abs/astro-ph/0312480https://arxiv.org/abs/arXiv:astro-ph/0312480 [astro-ph] [Sarkar and Das(2016)]Sarkar-Das2016 Sarkar B, Das S (2016) Dynamical structure of magnetized dissipative accretion flow around black holes. 461(1):190–201. 10.1093/mnras/stw1327, https://arxiv.org/abs/1606.00526https://arxiv.org/abs/arXiv:1606.00526 [astro-ph.HE] [Sen et al(2022)Sen, Maity, and Das]Sen-etal2022 Sen G, Maity D, Das S (2022) Study of relativistic accretion flow around KTN black hole with shocks. 2022(8):048. 10.1088/1475-7516/2022/08/048, https://arxiv.org/abs/2204.02110https://arxiv.org/abs/arXiv:2204.02110 [astro-ph.HE] [Shakura and Sunyaev(1973)]Shakura-Sunyaev1973 Shakura NI, Sunyaev RA (1973) Reprint of 1973A&A....24..337S. Black holes in binary systems. Observational appearance. 500:33–51 [Shapiro and Teukolsky(1983)]Shapiro-Teukolsky1983 Shapiro SL, Teukolsky SA (1983) Black holes, white dwarfs, and neutron stars : The Physics of Compact Objects. Wiley-Interscience, New York (1983) [Smak(1999)]Smak1999 Smak J (1999) Dwarf Nova Outbursts. III. The Viscosity Parameter alpha. 49:391–401 [Smith et al(2001)Smith, Heindl, Markwardt, and Swank]Smith-etal2001 Smith DM, Heindl WA, Markwardt CB, et al (2001) A Transition to the Soft State in GRS 1758-258. 554(1):L41–L44. 10.1086/320928, https://arxiv.org/abs/astro-ph/0103381https://arxiv.org/abs/arXiv:astro-ph/0103381 [astro-ph] [Smith et al(2002)Smith, Heindl, and Swank]Smith-etal2002 Smith DM, Heindl WA, Swank JH (2002) Two Different Long-Term Behaviors in Black Hole Candidates: Evidence for Two Accretion Flows? 569(1):362–380. 10.1086/339167, https://arxiv.org/abs/astro-ph/0103304https://arxiv.org/abs/arXiv:astro-ph/0103304 [astro-ph] [Sorathia et al(2012)Sorathia, Reynolds, Stone, and Beckwith]Sorathia-etal2012 Sorathia KA, Reynolds CS, Stone JM, et al (2012) Global Simulations of Accretion Disks. I. Convergence and Comparisons with Local Models. 749(2):189. 10.1088/0004-637X/749/2/189, https://arxiv.org/abs/1106.4019https://arxiv.org/abs/arXiv:1106.4019 [astro-ph.HE] [Sreehari et al(2020)Sreehari, Nandi, Das, Agrawal, Mandal, Ramadevi, and Katoch]Sreehari-etal2020 Sreehari H, Nandi A, Das S, et al (2020) AstroSat view of GRS 1915+105 during the soft state: detection of HFQPOs and estimation of mass and spin. 499(4):5891–5901. 10.1093/mnras/staa3135, https://arxiv.org/abs/2010.03782https://arxiv.org/abs/arXiv:2010.03782 [astro-ph.HE] [Steinacker and Papaloizou(2002)]Steinacker-Papaloizou2002 Steinacker A, Papaloizou JCB (2002) Three-dimensional Magnetohydrodynamic Simulations of an Accretion Disk with Star-Disk Boundary Layer. 571(1):413–428. 10.1086/339892, https://arxiv.org/abs/astro-ph/0201479https://arxiv.org/abs/arXiv:astro-ph/0201479 [astro-ph] [Suková and Janiuk(2015)]Sukova-Janiuk2015 Suková P, Janiuk A (2015) Oscillating shocks in the low angular momentum flows as a source of variability of accreting black holes. 447(2):1565–1579. 10.1093/mnras/stu2544, https://arxiv.org/abs/1411.7836https://arxiv.org/abs/arXiv:1411.7836 [astro-ph.HE] [Sunyaev and Titarchuk(1980)]Sunyaev-Titarchuk1980 Sunyaev RA, Titarchuk LG (1980) Comptonization of X-Rays in Plasma Clouds - Typical Radiation Spectra. 
86:121 [Yang and Kafatos(1995)]Yang-Kafatos-1995 Yang R, Kafatos M (1995) Shock study in fully relativistic isothermal flows. II. 295:238–244 [Zhu and Stone(2018)]Zhu-Stone2018 Zhu Z, Stone JM (2018) Global Evolution of an Accretion Disk with a Net Vertical Field: Coronal Accretion, Flux Transport, and Disk Winds. 857(1):34. 10.3847/1538-4357/aaafc9, https://arxiv.org/abs/1701.04627https://arxiv.org/abs/arXiv:1701.04627 [astro-ph.EP] § DETAILED EXPRESSION OF THE WIND EQUATION With some simple algebraic steps, the radial momentum equation, the azimuthal momentum equation and the entropy generation equation are reduced to the following form:R_0+ R_υd υ/d r + R_Θd Θ/d r + R_λd λ/d r = 0,L_0 + L_υd υ/d r + L_Θd Θ/d r + L_λd λ/d r = 0,E_0 + E_υd υ/d r + E_Θd Θ/d r + E_λd λ/d r = 0. Using equations (A1)-(A3), we obtain the wind equation, the derivative of the angular momentum and the derivative of the temperature, which are given bydυ/d r =N/ D,dλ/d r = λ_1+λ_2dυ/d r,dΘ/d r = Θ_1+Θ_2dυ/d r,where,N = E_λ(-R_Θ L_0+ R_0 L_Θ) + E_Θ(R_λ L_0- R_0 L_λ) + E_0(-R_λ L_Θ+ R_Θ L_λ),D = E_λ(R_Θ L_υ - R_υ L_Θ) + E_Θ(-R_λ L_υ+ R_υ L_λ) + E_υ(R_λ L_Θ - R_Θ L_λ),Θ_1 = Θ_11/Θ_33, Θ_2 = Θ_22/Θ_33, λ_1 = λ_11/Θ_33, λ_2 = λ_22/Θ_33,Θ_11 = E_λ L_0- E_0 L_λ, Θ_22 = E_λ L_υ - E_υ L_λ, Θ_33 = - E_λ L_Θ + E_Θ L_λ,λ_11 = -E_Θ L_0+ E_0 L_Θ, λ_22 = -E_Θ L_υ + E_υ L_Θ,R_0 = d Φ_e^eff/d r - 3Θ/(r τ h) + F_1Θ/(τ F h) - ΘΔ^'/(τΔ h), R_Θ = 1/(τ h), τ = 1 + m_p/m_e,R_λ = F_2Θ/(τ F h), R_υ = υ - 2Θ/(τυ h), Δ^' = dΔ/dr,E_0 = -Q^-/ρ - r αυ^2ω_1 - 2 r αΘω_1/τ + υΘ(-r F_1Δ + F(3Δ + r Δ'))/(rτΔ),E_Θ = (1+ 2 n υ)/τ, E_λ = -(F_2υΘ + r α F(τυ^2 + 2Θ)ω_2)/(τ F), E_υ = 2Θ/τ,L_0 = -2αυ^2 - 4αΘ/τ + r αυ^2Δ'/(2 Δ) + r αΘΔ^'/(τΔ) - (r/τ)(τυ^2+2Θ)dα/dr,L_Θ = -2 r α/τ, L_λ = υ, L_υ = -r αυ + 2 r αΘ/(τυ),F_1 = F̃λω_1/(1-λΩ)^2 + (1/(1-λΩ)) dF̃/dr,F_2 = F̃Ω/(1-λΩ)^2 + F̃λω_2/(1-λΩ)^2, F = F̃/(1-λΩ),F̃ = ((r^2+a_k^2)^2 + 2 Δ a_k^2)/((r^2+a_k^2)^2 - 2 Δ a_k^2), dF/dr = F_1 + F_2d λ/d r,d Ω/d r = ω_1+ω_2d λ/d r,ω_1 = -2(a_k^3 + 3 a_k r^2+ λ(a_kλ - 2 a_k^2 + r^2(r-3)))/(r^3 + a_k^2 (r+2) - 2 a_kλ)^2,ω_2 = r^2(a_k^2 + r (r - 2))/(r^3 + a_k^2 (r+2) - 2 a_kλ)^2.
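Since equations (A1)-(A3) are linear in the three radial derivatives, the expressions for N and D above are simply Cramer's rule in disguise. The following short Python/sympy sketch (purely illustrative; the coefficient symbols are kept abstract rather than substituted by the physical expressions listed above) verifies this symbolically.

```python
import sympy as sp

# Abstract coefficients of the linearized equations (A1)-(A3):
# R0 + Rv*dv + RT*dT + Rl*dl = 0, and similarly for the L- and E-rows.
R0, Rv, RT, Rl = sp.symbols("R0 Rv RT Rl")
L0, Lv, LT, Ll = sp.symbols("L0 Lv LT Ll")
E0, Ev, ET, El = sp.symbols("E0 Ev ET El")
dv, dT, dl = sp.symbols("dv dT dl")  # stand for dv/dr, dTheta/dr, dlambda/dr

eqs = [R0 + Rv*dv + RT*dT + Rl*dl,
       L0 + Lv*dv + LT*dT + Ll*dl,
       E0 + Ev*dv + ET*dT + El*dl]
sol = sp.solve(eqs, [dv, dT, dl], dict=True)[0]

# Numerator and denominator of the wind equation as written in the text.
N = El*(-RT*L0 + R0*LT) + ET*(Rl*L0 - R0*Ll) + E0*(-Rl*LT + RT*Ll)
D = El*(RT*Lv - Rv*LT) + ET*(-Rl*Lv + Rv*Ll) + Ev*(Rl*LT - RT*Ll)

print(sp.simplify(sol[dv] - N/D))  # prints 0, i.e. dv/dr = N/D
```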
http://arxiv.org/abs/2312.16001v1
{ "authors": [ "Monu Singh", "Santabrata Das" ], "categories": [ "astro-ph.HE" ], "primary_category": "astro-ph.HE", "published": "20231226111113", "title": "Properties of relativistic hot accretion flow around rotating black hole with radially varying viscosity" }
Algebraic and geometric extensions of Goldbach's conjecture] On some algebraic and geometric extensions of Goldbach's conjecture Danny A. J. Gómez-Ramírez]Danny A. J. Gómez-Ramírez Institución Universitaria Pascual Bravo, Medellín, Colombia. Visión Real Cognitiva (Cognivisión) S.A.S. Itaguí, Colombia. daj.gomezramirez@gmail.com A. F. Boix]Alberto F. Boix IMUVA–Mathematics Research Institute, Universidad de Valladolid, Paseo de Belen, s/n, 47011, Valladolid, Spain. alberto.fernandez.boix@uva.es[2020]11R09, 52B20. The goal of this paper is to study Goldbach's conjecture for rings of regular functions of affine algebraic varieties over a field. Among our main results, we define the notion of Goldbach condition for Newton polytopes, and we prove in a constructive way that any polynomial in at least two variables over a field can be expressed as a sum of at most 2r absolutely irreducible polynomials, where r is the number of its non–zero monomials. We also study other weak forms of Goldbach's conjecture for localizations of these rings. Moreover, we prove the validity of Goldbach's conjecture for a particular instance of the so–called forcing algebras introduced by Hochster. Finally, we prove that, for a proper multiplicatively closed set S of ℤ, the collection of elements of S^-1ℤ that can be written as a finite sum of primes forms a dense subset of the real numbers, among other results. [ [ January 14, 2024 ====================§ INTRODUCTION Goldbach's conjecture has been one of the most famous open problems in elementary Number Theory and in Mathematics, partly because of its simple description and the difficulty of finding a general solution for it. It was originally proposed in a letter exchange between the mathematicians Christian Goldbach and Leonhard Euler in 1742 (see <cit.> for the original letter, for an English translation see <cit.>). In modern terms it states that any even integer greater than or equal to 4 can be written as the sum of two prime numbers <cit.>. It is elementary to see that the conjecture is equivalent to showing that any integer greater than or equal to 5 can be written as the sum of at most 3 primes.[Here, as usual, 1 is not considered a prime number. However, in Goldbach's original description, 1 was considered a prime.] Another elementary equivalent form states that for any integers x and n, both greater than or equal to 2, the number nx can be expressed as the sum of exactly n prime numbers. In other words, there exist prime numbers p_1,⋯,p_n such thatnx=∑_i=1^np_i. Although the original conjecture has not been resolved, there have been many outstanding partial results by J. J. Sylvester, G. H. Hardy, J. E. Littlewood, S. Ramanujan and I. M. Vinogradov, using asymptotic methods, whose combined and improved techniques are nowadays known as the Hardy-Littlewood-Ramanujan-Vinogradov method; as well as closely related statements over the integers proved by E. Bombieri, T. Tao and B. J. Green, among many others (for a complete historical and technical review see <cit.>). Regarding extensions of Goldbach's conjecture to other commutative rings with unity, one of the most interesting elementary results is the one proved in 1965 by D. R. Hayes (rediscovered 30 years later by A. Rattan and C. Stewart <cit.>), which proves the conjecture for the ring ℤ[x]. More precisely, it says that any polynomial of degree n with integer coefficients can be written as the sum of two irreducible polynomials of the same degree <cit.>. 
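For small degrees, Hayes' theorem is easy to observe experimentally. The following minimal Python/sympy sketch (our own illustration, not taken from the cited works) searches, for a given f∈ℤ[x], a monic polynomial g with small coefficients such that g and f-g are both irreducible of the same degree as f; the coefficient bound and the sample polynomial are arbitrary choices.

```python
import itertools
import sympy as sp

x = sp.symbols("x")

def hayes_decomposition(f, bound=3):
    """Search a monic g with coefficients in [-bound, bound] such that
    g and f - g are both irreducible of the same degree as f."""
    n = sp.degree(f, x)
    for coeffs in itertools.product(range(-bound, bound + 1), repeat=n):
        g = x**n + sum(c * x**k for k, c in enumerate(coeffs))
        h = sp.expand(f - g)
        if sp.degree(h, x) != n:
            continue
        if sp.Poly(g, x).is_irreducible and sp.Poly(h, x).is_irreducible:
            return g, h
    return None

# Example: write 2x^3 + x + 4 as a sum of two irreducible cubics.
print(hayes_decomposition(2*x**3 + x + 4))
```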
Hayes' proof was completely elementary and used mainly the classic Eisenstein criterion for irreducibility of polynomials over the integers. In the same paper, Hayes also proved the conjecture for D[x], where D is a principal ideal domain which contains infinitely many prime elements <cit.>. Hayes' result was also proved by F. Saidak <cit.>, who in addition provides an upper bound for the number of ways one can write a polynomial with integer coefficients as a sum of two irreducible ones with a prescribed upper bound on the coefficients. More than 40 years later, P. Pollack found a generalization of Hayes' argument proving the corresponding version of Hayes' theorem for polynomials over a Noetherian integral domain with infinitely many maximal ideals <cit.>, or over a ring of polynomials over an integral domain <cit.>. In this context, A. Bodin, P. Dèbes and S. Najib, building upon new results on Schinzel's hypothesis, proved Goldbach's conjecture for the ring R[x], where R is a unique factorization domain with fraction field satisfying certain technical conditions; the interested reader can consult <cit.> for further details.For rings of formal power series, the situation is a bit more complicated. As shown by E. Paran in <cit.>, a formal power seriesf=∑_i≥ 0f_i x^i∈ℤ[[x]]is a sum of two irreducible power series if and only if f_0 is either of the form ± p^k± q^l or of the form ± p^k, where p, q are prime integers and k, l are positive integers. In particular, this shows that not all the elements of ℤ[[x]] can be written as a sum of two irreducible power series; the interested reader may like to consult <cit.> for a concrete example.On the other hand, we draw heuristic inspiration from the cognitive (metamathematical) process called (generic) exemplification, in the context of Cognitive-Computational Metamathematics (CCMM) (or Artificial Mathematical Intelligence <cit.>), as a starting point for developing new interesting mathematical structures (e.g. concepts, proofs and theories) <cit.>. In other words, an implicit heuristic pillar of our presentation will be to use (<ref>) (as an initial (generic) exemplification), together with slight generalizations of it, as a kind of working criterion for studying and generating further algebraic structures in the context of Commutative Algebra and classic Algebraic Geometry, e.g. coordinate rings of affine algebraic varieties over an algebraically closed field. To clarify more explicitly what we mean here in terms of exemplification, we refer the more curious reader to <cit.> and <cit.>, where one appreciates how important a simple concept, like that of a forcing algebra and a concrete instance of it, can be as a conceptual intersection point where other seminal mathematical notions meet. In this way, one can start adopting a paradigm-shifting perspective in doing mathematics, where concrete examples constitute the starting working points for comparing and developing new theories and interesting mathematical structures.Going back to our original question, the surprising work by G. Effinger, K. Hicks and G. L. Mullen (see <cit.> and the references given therein) shows in a highly intuitive manner that the integers and the ring of polynomials over a finite field are remarkably close algebraic structures, perhaps closer than it might seem at first glance. 
For instance, one can define and extend almost exactly some seminal aspects of the classic analytic methods (due originally to Hardy, Littlewood, Ramanujan and Vinogradov) for stating and subsequently proving (contingent) results related to Goldbach's and the twin-primes conjectures for the ring 𝔽_q[x], methods that were originally developed for ℤ.This suggests that studying Goldbach's conjecture for rings of polynomials in several variables over several types of fields could give considerable additional insight for obtaining new techniques for tackling the classic Goldbach conjecture.So, we can start with a field of coefficients that is algebraically rich enough that one can easily check whether Goldbach's conjecture (GC) holds (or not) for the corresponding ring of regular functions <cit.>. Thus, in the next section we study GC for special coordinate rings of affine varieties. Now, we provide a brief summary of the contents of this paper for the convenience of the reader. In Section <ref>, we present one of the main results of the paper, which essentially says that any polynomial in at least two variables over a field can be written explicitly as a sum of at most 2r absolutely irreducible polynomials, where r denotes the number of its non–zero monomials (see teo1). Our proof of teo1 is constructive, which leads us in Section <ref> to present an algorithm, implemented in Macaulay2 <cit.>, that given as input a polynomial in at least two variables over a field, returns as output its decomposition into at most 2r absolutely irreducible polynomials. On the other hand, in Section <ref> we study some weak forms of Goldbach's conjecture over the integers and its localizations; more precisely, we want to provide some partial positive answers to the non–trivial question of whether Goldbach's conjecture holds over S^-1ℤ, where S is a non–trivial multiplicatively closed set. In Section <ref>, we study again Goldbach's conjecture on the localization of a polynomial ring over a field, showing as main result (see localization) that the conclusion of teo1 still holds when replacing the ring of polynomials by a suitable localization of it. In Section <ref>, we obtain as main result (see Goldbach conjecture and forcing algebras) that Goldbach's conjecture holds over a very particular case of the so–called forcing algebras, which were introduced by M. Hochster in <cit.> and later developed especially by H. Brenner and D. A. J. Gómez–Ramírez (see <cit.> and the references given therein). In Section <ref>, essentially building upon Pollack's result, we exhibit some coordinate rings of affine algebraic varieties where any regular function can be written as a sum of two irreducible ones (see prop2 and its corollaries). Finally, in Section <ref> we present some examples to illustrate that the validity of Goldbach's conjecture is more delicate to deal with for rings attached to affine algebraic curves.§ EXPLICIT WEAK FORMS OF THE GOLDBACH CONJECTURE FOR SOME SPECIAL CLASSES OF POLYNOMIAL (COORDINATE) RINGS OVER FIELDS AND COMMUTATIVE RINGS WITH UNITY As we mentioned in the introduction, we want to study a more general form of (<ref>), where we allow more flexibility in the parameter n. 
In other words, we will study some special collections of coordinate rings where equations of the formn_1H=∑_i=1^n_2p_ihold, where n_1,n_2∈ℕ are parameters that can vary independently or not (from each other, as well as from the element H).Let us start with a fundamental (algebraic) structure in classic algebraic geometry: the ring of regular functions on the complex affine space 𝔸_ℂ^n, i.e., the ring of polynomials 𝒪(𝔸_ℂ^n)=ℂ[x_1,…,x_n]. This classic geometric-algebraic space is a suitable starting point, since a lot of further algebraic constructions (in characteristic zero) are implicitly related to it (e.g. the coordinate ring of any affine complex variety is simply a quotient of the one of 𝔸_ℂ^n). Moreover, it is highly surprising that, despite 𝔸_ℂ^n being one of the most canonical and primary spaces of study in classic algebraic geometry, a lot of information regarding its geometry and its symmetries is still a mystery <cit.>.Now, regarding (<ref>) for the ring 𝒪(𝔸_ℂ^n), one sees straightforwardly that prop2 implies that a stronger form of the GC holds for 𝒪(𝔸_ℂ^n) for n≥ 2, i.e., any polynomial can be written as the sum of two irreducible ones (i.e., n_1=1 and n_2=2 in (<ref>)). However, the proof of this result (and the lemmas needed for it) does not give an explicit description of the desired decomposition into irreducibles.In order to do so, we assume that the reader has some basic familiarity with the notion of Newton polytope associated with a polynomial (see for example <cit.>). In any case, in what follows we recall some notions for the sake of completeness.A polynomial f∈ K[x_1,…,x_n] is absolutely irreducible if it remains irreducible over each algebraic extension of K. More generally, and inspired by <cit.>, given a commutative ring R and g∈ R[x_1,…,x_n], we say that g is absolutely irreducible if it remains irreducible over each ring extension R⊆ S. We also recall here that, given a multiindex 𝐚=(a_1,…,a_n)∈ℕ^n, we will denote by 𝐱^𝐚 the monomial𝐱^𝐚:=∏_i=1^n x_i^a_i,and by Supp(𝐚) the support Supp(𝐚):={i∈ [n]: a_i≠ 0}, where [n]:={1,…,n}. Moreover, we denote by gcd(𝐚) the greatest common divisor of its components; that is, gcd(𝐚):= gcd(a_1,…,a_n). Finally, we review the notion of integrally indecomposable polytope. A point in ℝ^n is called integral if its coordinates are integers. A polytope in ℝ^n is called integral if all its vertices are integral.Moreover, an integral polytope C is called integrally decomposable if there are integral polytopes A and B such that C=A+B, where both A and B have at least two points, and + denotes the Minkowski sum of polytopes. Otherwise, we say that C is integrally indecomposable.Finally, given two points 𝐩, 𝐪∈ℝ^n we denote by 𝐩𝐪 the line segment starting at 𝐩 and ending at 𝐪.As pointed out in <cit.>, the problem of testing whether a given polytope is integrally indecomposable is NP–complete. Our interest in integrally indecomposable polytopes stems from the following irreducibility criterion obtained by Gao in <cit.>.Gao irreducibility criterion Let K be any field, and let f∈ K[x_1,…,x_n] be a non–zero polynomial not divisible by any of the x_i's. If the Newton polytope of f is integrally indecomposable, then f is absolutely irreducible. The extension of Gao irreducibility criterion to certain rings was given by Koyuncu in <cit.>.Koyuncu irreducibility criterion Let R be a commutative ring, and f∈ R[x_1,…,x_n] a non–zero polynomial not divisible by any of the x_i's. 
Suppose that the coefficients of all the terms forming the vertices of the Newton polytope P_f of f are non–zerodivisors of R. Then, if P_f is integrally indecomposable, f is absolutely irreducible. In the proof of our first main result (see teo1) we plan to use in a crucial way Gao irreducibility criterion combined with the following construction, also due to Gao <cit.>.integrally indecomposable pyramid Let Q be any integral polytope in ℝ^n contained in a hyperplane H and let 𝐯∈ℝ^n be an integral point lying outside of H. Suppose that 𝐯_1,…,𝐯_r are all the vertices of Q. Then, the polytope conv(Q,𝐯) given by the convex hull of 𝐯_1,…,𝐯_r,𝐯 is integrally indecomposable if and only ifgcd(𝐯-𝐯_1,…,𝐯-𝐯_r)=1,where the greatest common divisor is taken over all the components of the vectors 𝐯-𝐯_i.Inspired by the Gao and Koyuncu criteria, we introduce the following definition. Let n≥ 1 be an integer, and let P⊆ℝ^n be a polytope of the form P= conv(𝐯_1,…,𝐯_r) for some 𝐯_1,…,𝐯_r∈ℕ^n. We say that P satisfies the Goldbach condition if either P is integrally indecomposable, or there are points 𝐰_1,…,𝐰_s∈ℕ^n such that: * conv(𝐰_1,…,𝐰_s) is integrally indecomposable. * We have⋂_i=1^s Supp(𝐰_i)=∅.* conv(𝐯_1,…,𝐯_r,𝐰_1,…,𝐰_s) is integrally indecomposable.Our reason for introducing the Goldbach condition on polytopes is given by the following result.from polytopes to polynomials Let K be any field, and let f∈ K[x_1,…, x_n] be a polynomial not divisible by any of the x_i's. Assume that the Newton polytope N(f) of f satisfies the Goldbach condition. Then, we have that either f is absolutely irreducible, or f=f_1+f_2, where both f_1 and f_2 are absolutely irreducible.Set P:=N(f)= conv(𝐯_1,…,𝐯_r). If P is integrally indecomposable, then by Gao irreducibility criterion we have that f is absolutely irreducible. On the other hand, assume that P is not integrally indecomposable. Since P satisfies the Goldbach condition, there is an integrally indecomposable polytope Q:= conv(𝐰_1,…,𝐰_s) for some 𝐰_i∈ℕ^n such that conv(𝐯_1,…,𝐯_r,𝐰_1,…,𝐰_s) is integrally indecomposable. Moreover, since⋂_i=1^s Supp(𝐰_i)=∅,we have that bothf_1:=f+∑_i=1^s 𝐱^𝐰_i, f_2:=-∑_i=1^s 𝐱^𝐰_iare not divisible by any of the x_i's. Therefore, again by Gao irreducibility criterion we can guarantee that both f_1 and f_2 are absolutely irreducible and, by construction, f=f_1+f_2, as we wanted to prove. So, in our next result we will prove a weaker form of GC for 𝒪(𝔸_K^n), where K denotes an arbitrary field, and where the number of irreducible polynomials in the decomposition depends on the number of (non-zero) terms of the particular polynomial H, but with the advantage that we construct the corresponding irreducible polynomials in a fully explicit manner. The only slightly similar explicit result known to the authors in this direction is <cit.>. However, whereas the proof of <cit.> is not constructive, our proof is completely explicit and algorithmic, as we shall see soon.teo1 Let K be any field. Let 𝒪(𝔸_K^n)=K[x_1,…,x_n], with n≥ 2. Then, any polynomial H in 𝒪(𝔸_K^n) with r non-zero terms can be explicitly written as the sum of at most 2r absolutely irreducible polynomials. The zero polynomial can be written as the sum of two absolutely irreducibles.We will distinguish essentially three cases among the monomial terms of H, and in each of the cases we will explicitly generate the corresponding absolutely irreducible polynomials that will be used in the decomposition, where there is a relatively wide spectrum of possibilities for choosing them each time.Before giving the precise construction, we first explain how we plan to proceed. 
* Starting from a multiindex 𝐢 in the support of H, let H_𝐢=a_𝐢𝐱^𝐢 be the corresponding monomial of H. We consider the origin 0=(0,…,0)∈ℝ^n. * If gcd(𝐢)=1, then the situation is quite easy to handle; indeed, because of integrally indecomposable pyramid the line segment 0𝐢 is integrally indecomposable, and hence Gao irreducibility criterion ensures that 𝐱^𝐢-1 is an absolutely irreducible polynomial, so we have thata_𝐢𝐱^𝐢=(a_𝐢𝐱^𝐢-1)+(1),where we add the constant polynomial (+1) to the independent term of H (to be handled later as a particular case). So, we replace the monomial a_𝐢𝐱^𝐢 by the absolutely irreducible polynomial a_𝐢𝐱^𝐢-1.* Now, assume that gcd(𝐢)>1. In this case, the idea is to choose a point 𝐰 and a hyperplane G containing the segment 𝐢𝐰 but not containing the origin, so that we can apply integrally indecomposable pyramid to the triangle 0𝐢𝐰. Now, we give the details. First, let us assume that n≥ 3. Let H_𝐢=a_𝐢𝐱^𝐢 be a monomial of H, where 𝐢=(i_1,…,i_n)∈ℕ^n, and set d:=gcd(𝐢). We have to distinguish two cases. * Assume that d>1. Up to permutation of the variables we can assume, without loss of generality, that there exists a subindex s with i_s≠ 0 such that 3≤ s≤ n. Now, setp:=(∏_j∈ Supp(𝐢)i_j)+2,and 𝐰:=(p,p+1,w_3,…,w_s-1,2i_sp,w_s+1,…,w_n), where the w_t can be any natural numbers for the subindices t≠ 1,2,s. Notice that, by construction, p≥ 3. On the other hand, let G be the hyperplanei_s(1-2p)(x_1-i_1)+(p-i_1)(x_s-i_s)=0.Note that for any (integer) values of w_t, 𝐢,𝐰∈ G, but 0=(0,…,0)∉ G. Indeed, let us explicitly check that 0∉ G. Notice that 0∈ G if and only ifi_s(1-2p)(-i_1)+(p-i_1)(-i_s)=0.Since i_s≠ 0, this is equivalent to saying that(1-2p)i_1+(p-i_1)=0⟺ i_1=1/2.However, i_1∈ℕ, hence we reach a contradiction. Summing up, we have shown that 0=(0,…,0)∉ G.Now, letA_1:=H_𝐢+𝐱^𝐰+1, A_2:=-𝐱^𝐰-1.Since the line segment 𝐢𝐰 lies inside the hyperplane G and 0∉ G, the triangle 0𝐢𝐰 satisfies the hypothesis of integrally indecomposable pyramid. Therefore, 0𝐢𝐰 is integrally indecomposable if and only if gcd(0-𝐢,0-𝐰)=gcd(𝐢,𝐰)=1. But this holds because gcd(𝐢,𝐰) divides gcd(p,p+1)=1. Moreover, by definition A_1 is not divisible by any of the x_i's. Thus, by Gao irreducibility criterion, A_1 is absolutely irreducible. Similarly, by <cit.>, the segment 0𝐰 is integrally indecomposable since gcd(𝐰-0)=gcd(𝐰)=1. So, again by Gao irreducibility criterion, A_2 is also absolutely irreducible. Thus, H_𝐢=A_1+A_2 can be written as the sum of two absolutely irreducible polynomials. It is worth noting that we can apply the former argument for any permutation of the variables x_1, x_2 and x_s, and the original order of the variables should be taken into account for the final decomposition of H as a sum of absolutely irreducible polynomials.* 𝐢=0. So, H_𝐢=c is a constant of K. Then,c=(x_1+c)+(-x_1), where x_1+c and -x_1 are trivially absolutely irreducible polynomials. We could also choose here the irreducible polynomials x_1+x_2+c and -x_1-x_2, in order to give an explicit decomposition into (absolutely) irreducible polynomials that are not monomials.[In the proof of localization one can see in more detail why this fact can be relevant.]Clearly the last two cases apply as well for n=2. In the first case, we only need to distinguish two simple sub-cases: * Assume that i_1≠ 0. In this case, A_1=H_𝐢+x_1^i_1x_2^(i_1+1)^(i_2+1)+1 and A_2=-x_1^i_1x_2^(i_1+1)^(i_2+1)-1 work, since gcd(i_1,(i_1+1)^(i_2+1))=1, (i_1+1)^(i_2+1)≠ i_2, and the line connecting the points (i_1,i_2) and (i_1,(i_1+1)^(i_2+1)) does not meet the origin (0,0). 
Note that we can use as well any pair of the form (i_1^w_1+1,(i_1+1)^(i_2+1)+w_2), for any w_1,w_2∈ℕ. * A similar procedure applies when i_2≠ 0. Summing up, for each non-zero term of H we generate either one or two explicit absolutely irreducible polynomials in the decomposition of H. Thus, H can be explicitly expressed as the sum of at most 2r absolutely irreducible polynomials.From the former proof we can deduce straightforwardly that there are infinitely many decompositions of a fixed polynomial in teo1 as a sum of absolutely irreducible polynomials.In teo1, we show that any polynomial with r non–zero monomials can be expressed as the sum of at most 2r absolutely irreducible polynomials. The reader might think that, in general, 2r is a very pessimistic upper bound. However, it turns out that it cannot always be improved to the naive bound of two summands, as the following remark illustrates, among other things. Note that in the generality of teo1, the classic stronger form of Goldbach's conjecture (i.e., with two prime polynomial summands), where the degree of the summands is bounded by the degree of the original polynomial, is not true. For example, taking inspiration from <cit.>, let R=𝔽_2[x], and consider f(x)=x^2+x. Then, due to the elementary fact that the only irreducible quadratic polynomial in R is x^2+x+1, we see that f(x) cannot be written as the sum of two quadratic or linear irreducible polynomials in R. Nonetheless, our constructive proof of teo1 gives the explicit description f(x)=(x^2+x+1)+(x+1)+x as a sum of three irreducibles.The proof of teo1, replacing Gao irreducibility criterion by Koyuncu irreducibility criterion, also gives us the following result.teo2 Let R be a commutative ring, and let 𝒪(𝔸_R^n)=R[x_1,…,x_n], with n≥ 2. Then, any polynomial H in 𝒪(𝔸_R^n) with r non-zero terms whose coefficients are non–zerodivisors of R can be explicitly written as the sum of at most 2r absolutely irreducible polynomials. The zero polynomial can be written as the sum of two absolutely irreducibles. The proof is essentially the same as the one given in teo1, except that, independently of the value of gcd(𝐢), we will always proceed as if gcd(𝐢)≥ 2. In other words, we will always generate the integrally indecomposable triangle by adding and subtracting a suitable monomial with multi-exponent 𝐰, getting expressions of the form A_1:=H_𝐢+𝐱^𝐰+1, A_2:=-𝐱^𝐰-1.Note that in the corresponding construction of the former polynomials the specific value of gcd(𝐢) is not relevant. We proceed in this way because we need to avoid adding units several times to the independent term of H, since the resulting constant polynomial can be a zero divisor.teo1 also holds if we replace absolutely irreducible by irreducible, K by any integral domain, and 2r by 2. Let us state this fact explicitly, which is an immediate consequence of <cit.>.our result using Pollack theorem Let D be an integral domain, and R=D[x_1,…,x_n], with n≥ 2. Then, any polynomial in R can be written as the sum of two irreducible polynomials. The disadvantage of our result using Pollack theorem is that the proof of <cit.> does not provide a direct way to compute the irreducible summands as fast as the proof of teo1.§ AN ALGORITHMIC IMPLEMENTATION FOR THE DECOMPOSITION OF POLYNOMIALS INTO EXPLICIT (ABSOLUTELY) IRREDUCIBLE ADDITIVE FACTORS Now, we want to illustrate the explicit decomposition found in teo1 through some examples. 
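Before turning to the Macaulay2 examples, the following minimal Python/sympy sketch may help the reader; it is our own illustrative re-implementation of the two-variable case of the construction (not the released Macaulay2 package) and, in the spirit of the proof of teo2, it applies the monomial trick to every non-constant term regardless of its gcd. Running it on the first example below reproduces exactly the decomposition listed there.

```python
import sympy as sp

x, y = sp.symbols("x y")

def goldbach_pieces(f):
    """Decompose f in K[x, y] into absolutely irreducible summands,
    following the two-variable construction in the proofs of teo1/teo2."""
    pieces = []
    for (a, b), c in sp.Poly(f, x, y).terms():
        if (a, b) == (0, 0):
            pieces += [x + c, -x]                     # c = (x + c) + (-x)
        elif a != 0:
            m = x**a * y**((a + 1)**(b + 1))          # auxiliary monomial
            pieces += [c * x**a * y**b + m + 1, -m - 1]
        else:                                         # a == 0 and b != 0
            m = x**((b + 1)**(a + 1)) * y**b          # symmetric choice
            pieces += [c * y**b + m + 1, -m - 1]
    assert sp.expand(sum(pieces) - f) == 0            # sanity check
    return pieces

for piece in goldbach_pieces(x*y + x + y + 1):
    print(piece)
```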
The calculations that are not justified here were done with an algorithm, implemented in Macaulay2 <cit.>, that computes such decompositions.[The interested reader can see the explicit implementation of the algorithm in the following link https://github.com/DAJGomezRamirez/GoldbachDecompostionForPolynomialRings] Our first example is borrowed from <cit.>. Let R=ℚ[x,y], and let f=xy+x+y+1. In this case, its Newton polygon is the square given by the vertices (0,0),(1,0),(0,1),(1,1). As pointed out in <cit.>, this polygon is not integrally indecomposable; in fact, it has four integral summands. First of all, we explain how our method works in Macaulay2 by describing the meaning of the different outputs it produces. On the one hand, the first matrix appearing as output is the one whose rows are exactly the vertices of the Newton polygon of f. On the other hand, the second matrix produces an explicit choice of the points 𝐰 constructed along the proof of teo1. Explicitly, we have thatxy=(xy+xy^4+1)+(-xy^4-1), x=(x+xy^2+1)+(-xy^2-1), y=(y+x^2y+1)+(-x^2y-1), 1=(1+x)+(-x). Let us consider an example with coefficients different from one. Let R=ℚ[x,y,z], and let f=5x^2+3y^2-7z^2-5. Its Newton polytope is the tetrahedron given by the vertices (0,0,0),(2,0,0),(0,2,0),(0,0,2). In this case, we obtain again the two matrices described above, encoding the vertices of the Newton polytope of f and the corresponding choice of points 𝐰.The last example we consider illustrates a calculation involving the equation of a projective cubic hypersurface in three variables with mixed terms. Let R=ℚ[x,y,z], and let f=x^3+3x^2y-4y^3+6z^3. In this case, the output has the same structure. § WEAK FORMS OF GOLDBACH CONJECTURE OVER THE INTEGERS AND ITS LOCALIZATIONS Let us consider our original equation (<ref>) in the case that n_2 can vary freely with H. This case can be considered as a weak form of Goldbach's conjecture. Although a proof of this kind of conjecture over polynomial rings in several variables (as in teo1) is not difficult, it is not completely straightforward either. However, over the classic ring of integers, it is a trivial fact. In other words, by adding enough copies of 2's and 3's, we can express any natural number bigger than one as a finite sum of primes. A slightly more refined way of obtaining this decomposition with more than two primes is by using inductively Bertrand's Postulate, stating that for any real x≥ 1 there exists a prime number between x and 2x (see either <cit.> or <cit.>).Now, if we want to extend this result to non-trivial localizations of ℤ, then we need to be very careful, since when localizing we typically lose prime numbers. Even more, we can lose infinitely many prime numbers in particular localizations of ℤ, so Bertrand's postulate is no longer a strong enough tool for generating the precise representations into prime additive factors that we may need. Let S be a non-trivial multiplicative system of ℤ, i.e., S≠{1},ℤ^ *.[Note that in the first trivial case the localization is equal to ℤ, and in the second trivial case the localization is the field of fractions of ℤ, i.e., ℚ. Now, by definition, no form of Goldbach's conjecture is true over any field, because fields contain no prime numbers (or irreducibles).] Then, the weak form of Goldbach's conjecture over the ring S^-1ℤ is equivalent to saying that for every (positive) integer a, there exist a natural number m∈ℕ, elements s,s_1,…,s_m∈ S and prime numbers p_1,…,p_m not belonging to S such that sa=∑_i=1^ms_ip_i.Note that the last expression seems considerably more constrained than the original version over the integers. 
Indeed, in the localization we can lose almost arbitrarily large collections of prime additive generators, although we gain a certain additional multiplicative freedom in terms of the elements of the multiplicative system S, caused by the fact that in S^-1ℤ each element possesses infinitely many associates. In conclusion, at first sight there is no global manner of extending the weak form of Goldbach's conjecture to S^-1ℤ. Nonetheless, we can ask how the collection of elements of S^-1ℤ that can be written as a finite sum of primes is distributed topologically inside the real (or rational) numbers. To answer this question, we obtain the following initial positive result.density in localization Let S be a non-trivial multiplicative system of ℤ, ℙ_S be the set of prime numbers not belonging to S, and setG:={w=∑_i=1^m s'_ip_i/s_i:s'_1,s_1,…,s'_m,s_m∈ S, p_1,…,p_m∈ℙ_S}⊆ S^-1ℤ.Then, G is a dense subset of ℝ.First, note that S^-1ℤ is a unique factorization domain whose primes (i.e., irreducibles) are the elements of the form s'p/s, where p∈ℙ_S and s',s∈ S.[For several proofs of this fact in another context, see, for example, the proof of localization.] So, G is exactly the collection of elements of the localization that can be written as a finite sum of primes. In this way, in order to see that G is dense in ℝ, it is enough to show that any open interval of real numbers contains an element of G. This is what we prove in what follows.Indeed, let (x_0,y_0)⊆ℝ be an open real interval, and set n_0=min{s∈ S∩ℕ: s>1}. Note that S∩ℕ contains an element bigger than 1, because S is a non–trivial multiplicative set (if s∈ S∖{1,-1}, then s^2∈ S∩ℕ and s^2>1); hence, by the well order of ℕ, n_0 is well defined. Now, fix p∈ℙ_S, with p>0. Note that we can choose such a p, because otherwise all the prime numbers of ℤ would be in S and thus, by the Fundamental Theorem of Arithmetic, S=ℤ^*, contradicting the non-triviality of S. Choose e∈ℕ such that p/n_0^e<y_0-x_0. Set n=min{k∈ℕ: kp/n_0^e≥ y_0}. By the definition of n, it holds that y_0>(n-1)p/n_0^e. On the other hand, by the former definitions we obtain that x_0-y_0<-p/n_0^e and y_0≤ np/n_0^e. Thus, adding the former inequalities we get x_0=y_0+(x_0-y_0)<np/n_0^e-p/n_0^e=(n-1)p/n_0^e.Summing up, we havex_0<(n-1)p/n_0^e<y_0.Moreover, as observed before, p/n_0^e is a prime in S^-1ℤ, and(n-1)p/n_0^e=∑_i=1^n-1p/n_0^e∈ G.Summing up, we have shown that the interval (x_0,y_0) contains an element of G. This finally shows that G is dense in ℝ, which is what we wanted to prove. From density in localization we can derive a natural and much more interesting result expressing real numbers as convergent series of primes in localizations of the integers. The proof is very much like the one justifying that any real number can be expressed as a convergent series of rational numbers, and therefore it is left to the interested reader.series of localized primes Let S be a non-trivial multiplicative system of ℤ, and let ℙ_S be the set of prime numbers not belonging to S. Then, any element v∈ S^-1ℤ (resp. any element v∈ℝ) can be written as a convergent series of primes in S^-1ℤ. In other words, there exist s'_1,s_1,…,s'_n,s_n,…∈ S and p_1,…,p_n,…∈ℙ_S such that v=∑_i=1^∞s'_ip_i/s_i.Notice that series of localized primes holds even when S^-1ℤ possesses only one prime element up to associates. Indeed, in such a case the corresponding series would contain suitable forms of (infinitely many) associates of the single prime element. 
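It is worth noting that the proof of density in localization is effectively an algorithm. The following Python sketch (purely illustrative; we take S to be generated by 2, so that n_0=2, and we fix the positive prime p=7∉ S, both arbitrary choices) produces an element of G inside a given open interval with 0<x_0<y_0.

```python
import math
from fractions import Fraction

def element_of_G(x0, y0, p=7, n0=2):
    """Following the proof of density in localization: return an element
    (n - 1) * p / n0**e of G lying in the open interval (x0, y0)."""
    e = 0
    while Fraction(p, n0**e) >= y0 - x0:   # choose e with p/n0**e < y0 - x0
        e += 1
    step = Fraction(p, n0**e)              # a prime element of S^{-1}Z
    n = math.ceil(y0 / step)               # least k with k*step >= y0
    return (n - 1) * step, n - 1, step

val, copies, prime = element_of_G(Fraction(31, 7), Fraction(9, 2))
print(f"{val} = {copies} copies of {prime}")   # 287/64 lies in (31/7, 9/2)
```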
In fact, in the particular case that S consists exactly of the powers of a prime number p, we can obtain a more concrete and explicit form of the summands given in series of localized primes. Let us state precisely what we mean in the following corollary.

Let x be a real number. Fix a prime number q∈ℕ. Then, there exist an integer m∈ℤ, a collection of prime numbers {p_i}_i≥ m, all different from q, and an increasing collection of integers {n_i}_i≥ m such that

x=∑_i≥ m p_i/q^n_i.

Without loss of generality, we can assume that x≥ 0 (otherwise, we consider -x and multiply the final (series) representation by minus one). By series of localized primes, we obtain a series representation with summands of the form s'_ip_i/s_i, where s'_i and s_i are powers of q, and p_i is a prime number different from q. Moreover, by the proof of series of localized primes we can assume that each one of these summands is positive. Thus, by simplifying each term and by rearranging all the terms in the series, we can rewrite the series so that the negatives of the final exponents of q form an increasing sequence. This does not affect the result because any rearrangement of an absolutely convergent series is also convergent and has the same limit (see, for instance, <cit.>). Thus, by considering the permuted series, we obtain our desired representation.

Now, we plan to formulate a different variant of Goldbach conjecture, inspired by the results obtained along this section. Let r∈ℕ. We say that a commutative ring with unity R satisfies the r-form of Goldbach's conjecture if any element of the ring can be written as a sum of r irreducible elements of R. In the following result we prove that, in order to verify the r-form of Goldbach's conjecture in non-trivial localizations of the integers, it is enough to verify this property on an arbitrarily small interval containing zero, or on the interval of elements bigger than an arbitrarily large fixed parameter.

sufficient condition Let S be a non-trivial multiplicative system of ℤ, and let ℙ_S be the set of prime numbers not belonging to S. Fix ε,n>0, where n∈ℕ and ε∈ℝ. Assume that one of the following two conditions holds.

* The r-form of Goldbach's conjecture holds for any w∈ (-ε,ε)∩ S^-1ℤ.
* The r-form of Goldbach's conjecture holds for any w∈ (n,∞)∩ S^-1ℤ.

Then, the r-form of Goldbach's conjecture holds for S^-1ℤ.

Let v∈ S^-1ℤ and s∈ S, with s>1. Choose m∈ℕ such that v/s^m∈(-ε,ε) (resp. such that s^mv∈ (n,∞)). Then, by hypothesis there exist s'_1,s_1,…,s'_r,s_r∈ S and p_1,…,p_r∈ℙ_S such that v/s^m=∑_i=1^rs'_ip_i/s_i (resp. s^mv=∑_i=1^rs'_ip_i/s_i). Then, v=∑_i=1^rs^ms'_ip_i/s_i (resp. v=∑_i=1^rs'_ip_i/(s^ms_i)), giving us a representation of v as the sum of r irreducibles. This finishes our proof.

§ GOLDBACH'S CONJECTURE OVER SPECIAL CLASSES OF LOCALIZED RINGS

Our next goal will be to extend teo1 to suitable localizations of coordinate rings of affine varieties over unique factorization domains. With that purpose in mind, we present the following construction.

making irreducibility in a localization Let K be any field, and let R=K[x_1,…, x_n] with n≥ 2. For any 𝐢∈ℕ^n with |𝐢|>1 (i.e., such that the monomial 𝐱^𝐢 has degree at least two), set

S_𝐢:={q(x_1,…,x_n)=𝐱^𝐢+𝐱^𝐰+1: 𝐰∈ℕ^n, q is absolutely irreducible},
T_𝐢:={q(x_1,…,x_n)=𝐱^𝐰+1: 𝐰∈ℕ^n, q is absolutely irreducible},
U_𝐢:={x_1+x_2+c: c∈ K}, W_𝐢:=S_𝐢∪ T_𝐢∪ U_𝐢.

Moreover, we also set

W:=⋃_𝐢∈ℕ^n, |𝐢|>1 W_𝐢.

Note that the set W fulfills the condition that it contains all the (absolutely) irreducible polynomials that act as the additive factors in the proof(s) of teo1 and teo2.
In other words, most of the polynomials of W are the ones that allow us to guarantee the weak form of the Goldbach condition for arbitrary polynomials in the corresponding polynomial rings. Nonetheless, Gao irreducibility criterion and related results in <cit.> and <cit.> give us the freedom to choose, systematically and explicitly, different families of (absolutely) irreducible polynomials as alternative classes of additive factors in alternative proofs of these theorems. These facts motivate the following definition.

Let D be an integral domain, R=D[x_1,…,x_n], and W a collection of (absolutely) irreducible polynomials in R. We say that W is a system of explicit irreducible polynomials if W can be chosen explicitly as an alternative collection of (absolutely) irreducible polynomials in the proof(s) of teo1 and teo2.

It is straightforward to see that making irreducibility in a localization gives an explicit example of a system of explicit irreducible polynomials. Our next step will be to extend teo1 and teo2 in an explicit manner to special kinds of localizations of rings of polynomials over a suitable ring. The utility of the former definition lies in the fact that it gives more flexibility, and a wider range of possibilities, to the collection of multiplicative systems that we can use in the generalization given by the following theorem.

localization Let R be a unique factorization domain (UFD), and let 𝒪(𝔸_R^n)=R[x_1,…,x_n], with n≥ 2. Let W be a system of explicit irreducible polynomials in 𝒪(𝔸_R^n). Let S⊊𝒪(𝔸_R^n) be a multiplicative system generated (multiplicatively) by a set of irreducible polynomials S_0 such that S_0∩ W=∅. Let T=S^-1𝒪(𝔸_R^n) be the localization of 𝒪(𝔸_R^n) with respect to S. Then, any element L in T with r non-zero terms can be explicitly written as the sum of at most 2r irreducibles.

First, let us give three simple proofs of a central elementary fact in this context: if H is an irreducible polynomial of 𝒪(𝔸_R^n) such that H∉ S, then H/1 is an irreducible element of T.

Sub-proof 1: Let us assume, for the sake of contradiction, that H/1 is a reducible element of T. So, there exist irreducible elements I_1,⋯, I_m of 𝒪(𝔸_R^n), not belonging to S, with m≥ 2, and Z∈ S such that

ZH=I_1⋯ I_m.

Let us write Z as a product of irreducible elements Z_1,⋯,Z_q∈ S. Therefore, the last equation becomes

Z_1⋯ Z_qH=I_1⋯ I_m.

Since R[x_1,⋯, x_n] is a unique factorization domain as well (because R is one) and H is irreducible, up to associates (or units) we can cancel H against one I_j, for some j∈ [m]. Let us assume without loss of generality that j=1. So, we derive the equation

Z_1⋯ Z_q=I_2⋯ I_m,

which contradicts the unique factorization in 𝒪(𝔸_R^n)=R[x_1,…,x_n], due to the fact that I_2 is an irreducible polynomial not belonging to S, while all the Z_d belong to S. An immediate consequence of this proof is that if A∈ S, then the element H/A is irreducible in T.

Sub-proof 2: First of all, since R is a UFD, we have that R[x_1,…,x_n] is a UFD. Since the localization of a UFD is also a UFD, we have that T=S^-1𝒪(𝔸_R^n) is a UFD. Now, let H∈ R[x_1,…,x_n] be an irreducible polynomial such that H∉ S. We claim that H/1 is irreducible in T. Indeed, since H is irreducible and R[x_1,…, x_n] is a UFD, we have that (H) is a prime ideal of R[x_1,…,x_n]. Moreover, since H∉ S we also have that (H/1) is a prime ideal of T. In this way, since (H/1) is a prime ideal of T, and T is an integral domain, we have (see for instance <cit.>) that H/1 is irreducible in T.
In particular, we have that if A∈ S, then the element H/A is irreducible in T.

Sub-proof 3: Since R is a UFD, we have that R[x_1,…,x_n] is a UFD. Since the localization of a UFD is also a UFD, we have that T=S^-1𝒪(𝔸_R^n) is a UFD. Now, let H∈ R[x_1,…,x_n] be an irreducible polynomial such that H∉ S. We claim that H/1 is irreducible in T. Indeed, S is a multiplicative set generated by irreducible polynomials. Since R[x_1,…,x_n] is a UFD, any irreducible is prime, and therefore S is generated by prime elements. Therefore, using <cit.>, we have that the extension R[x_1,…, x_n]⊂ T is inert in the sense of Cohn (see for instance <cit.>). Moreover, we also have that the units of T that belong to R[x_1,…,x_n] are exactly the units of R[x_1,…, x_n]. Therefore, using <cit.>, we have that H is irreducible in R[x_1,…,x_n] if and only if H/1 is irreducible in T, proving our claim. In particular, we have that if A∈ S, then the element H/A is irreducible in T.

Finally, let us consider a rational polynomial in T, say H/A, where H∈𝒪(𝔸_R^n) and A∈ S. By teo2, H can be written as the sum of at most 2r absolutely irreducible polynomials, i.e.,

H=∑_b=1^pI_b,

for absolutely irreducible polynomials I_1,⋯, I_p, with p≤ 2r. Moreover, by construction of S, I_b∉ S for all b∈ [p]. Thus, by the former central elementary fact,

H/A=∑_b=1^p(I_b/A),

where the elements I_b/A are irreducible in T for all b∈ [p]. This finishes our constructive proof.

Notice that, since R is an integral domain, in the proof of localization we can choose the explicit irreducible elements described in the proof of teo1 if we want. A particular special case of localization is given by the next statement.

Let R be a unique factorization domain, and let 𝒪(𝔸_R^n)=R[x_1,…,x_n], with n≥ 2. Let S_m⊊𝒪(𝔸_R^n) be the multiplicative system consisting of the monomials of R[x_1,…,x_n]. Let T=S_m^-1𝒪(𝔸_R^n) be the localization of 𝒪(𝔸_R^n) with respect to S_m. Then, any element L in T with r non-zero terms can be explicitly written as the sum of at most 2r irreducibles. The zero element can be written as the sum of two irreducibles.

In localization we can set S=S_m, due to the fact that none of the irreducible polynomials appearing in making irreducibility in a localization is a monomial.

One natural way to generalize teo1 to coordinate rings of varieties would be to consider highly simple coordinate rings of varieties given by irreducible polynomials produced by Gao irreducibility criterion and, subsequently, to try to use the same method as in the proof of teo1 for constructing explicitly the (absolutely) irreducible polynomials for each monomial of the polynomial under consideration. However, this methodological line of generalization does not work so straightforwardly, due to the fact that in a quotient ring the property of being (absolutely) irreducible can materialize in a strongly different manner than in the ring of polynomials. Let us show exactly what we mean with the following example.

Consider the polynomial g=w^p x^{p+1}+y^{2pi}∈ R:=K[w,x,y], for some positive integers p,i∈ℕ, which is (absolutely) irreducible due to Gao irreducibility criterion. Set V=V(g). Then, in 𝒪(V)=R/(g) a typical polynomial that we considered in the constructive proof of teo1 has the form A=w^p x^{p+1} y^{2pi}+1. Now, in R, A is an (absolutely) irreducible polynomial by Gao irreducibility criterion. Nonetheless, in 𝒪(V), A turns out to be a reducible polynomial. Indeed, regard both polynomials g and A as polynomials in the variable y; that is, g, A∈ K[x,w][y].
If we perform here the Euclidean division of A by g we obtain

A=(w^p x^{p+1})g+(1-w^{2p}x^{2(p+1)}),

and therefore, keeping in mind this equality, we clearly have that

A≡ 1-w^{2p}x^{2(p+1)}≡ (1-w^p x^{p+1})(1+w^p x^{p+1}) (mod g).

In this way, if W,X,Y denote the classes in 𝒪(V) of the corresponding variables in R, then the following equation holds in 𝒪(V):

W^p X^{p+1} Y^{2pi}+1=(W^p X^{p+1}+1)(Y^{2pi}+1).

This equation, taking into account that y^{2pi}≡ -w^p x^{p+1} (mod g), can also be written in the following way:

W^p X^{p+1} Y^{2pi}+1=(1-Y^{2pi})(1+Y^{2pi})=(1-Y^{pi})(1+Y^{pi})(1+Y^{2pi}).

Finally, using that 1-Y^{pi}=(1-Y)(Y^{pi-1}+Y^{pi-2}+…+Y+1), we end up with the following factorization of our polynomial A in R/(g):

W^p X^{p+1} Y^{2pi}+1=(1-Y)(Y^{pi-1}+Y^{pi-2}+…+Y+1)(1+Y^{pi})(1+Y^{2pi}).

In conclusion, in 𝒪(V) some of the fundamental (absolutely) irreducible polynomials used for our additive decomposition of polynomials are not (absolutely) irreducible anymore.

§ CHARACTERIZING GOLDBACH'S CONJECTURE OVER SUITABLE FORMS OF FORCING ALGEBRAS

Let us continue with an elementary result involving a characterization of GC in the special case of forcing algebras over an algebraically closed field K. Forcing algebras emerge naturally when we want to translate, within a canonical algebraic structure, how close an element f of a commutative ring with unity R is to belonging to a finitely generated ideal I=(f_1,…,f_m)⊆ R <cit.>.

Goldbach conjecture and forcing algebras Let K be any field, let R=K[x_1,…,x_n] be the ring of polynomials over K in finitely many variables; let f,f_1,…,f_n∈ K, with f_i≠ 0 for some index i; and let

A=K[x_1, …, x_n] /(f_1x_1 + … + f_nx_n+f )

be the corresponding forcing algebra. Then, the following statements hold.

* Assume that n≥ 3. Then, any element of A can be written as the sum of two irreducible elements of A.
* If, in addition, K is algebraically closed, then any element of A can be written as the sum of two irreducible elements if and only if n≥ 3.

First of all, we prove part (i). Indeed, assume that n≥ 3. Then, we have a K-algebra isomorphism

A≅ K[x_1,…,x_{i-1},x_{i+1},x_{i+2},…,x_n]

given by sending x_i↦ -∑_j≠ i(f_j/f_i)x_j-(f/f_i) and x_r↦ x_r for r≠ i. So, A is isomorphic to the ring of polynomials in (n-1)≥ 2 variables. Thus, by <cit.>, any non-constant polynomial of A of degree m≥ 1 can be written as the sum of two irreducible polynomials of degree m. Now, if h is a constant polynomial in A, then h can be written as the sum of two irreducible polynomials h=x_j+(-x_j+h), where j≠ i, and x_j and -x_j+h are obviously irreducible polynomials in A. This completes the proof of part (i).

In order to prove (ii), hereafter we assume that K is algebraically closed. If n≥ 3 then we are done by part (i), so we suppose that n ≤ 2. In the case that n=1, by the same argument as before, A is isomorphic to K. Therefore, A does not fulfill our thesis, since all the elements of A but zero are units, which by definition are not irreducible. When n=2, A≅ K[T]. Since K is algebraically closed, the only irreducible polynomials of A have degree one. Then, no polynomial in A of degree ≥ 2 can be written as the sum of two irreducible polynomials.

§ THE STRONG GOLDBACH CONDITION ON SPECIAL CLASSES OF COORDINATE RINGS OF AFFINE VARIETIES OVER SEVERAL TYPES OF FIELDS

Our next proposition involves a certain class of affine varieties whose rings of regular functions (or coordinate rings) fulfill the stronger version of Goldbach's conjecture appearing in Goldbach conjecture and forcing algebras, i.e., any regular function can be written as the sum of two irreducible regular functions.
Let us formulate this property in a precise manner before stating our result. Let R be a commutative ring. We say that R satisfies the Strong Goldbach Condition (SGC) if any element of R can be written as the sum of two irreducible elements of R.

prop2 Let K be any field, and let f_1,…,f_r∈ K[x_1,…,x_n] be such that X_n=V(f_1,…,f_r)⊊𝔸_K^n is an irreducible affine variety with infinitely many points. Let X_{n+1}=V(f_1,…,f_r)⊆𝔸_K^{n+1} be the corresponding irreducible variety embedded in 𝔸_K^{n+1}. Then, the ring of regular functions of X_{n+1}, 𝒪(X_{n+1}), fulfills the SGC.

It is straightforward to see that

𝒪(X_{n+1}) ≅ K[x_1,…,x_{n+1}]/I(X_{n+1}) ≅ K[x_1,…,x_{n+1}]/(f_1,…,f_r) ≅ (K[x_1,…,x_n]/(f_1,…,f_r))[x_{n+1}]=𝒪(X_n)[x_{n+1}].

Moreover, by hypothesis and by the Hilbert Basis Theorem, the ring 𝒪(X_n) is a Noetherian integral domain with infinitely many maximal ideals (corresponding to the infinitely many points of X_n). Thus, the ring 𝒪(X_n) fulfills the hypothesis of <cit.>. Therefore, any polynomial of degree m≥ 1 in 𝒪(X_n)[x_{n+1}] can be expressed as the sum of two irreducibles of degree m. With the same argument, we also have that any constant polynomial can be written as the sum of two irreducible polynomials. In conclusion, the ring 𝒪(X_{n+1}) satisfies the SGC.

Our next corollary involves an equivalent condition in terms of the dimension of the corresponding affine variety.

dimension zero Let K be a field, and let X_n=V(f_1,…,f_r)⊊𝔸_K^n be an irreducible affine variety for some polynomials f_1,…,f_r∈ K[x_1,…,x_n]. Let X_{n+1}=V(f_1,…,f_r)⊆𝔸_K^{n+1} be the corresponding irreducible variety embedded in 𝔸_K^{n+1}. Assume that dim(X_n)>0. Then, 𝒪(X_{n+1}) fulfills the SGC.

Due to <cit.>, we have that dim(X_n)>0 if and only if the affine variety X_n has infinitely many points. So, our corollary follows immediately from prop2.

In the case of algebraic varieties over an algebraically closed field, the fact that an irreducible variety has infinitely many points can be formulated in several equivalent ways. We want to single out the following one in the next statement.

the case of an algebraically closed field Let K be an algebraically closed field, and let X_n=V(f_1,…,f_r)⊊𝔸_K^n be an irreducible affine variety for some polynomials f_1,…,f_r∈ K[x_1,…,x_n]. Let X_{n+1}=V(f_1,…,f_r)⊆𝔸_K^{n+1} be the corresponding irreducible variety embedded in 𝔸_K^{n+1}. Assume that 𝒪(X_n) is an infinite dimensional K-vector space. Then, 𝒪(X_{n+1}) fulfills the SGC.

First of all, since K is algebraically closed, by the Finiteness Theorem <cit.> we have that 𝒪(X_n) is an infinite dimensional K-vector space if and only if the affine variety X_n has infinitely many points. So, our corollary follows immediately from prop2.

Note that, in the case of an algebraically closed field, by the Finiteness Theorem <cit.> (see also <cit.>), we can replace the condition involving the infinite-dimensionality of 𝒪(X_n) as a K-vector space by any other of the conditions described there, which involve certain classes of monomials (not) belonging to the leading terms of the polynomials in I=(f_1,…,f_r)⊆𝒪(X_n), as well as the corresponding Gröbner basis. The reader will easily note that, in order to apply prop2, we need to guarantee that our affine variety is positive dimensional and irreducible.
Irreducibility is, in general, a property that is not so easy to check; however, we want to single out the case of plane algebraic curves defined over the reals, where irreducibility can be characterized in a nice way.

the case of real plane algebraic curves Let f∈ℝ[x,y] be an irreducible indefinite polynomial, let X:=V(f)⊆𝔸_ℝ^2 be the corresponding affine real plane algebraic curve, and let Y:=V(f)⊆𝔸_ℝ^3 be the corresponding variety embedded in 𝔸_ℝ^3. Then, the ring of regular functions of Y, 𝒪(Y), fulfills the SGC.

On the one hand, since f is indefinite and irreducible, X is irreducible by <cit.>. On the other hand, <cit.> implies that X has infinitely many points. In this way, the result follows immediately, again from prop2.

It is known <cit.> that, given any field K and two polynomials f, g∈ K[x,y], both of positive degree and coprime, the intersection V(f)∩ V(g) is either empty or finite. Therefore, in this case we cannot apply prop2.

§ SOME ENLIGHTENING EXAMPLES IN THE ONE DIMENSIONAL CASE

As we have already illustrated in the case of real plane algebraic curves, for affine curves the situation concerning Goldbach's condition is more delicate. We want to illustrate this fact with a pair of concrete examples.

Let n∈ℕ, n≥ 1, let K be an algebraically closed field, and let f=x^ny-1∈ K[x,y], X=V(f). Then, dim(X)=dim(𝒪(X))=1, and, since the class of x^n is invertible in 𝒪(X), we immediately check that

𝒪(X)≅ K[x,x^-1,y]/(y-x^-n)≅ K[x,x^-1].

Now, since K is algebraically closed, any Laurent polynomial h∈ K[x,x^-1] such that the difference between the degree of h and the valuation of h is strictly bigger than one can be factored as h=x^zh', where z∈ℤ and h'∈ K[x] with deg(h')≥ 2. Such polynomials are not irreducible because, K being algebraically closed, the only irreducible polynomials in K[x] are the ones of degree one. Thus, we deduce that the irreducible elements in K[x,x^-1] are of the form x^w(a_1x+a_0), where w∈ℤ and a_1,a_0∈ K. From this fact, we can conclude that any Laurent polynomial with at least five terms cannot be written as the sum of two irreducible polynomials. So, 𝒪(X) does not fulfill the SGC.

Let K be any field, let f=x^3y^2-1∈ K[x,y], X=V(f). Again, dim(X)=dim(𝒪(X))=1, and, because x^3 is invertible in 𝒪(X), one sees that

𝒪(X)≅ K[x,x^-1,y]/(y^2-x^-3).

So, each element G∈𝒪(X) can be written essentially in the form

G=H_1(X,X^-1)Y+H_0(X,X^-1),

where the capital letters denote the classes in 𝒪(X) of the corresponding variables and of polynomials h_0,h_1∈ K[x,x^-1]. So, G can be written as follows:

G=[(H_1(X,X^-1)-1)Y-1]+[Y+(H_0(X,X^-1)+1)].

Now, one can easily check that both classes [(h_1(x,x^-1)-1)y-1] and [y+(h_0(x,x^-1)+1)] are irreducible in 𝒪(X), due to the fact that each of them has coprime coefficients in K[x,x^-1]. In conclusion, 𝒪(X) satisfies the SGC.

§ ACKNOWLEDGEMENTS

Part of this work was done when D. A. J. Gómez Ramírez visited the University of Valladolid in November, 2023. The authors would like to thank Ricardo García and Pedro González Pérez for some comments concerning the content of this paper. Alberto F. Boix was partially supported by Spanish Ministerio de Economía y Competitividad grant PID2019-104844GB-I00. D. A. J. Gómez Ramírez would like to thank Michelle Gómez for all her support and love.
http://arxiv.org/abs/2312.16524v1
{ "authors": [ "Alberto F. Boix", "Danny A. J. Gómez-Ramírez" ], "categories": [ "math.NT", "math.AC", "math.AG", "11R09, 52B20" ], "primary_category": "math.NT", "published": "20231227110442", "title": "On some algebraic and geometric extensions of Goldbach's conjecture" }
Universal Pyramid Adversarial Training for Improved ViT Performance
Ping-yeh Chiang, Yipin Zhou, Omid Poursaeed, Satya Narayan Shukla, Ashish Shah, Tom Goldstein, Ser-Nam Lim
===================================================================================================

Recently, Pyramid Adversarial training <cit.> has been shown to be very effective for improving clean accuracy and distribution-shift robustness of vision transformers. However, due to the iterative nature of adversarial training, the technique is up to 7 times more expensive than standard training. To make the method more efficient, we propose Universal Pyramid Adversarial training, where we learn a single pyramid adversarial pattern shared across the whole dataset instead of the sample-wise patterns. With our proposed technique, we decrease the computational cost of Pyramid Adversarial training by up to 70% while retaining the majority of its benefit on clean performance and distribution-shift robustness. In addition, to the best of our knowledge, we are also the first to find that universal adversarial training can be leveraged to improve clean model performance.

§ INTRODUCTION

Human intelligence is exceptional at generalizing to previously unforeseen circumstances. While deep learning models have made great strides with respect to clean accuracy on a test set drawn from the same distribution as the training data, a model's performance often degrades significantly when confronted with distribution shifts that are qualitatively insignificant to a human. Most notably, deep learning models are still susceptible to adversarial examples (perturbations that are deliberately crafted to harm accuracy) and out-of-distribution samples (images that are corrupted or shifted to a different domain).

Adversarial training has recently been shown to be a promising avenue for improving both clean accuracy and robustness to distribution shifts. While adversarial training was historically used for enhancing adversarial robustness, recent works <cit.> found that properly adapted adversarial training regimens could be used to achieve state-of-the-art results (at the time of publication) on Imagenet <cit.> and out-of-distribution robustness <cit.>.

However, both proposed techniques <cit.> use up to 7 times the standard training compute due to the sample-wise and multi-step procedure for generating adversarial samples. The expensive cost has prevented them from being incorporated into standard training pipelines and from more widespread adoption. In this paper, we seek to improve the efficiency of the adversarial training technique so that it can become more accessible to practitioners and researchers.

Several prior works <cit.> have proposed methods to increase the efficiency of adversarial training in the context of adversarial robustness, where they try to make models robust to deliberate malicious attacks. <cit.> proposed reusing the parameter gradient for training during the sample-wise adversarial step for faster convergence. Later, <cit.> proposed making adversarial training more efficient with a single-step adversary rather than the expensive multi-step adversary. However, all prior works focus on the efficiency trade-off concerning adversarial robustness rather than clean accuracy or out-of-distribution robustness.
In the setting of adversarial robustness, one often assumes a deliberate and all-knowing adversary. Security is crucial, yet in reality, deep learning systems already exhibit a significant number of errors without adversaries, such as self-driving cars making mistakes in challenging environments. Consequently, clean accuracy and robustness to out-of-distribution data are typically prioritized in most industrial settings. Yet, few works seek to improve the efficiency trade-off for the out-of-distribution metric. Our work aims to fill this gap.

By shifting the context from adversarial robustness to clean accuracy and out-of-distribution robustness, we can free ourselves from certain constraints, such as the need to train the model on sample-wise adversaries, which are very expensive to compute. Instead, we can leverage universal perturbations, which are shared across the whole dataset. By leveraging this simple idea, we can generate adversarial samples for free while getting more performance improvement on clean accuracy compared to prior work <cit.>.

In this paper, we focus our experiments on the Vision Transformer architecture <cit.>. We focus on this architecture as it is the most general and scalable architecture: it applies to many domains, including vision, language, and audio, while simultaneously achieving SOTA on many of them. We believe focusing on this architecture will lead to more valuable techniques for the community.

In summary, here are our three main contributions:

* We propose Universal Pyramid Adversarial training that is 70% more efficient than the multi-step approach while increasing ViT's clean accuracy more than Pyramid Adversarial training.
* We evaluate our technique on 5 out-of-distribution datasets and find that Universal Pyramid Adversarial training effectively increases distributional robustness and is competitive with Pyramid Adversarial training while being efficient.
* To the best of our knowledge, we are the first to identify universal adversarial training as a viable technique for improving clean performance and out-of-distribution robustness on Imagenet-1K. In our ablations, we found that the pyramid structure is critical for the performance gain and that plain universal adversarial training is detrimental to performance, unlike <cit.>, which found both instance-wise adversarial training and pyramid adversarial training to be beneficial.

§ RELATED WORK

Improving the efficiency of adversarial training has been widely studied <cit.>, but mainly in the context of adversarial robustness. <cit.> proposed reusing the parameter gradients from the adversarial step. By reusing the free parameter gradients for training, they were able to achieve much faster convergence. Even though the proposed approach was much more efficient, <cit.> could not reach the same robustness level as the original multi-step training on Imagenet-1K. <cit.> proposed making the iterative attack cheaper by updating the noise based on the Hamiltonian functions of the first few layers. <cit.> proposed using a single-step adversary for training instead of a multi-step adversary.
They found that random initialization and early stopping could prevent adversarial over-fitting, where label leakage happens when using adversaries with fewer steps. <cit.> proposed reusing the adversarial perturbations between epochs, based on the observation that adversarial noises are often transferable. The downside of the method is that the memory requirement grows with the data size, which can be quite large given the size of modern datasets. We differ from all the prior works in that we aim to investigate the efficiency gain of adversarial training in the context of clean accuracy. By focusing on clean accuracy and out-of-distribution robustness, we gain more flexibility concerning the formulation of the min-max problem.

<cit.> was the first paper that showed adversarial training could improve the clean performance of convolutional networks. To achieve this, <cit.> employed split batchnorms (AdvProp) for adversarial and clean samples. They argued that clean and adversarial samples have very different distributions and that split batchnorms are needed to make optimization easier. Before <cit.>, the community commonly believed that adversarial training leads to a decrease in clean accuracy <cit.>.

In a similar line of work, <cit.> proposed a faster variant of AdvProp <cit.> that makes the training speed comparable to standard training while retaining some of AdvProp's benefits. Even though the proposed method is more efficient, it substantially trades away the performance of the original multi-step method. Our work is similar to <cit.> because we also focus on improving the efficiency of the adversarial training process, but we differ in that we focus on the ViT architecture with Pyramid Adversarial training, where their proposed approach is not applicable. Also, we achieve a performance gain that is comparable to or better than the multi-step approach, whereas the previous method trades off performance for efficiency.

Several recent approaches have shown that adversarial training can be used to improve the performance of vision transformers. <cit.> showed that ViT relies more on low-frequency signals than high-frequency signals; by adversarially training the model on high-frequency signals, <cit.> further boosted ViT's performance. <cit.> showed that by converting images to discrete tokens, adversarial training could further increase the performance of ViTs. Later, <cit.> showed that by incorporating the pyramid structure into standard adversarial training, they could boost the performance of ViT, where the split-batchnorm idea introduced in <cit.> was not directly applicable to ViT models. In our work, we focus on the Pyramid Adversarial training technique proposed by <cit.>, since it is the best-performing method that achieves SOTA on multiple fronts while being applicable to ViT, a more modern architecture. The main drawback of <cit.> is the significantly higher training time, which can go up to 7x the standard training time.
In this work, we propose Universal Pyramid Adversarial training to improve the efficiency of Pyramid Adversarial training while retaining its effectiveness.

While universal adversarial samples have been used in prior work <cit.> for training to defend against universal adversarial attacks, our proposed approach differs from them in that it leverages these samples to improve clean model performance. <cit.> finds that universal perturbations tend to slide images into some classes more than others, and that by updating universal perturbations in a class-wise manner, better robustness can be achieved compared to <cit.>. Both prior works <cit.> show that universal adversarial training consistently decreases the performance of the model, similar to standard adversarial training. Our ablation study shows that incorporating both the clean loss and the pyramid structure is crucial for the performance gain observed with Universal Pyramid Adversarial training. Without our proposed modifications, universal adversarial training consistently decreases the clean performance of the model. To the best of our knowledge, we are the first to show that universal adversarial training can be leveraged for improved model performance.

§ METHOD

In this section, we will go over the formulation of the proposed adversarial training objective, the pyramid structure that we leverage from <cit.>, and our more efficient Universal Pyramid Adversarial training.

§.§ Adversarial Training

Adversarial training remains one of the most effective methods for defending against adversarial attacks <cit.>. It is aimed at solving the following min-max optimization problem:

min_θ E_(x, y)∼ D[ max_δ∈ B L(f(x+δ; θ), y) ],

where θ is the model parameter, δ is the adversarial perturbation, L is the loss function, D is the data distribution, and B is the constraint for the adversarial perturbation, which is often an ℓ_∞ ball. The inner objective seeks to find an adversarial perturbation within the constraint, and the outer objective aims to minimize the worst-case loss by optimizing the model parameters. While the method effectively improves robustness to adversarial attacks, it often reduces clean performance, which is not acceptable for most practical applications.

Since our goal is to improve performance as opposed to worst-case robustness, we train the model on the following formulation (similar to <cit.>) instead, where the clean loss is optimized in addition to the adversarial loss:

min_θ E_(x, y)∼ D[ L(f(x; θ), y) + λmax_δ∈ B L(f(x+δ; θ), y) ].

Here, λ controls the trade-off between the adversarial and clean losses. However, adding the clean loss alone is often not sufficient for improving the performance of the model <cit.>. Additional techniques, such as split batchnorms <cit.> and pyramid structures <cit.>, are necessary for the performance gain.

The main problem with the adversarial training formulation is that the inner maximization is often expensive to compute, requiring several steps to approximate accurately <cit.>. Specifically, for each iteration of the adversarial step, one needs a full forward and backward pass on all of the examples in a batch (see Figure <ref>).
For example, if five steps are used, which is the setting in both <cit.>, then five forward and backward passes are needed, so the generation of adversarial samples alone is already five times more expensive than regular training. In addition, one needs to use both the clean and the generated adversarial samples for training, which doubles the batch size; the larger batch size increases the cost by another factor of two. When training with a 5-step adversary, the total computational cost is therefore seven times that of standard training. In Section <ref>, we describe our proposed Universal Pyramid Adversarial training, where we can substantially reduce the computational cost.

§.§ Pyramid Structure

Adversarial training alone, even when coupled with the clean loss, does not typically increase the performance of the model <cit.>. In order to increase clean accuracy, certain techniques have to be used. Here we leverage the pyramid structure from <cit.>, which aims to endow the adversarial perturbation with more structure so that the adversary can make larger edits without changing an image's class. The pyramid adversarial noise is parameterized with different levels of scales as follows:

δ = ∑_s ∈ S m_s· C_B(δ_s),

where C_B clips the noise within the constraint set B, S is the set of all scales used, m_s is the multiplicative constant, and δ_s is the perturbation at scale s. For δ_s at a given scale s, the s × s pixels within a square tile share a single parameter, giving greater structure to the noise. Since the larger scales can often tolerate more changes, larger m_s at the coarser scales allow us to update the coarser noises more quickly relative to the granular noise.

§.§ Universal Pyramid Adversarial Training

While Pyramid Adversarial training <cit.> is effective at increasing clean model performance, it is seven times as expensive as standard training. To address this, we propose Universal Pyramid Adversarial training, an efficient adversarial training approach to improve model performance on clean and out-of-distribution data. Our proposed approach learns a universal adversarial perturbation with pyramid structure, thus unifying both the effectiveness of Pyramid Adversarial training and the efficiency of universal adversarial training <cit.>. Specifically, we attempt to solve the following objective:

min_θmax_δ∈ B E_(x, y)∼ D[ L(f(x; θ), y) + λ L(f(x+δ; θ), y) ].

With this objective, we only have to solve for a single universal adversarial pattern that can be shared across the whole dataset, and we do not have to optimize a new adversary for each sample. Even though the two objectives look similar, they are not the same: due to Jensen's inequality, Equation <ref> is always strictly upper-bounded by Equation <ref>. We describe the complete method in Algorithm <ref>. This yields up to a 70% saving compared to the 5-step sample-wise approach (see Table <ref>). Further, we update the universal adversarial pattern during the backward pass of training, where we can get the gradients of δ for free (see Figure <ref> for an illustration of how universal adversarial training can help save compute).
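To make the procedure concrete, the following is a minimal PyTorch sketch of one training step under this objective. It is an illustration rather than a verbatim transcription of Algorithm <ref>: the helper names, perturbation step size, and image resolution are assumptions, while the scales, multipliers, and radius follow the setting used in our experiments.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of one Universal Pyramid Adversarial training step.
SCALES, MULTIPLIERS, RADIUS, IMG = [32, 16, 1], [20.0, 10.0, 1.0], 8 / 255, 224

# One shared (universal) set of pyramid parameters for the whole dataset.
deltas = [torch.zeros(1, 3, IMG // s, IMG // s, requires_grad=True) for s in SCALES]

def pyramid():
    """delta = sum_s m_s * C_B(delta_s), each scale upsampled to full resolution."""
    parts = [m * F.interpolate(torch.clamp(d, -RADIUS, RADIUS), size=IMG, mode='nearest')
             for d, m in zip(deltas, MULTIPLIERS)]
    return sum(parts)

def train_step(model, opt, x, y, lam=1.0, step=1e-2):
    # Clean loss plus lambda-weighted adversarial loss on the shared pattern.
    loss = F.cross_entropy(model(x), y) + lam * F.cross_entropy(model(x + pyramid()), y)
    opt.zero_grad()
    loss.backward()           # one backward pass yields the parameter gradients
    opt.step()                # and, for free, the gradients of the shared deltas
    with torch.no_grad():     # gradient *ascent* on the universal perturbation
        for d in deltas:
            d += step * d.grad.sign()
            d.grad = None
```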
However, since we still need to train the model on twice the number of samples, our proposal is still twice as expensive as standard training, but it is already 33% cheaper than the fastest (one-step) sample-wise adversarial training approach. More concretely, the generation step for one-step sample-wise adversarial training costs a single forward-backward pass, and the training step is twice as expensive as standard training; overall, one-step sample-wise adversarial training is 3x the cost of standard training, making our method 33% faster. This is because, in the case of one-step adversarial training, the gradient from the generation step cannot be reused for training: the patterns are randomly initialized, and the induced gradient is different from the clean training gradient.

§.§ Radius Schedule

In our experiments, we find that a radius schedule occasionally benefits performance. The radius dictates the extent to which an adversary is permitted to alter the image, as measured by the ℓ_∞ distance between the original and perturbed images. A larger radius permits greater perturbations, thus strengthening the adversary, while a smaller radius restricts the perturbation, rendering the adversary weaker. We propose this schedule as we observe that a more aggressive (larger) radius tends to promote faster convergence at the beginning of training, but the resulting images are very far out of distribution, which eventually hurts performance. By using a linearly decreasing radius schedule, we are sometimes able to get a considerable performance boost while maintaining fast convergence. Precisely, we calculate the radius at a given epoch as follows:

r(e) = r_start + (r_end-r_start)·max(e-e_start, 0)/(e_end-e_start),

where r_start and r_end are the starting and ending radii with r_start > r_end, e_start and e_end are the starting and ending epochs for the radius schedule, and r(e) is the radius at a given epoch e.

§ EXPERIMENTS

§.§ Experimental Set-up

In all of our experiments, we focus on the training setup in <cit.>, since it allows us to achieve a competitive 79.8% on Imagenet-1K with a ViT-S/16 and to study ViT in a computationally feasible setting. Following <cit.>, we use the AdamW optimizer with a batch size of 1024, a learning rate of 0.001 with a linear warm-up for the first 8 epochs, and a weight decay of 0.1. We train the model for a total of 300 epochs across all settings. For augmentation, we apply a simple inception crop and horizontal flip. For experiments with strong data augmentation, we apply RandAugment <cit.> of level 10 and MixUp <cit.> with probability 0.2. We use strong data augmentation in all of our experiments except for the first part of the experiments in Table <ref>. For Pyramid Adversarial training, following <cit.>, we use S=[32, 16, 1], M=[20, 10, 1], and a radius of 6/255. For the step size, we simply divide the radius by the number of steps used. For our proposed Universal Pyramid Adversarial training, we use the same M and S as Pyramid Adversarial training, but with a radius of 8/255. When a radius schedule is used, we linearly decrease the radius by 90% in increments starting from epoch 30.

In addition to Imagenet-1K, we also evaluate our models on five out-of-distribution datasets: Imagenet-C <cit.>, Imagenet-A <cit.>, Imagenet-Rendition, Imagenet-Sketch <cit.>, and Stylized Imagenet <cit.>. Using a diverse set of out-of-distribution datasets, we can more thoroughly evaluate the model's robustness to unexpected distribution shifts.
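For convenience, the hyperparameters above can be collected in one place. The snippet below transcribes them and implements the radius schedule r(e); the dictionary layout and helper name are our own, and the schedule endpoints r_end and e_end are assumptions chosen to match the 90% decrease from epoch 30 described above.

```python
# Hyperparameters transcribed from the set-up above; key names are illustrative.
config = dict(
    optimizer="AdamW", batch_size=1024, lr=1e-3, warmup_epochs=8,
    weight_decay=0.1, epochs=300,
    weak_augmentation=["inception_crop", "horizontal_flip"],
    strong_augmentation=dict(randaugment_level=10, mixup_prob=0.2),
    pyramid=dict(S=[32, 16, 1], M=[20, 10, 1], radius=6 / 255),
    universal_pyramid=dict(S=[32, 16, 1], M=[20, 10, 1], radius=8 / 255),
)

def radius(e, r_start=8 / 255, r_end=0.1 * 8 / 255, e_start=30, e_end=300):
    """Linearly decayed radius r(e); defaults assume a 90% decrease that
    starts at epoch 30 and completes at the final epoch."""
    return r_start + (r_end - r_start) * max(e - e_start, 0) / (e_end - e_start)
```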
§.§ Experimental Results

Overall, we find that Universal Pyramid Adversarial training effectively increases clean accuracy and out-of-distribution robustness, similarly to the original Pyramid Adversarial training, while being much more efficient.

Clean Accuracy. Here we first analyze the effectiveness of our proposed Universal Pyramid Adversarial training when applied to ViT with weak data augmentation. Pyramid Adversarial training, as expected, significantly increases the performance of the ViT, by up to 1.9% (Table <ref>) when a 4-step attack is used. As we increase the step count to 5, the benefit of Pyramid Adversarial training starts diminishing. On the other hand, our Universal Pyramid Adversarial training increases the performance even further. With the radius schedule, we obtain a performance increase of 1.97%, exceeding the performance benefit of Pyramid Adversarial training for all step counts. Without the radius schedule, we still obtain a competitive gain of 1.67%. In addition to better performance, our method is much more efficient than the original Pyramid Adversarial training. In Table <ref>, we also report the training time of each method on 8 Nvidia A100 GPUs in hours; our approach is 70% faster compared to 5-step Pyramid Adversarial training.

To further verify our proposal's effectiveness, we analyze our method's performance when coupled with strong data augmentations. The setting with strong data augmentation is more challenging, since all components of the training pipeline are heavily tuned. It is worth noting that our baseline ViT-S performs comparably with the baseline ViT-B/16 in <cit.>. Again, we continue to see the benefit of Universal Pyramid Adversarial training compared to standard training. In this more challenging setting, we see less benefit from both approaches, as expected. As we increase the number of steps for Pyramid Adversarial training, the accuracy first increases, reaching its maximum at 3 steps, and starts decreasing with 4 or more steps. On the other hand, our Universal Pyramid Adversarial training achieves a larger performance gain than all of the step counts tested while being more efficient.

Out-of-Distribution Robustness. We find that Universal Pyramid Adversarial training effectively increases models' out-of-distribution robustness and is comparable to 1-step and 2-step Pyramid Adversarial training. In Table <ref>, we see that our Universal Pyramid Adversarial training consistently increases models' performance on all five out-of-distribution datasets relative to the baseline. When compared with Pyramid Adversarial training, we find that Universal Pyramid Adversarial training with a radius of 12/255 consistently improves performance with respect to 1-step Pyramid Adversarial training and is comparable with 2-step Pyramid Adversarial training. Note that both 1-step and 2-step Pyramid Adversarial training are already 50% and 100% more expensive, respectively, than our proposed Universal Pyramid Adversarial training.
However, unlike the case of clean accuracy, Universal Pyramid Adversarial training still underperforms relative to the more costly Pyramid Adversarial training with 3 or more steps.

§.§ Ablations

In this section, we ablate several components of Universal Pyramid Adversarial training, including its sensitivity to the selected radius, the importance of the pyramid structure, and the benefit of incorporating the clean loss.

Sensitivity to Radius. Hyperparameter sensitivity is crucial for a method's practicality, and we find that our Universal Pyramid Adversarial training is consistent and stable with respect to the selected radius. In Table <ref>, we see that Universal Pyramid Adversarial training consistently increases the model's performance across a wide range of radii, from 2/255 to 12/255. This consistency is important because it allows us to benefit from the method without finely tuning the radius. We also find that the performance varies in a predictable upside-down U-shape: as we increase the radius, the performance steadily increases until radius 8/255, after which it decreases. The way that performance changes with respect to the radius gives practitioners a clear signal on whether to increase or decrease the radius and makes hyperparameter tuning easier.

Pyramid Structure. We also find that the pyramid structure is crucial for the performance gain of our proposed universal adversarial approach. We experimented with naively combining the clean loss with universal adversarial training as in <cit.>, as an additional regularizer. However, in Table <ref>, we see that without the pyramid structure, the model consistently performs worse after adding the adversarial samples.

Clean Loss. In addition to the pyramid structure, we find that incorporating the clean loss into Universal Pyramid Adversarial training is vital for obtaining the performance gain we see. In Table <ref>, we see that removing the clean loss results in a performance decrease of 1.76%, making the model much worse than the baseline.

§.§ Analysis

In this section, we try to understand the similarities between universal and sample-wise adversarial pyramid training, given their similar benefits and formulations. We analyze the attack strength, the noise pattern, and the loss landscape, and find that despite their similarities, they are quite different in many aspects. These findings point to the need to further understand the mechanism that both universal and sample-wise adversarial pyramid training use to increase model performance.

Attack Strength. Given that Universal Pyramid Adversarial training achieves a performance gain similar to Pyramid Adversarial training, one may expect that their attack strengths are similar. However, we find that the sample-wise adversary is significantly stronger than the universal adversary, even though the universal adversary uses a larger radius (8/255 vs. 6/255). In Figure <ref>, the multi-step adversary consistently achieves a much higher adversarial error rate throughout the training process. This observation shows that we do not necessarily need to make the attack very strong to benefit from adversarial training.

Qualitative Differences in Perturbation Pattern. In Figure <ref>, we visualize the universal and sample-wise adversarial patterns used during training.
We find the perturbations to have qualitatively different patterns despite their similar effectiveness in improving clean accuracy. At the coarser scales, the universal perturbation has more diverse colors than the sample-wise perturbation. The diversity may arise because universal perturbations need to transfer between images, and large brightness changes may be effective in removing some information from the samples. The sample-wise perturbations, on the other hand, may exploit color cues to move an image to the adversarial class by consistently changing the color of the image. At the pixel level, the sample-wise perturbation is much more salient than the universal perturbation: the sample-wise perturbations bear some resemblance to objects, and even though the universal perturbations have some salient patterns, they are less obvious than those of the sample-wise perturbations.

Loss Landscape. To understand how Pyramid Adversarial training and Universal Pyramid Adversarial training improve the performance of a model, we visualize the loss landscapes of models trained with both approaches to see whether they achieve the performance gain by implicitly inducing a flatter minimum <cit.>. Surprisingly, we find this not to be the case. Sample-wise Pyramid Adversarial training produces sharper minima compared to regular training and yet has better performance (see Figure <ref>). On the other hand, Universal Pyramid Adversarial training does not noticeably change the sharpness of the minimum and yet produces the greatest performance improvement. This finding suggests that both adversarial pyramid approaches rely on underlying mechanisms different from those of an optimizer such as SAM <cit.>, which explicitly searches for flatter minima.

§ CONCLUSION

In this paper, we propose Universal Pyramid Adversarial training to improve the clean performance and out-of-distribution robustness of ViT. It obtains an accuracy gain similar to that of sample-wise Pyramid Adversarial training while being up to 70% faster than the original approach. To the best of our knowledge, we are also the first to identify universal adversarial training as a possible technique for improving a model's clean accuracy. We hope that the proposed method will help make the adversarial technique more accessible to practitioners and future researchers.

§ FURTHER DETAILS FOR TABLE 2

§ VISUALIZATION OF ATTENTION OPERATIONS

§ RADIUS ABLATION WITH RESPECT TO IMAGENET V2
http://arxiv.org/abs/2312.16339v1
{ "authors": [ "Ping-yeh Chiang", "Yipin Zhou", "Omid Poursaeed", "Satya Narayan Shukla", "Ashish Shah", "Tom Goldstein", "Ser-Nam Lim" ], "categories": [ "cs.CV", "cs.LG" ], "primary_category": "cs.CV", "published": "20231226212646", "title": "Universal Pyramid Adversarial Training for Improved ViT Performance" }
RL-MPCA: A Reinforcement Learning Based Multi-Phase Computation Allocation Approach for Recommender Systems

Jiahong Zhou, Shunhui Mao, Guoliang Yang, Bo Tang, Qianlong Xie, Lebin Lin, Xingxing Wang, Dong Wang
Meituan, Beijing, China
{zhoujiahong02, maoshunhui, yangguoliang, tangbo17, xieqianlong, linlebin, wangxingxing04, wangdong07}@meituan.com
=============

Recommender systems aim to recommend the most suitable items to users from a large number of candidates. Their computation cost grows as the number of user requests and the complexity of services (or models) increase. Under the limitation of computation resources (CRs), how to make a trade-off between computation cost and business revenue becomes an essential question. The existing studies focus on dynamically allocating CRs in queue truncation scenarios (i.e., allocating the size of candidates), and formulate the CR allocation problem as an optimization problem with constraints. Some of them focus on single-phase CR allocation, and others focus on multi-phase CR allocation but introduce some assumptions about queue truncation scenarios. However, these assumptions do not hold in other scenarios, such as retrieval channel selection and prediction model selection. Moreover, existing studies ignore the state transition process of requests between different phases, limiting the effectiveness of their approaches. This paper proposes a Reinforcement Learning (RL) based Multi-Phase Computation Allocation approach (RL-MPCA), which aims to maximize the total business revenue under the limitation of CRs. RL-MPCA formulates the CR allocation problem as a Weakly Coupled MDP problem and solves it with an RL-based approach. Specifically, RL-MPCA designs a novel deep Q-network to adapt to various CR allocation scenarios, and calibrates the Q-value by introducing multiple adaptive Lagrange multipliers (adaptive-λ) to avoid violating the global CR constraints. Finally, experiments on an offline simulation environment and an online real-world recommender system validate the effectiveness of our approach.

CCS Concepts: Information systems → Recommender systems; Online advertising; Computational advertising.

§ INTRODUCTION

Recommender systems aim to recommend the most suitable items to users from a large number of candidates and expect to gain revenue from users' views, clicks, and purchases.
They are playing an increasingly important role in e-commerce platforms <cit.>. Industrial recommender systems are often designed as cascading architectures <cit.>. As shown in Figure <ref>, a typical recommender system consists of several stages, including retrieval, coarse-ranking, fine-ranking, etc. In these stages, online advertising systems (a kind of recommender system applied to online advertising) generally contain several computation-intensive services or models, including bid models <cit.>, prediction services <cit.>, etc. These services require a lot of computation resources[In general, computation resources include CPU/GPU computing capacity, memory capacity and response time, etc.] (CRs). Take the display advertising system of Meituan Waimai platform[<https://waimai.meituan.com/>, one of the largest e-commerce platforms in China.] (hereinafter referred to as the Meituan advertising system), for example. It consumes a lot of CRs in both the retrieval stage and the fine-ranking stage. As the number of user requests increases dramatically, the system's CR consumption rises accordingly. Due to the limitation of CRs, recommender systems need to make a trade-off between CR cost and business revenue when the traffic exceeds the system load. From the perspective of CR utilization efficiency, the goal of recommender systems is to maximize the total business revenue under the CR constraint.

To address the challenges of huge traffic and a large number of candidate items, real-world recommender systems usually use two types of strategies: static strategies and dynamic strategies <cit.>. Static strategies select suitable fixed rules through stress testing and practical experience to allocate CRs. They also provide fixed downgrades to cope with unexpected traffic. Static strategies require constant manual intervention to adapt to quick changes in traffic, and the fixed downgrades provided by static strategies are generally detrimental to business revenue and user experience. Dynamic strategies <cit.> dynamically allocate CRs for requests based on the value of requests. They prioritize allocating CRs to more valuable requests to achieve better revenue. Compared to static strategies, dynamic strategies are more efficient in utilizing CRs and require less manual intervention.

Recommender systems with multiple stages have various CR allocation scenarios. Based on the application scenario, we summarize the dynamic CR allocation methods into three types: Elastic Channel, Elastic Queue, and Elastic Model:

∙ Elastic Channel: dynamically adjust the retrieval strategy. A typical recommender system contains multiple retrieval channels. When CRs are insufficient, static strategies usually use fixed rules to drop some retrieval channels with high computation consumption. Different from static strategies, Elastic Channel dynamically adjusts the retrieval strategy for each request according to the online environment and the features of the request.

∙ Elastic Queue: dynamically adjust the length of the queue. Under the limitation of CRs, recommender systems cannot provide the prediction service and ranking service for all candidate items. With static strategies, before entering the prediction service and ranking service, the queue of items needs to be truncated to a globally fixed length.
In contrast, Elastic Queue dynamically adjusts the truncation length for each request according to the online environment and the features of the request.

∙ Elastic Model: dynamically select prediction models. Recommender systems often provide multiple prediction models with different computation consumption for one prediction service. A complex model achieves better revenue while taking more computation consumption. When CRs are insufficient, static strategies usually use fixed rules to downgrade high computation consumption models to low consumption models. In contrast, Elastic Model dynamically adjusts the prediction model for each request according to the online environment and the features of the request.

Recently, some dynamic strategies <cit.> have been proposed to achieve "personalized" CR allocation. DCAF <cit.> focuses on a single CR allocation phase. CRAS <cit.> focuses on multi-phase queue truncation problems, but it introduces some assumptions about the Elastic Queue scenario. For example, it uses the queue length to represent the computation cost when modeling the CR allocation problem, and assumes that the revenue varies logarithmically with the queue length. However, these assumptions do not hold in Elastic Channel and Elastic Model scenarios. Moreover, existing studies ignore the state transition process of requests between different phases, which limits the effectiveness of their approaches.

To address the limitations of existing studies, we propose RL-MPCA, which formulates the CR allocation problem as a Weakly Coupled Markov Decision Process (Weakly Coupled MDP) <cit.> problem and solves it with an RL-based approach. Compared to the Constrained Markov Decision Process (CMDP) <cit.>, the Weakly Coupled MDP allows global weakly coupled constraints across sub-MDPs. Thus, it can model the problem of CR allocation across requests better than CMDP <cit.>.

Our main contributions are summarized as follows:

* We propose an innovative CR allocation solution for recommender systems. To the best of our knowledge, this is the first work that formulates the CR allocation problem as a Weakly Coupled MDP problem and solves it with an RL-based approach.

* We design a novel multi-scenario compatible Q-network adapting to the various CR allocation scenarios, then calibrate the Q-value by introducing multiple adaptive Lagrange multipliers (adaptive-λ) to avoid violating the global CR constraints in training and serving.

* We validate the effectiveness of our proposed RL-MPCA[The code is publicly accessible at <https://anonymous.4open.science/r/RL-MPCA-130D>.] approach through offline experiments and online A/B tests. Offline experiment results show that RL-MPCA can achieve better revenue than baseline approaches while satisfying the CR constraints. Online A/B tests demonstrate the effectiveness of RL-MPCA in real-world industrial applications.

§ RELATED WORK

§.§ CR Allocation and RL for Recommender Systems

Recommender systems have been a popular topic in industry and academia in recent years. Most studies focus on improving the business revenue under the assumption of sufficient CRs <cit.>. Some of these studies focus on applying RL to recommender systems, including recommendations <cit.>, real-time bidding <cit.>, ad slots allocation <cit.>, etc. Some studies concern CR consumption and try to reduce it through model compression <cit.>. However, the above studies rarely focus on CR allocation. As an exception, DCAF <cit.> and CRAS <cit.> propose two "personalized" CR allocation approaches.
They formulate the Elastic Queue CR allocation problem as an optimization problem, and then solve it with linear programming algorithms. Different from the above studies, our proposed RL-MPCA uses an RL-based dynamic CR allocation approach to improve the effectiveness. §.§ RL and Weakly Coupled MDPs A Weakly Coupled MDP <cit.> comprises multiple sub-MDPs, which are independent except that global resource constraints weakly couple them <cit.>.Due to the linking constraints, the scale of the problem grows exponentially in the number of sub-problems <cit.>. Some studies try to relax Weakly Coupled MDP to CMDP <cit.> and then solve it <cit.>. The solutions to the CMDP problem include CPO <cit.>, RCPO <cit.>, IPO <cit.>, etc. They focus on the internal constraints of MDP. Recently, some studies focus on directly solving Weakly Coupled MDP problems.BCORLE(λ) <cit.> solves it with λ-generalization. BCRLSP <cit.> first trains the unconstrained reinforcement model and then imposes a global constraint on the model with linear programming methods in near real-time. Both BCORLE and BCRLSP guarantee that budget allocations strictly satisfy a single global constraint. CrossDQN <cit.> attempts to make the model avoid violating a single global constraint by introducing auxiliary batch-level loss. It uses a soft version of argmax to solve the problem of non-derivability of the native argmax function, which makes the model unable to strictly satisfy the global constraints during both offline training and online serving. Offline RL methods aim to learn effective policies from a fixed dataset without further interaction with the environment <cit.>.Off-policy methods (e.g., DQN <cit.>, DDQN <cit.>) can be directly applied to Offline RL while ignoring the out-of-distribution (OOD) problem. To solve the OOD problem, some offline RL methods are also proposed, including BCQ <cit.>, CQL <cit.>, COMBO <cit.>, etc.BCQ addresses the problem of extrapolation error via restricting the action space to force the agent towards behaving close to on-policy with respect to a subset of the given data.In addition, REM <cit.> enforces optimal Bellman consistency on random convex combinations of multiple Q-value estimates to enhance the generalization capability in the offline setting. In the experiments of this paper, we choose three popular methods (DDQN, BCQ, and REM) as base models. Essentially, our proposed RL-MPCA only modifies the Q-network, so it can also apply to other Q-learning methods.In addition, we can also consider the Weakly Coupled MDP problem as a black-box optimization problem, then solve it with evolutionary algorithms, such as Cross-Entropy Method (CEM) <cit.> and Natural Evolution Strategies (NES) <cit.>.§ PROBLEM FORMULATION §.§ Original Problem DescriptionThe recent work <cit.> formulated the single-phase CR allocation problem as a knapsack problem. Similarly, we formulate the multi-phase CR allocation problem as a knapsack problem. max_j_1, …, j_T∑_i=1^M ∑_j_1…∑_j_T( ∏_t=1^T x_i,j_t)Value_i,j_i,…,j_Ts.t. ∑_i=1^M ∑_j_1…∑_j_T(∏_t=1^T x_i,j_t)Cost_i,j_i,…,j_T≤ C∑_j_tx_i,j_t≤ 1, ∀ i,t x_i,j_t∈{0, 1}, ∀ i,j_t We suppose there are M online requests {i=1,…,M} in a given time slice, and the maximum computation budget of the system in this time slice is C. For each request i, T phases need to make computation decisions, and N_t actions can be taken for the specified phase t. We define j_1,…,j_T as a complete decision process of a request, and the decision action of phase t is j_t (j_t ∈{1,…,N_t}). 
Meanwhile, for request i, if the decision process is j_1,…,j_T, we use Value_i,j_i,…,j_T and Cost_i,j_i,…,j_T to represent the expected revenue and computation cost, respectively. x_i,j_t is the indicator that request i is assigned action j in phase t. In phase t, for request i, there is one and only one action j_t can be taken.InEq. (<ref>) above assumes that all phases share an overall computation budget. However, in a real-world online recommender system, the CRs of each phase are often relatively independent. For example, recommender systems often deploy prediction and retrieval services on different clusters for ease of maintenance, and their CRs cannot be shared.Considering that each phase has a separate CRs budget, we replace the global constraint (InEq. (<ref>)) with multiple constraints InEq. (<ref>), where Cost_i,j_t represents the computation cost when the decision of phase t is j_t for request i, and C_t is the computation budget of phase t. This paper focuses on the scenario of single-constraint CR allocation at each phase. If there is more than one constraint per phase, we can relax multiple constraints in the same phase and combine them into one. s.t. ∑_i=1^M ∑_j_t=1^N_t x_i,j_t Cost_i,j_t≤ C_t, ∀ t=1,…,T§.§ Weakly Coupled MDP Problem Formulation The decision results before phase t affect the input state of phase t. To better describe our approach, we take a three-phase CR allocation situation as an example in this paper. It contains one Elastic Channel phase, one Elastic Queue phase, and one Elastic Model phase, which is a typical case of recommender system CR allocation. As shown in Figure <ref>, for request i, the decision result of Elastic Channel phase determines the real retrieval queue, and it directly affects the input state of Elastic Queue phase.Similarly, the decision result of Elastic Queue phase affects the input state of Elastic Model phase. Therefore, to better adapt to the state transition process, in the multi-phase joint CR allocation, we introduce the “state” of the request.In this paper, we formulate the CR allocation problem as a Weakly Coupled MDP <cit.> problem. Formally, the Weakly Coupled MDP consists of a tuple of six elements (𝒮,𝒜,ℛ,𝒫,γ,𝒞), which are defined as follows:* State Space 𝒮. For phase t of request i, s^i_t ∈𝒮 consists of user information u, time slice information ts, context information c, ad items information {ad_1, …, ad_N_ad}, and the CR allocation decision results of phase t-1. * Action Space 𝒜. Our CR allocation situation has three phases with different action spaces. The actions of the Elastic Channel phase, the Elastic Queue phase, and the Elastic Model phase are the retrieval strategy number, the truncation length, and the prediction model number, respectively.* Reward ℛ. For request i, after the agent takes action for the final phase, the system returns the final sorted items to the user. The user browses the items and gives feedback, including order price price_o and advertising fee fee_ad of request i. The reward r(s_t, a_t) is the weighted sum of them:r(s_t, a_t) = k_1 * fee_ad + k_2 * price_o * Transition Probability 𝒫. P(s_t+1|s_t, a_t) is the state transition probability from phase t to phase t+1 after taking action a_t. For each request i, trajectory (τ_i) is its whole state transition process in the recommender system.* Discount Factor γ. γ∈ [0,1] is the discount factor for future rewards. * Global Constraint 𝒞. Each phase has its global constraint that couples sub-MDPs. InEq. (<ref>) defines these constraints. 
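To make the tuple above concrete, the following minimal Python sketch shows how a single phase transition and its reward could be represented. The weights K1, K2 and all field names are illustrative placeholders; the paper does not disclose the values used in production.

from dataclasses import dataclass

K1, K2 = 1.0, 1.0  # illustrative reward weights; the real k_1, k_2 are not disclosed

@dataclass
class Transition:
    """One step of a request's trajectory through the allocation phases."""
    state: tuple       # phase-t features: user, time slice, context, ads, prior decisions
    action: int        # decision taken in phase t
    cost: float        # computation cost Cost(s_t, a_t) of this decision
    next_state: tuple  # phase-(t+1) features after the decision takes effect
    reward: float      # r(s_t, a_t); non-zero only after the final phase

def reward(fee_ad: float, price_o: float) -> float:
    """Weighted sum of advertising fee and order price, as defined above."""
    return K1 * fee_ad + K2 * price_o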
§ METHODOLOGY

Deep Q-Network (DQN) <cit.> and its improved versions <cit.> are very popular in solving sub-MDPs with discrete actions. The Q-network Q_θ(s, a) is the essential structure of these models. To adapt to various CR allocation scenarios, we design a novel deep Q-network with multiple separate networks (as Figure <ref> shows). In particular, the action space of each phase is defined as follows:

∙ In the Elastic Channel phase, the candidate action space is the retrieval strategy numbers. For a recommender system with N_r retrieval channels, the number of retrieval strategies is N_c = 2^N_r and the candidate action space is {1,…,N_c}. For example, for three candidate retrieval channels {A, B, C}, retrieval strategy (0,1,1) indicates that channel A is not retrieved, and channels B and C are retrieved. We interpret the indicator vector as a binary number, so the strategy number of the strategy (0,1,1) is the integer 3 (a small code sketch of this encoding is given below).

∙ In the Elastic Queue phase, the action space is the truncation length. To reduce the candidate action space, we can put the candidate actions into buckets, e.g., set every ten adjacent truncation lengths as one bucket. Then the candidate action space is {10, 20,…}.

∙ In the Elastic Model phase, the candidate action space is the prediction model numbers {1,…,N_m}.

Each phase of CR allocation has its own action space. As Figure <ref> shows, we model each phase using separate networks to adapt to the different action spaces. In the last layer of the Q-network, we use the selection unit to select the q-logits of a specific phase based on the phase number t.

§.§ Constraint Layer

For any phase t, suppose that we have the optimal policy π_t^* that satisfies the constraints of all phases except t. Then the decision problem for the current phase t can be modeled separately as the following single-phase CR allocation problem with a single constraint:

max_{x_i,a_t} ∑_i=1^M ∑_a_t=1^N_t x_i,a_t Value_i,a_t
s.t. ∑_i=1^M ∑_a_t=1^N_t x_i,a_t Cost_i,a_t ≤ C_t
∑_a_t=1^N_t x_i,a_t ≤ 1, ∀ i
x_i,a_t ∈ {0, 1}, ∀ i,a_t

By constructing and solving the Lagrange dual problem, we obtain the optimal solution to this problem. The proof is provided in Appendix <ref>. For request i, the optimal action of phase t is a_t^*:

a_t^* = argmax_a_t (Value_i,a_t - λ_t Cost_i,a_t)

where λ_t ≥ 0 is the Lagrange multiplier. Further, we use Q^π_t^*(s_t,a_t) to represent the expected cumulative reward for taking action a_t in state s_t when subsequent actions are decided following policy π_t^*.

Q^π_t^*(s_t,a_t) = 𝔼_τ∼π_t^* [R_t|s_t,a_t]
R_t = ∑_i=t+1^∞ γ^i r(s_i, a_i, s_i+1)

For phase t of request i, we have:

Q^π_t^*(s_t,a_t) = Value_i,a_t
Cost(s_t,a_t) = Cost_i,a_t

where Cost(s_t,a_t) is the computation cost for taking a_t in s_t, determined by (s_t, a_t), and independent of both prior and subsequent strategies.
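The strategy-number encoding mentioned in the Elastic Channel bullet above can be sketched in a few lines of Python; the channel names and tuple layout are illustrative:

def strategy_number(indicator):
    """Map a retrieval-channel indicator vector to its strategy number by
    reading it as a binary number: for channels (A, B, C), (0, 1, 1)
    means "skip A, retrieve from B and C" and encodes to the integer 3."""
    return int("".join(str(bit) for bit in indicator), 2)

assert strategy_number((0, 1, 1)) == 3
assert strategy_number((1, 0, 0)) == 4  # only channel A retrieved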
Thus, for phase t, the optimal action for request i in state s_t is a_t^*:

a_t^* = argmax_a_t (Q^π_t^*(s_t,a_t) - λ_t Cost(s_t, a_t))

Compared to the action selection formula in original DQN <cit.> networks, we only need to add a layer (the Constraint Layer) to obtain the optimal action that satisfies the CR constraints.

Q_λ_t^π_t^*(s_t,a_t) = Q^π_t^*(s_t,a_t) - λ_t Cost(s_t, a_t)

Then the optimal action is:

a_t^* = argmax_a_t Q_λ_t^π_t^*(s_t,a_t)

§.§.§ Adaptive-λ in Offline Model Training

As mentioned in DCAF <cit.> and CRAS <cit.>, Assumptions (<ref>) and (<ref>) usually hold in general recommender systems.

Value_i,a_t is monotonically increasing with Cost_i,a_t.

Value_i,a_t/Cost_i,a_t is monotonically decreasing with Cost_i,a_t.

From our observations, they also hold for most requests in Meituan advertising system. However, it is worth noting that our assumptions differ from those of CRAS. CRAS uses the queue length to represent the computation cost, while we make no assumptions about the relationship between computation cost and queue length (or other actions).

Given a fixed {Value_i,a_t}_i=1^M and a variable λ_t, the optimal action of phase t is a_t^* (Eq. <ref>). For each request i, its optimal action a_t^* varies with λ_t; consequently, the total computation cost Ĉ_t(λ_t) (Eq. <ref>) and the total revenue R̂_t(λ_t) (Eq. <ref>) of phase t vary with λ_t.

Ĉ_t(λ_t) = ∑_i=1^M Cost_i,a_t^*
R̂_t(λ_t) = ∑_i=1^M Value_i,a_t^*

We can obtain the optimal λ_t, which satisfies the CR constraint and maximizes R̂_t(λ_t), by updating λ_t iteratively based on Ĉ_t(λ_t). Suppose Assumptions (<ref>) and (<ref>) hold; for any λ_t^k, let λ_t^k+1 be:

λ_t^k+1 ← λ_t^k + α( Ĉ_t(λ_t^k)/C_t - 1 )

where C_t is the computation budget of phase t and α ∈ ℝ^+ is the learning rate of λ. Then, the following conclusions hold:

* Conclusion 1. Ĉ_t(λ_t^k+1) ≤ Ĉ_t(λ_t^k) will hold if Ĉ_t(λ_t^k) > C_t.
* Conclusion 2. R̂_t(λ_t^k+1) ≥ R̂_t(λ_t^k) will hold if Ĉ_t(λ_t^k) < C_t.
* Conclusion 3. λ_t^k+1 = λ_t^k will hold if Ĉ_t(λ_t^k) = C_t.

Suppose Assumptions (<ref>) and (<ref>) hold; then Ĉ_t(λ_t) is monotonically decreasing with λ_t (see more details in <cit.>). Further, under Assumption <ref>, R̂_t(λ_t)/Ĉ_t(λ_t) is monotonically decreasing with λ_t, and under Assumption <ref>, R̂_t(λ_t) is monotonically decreasing with λ_t.

* When Ĉ_t(λ_t^k) > C_t, we have λ_t^k+1 > λ_t^k, so Ĉ_t(λ_t^k+1) ≤ Ĉ_t(λ_t^k) holds.
* When Ĉ_t(λ_t^k) < C_t, we have λ_t^k+1 < λ_t^k, so R̂_t(λ_t^k+1) ≥ R̂_t(λ_t^k) holds.
* When Ĉ_t(λ_t^k) = C_t, λ_t^k+1 = λ_t^k holds.

In summary, Lemma (<ref>) specifies that it is feasible to update λ with formula (<ref>). First, conclusion 1 of Lemma (<ref>) indicates that when the total computation cost exceeds the computation budget, updating λ_t with formula (<ref>) will yield a lower total computation cost. This helps to avoid violating the constraint. Furthermore, conclusion 2 of Lemma (<ref>) indicates that when the total computation cost is less than the computation budget, updating λ_t with formula (<ref>) will yield a better total revenue. Finally, conclusion 3 of Lemma (<ref>) indicates that when the total computation cost equals the computation budget, updating λ_t with formula (<ref>) leaves λ_t unchanged. Updating λ_t with formula (<ref>) until convergence, we obtain the optimal λ_t^*, where Ĉ_t(λ_t^*) = C_t.

As described in Algorithm <ref>, we dynamically update λ in the offline training phase (a compact sketch of the Q-network, the calibrated action selection, and this λ update is given below).
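The following sketch collects the three ingredients introduced so far — the multi-head Q-network of the figure above, the λ-calibrated action selection, and the adaptive-λ update — in PyTorch. Layer sizes and interfaces are illustrative assumptions, not the production implementation.

import torch
import torch.nn as nn

class MultiPhaseQNetwork(nn.Module):
    """Shared encoder with one Q-head per phase; a selection unit picks the
    q-logits of the phase given by `phase`. Layer sizes are illustrative."""

    def __init__(self, state_dim, action_dims, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(hidden, n) for n in action_dims])

    def forward(self, state, phase):
        return self.heads[phase](self.encoder(state))  # q-logits of phase t

def select_action(q_values, costs, lam_t):
    """Constraint layer: a_t* = argmax_a (Q(s_t, a) - lambda_t * Cost(s_t, a))."""
    return torch.argmax(q_values - lam_t * costs, dim=-1)

def update_lambda(lam_t, batch_cost, budget, alpha=0.1):
    """Adaptive-lambda step: lambda grows when the mini-batch overspends its
    budget C_t(D_i) and shrinks (clipped at zero) when it underspends."""
    return max(0.0, lam_t + alpha * (batch_cost / budget - 1.0))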
At iteration step i, we take a mini-batch of samples 𝒟_i (a larger batch is generally taken here, e.g., 8192 samples per batch), and update λ = (λ_1,…,λ_T) K times. At the k-th update, for each s_t in 𝒟_i, we take action a_t^k with:

a_t^k = argmax_a_t (Q_θ(s_t, a_t) - λ_t^i,k Cost(s_t, a_t))

and for each phase t ∈ {1,…,T} at the k-th update, we update λ_t^i,k+1 with:

λ_t^i,k+1 ← max{0, λ_t^i,k + α(∑_(s_t,a_t^k) ∈ 𝒟_i Cost(s_t,a_t^k)/C_t(𝒟_i) - 1) }

where α ∈ ℝ^+ is the learning rate of adaptive-λ, and C_t(𝒟_i) is the maximum CR budget that the system can allocate for dataset 𝒟_i at phase t. C_t(𝒟_i) can be calculated through an offline fixed rule, which is designed by stress testing and practical experience.

Algorithm <ref> describes the training process of DDQN-based RL-MPCA. Essentially, the Constraint Layer module of RL-MPCA only modifies the Q-network, so it can also be applied to other Q-learning methods. Take REM <cit.>, a popular offline RL method with a Q-network, for example. The only difference between Algorithm <ref> and the REM-based RL-MPCA approach is the Q-network Q_θ. Specifically, for a REM model with H heads, we replace Q_θ with Q_θ^REM = ∑_h β_h Q_θ^h, where β = (β_1,…,β_H) is a categorical distribution, which is randomly drawn for each mini-batch (see more details in <cit.>).

§.§.§ λ Correction in Offline Model Evaluation

After the offline model is trained, the λ-calibrated Q-value guarantees that the agent's decisions satisfy the CR constraints on all training datasets. However, when applying λ to the real online system, we still face the following problems: (1) The online and offline data distributions are inconsistent because the behavioral policy used to collect the offline data differs from the target policy, which means that λ may not satisfy the CR constraints in the online system. (2) The traffic of the recommender system varies over time, and a single λ cannot satisfy the CR constraints on each time slice.

To solve problem (1), we build an offline simulation system, which interacts with the agent and gives feedback on the computation cost and revenue in imitation of the real online environment. Through evaluation in the simulation system, we select the optimal λ^* that satisfies the CR constraints in order of the decision phases. When both Assumptions (<ref>) and (<ref>) hold, we can find the optimal λ^* through bisection search in each phase (please refer to <cit.> for the detailed proof; see also the bisection sketch below). Otherwise, we can find the optimal λ^* through grid search <cit.>.

To solve problem (2), we select the optimal λ^* for each time slice based on the offline simulation system. We observe that the traffic of the recommender system generally varies periodically, except for special holidays. Taking Meituan advertising system as an example, its traffic variation cycle is one day. Therefore, we can divide a day into multiple time slices with similar traffic distribution within the same time slice. Considering that training a separate model for each time slice is expensive and not easy to maintain, we first train a uniform model for all time slices, and then solve for a separate λ for each time slice. Alternatively, for systems with non-periodic traffic, a possible solution is to use the traffic from the previous time slice to represent the current time slice. Specifically, we can update λ in near real-time, thus allowing λ to automatically adapt to irregular traffic changes.

§.§ System Architecture

We illustrate the overview of the architecture in Figure <ref>.
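Before turning to the serving architecture, here is a minimal sketch of the bisection-based λ correction described above. The simulated_cost handle stands in for a query to the offline simulation system and is hypothetical; the sketch relies on Ĉ_t(λ) being monotonically decreasing in λ under Assumptions 1-2.

def correct_lambda(simulated_cost, budget, lo=0.0, hi=100.0, tol=1e-3):
    """Bisection search for the lambda_t at which the simulated total cost
    of phase t meets the budget C_t."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if simulated_cost(mid) > budget:
            lo = mid   # still overspending: lambda must increase
        else:
            hi = mid   # within budget: try a smaller lambda
    return hi

# Toy check: with cost(lam) = 100 / (1 + lam) and a budget of 50,
# the search converges to lam* close to 1.
lam_star = correct_lambda(lambda lam: 100.0 / (1.0 + lam), budget=50.0)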
In each phase, for instance, in the Elastic Queue phase, we need to allocate and control CRs through Computation Allocation System and Computation Control System. Computation Allocation System aims to maximize the total business revenue under the CR constraints.Computation Control System aims to guarantee system stability by means of feedback control. Dynamic allocation of CRs poses a significant challenge in guaranteeing the stability of recommender systems. We use Flink <cit.> to collect real-time system load information, such as failure rate, CPU utilization, etc., and then use PID <cit.> control algorithms to achieve feedback control. When the system load exceeds the target value of the PID, the PID will control the consumption of CRs. For instance, in the Elastic Queue scenario, when the system's failure rate rises above the target value, the PID Controller will reduce the upper bound of the queue for all requests. The result of online A/B tests shows that the Computation Control System reduces the degradation rate by 0.1 percentage point, provides automatic and timely responses to unexpected traffic, and guarantees the stability of recommender systems. § EXPERIMENTS Our experiments aim to study four questions:(1) Does adaptive-λ of constraint layer help to avoid violating the global CR constraints in the training process? (2) After λ correction, does the model with constraint layer satisfy the global CR constraints and improve business revenue? (3) How does RL-MPCA approach perform in comparison to other state-of-the-art CR allocation approaches and RL algorithms? (4) How do different hyper-parameter settings affect the performance of RL-MPCA?To answer these questions, we conduct various experiments in a three-phase joint modeling CR allocation situation, which contains one Elastic Channel phase, one Elastic Queue phase, and one Elastic Model phase. §.§ Offline ExperimentsTo demonstrate the performance of the proposed RL-MPCA, we evaluate and compare various related approaches for CR allocation on a real-world dataset. In offline experiments, we use the simulation system to evaluate these approaches.§.§.§ DatasetWe run random exploratory policies and superior policies (see more details about behavioral policies in Appendix <ref>) to collect the dataset on Meituan advertising system during July and August 2022. Finally, we sample 568,842,204 requests from 101,368,290 users as the dataset, which includes user profile features, context features, time slice features, etc. §.§.§ Offline Simulation System It is dangerous to deploy a model to an online system when its effect is unknown, which may significantly damage the online revenue of the recommender system and cause the online service to crash. To solve this problem, we build an offline simulation system, which can imitate the online real-world environment to interact with the model (agent) and give feedback on the computation consumption and revenue.More details about the offline simulation system are described in Appendix <ref>.§.§.§ Evaluation MetricsWe use computation (cost) and revenue (return) to evaluate the performance of approaches in offline experiments. The computation cost of each phase is defined as the sum of the CR consumption of all requests in that phase (see more details in Appendix <ref>). To facilitate analysis, we define total computation cost as (cost = ∑_t (Ĉ_t/C_t-1)). 
return is defined as the total revenue of all requests, specifically, return = ∑ fee_ad + ∑ price_o. With reference to D4RL <cit.>, to facilitate the analysis of the effectiveness of different approaches while ignoring the impact of our application scenarios, we normalize scores by:

normalized_score = 100 × (score − random_score) / (expert_score − random_score)

§.§.§ Hyper-parameters Settings

RL-MPCA contains several hyper-parameters. We employed grid search <cit.> to determine the hyper-parameter values. Appendix <ref> provides the hyper-parameters of the experiments.

§.§.§ Baselines

We compare RL-MPCA with several baselines. Our situation has only one Elastic Queue phase (the other two phases are Elastic Channel and Elastic Model), and in a situation containing only one Elastic Queue phase, the modeling methods of DCAF and CRAS are consistent. Therefore, in the later experiments, we only show the details of DCAF.

* Static. The Static approach allocates CRs with global fixed rules, including fixed retrieval channels, a fixed truncation length of candidate items, and fixed prediction models.

* DCAF. DCAF <cit.> formulates the CR allocation problem as an optimization problem with constraints, then solves the optimization problem with linear programming algorithms. In online A/B tests, we use fixed rules in the Elastic Channel phase and the Elastic Model phase, and DCAF is deployed in the Elastic Queue phase.

* ES-MPCA. Before RL-MPCA, we designed an evolutionary-strategies-based multi-phase computation allocation approach (ES-MPCA, see more details in Appendix <ref>), which has been deployed on Meituan advertising system.

* Ex-RCPO. RCPO <cit.> solves a CMDP problem by introducing penalized reward functions (i.e., it calibrates rewards with a Lagrange multiplier λ). We replace the adaptive-λ of RL-MPCA with the penalized reward functions when training the model, and name it Ex-RCPO.

* Ex-BCORLE(λ). BCORLE <cit.> solves a single-constraint budget allocation problem with λ-generalization. It cannot be directly applied to the multi-constraint CR allocation. We extend BCORLE from single-λ to multi-λ, and name it Ex-BCORLE.

* Ex-BCRLSP. BCRLSP <cit.> solves the single-constraint budget allocation problem by calibrating the Q-value in near real-time. It cannot be directly applied to the multi-constraint CR allocation. We extend BCRLSP from single-λ to multi-λ, and name it Ex-BCRLSP.

* Ex-CrossDQN. CrossDQN <cit.> solves a single-constraint ads allocation problem by introducing an auxiliary batch-level loss when training the model. We replace the adaptive-λ of RL-MPCA with the auxiliary batch-level loss when training the model, and name it Ex-CrossDQN.

§.§.§ Offline Experiment Results

To answer question (1) and question (2), we train multiple models: DDQN, BCQ, REM, and their improved versions with adaptive-λ, then use the simulation system to evaluate them. As shown in Figure <ref>, during the training process, introducing adaptive-λ keeps the CRs of the model close to the target constraints for each phase (Figure <ref>.a.3 and Figure <ref>.b). However, the CRs of the models swing around the target constraints due to the distribution mismatch between the mini-batches sampled during training and the evaluation dataset (see more details in Section <ref>). After the λ correction, for each phase, the CRs of the models strictly satisfy the constraints except for BCQ+λ (BCQ with adaptive-λ), and Figure <ref>.a.4 shows the total CRs of all phases.
Adaptive-λ allows the model to learn the Q-value in the regime where the CRs conform to the constraints (or lie close to them) at each phase. Thus, we can observe that the effectiveness of all three models improves after introducing adaptive-λ (Figure <ref>.a.2). An interesting phenomenon is that after introducing adaptive-λ, the CRs of BCQ cannot be stably calibrated to conform to the constraint. A potential reason is that the BCQ model contains an imitation component, which causes the BCQ model to imitate the behavioral strategy. As a result, the value of λ changes in an unknown direction during the training process, and eventually the Q-value cannot be calibrated to satisfy the target constraints.

To answer question (3), we compare RL-MPCA to the state-of-the-art CR allocation approach DCAF and other related approaches. The results are shown in Table <ref>. The experiment results show that RL-MPCA outperforms the other approaches in return when the CR constraints are satisfied.

§.§.§ Hyper-parameter Analysis

To answer question (4), we compare the effect of two critical parameters, α and K, on the performance of RL-MPCA. α is the learning rate of adaptive-λ. K is the number of λ updates in one global step.

Hyper-parameter α. Like the learning rate of the model's ordinary parameters, the learning rate α of adaptive-λ can be neither too large nor too small. Too small a learning rate will lead to slow learning, while too large a learning rate will cause λ to swing around the optimal value. Table <ref> shows the model performance at different learning rates, and we finally choose 0.1 as the parameter value.

Hyper-parameter K. A larger K means more updates to λ per update of the model parameters during training, which makes the constraints easier to satisfy. As seen in Table <ref>, the return increases with K. However, a larger K also means more time consumed for training. To trade off revenue and time, we choose 10 as the parameter value.

§.§ Online A/B test Results

We also evaluated the RL-MPCA approach for two weeks in the online environment. In online A/B tests, we compare our proposed RL-MPCA approach with several previous strategies deployed on Meituan advertising system. Table <ref> lists the performance of several primary online metrics, including gross merchandise volume per mille (GPM, i.e., GPM = avg(price_o) * 1000, where avg(price_o) is the average of price_o), cost per mille (CPM, i.e., CPM = avg(fee_ad) * 1000, where avg(fee_ad) is the average of fee_ad), click-through rate (CTR), and post-click conversion rate (CVR). RL-MPCA outperforms all other approaches, and ES-MPCA and DCAF take second and third place, respectively.

§ CONCLUSION AND FUTURE WORK

This paper proposes a Reinforcement Learning based Multi-Phase Computation Allocation approach, RL-MPCA, for recommender systems. RL-MPCA creatively formulates the computation resource (CR) allocation problem as a Weakly Coupled MDP problem and solves it with an RL-based approach. Besides, RL-MPCA designs a novel multi-scenario compatible Q-network adapting to various CR allocation scenarios, and calibrates the Q-value by introducing multiple adaptive Lagrange multipliers (adaptive-λ) to avoid violating the global CR constraints when maximizing the business revenue.
Both offline experiments and online A/B tests validate the effectiveness of our proposed RL-MPCA approach. In future work, we plan to explore more general CR allocation approaches and more CR allocation application scenarios. Moreover, we plan to explore a new simulation scheme to capture the stochastic variation of response time and system load and then jointly model the response time constraint and the CR constraint to improve the system's availability.

§ SIMULATION SYSTEM

The offline simulation system contains two modules: the request simulation module and the revenue estimation module. For a given request, the request simulation module is responsible for interacting with an agent and generating interaction results. The revenue estimation module is a deep neural network model based on supervised learning, which evaluates the simulation results and predicts the user views, clicks, and purchases for each request. Although the offline simulation system requires a lot of time and computation resources, the prediction results of the revenue estimation module are relatively accurate because the request simulation module can generate detailed information about the requests. Finally, after calibrating the output of the revenue estimation model, our offline simulation system can achieve fairly confident revenue estimation results.

As Figure <ref> shows, for each request i, the interaction of the simulation system and the agent involves multiple steps.

* Step 1. The simulation system constructs and feeds the initial state s_1^i to the agent.
* Step 2. The agent takes Elastic Channel action a_1^i based on state s_1^i.
* Step 3. The simulation system retrieves the ads with action a_1^i, and feeds state s_2^i (including the retrieval ad list) to the agent.
* Step 4. The agent takes Elastic Queue action a_2^i based on state s_2^i.
* Step 5. The simulation system simulates the truncation operation with the truncation length corresponding to action a_2^i, and feeds state s_3^i (including the truncated ad list) to the agent.
* Step 6. The agent takes Elastic Model action a_3^i based on state s_3^i.
* Step 7. The simulation system provides the prediction service for ads with the prediction model corresponding to action a_3^i, and outputs state s_4^i (including the truncated ad list and its prediction scores).
* Step 8. The simulation system takes state s_4^i as input features, and predicts the final revenue (i.e., user views, clicks, and purchases) with a supervised-learning-based deep neural network model (see the architecture in Figure <ref>).

§ PROOF

To solve the single-phase computation resource (CR) allocation problem in Section <ref>, we introduce a Lagrange multiplier λ_t and construct the dual problem:

min_{λ_t} max_{x_i,a_t} ∑_i=1^M ∑_a_t=1^N_t x_i,a_t Value_i,a_t - λ_t (∑_i=1^M ∑_a_t=1^N_t x_i,a_t Cost_i,a_t - C_t)
s.t. ∑_a_t=1^N_t x_i,a_t ≤ 1, ∀ i,t
x_i,a_t ∈ {0, 1}, ∀ i,a_t
λ_t ≥ 0

In phase t, for request i, there is one and only one action a_t that can be taken. Then the dual problem above can be further transformed into:

min_{λ_t} ∑_i=1^M max_a_t∈{1,…,N_t} {Value_i,a_t - λ_t Cost_i,a_t} + λ_t C_t
s.t. λ_t ≥ 0

Thus, we have the global optimal solution to the original problem, x_i,a_t^* = 1, when:

a_t^* = argmax_a_t (Value_i,a_t - λ_t Cost_i,a_t)

Note that a similar proof has been provided in <cit.>, but the constraint definition of our optimization problem differs from theirs.

§ COMPUTATION COST ESTIMATION

Essentially, CRs include computing resources, memory resources, network transmission resources, etc.
In real industrial applications, computation cost estimation aims to find a metric that is easy to calculate and can be directly mapped to the amount of computation consumed. CRAS uses queue length as the computation cost metric, which is simple and feasible in Elastic Queue scenarios, and we have verified this in Meituan advertising system.However, queue length does not apply to Elastic Channel and Elastic Model scenarios. Specifically, in Elastic Channel, the primary metric affecting the CR consumption of the retrieval service is the number of requests entering the service. In Elastic Model, the primary metrics affecting the resource consumption of the prediction service are the number of requests and the total number of ads entering the model. During the model training, we use the number of requests entering the retrieval channel and the number of requests entering the complex prediction model as the computation cost evaluation metrics to facilitate the evaluation of system computation.Because the Elastic Queue guarantees the number of ads entering the prediction model, it is reasonable to ignore the number of ads in Elastic Model when training the model.In the offline experiments and online A/B tests, we also ensured that the number of ads entering the complex prediction model did not exceed the target value.§ HYPER-PARAMETERS Table <ref> lists the hyper-parameters of experiments. § ES-MPCASame as RL-MPCA (see more details in Section <ref>), ES-MPCA also formulates the multi-phase CR allocation problem as a Weakly Coupled MDP problem. The difference is that ES-MPCA solves it with an evolutionary strategies based (ES-based) approach. To solve the Weakly Coupled MDP problem, we consider it as a black-box optimization problem, aiming to maximize the total business revenue under the CR constraints.In this paper, we use Cross-Entropy Method (CEM) <cit.> to solve the black-box optimization problem. ES-MPCA designs the actions as:channelQuota = f_c(θ_cx_c)queueLen =f_q(θ_qx_q) modelQuota =f_m(θ_mx_m)where channelQuota, queueLen and modelQuota are retrieval strategy number, truncation length and prediction model number, respectively. (θ_c, θ_q, θ_m) and (x_c, x_q, x_m) are parameters and features, respectively.Algorithm <ref> describes the training process of CEM-based ES-MPCA. By imposing an extremely large penalty on the parameters that violate the constraint (λ_t is generally an extremely large value, e.g., for each phase t, λ_t = 10^8 in our experiments), ES-MPCA always guarantees that the final output optimal parameters θ^* are those that satisfy the CR constraints. Experiment results show that the optimal parameters θ^* outputted by ES-MPCA always exactly satisfy the CR constraints (i.e., for each phase t, ∑ Cost_t(θ^*) = C_t holds), which is consistent with the assumptions and conclusions in Section <ref>. § BEHAVIORAL POLICIES In this section, we provide a detailed introduction to behavioral policies.Random exploratory policies randomly make decisions in each phase to explore the revenues under different actions, including randomly selecting retrieval channels, truncation lengths, and prediction models.Superior policies include ES-based policies and RL-based policies. We train them on a random dataset collected by random exploratory policies. 
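As a complement to the ES-MPCA description above, a minimal Cross-Entropy Method loop of the kind it relies on might look as follows; the fitness function, dimensions, and default settings are illustrative stand-ins, with constraint violations penalized by an extremely large λ_t as in Appendix E.

import numpy as np

def cem(fitness, dim, iters=50, pop=100, elite_frac=0.1, seed=0):
    """Cross-Entropy Method: sample candidate parameters from a Gaussian,
    keep the elite fraction, refit the Gaussian, repeat."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim)
    n_elite = max(1, int(pop * elite_frac))
    for _ in range(iters):
        theta = rng.normal(mu, sigma, size=(pop, dim))
        scores = np.array([fitness(t) for t in theta])
        elite = theta[np.argsort(scores)[-n_elite:]]  # highest-fitness candidates
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mu

def toy_fitness(theta):
    """Revenue minus an extremely large penalty when the cost budget is violated."""
    revenue = -np.sum((theta - 1.0) ** 2)
    cost, budget = np.sum(np.abs(theta)), 10.0
    return revenue - 1e8 * max(0.0, cost - budget)

theta_star = cem(toy_fitness, dim=4)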
More details of ES-based policies are provided in Appendix <ref>.§ ONLINE SERVING After model training (Algorithm <ref>) and λ-correction (see more details in Section <ref>), we obtain the trained network Q_θ and trained constraint parameter λ=(λ_1,…,λ_T). Algorithm <ref> shows the process of online serving for a given request.
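A rough sketch of this per-request serving loop, assuming a Q-network and λ values of the kind obtained above; build_state and apply_decision are hypothetical stubs for the feature-assembly and decision-execution components of the real system.

import torch

def build_state(request, phase):
    """Stub: assemble the feature vector of the given phase for a request."""
    return torch.zeros(1, 8)

def apply_decision(request, state, phase, action):
    """Stub: execute the decision (retrieve / truncate / pick model) and
    return the resulting state of the next phase."""
    return state

def serve_request(request, q_net, lambdas, phase_costs, num_phases=3):
    """Per-request serving: in each phase pick the lambda-calibrated argmax
    action, then let the decision shape the next phase's state."""
    state, actions = build_state(request, phase=0), []
    for t in range(num_phases):
        q = q_net(state, t)                                   # q-logits of phase t
        a = int(torch.argmax(q - lambdas[t] * phase_costs[t]))
        actions.append(a)
        state = apply_decision(request, state, phase=t, action=a)
    return actions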
http://arxiv.org/abs/2401.01369v1
{ "authors": [ "Jiahong Zhou", "Shunhui Mao", "Guoliang Yang", "Bo Tang", "Qianlong Xie", "Lebin Lin", "Xingxing Wang", "Dong Wang" ], "categories": [ "cs.IR", "cs.AI", "cs.LG" ], "primary_category": "cs.IR", "published": "20231227124019", "title": "RL-MPCA: A Reinforcement Learning Based Multi-Phase Computation Allocation Approach for Recommender Systems" }
Far-field Petahertz Sampling of Plasmonic Fields

Kai-Fu Wong^1,2 (kai-fu.wong@cfel.de; these authors contributed equally to this work), Weiwei Li^3,4 (weiwei.li@physik.uni-muenchen.de; these authors contributed equally to this work), Zilong Wang^3,4 (zilong.wang@physik.uni-muenchen.de), Vincent Wanie^2 (vincent.wanie@desy.de), Erik Månsson^2 (erik.maansson@desy.de), Dominik Hoeing^1 (dominik.hoeing@uni-hamburg.de), Johannes Blöchl^3,4 (johannes.bloechl@physik.uni-muenchen.de), Thomas Nubbemeyer^3,4 (thomas.nubbemeyer@physik.uni-muenchen.de), Abdallah M. Azzeer^5 (azzeer@ksu.edu.sa), Andrea Trabattoni^2,6 (andrea.trabattoni@desy.de), Holger Lange^1,7 (holger.lange@uni-hamburg.de), Francesca Calegari^1,2 (francesca.calegari@desy.de), Matthias F. Kling^3,4,8,9 (kling@stanford.edu)

[1] The Hamburg Centre for Ultrafast Imaging, Luruper Chaussee 149, 22761 Hamburg, Germany
[2] Center for Free-Electron Laser Science CFEL, Deutsches Elektronen-Synchrotron DESY, Notkestr. 85, 22607 Hamburg, Germany
[3] Max Planck Institute of Quantum Optics, Hans-Kopfermann-Str. 1, 85478 Garching, Germany
[4] Physics Department, Ludwig-Maximilians-Universität Munich, Am Coulombwall 1, 85748 Garching, Germany
[5] Attosecond Science Laboratory, Physics and Astronomy Department, King-Saud University, P.O. Box 2455, Riyadh 11451, Saudi Arabia
[6] Institute of Quantum Optics, Leibniz Universität Hannover, Welfengarten 1, 30167 Hannover, Germany
[7] Institute of Physics and Astronomy, Universität Potsdam, Karl-Liebknecht-Str. 24, 14476 Potsdam, Germany
[8] Stanford PULSE Institute, SLAC National Accelerator Laboratory, 2575 Sand Hill Rd, Menlo Park, CA 94025, USA
[9] Applied Physics Department, Stanford University, 348 Via Pueblo, Stanford, CA 94305, USA

The collective response of metal nanostructures to optical excitation leads to localized plasmon generation with nanoscale field confinement, driving applications in e.g. quantum optics, optoelectronics, and nanophotonics, where a bottleneck is the ultrafast loss of coherence through different damping channels. The present understanding is built up on indirect measurements dictated by the extreme timescales involved. Here, we introduce a straightforward field sampling method that allows measuring the plasmonic field of arbitrary nanostructures in the most relevant petahertz regime. We compare experimental data for colloidal nanoparticles to finite-difference time-domain calculations, which show that the dephasing of the plasmonic excitation can be resolved with sub-cycle resolution. Furthermore, we observe a substantial reshaping of the spectral phase of the few-cycle pulse induced by this collective excitation, and we demonstrate ad-hoc pulse shaping by tailoring the plasmonic sample. The results pave the way towards both a fundamental understanding of ultrafast energy transformation in nanosystems and practical applications of nanostructures in extreme-scale spatio-temporal control of light.

Received 29 September 2023 / Accepted 18 December 2023

§ INTRODUCTION

In metallic nanoparticles (NPs) the light electric field can drive the conduction band electrons into a collective oscillation on the nanoscale, referred to as a localized surface plasmon (LSP) <cit.>.
Coupling of light to a LSP resonance leads to the local enhancement of the electromagnetic field and to the confinement of the light-matter interaction on the nanoscale, therefore enabling a manifold of applications including surface enhanced spectroscopy <cit.>, enhanced luminescence <cit.>, strong-field driven nanoscale currents <cit.>, enhanced nonlinear optical effects <cit.>, and strong-coupling quantum optics <cit.>. However, a direct characterization method for the electric fields emerging from the resonantly excited nanostructure is still lacking. Subsequent to its coherent driving by the light field, Landau damping, electron–electron, electron–phonon, and electron–surface scattering result in an ultrafast (about 10 fs) plasmon decay with the energy transferred to highly-excited, nonequilibrium carriers <cit.>. These hot carriers can contribute to chemical transformations on the NP surface, intensely studied in the field of heterogeneous catalysis <cit.>. The details of the plasmon decay channels, however, have only been deduced from theory and dependencies on material parameters and e.g. excitation conditions have not yet been experimentally confirmed in a direct manner <cit.>. For this reason, experiments enabling a more direct access to the plasmon decay of different NP systems are of strong interest. Field sampling of plasmonic nanoantennas has been recently achieved with electro-optical sampling, limiting the approach, however, to the terahertz domain <cit.>. The vast majority of plasmonic nanostructures discussed in the literature exhibits resonances in the visible spectral region, strongly motivating the extension to the petahertz (PHz) domain. As a first example heading in this direction, plasmonic nanoantennas have been utilized as near-field sensors to enhance the sensitivity for the reconstruction of the incident light electric field (E-field) <cit.>. What is still lacking despite its high relevance, is the realization of PHz plasmonic field sampling.Here, based on recent advances in methodology <cit.> we demonstrate PHz far-field sampling of plasmonic responses of colloidal NPs utilizing the tunneling ionization with a perturbation for the time-domain observation of the electric field (TIPTOE) technique <cit.>. We compare the experimental data to results from finite-difference time-domain (FDTD) calculations showing that the temporal build-up and decay of the plasmon field can be resolved. Furthermore, we demonstrate extreme scale control of the transmitted fields with the NPs.With our approach we are not only sensitive to theelectric field in the time domain, but also directly sensitive to the phase response, which is directly imprinted in the phase of the sampled field. This in turn allows for the specific design of the plasmonic material to optimize the light-matter interaction for aforementioned applications. In our case we demonstrate the ability to shape the dispersion ofultrafast light pulses, by changes of the geometry of our NPs. Our results provide a simple way of sampling plasmon fields of arbitrary nanostructures on the PHz scale and show important practical applications in the control of light fields.§ RESULTSFew-cycle pulses with a pulse duration of 4.5 fs, central wavelength of 780 nm, and a repetition rate of 10 kHz were sent into the experimental setup shown in Fig. <ref>a. The incident light was split interferometrically into a fundamental and a signal beam. 
The signal beam propagated through the plasmonic sample before being recombined with the fundamental beam for field sampling. The intensity ratio between signal and fundamental beams was chosen to be roughly 1:1000, to remain in the perturbative regime for the TIPTOE technique <cit.>. The peak intensities for the excitations were chosen to be under 10^10 W/cm^2 to avoid damage to the sample, and to ensure that the interaction between the few-cycle signal pulse and the sample is in the linear regime (see Supplementary Notes 3-5). As samples we employed gold nanospheres (AuNS) of 20 nm diameter, and gold nanorods (AuNR) with dimensions of 80 nm × 26 nm (aspect ratio of ∼3.1), respectively. Transmission electron microscopy (TEM) images of each sample are shown in Fig. <ref>b-c. From the TEM images we also obtained the size distributions of the colloidal samples (see Supplementary Note 1). For AuNS, the plasmon resonance only marginally overlaps with the broad bandwidth of the few-cycle near-infrared (NIR) pulse (non-resonant case), while the longitudinal surface plasmon resonance of AuNRs lies right within the NIR spectrum (resonant case); cf. Fig. <ref>d. The samples were deposited onto a fused silica substrate. A replica silica substrate was used as a reference. The dispersive contribution from the 1 mm thick fused silica substrate for the deposited NPs was carefully compensated with chirped mirrors. Observed changes should therefore only arise from the plasmon field itself.

Fig. <ref> displays the results obtained from the TIPTOE measurements for both the non-resonant and resonant cases in the time domain. As expected for a non-resonant sample, the TIPTOE measurement performed for AuNS shown in Fig. <ref>a exhibits a very similar E-field to the reference field, with a small attenuation that can be attributed to intraband transitions in gold. This also results in almost identical spectral amplitudes and spectral phases between sample and bare substrate, as shown in Fig. <ref>c and e, respectively. Note that the non-resonant case serves as a benchmark in our field sampling approach. In the case of AuNRs, a strong reduction in the amplitude of the E-field is observed, as expected from the resonant absorption (Fig. <ref>d). More interestingly, the sampled E-field in the time domain (Fig. <ref>b) exhibits deviations from the reference: starting from the peak of the pulse envelope we can observe a significant distortion of the optical cycles, with oscillations extending into the tail of the few-cycle pulse. These distortions result in a drastic change of the spectral phase as well, as shown in Fig. <ref>e, which exhibits a crossing with the reference phase at the peak of the plasmon resonance around 810 nm. The phase shift due to the resonance is a well-known effect and has been reported in previous studies <cit.>.

The experimental observations were compared to FDTD calculations implemented using Lumerical 2022 R1 software (ANSYS, Inc) <cit.>. As input parameters we used the particle dimensions, the dielectric constants of the materials and the sampled incident E-field. To account for the inhomogeneous broadening of the plasmon resonance, we calculated the interaction between the experimental few-cycle field and five particles of different sizes, considering the size distribution determined from TEM analysis as a weighting factor (see the sketch below). The calculation results are summarized in Fig. <ref>. The obtained traces for the plasmonic interaction agree with the experimental traces (Fig. <ref>a).
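The size-averaging step described above reduces, in essence, to a weighted sum of the simulated traces. A minimal sketch follows; the traces and weights below are synthetic placeholders for the Lumerical output and the TEM-derived weights.

import numpy as np

def ensemble_field(fields, weights):
    """Weighted average of simulated E-field traces E_k(t) over particle
    sizes, with weights taken from the measured size distribution."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()                            # normalize the TEM-derived weights
    return np.tensordot(w, fields, axes=1)  # sum_k w_k * E_k(t)

# Synthetic example: five traces on a common time grid near 780 nm (~0.38 PHz).
t = np.linspace(-20e-15, 40e-15, 2048)
fields = np.stack([np.exp(-(t / 10e-15) ** 2)
                   * np.cos(2 * np.pi * 0.38e15 * t + 0.1 * k) for k in range(5)])
E_avg = ensemble_field(fields, weights=[0.1, 0.2, 0.4, 0.2, 0.1])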
At 5 fs the discontinuity of the E-field is indicated, and the delayed oscillation can be observed until 12 fs. The agreement persists for the spectral phases. As in the experiment, the spectral phase between reference and plasmonic system exhibits a visible phase crossing at the central resonance, around 750 nm in this case, cf. Fig. <ref>b (indicated by the arrow).

To resolve the onset of the plasmonic excitation from the time-resolved measurements, we subtracted the TIPTOE traces between the sample and reference. In this way, we isolate the contribution of the plasmon from the driving few-cycle field in the far-field domain. The results are reported in Fig. <ref>. For the non-resonant case, the differential signal is almost negligible. For the resonant case, a clear build-up of an additional field component can be resolved, with a subsequent decay with a lifetime on the order of 10 fs, after which the differential field approaches the baseline. Treating the calculated data the same way, we observe a qualitative agreement with the experiment. The plasmon drive is well reproduced by the simulations, while we experimentally observe a faster decay, which could be attributed to inhomogeneous broadening due to plasmon coupling and increased damping at elevated temperatures <cit.>. The visible ultrafast decay can be attributed to the ultrafast dephasing time of the plasmon, which, in contrast to previous measurements <cit.>, can now be retrieved from far-field measurements (cf. Supplementary Note 7).

The pulse interaction with the plasmon is also reflected in the optical dispersion. Applying a polynomial fit to extract the group delay dispersion (GDD) yields different values for the plasmonic interaction compared to the reference sample (a sketch of this extraction is given at the end of this section). A positive GDD enhancement of approximately 8.03 fs^2 compared to the original phase value was observed after propagation through the resonant AuNRs sample. The enhancement was consistent with differently chirped pulses (see Supplementary Note 2). Interestingly, we observe that the residual spectral phase, representing the plasmonic contribution, displays a positive parabolic shape close to the peak of the plasmon resonance. In contrast, the non-resonant AuNS induce no significant change of the dispersion. To further explore this effect, we performed measurements with samples displaying different plasmon resonances by altering the aspect ratio (AR) of the AuNRs. As samples we used AuNRs with dimensions of 71 nm × 26 nm and 78 nm × 19 nm, corresponding to ARs of 2.7 and 4.1, respectively. As the AR of the AuNRs changes, the longitudinal plasmon resonance shifts in frequency. We observe that the spectral phase of the few-cycle pulse is altered correspondingly; cf. Fig. <ref>. As the plasmon absorption shifts relative to the previous resonant case, we also observe a shift of the positive parabolic contribution. In particular, for the plasmon resonance overlapping with the blue part of the spectrum, the minimum of the parabola shifts towards the blue, and correspondingly the remaining spectral phase exhibits a slightly negative parabolic shape that results in a negative GDD. In contrast, for the plasmon resonance overlapping with the red part of the spectrum, the minimum of the parabola shifts towards the red and results in an increase of the positive GDD. Tailoring of the plasmonic resonance is straightforward by changing the properties of the NP such as shape, size or environment, and thus allows the realization of ad-hoc pulse shaping.
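The GDD extraction referenced above can be sketched as a polynomial fit to the unwrapped spectral phase of the sampled field; the input array and the fitting choices (polynomial order, amplitude mask) are illustrative assumptions, not the exact analysis pipeline.

import numpy as np

def gdd_from_trace(E_t, dt, w0, order=4):
    """GDD in s^2 as the second derivative of the spectral phase at the
    central angular frequency w0, from a polynomial fit of the unwrapped
    phase over the spectral region carrying amplitude."""
    spec = np.fft.rfft(E_t)
    omega = 2 * np.pi * np.fft.rfftfreq(len(E_t), d=dt)
    phase = np.unwrap(np.angle(spec))
    mask = np.abs(spec) > 0.1 * np.abs(spec).max()  # fit only where there is signal
    coeff = np.polyfit(omega[mask] - w0, phase[mask], order)
    return 2.0 * coeff[order - 2]  # 2! times the quadratic coefficient

# For 780 nm pulses, w0 = 2*pi*c/780e-9 ~ 2.4e15 rad/s; multiply by 1e30 for fs^2.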
Moreover, the ability to tailor plasmon resonances with a broad bandwidth makes it possible to manipulate the spectral properties of broadband few-optical-cycle pulses. This concept was already anticipated in a theoretical study proposing the use of plasmonic NPs as metasurfaces to engineer ultrashort pulses <cit.>, and it has been experimentally demonstrated with 45 fs long pulses <cit.>. In this context, our approach demonstrates the feasibility of shaping broadband few-cycle pulses using broad plasmon resonances.

§ CONCLUSION

We demonstrated the PHz sampling of resonant plasmon fields from gold nanostructures in the far-field. The implementation of the TIPTOE technique allowed direct access to the plasmon fields at visible wavelengths, which are the most relevant in nanoplasmonics. The plasmon dephasing dynamics was resolved on a sub-cycle scale. This will allow benchmarking theoretical descriptions and addressing the different channels contributing to the ultrafast dephasing. The free-space integration of the samples enables investigating details of the plasmon dephasing of arbitrary plasmonic nanostructures. For example, coherence in strongly coupled hybrid systems can be addressed directly <cit.>. Furthermore, we demonstrated that the broadband plasmon resonance of nanostructures can alter the optical properties of few-cycle pulses. Additionally, the field sampling allows for direct extraction of the spectral phase without applying any reconstruction algorithms. With careful design, the use of such plasmonic nanostructures for shaping ultrashort laser pulses becomes feasible in principle.

§ METHODS

Sample preparation. Monocrystalline NPs with a uniform size distribution were synthesized via established wet-chemistry approaches, namely the seed-mediated growth approach. The detailed protocol for the synthesis of AuNS is stated in ref. <cit.>. The protocol for AuNRs is stated in ref. <cit.>. To prepare the NPs for the optical experiment, in a first step the hexadecyltrimethylammonium chloride (CTAC)/hexadecyltrimethylammonium bromide (CTAB) stabilized particles in aqueous solution were transferred to an organic solution (toluene), in which the particles are stabilized by thiol-terminated polystyrene (PSSH) with a molar weight of 25k g/mol. The reaction was carried out in 1 mL of tetrahydrofuran (THF), where the solution was stirred for three minutes in a glass vial. Shaking the vial after the reaction left a thick product stuck to the side of the vial; the remaining supernatant at the bottom was removed. The remaining product in the vial was dried under a nitrogen atmosphere and then redispersed in toluene. After three washing steps via centrifugation (10k g, 20 minutes each) with toluene in which 0.1 M PSSH was dispersed, 25 µL of the particle solution were spin-coated onto the plasma-cleaned silica substrate (EKSMA Optics) at low spin speed (100 rpm) until the organic solution had evaporated completely. Materials: CTAB and CTAC were purchased from Sigma-Aldrich (USA), thiol-terminated PSSH was purchased from Polymer Source (Canada), THF (>99.5%) was purchased from VWR Chemicals (USA) and toluene (>99.8%) was purchased from Thermo Fisher Scientific (USA).

Characterization of samples. For the as-synthesized particles, the characterization was carried out via UV/Vis absorption spectroscopy and transmission electron microscopy (TEM). Absorption spectra were recorded using a Varian Cary 50 spectrometer.
For TEM analysis, a droplet of AuNS or AuNR solution was deposited on amorphous carbon-coated copper grids. The grids were dried in air overnight to remove residual solvent. TEM images were obtained using a JEOL JEM-1011 transmission electron microscope operating at 100 kV. The particles displayed a narrow size distribution, which was determined based on the width of the plasmon absorption and the TEM data. The deposited NPs were characterized by UV/Vis absorption spectroscopy. For the NPs on the substrate, a slight red-shift and broadening can be observed, with one possible source being plasmon coupling. This would most likely lead to a faster damping of the plasmon oscillation. Further effects that could play a role in the broadening are changes of the environment. Nonetheless, the assumption that the observed plasmon response mainly displays the properties of the individual particles is valid, as the shift is not too pronounced.

§ DATA AND MATERIALS AVAILABILITY

All data needed to evaluate the conclusions in the paper are present in the paper or the supplementary materials.

§ ACKNOWLEDGEMENTS

We acknowledge fruitful discussions with Nirit Dudovich and Ferenc Krausz. This work was supported by the German Research Foundation (DFG) via the Cluster of Excellence "Advanced Imaging of Matter" (EXC 2056, 390715994). H.L. acknowledges funding by the DFG via project 432266622. M.F.K. is grateful for partial support by the Max Planck Society via the Max Planck Fellow program. J.B. acknowledges support by the Max Planck School of Photonics. M.F.K.'s work at SLAC is supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences under DE-AC02-76SF00515, and FWP SC0063. Z.W. acknowledges support from the Alexander von Humboldt Foundation. A.M.A. is grateful for support by the Researchers Supporting Project RSP-2021/152, King Saud University, Riyadh, Saudi Arabia.

§ AUTHOR CONTRIBUTIONS

M.F.K. and F.C. conceived the research project and designed the experiment with H.L.; D.H. and H.L. developed the samples. K.F.W. and W.L. performed the experiments under the guidance of Z.W. A.M.A. was involved in the original design of the field sampling setup. W.L. and Z.W. performed the FDTD simulations. J.B. and T.N. supported the laser and experimental operations. V.W., E.M. and A.T. were involved in the analysis of the experimental data and assisted the preliminary characterization of the samples. K.F.W. and W.L. wrote the initial draft of the manuscript, on which all co-authors commented.

§ COMPETING INTERESTS

None declared.

Supplementary Material on 'Far-field Petahertz Sampling of Plasmonic Fields' by Kai-Fu Wong, Weiwei Li et al.

§ SUPPLEMENTARY NOTE 1: DETERMINATION OF NANOPARTICLE AVERAGE SIZE AND ASPECT RATIO

For the determination of the average AuNP dimensions, TEM images were analyzed. For the AuNS, the diameter is used as the main quantity, while the aspect ratio (AR, length per width) serves as the quantity for the characterization of the AuNRs. The determined distributions are displayed in Fig. <ref>. From the obtained size distribution, a mean diameter of 19.63 nm with a standard deviation of 0.54 nm is obtained for the AuNS. For the AuNRs, a mean aspect ratio of 3.16 with a standard deviation of 0.35 is obtained.
Both results are in good agreement with the values expected from the synthesis parameters of 20 nm and 3.08, respectively.§ SUPPLEMENTARY NOTE 2: EXTRACTION AND VALIDATION OF GROUP DELAY DISPERSION To validate the GDD values obtained from our experiments, we performed additional TIPTOE measurements in which the compressed few-cycle pulses were chirped using materials with well-defined GDD values. In our case we inserted a 1 mm thick fused silica substrate into the pathway of the signal pulse before the plasmonic interaction takes place. The results for the measurements performed in the main text while using chirped pulses are shown in Fig. <ref>: Fig. <ref> displays a clear broadening in the time domain as well as in the frequency domain. Most notably, the spectral phase displays a strong positive parabolic shape, indicating a positive chirp. Applying a polynomial fit to the spectral phase yields the GDD value, which for the reference is determined to be 41.72 fs^2; the literature value for the GDD of 1 mm of fused silica at 780 nm is 37.8 fs^2, which agrees well with our experimental observation. Thus, we can also assume that the deviations induced by the resonance are real. This is again confirmed by the chirped measurements: the non-resonant case displays values similar to our reference, as in the non-chirped case, while the resonant case shows a significantly higher value than the reference.
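For reference, this GDD extraction can be illustrated with a short script. The following is a minimal sketch of ours, not the analysis code used in this work: it assumes an angular-frequency axis and an unwrapped spectral phase obtained from the Fourier transform of a TIPTOE trace, fits a low-order polynomial around the carrier frequency, and reads off the GDD as the second derivative of the phase. The toy input values are placeholders.

```python
import numpy as np

# Hypothetical inputs: angular frequency axis (rad/fs) and unwrapped
# spectral phase (rad) retrieved from the Fourier transform of a trace.
omega = np.linspace(2.0, 3.4, 400)           # rad/fs, spans ~550-950 nm
phase = 0.5 * 41.7 * (omega - 2.42)**2       # toy parabolic phase for testing

omega0 = 2.42                                # rad/fs, carrier at ~780 nm
# Fit a low-order polynomial to the phase around the carrier frequency.
coeffs = np.polyfit(omega - omega0, phase, deg=3)
# phi(w) ~ c0 + c1*(w-w0) + c2*(w-w0)^2 + ...  =>  GDD = d^2(phi)/dw^2 = 2*c2
gdd = 2.0 * coeffs[-3]                       # fs^2, since omega is in rad/fs
print(f"GDD at carrier: {gdd:.1f} fs^2")     # ~41.7 fs^2 for the toy input
```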
§ SUPPLEMENTARY NOTE 3: EXPERIMENTAL SETUP AND LINEAR-RESPONSE CALIBRATION To obtain the few-cycle pulses required for performing the TIPTOE measurement, the output of a 10 kHz Ti:sapphire chirped pulse amplification (CPA) system was sent through an argon-filled hollow-core fiber, generating white light with a broad spectrum ranging from 500 to 950 nm. The white-light pulses were then compressed down to 4.56 fs using chirped mirrors, and the compressed few-cycle pulses were sent into the setup for performing the TIPTOE measurements. A sketch of the complete experimental setup is shown in Fig. <ref>. The few-cycle laser pulse is split into two pathways, i.e., the strong pump beam and the weak signal beam. While the pump pulses were sent to a delay stage, the signal pulses interacted with the sample, exciting the plasmon resonance. A wedge pair (WPF) was used to finely compress the fundamental pulses, which induce tunnel ionization serving as a subcycle gate for the field sampling, while another wedge pair (WPS1) was employed to finely compress the signal pulses that interact with the gold NPs. The extra dispersive contribution from the 1 mm thick fused silica substrate was carefully compensated with two pairs of chirped mirrors (CMs) in combination with an additional wedge pair (WPS2). Thus, observed changes should only arise from the plasmonic field itself. Moreover, an intermediate focus was formed in the probe beam path using a pair of parabolic mirrors, where an optical chopper operating at 5 kHz was applied, enabling lock-in detection of the modulated ionization yield in the form of a current. Both beams were recombined and focused in between a pair of electrodes with a fixed bias of 40 V in ambient environment. The transmitted probe light field which interacted with the sample can be sampled by scanning the delay stage and thereby changing the time delay between the fundamental pulses and the signal pulses. A scan of the signal E-field strength was performed during a series of test measurements on the fused silica reference. The measured temporal traces under different E-field strengths incident in between the electrodes were Fourier-transformed into the frequency domain, and the obtained spectral magnitudes were integrated over the wavelength range of interest. As can be seen in Fig. <ref>, plotting the integrated magnitude against the signal E-field strength reveals a linear response throughout the whole measured range, evidenced by the linear slope in the log-log plot. This observation confirms that the setup responds linearly to the incident signal E-field over a wide range, and that the signal E-field used, marked by the black arrow in Fig. <ref>, is weak enough to avoid inducing any unexpected influence on the measurements. § SUPPLEMENTARY NOTE 4: LINEAR PLASMONIC RESPONSE DETERMINATION To ensure that the signal pulses are weak enough to excite only a linear plasmonic response from the gold nanoparticles, a determination of the linear plasmonic response was conducted. This was done by performing TIPTOE measurements on the resonant (AuNR) sample with varying signal power. A Fourier transformation was applied to the measured temporal traces to obtain the corresponding spectral information in the frequency domain. Similar to the procedure in Supplementary Note 3, in order to determine the linear-response regime of the sample, the spectral magnitude profiles under different signal intensities were integrated and plotted as shown in Fig. <ref>. In the linear regime the integrated magnitude should in principle increase linearly with the signal E-field strength. The plot in Fig. <ref> reveals no nonlinear behavior for the system in the measured range of signal E-field strengths, where the typically used signal E-field strength was 51.1 MV·m^-1, as marked by the black arrow. Based on this observation, the experimental measurements with the AuNR samples can most likely be assumed to take place in the regime where the plasmonic response is linear. § SUPPLEMENTARY NOTE 5: LASER PULSE CHARACTERIZATION AND SAMPLE DAMAGE EVALUATION The few-cycle signal pulses used for the interaction with the NPs were characterized using the dispersion-scan (D-scan) technique, and the reconstructed results are shown in Fig. <ref>. The spectral intensity (blue) and phase (orange) are plotted in Fig. <ref>a, where the spectrum spans a wavelength range from 500 to 950 nm with an almost flat phase curve in the region of interest, indicating good compression of the pulses. Additionally, the reconstructed spectrum (blue) exhibits nearly the same profile as the measured laser spectrum (dashed red), which confirms the reliability of the D-scan measurement results. Fig. <ref>b shows the reconstructed temporal intensity profile of the pulse (orange) together with the calculated Fourier-transform-limited pulse (blue). The reconstructed pulse duration (full width at half maximum, FWHM) was measured to be 4.56 fs compared to the transform-limited duration of 3.98 fs, with a reconstruction error of 0.058%. Furthermore, since the TIPTOE traces in principle provide a replica of the signal laser field, information regarding the original signal laser pulse can be obtained from the TIPTOE traces measured on the bare substrate with no plasmonic contribution, as long as the induced dispersion is carefully compensated. As shown in Fig.
<ref>c, by taking the square of the amplitude of the measured E-field (orange) and taking the envelope of the positive intensity trace (blue), a laser pulse profile with an FWHM pulse width of 4.53 fs is obtained, in excellent agreement with the results of the D-scan measurement. The displayed TIPTOE traces are averaged over three consecutive measurements. To the averaged trace we applied a Savitzky-Golay filter to remove residual noise components. The comparison between the non-filtered and filtered traces can be seen in Fig. <ref>a. To validate the accuracy of the TIPTOE measurement, the FFT-retrieved spectral amplitude is compared to a spectrum of the probe pulse measured under the same conditions using a commercial spectrometer, which is shown in Fig. <ref>b. In Fig. <ref>b the spectrum retrieved from the TIPTOE trace agrees well with the spectrum obtained with the spectrometer. Additionally, the retrieved spectral phase displays a relatively flat phase with a small residual negative GDD. The damage threshold of the synthesized samples was determined using few-cycle pulses comparable to those in the experiment (5.9 fs) at a repetition rate of 1 kHz. The peak intensities applied to the samples ranged between 10^10 W/cm^2 and 10^14 W/cm^2. Damage was mainly assessed by monitoring the transmission spectrum for significant changes. At intensities starting from 1.5·10^12 W/cm^2 we observe strong modulations of the spectrum with fringes in the few-cycle spectrum. These fringes also appear for the pure substrate, but at higher peak intensities of 2.5·10^12 W/cm^2. Therefore, the damage threshold of the synthesized NPs should be taken as not exceeding 10^12 W/cm^2, as the margin between damage onset for the NPs and for the fused-silica substrate itself is relatively small. We do not expect significant changes for the higher repetition rate of 10 kHz used in the experiment, as at this repetition rate heat can still dissipate between two consecutive pulses. Damage-threshold measurements on gold NPs with ultrashort laser pulses reported in the literature show remarkably similar values of the same order of magnitude <cit.>. During our experiments we operated at intensities substantially lower than the determined threshold: the highest peak intensity used in the experiment was below 10^10 W/cm^2, i.e., at least two orders of magnitude lower. This also ensures that we operate in the linear regime for the interaction between the pulse and the plasmonic sample, as confirmed by the linear calibration measurements. The modulations at high intensities, compared to transmission spectra below the determined damage threshold, are shown in Fig. <ref>: Given the determined laser-induced damage threshold (LIDT) value, it is ensured that this value is not exceeded in the set of experiments, owing to the use of relatively low power and the fact that the sample is placed in a collimated beam path. The actual laser peak intensity on the sample surface can be estimated using the following equation (<ref>), I_0=√(16 ln(2)/π^3)·W/(τ_FWHM ω_0^2), where W is the pulse energy, τ_FWHM is the pulse duration, and ω_0 is the radius of the beam.
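As a quick plausibility check, equation (<ref>) can be evaluated numerically. The following minimal Python sketch is our own illustration, not part of the original analysis; note that the units of the beam size and pulse energy quoted below were lost in the source text, so the values assigned here (0.18 µJ and a beam radius of roughly 0.26 mm) are assumptions chosen only to reproduce the order of magnitude of the quoted result.

```python
import numpy as np

def peak_intensity(W, tau_fwhm, w0):
    """Peak intensity of a Gaussian beam/pulse:
    I0 = sqrt(16*ln2/pi^3) * W / (tau_FWHM * w0^2).
    W in J, tau_fwhm in s, w0 (beam radius) in m; returns W/m^2."""
    return np.sqrt(16 * np.log(2) / np.pi**3) * W / (tau_fwhm * w0**2)

# Assumed illustration values (units were elided in the source text):
W = 0.18e-6        # pulse energy, assumed 0.18 uJ
tau = 4.53e-15     # pulse duration, 4.53 fs
w0 = 0.26e-3       # beam radius, assumed ~0.26 mm

I0 = peak_intensity(W, tau, w0)       # W/m^2
print(f"I0 = {I0 / 1e4:.3e} W/cm^2")  # ~3.5e10 W/cm^2, order of quoted value
```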
With a Gaussian beam diameter of 5 on the sample and a pulse duration of 4.53 fs, the maximum peak intensity achieved with a pulse energy of 0.18 is 3.464·10^10 W/cm^2, which is two orders of magnitude below the damage threshold value determined for the AuNPs (and below the estimated value for the AuNRs as well).§ SUPPLEMENTARY NOTE 6: BASIC CONCEPT FOR DATA ANALYSIS Taking a bird's-eye view, in the experimental scheme without the sample, the lightwave detector, i.e. the TIPTOE detector, is a linear time-invariant (LTI) system when operated in the linear regime, as shown in the dashed box in Fig. <ref>. Therefore, the measured electric current u(t) can be modeled as a temporal convolution of the instrument response function (IRF) of the TIPTOE detector and the incident light electric field E_0(t): u(t)=E_0(t) ∗ IRF(t). The linearity of the detector has been verified in Supplementary Note 3 (cf. Fig <ref>). Assuming a spectrally flat response of the detector, as proven elsewhere <cit.>, the measured temporal trace of the electric current is directly connected to the incident electric field as shown below: H_IRF(ω)=const, IRF(t) ∼ const·δ(t), u(t)=E_0(t) ∗ δ(t)=E_0(t). Note that the time- and frequency-domain signals are connected by the Fourier transform, where a constant, i.e. a spectrally flat response, in the frequency domain corresponds to a delta function in the time domain. The linear light-matter interaction system, as in our experiment, is also an LTI system, which can be modeled as shown in Fig. <ref>. With the sample in the beam path, the entire system can be modelled as two LTI systems in series. It has already been demonstrated that the electric current u(t) measured in the experiment directly reflects the light electric field reaching the detector; therefore, we can safely consider the measured results as the light electric field transmitted through the sample, i.e. u(t) ∼ E_sample(t). Similarly, since it is an LTI system whose linearity has been verified in Supplementary Note 4, the measured transmitted E-field, E_sample(t), is a temporal convolution of the impulse response function of the sample, R(t), and the incident E-field E_0(t). Considering that the estimated time scale of the plasmonic response is very close to the pulse duration of our incident light, a proper way of extracting the plasmonic response of the sample is to perform a deconvolution. In the Fourier deconvolution method, one first Fourier-transforms E_sample(t) and E_0(t) from the time domain to the frequency domain, yielding E_sample(ω) and E_0(ω). The impulse response of the sample in the frequency domain can then be calculated as R(ω)=E_sample(ω)/E_0(ω). Finally, the plasmon response in the time domain is obtained by inverse transforming R(ω): R(t)=iFFT[R(ω)]. Therefore, we note that while the subtraction of the time traces of the transmitted and incident fields, as shown in the main text, may not ultimately yield the plasmonic response in the time domain, it serves well to expose the distortion of the E-field induced by the plasmon resonance and to magnify the non-trivial contribution of the plasmonic resonances. A similar analysis was reported in <cit.>.
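A numerical version of this Fourier deconvolution can be written in a few lines. The sketch below is illustrative only: the two field traces are synthetic placeholders, and the small regularization term in the division is a practical addition of ours (not part of the description above) to stabilize frequencies outside the spectral support of the pulse.

```python
import numpy as np

# Hypothetical sampled field traces on a common, equidistant time axis
# (in practice, the measured TIPTOE traces would be used here).
t = np.linspace(-50e-15, 50e-15, 2048)
e_incident = np.exp(-t**2 / (2 * (2e-15)**2)) * np.cos(2 * np.pi * 384e12 * t)
e_sample = np.roll(e_incident, 20) * 0.9       # toy "transmitted" field

E0 = np.fft.rfft(e_incident)
Es = np.fft.rfft(e_sample)

# R(w) = E_sample(w) / E_0(w), written in regularized (Wiener-like) form
# so that frequencies with negligible incident amplitude do not blow up.
eps = 1e-3 * np.abs(E0).max()
R_w = Es * np.conj(E0) / (np.abs(E0)**2 + eps**2)

r_t = np.fft.irfft(R_w, n=t.size)              # impulse response in time domain
```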
§ SUPPLEMENTARY NOTE 7: DETAILS OF THE FDTD SIMULATION To theoretically determine the influence of the plasmonic field on the incident E-field, finite-difference time-domain (FDTD) simulations were performed using commercial software from Lumerical. In the simulation, the dimensions of the samples with the AuNRs (and the AuNS as well), including the dielectric constants, were given as input parameters. The measured extinction spectra for both the resonant and the non-resonant case (see Fig. 2) show a broadband plasmonic resonance, owing to the fact that the prepared AuNRs and AuNS are not perfectly uniform in size. Therefore, to mimic the experimental conditions of the broadband plasmonic resonance, five differently sized AuNRs, with dimensions chosen according to the size distributions determined for the colloidal samples, were modelled in the simulation box. Additionally, AuNS with a diameter of 20 nm were simulated under the same conditions, serving as the non-resonant reference to the resonant case. The gold NPs were embedded in a 30 nm thick polystyrene layer coated on a bulk fused silica substrate (Fig. <ref>a). To accurately model the fused silica substrate, we assigned it a constant refractive index of n=1.45. Optical data for gold were obtained from ref. <cit.>. Periodic boundary conditions were applied in the directions perpendicular to the substrate surface, while perfectly matched layer (PML) boundaries were utilized in the parallel direction. The light source used in the simulations was the incident field obtained from our experiment, which is a linearly polarized few-cycle pulse of 4.6 fs duration centered around a wavelength of 780 nm. Time and frequency monitors are placed 200 nm before and after the structure, ensuring far-field detection with no influence from the gold NPs. In the frequency domain, as shown in Fig. <ref>b, the bandwidth of the extinction spectrum for the resonant case can be well reproduced using the above-mentioned model, indicating that the simulated sample conditions are in good agreement with our experimentally investigated sample conditions. The time-domain results of the AuNRs (resonant case) have been shown and discussed in the main text; therefore, only the results of the AuNS (non-resonant case) are shown here in Fig. <ref>. Compared to the case of the AuNRs, the E-field that interacted with the AuNS does not show any obvious phase shift, and the temporal trace remains almost the same (Fig. <ref>a), showing no plasmonic resonance effect from the sample. By performing a Fourier transform of the temporal traces of the incident field and the detected field, one can see that both the spectral amplitude and the spectral phase curves are almost identical (Fig. <ref>b), further confirming that the plasmonic dephasing and phase crossing observed for the resonant case cannot be detected for the non-resonant AuNS. These results are reliable and consistent with the experimental observations. In addition, a comparison between the simulated near- and far-field differential signal of the AuNRs was conducted. The results are displayed in Fig. <ref>. All data show a qualitative agreement in the first few oscillations after the driving electric field maximum. As expected, the decay window for the near-field differential is much longer than under far-field conditions (cf. main text).§ SUPPLEMENTARY NOTE 8: TIME-FREQUENCY ANALYSIS A time-frequency analysis was conducted by applying the Wigner-Ville distribution (WVD) to the measurements, the simulations, and the respective differentials. The WVD of each field is summarized in Fig. <ref>. An agreement between the experimental and simulated traces for the sample response is qualitatively observable. In the case of the reference, a slight positive chirp of the red spectral component is evident. We also observe prepulses at negative time delays, which are also visible in the time traces shown in the main manuscript. For the differentials, we observe the buildup of the plasmonic contribution after zero delay, as well as a chirp at both ends of the spectral range.
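For readers wishing to reproduce such a time-frequency map, a minimal discrete Wigner-Ville distribution can be computed as sketched below. This is our own illustrative implementation, not necessarily the variant used for Fig. <ref> (smoothed/pseudo-WVD implementations are common in practice); the analytic signal is used to suppress cross-terms between positive and negative frequencies.

```python
import numpy as np
from scipy.signal import hilbert

def wigner_ville(x):
    """Minimal (unsmoothed) discrete Wigner-Ville distribution of a real
    trace x; rows index time samples, columns index frequency bins."""
    z = hilbert(np.asarray(x, float))   # analytic signal reduces cross-terms
    n = z.size
    taus = np.arange(-(n // 2), n - n // 2)      # lag axis, zero at n//2
    wvd = np.zeros((n, n))
    for i in range(n):
        acf = np.zeros(n, dtype=complex)
        m = np.abs(taus) <= min(i, n - 1 - i)    # lags inside the record
        # instantaneous autocorrelation z(t + tau) z*(t - tau)
        acf[m] = z[i + taus[m]] * np.conj(z[i - taus[m]])
        wvd[i] = np.fft.fft(np.fft.ifftshift(acf)).real
    return wvd
```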
§ SUPPLEMENTARY NOTE 9: INFLUENCE OF DIELECTRIC MEDIUM To investigate the influence of the dielectric medium, which in this case is the polystyrene matrix in which the particles are embedded, an FDTD simulation with and without the matrix was conducted to study the plasmon response as a function of its environment. To account for the shift of the plasmon resonance caused by the change of medium, which changes the refractive index, we shifted our defined Gaussian pulse, with a pulse duration of 4.6 fs, accordingly along the wavelength axis and assume a negligible residual chirp due to the different dispersion response at different wavelengths. The results are shown in Fig. <ref>. These simple simulations illustrate the influence of the dielectric medium on the plasmon resonance. We observe a strong enhancement of the plasmon resonance response in the polystyrene medium, whereas the response in vacuum, in which the particles are only attached to the substrate surface, displays a lower intensity. This effect is also reflected in the spectral phase, as the phase response for the interaction within the polystyrene medium is much more pronounced. We are confident that our field sampling technique is sensitive to these changes in phase, which would prove advantageous in the design of metasurfaces.
LLM Factoscope: Uncovering LLMs' Factual Discernment through Inner States Analysis Jinwen He^1,2 Yujia Gong^1,2 Kai Chen^1,2 Zijin Lin^1,2 Chengan Wei^1,2 Yue Zhao^1,2 ^1SKLOIS, Institute of Information Engineering, Chinese Academy of Sciences ^2School of Cyber Security, University of Chinese Academy of Sciences {hejinwen, gongyujia, linzijin, weichengan, zhaoyue, chenkai}@iie.ac.cn ======================================================================================================================== Large Language Models (LLMs) have revolutionized various domains with extensive knowledge and creative capabilities. However, a critical issue with LLMs is their tendency to produce outputs that diverge from factual reality. This phenomenon is particularly concerning in sensitive applications such as medical consultation and legal advice, where accuracy is paramount. In this paper, we introduce the LLM Factoscope, a novel Siamese network-based model that leverages the inner states of LLMs for factual detection. Our investigation reveals distinguishable patterns in LLMs' inner states when generating factual versus non-factual content. We demonstrate the LLM Factoscope's effectiveness across various architectures, achieving over 96% accuracy in factual detection. Our work opens a new avenue for utilizing LLMs' inner states for factual detection and encourages further exploration into LLMs' inner workings for enhanced reliability and transparency.§ INTRODUCTION Large Language Models (LLMs) have gained immense popularity, revolutionizing various domains with their remarkable creative capabilities and vast knowledge repositories. These models are reshaping fields such as natural language processing <cit.>, content generation <cit.>, and more. However, despite their advanced abilities, a growing concern surrounds their propensity for "hallucination", the generation of outputs that deviate from factual reality <cit.>. In critical applications like medical consultation <cit.>, legal advice <cit.>, and educational tutoring <cit.>, factual LLM outputs are not just desirable but essential, as non-factual outputs from these models could potentially lead to detrimental consequences for users, affecting their health, legal standing, or educational understanding. Recognizing this, the factual detection of LLM-generated content has emerged as an area of paramount importance <cit.>. Current research predominantly relies on cross-referencing LLM outputs with external databases <cit.>. While effective, this approach necessitates extensive external knowledge bases and sophisticated cross-referencing algorithms, introducing additional complexity and dependencies. This raises a compelling question: could we forgo external resources and leverage only the inner states of LLMs for factual detection? Drawing inspiration from human lie detectors, which assess physiological changes like heart rate and micro-expressions to detect inconsistencies in statements <cit.>, our study proposes a similar approach for factual detection in LLMs. We hypothesize that LLMs, having been exposed to a broad spectrum of world knowledge during training, might exhibit distinguishable patterns in their inner states when generating outputs that are either factual or non-factual.
While LLMs may also learn from non-factual sources, their training favors more factual sources <cit.>. Therefore, they may exhibit different inner states depending on whether the output is factual or non-factual. Our investigation observed distinct activation patterns in LLMs when they output factual versus non-factual content. Figure <ref> displays the average activation values at each layer when we queried Llama-2-7B six times about different movies and their respective directors. The figure shows the average activation values as a line, with the shaded area representing the minimum to maximum activation values observed across these queries. Of these six queries, three responses were factually correct and three were incorrect. Notably, there is a discernible difference in activation values between layers 16 to 19 and 21 to 24. This phenomenon stems from different areas within LLMs being responsible for factual information and creative output, leading to different internal state behaviors when producing factual versus non-factual content <cit.>. Based on our preliminary observation that LLMs exhibit distinct activation patterns when outputting factual versus non-factual content, we introduce the LLM Factoscope, a Siamese network-based factual detection model. The LLM Factoscope analyzes the inner states of LLMs, including activation maps, final output ranks, top-k output indices, and top-k output probabilities, each offering a unique perspective on the model's internal decision-making process. Activation maps are utilized to understand information processing within the LLM, highlighting the neurons active when generating factual versus non-factual outputs. Concurrently, final output ranks indicate the evolving likelihood of the final output token across the layers, providing insights into the model's shifting output preferences. Additionally, top-k output indices identify the most probable output tokens at each layer, reflecting the model's decision-making priorities and its process of narrowing down choices. Complementing these, top-k output probabilities reveal the model's confidence in its top choices at each layer, offering a window into its probabilistic reasoning. Together, these diverse inner states enable our LLM Factoscope model to effectively discern the factual accuracy of LLM outputs, leveraging the nuanced insights provided by each type of intermediate data in a cohesive, integrated manner. The LLM Factoscope assesses the factuality of the model's current output, providing a novel approach to fact-checking within LLMs. In our experiments, we empirically demonstrate the effectiveness of the LLM Factoscope across various LLM architectures, including GPT2-XL-1.5b, Llama2-7b, Vicuna-7b, Stablelm-7b, Llama2-13b, and Vicuna-13b. The LLM Factoscope achieves an accuracy rate exceeding 96% in factual detection. Additionally, we extensively examine the model's generalization capabilities and conduct ablation studies to understand the impact of different sub-models and support set sizes on the LLM Factoscope's performance. Our work paves a new path for utilizing the inner states of LLMs for factual detection, sparking further exploration and analysis of LLMs' inner data for enhanced model understanding and reliability. Our contributions are as follows:* We designed a pipeline for the LLM Factoscope, encompassing factual data collection, creation of a factual detection dataset, model architecture design, and detailed training and testing procedures.
All the datasets and implementation will be released for further research and analysis.* We empirically validated the effectiveness of the LLM Factoscope, explored its generalizability across various domains, and conducted thorough ablation experiments to understand the influence of different model components and parameter settings. § BACKGROUND§.§ Large Language Models Large Language Models (LLMs) are predominantly structured around the transformer decoder architecture <cit.>. These models, typically comprising billions of parameters, are adept at capturing intricate language patterns <cit.>. A formalized view of their inner workings can be presented as follows: consider an LLM defined as a function F mapping an input sequence 𝐱 = (x_1, x_2, …, x_n) to an output sequence 𝐲 = (y_1, y_2, …, y_m), where 𝐱 and 𝐲 consist of tokens from a predefined vocabulary 𝒱. Each token x_i is first transformed into a high-dimensional space through an embedding layer, resulting in a sequence of embeddings 𝐄 = Embed(𝐱). The core of an LLM lies in its multiple transformer layers, each comprising two main components: a self-attention module 𝒜 and a multilayer perceptron (MLP) module ℳ. For a given layer l, the hidden state 𝐇^(l-1) (with 𝐇^(0) = 𝐄) is first processed by the self-attention mechanism. The output of the attention layer, denoted as 𝐀^(l), is then passed through the MLP layer. The MLP, a series of fully connected layers, further processes this data to produce the output, denoted as 𝐌^(l). The process within each layer can be mathematically represented as: 𝐀^(l) = 𝒜(𝐇^(l-1)), 𝐌^(l) = ℳ(𝐀^(l),𝐇^(l-1)), 𝐇^(l) = 𝐇^(l-1)+𝐀^(l)+𝐌^(l), where 𝒜 and ℳ encapsulate the operations within the attention and MLP modules, respectively. After the final layer L, the output 𝐇^(L) is typically passed through a linear layer followed by a softmax function to generate a probability distribution over the vocabulary 𝒱 for each token in the output sequence: 𝐲 = softmax(W ·𝐇^(L) + b), where W and b are the weights and bias of the linear layer, respectively. Our method leverages inner states of the LLM, such as the outputs of the hidden layers and MLP modules, to detect whether the next output of the LLM is factual or not.
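To make the layer update above concrete, the following minimal PyTorch sketch mirrors the equations. It is our illustration only: real decoder blocks additionally use layer normalization, causal masking, and other details omitted here, and all names are hypothetical.

```python
import torch
import torch.nn as nn

class DecoderLayer(nn.Module):
    """Minimal sketch of the per-layer update H^(l) = H^(l-1) + A^(l) + M^(l)."""
    def __init__(self, d_model, n_heads, d_ff):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                                 nn.Linear(d_ff, d_model))

    def forward(self, h):
        a, _ = self.attn(h, h, h)      # A^(l) = Attention(H^(l-1))
        m = self.mlp(h + a)            # M^(l) = MLP(A^(l), H^(l-1))
        return h + a + m               # residual composition of the hidden state

# Final projection to the vocabulary, y = softmax(W H^(L) + b):
d_model, vocab = 512, 32000
lm_head = nn.Linear(d_model, vocab)
h_last = torch.randn(1, 8, d_model)    # hypothetical H^(L) for 8 tokens
probs = torch.softmax(lm_head(h_last), dim=-1)
```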
§.§ LLM Factual Detection Fact-checking LLM outputs has become an increasingly critical task. Current approaches to mitigating LLM-generated inaccuracies include scrutinizing training datasets and cross-referencing external databases. Manual examination <cit.> of training datasets is labor-intensive, while external database referencing <cit.> <cit.> incurs additional computational costs and relies heavily on the effectiveness of cross-verification techniques. The recently proposed SAPLMA <cit.> investigates whether LLMs can discern the factuality of an input sentence; it uses the output of a single LLM layer to train a fully connected neural network. Our method instead aims to classify each generated output as factual or non-factual, closely emulating the typical usage of LLMs. We leverage not just the activation values from a single layer, but also the inter-layer changes in activations and hidden states within the LLM. This multi-dimensional analysis of the LLM's inner data is akin to observing multiple physiological responses in a human lie detector <cit.>. By aggregating these intermediate states, our method provides a more effective, generalized, and explainable tool for analyzing the factual accuracy of the LLM's output.§.§ Siamese Network Siamese networks are designed to address few-shot learning challenges by discerning the similarities and differences between input pairs rather than performing conventional classification <cit.>. These networks consist of two identical sub-networks with shared weights, ensuring uniform processing of inputs. Their primary aim is to learn a feature space where similar items are brought closer together and dissimilar ones are pushed apart, using a contrastive loss function. This approach is particularly effective for few-shot learning, as it allows the network to learn robust representations of the relationships between inputs rather than direct classifications. Our LLM Factoscope model uses the Siamese network framework to analyze inner states within LLMs. The training phase is guided by maximizing the similarity between similar data points (both factual or both non-factual) and minimizing it for dissimilar ones (one factual, the other non-factual). During testing, the LLM Factoscope model uses a support set comprising a collection of labeled data for classification. When a test sample is introduced, the model computes its embedding and compares it with those in the support set. The test data receives the same classification as the nearest sample in the support set, thus leveraging the similarities and differences learned during the training phase. This method ensures a more reliable and nuanced classification, especially in scenarios with limited or diverse data, by effectively utilizing the learned relationships within the model. § LLM FACTOSCOPE This section outlines our method for developing the LLM Factoscope. We begin with an overview of our pipeline. Subsequently, we delve into the preprocessing steps necessary to refine the data for effective model training. Then, we present the architecture of the LLM Factoscope, elaborating on the integration of various sub-models for processing different types of inner states. Lastly, we explain our model's training and testing procedures. §.§ Overview We introduce a pipeline for creating an LLM Factoscope that leverages the intermediate information of LLMs, as shown in Figure <ref>. Initially, we search for structured data in CSV format in the Kaggle repository. Then, we extract entities and their corresponding targets that exhibit specific relations. For instance, for a relation like movie-director, one such data point might be: the entity being the movie title '2001: A Space Odyssey' and the target being its director, 'Stanley Kubrick'. The entity, relation, and target form the framework on which we construct our dataset. This dataset is then deployed to probe LLMs, checking whether their responses align with factual correctness; the outcomes serve as labels for our factual detection dataset. Concurrently, we capture the LLMs' inner states, which encode the model's inner representation of knowledge, and use them as features for our dataset. After collecting the data, we apply a series of preprocessing steps, such as normalization and transformation. These steps are crucial, as they standardize the data to a uniform scale and format, thereby significantly enhancing the LLM Factoscope's learning capabilities. The final step is to train a Siamese network-based model, designed to maximize the embedding similarity between data of the same class (either both factual or both non-factual) and minimize the similarity between pairs of different classes (one factual and one non-factual).
§.§ Factual Data Collection We start our dataset collection by searching for fact-related CSV datasets on Kaggle <cit.>, a platform chosen for its diverse and extensive datasets. The CSV format's inherent structuring into entities, relationships, and targets makes it an ideal candidate for the automated generation of prompts and answers. Our dataset includes various categories, such as art <cit.> <cit.>, sport <cit.> <cit.>, literature <cit.>, geography <cit.>, history <cit.>, science <cit.>, and economics <cit.>, ensuring comprehensive coverage of various factual aspects. Within each category, we have developed datasets encompassing multiple relational types; for instance, the art category includes relationships such as artwork-artist, movie-director, movie-writer, and movie-year. Leveraging GPT-4's advanced capabilities <cit.> and meticulous manual adjustments, we crafted clear and unambiguous prompts to ensure that LLMs can accurately comprehend the questions. To further enhance the dataset's robustness and diversity, we developed multiple synonymous question templates for each relation type. Table <ref> provides an overview of the datasets. Each dataset entry consists of a prompt and a corresponding factual answer. We have made this dataset available for open-source contributions to facilitate further research. §.§ Inner States Collection After constructing our factual dataset, we present these prompts to the LLM. Beyond merely capturing the LLM's direct responses, our focus extends to gathering inner states. This data, consisting of activation values and hidden states, is captured specifically for the last token of the entire prompt. The data is crucial for comprehensively understanding the model's inner mechanisms, particularly how it processes information and makes decisions when generating responses. In the following, we detail the collection and significance of four key types of inner states: the activation map, the final output rank, the top-k output indices, and the top-k output probabilities. Each type sheds light on a different aspect of the LLM's functioning, contributing to a deeper understanding of its response generation process. Activation map: The activation map represents the activation values of the last token of the prompt when processed by the LLM. This map encapsulates the LLM's inner representation of the knowledge pertinent to the prompt. As the LLM traverses its layers, it retrieves information relevant to the prompt <cit.>. When the subsequent word aligns with the ground-truth answer, this indicates successful knowledge retrieval at the intermediate layers; otherwise, it suggests inadequate knowledge retrieval. These contrasting scenarios are expected to show distinct activation patterns, which we capture through the activation map. Final output rank: This rank represents the position of the final output token in the probability distribution at each layer of the LLM. Specifically, we take the hidden states at each layer, apply the same vocabulary mapping used at the final hidden layer (the linear head), and thereby obtain logits for each token in the vocabulary <cit.>. The rank is determined from the descending order of the logits of the final output token at each layer. The rank shows how the likelihood of the final token changes across layers, reflecting the model's evolving output preferences; a sketch of this per-layer readout is given below.
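The per-layer readout can be implemented in a few lines of PyTorch. The sketch below is a simplified illustration under assumed conditions: it targets a Hugging Face-style causal LM that exposes `lm_head` and returns `hidden_states`, and it omits details such as the final layer normalization that some models apply before the vocabulary projection.

```python
import torch

@torch.no_grad()
def collect_rank_and_topk(model, tokenizer, prompt, k=10):
    """Project each layer's hidden state of the last prompt token through
    the LM head and record the final token's rank plus the top-k
    indices/probabilities per layer."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    out = model(ids, output_hidden_states=True)
    final_token = out.logits[0, -1].argmax()          # model's next-token choice
    ranks, topk_idx, topk_prob = [], [], []
    for h in out.hidden_states[1:]:                   # one entry per layer
        logits = model.lm_head(h[0, -1])              # shared vocabulary mapping
        order = logits.argsort(descending=True)
        ranks.append((order == final_token).nonzero().item() + 1)
        p = torch.softmax(logits, dim=-1)
        top = p.topk(k)
        topk_idx.append(top.indices)
        topk_prob.append(top.values)
    return ranks, topk_idx, topk_prob
```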
Top-k output index and probability: From the logits used for the final output rank, we identify the top-k tokens with the highest logits at each layer. These tokens represent the model's most likely outputs after processing the information at each layer. The relationships among these top-k tokens, both within and across layers, shed light on various aspects of the model's processing. Applying a softmax function to the logits at each layer, we obtain the probability distribution over all tokens and subsequently identify the top-k tokens with the highest probabilities. These data reflect the fluctuating probabilities of these tokens across layers, providing insight into the model's probabilistic reasoning. By closely examining the LLMs' intermediate responses to factual prompts, we not only gain valuable insights into the inner dynamics of the models' decision-making processes but also establish a foundation for a more nuanced analysis and modeling of their behavior in discerning factual from non-factual outputs. Moreover, alongside these inner states, we record labels for factual detection. These labels, derived by evaluating whether the model's first word following each prompt aligns with the factual answer, serve as a key indicator of the model's accuracy in factual detection. A correct alignment is marked as positive, while a misalignment is categorized as negative. §.§ Inner States Preprocessing In this section, we introduce the preprocessing of the inner states for factual detection using LLMs. This preprocessing involves normalization and transformation techniques that refine the data for effective integration into the training process. We detail the preprocessing methods applied to each category of inner states. Normalization of the activation map: We calculate the mean μ and standard deviation σ of the dataset. The activation map 𝐀 is then normalized using the formula 𝐀_normalized=(𝐀-μ) / σ. This normalization ensures a uniform scale for the activation values, enhancing their comparability and relevance in the model's learning mechanisms. Transformation of the final output rank: The rank of the final output token undergoes a specific transformation that normalizes the ranks to the range 0 to 1 and emphasizes higher initial ranks (lower numerical values). Mathematically, the transformation of rank r can be represented as r_transformed=1 /[(r-1)+1+10^-7], r∈[1,|V|], where |V| is the size of the LLM's vocabulary. When the rank r is 1 (indicating the highest initial rank), the transformed rank r_transformed attains its maximum value, close to 1, and it decays toward 0 as r grows. The 10^-7 in the denominator is a small constant added for numerical stability. Distance calculation for the top-k output index: In processing the top-k output indices, we measure the semantic proximity between token embeddings across adjacent layers. This is achieved by calculating the cosine similarity between the embeddings of the tokens, providing insight into how the model's perception of these tokens evolves across layers. This distance metric helps capture semantic continuity or shifts within the model's processing layers. It is important to note that while most categories of inner states require preprocessing to standardize their scales or enhance their interpretability, the top-k output probability data does not undergo such preprocessing. This is because the top-k output probabilities are inherently on a consistent scale, being probabilities that naturally range from 0 to 1. Hence, they are already in a format conducive to model training and analysis, requiring no additional normalization or transformation.
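The preprocessing steps can be summarized in a short NumPy sketch. This is our illustration; the array shapes and the stacked layout of the inputs are assumptions, not a specification from the paper.

```python
import numpy as np

def preprocess(act_maps, ranks, topk_emb, eps=1e-7):
    """act_maps: (N, L, D) activation maps; ranks: (N, L) final-output
    ranks; topk_emb: (N, L, k, d) embeddings of the top-k tokens per layer."""
    # 1) Dataset-wide standardization of the activation maps.
    act = (act_maps - act_maps.mean()) / act_maps.std()

    # 2) Rank transform: r -> 1 / ((r - 1) + 1 + eps), mapping rank 1 to ~1
    #    and large ranks toward 0.
    r = 1.0 / ((ranks - 1) + 1 + eps)

    # 3) Cosine similarity of top-k token embeddings between adjacent layers.
    a, b = topk_emb[:, :-1], topk_emb[:, 1:]
    num = (a * b).sum(-1)
    den = np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1) + eps
    cos = num / den                      # shape (N, L-1, k)
    return act, r, cos
```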
§.§ LLM Factoscope Model Design After preparing the inner-states dataset, we develop the LLM Factoscope model, which is inspired by the principles of few-shot learning and Siamese networks. It is designed to effectively learn robust representations from limited data. This approach aims to distinguish between factual and non-factual content and demonstrates impressive generalization on similar but unseen data. Our model comprises four distinct sub-models, each dedicated to processing one of the key types of inner-state data: the activation map, the top-k output indices, the top-k output probabilities, and the final output rank. For the activation map, the top-k output indices, and the top-k output probabilities, we utilize convolutional neural networks (CNNs) with the ResNet18 architecture <cit.>. The choice of ResNet18, with its convolutional layers and residual connections, is particularly advantageous for efficiently capturing the relationships between and within the different layers of the LLM. These CNNs transform the inner states into embeddings 𝐄_activation, 𝐄_top-k index, and 𝐄_top-k prob. Each embedding captures a unique aspect of the LLM's processing dynamics. For the final output rank, a sequential data type, we use a gated recurrent unit (GRU) network <cit.>, reflecting the temporal evolution of the model's output preferences across layers. This network yields an embedding 𝐄_rank. The embeddings from these four sub-models are then integrated through a linear layer to form a comprehensive mixed representation, 𝐄_mixed. This combined embedding captures an integrated expression of the LLM's holistic factual understanding, representing both spatial and temporal insights. During training, our model utilizes the triplet margin loss <cit.>, a metric integral to embedding learning in few-shot learning scenarios. This loss function is designed to minimize the distance between instances of the same class while maximizing the distance between instances of different classes. For a given training instance x, we feed it to the combined model and obtain an embedding of its mixed representation, 𝐄_anchor. Alongside, we select a positive example 𝐱_pos from the same category as the anchor and a negative example 𝐱_neg from a different category, and obtain their respective mixed representations 𝐄_pos and 𝐄_neg. The triplet margin loss aims to ensure that the distance between the anchor and the positive instance, Dist(𝐄_anchor, 𝐄_pos), is smaller than the distance between the anchor and the negative instance, Dist(𝐄_anchor, 𝐄_neg), by at least a margin α. This loss function is formally defined as: L=max (Dist(𝐄_anchor, 𝐄_pos) - Dist(𝐄_anchor, 𝐄_neg) + α, 0), where Dist(·,·) is the chosen distance metric, typically the Euclidean distance, and α is a critical hyperparameter. By fine-tuning α, we can enhance the model's discriminative capability, ensuring that the similarity between the anchor and the positive instance exceeds that between the anchor and the negative instance by at least the margin α. The training process minimizes this loss, refining the model's ability to accurately differentiate between factual and non-factual content.
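A condensed PyTorch sketch of this design is given below. It is illustrative only: the four sub-networks are lightweight stand-ins for the ResNet18 backbones described above, and all names and shapes are our own assumptions.

```python
import torch
import torch.nn as nn

class Factoscope(nn.Module):
    """Sketch of the fusion model: four sub-models, concatenation, and a
    linear layer producing an L2-normalized mixed representation."""
    def __init__(self, emb=24):
        super().__init__()
        self.act_net = nn.Sequential(nn.Flatten(), nn.LazyLinear(emb))   # activation map
        self.idx_net = nn.Sequential(nn.Flatten(), nn.LazyLinear(emb))   # top-k indices
        self.prob_net = nn.Sequential(nn.Flatten(), nn.LazyLinear(emb))  # top-k probs
        self.rank_net = nn.GRU(input_size=1, hidden_size=emb, batch_first=True)
        self.fuse = nn.Linear(4 * emb, 64)

    def forward(self, act, idx, prob, rank):
        _, h = self.rank_net(rank.unsqueeze(-1))      # last GRU hidden state
        e = torch.cat([self.act_net(act), self.idx_net(idx),
                       self.prob_net(prob), h[-1]], dim=-1)
        e = torch.relu(self.fuse(e))
        return nn.functional.normalize(e, p=2, dim=-1)

loss_fn = nn.TripletMarginLoss(margin=1.0)  # margin alpha is a hyperparameter
# loss = loss_fn(model(*anchor), model(*positive), model(*negative))
```

At inference time, the same network embeds a test sample, which is then assigned the label of its nearest support-set embedding, as described next.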
In the testing phase, we establish a support set consisting of data samples and their corresponding targets, denoted as { S_1, …, S_n} and { T_sup_1, …, T_sup_n}, respectively. These samples were not used in the training of the LLM Factoscope model. They are crucial for the testing phase, as they provide a reference for comparing and classifying new, unseen test data. Each sample in the support set is processed by the LLM Factoscope model to generate mixed representations, denoted {𝐄_sup_1, …, 𝐄_sup_n}. The mixed representations are outputs of the LLM Factoscope model. The test data's mixed representation, 𝐄_test, is then compared against these support-set representations. The classification of the test data is determined by identifying the support-set embedding closest to 𝐄_test; the target of the test data is the target of this nearest support-set sample: T_test = T_sup_i^*, where i^* = arg min_i Dist(𝐄_test, 𝐄_sup_i). Here, the index i^* identifies the support-set sample closest to the test data, and T_sup_i^* is the target associated with this closest sample. This approach ensures accurate and reliable classification of the test data by leveraging the similarities within the representations of the support set. Details on the model's architectural parameters and training parameters are explored in Section <ref>, where their impact on model performance is thoroughly investigated. § EVALUATION §.§ Experimental Setup Dataset. We employ various factual datasets encompassing multiple domains such as art, sports, literature, geography, history, science, and economics. Each domain includes several relations, like the artist of a particular artwork and the founder of a company. The factual datasets comprise 247,246 data points, facilitating a comprehensive evaluation of the model's ability to discern factual information. We record the inner states of the LLM as it processes factual and non-factual statements, including the activation values, final output rank, top-k output indices, and top-k output probabilities. The label assigned to each data point indicates whether the corresponding model output is factual or non-factual. To ensure dataset balance, we randomly select an equal number of factual and non-factual data points for each factual relationship. Furthermore, the features of this dataset are preprocessed to ensure they are standardized and optimized for model learning. Models. Our experiments are conducted on several popular LLMs, each with a distinctive architecture and characteristics. These models include GPT2-XL <cit.>, LLaMA-2-7B <cit.>, LLaMA-2-13B, Vicuna-7B-v1.5 <cit.>, Vicuna-13B-v1.5, and StableLM-7B <cit.>. These models allow us to comprehensively evaluate the effectiveness of our factual detection methodology across various LLM architectures and configurations. The LLM Factoscope model comprises several sub-models, each tailored to handle a specific type of inner state: a ResNet18 model for the activation values, a GRU network for the final output ranks, and two additional ResNet18 models for the word-embedding distances and the top-k output probabilities. We set k to 10. The output of each sub-model is an embedding of dimension 24. These embeddings from the sub-models are concatenated, resulting in a combined embedding of dimension 96. This combined embedding is then fed into a fully connected layer, which reduces the dimensionality to 64, ensuring a compact yet informative representation. The final embedding undergoes ReLU activation and L2 normalization, providing a normalized feature vector for each input. During testing, the size of the support set is set to 100. Experimental Environment.
Our experiments were hosted on a server with 32 Intel Xeon Silver 4314 CPUs at 2.40 GHz, 386 GB of RAM, and four NVIDIA A100 Tensor Core GPUs, providing substantial computational capacity for large-scale computations. The entire suite of experiments was conducted on the Ubuntu 20.04 LTS operating system. §.§ Effectiveness In evaluating the performance of the LLM Factoscope, we considered various LLMs, including GPT2-XL-1.5B, Llama2-7B, Vicuna-7B, Stablelm-7B, Llama2-13B, and Vicuna-13B. To establish a comparative baseline, we use the activation values from specific layers, particularly those in the middle-to-late stages of the LLMs, as input features to train a fully connected neural network for factual detection, in line with our observations in Figure <ref>. For GPT2-XL-1.5B, the baseline model is based on the activation values of the 31st layer. For Llama2-7B and Vicuna-7B, the 23rd layer's activation values are used. For Stablelm-7B, the baseline model relies on the 12th layer, while Llama2-13B and Vicuna-13B utilize the activation values of their 32nd layers. As shown in Table <ref>, our LLM Factoscope consistently maintains high accuracy, ranging between 96.1% and 98.3%, across the different LLM architectures. In contrast, the accuracy of the baseline fluctuates between 78.5% and 88.8%. This variation suggests that, as LLMs increase in parameter size, the regions responsible for different types of factual knowledge might differ, or multiple layers could be involved in representing a single type of factual knowledge. Consequently, the baseline, which relies solely on activation values from a single layer, demonstrates unstable performance. Based on this analysis, we attribute the superior performance of the LLM Factoscope to its consideration of the various inner-state changes across layers. By integrating this multi-dimensional analysis of inner states within LLMs, the LLM Factoscope effectively discerns factual from non-factual outputs, offering a more robust and reliable approach to factual detection. This method's success not only highlights the significance of inner activations and activation values in understanding LLM outputs but also paves the way for future explorations into the intricate workings of LLMs, particularly in natural language processing applications. §.§ Generalization It is well established in neural network research that a model's effectiveness largely hinges on the similarity between the training and testing distributions <cit.>. Thus, our model's performance may vary across different distributions. We adopt a leave-one-out approach for our generalization assessment. Specifically, we remove one relation dataset, train the model on the remaining datasets, and then test it on the omitted dataset. We selected three relations for assessing generalization (Book-Author, Pantheon-Country, and Athlete-Country), as these relations form sizable datasets across all LLMs. Our empirical findings suggest that different LLMs exhibit varying generalization capabilities across different relations, as shown in Table <ref>. In the "Book-Author" relation, our method achieved a notable 97.7% accuracy with Llama2-7B and 69.0% with Stablelm-7B. This variation is likely due to each LLM's unique handling of different types of factual knowledge.
Our method outperforms the baseline in most cases, and its average performance is consistently higher than the baseline's, indicating superior generalization ability. We believe that the LLM Factoscope's use of various intermediate states beyond the activation values, specifically the final output rank, top-k output indices, and top-k output probabilities, significantly bolsters its generalization capabilities. Stablelm-7B exhibits the weakest generalization performance among all the LLMs tested. This aligns with its relatively low scores on LLM leaderboards <cit.>. We hypothesize that this could be attributed to its less effective learning of the factual versus non-factual content it was trained on. While our LLM Factoscope demonstrates a certain level of generalization, we recommend ensuring similarity between the distributions of the testing and training data when using this tool. For instance, for an LLM serving as a historical knowledge assistant, the LLM Factoscope's training should be aligned with relevant historical data to ensure its effectiveness. §.§ Interpretability We delve into the interpretability of the LLM Factoscope, aiming to analyze the contribution of its input features in discerning the factualness of LLM outputs. Specifically, we use Integrated Gradients <cit.> to analyze the contributions of the activation maps, final output ranks, top-k output indices, and top-k output probabilities. Integrated Gradients is chosen for its high faithfulness in interpretability assessments <cit.>. Our analysis reveals that the most influential features lie mainly in the middle-to-late layers of the LLMs, consistently observed across all four data types. To provide a clearer visualization of this pattern, we present a typical example in Figure <ref>. In the figure, red indicates a positive contribution and blue a negative contribution, with deeper colors representing higher average importance. Owing to the high dimensionality of the activation data, we compute and display the average importance of the features at each layer. The majority of positive contributions emerge after the 15th layer. This finding aligns with our observation that the model first filters semantically coherent candidate outputs in the earlier layers and then progressively focuses on candidates relevant to the given prompt task in the deeper layers.
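Attribution of this kind can be computed with Captum's implementation of Integrated Gradients. The sketch below is a hedged illustration: `model` is assumed to be a wrapper that accepts only the activation-map tensor and returns a scalar score per sample (with the other inputs held fixed), and the all-zero baseline is one common, but not mandatory, choice.

```python
import torch
from captum.attr import IntegratedGradients

def attribute_activation(model, act_map):
    """Attribute the model's score to the activation-map input and reduce
    to a per-layer importance profile (last dim assumed to be neurons)."""
    ig = IntegratedGradients(model)
    baseline = torch.zeros_like(act_map)          # all-zero reference input
    attr = ig.attribute(act_map, baselines=baseline, n_steps=50)
    return attr.abs().mean(dim=-1)                # average importance per layer
```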
§.§ Ablation Study Contribution of each sub-model. We evaluate the contribution of each sub-model by incrementally adding the sub-models to the factual detection model on GPT2-XL-1.5b. As depicted in Table <ref>, we notice a slight but consistent improvement in Acc with the addition of more sub-models, indicating that each sub-model brings a unique dimension to the model's capabilities and enhances its overall performance. We employ the leave-one-out training approach of Section <ref> to assess the sub-models' contribution to generalization. The results in Table <ref> demonstrate enhanced generalization as more sub-models are integrated. This improvement is particularly evident in the final model, which shows a significant increase in Acc across various datasets compared to the model with only one sub-model. For instance, in the "book-author" category, the accuracy improves from 66.5% to 71.2%, and in the "athlete-country" category, it jumps from 91.6% to 97.9%. One possible explanation for this enhanced generalization is the varied dependency of the sub-models on the relational data type. The first sub-model (activation map) is heavily reliant on the type of relational data, whereas the subsequent sub-models (final output rank, top-k output index, and top-k output probability) are more independent of the relational data type and therefore exhibit stronger generalization capabilities. The design with multiple sub-models captures both the features dependent on and independent of the relational data type, achieving high effectiveness and improved generalization. Effects of different top-k. The value of k affects the top-k output indices and top-k output probabilities. In the previous experiments, k was set to 10 unless otherwise stated. We now evaluate the effect of different values of k on the performance of the factual detection model, setting k to 2, 4, 6, 8, and 10. The results are shown in Table <ref>. The lowest accuracy is 90.4% at k=4, and the highest is 95.4% at k=10. The difference between the two is only 5%, which indicates that k does not have a large effect on model performance and that there is no purely positive correlation between k and the factual detection model's accuracy. Effects of different support set sizes. We also try different support set sizes, from 50 to 250, and observe their impact on the performance of the factual detection model. This evaluation was conducted on Llama2-7b. The results, presented in Table <ref>, demonstrate that the support set size does not significantly impact the model's performance on most metrics. However, a notable trend is observed when the support set size increases to 200 or 250: a slight increase (about 2%) in Acc and a corresponding rise in the false negative rate (FNR). The rise in FNR could be attributed to the richer variety and broader distribution of non-factual words compared to factual ones. When the support set is expanded beyond a certain point, the coverage of non-factual tokens increases disproportionately more than that of factual tokens. This imbalance possibly makes the model more prone to misclassifying data into the non-factual category. Effects of different sub-model architectures. We vary the sub-model architectures and assess the performance of the factual detection model, using fully connected layers in place of ResNet18 and an RNN in place of the GRU network. As shown in Table <ref>, the original architecture achieves an impressive Acc of 95.4%, demonstrating the effectiveness of the original design in capturing and processing factual content accurately. When we replace parts of the architecture with fully connected layers (act-fc, prob-fc) and an RNN (rank-rnn), we notice a slight decline in performance. Specifically, act-fc shows a decrease in Acc to 94.5%, while rank-rnn drops the Acc to 91.5%. These changes do not drastically alter the model's ability to detect factual content. In contrast, the emb-fc architecture, where we replace the ResNet18 processing the top-k output indices with fully connected layers, results in a significant performance dip, reducing all metrics substantially, with Acc falling to 73.6%. Such a drastic drop highlights the pivotal role of ResNet18 in effectively capturing the LLM's top-k output indices. These results underscore the critical importance of selecting appropriate sub-model architectures for factual detection models. While the model demonstrates resilience to certain architectural changes, some alterations can substantially impact its performance.
§ CONCLUSION We introduced the LLM factoscope, a pioneering approach that utilizes the inner states of Large Language Models for factual detection. Through extensive experiments across various LLM architectures, the LLM factoscope consistently demonstrated high factual detection accuracy, surpassing 96% in most cases. This robust performance underscores the model's efficacy in discerning factual from non-factual content. Our research not only provides a novel method for factual verification within LLMs but also opens new avenues for future explorations into the untapped potential of LLMs' inner states. By paving the way for enhanced model understanding and reliability, the LLM factoscope sets a foundation for more transparent, accountable, and trustworthy use of LLMs in critical applications.
http://arxiv.org/abs/2312.16374v2
{ "authors": [ "Jinwen He", "Yujia Gong", "Kai Chen", "Zijin Lin", "Chengan Wei", "Yue Zhao" ], "categories": [ "cs.CL", "cs.AI" ], "primary_category": "cs.CL", "published": "20231227014447", "title": "LLM Factoscope: Uncovering LLMs' Factual Discernment through Inner States Analysis" }
Visual Spatial Attention and Proprioceptive Data-Driven Reinforcement Learning for Robust Peg-in-Hole Task Under Variable Conditions André Yuji Yasutomi^1,2, Hideyuki Ichiwara^1,2, Hiroshi Ito^1,2, Hiroki Mori^3 and Tetsuya Ogata^2,4 Manuscript received: October 1, 2022; Revised December 13, 2022; Accepted January 29, 2023. This paper was recommended for publication by Editor Hyungpil Moon upon evaluation of the Associate Editor and Reviewers' comments. This work was supported by Hitachi, Ltd. ^1André Yuji Yasutomi, Hideyuki Ichiwara and Hiroshi Ito are with the R&D Group, Hitachi, Ltd, Japan andre.yasutomi.ss@hitachi.com ^2André Yuji Yasutomi, Hideyuki Ichiwara, Hiroshi Ito and Tetsuya Ogata are with the Graduate School of Fundamental Science and Engineering, Waseda University, Japan ogata@waseda.jp ^3Hiroki Mori is with the Future Robotics Organization, Waseda University, Japan mori@idr.ias.sci.waseda.ac.jp ^4Tetsuya Ogata is with the Waseda Research Institute for Science and Engineering (WISE), Waseda University, Japan Digital Object Identifier (DOI): https://doi.org/10.1109/LRA.2023.3243526 January 14, 2024

Graph convolution networks (GCNs) are extensively utilized in various graph tasks to mine knowledge from spatial data. Our study marks the pioneering attempt to quantitatively investigate GCN robustness over omnipresent heterophilic graphs for node classification. We uncover that the predominant vulnerability is caused by the structural out-of-distribution (OOD) issue. This finding motivates us to present a novel method that aims to harden GCNs by automatically learning Latent Homophilic Structures over heterophilic graphs. We term this methodology LHS. To elaborate, our initial step involves learning a latent structure by employing a novel self-expressive technique based on multi-node interactions. Subsequently, the structure is refined using a pairwisely constrained dual-view contrastive learning approach. We iteratively perform the above procedure, enabling a GCN model to aggregate information in a homophilic way on heterophilic graphs. Armed with such an adaptable structure, we can properly mitigate the structural OOD threats over heterophilic graphs. Experiments on various benchmarks show the effectiveness of the proposed LHS approach for robust GCNs.
§ INTRODUCTION Graph-structured spatial data, such as social networks <cit.> and molecular graphs, is ubiquitous in numerous real-world applications <cit.>. Graph convolution networks (GCNs) <cit.>, following a neighborhood aggregation scheme, are well-suited to handle these relational and non-Euclidean graph structures, and have been widely applied in various graph tasks, including node classification and recommender systems. Recently, there has been a surge in GCN approaches for challenging heterophilic graphs <cit.>, where most neighboring nodes have different labels or features. These methods can be divided into two categories: 1) multi-hop-based approaches <cit.>; 2) ranking-based approaches <cit.>. The former group learns node representations based on multi-hop aggregations, while the latter performs selective node aggregations by a sorting mechanism. These GCN methods continue to advance the state-of-the-art performance for node classification and have enabled various downstream applications <cit.>.

Heterophilic graphs (graphs with heterophily) are ubiquitous in real-world scenarios: node pairs of different classes tend to be connected <cit.>, as shown in the pink circles of Fig. <ref>. For example, different amino acid types (nodes) are connected in protein structures, and people prefer to date opposite-gender people in dating networks. Due to the heterophilic structure, the local smoothing operation of the original GCN learns similar representations for nodes of different classes, leading to poor node classification accuracy.

To mitigate the aforementioned accuracy degradation, existing GCN approaches over heterophilic graphs have made endeavors in two directions: 1) high-order neighbor mixing and 2) potential neighbor search. The former mainly focuses on aggregating higher-hop neighbors, and includes the 2-hop and multi-hop neighbor aggregation methods represented by MixHop <cit.>, H2GCN <cit.>, UGCN <cit.>, and TDGNN <cit.>. The latter aims to obtain a set of potentially same-class nodes and includes two categories: one first samples a node neighbor sequence on the structure (e.g., by random walk) and then ranks the sampled sequence by attention coefficients or correlations.
The top N nodes in the ranked sequence are then selected as the aggregation node set; representative methods are NLGNN <cit.>, Node2Seq <cit.>, and GPNN <cit.>. The other category constructs a new structure via KNN networks, mainly represented by UGCN and SimP-GNN <cit.>. Despite the significant success of the current GCN methods on heterophilic graphs, these approaches are extremely vulnerable to malicious threats that distort the structure of the target graph during testing. We conduct experiments to attack the state-of-the-art GPNN <cit.> method, trained on the popular Squirrel <cit.> benchmark for heterophilic graphs, using samples created by various attacks. Fig. <ref> demonstrates that the accuracy of node classification can be greatly reduced under three different types of destructive attacks, including a well-known poisoning attack <cit.> and two attacks adapted from evasion attacks <cit.>. Specifically, as shown in Fig. <ref> (a), the poisoning attack produces adversarial structural perturbations to the edges of the graph, fooling GPNN into making incorrect predictions. The two proposed evasion-based attacks are referred to as “OOD evasion attacks” and “injected evasion attacks”, respectively. Fig. <ref> (b) and Fig. <ref> (c) demonstrate how the sample sets for these two attacks are created. The first generates a graph with a node distribution that is vastly different from that of the target testing set, while the second manipulates the target graph by injecting more heterophilic edges. Under these three attacks, Fig. <ref> (d) shows that the classification accuracy of GPNN is sharply decreased by 18.90%, 21.42%, and 29.30%, respectively. To analyze why GCN methods are fragile on heterophilic graphs, we further depict the ℋ distributions <cit.> of the crafted data from the aforementioned three attacks, as well as the distributions of the original train and test sets of Squirrel, in Fig. <ref> (a). Here ℋ represents node-level heterophily, i.e., the proportion of a node's neighbors that have a different class[A more formal definition is given in the Preliminaries Section; higher ℋ values indicate a node with stronger heterophily.]. Fig. <ref> (a) demonstrates that the distributions of the three attack samples are all located to the right of the training set, with the most destructive sample for the GPNN method being the furthest to the right. This observation led us to investigate the correlation between the “right-shift” of the ℋ distribution relative to the train set and the vulnerability of GCN approaches. This correlation is visualized in Fig. <ref> (b), which shows that the scale of “right-shift” is strongly proportional to the degradation of node classification performance. We refer to this phenomenon as “structural out-of-distribution (OOD)” in GCN methods for graphs of spatial data. To investigate the underlying cause of the aforementioned structural OOD, we attacked another GPNN model trained on the homophilic graph Cora <cit.> and depict the resulting ℋ distributions in Fig. <ref> (c). Interestingly, the shifts of the three attacks relative to the training set of Cora are very small. This minor “right-shift” enables the GPNN model trained on Cora to be more robust (see further details in Appendix C).
We attribute this to the strong homophily present in the Cora dataset and believe that more homophily will result in less “right-shift” under attacks, even for heterophilic graphs, and hence alleviate the structural OOD. In light of the above discussion, a critical question arises: “How can a GCN model automatically learn an appropriate homophilic structure over heterophilic graphs to reduce the scale of ‘right-shift’ in ℋ distributions?” Answering it could help to make the model more resistant to malicious attacks on heterophilic graphs. Achieving this goal is challenging. Despite the success of many structure learning-related methods <cit.>, they tend to strengthen the heterophily or focus only on the local relations between two nodes rather than considering the global connections. These methods still suffer from vulnerability issues under attacks (as seen in Figure <ref> and Table <ref>), and they can hardly address this challenge. We address the above challenging question with a novel method called LHS. The key components of the proposed LHS are: 1) a self-expressive generator that automatically induces a latent homophilic structure over heterophilic graphs via multi-node interactions, and 2) a dual-view contrastive learner that refines the latent structure in a self-supervised manner. LHS iteratively refines this latent structure during the learning process, enabling the model to aggregate information in a homophilic way on heterophilic graphs, thereby reducing the “right-shift” and increasing robustness[We will release our code to the research community]. Experiments on five benchmarks of heterophilic graphs show the superiority of our method. We also verify the effectiveness of our LHS on three public homophilic graphs. Additionally, the induced structure can be applied to other graph tasks such as clustering. Our contributions are as follows: * We quantitatively analyze the robustness of GCN methods over omnipresent heterophilic graphs for node classification, and reveal that the “right-shift” of ℋ distributions is highly proportional to the model's vulnerability, i.e., the structural OOD. To the best of our knowledge, this is the first study in this field. * We present LHS, a novel method that strengthens GCNs against various attacks by learning latent homophilic structures on heterophilic graphs. * We conduct extensive experiments on various spatial datasets to show the effectiveness of the proposed LHS in mitigating the structural OOD issue.
§ RELATED WORK §.§ Graph Convolution Networks There is a line of early studies on graph convolution networks (GCNs) <cit.>. Recent GCN approaches over heterophilic graphs can be grouped into multi-hop-based ones <cit.>, ranking-based ones <cit.>, and ones using GCN architecture refinement <cit.>. These methods have achieved remarkable success in graph node classification. However, robustness is yet to be explicitly considered on challenging heterophilic graphs. §.§ Robust Graph Convolution Networks Recently, we have witnessed a surge of work on the robustness of GCNs over heterophilic graphs. These methods can be categorized into structure-learning-based ones <cit.> and ones based on adversarial training <cit.>. The most related to our work are ProGNN <cit.>, which explores the low-rank and sparsity properties of the graph structure, and SimP-GCN <cit.>, which relies on a similarity preservation scheme for structure learning. Our work differs from the above methods in two aspects: 1) We focus on the structural OOD issue of GCN approaches over heterophilic graphs. To the best of our knowledge, this problem is largely ignored in previous works.
2) We iteratively refine the latent structure of heterophilic graphs by a novel self-expressive method and a dual-view contrastive learning scheme, enabling a GCN model to effectively aggregate information in a homophilic way on heterophilic graphs. § PRELIMINARIES We denote a graph as G=(V,E,𝐗), where V is the set of N nodes, E is the set of edges between nodes, 𝐗 is the node feature matrix, and (V,E) forms the original network structure 𝐀∈ℝ^N × N. We aim to generate a latent homophilic structure for robust GCN node classification. For convenience, we give the following edge definition. Definition (Positive/Negative Edge): a positive edge is a link whose two endpoints have the same type, while a negative edge is a link that connects two nodes with different types. Node-level Heterophily: we use ℋ to represent node-level heterophily, i.e., the proportion of a node's neighbors that have a different class. Following <cit.>, we give a formal definition; it is a fine-grained metric for measuring the edge heterophily in a graph. Remark (Node-level Heterophily ℋ): ℋ(v_i)=|{(v_i,v_j)∈ℰ(v_i) | y(v_i)≠ y(v_j)}|/|ℰ(v_i)|, ∀ v_i, v_j∈ V, where ℰ(v_i) is the edge set of v_i, y(v_i) is the class of node v_i, and |·| denotes the number of edges. Nodes with strong heterophily have larger ℋ (closer to 1), whereas nodes with strong homophily have smaller ℋ (closer to 0). This metric also provides an edge-distribution sampling set for quantitative analysis over heterophilic graphs. Definition (Peripheral Node Effect): for both homophilic and heterophilic graphs, the peripheral node effect is the deterioration of node classification performance caused by peripheral nodes (nodes whose neighborhoods are dominated by other classes) when the structural inductive bias is exploited by GCNs. By unifying the two types of graphs, the peripheral node effect essentially explains why heterophilic structures hurt node classification performance (peripheral nodes also exist in homophilic graphs), and it turns the original heterophily metric, an inherent graph property, into a tractable problem. To better understand this effect, we introduce the graph Laplacian regularization below and relate it to GCN structure learning, which in turn inspires our heterophilic structure learning.
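To make the ℋ statistic concrete, the following minimal sketch computes node-level heterophily for every node given an edge list and integer labels; the helper name and interface are our own illustrative choices.

```python
import numpy as np

def node_heterophily(edges, labels):
    """H(v) = fraction of v's neighbors whose label differs from v's.

    edges: iterable of undirected (i, j) pairs; labels: array of class ids.
    Returns one value per node (NaN for isolated nodes).
    """
    n = len(labels)
    diff = np.zeros(n)
    deg = np.zeros(n)
    for i, j in edges:
        mismatch = float(labels[i] != labels[j])
        for u in (i, j):
            deg[u] += 1
            diff[u] += mismatch
    with np.errstate(invalid="ignore"):
        return diff / deg

# Histogramming these values for two node sets (e.g., train vs. attacked
# test) visualizes the “right-shift” discussed in the Introduction.
```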
Given a graph signal 𝐱 defined on a graph G with normalized Laplacian matrix L=I-D̃^-1/2ÃD̃^-1/2, the graph Laplacian regularization term is 𝐱^T L𝐱=1/2∑_i,jÃ_ij(x_i/√(1+d_i)-x_j/√(1+d_j))^2, where d_i and d_j denote the degrees of nodes v_i and v_j, respectively. Minimizing this term encourages the graph signals of edge-connected nodes to be more similar, which can be seen as smoothing the signals (features) over locally connected structures. Naturally, when there are peripheral nodes in the graph, this term makes a node similar to its non-homophilic peripheral neighbors, thus hurting node classification performance. We further propose Theorem 1, which bridges the gap between GCN structure learning and the Laplacian regularization. Theorem 1: from the perspective of graph signal optimization, the Laplacian regularization of Eq. <ref> is equivalent to the GCN convolutional operator, which is responsible for GCN structure learning. Proof: the signal optimization with a Laplacian regularization term can be formally posed as min_𝐙 g(𝐙)=‖𝐙-𝐗‖_F^2+λ𝐙^T L𝐙, where 𝐙 is the learned node embedding matrix, which can be used for downstream tasks such as node classification. The gradient of g at the initialization 𝐳_0=𝐱, for ∀𝐱∈𝐗, is ∇ g(𝐱)=2(𝐱-𝐱)+2λ L𝐱=2λ L𝐱. One step of gradient descent at 𝐱 with learning rate 1 then gives 𝐱-∇ g(𝐱)=𝐱-2λ L𝐱=(I-2λ L)𝐱=(D̃^-1/2ÃD̃^-1/2+L-2λ L)𝐱. Setting λ to 1/2, we finally arrive at 𝐱-∇ g(𝐱)=D̃^-1/2ÃD̃^-1/2𝐱, which is exactly the GCN propagation rule. Corollary 1: the peripheral node effect can be regarded as a 'side effect' of GCN structure learning on the original highly heterophilic structure. Corollary 1 follows directly from the Laplacian regularization term and Theorem 1.
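The equivalence in Theorem 1 can be checked numerically in a few lines. The random graph below and λ = 1/2 follow the proof; all other choices (sizes, seed) are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 3
A = rng.integers(0, 2, (n, n)); A = np.triu(A, 1); A = A + A.T  # random graph
X = rng.normal(size=(n, d))                                     # node signals

A_t = A + np.eye(n)                          # add self-loops: A-tilde
D_inv_sqrt = np.diag(1 / np.sqrt(A_t.sum(1)))
P = D_inv_sqrt @ A_t @ D_inv_sqrt            # GCN propagation operator
L = np.eye(n) - P                            # normalized Laplacian

lam = 0.5
one_step = X - 2 * lam * (L @ X)             # one gradient step on g(Z) at Z=X
assert np.allclose(one_step, P @ X)          # equals GCN smoothing of X
```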
We now formally propose the class-anchored and node-centered ℋ̅_EP as the structure distribution sampling set, which depicts the structural aggregation information useful for the node classification task; it can also be regarded as the y_k-class-anchored local homophily ratio. Homophily Ratio (Structure Distribution Sampling Set) ℋ̅_EP(v_i): ℋ̅_EP(v_i) = 1-ℋ_EP(v_i), where y(v_i)=y_k, v_i ∈ V; if v_i ∈𝒱_P then ℋ_EP(v_i)=|ℰ_P(v_i)|/|ℰ(v_i)|, and if v_i ∉𝒱_P then ℋ_EP(v_i)=0, where 𝒱_P denotes the set of peripheral nodes and ℰ_P(v_i) the negative edges incident to v_i. Interestingly, we find that the peripheral heterophily index and the original heterophily index describe the heterophily of a network in a complementary way: analogous to the relation between recall and precision in machine learning, peripheral heterophily focuses on the local properties of the graph, while the original heterophily focuses on the global properties. Especially for graphs with high homophily, if one wants to examine the heterophilic part, e.g., inter-class interaction strength and frequency, the peripheral heterophily index is the more natural and suitable choice. Like those machine-learning metrics, the two heterophily indexes also have their own defects; therefore, analogous to the F1 score, we propose the fair heterophily index ℋ_F to model the overall heterophily of a graph more objectively and comprehensively. Fair Heterophily Index (ℋ_F): ℋ_F = w ℋ_EP + (1-w)(1-ℋ_edge), where w is a weighting coefficient and ℋ_edge denotes the graph-level edge heterophily. OOD Formulation: we rigorously formulate the ego-graph edge distribution by utilizing the proposed node-level heterophily, and this formulation further enables multi-layer edge distribution analyses. The “right-shift” phenomenon found on heterophilic graphs also motivates the proposed latent homophilic structure refinement. A theoretical analysis from a spectral-domain view is given in Appendix 1 [Appendices are available in the preprint version.] to further elaborate the rationale of the proposed LHS. Edge Distribution Formulation: given a random node v_i ∈ V, we define v_i's k-hop neighbors as N_v_i(k) (k an arbitrary positive integer); the nodes in N_v_i(k) form an ego-graph substructure A_v_i(k), a local adjacency matrix A_v_i(k)={a_v_i u| u ∈ N_v_i(k)}. In this way, we can study the distribution of the k-hop substructure via p(ℋ| A_v_i(k))=p(ℋ| A_v_i(1)A_v_i(2)⋯ A_v_i(k)). It is worth noting that the ego-graph can be seen as a Markov blanket for the centered node v_i, meaning that the conditional distribution p(ℋ| A_v_i(k)) can be decomposed as a product of independent and identical marginal distributions p(ℋ| A_v_i(j)), j ≤ k. We provide more empirical observations of the “right-shift” phenomenon on heterophilic graphs in Appendix 3. Cross-Domain Structural OOD: a cross-domain distribution indicates that two structural distributions come from different graph domains, the most typical situation being homophilic versus heterophilic graphs. As shown in Fig. 2, Squirrel, a highly heterophilic graph, exhibits a severe left shift of ℋ̅_EP compared with Cora. This structural distribution OOD makes existing GCN models (designed for homophilic graphs) perform poorly on heterophilic graphs. Another kind of cross-domain OOD is the structural OOD attack, which is shown in the Experiments Section. Inner-Domain Structural OOD: the inner-domain distribution refers to the distributions of different substructures of the same graph, the most typical situation being the distributions of the training set and the test set, as shown in Fig. <ref> and Fig. <ref>. Heterophilic graphs, especially in the multi-layer search situation, easily suffer from inner-domain structural OOD problems. In contrast, homophilic graphs have fewer inner-domain structural OOD risks. Compared with homophilic graphs, the distribution sampling set of heterophilic graphs is highly irregular, which is the essence of (class-anchored) structural OOD: heterophilic graphs connect a large number of different-class node pairs while same-class nodes are scattered, leading to smaller ℋ̅_EP (left shift), whereas the ℋ̅_EP of homophilic graphs tends to be near 1. Unbalanced inner-domain sampling (e.g., between training and test sets) and the subsequent multi-layer aggregation or search of GCNs further exacerbate these structural OOD risks.
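As a small illustration of how such distribution shifts can be quantified, the sketch below compares two empirical ℋ distributions with a one-dimensional Wasserstein distance and a signed mean difference; the `node_heterophily` helper from the Preliminaries sketch and the quantile-based distance are our own illustrative choices, not the paper's formal procedure.

```python
import numpy as np

def h_shift(h_train, h_test, grid=200):
    """Compare two empirical ℋ distributions.

    Returns (wasserstein_1, mean_shift); mean_shift > 0 indicates a
    “right-shift” of the test/attack set relative to the training set.
    """
    qs = np.linspace(0, 1, grid)
    w1 = np.abs(np.quantile(h_train, qs) - np.quantile(h_test, qs)).mean()
    return w1, float(np.mean(h_test) - np.mean(h_train))

# h_tr = node_heterophily(train_edges, labels)[train_nodes]
# h_te = node_heterophily(attacked_edges, labels)[test_nodes]
# w1, shift = h_shift(h_tr[~np.isnan(h_tr)], h_te[~np.isnan(h_te)])
```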
§ METHODOLOGY §.§ Overview In this section, we present the proposed LHS. Our goal is to learn an appropriate latent homophilic structure from heterophilic graphs, so as to reduce the scale of “right-shift” in ℋ distributions. Inspired by the analysis in the Introduction Section that more homophily in a graph reduces the “right-shift”, our latent structure encourages positive edge connections by increasing the edge weights between pairs of same-class nodes, and suppresses negative edge connections by reducing the edge weights between nodes of different classes. Fig. <ref> shows the architecture of LHS. §.§ Structure Inducer The proposed structure inducer involves a self-expressive generator and a dual-view contrastive learner. Self-expressive techniques have been successfully applied in computer vision, e.g., for object detection and segmentation <cit.>, but have not yet been applied to graph structure learning. §.§.§ Self-Expressive Generator. Our proposed self-expressive generator produces a latent homophilic structure over heterophilic graphs in three steps. Step 1: Capturing multi-node information. Given the node features 𝐗, we capture multi-node feature information by expressing each node feature as a linear (or affine) combination of the other node features. Unlike existing structure learning methods based on pair-wise similarity matrices <cit.>, our inducer can generate a fine-grained latent structure S^* ∈ℝ^N × N by discovering global information in a low-dimensional subspace. Specifically, for ∀ v_i ∈ V, we express 𝐱_i as a linear sum of the other node features: 𝐱_i=∑_v_j ∈ V, v_j ≠ v_i q_ij𝐱_j, where q_ij is the (i,j)-th element of a coefficient matrix Q. Step 2: Optimizing the generator loss. We use the coefficient matrix Q to generate the latent structure. Under the assumption of subspace independence, Q can be obtained by minimizing a suitable matrix norm; we use the squared Frobenius norm in our implementation. The optimization problem can be posed as min_Q‖ Q‖_F s.t. 𝐗=Q𝐗, diag(Q)=0, where ‖ Q‖_F is the Frobenius norm <cit.> of Q and diag(Q) denotes its diagonal entries. This optimization yields a block-diagonal matrix Q for generating the latent structure S^*: each block of Q contains nodes that belong to the same class (subspace), thus mitigating the “right-shift” phenomenon. Since an exact reconstruction of 𝐗 may be impractical, we relax the hard constraint 𝐗=Q𝐗 with a soft penalty: min_Qℒ_SE=‖𝐗-Q𝐗‖_F^2+λ_1‖ Q‖_F^2, s.t. diag(Q)=0, where λ_1 is a weight hyperparameter. Step 3: Generating the latent homophilic structure. A latent structure could be constructed trivially as Q+Q^T, but such a structure still contains noise and outliers. Therefore, we rely on Algorithm <ref> to generate S^*. Specifically, the SVD decomposition in Algorithm <ref> filters noisy information during structure generation, and in each iteration we refine the latent structure S^*. We employ scalable randomized SVD <cit.> to improve computational efficiency on large-scale graphs. Details are available in Appendix 2.1.
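The relaxed problem has a simple ridge-regression structure; the sketch below solves it in closed form and symmetrizes the coefficients. The zero-diagonal constraint is imposed post hoc for brevity, and the thresholding stands in for the SVD-based filtering of Algorithm <ref>; both are simplifying assumptions, not the paper's exact procedure.

```python
import numpy as np

def self_expressive_structure(X, lam1=0.1, sigma=0.5):
    """Minimize ||X - QX||_F^2 + lam1 ||Q||_F^2 over Q (diagonal zeroed).

    Stationarity gives Q (X X^T + lam1 I) = X X^T, i.e. a closed form.
    Returns a symmetric latent structure S* with entries in [0, 1].
    """
    n = X.shape[0]
    G = X @ X.T                                  # (N, N) Gram matrix
    Q = G @ np.linalg.inv(G + lam1 * np.eye(n))  # ridge solution
    np.fill_diagonal(Q, 0.0)                     # enforce diag(Q) = 0
    S = np.abs(Q) + np.abs(Q.T)                  # symmetrize: |Q| + |Q^T|
    S = S / (S.max() + 1e-12)                    # scale to [0, 1]
    return np.where(S >= sigma, S, 0.0)          # sparsify (noise filtering)
```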
§.§.§ Dual-view Contrastive Learner. So far we have obtained the latent structure S^* based on the node features. To refine this structure, we further explore the enriched structural information of the graph and propose a novel dual-view contrastive learner, which proceeds in four steps. Step 1: Generating the dual views of the latent structure. We denote the graph as G=(S^*, 𝐗), where S^* is the learnable latent homophilic structure. Based on G, we generate two graphs G_1 and G_2 via a corruption function <cit.> to refine the structure in a self-supervised manner. Specifically, the corruption function randomly removes a small portion of edges from S^* and randomly masks a fraction of dimensions of the node features 𝐗 with zeros. Step 2: Aggregating information on the latent structure. The generated latent structure S^* is a probability matrix depicting whether a node pair belongs to the same class. For efficient aggregation on S^*, we devise a truncated-threshold GCN to control the sparsity of the structure: we introduce a threshold σ to decide whether there exists a soft connection (with continuous values) between two nodes, forming a new structure S^*_σ={s^*_ij| s^*_ij∈ S^*, s^*_ij≥σ}. This is quite different from previous hard-coding operations <cit.> that only take the values 0 or 1, and S^*_σ can be flexibly applied to various benchmarks. We employ the truncated-threshold GCN on three graphs, namely G, G_1, and G_2. On graph G it generates the representations Z=GCN(𝐗, S^*_σ)=Ŝ^* ReLU(Ŝ^* 𝐗 W^(0)) W^(1), where ReLU is an activation function, S̃^*=S^*_σ+I with I ∈ℝ^|V| ×|V| the identity matrix, D̃ is the degree diagonal matrix with D̃_ii=∑_j ∈ VS̃^*_ij, ∀ i ∈ V, Ŝ^*=D̃^-1/2S̃^* D̃^-1/2, and W^(0) and W^(1) are trainable weight matrices. Z_1 and Z_2 denote the node embedding matrices of the two views G_1 and G_2, generated by the same GCN encoder. Step 3: Sampling the contrastive examples. For a node v_i ∈ V, denote the corresponding nodes in G_1 and G_2 as G_1(v_i) and G_2(v_i), respectively. The node-pair sampling rules for contrastive learning are: a) a positive example is a node pair formed by the same node in the two graph views, i.e., ∀ i ∈ V, the pair (G_1(i), G_2(i)); b) a negative example is a node pair formed by different nodes in the same or different graph views, i.e., ∀ i ∈ V and j ∈ V_-i={j ∈ V | j ≠ i}, both (G_1(i), G_1(j)) and (G_1(i), G_2(j)) are negative examples.
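A minimal PyTorch sketch of the truncated-threshold aggregation described above; the module and argument names are our own, and the two-layer form mirrors the equation for Z.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TruncatedThresholdGCN(nn.Module):
    """Two-layer GCN that aggregates over a soft, thresholded structure S*."""
    def __init__(self, in_dim, hid_dim, out_dim, sigma=0.9):
        super().__init__()
        self.sigma = sigma
        self.w0 = nn.Linear(in_dim, hid_dim, bias=False)
        self.w1 = nn.Linear(hid_dim, out_dim, bias=False)

    def forward(self, X, S):
        S = torch.where(S >= self.sigma, S, torch.zeros_like(S))  # truncate
        S = S + torch.eye(S.size(0), device=S.device)             # self-loops
        d_inv_sqrt = S.sum(1).clamp(min=1e-12).pow(-0.5)
        S_hat = d_inv_sqrt[:, None] * S * d_inv_sqrt[None, :]     # D^-1/2 S D^-1/2
        return S_hat @ self.w1(F.relu(S_hat @ self.w0(X)))        # Z
```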
Step 4: Optimizing the contrastive loss. In addition to the above dual-view optimization, we also propose a novel pairwise constraint to optimize the original graph view, which further improves the quality of the learned homophilic structure. Specifically, we sample labeled node pairs from the training set: same-class node pairs are positive samples, denoted (u,v), while different-class node pairs are negative samples, denoted (u,v_n), where u, v, and v_n belong to the training node set and y(u)=y(v), y(u)≠ y(v_n), with y(·) the node label. We formally propose the loss function as ℒ_refine=∑_i ∈ V[-cos(z_1i, z_2i)/τ +log(∑_j ∈ V_-i e^cos(z_1i, z_1j)/τ+e^cos(z_1i, z_2j)/τ)] -λ_2 [log(σ(z_u^⊤z_v))+ log(σ(-z_u^⊤z_v_n))], where z_1i and z_2i denote the embeddings of node i in Z_1 and Z_2, respectively, z_u denotes the embedding of node u in Z, cos(·) is the cosine similarity between two embeddings, τ is a temperature parameter, and λ_2 is a weight hyperparameter of the pairwise term. The first term of Eq. <ref> encourages consistent information between positive samples, the second term penalizes inconsistent information between the dual views, and the last term ensures that same-class nodes obtain more similar representations; together, they maximize the agreement between the embeddings of the i-th node in the two views while respecting the class constraints. §.§.§ Structure Refinement. The node embedding matrix 𝐙 generated by Eq. <ref> incorporates the refined structure and node features. Finally, we feed 𝐙 into the structure inducer again to iteratively refine the structure S^*. Equipped with both the original graph A and the refined structure, we use a structure bootstrapping mechanism S^*←ζ A+(1-ζ)S^* to update S^* with a slow-moving average of A, where ζ is a hyperparameter balancing the information between A and S^*: input graphs with high heterophily call for smaller ζ, while those with high homophily allow larger ζ. By doing so, we can reduce the scale of “right-shift” over heterophilic graphs and thus potentially mitigate the structural OOD issue under the malicious attacks discussed in Fig. <ref> of the Introduction Section. §.§ Graph Encoder Our graph encoder consists of a GCN encoder and a GCN decoder: the former encodes the masked features, while the latter generates the reconstructed features 𝐗̂. We feed the masked node features 𝐗̃ and S^* to the graph encoder: 𝐱̃_i=𝐱_[M] if v_i∈ V_[M] and 𝐱̃_i=𝐱_i otherwise; 𝐇= GCN(𝐗̃, S^*); then we re-mask the encoded representations, 𝐡̃_i=𝐡_[M] if v_i∈ V_[M] and 𝐡̃_i=𝐡_i otherwise; finally 𝐗̂= GCN(𝐇̃, S^*), where V_[M] is the set of masked nodes and 𝐱_[M], 𝐡_[M] are mask tokens. We then use a scaled cosine error to optimize the encoder: ℒ_Re=1/|𝒱|∑_v_i∈𝒱(1-𝐱_i^T𝐱̂_i/(‖𝐱_i‖·‖𝐱̂_i‖))^γ, γ≥ 1, where 𝐱_i and 𝐱̂_i are the feature and reconstructed feature of node i, and γ is a scale factor.
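Both objectives above translate directly into code. The following hedged sketch implements the ℒ_refine of Eq. <ref> and the scaled cosine error; all tensor names and the pair-sampling interface are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def refine_loss(z1, z2, z, pos_pairs, neg_pairs, tau=0.5, lam2=1.0):
    """Dual-view contrastive loss with the pairwise class constraint."""
    z1n, z2n = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim11 = z1n @ z1n.t() / tau                     # cos(z1i, z1j) / tau
    sim12 = z1n @ z2n.t() / tau                     # cos(z1i, z2j) / tau
    n = z1.size(0)
    pos = sim12.diagonal()                          # same node, two views
    mask = ~torch.eye(n, dtype=torch.bool, device=z1.device)
    neg = torch.logsumexp(
        torch.cat([sim11[mask].view(n, -1),         # negatives, same view
                   sim12[mask].view(n, -1)], dim=1), dim=1)
    contrastive = (-pos + neg).sum()
    u, v = pos_pairs                                # same-class training pairs
    a, b = neg_pairs                                # different-class pairs
    pairwise = (torch.log(torch.sigmoid((z[u] * z[v]).sum(-1))).sum()
                + torch.log(torch.sigmoid(-(z[a] * z[b]).sum(-1))).sum())
    return contrastive - lam2 * pairwise

def sce_loss(x, x_hat, gamma=2.0):
    """Scaled cosine error for the masked-feature reconstruction."""
    cos = F.cosine_similarity(x, x_hat, dim=-1)
    return ((1 - cos) ** gamma).mean()
```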
§.§ Classifier and Loss Functions Finally, our classifier outputs predictions. We generate classification representations via a fully-connected layer F(·), i.e., y_pred=softmax(F(𝐡_i)). The loss of the classifier is the cross-entropy ℒ_Pre=-∑_i=1^N_l y_ilog y_pred,i over the N_l labeled nodes. We jointly train the graph encoder and the classifier with ℒ=ℒ_Pre+βℒ_Re, where β is a loss-weight hyperparameter. § EXPERIMENTS §.§ Datasets, Baselines and Settings Datasets: We experiment on nine benchmarks. For five of the six heterophilic spatial datasets, namely Cornell, Texas, Wisconsin <cit.>, Chameleon, and Squirrel <cit.>, nodes are web pages and edges are hyperlinks between these pages; in Actor <cit.>, nodes are actors and edges denote co-occurrences on the same web pages. For the three homophilic datasets, Cora, Citeseer and PubMed <cit.>, nodes are articles and edges are citations between articles. Due to space limitations, we provide detailed descriptions in Appendix 4.1. Baselines: We follow previous works <cit.> and use eleven baselines, categorized into three groups: 1) multi-hop-based approaches MixHop <cit.> and H2GCN <cit.>, which mix multi-hop neighbors for aggregation; 2) ranking-based approaches NLGNN <cit.>, GEOM-GCN <cit.>, Node2Seq <cit.> and GPNN <cit.>, which search over the network structure and then perform selective aggregation; 3) structure learning approaches ProGNN <cit.>, UGCN <cit.>, BM-GCN <cit.> and GREET <cit.>, which automatically learn graph structures for aggregation. Specifically, ProGNN preserves the low-rank and sparsity characteristics of the graph structure for robust GCN; UGCN and SimP-GCN employ a similarity preservation scheme for structure learning on heterophilic graphs; and BM-GCN performs selective aggregation on the structure via a block-guided strategy. We also compare our model with a recently proposed spectral-based method, ALT-GCN <cit.>. Settings: We implement our method with PyTorch and PyTorch Geometric and use the Adam optimizer on all datasets with a learning rate of 0.001. We train for 1000 epochs with early stopping (patience 40), set the hidden size to 64 and the batch size to 256, and perform structure learning for 2 rounds. More detailed hyperparameters are available in Appendix 4.3. §.§ Main Results §.§.§ Comparisons under Poisoning Attacks. We compare the robustness of our LHS with five baseline approaches under a popular poisoning attack <cit.> on three benchmarks: Squirrel, Chameleon, and Actor. Under perturbation rates ranging from 0 to 25%, Figure <ref> shows that our LHS consistently performs best among all baselines. For example, ours yields classification accuracy up to 20 points higher than ProGNN. These results confirm the superiority of our latent structure learning scheme against poisoning attacks. The existing structure learning methods, including BM-GCN, SimP-GCN, and ProGNN, are also extremely vulnerable under large poisoning perturbation rates; nevertheless, they are better than the other two baselines, showing the promise of structure learning over heterophilic graphs. We also observe that the poisoning perturbations, which significantly degrade the baselines at a large rate (i.e., 25%), have only a very slight impact on our method.
We attribute such gains to the latent structure being resistant to the structural OOD issue discussed in the Introduction Section, which is further illustrated in the first question of the Discussion Section. §.§.§ Comparisons under Evasion Attacks. We presented two evasion-based attacks <cit.>, i.e., the “OOD evasion attack (OOD)” and the “injected evasion attack (Injected)”, in Fig. <ref> (b) and Fig. <ref> (c), which craft attack samples with destructive structural perturbations to the edges of the graph. Here we compare our method with five baselines on five heterophilic graphs and report the results in Table <ref>. We chose these five baselines because they are representative of different types of GCNs and have been widely used in previous studies. For two nodes with different classes, the “Injected” attack injects a connection with probability 0.9. We repeat our experiments three times and report the mean and variance values in Table <ref>. Under the two attacks, Table <ref> shows that our method consistently achieves the best results among all baselines on the five benchmarks. Compared with “OOD” attacks, “Injected” attacks are much more destructive as they significantly increase the heterophily of the testing set. Compared to the state-of-the-art structure learning method BM-GCN, our LHS achieves an 11.33-point accuracy improvement under “OOD” attacks. Overall, these results suggest that LHS is more robust against both attacks than all the considered baselines. The key to these improved results is the ability of LHS to perform global searches over the homophilic structure learned by the structure inducer. §.§.§ Comparisons without Attacks. We have shown that our model is more robust than existing methods under various attacks. To further investigate the performance without attacks, we conduct experiments on the five heterophilic graphs and compare ours with the baseline approaches. Table <ref> shows that the proposed LHS performs best, which we attribute to the information aggregation in a homophilic way on heterophilic graphs. Additionally, we achieve better or comparable classification results on the three homophilic benchmarks. This suggests that improving homophily benefits node classification on both homophilic and heterophilic graphs, which also remotely aligns with a previous work <cit.> and shows that our method can handle both types of graphs in a unified manner.
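For concreteness, here is a sketch of the “injected evasion attack” as described above. Wiring a different-class pair with probability 0.9 follows the stated rule; sampling candidate pairs uniformly under a fixed budget is our own simplification.

```python
import numpy as np

def injected_evasion_attack(A, labels, n_candidates=1000, p=0.9, seed=0):
    """Inject heterophilic edges into a symmetric 0/1 adjacency matrix A.

    Candidate different-class pairs are sampled uniformly; each sampled
    pair is wired with probability p. The candidate budget is illustrative.
    """
    rng = np.random.default_rng(seed)
    A = A.copy()
    n = A.shape[0]
    for _ in range(n_candidates):
        i, j = rng.integers(0, n, size=2)
        if i != j and labels[i] != labels[j] and rng.random() < p:
            A[i, j] = A[j, i] = 1   # add a negative (heterophilic) edge
    return A
```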
§.§ Discussion §.§.§ Performance on Homophilic Graphs We make two interesting observations on the homophilic-graph results in Table <ref>: 1) the performance on homophilic graphs shows that the structure refinement of LHS is effective and competitive for both heterophilic and homophilic graphs; 2) LHS achieves state-of-the-art performance on Cora, which may be attributed to its ability to refine heterophilic structures: compared with the other homophilic graphs, Cora contains more heterophilic structure, as shown in the misclassified-node analysis in the appendix. Can LHS reduce the scale of “right-shift” of ℋ distributions? We have discussed in the Introduction Section that the “right-shift” phenomenon, i.e., the structural OOD, is the cause of performance degradation under attacks. To answer this question, we visualize how our method reduces the “right-shift” for the experiments of Table <ref> on Squirrel. Under the “injected evasion attack”, Figure <ref> shows that our latent structure can greatly move the ℋ distribution of the attack sample to the left, thus reducing the “right-shift” (see the red arrow). We also observe that the second round of refinement moves the distribution further to the left, additionally improving the model's robustness. In contrast, the existing SimP-GCN can only slightly move the distribution, visually explaining why LHS is more robust than SimP-GCN. This further confirms our hypothesis that reducing the “right-shift” can harden GCNs over heterophilic graphs. What does the learnable homophilic structure look like? For this question, we rely on the tool Gephi <cit.> to visualize the structure of the test set of Squirrel and the homophilic structure of LHS, with the truncated threshold σ set to 0.91. Figure <ref> shows the visualizations of the two structures: there are many more homophilic edges in our structure, compared with the one of the original test set.
We also study the connections of node #2021 of Squirrel. Figure <ref> shows that the homophily ratio, computed as 1-ℋ_v_2021, significantly increases from 25% to 55% after the latent structure learning. In this case, we observe that our model reduces the heterophilic connections while keeping the homophilic edges unchanged.

§.§.§ How do the two hyperparameters σ and β affect node classification?

The parameters σ and β indicate the truncated threshold for pruning the structure and the weight of the encoder loss for node representation learning, respectively. For this question, we visualize the effect of the two parameters on the classification accuracy in Figure <ref>. It shows that both parameters have an impact on the performance, while the impact of σ is larger, as it can lead to very low accuracy (e.g., 40% with σ = 0.98). Configuring σ near 0.92 and β near 1.2 achieves the best classification accuracy. More detailed analyses are available in the Appendix.

§.§.§ Can the learnable homophilic structure be applied to other tasks?

To answer this question, we also apply the homophilic structure learned on four graphs, including Wisconsin, Squirrel, Chameleon, and Cora, to the graph clustering task. We use the vanilla GCN <cit.> and the proposed structure inducer of LHS to develop “GCN + Structure Inducer”. Even on the vanilla GCN, Table <ref> shows that our “GCN + Structure Inducer” outperforms all other baselines on heterophilic graphs. For example, ours outperforms SimP-GCN on Squirrel by 2.79 points. We attribute this gain, again, to our homophilic structure.

§.§ Ablation Study

We conduct an ablation study on three benchmarks for heterophilic graphs to evaluate the effectiveness of each component. We remove our self-expressive generator in the structure inducer and denote this variant as “w/o SEG”. We use “w/o DCL” and “w/o GE” to refer to the models that remove the dual-view contrastive learner and the graph encoder, respectively. Table <ref> reports the comparison results. It shows that the removal of the self-expressive generator leads to the most significant performance degradation under “OOD” and “Injected” attacks (e.g., a 15.07-point decrease on Texas under “OOD”), indicating that it is the key component for the robustness of the proposed LHS. Our dual-view contrastive learner is also non-trivial to the overall robustness, as removing this component can decrease the performance by 7.51 points under the “Injected” attack. We observe that the graph encoder also benefits the model performance. The ablation study further confirms the effectiveness of the two key components in the structure inducer. Concretely, the ablation is designed to answer the following questions:

* Q1: Can the self-expressive generator effectively defend against attacks?
* Q2: Can the dual-view contrastive learner successfully defend against attacks?
* Q3: Does the feature augmenter function as intended?

To this end, we also examine the following variants in the appendix:

* kNN-LHS: We replace the self-expressive generator with the kNN method of <cit.>, while keeping the other components of the LHS model unchanged.
* Single-LHS: We remove the dual-view contrastive learner, while keeping the other components of the LHS model unchanged.
* Un-LHS: We remove the graph encoder module and use the truncated threshold GCN on the refined structure and node features for classification directly.
§ CONCLUSION

This paper studies robust graph convolution networks over heterophilic graphs. We take the first step towards quantitatively analyzing the robustness of GCN approaches over omnipresent heterophilic graphs for node classification: we formulate the structural out-of-distribution (OOD) problem and find the distribution “right-shift” evidence on heterophilic structures, revealing that the vulnerability of GCNs is mainly caused by the structural OOD. Based on this crucial observation, we present LHS, a novel method that aims to harden GCNs against various attacks by learning latent homophilic structures on heterophilic graphs. To achieve this, we introduce a class-constrained self-expressive technique into structure learning for the first time, while using dual-view contrastive learning to refine the learned structure. Our LHS can iteratively refine the latent structure during the learning process, facilitating the model to aggregate information in a homophilic way on heterophilic graphs; owing to this unified perspective, both homophilic and heterophilic graphs can benefit from the proposed structure learning. Extensive experiments on various benchmarks show the effectiveness of our approach. Besides the graph clustering discussed in Section <ref>, we believe our structure can also benefit more graph tasks for better representation learning. In the future, more efficient models can be proposed to approximate our structure learning strategy; based on the proposed structural distribution formulation, causal invariant substructure learning on the original structure can also be explored, as can novel adversarial training methods based on the structural OOD.

§ ACKNOWLEDGMENTS

This work was partially supported by the National Key R&D Program of China (Grant No. 2022YFB2902200), the Major Projects of the National Natural Science Foundation of China (Grant No. 72293583), and the Joint Funds for Regional Innovation and Development of the National Natural Science Foundation of China (No. U21A20449).

§ APPENDIX

§ 1. STRUCTURE LEARNING AND THE “RIGHT-SHIFT” PHENOMENON

The “right-shift” of the distribution, namely the structural OOD, is caused by highly heterophilic neighbor structures, and learning a homophilic neighbor structure can be seen as one of the direct ways to refrain from the “right-shift”. In this section, we provide the following analysis from a spectral perspective, aiming to bridge the gap between “refraining from the right-shift” and structure learning, thus elaborating the rationale of LHS for improving classification robustness.

§.§ 1.1. Theoretical Analysis

Given a graph signal 𝐱 defined on a graph G with normalized Laplacian matrix L = I - D̃^-1/2ÃD̃^-1/2, the Laplacian regularization term can be written as follows:

𝐱^⊤ L 𝐱 = 1/2 ∑_i,j 𝐀̃_ij ( 𝐱_i/√(1+d_i) - 𝐱_j/√(1+d_j) )^2 ,

where d_i and d_j denote the degrees of nodes v_i and v_j, respectively. A smaller value of 𝐱^⊤ L 𝐱 thus indicates a smoother graph signal, i.e., a smaller signal difference between adjacent nodes. This Laplacian regularization can also be seen as a smoothing or averaging between the signals (features) of adjacent nodes. Specifically, from the perspective of graph signal optimization, we will show the relationship between GCN structure learning and the “right-shift” phenomenon in the following theorem.
Theorem: From the perspective of graph signal optimization, the Laplacian regularization of Eq. <ref> is equivalent to the GCN convolutional operator, which is responsible for structure learning.

First, the signal optimization with a Laplacian regularization term can be formally posed as follows:

min_𝐙 g(𝐙) = ‖𝐙 - 𝐗‖^2 + λ 𝐙^⊤ L 𝐙 ,

where 𝐙 is the learned node embedding matrix, which can be used for downstream tasks such as node classification. The gradient of g(𝐙), evaluated at the initialization 𝐙 = 𝐗, is

∇ g(𝐗) = 2(𝐗 - 𝐗) + 2λ L 𝐗 = 2λ L 𝐗 ,

and the one-step gradient descent at 𝐙 = 𝐗 with learning rate 1 is formulated as

𝐗 - ∇ g(𝐗) = 𝐗 - 2λ L 𝐗 = (I - 2λ L) 𝐗 = ( D̃^-1/2ÃD̃^-1/2 + L - 2λ L ) 𝐗 .

By setting λ to 1/2, we finally arrive at

𝐗 - ∇ g(𝐗) = D̃^-1/2ÃD̃^-1/2 𝐗 .

The structure learning of the GCN convolution operator can thus be seen as a graph signal optimization with a Laplacian regularization term. Hence, the graph convolution layer tends to increase the feature similarity between connected nodes. For heterophilic graphs, this structure learning will therefore make neighboring heterophilic nodes have more similar embeddings, leading to poor classification performance. Based on this analysis, we introduce the rationale of our proposed LHS.
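This one-step equivalence is easy to verify numerically. The following minimal sketch (a small random undirected graph; all names are illustrative) checks that a single gradient step on g(𝐙) from 𝐙 = 𝐗, with λ = 1/2 and learning rate 1, reproduces the propagation D̃^-1/2ÃD̃^-1/2𝐗.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 6, 4
    A = np.triu(rng.integers(0, 2, size=(n, n)), 1)
    A = A + A.T                                    # random undirected graph
    A_tilde = A + np.eye(n)                        # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(1))
    P = A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # D^{-1/2} A D^{-1/2}
    L = np.eye(n) - P                              # normalized Laplacian

    X = rng.normal(size=(n, d))
    lam = 0.5
    grad_at_X = 2 * lam * (L @ X)                  # gradient of g at Z = X
    Z_one_step = X - grad_at_X                     # one step, learning rate 1
    assert np.allclose(Z_one_step, P @ X)          # equals the GCN propagation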
§.§ 1.2. The Rationale behind LHS

In order to mitigate the “right-shift” phenomenon, which is caused by GCN structure learning on an inherently heterophilic structure, we propose LHS, a robust structure learning method on heterophilic graphs. It has the following two rationales, which draw inspiration from the structural OOD problem:

* R1: For potential homophilic relationships, we aim to obtain a refined structure with more close homophilic connections, thus playing a positive role in GCN structure learning.
* R2: For potential heterophilic relationships, our proposed LHS aims to suppress the negative edge connections, thus mitigating the “right-shift” phenomenon. We propose a truncated threshold GCN to filter out low-confidence negative edges and preserve only high-similarity connections, thus mitigating the performance deterioration when the GCN conducts structure learning.

The proposed LHS designs customized structure learning components to realize the above two rationales, which have not been fully considered by existing structure learning methods. For example, the previous works <cit.> <cit.> considered homophilic structure learning with high feature similarity, but they only utilize the pair-wise information of node features, i.e., kNN networks. Furthermore, they cannot handle the potential heterophilic nodes, which can hardly be filtered out, while LHS proposes a truncated threshold GCN to preserve only the high-similarity connections, thus mitigating the performance deterioration.

§.§ 1.3. The compatibility of LHS with sampling methods

The proposed LHS is compatible with various sampling methods for contrastive learning. The positive instances and the negative instances, which are selected by a sampling method, can be fed into the contrastive learner to generate node representations. We follow the previous work <cit.> to use a popular sampling method in the proposed dual-view contrastive learner in our paper.

§.§ 1.4. The graph tasks that could benefit from the proposed LHS

We additionally list the following tasks and give the underlying reasons as follows.

* GCN model robustness aims to protect GCN models from various attacks. This task is under-explored on heterophilic graphs. The latent homophilic structure can effectively harden graph models when facing OOD or injected attacks.
* Link prediction aims to induce the edge relationships between nodes <cit.>. The latent homophilic structure learned by LHS captures the homophilic information of nodes and benefits similar-feature-based link prediction tasks, such as similar-product recommendations and social recommendations with the same interests.
* High-order graph tasks aim to make predictions on high-order structures, such as a community <cit.>. The community is considered the smallest unit of prediction in these tasks and is hard to handle by the existing pairwise methods. The proposed latent homophilic structure can benefit these tasks by generating a refined latent structure.

§.§ 1.5. Real-world example of the vulnerability of GCN models under malicious threats

Figure 1(d) of the main paper demonstrates a malicious attack that can significantly degrade the classification accuracy of a system by 29.30%. We refer to the previous work <cit.> that involves real-world examples in fraud detection. We inject attacks, including malicious heterophilic nodes or spurious relations, into target nodes on a real-world social network, e.g., “the Actor”. Experimental results on “the Actor” in Table 1 of the main paper further confirm the vulnerability of GCN models under the above threats.

§ 2. THE COMPLEXITY ANALYSIS OF LHS

We analyze the time and model complexity of LHS and provide some acceleration strategies.

§.§ 2.1. Time Complexity

The time complexity of LHS mainly comes from the denoising SVD decomposition in the robust structure learner. A naive time complexity is O(rN^2), where r = 4K + 1 ≪ N, and K is the number of node classes. In general, the time complexity of LHS is O(N^2), which is more suitable for small datasets. For large-scale datasets, in order to balance time cost and performance, we introduce the randomized SVD, based on the random projection principle <cit.>, to further reduce the time complexity to O(r log(N) N), which is acceptable. We can also generate the node representations within a local structure that consists of neighboring nodes, instead of computing throughout the whole graph.

§.§ 2.2. Model Complexity

For model complexity, benefiting from the sampled learning strategy, both the robust structure learner and the re-mask feature augmenter have high scalability for large-scale datasets, because they only need to perform contrastive learning or reconstruction on the sampled node features. On the other hand, the introduced parameters always transform the node feature dimension to a smaller dimension. To sum up, LHS only introduces additional parameters that are linear in the feature dimension. A sketch of the randomized-SVD denoising step is given below.
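As a concrete illustration of the acceleration, the following minimal sketch denoises the learned structure matrix with scikit-learn's randomized SVD; the function name and the rank-r reconstruction are illustrative assumptions.

    import numpy as np
    from sklearn.utils.extmath import randomized_svd

    def denoised_structure(S, n_classes):
        """Rank-r denoising of the learned self-expressive matrix S via
        randomized SVD (random-projection based), with r = 4K + 1 << N
        as in the analysis above."""
        r = 4 * n_classes + 1
        U, Sigma, Vt = randomized_svd(S, n_components=r, random_state=0)
        return (U * Sigma) @ Vt               # low-rank reconstruction of S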
§ 3. STRUCTURAL OOD EVIDENCE

We provide more empirical evidence on the structural OOD, as well as the insights gained from interesting observations, as follows.

§.§ Cross-Domain Structural OOD

We first analyze the cross-domain OOD, represented by Cora and Squirrel. This cross-domain OOD is very obvious from the distribution view, and it is also consistent with the actual situation that vanilla GCNs perform very poorly on heterophilic graphs. So it is necessary to study model robustness to overcome cross-domain OOD (such as structural OOD attacks). In the following, “single layer” means that each layer is treated as independently distributed, such as p(ℋ | A_v_i(1)) and p(ℋ | A_v_i(2)), which respectively depict the structural distributions of the single first layer and the single second layer (one-hop and two-hop neighbors). Oppositely, “multi-layer” refers to the joint distribution of multiple layers, such as p(ℋ | A_v_i(1), A_v_i(2), A_v_i(3)), which depicts the three-layer joint structural distribution. Note that in the single-hop layer situation, panels (a) to (d) in Fig. <ref> represent the first to fourth layers, respectively.

§.§.§ 3.1. Single-hop Layer

We provide the cross-domain structural OOD observations between Cora and Squirrel on single layers as follows. First, we can observe that there is a very obvious distribution shift for each hop (single layer) between Cora and Squirrel, which provides us a new perspective for studying both homophilic and heterophilic graphs through structural distribution shifts. In this view, heterophilic graphs can be seen as a left distribution shift of homophilic graphs; in fact, another interesting observation potentially shows this: when we focus on each hop layer of Cora, we find that as the layer becomes higher, the structural distribution of Cora shifts more and more to the left. This can be explained by the increased higher-layer heterophily of Cora, which derives from the reduction of same-class neighbors in the higher layers.

§.§.§ Multi-hop Layer

When we study the multi-hop joint distribution, the situation is more serious: the distribution of the heterophilic graph is more left-shifted and concentrated around 0, which can be explained by the difficulty of finding homophilic neighbors in a graph with high heterophily. So it is necessary to conduct heterophilic structure learning. From the structural distribution view, heterophilic structure learning can be seen as making the distribution right-shift, thus increasing the potential for homophilic neighbor aggregation.

§.§ Inner-Domain Structural OOD

We then provide observations on the inner-domain structural OOD problem, represented by training and test sets, including the small heterophilic graph Wisconsin, the large heterophilic graphs Squirrel and Chameleon, and, as a counterexample, the homophilic graph Cora. The inner-domain structural OOD worsens model robustness and leads to poor performance of the trained model on the test set.

§.§ 3.1. Single-hop Layer

First, we show the empirical single-hop structural OOD observations on Wisconsin in Fig. <ref>, Squirrel in Fig. <ref>, and Chameleon in Fig. <ref>; the “right-shift” of the structural distribution can be seen between the training set and the test set, and this structural OOD leads to poor robustness. We also study the counterexample to heterophilic graphs: for the homophilic graph Cora, the structural OOD problem can hardly be found in any of the four single-hop layers.

§.§ 3.2. Multi-hop Layer

We also provide the multi-hop joint distributions of edges (ℋ). The “right-shift” of the edge distribution becomes more serious on heterophilic graphs, in that there is always a different degree of shift each time we generate the distribution, while the homophilic graph Cora hardly exhibits the distribution shift, even in the multi-hop situation. (A sketch of how these per-node distributions can be computed is given below.)
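The per-node distributions above can be estimated as in the following minimal sketch (plotting is omitted; whether one reports the heterophilic fraction or the homophilic fraction 1 - ℋ is a plotting convention).

    import numpy as np

    def khop_heterophily(A, labels, nodes, k=1):
        """Per-node heterophily over the k-hop neighborhood A_v(k); comparing
        np.histogram of training vs. test nodes visualizes the shift."""
        A1 = (A > 0).astype(int) + np.eye(len(A), dtype=int)
        reach = np.linalg.matrix_power(A1, k)        # nodes within k hops
        vals = []
        for v in nodes:
            nbrs = np.where(reach[v] > 0)[0]
            nbrs = nbrs[nbrs != v]
            if nbrs.size:
                vals.append(np.mean(labels[nbrs] != labels[v]))
        return np.asarray(vals)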
§ 4. EXPERIMENTS DETAILS

§.§ 4.1. Datasets Information

The statistical information about the eight real-world networks is given in Table <ref>, including the number of classes, the dimension of features, and the numbers of nodes and edges.

§.§ 4.2. Split ratio of dataset

For fair comparisons, we follow the settings of previous work <cit.> and use 60%, 20%, and 20% of the data for training, validation, and testing, respectively. Table 1 and Table 2 of the main paper report the comparison results under these settings.

§.§ 4.3. Experiment Hardware and Setting

§.§.§ Hardware

All experiments are performed on a Linux machine with an Intel(R) Xeon(R) Gold 6330 CPU with 28 cores, 378 GB of RAM, and an NVIDIA RTX3090 GPU with 24 GB of GPU memory.

§.§.§ Additional Setting

We use the Adam optimizer with β_1 = 0.9, β_2 = 0.999, and ϵ = 1 × 10^-8 on all datasets. We run epochs ∈ {300, 1000} with the learning rate ∈ {0.000005, 0.0001} and apply early stopping with a patience of 40. We use PReLU as the activation and the scaling factor τ ∈ [0, 2]. According to the cross-entropy loss and accuracy (ACC) on the validation set, we select the best model. We set the hidden units ∈ {16, 32}, the learning rate ∈ {0.00005, 0.0001}, the dropout in each layer ∈ {0.1, 0.2}, and the weight decay to 5e-4. Our method is implemented with PyTorch and PyTorch Geometric.

§.§ The Optimal Hyperparameters on Datasets

When LHS achieves the best node classification accuracy, the optimal hyperparameters on the eight graph datasets, including the presets, are listed in Table <ref>. As for the robust structure learner, we uniformly set the number of dual-view post-processor layers to 2 and the output dimension to 265.

§.§ The Structural Attack Details

(Figure: two kinds of escape attack.)

We now further detail the implementation of the structural attack experiments and the acceleration strategy of LHS. For the potential neighbor search methods, we directly search on the substructure; for LHS, we propose two methods: the inductive LHS-in, which directly constructs a structure matrix on the substructure and performs the related operations, and the transductive LHS-tran, which incorporates the substructure nodes into global structure learning and performs better on small and medium datasets. For large datasets, heuristic acceleration strategies can be adopted to balance consumption and performance. Here we provide a simple yet effective central sampling strategy: during the training stage, we additionally save the core node of each class, i.e., the node that has the most connected edges within the same class. We can then use the core nodes as substitutes for their classes, so the similarity between a to-be-classified node and each core node can be calculated to determine which class the node belongs to; aggregation is then conducted from the neighbors of the corresponding core node. This can be seen as performing an indirect global search while balancing the time consumption.

On the other hand, as for the escape attack, we first train the model on, e.g., Squirrel; then we randomly sample a node from the whole graph as the centered node v_i. For out-of-distribution sampling, we obtain the k-hop substructure by sampling from the OOD distribution p(ℋ̅_EP | A_v_i(i), i ∈ k); for customized designing, we design highly destructive substructures, e.g., setting all the k-hop neighbors of the centered node v_i to different-class nodes, with the neighbors of the centered node v_i sampled from the nodes of the whole graph.

§ SUPPLEMENTARY EXPERIMENTS

§.§ Structure Simulation Experiments

Following the proposed homophilic graph learning strategy, we can generate a new structure by controlling the numbers of positive and negative edges, while ensuring that the total number of edges is almost the same as in the original structure (a sketch of this generation procedure is given below). Please note that these experiments are conducted on a two-layer vanilla GCN with 10% training data, 10% validation data, and 80% test data. Specifically, we report the simulated experiment performance on Squirrel in Table <ref>.
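A minimal sketch of the generator (illustrative; the rejection-sampling loop assumes enough candidate pairs of each kind exist):

    import numpy as np

    def simulate_structure(labels, n_edges, target_h, rng=None):
        """Generate a structure whose heterophily ratio is ~ target_h by
        controlling the numbers of negative (different-class) and positive
        (same-class) edges, with ~ n_edges edges in total."""
        rng = np.random.default_rng(rng)
        n = len(labels)
        A = np.zeros((n, n), dtype=int)
        n_neg = int(target_h * n_edges)      # heterophilic edge budget
        n_pos = n_edges - n_neg              # homophilic edge budget
        while n_pos > 0 or n_neg > 0:
            i, j = rng.integers(0, n, size=2)
            if i == j or A[i, j]:
                continue
            if labels[i] == labels[j] and n_pos > 0:
                A[i, j] = A[j, i] = 1; n_pos -= 1
            elif labels[i] != labels[j] and n_neg > 0:
                A[i, j] = A[j, i] = 1; n_neg -= 1
        return A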
As shown in the table, a too low or too high homophily ratio ℋ̅ = 1 - ℋ produces extreme results, which may be related to the randomized generation modes. Two insights can nevertheless be obtained: 1) our proposed homophilic structure refinement over heterophilic graphs plays a key role in improving node classification accuracy; 2) the structure learning quality (ℋ̅_EP) is proportional to the improvement of classification accuracy, which means that the node classification task can be converted into solving the proposed heterophilic structure learning problem. Focusing on ℋ̅_EP in 0.4–0.5, which is an achievable range for current heterophilic structure learning, the results in Table <ref> are still competitive with the existing SOTA results. At the same time, these are only the results of pure structure learning by our strategy, which means that later incorporating feature-level learning will further improve accuracy. A further increase of ℋ̅ (e.g., to 0.5–0.6) can also serve as a future direction.

§.§ Ablation Study for Robust Structure Learner

We conduct an ablation study to answer the following questions about our designed structure learner:

* Q1: Does the self-expressive structure learner work?
* Q2: Does the two-stage post-processor work?
* Q3: Does the pairwise constraint work?

For the above three questions, we design the following models:

* kNN-LHS: We change the self-expressive module to the kNN structure following <cit.>, and maintain the other components of LHS.
* Single-LHS: We remove the two-stage post-processor and maintain the single self-expressive module and the other components of LHS.
* Un-LHS: We remove the optimization for the original view by sampling pairwise samples, and maintain the other components of LHS.

Table <ref> shows the performance of the different component-changed LHS models on the graphs. As can be seen from the table, LHS variants that lack any designed component do not achieve optimal performance, which shows the effectiveness and necessity of each component of the model design. Specifically, the self-expressive component contributes the most to the performance; the pairwise component is more important on the small graphs, while for large graphs such as Squirrel, the contribution of the two-stage re-refinement exceeds that of the pairwise component.
On the other hand, as for small dataset such as Wisconsin, the optimal θ is around 0.4, while for relatively larger datasets, the optimal θ has increased to 0.6-0.7 to meet the requirement of feature learning. § PERIPHERAL HETEROPHILYWe provide some case studies with the peripheral heterophily index, and provide some insights about the case studies.The previous heterophily index is derived from homophily. When we aim to mine the diversity of connections between different classes from a graph, a graph with stronger homophily will significantly reduce the original index H(𝒢) due to a large number of inner nodes and edges (its neighbors are all the same-class nodes ), so the peripheral heterophilic part cannot be highlighted (such as we want to study the frequency and intensity of one class of literatures citations and its different classes of literatures). Just like the machine learning indicators precision and recall, peripheral heterophily provide a local measurement view to highlight the heterophilic interaction. Inspired by the F-1 score in machine learning, we further proposed a fair index of heterophily to comprehensively evaluate the graphs heterophiliy as follows: DefDefinition Fair Heterophily Index (ℋ_F): ℋ_F = w* ℋ_P + (1-w) * H(𝒢) remark1RemarkWhere w is the weight of peripheral edges or nodes heterophily, H(𝒢) is the original index of heterophily. The peripheral heterophily explicitly and directly establishes the heterophily of graphs, so that it will not be affected by those inner nodes and edges, and provide the local view of heterophily, while the H(𝒢) provide the global view of heterophily. So we aim to integrate the two indicators for a more comprehensive evaluation metric.Then we conduct some case studies to show the insights provided by peripheral heterophily index.As for Cora showed in Table 4, first, the edge heterophily depicts that the case based papers has a citation relationship with non case based papers with a frequency of 47%, while the node heterophily depicts the non case based papers that have a citation relationship with case based papers account for 55% in the papers that cited case based papers. This insights can not directly derived from the original index H(𝒢).§ PERIPHERAL NODE EFFECTPrevious study on evaluation of similarity from node to node usually uses cosine similarity such as KNN network.However,we find that such method seems does not work well in real world datasets.Take Wisconsin as an example,the average similarity of peripheral node is pretty high,while ignoring the fact that the peripheral nodes comes from different sorts.Even if we calculate the similarity of homonode(nodes from the same class),it is even lower than that of peripheral nodes.It also does not work well in homogeneous graph.The average similarity of peripheral nodes is still high.Then the average similarity of peripheral nodes and homonode is close,which is irregular.In conclusion, traditional similarity measurement such as cosine similarity does not perform well in dealing with the nodes' similarity. § 5. THE LIMITATION OF PREVIOUS STRUCTURE LEARNING STRATEGY A large amount of current structure learning methods focus on the feature-level similarity to conduct structure learning <cit.><cit.>, which always formed poor similarity and can not reflect the true relationship between node pairs.𝐒_i j=𝐱_i^⊤𝐱_j/𝐱_i𝐱_jNow we provide empirical evidence to support our claim. 
To sum up, following the kNN method, we have calculated the cosine similarity on both homophilic graphs <cit.> (Cora, Citeseer, Pubmed) and heterophilic graphs <cit.> (Texas, Wisconsin, Squirrel). We found that almost all node pairs form poor similarities of only 0.1 to 0.5. More seriously, there is no significant similarity difference between same-class and different-class nodes, whether at the pair level or the average level. The results reveal that we can hardly distinguish nodes of different classes under such metrics and thus fail to learn a better structure to mitigate the “right-shift” phenomenon.

We generate structures via the kNN network method and via our proposed LHS, respectively. First, we compare the overall average similarity of the structures generated by the two methods; as shown in Fig. <ref> to Fig. <ref>(a), the average similarity of LHS is between 0.6044 and 0.7844 on each dataset, while the average similarity of the kNN networks is only 0.0562 to 0.3451. Panel (b) of Fig. <ref> to Fig. <ref> shows the average similarity of a sampled node to its homophilic (left) and heterophilic (right) neighbors (we sampled the same, sufficient number of neighbors to calculate the average similarity); at the average level, the homophilic similarities constructed by RSGC (left) are always higher than the heterophilic similarities, while getting closer to the ground-truth similarity. However, the similarity constructed by the kNN network (right) is poor, and the heterophilic similarities are even higher than the homophilic similarities on Cora, Squirrel, and Chameleon, which makes the heterophilic nodes harder to distinguish and thus leads to a poor structure learning effect. Note that panel (b) of Fig. <ref> to Fig. <ref> represents the statistical similarity from LHS (dark blue) and the kNN network (light blue).

§ ADDITIONAL RELATED WORK

§.§ Graph Convolution Networks

Graph neural networks (GNNs) are a type of deep neural network aiming to learn low-dimensional representations for graph-structured data <cit.> <cit.>. Modern GNNs can be categorized into two types: spectral and spatial methods. The spectral methods perform the convolution operation in the graph domain using spectral graph filters <cit.> and their simplified variants, e.g., the Chebyshev polynomial filter and the first-order approximation of the Chebyshev filter <cit.>. The spatial methods perform the convolution operation by propagating and aggregating local information along edges in a graph <cit.>. Readers may refer to the elaborate survey <cit.> for a thorough review. In spatial GNNs, different aggregation functions are designed to learn node representations, including mean/max pooling, LSTM <cit.>, and attention <cit.>. Here, we specifically introduce the (two-layer) graph convolution operator as follows:

Z = f(X, A) = softmax( Â ReLU( Â X W^(0) ) W^(1) ) .
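A minimal dense sketch of this two-layer forward pass (Â is the symmetrically normalized adjacency with self-loops; the helper names are illustrative):

    import numpy as np

    def softmax(Z):
        e = np.exp(Z - Z.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    def gcn_forward(A, X, W0, W1):
        """Two-layer GCN: Z = softmax(A_hat ReLU(A_hat X W0) W1)."""
        A_tilde = A + np.eye(len(A))
        d = 1.0 / np.sqrt(A_tilde.sum(1))
        A_hat = A_tilde * d[:, None] * d[None, :]   # D^{-1/2} (A+I) D^{-1/2}
        H = np.maximum(A_hat @ X @ W0, 0.0)         # ReLU hidden layer
        return softmax(A_hat @ H @ W1)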
§.§ Graphs with Heterophily

Graphs such as community networks and citation networks are often of high homophily, where the linked nodes are more likely to have similar features and belong to the same class. However, there are also a large number of real-world graphs with heterophily (e.g., web-page linking networks), in which the linked nodes usually have dissimilar features and belong to different classes. It is worth noting that heterophily is different from heterogeneity, as a heterogeneous network means that the network has multiple types of nodes and different relationships between them. In the real world, for example, different amino acid types are connected in protein structures, and predators and prey are related in ecological food webs. In these networks, due to the heterophily, the smoothing operation can generate similar representations for nodes with different labels, which leads to the poor performance of GNNs. To better motivate our proposed peripheral heterophily, we introduce the homophily ratio H(𝒢) defined by <cit.> as follows:

H(𝒢) = 1/|𝒱| ∑_v ∈ 𝒱 ∑_u ∈ N_1(v) 1(y_u = y_v) / |N_1(v)| .

A high homophily ratio H(𝒢) → 1 means that the graph is strongly homophilic, while a graph of strong heterophily has a small homophily ratio H(𝒢) → 0. However, as defined, H(𝒢) is derived from homophily and takes a global view, which limits its expressiveness as a local heterophily measure; we therefore propose the peripheral heterophily. As a local metric, peripheral heterophily not only provides more insights for local measurement, but can also serve as the quantitative basis to depict the structural distribution.

§.§ Self-Expressive

The concept of self-expressiveness was proposed to cluster data drawn from multiple low-dimensional linear or affine subspaces embedded in a high-dimensional space <cit.>. Given enough samples, each data point in a union of subspaces can always be written as a linear or affine combination of all other points <cit.> <cit.>, and this is more prevalent when the graph has a large number of nodes and the embedding dimension is reasonably high <cit.>. Subspace clustering exploits this to build a similarity matrix, from which the segmentation of the data can easily be obtained using spectral clustering <cit.>. Recently, a deep-learning-based subspace clustering method has been proposed in which an encoder maps the data to an embedding space before building the pair-wise similarity matrix and applying spectral clustering <cit.>.

§.§ Re-masked Feature Reconstruction Learning

Mask reconstruction learning has been widely used in NLP and CV, or as a performance-boosting module, and has achieved extraordinary success, e.g., BERT <cit.> and MAE <cit.>. A very recent study <cit.> brought this mask reconstruction technique into the graph node classification field and achieves better node classification accuracy, surpassing the current state-of-the-art contrastive learning methods: the node features are masked and fed into a GCN encoder; then, taking the sparsity of the graph structure into account, the encoded embeddings undergo a re-masking operation and are decoded with a GCN, which further enhances the GCN's ability to learn node features. However, the current graph re-mask feature reconstruction framework is only suitable for homophilic graphs, and directly applying it to heterophilic graphs leads to poor performance. By employing the robust structure learner, LHS makes it possible to deploy the re-mask feature augmenter on heterophilic graphs and further improve node classification performance.
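For intuition, one re-mask reconstruction step on a (refined) structure can be sketched as follows; the callables, the mask ratio, and the plain squared-error objective are simplifying assumptions rather than the exact recipe of <cit.>.

    import numpy as np

    def remask_step(A_hat, X, encoder, decoder, mask_ratio=0.5, rng=None):
        """Mask node features, encode on the structure, re-mask the encoded
        embeddings, decode, and score reconstruction on the masked nodes.
        `encoder`/`decoder` are assumed callables, e.g., GCN layers."""
        rng = np.random.default_rng(rng)
        masked = rng.random(X.shape[0]) < mask_ratio
        X_in = X.copy()
        X_in[masked] = 0.0                       # mask input features
        H = encoder(A_hat, X_in).copy()
        H[masked] = 0.0                          # re-mask before decoding
        X_rec = decoder(A_hat, H)
        return ((X_rec[masked] - X[masked]) ** 2).mean()   # masked-node loss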
http://arxiv.org/abs/2312.16418v1
{ "authors": [ "Chenyang Qiu", "Guoshun Nan", "Tianyu Xiong", "Wendi Deng", "Di Wang", "Zhiyang Teng", "Lijuan Sun", "Qimei Cui", "Xiaofeng Tao" ], "categories": [ "cs.LG", "cs.AI", "cs.SI" ], "primary_category": "cs.LG", "published": "20231227053514", "title": "Refining Latent Homophilic Structures over Heterophilic Graphs for Robust Graph Convolution Networks" }
Convergence of Ginzburg-Landau expansions: superconductivity in the BCS theory and chiral symmetry breaking in the NJL model

William Gyory^1 and Naoki Yamamoto^2

^1Graduate Center, City University of New York, New York 10314, USA
^2Department of Physics, Keio University, Yokohama 223-8522, Japan

January 14, 2024

We study the convergence of the Ginzburg-Landau (GL) expansion in the context of the Bardeen-Cooper-Schrieffer (BCS) theory for superconductivity and the Nambu–Jona-Lasinio (NJL) model for chiral symmetry breaking at finite temperature T and chemical potential μ. We present derivations of the all-order formulas for the coefficients of the GL expansions in both systems under the mean-field approximation. We show that the convergence radii for the BCS gap Δ and the dynamical quark mass M are given by Δ_conv = π T and M_conv = √(μ^2 + (π T)^2), respectively. We also discuss the implications of these results and the quantitative reliability of the GL expansion near the first-order chiral phase transition.

§ INTRODUCTION

Power series expansions are ubiquitous in physics. Some examples can be found in the perturbative expansions appearing throughout quantum mechanics and quantum field theory (QFT). Another example is the paradigm of effective field theories (EFTs), based on systematic expansions at certain scales. Hydrodynamics, for instance, is an EFT based on a gradient expansion. Ginzburg-Landau (GL) theory, originally introduced in 1950 as a phenomenological model of superconductivity, is also an EFT based on the expansion of the free energy of a system in powers of an order parameter near a phase transition. It is well known that these power series do not always converge, and that even divergent series can generate useful results. Indeed, although the perturbative expansion in quantum electrodynamics (QED) should be a divergent asymptotic series with zero radius of convergence, as shown by Dyson <cit.>, certain quantities calculated using perturbation theory, such as the anomalous magnetic dipole moment of the electron, are among the most precise predictions in physics. Several arguments have been put forward for the divergence of perturbative expansions in other types of QFTs, such as for scalar λϕ^3 theory <cit.>, and by 't Hooft for quantum chromodynamics (QCD) <cit.>.

While convergent and divergent series can both be useful, there remain reasons, both practical and theoretical, for studying their convergence properties. On the practical side, it is important to know whether adding more terms to an expansion will produce better results, and if so, over what region of parameter space. On the theoretical side, at least in some situations, one can gain physical insight from an expansion's convergence or lack thereof. For recent developments on asymptotic series in QFTs, see, e.g., the review <cit.>. On the other hand, convergence properties of the systematic expansions for EFTs have yet to be understood generally. In this context, it was recently shown, using holographic duality methods, that the derivative expansions for specific physical quantities (shear and sound frequencies) in hydrodynamics have a finite radius of convergence in an 𝒩 = 4 supersymmetric Yang-Mills plasma <cit.>; see also Ref. <cit.> for its extension to rotating plasmas.

In this paper, we study the convergence of the GL expansion.
As paradigmatic examples, we consider the Bardeen-Cooper-Schrieffer (BCS) theory for superconductivity and the Nambu–Jona-Lasinio (NJL) model for chiral symmetry breaking <cit.> at finite temperature T and chemical potential μ. Despite the physical differences between the BCS superconductivity, characterized by the BCS gap Δ, and the chiral symmetry breaking in the NJL model, characterized by the dynamical quark mass M, their free energies in the mean-field approximation are closely related to a common mathematical expression, namely,

J_ℓ(x, y) = ∫_0^∞ dt t^ℓ [ √(x^2 + t^2) + ∑_ζ = ± ln( 1 + e^{-√(x^2 + t^2) + ζ y} ) ] = ∫_0^∞ dt t^ℓ ∑_ζ = ± ln[ 2 cosh( ( √(x^2 + t^2) + ζ y )/2 ) ] ,

where either ℓ = 0 or ℓ = 2. This expression must be regarded only schematically, because the first term in Eq. (<ref>) is divergent and requires regularization. We will explain how J_ℓ appears in each context, and then we will show that the GL expansion essentially reduces to expanding Eq. (<ref>) in powers of x, subject to some form of regularization. We will then derive the nth-order coefficients of the GL expansions for arbitrary n in both systems. [Although the first few GL coefficients are well known in the BCS theory (see, e.g., Refs. <cit.>), the generic higher-order GL coefficients do not seem to be widely known, except for an unpublished note <cit.>. To the best of our knowledge, the generic GL coefficients in the NJL model in 3+1 dimensions and the radii of convergence of the GL expansions in both systems are not provided in the literature.] We will show that the radius of convergence in each case is given by Δ_conv = π T and M_conv = √(μ^2 + (π T)^2), respectively, and we will clarify the physical origin of the difference between these two formulas. The finite convergence radii show that results calculated using an nth-order GL expansion eventually improve with increasing n for sufficiently small values of the order parameter.

This paper is organized as follows. In Sec. <ref> and Sec. <ref>, we derive the all-order formulas for the coefficients of the GL expansions and the convergence radii in the BCS theory and the NJL model, respectively. We discuss our results and give concluding remarks in Sec. <ref>. The technical details of the analysis are provided in appendices. Throughout the paper, we use natural units ħ = c = k_B = 1.

§ GL EXPANSION IN BCS THEORY

Let us first review the basics of the BCS theory and then derive the formula that gives the GL coefficients to all orders. Although most of the results in this section are known in the literature (see, e.g., Ref. <cit.>), we review them for completeness and to discuss the similarities and differences with the results of the NJL model later.

§.§ Microscopic theory and free energy

As we will not be interested in the electromagnetic responses in this paper, we can turn off the gauge fields. The BCS Lagrangian is then given by

ℒ_BCS = ψ^†( i∂_t + ∇^2/(2m) + μ ) ψ + G/2 (ψ^†ψ)^2 .

Here ψ = (ψ_↑, ψ_↓)^⊤ is a two-component fermion spinor, G is a four-fermion coupling constant used to model the attractive interaction between fermions, and μ is the chemical potential, which is equal to the Fermi energy ϵ_F = k_F^2/(2m).
§ GL EXPANSION IN BCS THEORY

Let us first review the basics of BCS theory and then derive the formula that gives the GL coefficients to all orders. Although most of the results in this section are known in the literature (see, e.g., Ref. <cit.>), we review these for completeness and to discuss the similarities and differences with the results of the NJL model later.

§.§ Microscopic theory and free energy

As we will not be interested in the electromagnetic responses in this paper, we can turn off the gauge fields. The BCS Lagrangian is then given by

ℒ_BCS = ψ^†( i∂_t + ∇^2/(2m) + μ ) ψ + (G/2)(ψ^†ψ)^2.

Here ψ = (ψ_↑, ψ_↓)^⊤ is a two-component fermion spinor, G is a four-fermion coupling constant used to model the attractive interaction between fermions, and μ is the chemical potential, which is equal to the Fermi energy ϵ_F = k_F^2/(2m).

Introducing the gap parameter Δ = G⟨ψ^⊤ C ψ⟩/2, which is assumed to be homogeneous for simplicity, with C the charge conjugation matrix, one can show that the free energy (except for the terms that do not depend on Δ) in the mean-field approximation is

F = Δ^2/G - 2T ∫_{|k| ≈ k_F} d^3k/(2π)^3 ln[ 2 cosh( (β/2)√(Δ^2 + ξ_k^2) ) ],

where β is the inverse temperature β = 1/T and ξ_k = ϵ_k - μ with ϵ_k = |k|^2/(2m). Here and below, we take Δ as real without loss of generality. The integral should be taken near the Fermi surface over the region k_F - k_D < |k| < k_F + k_D, where k_D is the Debye wavenumber, which functions here as an ultraviolet (UV) cutoff. Near the Fermi surface, we can approximate d^3k ≈ 4π k_F^2 dk and ξ_k ≈ v_F(k - k_F) with v_F = k_F/m being the Fermi velocity. Therefore, assuming k_D ≪ k_F, we have

F = Δ^2/G - 4ρT ∫_0^{ω_D} dξ ln[ 2 cosh( (β/2)√(Δ^2 + ξ^2) ) ],

where ρ = k_F^2/(2π^2 v_F) is the density of states per spin at the Fermi surface and ω_D = v_F k_D is the Debye frequency. It is now clear that after the change of variables t = βξ, the integral in Eq. (<ref>) is essentially J_0(βΔ, 0), except with the infinite upper limit replaced by a finite cutoff. Defining

J_ℓ^cut(x, y; λ) = ∫_0^λ dt t^ℓ ∑_{ζ = ±} ln[ 2 cosh( (√(x^2 + t^2) + ζ y)/2 ) ],

we have F = Δ^2/G - 2ρT^2 J_0^cut(βΔ, 0; βω_D). Notice that μ enters into the expression through the density of states ρ in Eq. (<ref>)—hence, μ does not enter into the position of y in J_0(x, y), unlike the case of the dynamical quark mass in the NJL model that we study in the next section.

§.§ nth-order coefficient formulas

The GL expansion is a systematic expansion of the free energy F in powers of the order parameter Δ, in this case given by F = α_2 Δ^2 + α_4 Δ^4 + ⋯. Historically, GL theory was proposed as a phenomenological model of superconductivity, and the coefficients α_2 and α_4, often denoted α and β (not to be confused with the inverse temperature) in the literature, were certain parameters. In the context of BCS theory, however, the coefficients—not only at the second and fourth order <cit.>, but at all orders—are determined by the microscopic theory defined in Eq. (<ref>). From Eq. (<ref>) we see that the GL coefficients depend on the expansion of J_0^cut(x, 0; λ) in powers of x. If we can find a general nth-order formula for these coefficients, then all the GL coefficients will immediately be known. Thus, our task is to calculate the coefficients appearing in

J_0^cut(x, 0; λ) = c_2 x^2 + c_4 x^4 + ⋯, via c_{2n} = (1/n!) ∂^n_{x^2} J_0^cut(x, 0; λ)|_{x = 0}.

Note that in every expansion considered here, we ignore the constant term c_0, because the physics is unchanged by an overall shift in the free energy. The trick to finding the (approximate) coefficients in Eq. (<ref>) is to take λ → ∞ on the c_{2n} that remain finite in this limit, which turn out to be those with 2n ≥ 4. In the weak-coupling limit, defined by ρG ≪ 1, ω_D is large compared to other quantities characterizing the system, so taking λ → ∞ is physically justified. Indeed, from the condition ∂F/∂Δ = 0 for Eq. (<ref>) in the limit T → 0, one easily finds that the zero-temperature gap Δ_0 is given by Δ_0 = 2ω_D e^{-1/(ρG)}. Therefore, since in general Δ ≤ Δ_0, we have λ = βω_D ≫ βΔ = x, so that λ is large relative to the other variables in Eq. (<ref>). We also comment later on how this approximation affects the radius of convergence.
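As an illustration of the weak-coupling estimate Δ_0 = 2ω_D e^{-1/(ρG)}, the sketch below solves the T → 0 gap equation 1/(ρG) = ∫_0^{ω_D} dξ/√(Δ^2 + ξ^2) (which follows from ∂F/∂Δ = 0 with tanh → 1) and compares the root with the closed-form estimate. The numerical values of ρG and ω_D are arbitrary choices for illustration only:

```python
import math
from scipy.integrate import quad
from scipy.optimize import brentq

rho_G = 0.2          # dimensionless coupling rho*G (illustrative value)
omega_D = 1.0        # Debye frequency in arbitrary units

def gap_equation(delta):
    """1/(rho*G) minus the T = 0 pairing integral over [0, omega_D]."""
    integral, _ = quad(lambda xi: 1.0 / math.hypot(delta, xi), 0.0, omega_D)
    return 1.0 / rho_G - integral

delta_numeric = brentq(gap_equation, 1e-12, omega_D)
delta_estimate = 2.0 * omega_D * math.exp(-1.0 / rho_G)
print(delta_numeric, delta_estimate)   # agree up to O(Delta_0^2/omega_D^2)
```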
In this context, we could interpret J_ℓ(x, y) not as divergent and requiring regularization, but as an abbreviated form of a more complicated expression in which the infinite upper limit applies only to certain terms.

Taking a single derivative with respect to x^2 in Eq. (<ref>), letting ℓ = 0 and y = 0, and converting the integrand to a Matsubara sum gives

∂_{x^2} J_0^cut = 2 ∫_0^λ dt tanh( (1/2)√(x^2 + t^2) )/(4√(x^2 + t^2)) = 2 ∫_0^λ dt ∑_{k = 0}^∞ 1/(ω̅_k^2 + x^2 + t^2),

where ω̅_k ≡ βω_k with ω_k = (2k + 1)πT being the fermionic Matsubara frequencies. Using the series expansion in x^2 of the summand, taking (n-1) more derivatives with respect to x^2 under the integral and sum, and evaluating at x = 0, we find

∂^n_{x^2} J_0^cut|_{x = 0} = 2 ∫_0^λ dt ∑_{k = 0}^∞ (-1)^{n-1}(n-1)!/(ω̅_k^2 + t^2)^n.

This expansion of the summand in x^2 has a positive and finite radius of convergence for each choice of ω̅_k^2 + t^2, the smallest being ω̅_0^2 = π^2. [In the bosonic case, on the other hand, the smallest value of ω_k^2 is zero. Integrating this term and expanding in x gives a series with an odd term |x|^3, which is not analytic in x^2 (see, e.g., Ref. <cit.>). Such a non-analytic term does not appear in the fermionic case treated here due to nonzero ω_k, which acts as an infrared (IR) cutoff.] Since we will eventually substitute x = βΔ, the convergence radius x^2 = π^2 corresponds to Δ = πT. This foreshadows—but does not immediately imply—the final result, that the radius of convergence of the GL expansion is πT.

The expression in Eq. (<ref>) is finite in the limit λ → ∞ for n ≥ 2. For these n, we can interchange the sum and integral, and then use the formula

∫_0^∞ dt/(1 + t^2)^n = [(2n - 3)!!/(2n - 2)!!] (π/2),

which follows from a recursive relation that can be obtained using integration by parts. The remaining Matsubara sum can then be related to the Riemann zeta function, yielding

c_{2n ≥ 4} = [(-1)^{n+1}/n] [(2n - 3)!!/(2n - 2)!!] [1/π^{2n-2}] (1 - 2^{-(2n-1)}) ζ(2n - 1).

It follows from Eq. (<ref>) that the GL coefficients of order 2n ≥ 4 are given by

α_{2n ≥ 4} = -(2ρ/T^{2n-2}) c_{2n}.

In particular, this reproduces the well-known result for the GL coefficient of the Δ^4 term <cit.>: α_4 = [7ζ(3)/16] ρ/(πT)^2.

The result for the GL coefficient c_2 is also known <cit.>, but we provide its derivation to make the paper self-contained. For the coefficient c_2, we must proceed more carefully, because we cannot simply take λ → ∞ (physically, ω_D functions as a necessary UV cutoff). Using the expression in the first line of Eq. (<ref>) at x = 0, and then performing the integral gives

∂_{x^2} J_0^cut|_{x = 0} = (1/2)[ ln(λ/2) tanh(λ/2) - ∫_0^{λ/2} dt ln(t) sech^2(t) ].

Now we clearly see a logarithmic UV divergence in the first term, although we can still take λ → ∞ in the argument of tanh and on the remaining integral, the latter of which leads to

∫_0^∞ dt ln(t) sech^2(t) = -ln(4e^γ/π),

where γ is the Euler-Mascheroni constant. Thus we have c_2 = (1/2) ln(2e^γ λ/π), and the related result for BCS theory using Eq. (<ref>) is

α_2 = 1/G - ρ ln( 2e^γ ω_D/(πT) ).

The above result can be re-expressed in terms of the gap Δ_0 at T = 0, using Eq. (<ref>), giving α_2 = ρ ln(T/T_c), where T_c = e^γ Δ_0/π is the critical temperature of the superconducting state, at which α_2 changes sign. Combining Eq. (<ref>) with the formula for T_c, we also find ω_D/T_c = (π/2) e^{-γ + 1/(ρG)}.
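To see the convergence radius emerge numerically from the closed-form coefficients, one can evaluate c_{2n} from the formula above and check that the ratios c_{2n}/c_{2n+2} approach π^2. The following is a minimal sketch of ours (the crude partial sum for the zeta function is an arbitrary numerical choice, adequate for s ≥ 3):

```python
import math

def zeta(s, terms=20000):
    """Crude partial sum of the Riemann zeta function (fine for s >= 3)."""
    return sum(1.0 / k**s for k in range(1, terms))

def double_factorial(n):
    return math.prod(range(n, 0, -2)) if n > 0 else 1

def c(two_n):
    """c_{2n} for 2n >= 4, from the closed-form BCS result."""
    n = two_n // 2
    return ((-1) ** (n + 1) / n
            * double_factorial(2 * n - 3) / double_factorial(2 * n - 2)
            / math.pi ** (2 * n - 2)
            * (1 - 2.0 ** (-(2 * n - 1))) * zeta(2 * n - 1))

for two_n in (4, 8, 16, 32):
    print(two_n, c(two_n) / c(two_n + 2))   # ratios approach pi^2 ~ 9.8696
```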
§.§ Radius of convergence

The radius of convergence of the expansion in Eq. (<ref>) can essentially be read off from the coefficient formula (<ref>). We have (x^2)_conv = π^2 when viewed as a series in x^2, or equivalently, x_conv = π when viewed as a series in x. To make the argument rigorous, one can apply the ratio test, (x^2)_conv = lim_{n→∞} |c_{2n}/c_{2n+2}|. Finally, (<ref>) shows that the GL expansion is essentially given by the same series, with x = βΔ, so we have Δ_conv = x_conv T = πT.

Let us explain the exact meaning and implications of this result. The radius of convergence π applies technically to the series whose coefficients c_{2n} are given by Eq. (<ref>), but these are computed in the limit λ → ∞, in which the original quantity J_0^cut becomes ill-defined. Nonetheless, it is easy to see that the true coefficients in the expansion of J_0^cut, keeping λ finite, are bounded from above in magnitude by the c_{2n} in Eq. (<ref>). This follows because the integrand of Eq. (<ref>) is either entirely positive or entirely negative, so increasing the upper limit of integration must strictly increase its absolute value. Since the c_{2n} of Eq. (<ref>) lead to a series with radius of convergence π, and the series for J_0^cut with λ finite has its coefficients bounded by the former, we can actually conclude that the radius of convergence is at least π for the true expansion of J_0^cut, and hence the GL expansion has a radius of convergence of at least πT.

When using the GL expansion to approximate and solve the gap equation, the key question is whether the true gap Δ_exact falls within this convergence radius, Δ_exact < Δ_conv. When this condition is satisfied, the solution of the Nth-order GL expansion, Δ_GL,N, will converge to Δ_exact as N → ∞. This is the convergence of the GL solution. Note that this should be distinguished from the convergence of the GL expansion itself over some (possibly vanishing) range 0 ≤ Δ < Δ_conv for each fixed T.

The range of temperatures over which the GL solution converges is indicated in the left panel of Fig. <ref> (orange shaded region). We searched numerically for the temperature at which Δ_exact drops below Δ_conv over a range of parameter values for ρG. For each ρG, the value of ω_D/T_c is fixed by Eq. (<ref>), so ρG is the only parameter of the theory when all quantities are expressed relative to T_c. Note that the lower boundary of the convergence region is nearly constant over the given range of ρG (e.g., for ρG ≤ 0.21 the GL solution converges when T/T_c ≥ 0.53). The right panel of Fig. <ref> shows Δ vs. T computed from the gap equation and also from 6th- and 20th-order GL expansions. Both GL expansions are reliable near T_c, as expected, and the 20th-order GL solutions remain reliable over a wider range of T.
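The behavior on either side of the convergence radius can be made concrete by evaluating partial sums of ∑_n c_{2n} x^{2n} just below and just above x = π, reusing the c(two_n) helper from the previous sketch (again an illustrative check of ours, not taken from the original paper):

```python
# Partial sums of sum_{2n>=4} c_{2n} x^{2n}, just inside and outside x = pi.
for x in (3.0, 3.3):          # pi ~ 3.1416 sits between these two points
    total = 0.0
    for two_n in range(4, 61, 2):
        term = c(two_n) * x ** two_n
        total += term
    print(f"x = {x}: last term = {term:.3e}, partial sum = {total:.6f}")
# For x = 3.0 the late terms shrink geometrically; for x = 3.3 they
# eventually grow, signaling divergence beyond the radius x_conv = pi.
```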
§ GL EXPANSION IN THE NJL MODEL

In this section, we show how the expression J_2(x, y) appears in the GL expansion for the dynamical quark mass in the NJL model and how it can be regularized in this setting. We then solve for the nth-order GL coefficients and convergence radius.

§.§ Microscopic theory and free energy

The two-flavor NJL Lagrangian is given by <cit.>

ℒ_NJL = ψ̅( iγ^μ ∂_μ - m̂ ) ψ + G [ (ψ̅ψ)^2 + (ψ̅ iγ^5 τ^a ψ)^2 ].

Here ψ = (u, d)^⊤ is a two-flavor quark field, m̂ = diag(m_u, m_d) is the bare quark mass matrix in flavor space, G is a four-fermion coupling constant, and τ^a are the Pauli matrices acting in flavor space. This model retains the approximate chiral symmetry of QCD, which becomes exact in the chiral limit m̂ → 0. We will assume the chiral limit from now on.

In vacuum, chiral symmetry is spontaneously broken by the formation of a quark-antiquark pairing σ = ⟨ψ̅ψ⟩, called the chiral condensate. To study the behavior of the condensate σ as a function of temperature T and quark chemical potential μ, we expand Eq. (<ref>) about the ansatz

⟨ψ̅ψ⟩ = σ, ⟨ψ̅γ^5 τ^a ψ⟩ = 0 (a = 1, 2, 3).

We note that at sufficiently large densities, or even small finite densities and in the presence of a magnetic field, this ansatz is known to be disfavored over various spatially inhomogeneous condensates within the mean-field approximation <cit.>, the simplest being the so-called chiral density wave, given by ⟨ψ̅ψ⟩ + i⟨ψ̅γ^5 τ^3 ψ⟩ = σ e^{iqz}. In this paper, we restrict to the homogeneous case for simplicity.

In the mean-field approximation, one can show that the free energy is given by <cit.>

Ω_pre-reg = M^2/(4G) - 2N_f N_c ∫ d^3k/(2π)^3 [ E_k + ∑_{ζ = ±1} T ln( 1 + e^{-β(E_k + ζμ)} ) ],

where M = -2Gσ can be interpreted as the dynamical mass of quarks due to the chiral condensate, N_f = 2 and N_c = 3 are the numbers of flavors and colors, respectively, and E_k = √(k^2 + M^2). Making the transformations t = β|k| and y = βμ, we find

Ω_pre-reg = M^2/(4G) - (N_f N_c/π^2) T^4 J_2(βM, βμ).

Here, the divergence of J_2 reflects the zero-point vacuum energy, and this divergence must be regularized in order to carry out numerical calculations. Several schemes are in common use <cit.>. For consistency with the last section, we will impose a simple momentum cutoff Λ, and take Λ → ∞ on any terms that remain finite in this limit. Such a cutoff clearly violates Lorentz invariance, which can lead to undesired consequences when the condensate under consideration is inhomogeneous <cit.>. In that context, covariant regularization methods, such as the Schwinger proper time method <cit.>, are typically used instead. Since we consider only homogeneous condensates in this paper, the following analytical results based on a simple cutoff are physically meaningful. However, the numerical accuracy of the approximation Λ = ∞ is somewhat limited in this setting, as we will discuss in Sec. <ref>. It is also possible to solve for the GL coefficients using the proper time method, and in fact, that allows for analytical formulas for the GL coefficients even without taking the limit Λ → ∞; these results are given in Appendix <ref>. For the sake of accuracy, we use the proper time method in the numerical results presented in Fig. <ref>.

Imposing the momentum cutoff |k| < Λ in Eq. (<ref>), we find

Ω^cut = M^2/(4G) - (N_f N_c/π^2) T^4 J_2^cut(βM, βμ; βΛ).

§.§ nth-order coefficient formulas

As in the previous case, the GL expansion takes the form Ω = α_2 M^2 + α_4 M^4 + ⋯. The coefficient α_4 was previously computed in Ref. <cit.>, but the expression for the generic coefficient α_{2n} has not been calculated explicitly, to the best of our knowledge. These coefficients are determined by the expansion coefficients of J_2^cut(x, y; λ) in powers of x,

J_2^cut(x, y; λ) = c_2 x^2 + c_4 x^4 + ⋯, via c_{2n} = (1/n!) ∂^n_{x^2} J_2^cut(x, y; λ)|_{x = 0},

which we will now derive. Finally, once the c_{2n} coefficients are known, the α_{2n} coefficients follow immediately from Eq. (<ref>),

α_{2n} = δ_{n1}/(4G) - (N_f N_c/π^2) c_{2n}/T^{2n-4}|_{y = βμ, λ = βΛ}.

The two differences from the BCS case are that we now have J_2 rather than J_0, and we now have y ≠ 0.
The appearance of J_2 adds a factor of t^2 to the integrand, which strengthens the UV divergence, and as a result, we will need to keep λ finite for the c_4 coefficient in addition to c_2. Having y ≠ 0 complicates the algebra, but does not significantly affect the overall approach.

Following the same procedure as in the BCS case, we start by taking a single derivative with respect to x^2, giving

∂_{x^2} J_2^cut = ∫_0^λ dt ∑_{ζ = ±1} [ t^2/(4√(x^2 + t^2)) ] tanh( (√(x^2 + t^2) + ζy)/2 ) = ∫_0^λ dt ∑_{ζ = ±1} [ t^2/√(x^2 + t^2) ] ∑_{k = 0}^∞ (√(x^2 + t^2) + ζy)/(ω̅_k^2 + (√(x^2 + t^2) + ζy)^2),

where ω̅_k = (2k + 1)π, as before. After applying the identity

∑_{ζ = ±1} (1/b)·(b + ζc)/(a^2 + (b + ζc)^2) = ∑_{ζ = ±1} 1/(b^2 + (a + iζc)^2),

we obtain

∂_{x^2} J_2^cut = 2 Re ∑_{k = 0}^∞ ∫_0^λ dt t^2/(x^2 + t^2 + (ω̅_k + iy)^2).

At this point, in close analogy to the comment after Eq. (<ref>), we may see a hint of the final result for the radius of convergence. If one expands the integrand in powers of x^2, the resulting series has radius of convergence x_conv^2 = |t^2 + (ω̅_k + iy)^2|. Taking t → 0 then gives x_conv^2 = |ω̅_k + iy|^2, the minimum of which is x_conv^2 = π^2 + y^2; recalling that x = βM and y = βμ, this corresponds to M_conv = √(μ^2 + (πT)^2). However, this heuristic argument is fairly crude, partly because the quantity |t^2 + (ω̅_k + iy)^2| need not be minimized in the limit t → 0. We therefore proceed to the rigorous proof.

Taking n - 1 more derivatives of Eq. (<ref>) with respect to x^2, introducing a variable s = t/(ω̅_k + iy), and letting λ → ∞, we find

∂^n_{x^2} J_2^cut|_{x = 0} = 2(-1)^{n-1}(n-1)! Re ∑_{k = 0}^∞ [ 1/(ω̅_k + iy)^{2n-3} ] ∫_0^∞ ds s^2/(s^2 + 1)^n.

The integral is easily performed by using Eq. (<ref>), giving

∫_0^∞ ds s^2/(1 + s^2)^n = [(2n - 5)!!/(2n - 2)!!] (π/2).

Notice that the sum in Eq. (<ref>) converges when n ≥ 3. Identifying the sum with the standard series representation of the nth-order polygamma function ψ^(n)(z) = (-1)^{n+1} n! ∑_{k=0}^∞ 1/(z + k)^{n+1}, we finally find

c_{2n ≥ 6} = [ (-1)^n/(n! 2^n (2π)^{2n-4} (2n - 4)!!) ] Re ψ^{(2n-4)}( 1/2 + iy/(2π) ).

For c_2 and c_4, it is more convenient to avoid writing J_2 as a Matsubara sum, starting instead from the expression

J_2^cut = ∫_0^λ dt t^2 [ √(x^2 + t^2) + ∑_{ζ = ±1} ln( 1 + e^{-(√(x^2 + t^2) + ζy)} ) ].

For c_2, it is straightforward to calculate

∂_{x^2} J_2^cut|_{x = 0} = λ^2/4 + (1/2) ∑_{ζ = ±1} [ λ ln( 1 + e^{-λ + ζy} ) - Li_2( -e^{-λ + ζy} ) + Li_2( -e^{ζy} ) ],

where Li_n(x) = ∑_{k=1}^∞ x^k/k^n is the nth polylogarithm. The first two terms under the sum in Eq. (<ref>) vanish as λ → ∞, and for the last term we can apply the identity <cit.>

Li_n(-e^y) + (-1)^n Li_n(-e^{-y}) = -[(2πi)^n/n!] B_n( 1/2 + y/(2πi) ),

where B_n(x) is the nth Bernoulli polynomial. The result is

c_2 = (1/4)( λ^2 - y^2 - π^2/3 ).

The c_4 coefficient is more subtle because a naive separation of Eq. (<ref>) into vacuum and medium contributions leads to IR divergences. An efficient way to compute this coefficient is to note that after commuting the derivatives ∂^2_{x^2} with t^2 in the integrand, we can transform them as ∂_{x^2} → (2t)^{-1}∂_t. This allows us to take the limit x → 0 and then take derivatives with respect to t. Performing only the right-most derivative, we find

∂^2_{x^2} J_2^cut|_{x = 0} = (1/4) ∫_0^λ dt t ∂_t (1/t)( 1 - ∑_{ζ = ±1} 1/(1 + e^{t + ζy}) ).
Integrating by parts, taking λ → ∞ on the boundary terms, and applying the formula <cit.>

lim_{λ→∞} [ ln(λ/(2π)) - ∫_0^λ (dt/t)( 1 - ∑_{ζ = ±1} 1/(1 + e^{t + ζy}) ) ] = Re ψ( 1/2 + iy/(2π) ),

where ψ(z) is the digamma function defined by ψ(z) = d ln Γ(z)/dz with Γ(z) the gamma function, we obtain

∂^2_{x^2} J_2^cut|_{x = 0} ≈ (1/4)[ 1 - ln(λ/(2π)) + Re ψ( 1/2 + iy/(2π) ) ]

for large λ, and accordingly,

c_4 = (1/8)[ 1 - ln(λ/(2π)) + Re ψ( 1/2 + iy/(2π) ) ].

Finally, applying Eq. (<ref>), we have

α_2 = 1/(4G) - (N_f N_c/π^2)(1/4)( Λ^2 - μ^2 - (π^2/3)T^2 ),
α_4 = -(N_f N_c/π^2)(1/8)[ 1 - ln(Λ/(2πT)) + Re ψ( 1/2 + iμ/(2πT) ) ],
α_{2n ≥ 6} = -(N_f N_c/π^2) [ (-1)^n/(n! 2^n (2πT)^{2n-4} (2n - 4)!!) ] Re ψ^{(2n-4)}( 1/2 + iμ/(2πT) ).

§.§ Radius of convergence

We will show that the expansion of J_2^cut(x, y; λ) has radius of convergence x_conv = √(π^2 + y^2). Then, from Eq. (<ref>), it will follow that the GL expansion has convergence radius M_conv = (x_conv|_{y = βμ}) T = √(μ^2 + (πT)^2).

First, since the convergence of any series only depends on its infinite tail, and not on any finite initial segment, we can focus on the coefficients c_{2n ≥ 6} given by Eq. (<ref>). It is straightforward to show that the radius of convergence is at least √(π^2 + y^2). Using the inequality |Re(z)| ≤ |z| for complex z, we have

|c_{2n}| ≤ [ 1/(n! 2^n (2n - 4)!! (2π)^{2n-4}) ] | ψ^{(2n-4)}( 1/2 + iy/(2π) ) | ≡ b_{2n},

for 2n ≥ 6. Using the identity ψ^(n)(x) = (-1)^{n+1} n! ζ(n + 1, x), where ζ(s, a) = ∑_{k=0}^∞ 1/(k + a)^s is the Hurwitz zeta function, one can show that ψ^(n)(x)/ψ^{(n+2)}(x) → x^2/[(n + 1)(n + 2)] for Re(x) > 0, in the sense that the ratio of the quantities on the left and right sides of the arrow approaches unity as n → ∞. It follows that

lim_{n→∞} |b_{2n}/b_{2n+2}| = π^2 + y^2.

Thus, the series for J_2^cut(x, y; λ) in Eq. (<ref>) is bounded in magnitude by a series with radius of convergence x_conv = √(π^2 + y^2), so the original series has a convergence radius of at least this value. In fact, one can show the exact convergence radius is indeed √(π^2 + y^2), but the proof is more involved. In particular, if we do not take the upper bound of the coefficients using |Re(z)| ≤ |z|, then the ratio test does not work, because the quantity [Re ψ^{(2n-4)}(1/2 + iy/(2π))]/[Re ψ^{(2n-2)}(1/2 + iy/(2π))] oscillates erratically. One can instead use the Cauchy-Hadamard theorem; we sketch this proof in Appendix <ref>.
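The root test underlying this claim is easy to probe numerically. The sketch below evaluates |c_{2n}|^{1/(2n)} from the closed-form coefficients using mpmath's polygamma function psi(m, z), which accepts complex arguments, and compares it against 1/√(π^2 + y^2). The specific value of y and the sampled orders are arbitrary illustrations of ours:

```python
import mpmath as mp

y = 1.5  # y = beta*mu, arbitrary illustrative value

def c2n(n):
    """c_{2n} for 2n >= 6 from the closed-form NJL result (momentum cutoff)."""
    m = 2 * n - 4
    pref = mp.mpf(-1) ** n / (mp.factorial(n) * 2 ** n
                              * (2 * mp.pi) ** m * mp.fac2(2 * n - 4))
    return pref * mp.re(mp.psi(m, mp.mpf('0.5') + 1j * y / (2 * mp.pi)))

target = 1 / mp.sqrt(mp.pi ** 2 + y ** 2)
for n in (5, 10, 20, 40):
    root = abs(c2n(n)) ** (mp.mpf(1) / (2 * n))
    # the 2n-th roots tend toward 1/sqrt(pi^2 + y^2), with mild oscillations
    print(n, root, target)
```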
Although the results of this section were computed using a simple momentum cutoff Λ, the radius of convergence is unaffected when using Schwinger proper time regularization. While the former cutoff scheme involves taking the limit Λ → ∞ to obtain parts of c_{2n} and α_{2n} analytically, the latter scheme allows deriving their analytic expressions even without taking such a limit for the cutoff parameter Λ. As shown in Appendix <ref>, the α_{2n ≥ 6} coefficients are almost the same as in Eq. (<ref>), except with small corrections proportional to inverse powers of Λ. The key fact is that these correction terms decay faster than the α_{2n} coefficients calculated in this section, so they do not increase the convergence radius. We prove these statements in Appendix <ref>.

Let us also mention that the assumption of Λ = ∞ made when deriving Eqs. (<ref>)–(<ref>) is only a rough approximation in the NJL setting. The values of the two parameters Λ and G are determined by a choice of values for M_0 and f_π, where M_0 is the dynamical quark mass and f_π is the pion decay constant at T = μ = 0. Choosing M_0 = 300 MeV and f_π = 88 MeV in the chiral limit <cit.>, we find Λ = 614 MeV and GΛ^2 = 2.15 <cit.>. With these values, one can numerically find T_c = 182 MeV at μ = 0, and μ_c = 311 MeV, where μ_c is the chemical potential at which M vanishes in a first-order transition with T = 0 fixed. Thus, Λ is not especially large on the scale of other characteristic quantities of the system, so taking Λ → ∞ is not fully justified. For instance, finding T_c by setting α_2 = 0 in Eq. (<ref>) gives T_c = 165 MeV. Although the series with coefficients given by Eqs. (<ref>)–(<ref>) converges with radius √(μ^2 + (πT)^2), its sum does not converge exactly to Ω, even within this radius. These issues are avoided when using Schwinger proper time regularization, described in Appendix <ref>, because there the finite size of Λ is captured in the analytical formulas for the coefficients. Therefore, the numerical results for this section, presented in Fig. <ref>, are computed using the Schwinger method. In this regularization scheme, again choosing M_0 = 300 MeV and f_π = 88 MeV, we find Λ = 634 MeV and GΛ^2 = 6.02.

We calculated T_c over the range 0 ≤ μ ≤ 312 MeV, and we determined the region in the μ-T plane below T_c where M < M_conv, i.e., where the GL solution converges (top panel, orange shaded region). The bottom panels of Fig. <ref> show M vs. T computed from the gap equation and also from the GL expansion at orders 6 and 20. For μ = 0 (bottom left panel), the GL solution converges only when T > 92 MeV (right of the vertical line); both the 6th- and 20th-order solutions are very accurate near T_c, but the 6th-order solution becomes noticeably inaccurate at T ≲ 130 MeV. On the other hand, when μ = 300 MeV (bottom right panel), the phase transition is first order, and the 20th-order solution is a noticeable improvement over the 6th-order solution across the entire range 0 < T < T_c. Moreover, the GL solution converges over this entire range, and hence the 20th-order solutions are very reliable all the way down to T = 0. This can be understood by considering the radius of convergence formula at T = 0, which reduces to M_conv = μ, and noticing that M ≤ 300 MeV for μ > 300 MeV. As mentioned above and shown in Appendix <ref>, the radius of convergence when using the proper time method is also √(μ^2 + (πT)^2). One could also ask about the radius of convergence in the cutoff method if not using the approximation Λ = ∞, i.e., if instead one were to calculate the GL coefficients from Eq. (<ref>) numerically at finite Λ. It is possible to show that in this case the radius of convergence is still given by at least √(μ^2 + (πT)^2) when M > 0; we sketch a proof in Appendix <ref>.

§ DISCUSSIONS AND OUTLOOK

In this paper, we studied the convergence of the GL expansion for two prominent examples—the BCS theory for superconductivity and the NJL model for chiral symmetry breaking at finite temperature T and chemical potential μ. We have shown that the convergence radii are given by Δ_conv = πT and M_conv = √(μ^2 + (πT)^2), respectively. The difference between these two expressions can be understood physically as follows. In the BCS theory, Cooper pairing occurs close to the Fermi surface, so μ dependence enters only through the density of states, which is universal for all the GL coefficients. In the NJL model, on the other hand, the chiral condensate is a pairing between quarks and antiquarks inside the Dirac sea, so the GL coefficients depend rather nontrivially on μ, as shown in Eqs. (<ref>)–(<ref>).
We also note that this difference leads to the known fact that the BCS instability appears for an infinitesimally small attractive interaction between fermions, while generation of the chiral condensate requires a sufficiently strong attractive interaction between quarks and antiquarks <cit.>.

In this paper, we focused on the two-flavor NJL model, where only even powers of M appear in the GL expansion. In the three-flavor case, on the other hand, odd powers in M can also appear in the expansion <cit.> due to the instanton-induced interaction (or Kobayashi-Maskawa-'t Hooft interaction) <cit.>. It would be interesting to see how the radius of convergence may be affected by such terms. One can also ask about the radius of convergence of the GL theory for the chiral phase transition in QCD per se. For the mean-field approximation to be well justified, one can take the large-N_c limit <cit.>. Because the radius of convergence for the dynamical quark mass in the NJL model above does not depend on the details of the model, such as the four-fermion coupling constant G and cutoff Λ, [Strictly speaking, this claim is valid for the Schwinger regularization, but not necessarily for the cutoff method. In the latter case, one can show that this radius of convergence holds whenever T ≥ μ/π or T ≤ (1/π)√(Λ^2/2 - μ^2), and it turns out that one of these conditions is always satisfied when T < T_c. This need not be the case, however, for other values of Λ and G. See Appendix <ref>.] one may conjecture that large-N_c QCD has the same radius of convergence, M_conv = √(μ^2 + (πT)^2). [The chiral phase transition is either first or second order for large-N_c QCD in the chiral limit, depending on whether it coincides with the deconfinement transition <cit.>. For the former case, the jump in M at the phase transition has to be smaller than the radius of convergence in order for the GL solution to converge.]

It would be interesting to extend our approach to the diquark condensate for color superconductivity <cit.>, the interplay between chiral and diquark condensates <cit.>, the inhomogeneous condensates like Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) pairing <cit.> or inhomogeneous chiral condensates <cit.>, and other orders in condensed matter systems. We defer these questions to future work.

§ ACKNOWLEDGMENT

We thank T. Brauner for useful correspondence and comments on the manuscript. N. Y. is supported in part by the Keio Institute of Pure and Applied Sciences (KiPAS) project at Keio University and JSPS KAKENHI Grant Numbers JP19K03852 and JP22H01216. Furthermore, we acknowledge support from JSPS through the JSPS Summer Program 2023, where this collaboration was initiated.

§ GL COEFFICIENTS IN THE NJL MODEL FROM THE PROPER TIME METHOD

The general idea of the proper time regularization amounts to making the following replacement by introducing a UV cutoff Λ:

|E_k|^{-2n} = (1/Γ(n)) ∫_0^∞ ds s^{n-1} e^{-sE_k^2} → (1/Γ(n)) ∫_{1/Λ^2}^∞ ds s^{n-1} e^{-sE_k^2}.

For our purpose, we need the n = -1/2 case,

|E_k| → -(1/(2√π)) ∫_{1/Λ^2}^∞ (ds/s^{3/2}) e^{-sE_k^2},

for only the first term of Eq. (<ref>), where we used Γ(-1/2) = -2√π. The replacement in Eq. (<ref>) arises more naturally when deriving the free energy Ω in the proper time framework, where the divergences appear as integrals of the form ∫_0^∞ (ds/s^{3/2}) e^{-sE_k^2} (see, e.g., Appendix A of Ref. <cit.>).
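A quick numerical check of this replacement is shown below: integrating by parts, the regularized n = -1/2 expression evaluates exactly to -(Λ/√π) e^{-(E/Λ)^2} + E erfc(E/Λ), which reduces to |E| up to an E-independent constant as Λ → ∞. The test values below are arbitrary choices of ours:

```python
import math
from scipy.integrate import quad

E, Lam = 1.7, 5.0   # test energy and proper-time cutoff (arbitrary units)

integral, _ = quad(lambda s: s**-1.5 * math.exp(-s * E**2),
                   1.0 / Lam**2, math.inf)
lhs = -integral / (2.0 * math.sqrt(math.pi))
rhs = -(Lam / math.sqrt(math.pi)) * math.exp(-(E / Lam)**2) \
      + E * math.erfc(E / Lam)
print(lhs, rhs)     # agree; as Lam -> infinity, rhs -> E - Lam/sqrt(pi)
```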
We can now express the free energy as a well-defined quantity,

Ω = M^2/(4G) - (N_f N_c/π^2) T^4 J_2^Schw(βM, βμ; βΛ),

where

J_ℓ^Schw(x, y; L) = ∫_0^∞ dt t^ℓ [ ( -(1/(2√π)) ∫_{1/L^2}^∞ (ds/s^{3/2}) e^{-s(x^2 + t^2)} ) + ∑_{ζ = ±} ln( 1 + e^{-(√(x^2 + t^2) + ζy)} ) ].

Let us introduce the function

F(z) = -(L/√π) e^{-(z/L)^2} + z erfc(z/L) + ∑_{ζ = ±1} ln( 1 + e^{-(z + ζy)} ),

where erfc(x) = (2/√π) ∫_x^∞ dt e^{-t^2} is the complementary error function, and we have suppressed the dependence on L and y to reduce clutter. One can show that t^2 F(√(x^2 + t^2)) is precisely the integrand of J_2^Schw(x, y; L), and this allows us to write

J_2^Schw = (1/2) ∫_{-∞}^{+∞} dt t^2 F(√(x^2 + t^2)).

Note also that F(z) is an even function of z that vanishes rapidly at ±∞, which are important properties in what follows. We then need to compute ∂^n_{x^2} F(√(x^2 + t^2))|_{x = 0} = 2^{-n}(t^{-1}∂_t)^n F(t). The n = 1 case combined with an integration by parts immediately yields

c_2 = -(1/4) ∫_{-∞}^{+∞} dt F(t).

For the higher coefficients, we can put (t^{-1}∂_t)^n into the form t^{-1}∂_t^{2n+1} using repeated integration by parts. One can show that <cit.>

∂^n_{x^2} ∫_{-∞}^{+∞} dt t^2 F(√(x^2 + t^2))|_{x = 0} = -[ 1/(2^n (2n - 4)!!) ] ∫_{-∞}^{+∞} dt t^{-1} ∂_t^{2n-3} F(t), n ≥ 2.

The above integrals ∫_{-∞}^{+∞} dt F(t) and ∫_{-∞}^{+∞} dt t^{-1}∂_t^n F(t) can be analytically performed for all n <cit.>:

∫_{-∞}^{+∞} dt F(t) = -(1/2)L^2 + y^2 + (1/3)π^2,

∫_{-∞}^{+∞} dt t^{-1}∂_t^n F(t) =
  2 ln( L e^{γ/2}/(4π) ) - 2 Re ψ( 1/2 + iy/(2π) ), for n = 1;
  (-2)^{(n+1)/2} (n - 3)!!/L^{n-1} + [ 2(-1)^{(n+1)/2}/(2π)^{n-1} ] Re ψ^{(n-1)}( 1/2 + iy/(2π) ), for n = 3, 5, ….

We therefore have

c_2 = (1/4)( (1/2)L^2 - y^2 - (1/3)π^2 ),
c_4 = (1/8)[ -ln( L e^{γ/2}/(4π) ) + Re ψ( 1/2 + iy/(2π) ) ],

and

c_{2n ≥ 6} = c_{2n}^vac + c_{2n}^med,
c_{2n}^vac = [ (-1)^n/(n!(2n - 4)!!) ] [ (2n - 6)!!/(4L^{2n-4}) ],
c_{2n}^med = [ (-1)^n/(n!(2n - 4)!!) ] [ 1/(2^n (2π)^{2n-4}) ] Re ψ^{(2n-4)}( 1/2 + iy/(2π) ).

It now follows from Eq. (<ref>) that the GL coefficients are given by

α_{2n} = δ_{n1}/(4G) - (N_f N_c/π^2) T^{4-2n} c_{2n}|_{y → βμ, L → βΛ}.

Explicitly,

α_2 = 1/(4G) - (N_f N_c/π^2)(1/4)( Λ^2/2 - μ^2 - (1/3)π^2 T^2 ),
α_4 = -(N_f N_c/π^2)(1/8)[ -ln( Λ e^{γ/2}/(4πT) ) + Re ψ( 1/2 + iμ/(2πT) ) ],
α_{2n ≥ 6} = -(N_f N_c/π^2) [ (-1)^n/(n!(2n - 4)!!) ] [ (2n - 6)!!/(4Λ^{2n-4}) + (1/(2^n (2πT)^{2n-4})) Re ψ^{(2n-4)}( 1/2 + iμ/(2πT) ) ].

Note that the only differences between these coefficients and those given in Eqs. (<ref>)–(<ref>) are in the terms involving Λ, as expected. We also note that these coefficients can be regarded as a special case of those derived in Refs. <cit.> for the case of an inhomogeneous chiral condensate in a magnetic field, after setting B = 0 and considering only terms in the GL expansion that do not contain any gradients.

Let us now focus on the coefficients c_{2n ≥ 6} and consider the convergence of the corresponding series. Since each coefficient is a sum of two parts, c_{2n ≥ 6} = c_{2n}^vac + c_{2n}^med, the series for J_2^Schw(x, y; L) can be separated into two sub-series,

J_2^Schw(x, y; L) = c_2 x^2 + c_4 x^4 + c_6 x^6 + ⋯ = c_2 x^2 + c_4 x^4 + (c_6^vac x^6 + c_8^vac x^8 + ⋯) + (c_6^med x^6 + c_8^med x^8 + ⋯).

This rearrangement of terms is justified because rearrangements can only affect a conditionally convergent series (or more accurately, a series for which some rearrangement converges conditionally), and power series can only converge conditionally at their exact radius of convergence. Thus, rearranging a power series cannot change its convergence radius. It is easy to check that the first sub-series, whose coefficients are c_{2n}^vac, has infinite radius of convergence, for example by applying the ratio test.
Therefore, the convergence radius of the original series is determined by that of the second sub-series, whose coefficients are c_{2n}^med. But the c_{2n ≥ 6}^med are precisely the coefficients c_{2n ≥ 6} that were calculated in Sec. <ref> using the momentum cutoff, so the radius of convergence here is the same as in the previous case.

§ CONVERGENCE RADIUS FOR THE GL EXPANSION IN THE NJL MODEL

When computing the GL coefficients for the NJL model in Sec. <ref>, we approximated Λ as being very large compared to other characteristic quantities of the system, resulting in the coefficients given by Eqs. (<ref>)–(<ref>). Let us denote the radius of convergence associated with these coefficients by M_conv^cut,Λ→∞. One can also consider the radius of convergence of the GL expansion obtained using cutoff regularization, but without taking the limit Λ → ∞ (although we have not found simple analytical formulas for the coefficients in this case). Let us denote the latter radius of convergence by M_conv^cut. Finally, let us denote by M_conv^Schw the radius of convergence of the GL expansion when using the Schwinger proper time regularization scheme. Using the above notation, we can summarize the previous results as follows:

M_conv^Schw =^(i) M_conv^cut,Λ→∞ ≥^(ii) √(μ^2 + (πT)^2).

Relation (i) was shown at the end of Appendix <ref>, and (ii) was shown at the end of Sec. <ref>. In the first part of this appendix, we show that equality holds in (ii), i.e., M_conv^cut,Λ→∞ = √(μ^2 + (πT)^2). In the second part of this appendix, we consider the case of cutoff regularization without assuming Λ → ∞, and we show that M_conv^cut ≥ √(μ^2 + (πT)^2) over the region in the μ-T plane where M > 0, after fixing Λ and G such that M_0 = 300 MeV and f_π = 88 MeV.

§.§ Cutoff regularization with Λ → ∞

According to the Cauchy-Hadamard theorem, the radius of convergence of the series c_2 x^2 + c_4 x^4 + ⋯ is given by

(x_conv)^{-1} = lim sup_{n→∞} |c_{2n}|^{1/(2n)}.

Applying this theorem to the c_{2n} given by Eq. (<ref>), and using the formula ψ^(n)(x) = (-1)^{n+1} n! ζ(n + 1, x) and the fact that |a|^{1/(2n)} → 1 as n → ∞ for any a ≠ 0, we find

(x_conv)^{-1} = (1/(2π)) lim sup_{n→∞} [ (2n - 5)!!/(2n)!! ]^{1/(2n)} | Re ∑_{j=0}^∞ 1/(j + 1/2 + iy/(2π))^{2n-3} |^{1/(2n)}.

It is easy to show that the first factor in the limit approaches 1. For the second factor, the terms except for j = 0 in the sum over j become negligible as n → ∞, so we have

(x_conv)^{-1} = (|z|/(2π)) lim sup_{n→∞} | Re(ẑ^{2n-3}) |^{1/(2n)},

where z = 1/(1/2 + iy/(2π)) and ẑ = z/|z| = e^{2πiϕ} is a complex phase of unit magnitude. The result will follow if we can show that the remaining lim sup evaluates to unity. First, we have |Re(ẑ^{2n-3})|^{1/(2n)} ≤ |ẑ^{2n-3}|^{1/(2n)} = 1, so we must only show that |Re(ẑ^{2n-3})|^{1/(2n)} contains a subsequence converging to 1. If ϕ is rational, then ẑ^{2n-3} will cycle through finitely many values infinitely many times. Letting a = |Re(ẑ)|, there is a subsequence a^{1/(2n)}, which converges to 1. If ϕ is irrational, then the set of points {e^{2πiϕn} | n ∈ ℕ} is dense in the unit circle, and hence so is the set {e^{2πiϕ(2n-3)} | n ∈ ℕ}. Thus we can choose a subsequence {a_n} of ẑ^{2n-3} whose elements all have real part at least 1/2, and then |Re(a_n)|^{1/(2n)} → 1.

§.§ Cutoff regularization with finite Λ

Starting from Eq. (<ref>) and repeating the calculation without taking λ → ∞, we find

c_{2n ≥ 6} = [ 2(-1)^{n-1}/n ] Re ∑_{k=0}^∞ (1/A_k^{2n-3}) ∫_0^{λ/A_k} ds s^2/(1 + s^2)^n,

where we have defined A_k := (2k + 1)π + iy. Because λ/A_k is complex, the integral in Eq. (<ref>) must now be interpreted as a contour integral.
Let C_k be some contour that starts at the origin, ends at λ/A_k, and avoids the poles at s = ±i; we will consider two different choices of C_k. Let |C_k| denote the length of the contour C_k. Applying the Cauchy-Hadamard theorem (<ref>), we have

(x_conv)^{-1} ≤ lim sup_{n→∞} [ ∑_{k=0}^∞ (1/|A_k|^{2n-3}) |C_k| max_{s ∈ C_k} | s^2/(1 + s^2)^n | ]^{1/(2n)}.

Recall that Eq. (<ref>) holds for any valid choice of contours C_k. Let us first consider contours that follow the straight line from the origin to λ/A_k. If y ≤ π, then y ≤ (2k + 1)π for all k, and it is easy to show that |1 + s^2| ≥ 1 for all s ∈ C_k. We therefore have

(x_conv)^{-1} ≤ lim sup_{n→∞} [ ∑_{k=0}^∞ (1/|A_k|^{2n-3}) |λ/A_k| |λ/A_k|^2 ]^{1/(2n)} = 1/|A_0|,

which shows that x_conv ≥ √(π^2 + y^2) if y ≤ π. More generally, for any y ∈ ℝ, one can derive the weaker lower bound

min_{s ∈ C_k} |1 + s^2| ≥ 2(2k + 1)πy/|A_k|^2,

from which we find

(x_conv)^{-1} ≤ lim sup_{n→∞} [ ∑_{k=0}^∞ (1/|A_k|^{2n-3}) |λ/A_k| |λ/A_k|^2 ( |A_k|^2/(2(2k + 1)πy) )^n ]^{1/(2n)} = 1/√(2πy).

Thus, we always have x_conv ≥ √(2πy), or equivalently, M_conv ≥ √(2πμT). It is easy to check that the previous lower bound x_conv ≥ √(π^2 + y^2), which holds (at least) if y ≤ π, is always an improvement over √(2πy) (except precisely at y = π, where the two bounds are equal).

Finally, by considering a different choice of the contour C_0, one can show that the improved lower bound x_conv ≥ √(π^2 + y^2) holds over a larger region of parameter space. Let C_0 now be the contour that runs first along the positive real axis from 0 to |λ/A_0|, and then along a circular arc of constant radius from |λ/A_0| to λ/A_0 (we let C_k for k > 0 be the same as before). It is then easy to show that if |λ/A_0| ≥ √2, then again |1 + s^2| ≥ 1 for all s ∈ C_0. We also have |C_k| ≤ (1 + π/2)|λ/A_k| for all k, from which we find

(x_conv)^{-1} ≤ lim sup_{n→∞} [ ∑_{k=0}^∞ (1/|A_k|^{2n-3}) (1 + π/2) |λ/A_k| |λ/A_k|^2 max_{s ∈ C_k} 1/|1 + s^2|^n ]^{1/(2n)} = 1/|A_0|.

We have shown that x_conv ≥ √(2πy) for all y ∈ ℝ, and that the improved lower bound x_conv ≥ √(π^2 + y^2) holds if y ≤ π or |λ/A_0| ≥ √2 (in fact, these conditions are sufficient, but not necessary). The condition |λ/A_0| ≥ √2 is equivalent to y ≤ √(λ^2/2 - π^2), and it follows that

T ≥ μ/π or T ≤ (1/π)√(Λ^2/2 - μ^2) ⟹ M_conv ≥ √(μ^2 + (πT)^2).

Figure <ref> shows the region in the μ-T plane where the two conditions in Eq. (<ref>) hold, along with T_c computed using cutoff regularization.
http://arxiv.org/abs/2312.16372v1
{ "authors": [ "William Gyory", "Naoki Yamamoto" ], "categories": [ "hep-th", "cond-mat.supr-con", "hep-ph", "nucl-th" ], "primary_category": "hep-th", "published": "20231227011952", "title": "Convergence of Ginzburg-Landau expansions: superconductivity in the BCS theory and chiral symmetry breaking in the NJL model" }
Foundation models (FMs) have shown great success in natural language processing, computer vision, and multimodal tasks. FMs have a large number of model parameters, thus requiring a substantial amount of data to help optimize the model during the training. Federated learning has revolutionized machine learning by enabling collaborative learning from decentralized data while still preserving the data privacy of clients. Despite the great benefits foundation models can gain when empowered by federated learning, they face severe computation, communication, and statistical challenges. In this paper, we propose a novel two-stage federated learning algorithm called FedMS. A global expert is trained in the first stage and a local expert is trained in the second stage to provide better personalization. We construct a Mixture of Foundation Models with these two experts and design a gate neural network with an inserted gate adapter that joins the aggregation every communication round in the second stage. To further adapt to edge computing scenarios with limited computational resources, we design a novel Sparsely Activated LoRA algorithm that freezes the pre-trained foundation model parameters, inserts low-rank adaptation matrices into transformer blocks, and activates them progressively during the training. We employ extensive experiments to verify the effectiveness of FedMS; results show that FedMS outperforms other SOTA baselines by up to 55.25% in default settings.

Federated Learning, Foundation Model, Edge Computing

FedMS: Federated Learning with Mixture of Sparsely Activated Foundation Models

Panlong Wu, Kangshuo Li, Ting Wang, and Fangxin Wang, Member, IEEE

Panlong Wu is with the Future Network of Intelligence Institute and the School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, Shenzhen 518172, China. E-mail: panlongwu@link.cuhk.edu.cn. Kangshuo Li is with the School of Data Science, The Chinese University of Hong Kong, Shenzhen, Shenzhen 518172, China. E-mail: 24ganbatte@gmail.com. Ting Wang is with the School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, Shenzhen 518172, China. E-mail: 011tingwang@gmail.com. Fangxin Wang is with the School of Science and Engineering and the Future Network of Intelligence Institute, The Chinese University of Hong Kong, Shenzhen and Guangdong Provincial Key Laboratory of Future Networks of Intelligence. Email: wangfangxin@cuhk.edu.cn. Manuscript received xxx; revised xxx.
January 14, 2024

§ INTRODUCTION

Foundation Model (FM) has emerged as a potent solution to address the growing demand for machine learning services. It presents several advantages over its predecessors, the traditional smaller models. FM stands out primarily due to its extensive number of parameters, surpassing the capacity of earlier models. This massively increased parameter space allows FM to capture intricate patterns and relationships in the data, resulting in improved performance across various machine learning tasks. FM follows a distinct training methodology compared to smaller models. While smaller models often rely on task-specific training, FM employs a pre-training and fine-tuning strategy. Pre-training with large datasets allows FM to acquire a broad data understanding and representation learning capabilities. This pre-training phase acts as a stepping stone, equipping FM with substantial knowledge and context from diverse data sources. Consequently, when fine-tuning FMs for specific tasks, they derive significant advantages from the initial pre-training, leading to enhanced performance across a diverse set of tasks.

Federated learning (FL) has revolutionized the landscape of machine learning by enabling the collaborative training of a shared model across multiple edge devices without the need to share raw data. By adopting FL, we can utilize the distributed edge data while preserving data privacy and overcoming the limitations of centralized training approaches. This collaboration allows for collective learning from diverse datasets while respecting user privacy and local data ownership. FMs with a tremendous number of parameters are data-hungry, because they have a large parameter space to be optimized during the training. By combining the power of FM with the decentralized approach of FL, we can allow decentralized data from different sources to be used, thus enabling the enhanced generalization ability of FM. Each device in the federation contributes its local knowledge and data patterns to the training process, resulting in a more comprehensive and diverse understanding of the data. This combination allows us to harness the benefits of both FL's collaborative training across edge devices while preserving data privacy and FM's large parameter capacity and pre-training strategy.

Several challenges arise in the domain of FL with FM that make it hard to deploy in real-world applications.
* The first challenge lies within the substantial number of parameters possessed by FMs, which distinguishes them from traditional FL models that possess much fewer parameters. This differentiation introduces impediments in the areas of communication and networking, as the transmission of parameters of these FMs is significantly time-consuming for modern mobile networks.
* The second challenge arises from the huge computational resource requirements posed by FMs, particularly for edge devices with limited computing resources. The considerable computational costs associated with these FMs create difficulties in implementing FL on resource-constrained edge devices.
* The third challenge is that FL encounters statistical challenges due to non-IID decentralized data, potentially resulting in issues such as parameter divergence and data distribution biases, which can significantly harm the performance.

To fill the gap, this paper addresses the challenges associated with FL with FM by introducing the FedMS algorithm, an FL algorithm with a Mixture of Foundation Models that have Sparsely Activated Parameters. The proposed FedMS algorithm consists of two training stages. In the first training stage, each client owns a foundation model, and low-rank adaptation matrices are inserted into every transformer block of the foundation model. During the training, the pre-trained weights of the foundation model are frozen, and all the parameters of the inserted matrices are activated to better extract global information. In each communication round only the inserted matrices join the weight aggregation to reduce bandwidth consumption. The trained foundation model in the first stage will be frozen in the second stage and act as the global expert.

In the second training stage, we for the first time form a Mixture of Foundation Models system in FL, which specifically addresses the statistical challenges encountered in FL. We leverage the foundation model trained in the first stage as a global expert and introduce another local expert, which is a foundation model initialized from the weights of the global model, to provide better personalization. We design a gate model with a specially designed gate adapter inserted into it so that it can quickly adapt to changes in the relationship between the two experts and intelligently assign weights to the final decisions of the two experts. In each communication round, only the gate adapter's activated parameters join the aggregation to save communication resources.
To further tackle the computation challenges, we propose a Sparsely Activated LoRA algorithm that activates the inserted low-rank adaptation matrices in a progressive way through a controller to suit different edge resource conditions. In summary, the main contributions of this paper can be summarized as follows:

* We propose a communication- and computation-friendly two-stage personalized FL algorithm FedMS, which can capture the global feature information through collaborative learning and capture the local feature information through personalized learning.
* We propose a Sparsely Activated LoRA algorithm that sparsely activates the trainable low-rank decomposition matrices injected into foundation models in a progressive way through a self-defined controller to adapt to scarce computation and communication resources in edge computing scenarios.
* We propose a Mixture of Foundation Models algorithm, which is, to the best of our knowledge, the first work to construct a mixture of vision language foundation models in personalized federated learning to tackle the data heterogeneity in federated learning, and we further prove the effectiveness of FedMS through extensive experiments.

§ BACKGROUND AND RELATED WORK

§.§ Foundation Model

Recently, FMs have achieved remarkable success in various domains such as natural language processing, computer vision, and multimodal tasks. By utilizing deep learning techniques like self-supervised learning and contrastive learning, FMs with a massive number of model parameters are trained on large datasets. Consequently, these models exhibit strong generalization, feature extraction, and comprehension abilities. Various works have been done related to FMs in natural language processing. BERT <cit.>, short for Bidirectional Encoder Representations from Transformers, is an advanced natural language processing model introduced by Devlin et al. (2018). This model employs a transformer architecture and is pre-trained on extensive text data, using a masked language model pre-training objective. GPT-3 <cit.> is trained using a language modeling pre-training objective. By making the model do next token prediction, it can utilize the massive unlabeled data from the internet and has a powerful few-shot learning ability.

There are also various works related to visual language FMs. Contrastive Language Image Pre-training (CLIP) <cit.> is a famous FM proposed by OpenAI. This model uses a visual encoder and a text encoder to extract the semantic meaning and encode images and texts into image features and text features. Throughout the training process, contrastive learning is employed to maximize the similarity between related images and texts while minimizing the similarity between unrelated ones. DALL-E 3 <cit.> is a modern text-to-image system that has extraordinary prompt-following capability. It addresses the noisy and inaccurate image captions issue by training another specially designed image captioner.

§.§ Federated Learning

Federated learning <cit.> is a machine learning technique that enables the training on decentralized data while preserving the privacy of clients participating in the training. Typically, every client doesn't share their data but their private model after local training in each communication round. Despite the fact that FL has shown great potential in the Internet of Things, the financial field, and smart healthcare, it still faces many challenges. Many studies focus on solving the statistical challenges.
FL faces serious statistical challenges because the data distribution of the datasets is often non-IID, which can lead to weight divergence after model aggregation. Li et al. <cit.> propose FedProx, which handles the system heterogeneity by introducing an additional proximal term to prevent the local model updates from being far from the global model and thus can safely aggregate the local updates under statistical heterogeneity. Li et al. <cit.> introduce MOON, which uses the idea of contrastive learning to compare the representations learned by local models and the global model, inspired by the philosophy that the global model has better feature extraction ability than local models trained on skewed local datasets. Zhang et al. <cit.> design FedLC, which introduces a fine-grained calibrated cross-entropy loss to mitigate the local gradient deviation and gives theoretical proof of the deviation bound after calibration. Zec et al. <cit.> propose a personalized federated learning algorithm with a mixture of experts with a training pipeline of global expert training, local expert training, and mixer training.

The communication efficiency issue is also an important issue that many researchers focus on. Mao et al. <cit.> propose an Adaptive Quantized Gradient (AQG) algorithm to decide the level of quantization according to the gradient updates of heterogeneous clients. Huang et al. <cit.> propose a Residual Pooling Network (RPN) based on the approximation and selection of parameters, and apply it to CNN-based model FL training. Haddadpour et al. <cit.> introduce an algorithm with periodic compressed communication. Specifically, they introduce the FedCOM algorithm to tackle the homogeneous client situation and the FedCOMGATE algorithm to tackle heterogeneous client situations. Chen et al. <cit.> propose a federated learning algorithm that considers the weight quantization in wireless transmission and formulate the federated learning problem into a mixed-integer programming problem. Zhang et al. <cit.> introduce a CFEL algorithm that jointly considers cloud-based federated learning and edge-based federated learning. Qu et al. <cit.> design a partially synchronized federated learning algorithm to accelerate the federated learning training.

§.§ FL with FM

Not many works have been done related to FL with FM. Zhang et al. <cit.> propose a federated generative learning framework that utilizes an FM on the server to generate synthesized images given the prompts transmitted from the clients to improve the training performance of the model. Tao et al. <cit.> propose a PromptFL algorithm that replaces the model aggregation in traditional FL with prompt aggregation to reduce communication and computation costs. Cai et al. <cit.> design an AdaFL algorithm to fine-tune FMs for modern natural language processing tasks by inserting adapters into models, dividing clients into three groups, and observing each group's training accuracy to decide the best configuration for adapters. Lu et al. <cit.> propose a FedCLIP algorithm to insert adapters into the visual encoder of the FM CLIP and test it on datasets in different domains. Zhang et al. <cit.> introduce a Federated Instruction Tuning (FedIT) algorithm to leverage federated learning in instruction tuning of FMs to enhance their performance.
However, none of these works consider the cooperation of FMs and thus cannot achieve good performance under data-heterogeneous conditions on challenging datasets.

§ DESIGN OF FEDMS

§.§ Overview of FedMS

We consider a typical FL scenario with a total number of N clients with non-iid datasets {D_1, ..., D_N}. Our method FedMS consists of two stages of training as depicted in Fig. <ref>. In the first stage, low-rank adaptation matrices are inserted into every transformer block of the foundation model <cit.>. All the clients freeze the pre-trained foundation model weights and only update and upload the weights of the inserted matrices in every communication round. In this stage, every client collaboratively trains a global model W_g, which will be the global expert in stage two. The objective function of stage one can be expressed as

ℱ = (1/N) ∑_{i=1}^N 𝔼_{(x_i, y_i) ∼ d_i} ℒ_i(x_i, y_i; W_g),

where ℒ_i is the loss function of client i ∈ [N], x_i is its private data, and y_i is the corresponding label. d_i denotes the data distribution of client i.

In the second stage, each client utilizes the global expert trained in the first stage and trains its personalized model. This model consists of a global expert, a local expert, and a gate model, which together constitute a Mixture of Foundation Models. During the second stage, local experts are only trained on clients' local datasets using a novel Sparsely Activated LoRA algorithm and do not engage in the global aggregation. We optimize

ℱ_i = 𝔼_{(x_i, y_i) ∼ d_i} ℒ_i(x_i, y_i; W_i),

where W_i is the parameters of the local expert of client i. We design and insert a gate adapter into the gate model and aggregate all the parameters of the gate adapters in each communication round. We optimize the gate adapter parameters by

ℱ_gate = (1/N) ∑_{i=1}^N 𝔼_{(x_i, y_i) ∼ d_i} ℒ_i(x_i, y_i; G_i),

where G_i is the parameters of the gate model of client i. We propose two novel algorithms to tackle the challenges raised by FL with FM.

§.§ Sparsely Activated LoRA

According to <cit.>, the capability of a deep neural network tends to improve with the increase of the number of parameters of the model. FL with FM presents substantial challenges to the communication and computation of the distributed system. In traditional FL <cit.>, model parameters after local training are usually transmitted to the server for model weight aggregation in each communication round. This paradigm faces great challenges when FMs are trained in an FL procedure. Suppose we have an FM whose parameters are represented by 𝐖_𝐟. For full-parameter fine-tuning, we need to calculate and store another model 𝐖_𝐤, which has the same parameter size as 𝐖_𝐟, for each task k. FMs typically consist of over 10 million model parameters, resulting in significant transmission time requirements for modern mobile communication networks. Moreover, the training of FM necessitates substantial computation power and storage capacity, whereas edge devices typically possess limited computational capabilities and storage space. Therefore, it is imperative to develop an algorithm that mitigates the communication and computation costs associated with FL using FM. To tackle these challenges, we design a novel Sparsely Activated LoRA algorithm that can achieve SOTA performance while only tuning less than 1% of the total parameters of the FM.

Common pre-trained language models are capable of efficient learning even when randomly projected into a smaller subspace because they have a very low intrinsic dimension <cit.>. Edward J. Hu et al. <cit.> propose Low-rank adaptation (LoRA) to insert trainable low-rank decomposition matrices in FMs, enabling model optimization with minimal parameter tuning. Inspired by this, we insert trainable low-rank decomposition matrices in every layer of the visual encoder and the text encoder of the CLIP model. We denote the weight parameter matrix as W_0 ∈ R^{E×F} and the inserted low-rank decomposition matrices as ΔW, which can be calculated by two low-rank matrices ΔW = W_A W_B, W_A ∈ R^{E×H} and W_B ∈ R^{H×F} (H ≪ min(E, F)). For W_A, we employ a random Gaussian initialization, while W_B is initialized with zero. During training, W_0 is frozen and only W_A and W_B are optimized to save computation and storage costs. Suppose the input of the weight matrices and the inserted low-rank decomposition matrices is x. The output can be calculated by y = (W_0 + W_A W_B)x.
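A minimal sketch of such a LoRA-augmented linear layer is shown below. This is our own PyTorch illustration of the update y = (W_0 + W_A W_B)x; the class name, dimensions, and initialization scale are ours, not from the original implementation (note PyTorch stores linear weights as (out, in), so the factor order appears transposed in code):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen dense layer W_0 plus a trainable low-rank update."""

    def __init__(self, in_dim: int, out_dim: int, rank: int = 1):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_dim, in_dim), requires_grad=False)
        nn.init.normal_(self.weight)           # stands in for pre-trained W_0
        # A: Gaussian init, B: zeros, so the low-rank update starts at exactly zero.
        self.lora_A = nn.Parameter(torch.randn(rank, in_dim) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_dim, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Only lora_A / lora_B receive gradients; W_0 stays frozen.
        return x @ (self.weight + self.lora_B @ self.lora_A).t()

layer = LoRALinear(in_dim=512, out_dim=512, rank=1)
y = layer(torch.randn(4, 512))   # output equals the frozen layer's at initialization
```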
The procedure of the proposed algorithm is depicted in Fig. <ref>. We activate the low-rank decomposition matrices sparsely instead of activating them all during the training. At the beginning of the training stage, every layer of the visual encoder and the text encoder is inserted with frozen low-rank decomposition matrices. In deep neural networks, lower layers can better extract general information than higher layers <cit.>. During the first training stage, low-rank decomposition matrices in all layers are activated to better extract general information to form a global expert, while in the second stage, we unfreeze the low-rank decomposition matrices from higher layers to lower layers during the training. More specifically, we introduce a Capability Queue with a maximum queue length of Q. Image classification accuracies of clients are forwarded to the Capability Queue after every communication round. Once the Capability Queue is full, the earliest added accuracies will be popped out. We set an accuracy threshold δ to help decide whether the training has run into a bottleneck. The incremental factor Δ of client j in communication round i is

Δ_{i,j} = Acc_{i,j} - (1/Q) ∑_{t=i-Q}^{i-1} Acc_{t,j},

where Acc_{i,j} denotes the image classification accuracy of the model of client j in communication round i. If Δ_{i,j} < δ, the training is considered to have run into a bottleneck. Then the low-rank decomposition matrices in the next lower layer will be activated, as sketched below. The design of the algorithm is inspired by the fact that the performance of FM is usually affected by the model size, dataset size, and the quality of the dataset. Challenging datasets require more model parameters to be optimized to better extract the semantic meaning of the data. However, there is no silver-bullet configuration in the training of FL with FM. So we introduce a Capability Queue to intelligently decide the number of tuning parameters and enable the training on computation resource-limited devices.
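The controller logic can be summarized in a few lines of Python (a sketch under our own naming; the queue length, threshold, and the caller-side unfreezing step are illustrative placeholders):

```python
from collections import deque

class CapabilityQueueController:
    """Activates LoRA matrices one layer deeper whenever accuracy plateaus."""

    def __init__(self, num_layers, max_len=5, delta_threshold=0.002):
        self.history = deque(maxlen=max_len)   # oldest accuracy popped when full
        self.delta_threshold = delta_threshold
        self.next_layer = num_layers - 1       # unfreeze from higher to lower layers

    def step(self, accuracy):
        """Returns the index of a newly activated layer, or None."""
        activated = None
        if len(self.history) == self.history.maxlen:
            incremental = accuracy - sum(self.history) / len(self.history)
            if incremental < self.delta_threshold and self.next_layer >= 0:
                activated, self.next_layer = self.next_layer, self.next_layer - 1
        self.history.append(accuracy)
        return activated   # caller sets requires_grad=True on that layer's matrices
```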
§.§ Mixture of Foundation Models
In traditional FL, a global model is trained using the decentralized data of clients. Only model weights are aggregated at the central server, while the local data of clients are kept private to ensure clients' data privacy. This paradigm faces statistical challenges, especially when the data distribution of clients is non-iid. Such non-iid data distributions can cause weight divergence during training <cit.> and significant performance drops. Moreover, training a single global model and applying it to all clients cannot suit different clients' needs when their data follow different distributions. Training personalized models while still benefiting from a global model is essential to providing better performance for different clients.

To tackle this challenge, we design a novel Mixture of Foundation Models (MoFM) algorithm that utilizes one FM as the global expert and another FM as the local expert, thus creating a mixture of foundation models that simultaneously learns personalized feature information as well as global feature information on each client. As shown in Fig. <ref>, in the first stage of training, every client collaboratively trains a global FM ζ_g with weight W_g. Low-rank decomposition matrices are inserted in every layer of the visual encoder and the text encoder. This global FM acts as the global expert. In the second stage, a local expert ζ_i with weight W_i is created for each client i to cooperate with the global expert. More specifically, the local experts have the same neural network architecture as the global expert and are initialized with the weights of the global expert. A gate function G_i with weight ξ_i for each client i is a neural network introduced to control the relative contribution of the global expert and the local expert to the final image classification decision for different images. We denote the image and text features extracted by the global expert as V_g and T_g, and those extracted by the local expert of client i as V_i and T_i. The final cosine similarity of the image and text features extracted from the dataset of client i is Õ_i = λ_i⟨V_g, T_g⟩ + (1-λ_i)⟨V_i, T_i⟩, where λ_i ∈ (0,1) is a weight factor representing the mixing ratio of the global expert and the local expert of client i. A larger λ_i indicates that more global knowledge is used, while a smaller λ_i indicates that more personal knowledge is used. During the second training stage of FedMS, the weights of the global expert are frozen, and the local expert ζ_i and the gate model are optimized using only the local data of client i.

The adapter <cit.> has been a popular parameter-efficient tuning method for FMs. It works by inserting very few layers into an FM and optimizing the FM by tuning only these inserted parameters. We design a gate adapter to adapt to the local datasets while maintaining low computation and communication costs. In each communication round, clients' activated gate adapter parameters are aggregated to learn global feature information. We denote the gate adapter of gate i as Z_i and the gate adapter after aggregation as Z_g. Specifically, we construct the gate adapter with a Multi-Layer Perceptron (MLP), a batch norm layer, another MLP, a batch norm layer, and finally a Softmax function to ensure that the output lies in (0,1). The gate adapter aggregation procedure is Z_g = ∑_j=1^N (ℒ_i,j / ∑_j'=1^N ℒ_i,j') Z_j, where ℒ_i,j denotes the loss on the dataset of client j in communication round i. A sketch of the expert-mixing computation follows.
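A minimal sketch of the expert-mixing computation above; this is our illustration, not the paper's implementation. The gate input (global image features) and the hidden size are assumptions, and the features are assumed to be L2-normalized so that inner products are cosine similarities.

```python
import torch
import torch.nn as nn

class GateAdapter(nn.Module):
    """Lightweight adapter producing the mixing weight lambda_i in (0, 1)."""
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.BatchNorm1d(hidden),
            nn.Linear(hidden, 2), nn.BatchNorm1d(2),
            nn.Softmax(dim=-1),
        )

    def forward(self, feat):
        return self.net(feat)[:, :1]          # first softmax entry as lambda_i

def mofm_similarity(img, txt, global_expert, local_expert, gate):
    """Mix global and local similarities: lam * <V_g, T_g> + (1 - lam) * <V_i, T_i>."""
    V_g, T_g = global_expert(img, txt)        # global image / text features
    V_i, T_i = local_expert(img, txt)         # local image / text features
    lam = gate(V_g)                           # per-sample mixing ratio, shape (B, 1)
    sim_g = V_g @ T_g.t()                     # cosine similarities, features
    sim_i = V_i @ T_i.t()                     #   assumed L2-normalized
    return lam * sim_g + (1.0 - lam) * sim_i
```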
§ EXPERIMENTS
In this section, we conduct comprehensive experiments against SOTA baselines to verify the effectiveness of FedMS under different settings.

§.§ Experiment setup
§.§.§ Datasets
We select representative datasets that are widely used for image classification with the CLIP model. Specifically, we select Food101 <cit.>, a food classification dataset containing 101 classes; EuroSAT <cit.>, a dataset for land use and land cover classification containing 10 classes; and UCF101 <cit.>, a dataset for classifying human actions in the wild containing 101 classes.

§.§.§ Baselines
To verify the effectiveness of the proposed FedMS algorithm, we compare image classification accuracy with the following state-of-the-art baselines.
* Vanilla Fine-Tuning (FT): one of the most representative fine-tuning algorithms used in natural language processing and computer vision <cit.>.
* PromptFL <cit.>: this algorithm performs prompt tuning instead of tuning the parameters of the FM. Prompts from different clients are aggregated in every communication round.
* LayerFreeze Fine-Tuning (LFFT): this algorithm freezes several layers of the FM, and only the activated layers are aggregated in every communication round to save communication and computation resources.

§.§.§ Default training settings
We set the backbone of the visual encoder of CLIP to ViT-B/16. The batch size is set to 512. The learning rate is set to 2e^-4. The optimizer is Adam, with β_1=0.9, β_2=0.98, ϵ=1e^-6, and weight decay 0.05. The number of clients is set to 10. The rank of the inserted low-rank decomposition matrices is set to 1, and their dropout probability is set to 0.1. The numbers of communication rounds of training stages one and two are both set to 25.

§.§ Results Comparisons
We assume the non-iid data partition in FL to follow the Dirichlet distribution <cit.>. The α parameter of the Dirichlet distribution represents the degree of heterogeneity: the smaller α is, the more non-iid the data distributed across the clients will be (see the partition sketch below). We test the image classification accuracy on the various datasets at different non-iid levels.

§.§.§ Impact of different system settings
Impact of degrees of data heterogeneity. Fig. <ref> shows the image classification accuracy on the Food101, UCF101, and EuroSAT datasets under different degrees of data heterogeneity. Specifically, we set α to 0.1, 1, and 10. From the results, we find that FedMS achieves the highest accuracy among the four algorithms in all cases. FedMS surpasses the average accuracy of FT, PromptFL, and LFFT across all datasets by 20.57%, 30.06%, and 11.17%, respectively. Our method has minimum accuracy gains of 6.36%, 14.43%, and 12.71% and maximum accuracy gains of 9.62%, 23.82%, and 59.21% on the Food101, UCF101, and EuroSAT datasets, respectively, compared to the other baselines when α is set to 1.

By observing the accuracy at different data heterogeneity levels, we find that the performance of these algorithms is not always positively correlated with the data heterogeneity level. On the Food101 and UCF101 datasets, the accuracy of FedMS when α is 10 increases by 0.24% and 0.15%, respectively, compared to the case when α is 0.1. On the EuroSAT dataset, FedMS has an accuracy of 99.46% when α=1, but an accuracy of 97.67% when α=10. This is because higher data heterogeneity may lead to a smaller number of classes in clients' local datasets. For example, a client's local dataset may contain data from 10 classes at a low data heterogeneity level but only 3 classes at a high data heterogeneity level, which can lower the difficulty of identifying the right class given an image.
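For reference, a standard Dirichlet label partition is sketched below. This is a common protocol in the FL literature for simulating non-iid clients; the paper's exact partitioning script may differ.

```python
import numpy as np

def dirichlet_partition(labels, n_clients: int, alpha: float, seed: int = 0):
    """Split sample indices across clients with per-class proportions drawn
    from Dir(alpha); smaller alpha -> more skewed (more non-iid) partitions."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_idx = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        props = rng.dirichlet(alpha * np.ones(n_clients))     # class share per client
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for cid, part in enumerate(np.split(idx, cuts)):
            client_idx[cid].extend(part.tolist())
    return client_idx
```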
Impact of the number of clients. We test the performance of FedMS and the other baselines under different numbers of clients. Specifically, we set the number of clients to 5, 10, and 15. From Fig. <ref> we can conclude that FedMS has the highest accuracy for every client count. When the number of clients is 5, FedMS achieves accuracies of 91.67%, 85.18%, and 98.51% on the Food101, UCF101, and EuroSAT datasets, respectively, while the best accuracies of the other three baselines on these datasets are 87.56%, 77.15%, and 90.40%; FedMS surpasses the best performance of the other baselines by 4.11%, 8.03%, and 8.11%. With 10 clients, FedMS has maximum accuracy increases of 9.67%, 22.2%, and 55.24% compared to the three baselines. When the number of clients reaches 15, the accuracy of FedMS is 89.31%, 77.31%, and 97.03%, while the highest accuracies of the other three baselines are 87.44%, 62.86%, and 83.44%. The results show that FedMS works well at different scales.

§.§.§ Impact of training settings
Impact of visual encoders. We test the performance of FedMS under the visual encoders ViT-B/16 and ViT-B/32 to further verify its effectiveness across visual encoders. We observe from Fig. <ref> that FedMS achieves the highest image classification accuracy on all datasets using either the ViT-B/16 or the ViT-B/32 backbone. The results show that visual encoders with a larger number of parameters achieve better performance than smaller ones. The accuracies of the four algorithms increased by 1.01%, 11.85%, 4.57%, and 4.25% when using ViT-B/16 as the visual encoder on the UCF101 dataset, which confirms the theoretical analysis that larger models have better feature extraction ability. Our method surpasses the other baselines by a maximum of 55.25% and a minimum of 8.55% when using ViT-B/16, and by a maximum of 60.09% and a minimum of 6.01% when using ViT-B/32. The results show that FedMS can adapt to visual encoders of different scales.

Impact of accuracy thresholds. We further discuss the performance of FedMS under different values of the accuracy threshold δ. Typically, if clients have more computation resources, they can use a higher δ to encourage more inserted low-rank decomposition matrices to be activated; if their computing resources are scarce, they can set a small δ to save computation. We set the accuracy threshold to 0.001, 0.005, 0.01, and 0.02 to observe its effect on model performance. From Fig. <ref> we find that the accuracy does not always increase with δ. On the Food101 dataset, the accuracy is 94.05% when δ is 0.02 and 92.96% when δ is 0.001, an increase of 1.09%, and on the EuroSAT dataset, the accuracy increases from 99.23% to 99.39%. But on the UCF101 dataset, the accuracy decreases from 88.48% to 88.24% when δ increases from 0.001 to 0.02. The largest accuracy differences on the three datasets are 1.09%, 0.0024%, and 0.0016%. These results indicate the effectiveness of the design of SAL, which leverages the idea of curriculum learning by progressively increasing the number of activated low-rank adaptation matrices in the visual encoder and the text encoder. In the optimization process of an FM, optimization is often easy at the beginning but becomes more difficult as training progresses, making it harder to improve accuracy.
We increase the number of activated parameters through a controller to tackle this increasing difficulty of optimizing the FM as training progresses.

Impact of learning rates. We show the accuracy of FedMS on the three datasets under different learning rates. We set the learning rate to 2e^-3, 2e^-4, and 2e^-5. From Fig. <ref> we observe that when the learning rate is 2e^-5 the model shows a slow convergence rate; in particular, on the UCF101 dataset the accuracy is 70.61%, an accuracy loss of 16.72% compared to the accuracy at learning rate 2e^-4. When the learning rate is 2e^-3, the accuracy increases sharply in the first few epochs on all datasets but may oscillate severely in the following epochs, because a large learning rate can cause instability in model training. This phenomenon is especially common when training FMs: they usually have a large number of parameters, and the scale of the gradient computations grows accordingly, which may cause gradient explosion. Moreover, models with many parameters tend to have more complex optimization landscapes, making the training process more easily affected by noise and instability.

Impact of LoRA ranks. In FedMS, we incorporate LoRA for model training and optimization. The rank in LoRA determines the dimension of the inserted low-rank update, and different rank settings can affect training performance. We examine the training accuracy of FedMS under different LoRA ranks; specifically, the rank is set to 1, 4, 8, and 16. From Fig. <ref>, we observe that as the LoRA rank increases, FedMS shows a consistent trend in accuracy across the datasets: it performs best at LoRA rank 4, but the accuracy decreases as the rank increases further. This trend is particularly evident on the UCF101 dataset, where changes in LoRA rank can cause the final model's performance to fluctuate within a 5% range. One possible reason is that the semantic information in the UCF101 dataset is difficult for the model to capture and thus requires more training to optimize the model, while a high LoRA rank leads to a large optimization space that raises challenges for training. On the other hand, the information patterns in the Food101 and EuroSAT datasets are relatively simple and easy for the inserted low-rank matrices to capture at any of the tested ranks, so the model's learning performance is not affected significantly.

Impact of LoRA dropout rates. The dropout coefficient in LoRA controls the probability of applying dropout regularization during training. In FedMS, we apply dropout to prevent the neural networks from overfitting and to enhance the model's generalization ability. In our experiment, the group with a dropout rate of 0 serves as the control group, representing the accuracy of the model without dropout. The groups with dropout rates of 0.1, 0.3, and 0.5 serve as the experimental groups for investigating the impact of different dropout coefficients. From Fig. <ref>, we can clearly see that dropout has a significant effect on the final model accuracy when set to 0.1: it yields noticeable improvements of 0.5%, 2%, and 0.6% on the Food101, UCF101, and EuroSAT datasets, respectively. This improvement is quite significant, especially since the model already has high accuracy on these datasets.
Similar to the LoRA rank experiment, FedMS shows a decrease in accuracy on the UCF101 dataset at higher dropout rates. This is expected due to the more complex information patterns in the UCF101 dataset: setting the dropout too high increases the number of discarded neurons during model training, reducing the model's learning and expressive capacity. Conversely, due to the simpler information patterns in the Food101 and EuroSAT datasets, higher dropout rates contribute to improved accuracy of FedMS on these two datasets.

Impact of weight decays. Weight decay is used to mitigate overfitting: by adding a regularization term to the loss function, it encourages the model to keep small weight values during training. In our experiment, we set up five experimental groups with varying degrees of weight decay; the group with a weight decay of 0 represents the training performance of FedMS without weight decay. From Fig. <ref> we conclude that weight decay affects the datasets differently. On the UCF101 dataset, it has a significant impact on accuracy: when the weight decay is 0.5, there is an accuracy loss of 4.53% compared to the case when the weight decay is 0.1. Weight decay has less impact on the Food101 and EuroSAT datasets, where the accuracy varies by less than 1% across the different weight decay values. Overall, the performance of FedMS improves with the use of weight decay but decreases when the weight decay is set too high, because a large weight decay over-constrains the parameters and limits the effective information the model can learn.

Impact of backdoor attacks. To further verify the robustness of FedMS, we test the performance of FedMS and the three baselines under a backdoor attack. We suppose that a certain ratio of clients are malicious and controlled by the attacker. The controlled malicious clients upload the reversed weights of their local models to the weight aggregation in every communication round to attack the FL system. We set the ratio of malicious clients to the total number of clients to 20% and compare the accuracies. From Fig. <ref> we observe that the backdoor attack can severely harm the performance of all algorithms. FedMS has the highest accuracy on all datasets, with accuracy gains of 6.97%, 14.94%, and 14.23% on the three datasets compared to the highest accuracy of the three baselines. On the UCF101 dataset, FT, LFFT, and PromptFL achieve accuracies of 1.20%, 65.09%, and 57.90%, respectively. The three baselines have their lowest average accuracy, 36.11%, on the EuroSAT dataset, and their highest average accuracy, 86.88%, on the Food101 dataset. FMs are pre-trained on large-scale datasets, which gives them strong zero-shot ability. The FM we use has the highest zero-shot accuracy on the Food101 dataset among the three datasets, which makes it more resistant to backdoor attacks when trained on Food101. The FT algorithm is the most easily poisoned because it exposes all parameters to the attack. Although the LFFT algorithm performs worse than FT when there are no attacks, it outperforms FT under backdoor attacks because the frozen transformer layers help maintain the feature extraction ability of the FM.
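The attack model above can be made concrete with a small sketch (ours, purely illustrative): a FedAvg-style aggregation in which malicious clients upload sign-reversed updates.

```python
import torch

def aggregate(updates, malicious_ids=frozenset()):
    """Average per-parameter updates (dicts of tensors) across clients;
    clients in malicious_ids upload sign-reversed weights, modeling the
    backdoor attack described above."""
    agg = None
    for cid, upd in enumerate(updates):
        signed = {k: (-w if cid in malicious_ids else w) for k, w in upd.items()}
        if agg is None:
            agg = {k: w.clone() for k, w in signed.items()}
        else:
            for k, w in signed.items():
                agg[k] += w
    return {k: w / len(updates) for k, w in agg.items()}
```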
§.§.§ Comparison of resource consumption
Comparison of the number of training parameters. Tab. <ref> shows the proportion of the number of training parameters of FedMS and the other baselines relative to the total parameters of the FM. The results show that FedMS tunes only 0.1% and 0.108% of the parameters in training stages one and two, respectively, while FT and LFFT tune 100% and 50% of the parameters. Notice that although PromptFL saves 0.095% to 0.103% of trainable parameters compared to FedMS, this comes at the cost of a severe performance drop. FedMS achieves the best trade-off between training parameter savings and model performance.

Comparison of transmission time. Considering mobile computing and Internet-of-Things scenarios, we test the proposed FedMS under different bandwidth conditions. Specifically, we set the bandwidth to 0.1 MB/s, 1 MB/s, and 10 MB/s, which are typical bandwidths in modern mobile communication networks. Fig. <ref> shows the communication time per communication round of the proposed FedMS and the other baselines on the different datasets under different network bandwidths. Low bandwidth results in long data transmission times for parameter aggregation in each communication round and thus impairs the training efficiency of FL. From the results, we observe that when the bandwidth is 0.1 MB/s, FedMS finishes a communication round in 3.08 seconds, while the communication times for FT, LFFT, and PromptFL per communication round are 5,984 seconds, 2,992 seconds, and 0.32 seconds, respectively. FedMS saves 99.95% of the communication time compared to the FT algorithm. When the bandwidth is 1 MB/s or 10 MB/s, the differences in communication time between the algorithms shrink. FedMS and PromptFL have communication times of under 5 seconds per communication round in all bandwidth conditions. Although PromptFL has the minimum communication resource consumption, both FedMS and PromptFL are communication efficient in real-world scenarios, and FedMS has much higher accuracy on all datasets than PromptFL.

§.§.§ Ablation study
Impact of the Mixture of Foundation Models architecture. We compare the image classification accuracy of the model with the MoFM architecture using the visual encoder ViT-B/16 against the model without the MoFM architecture but with the visual encoder ViT-L/14, which has a much larger number of parameters. The first model is trained through both stages of FedMS, and the second model is trained through the first stage of FedMS only. The total number of epochs of both models is set to 50 to ensure a fair comparison. The comparison of the total number of parameters is shown in Tab. <ref>: the model with the MoFM architecture but the smaller visual encoder has 27.33% fewer parameters than the other model. We observe from Fig. <ref> that, despite having far fewer parameters, the model with the MoFM architecture trained through the complete two stages achieves higher accuracy on all datasets, with an average accuracy increase of 2.36%; more specifically, increases of 2.03%, 3.24%, and 1.82% on the Food101, UCF101, and EuroSAT datasets, respectively. This result demonstrates the superiority of the MoFM architecture, as it can intelligently assign different weights to different experts according to the characteristics of the images to be classified. This enables better generalization to data that is out of the distribution of clients' local datasets and better personalization to data that follows the local distribution of clients' datasets.
Impact of aggregation of gate parameters. We compare the image classification accuracy of FedMS and a variant of FedMS with all gate parameters activated in the second training stage. We observe from Tab. <ref> that by fine-tuning only the last layer of the gate adapter, we achieve accuracies of 93.73%, 87.33%, and 99.46% on the Food101, UCF101, and EuroSAT datasets, which are 0.09%, 0.66%, and 0.35% higher, respectively, than tuning the full parameters of the gate model. Moreover, we reduce the communication resource consumption by 97.69%. This is because the changing parameters of the local expert during training cause the relationship between the global expert and the local expert to keep shifting, which makes it hard for a fully tuned gate model to decide the decision weights it assigns to the two FMs and can cause training instability that harms performance. By freezing the gate parameters and inserting a lightweight gate adapter, the gate model can quickly adapt to the newly optimized parameters of the local expert while maintaining its feature extraction ability.

§ CONCLUSIONS
In this paper, we propose a novel FedMS algorithm that contains two training stages to address the computation, communication, and statistical heterogeneity challenges in federated learning with foundation models. In the first stage, we freeze the pre-trained FM weights and insert low-rank decomposition matrices into every transformer block. We activate all the inserted matrices to better extract global feature information, and in every communication round only the parameters of the low-rank decomposition matrices join the weight aggregation. In the second stage, we take the FM trained in the first stage as the global expert and construct another local expert to provide personalization for individual clients. We are the first to form the global expert and the local expert into a Mixture of Foundation Models (MoFM) in federated learning, and we specially design and insert a gate adapter into the gate model to help assign the decision weights of the two experts. Moreover, to enable efficient training in computation-scarce scenarios, we propose a Sparsely Activated LoRA (SAL) algorithm that activates the low-rank adaptation matrices progressively according to the past accuracies stored in the Capability Queue. We test the performance of FedMS through extensive experiments in various settings, and the results show that FedMS outperforms the other SOTA baselines.
http://arxiv.org/abs/2312.15926v1
{ "authors": [ "Panlong Wu", "Kangshuo Li", "Ting Wang", "Fangxin Wang" ], "categories": [ "cs.LG", "cs.DC" ], "primary_category": "cs.LG", "published": "20231226074026", "title": "FedMS: Federated Learning with Mixture of Sparsely Activated Foundations Models" }
Visual Spatial Attention and Proprioceptive Data-Driven Reinforcement Learning for Robust Peg-in-Hole Task Under Variable Conditions
André Yuji Yasutomi^1,2, Hideyuki Ichiwara^1,2, Hiroshi Ito^1,2, Hiroki Mori^3 and Tetsuya Ogata^2,4
Manuscript received: October 1, 2022; Revised December 13, 2022; Accepted January 29, 2023. This paper was recommended for publication by Editor Hyungpil Moon upon evaluation of the Associate Editor and Reviewers' comments. This work was supported by Hitachi, Ltd.
^1André Yuji Yasutomi, Hideyuki Ichiwara and Hiroshi Ito are with the R&D Group, Hitachi, Ltd., Japan (andre.yasutomi.ss@hitachi.com). ^2André Yuji Yasutomi, Hideyuki Ichiwara, Hiroshi Ito and Tetsuya Ogata are with the Graduate School of Fundamental Science and Engineering, Waseda University, Japan (ogata@waseda.jp). ^3Hiroki Mori is with the Future Robotics Organization, Waseda University, Japan (mori@idr.ias.sci.waseda.ac.jp). ^4Tetsuya Ogata is with the Waseda Research Institute for Science and Engineering (WISE), Waseda University, Japan.
Digital Object Identifier (DOI): https://doi.org/10.1109/LRA.2023.3243526
January 14, 2024
====================

In this paper, we present approximate distance and shortest-path oracles for fault-tolerant Euclidean spanners, motivated by the routing problem in real-world road networks. An f-fault-tolerant Euclidean t-spanner for a set V of n points in ℝ^d is a graph G=(V,E) where, for any two points p and q in V and a set F of f vertices of V, the distance between p and q in G-F is at most t times their Euclidean distance. Given an f-fault-tolerant Euclidean t-spanner G with O(n) edges and a constant ε, our data structure has size O_t,f(nlog n), and it allows an (1+ε)-approximate distance in G-F between s and s' to be computed in constant time for any two vertices s and s' and a set F of f failed vertices. Also, with a data structure of size O_t,f(nlog n loglog n), we can compute an (1+ε)-approximate shortest path in G-F between s and s' in O_t,f(log^2 n loglog n + k) time for any two vertices s and s' and a set F of failed vertices, where k denotes the number of vertices in the returned path.

§ INTRODUCTION
Computing the shortest path in a graph is a fundamental problem motivated by potential applications such as GPS navigation, route planning services, and POI recommendation for real-world road networks. Although the shortest path can be computed by Dijkstra's algorithm, this is not sufficiently efficient if the given graph is large.
This requires us to preprocess a given graph so that for two query vertices, their shortest path can be computed more efficiently. A data structure for this task is called a shortest-path (or distance) oracle. From the theoretical viewpoint, this problem is not an easy task. More specifically, any data structure for answering (2k+1)-approximate distance queries in O(1) time for n-vertex graphs must use Ω(n^1+1/k) space, assuming the 1963 girth conjecture of Erdős <cit.>. On the other hand, there are algorithms for this task that work efficiently for real-world road networks in practice, such as contraction hierarchies <cit.>, transit nodes <cit.>, and hub labels <cit.>. Although these algorithms work well in practice, there is still a lack of theoretical explanation for this. Bridging this theory-practice gap is one of the interesting topics in computer science, and indeed, there are lots of works on bridging the theory-practice gap in routing problems, such as <cit.>.

Dynamic networks: theory and practice. In real-life situations, networks might be vulnerable to unexpected accidents: edges or vertices might fail, but these failures are transient due to a repair process. Since the network is large, we cannot afford to reconstruct the entire data structure from scratch. This motivates the study of fault-tolerant distance and shortest-path oracles on vulnerable networks: preprocess a graph G=(V,E) so that for any set F of f failed vertices (or edges) of G and two query vertices s and s', we can compute a shortest path in G-F between s and s' efficiently. A data structure that can handle failed vertices (or edges) is said to be fault-tolerant. The theoretical performance of a fault-tolerant data structure is measured by a function of the number n of vertices of an input graph and the maximum number f of failed vertices. Although this problem is natural, little is known for vertex failures, while there are lots of works on edge failures. To the best of our knowledge, there is only one published result on (approximate) distance oracles for general graphs in the presence of vertex failures: an O(n^2)-sized approximate distance oracle answering queries in (n) time in the presence of a constant number of vertex failures <cit.>. On the other hand, there are lots of theoretical results on (approximate) distance oracles for general graphs in the presence of edge failures <cit.>. Dynamic graphs whose edge weights change over time have also been studied from a more practical point of view <cit.>; in this case, vertex updates (and vertex failures) are not allowed.

This raises an intriguing question: can we design an efficient oracle for handling vertex failures? The size of the best-known oracle is O(n^2), which is still large for practical purposes. In real-life situations, vertices as well as edges are prone to failures. A bus map can be considered as a graph whose vertices correspond to the bus stops. A bus stop can be closed due to unexpected events, and then buses make detours; this changes the bus map temporarily. To address this scenario, we need vertex-fault-tolerant oracles.

In this paper, we design a new efficient oracle for answering approximate distance and shortest-path queries for real-world road networks from a theoretical point of view. Similar to the static case, there is a huge gap between theory and practice in the dynamic setting: theoretical solutions are not efficient in general, while practical solutions come with no theoretical explanation of why they work efficiently.
Researchers have also tried to bridge this theory-practice gap. For instance, <cit.> gave a shortest-path oracle for dynamic road networks with theoretical guarantees together with experimental evaluations. Their theoretical guarantees partially explain why their result works efficiently. In particular, they analyzed the performance of their oracle in terms of the size of the input graph and a parameter depending on their construction algorithms, and they showed that this parameter is small for real-world road networks. This still does not tell us why this parameter is small in practice, and the definition of the parameter does not look intuitive as it depends on their algorithms. Instead, we choose a different approach to bridge the gap between theory and practice, following a more classical method: first define a theoretical model for real-world road networks, and then design an oracle for this theoretical model.

Theoretical model. A geometric graph is a graph where the vertices correspond to points in ℝ^d and the weight of each edge is the Euclidean distance between its endpoints. Let V be a set of n points in ℝ^d for a constant d ≥ 1. A geometric graph G=(V,E) with |E|=O(|V|) is called a Euclidean t-spanner of V if the distance in G between any two vertices is at most t times the Euclidean distance between their corresponding points. More generally, a geometric graph G=(V,E) with |E|=f^O(1)|V| is an f-fault-tolerant Euclidean t-spanner if the distance in G-F between two vertices u and v is at most t times the Euclidean distance between u and v for any set F of at most f vertices of G. Here, we call the vertices of F the failed vertices. Lots of road networks can be represented as Euclidean t-spanners for a small constant t; for instance, a southern Scandinavian railroad network is a 1.85-spanner <cit.>. Thus it is reasonable to use a fault-tolerant Euclidean spanner as a theoretical model for our purposes <cit.>. Apart from this, Euclidean spanners have various applications such as pattern recognition, function approximation, and broadcasting systems in communication networks <cit.>. Due to this wide range of applications, many variations of fault-tolerant Euclidean spanners have been studied extensively <cit.>.

Our result. In this paper, we present the first near-linear-sized approximate distance and shortest-path oracle specialized for fault-tolerant Euclidean spanners. More specifically, given a fault-tolerant Euclidean t-spanner with constant t and a value ε>0, we present a near-linear-sized data structure so that given two vertices s, s' and a set F of at most f failed vertices, an (1+ε)-approximate distance between s and s' in G-F can be computed in O_t,f(1) time, that is, in constant time for constant f and t. Moreover, we can report an approximate shortest path π in time almost linear in the complexity of π. The explicit bounds are stated in Table <ref>. We only consider spanners constructed in a two-dimensional Euclidean space; however, our ideas can be extended to a general d-dimensional Euclidean space without increasing the dependency on n in the performance guarantees. We provide a sketch of this extension in the appendices.
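To make the definition above concrete, the following brute-force checker (ours, purely illustrative and far from the efficiency the oracles in this paper aim at) verifies the fault-tolerant spanner condition for one failure set F. Verifying f-fault tolerance amounts to running it for every vertex subset F of size at most f, e.g., via itertools.combinations.

```python
import heapq, math

def satisfies_spanner_property(points, adj, t, F=frozenset()):
    """Check d_{G-F}(u, v) <= t * |uv| for all u, v not in F, by running
    Dijkstra from every surviving vertex and skipping failed vertices.
    points: vertex -> (x, y); adj: vertex -> iterable of neighbors."""
    def dijkstra(src):
        dist, pq = {src: 0.0}, [(0.0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, math.inf):
                continue
            for v in adj[u]:
                if v in F:
                    continue                      # failed vertices are removed
                nd = d + math.dist(points[u], points[v])
                if nd < dist.get(v, math.inf):
                    dist[v] = nd
                    heapq.heappush(pq, (nd, v))
        return dist
    for u in adj:
        if u in F:
            continue
        dist = dijkstra(u)
        for v in adj:
            if v in F or v == u:
                continue
            if dist.get(v, math.inf) > t * math.dist(points[u], points[v]):
                return False
    return True
```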
Related work. Although nothing is known for (approximate) distance oracles specialized for fault-tolerant Euclidean spanners, designing fault-tolerant structures is a popular topic in the field of algorithms and data structures, and such structures have received a lot of interest over the past few decades. In general, there are two types of problems in the research on fault-tolerant structures. For the first type, the goal is to process a given graph G=(V,E) which can have failed vertices (or edges) so that for a set F of failed vertices (or edges) given as a query, one can efficiently answer various queries on the subgraph of G induced by V-F (or E-F). Various types of queries have been studied, for instance, reachability queries <cit.>, shortest path queries <cit.>, diameter queries <cit.>, and k-paths and vertex cover queries <cit.>. The problem we consider in this paper also belongs to this type. For the second type, the goal is to compute a sparse subgraph H of a given graph G so that for any set F of failed vertices (or edges), H-F satisfies certain properties. For instance, the problems of computing sparse fault-tolerant spanners <cit.> and fault-tolerant distance preservers <cit.> have been widely investigated.

§ PRELIMINARIES
For two paths γ and γ', we say γ can be extended to γ' if the vertex sequence of γ is a subsequence of the vertex sequence of γ'. Even if γ and γ' are from different graphs H and H', respectively, we can define the extension relation when V(H)⊆ V(H'). For a graph G=(V,E) and two vertices u and v of G, let π_G(u,v) be a shortest path between u and v within G, and let d_G(u,v) be the distance between u and v in G. An (1+ε)-approximate distance between u and v in G is defined as any value ℓ lying between d_G(u,v) and (1+ε)d_G(u,v). Here, there does not necessarily exist a path in G of length exactly ℓ, but ℓ can be considered a good estimate of the distance between u and v. Analogously, an (1+ε)-approximate shortest path between u and v in G is a path in G of length at most (1+ε)d_G(u,v). For two points p and q in a Euclidean space, let |pq| be the Euclidean distance between p and q. With a slight abuse of notation, we use |γ| to denote the length of a path γ of G (the sum of the weights of the edges of γ). Note that a path in a graph is a sequence of adjacent edges, and equivalently, it can be represented by a sequence of incident vertices. For two numbers a,b∈ℝ, we let [a,b] be the closed interval between a and b. Also, let (a,b] and [a,b) be half-closed intervals excluding a and b, respectively. In addition, let (a,b) be the open interval between a and b.

§.§ Generalization Lemmas
We call a geometric graph G an L-partial f-fault-tolerant Euclidean t-spanner if d_G-F(u,v)≤ t|uv| for any two vertices u and v with |uv|≤ L and any set F of at most f failed vertices. We say two points s and s' are moderately far in G if |ss'|∈ [L/m^2, L/t), where m is the number of edges in G. The following generalization lemmas state that once we have oracles on L-partial f-fault-tolerant spanners for moderately far vertices, we can use them as black boxes to handle a query consisting of any two (not necessarily moderately far) vertices. Proofs of the following lemmas are stated in Section <ref>.
Lemma (GeneralDistance). Assume that for any ε'>0, we can construct an oracle of size h_s(n) in h_c(n) time on an n-vertex partial fault-tolerant Euclidean spanner for supporting (1+ε')-approximate distance queries for moderately far vertices and f failed vertices in T time. Then we can construct an oracle of size O(h_s(f^2n)+f^2n) in O(h_c(f^2n)+f^2nlog n) time for answering (1+ε)-approximate distance queries for any ε>0, two vertices, and f failed vertices in O(f+T) time.

Lemma (GeneralPath). Assume that for any ε'>0, we can construct an oracle of size h_s(n) in h_c(n) time on an n-vertex partial fault-tolerant Euclidean spanner for supporting (1+ε')-approximate shortest-path queries for moderately far vertices and f failed vertices in T time. Then we can construct an oracle of size O(h_s(f^2n)+f^2nlog^2 n loglog n) in O(h_c(f^2n)+f^2n^2log^2 n) time for answering (1+ε)-approximate shortest-path queries for any ε>0, two vertices, and f failed vertices in O(f^4log^2 n loglog n + T + f·k) time, where k is the number of vertices in the returned path.

Note that the parameter L does not appear in the performance guarantees. This is because L only determines whether two vertices are moderately far. In particular, for L=0, all geometric graphs are L-partial fault-tolerant Euclidean spanners, but no two vertices are moderately far. In the following, we let G be an L-partial f-fault-tolerant Euclidean spanner, and (s,s') be a pair of moderately far vertices unless otherwise stated.

§.§ Utilized tools
In this section, we introduce the tools and concepts used in the design of our data structures.

*Kernels. An edge-weighted graph H is an (s,s',F;ε)-kernel of G if the following hold:
* s,s'∈ V(H)⊆ V(G),
* d_G-F(s,s')≤ d_H(s,s')≤ (1+ε)d_G-F(s,s'), and
* π_H(s,s') can be extended to an (1+ε)-approximate shortest path between s and s' in G-F.
We define the size of a kernel as the number of vertices and edges of the kernel. For an edge uv of H, its weight is denoted by w_H(uv). Our main strategy is to construct a data structure on a partial fault-tolerant Euclidean spanner G and a value ε>0 for computing an (s,s',F;ε)-kernel of small complexity for two vertices s and s' and a set F of failed vertices given as a query. We call this data structure a kernel oracle. By the definition of kernels, once we have an (s,s',F;ε)-kernel of G, we can compute an (1+ε)-approximate distance between s and s' in G-F in time near linear in the complexity of the kernel by applying Dijkstra's algorithm to the kernel, as sketched below.
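To make the kernel-to-distance step concrete, here is a minimal sketch (our illustration, not code from the paper): it runs Dijkstra's algorithm on a kernel H given as an adjacency dictionary, and by the kernel definition the returned value is an (1+ε)-approximate distance in G-F.

```python
import heapq, math

def kernel_distance(kernel, s, s_prime):
    """Dijkstra on a small kernel H (adjacency dict: u -> {v: w_H(uv)}).
    By the kernel definition, the result lies between d_{G-F}(s, s') and
    (1 + eps) * d_{G-F}(s, s')."""
    dist = {s: 0.0}
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == s_prime:
            return d
        if d > dist.get(u, math.inf):
            continue
        for v, w in kernel.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return math.inf
```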
For distance oracles, it is therefore sufficient to construct a kernel oracle that computes a kernel of small complexity. However, this is not sufficient for approximate shortest-path oracles. To retrieve an (1+ε)-approximate shortest path of G-F from a shortest path π of a kernel H, we must efficiently compute a path in G-F between u and v of length w_H(uv) for every edge uv of H. Then we can replace every edge of π with its corresponding path so that the resulting path becomes an (1+ε)-approximate shortest path between s and s' in G-F. From this motivation, we define the notion of path-preserving kernels as follows. We say a (·,·,F;·)-kernel H of G is path-preserving if for every edge uv with w_H(uv)≤ d_H(u,v), at least one of the following holds:
* d_G(u,v)≤ tL/m^6, or
* we can efficiently compute a path in G-F between u and v of length w_H(uv).
Note that an edge u'v' with w_H(u'v')>d_H(u',v') does not appear in any shortest path in H. If d_G(u,v)≤ tL/m^6, we will see that it is sufficient to replace uv with an arbitrary path between u and v consisting of edges of length at most tL/m^6 to obtain an approximate shortest path in G-F.

*Net Vertices. For an r>0, a set 𝒩 of vertices of G is an r-net if the following hold:
* d_G(u,v)≥ r for any two net vertices u,v∈𝒩, and
* min_v∈𝒩 d_G(x,v) ≤ r for any vertex x∈ V(G).
In the following lemmas, we show two properties of an r-net: it can be computed in near-linear time, and the number of net vertices in a bounded region is bounded.

Lemma (RnetConstruction). Given a graph G, an r-net of G can be computed in O(m+nlog n) time.

Proof. An r-net of G can be computed in a greedy fashion, as sketched below. We select an arbitrary vertex u of V(G) as a net vertex. Then we remove every vertex v from V(G) such that d_G(u,v)≤ r. We repeat this process until V(G) becomes empty. By construction, the set of selected vertices is an r-net of G. This takes O(m+nlog n) time in total.

Lemma (ThereAreConstantCluster). Let 𝒩 be an r-net of a Euclidean graph G in the plane. For a vertex v ∈ V(G) and a constant c>0, the number of vertices u ∈𝒩 with d_G(v,u) ≤ cr is at most 4(c+1)^2.

Proof. The Euclidean disks of radius r/2 centered at the net vertices u ∈𝒩 with d_G(v,u) ≤ cr are pairwise disjoint by the definition of an r-net. The union of these disks is contained in the Euclidean disk centered at v with radius (c+1)r. Therefore, the number of such net vertices is at most the ratio between the area of the Euclidean disk with radius (c+1)r and the area of the Euclidean disk with radius r/2, which is 4(c+1)^2.
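The greedy construction in the proof above can be sketched as follows. This is our illustration under stated assumptions, not the paper's implementation: distances are taken in the full graph G via a truncated Dijkstra, and a more careful implementation is needed to actually meet the O(m+nlog n) bound.

```python
import heapq, math

def greedy_r_net(adj, weight, r):
    """Greedy r-net: repeatedly pick a remaining vertex, add it to the net,
    and discard every vertex within graph distance r of it.
    adj: u -> iterable of neighbors; weight(u, v): edge length."""
    remaining = set(adj)
    net = []
    while remaining:
        u = next(iter(remaining))
        net.append(u)
        dist = {u: 0.0}
        pq = [(0.0, u)]
        while pq:                              # Dijkstra truncated at radius r
            d, x = heapq.heappop(pq)
            if d > dist.get(x, math.inf):
                continue
            remaining.discard(x)               # x is covered by net vertex u
            for y in adj[x]:
                nd = d + weight(x, y)
                if nd <= r and nd < dist.get(y, math.inf):
                    dist[y] = nd
                    heapq.heappush(pq, (nd, y))
    return net
```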
*Safe Paths and Weakly Safe Paths. In the design of kernel oracles, a key idea is to decompose a shortest path between s and s' in G-F into subpaths, each of which is safe, weakly safe, or sufficiently short. We define safe and weakly safe paths as follows. With a slight abuse of notation, for a path γ in G, we define d_G(γ,v) as the minimum distance in G between the vertex v and any vertex on γ. Let F be a set of failed vertices and (t,r) be a pair of positive parameters.
* A path γ of G is (t,r)-safe if d_G(x_f, γ) ≥ tr for any x_f∈ F, and
* A path γ of G is (t,r)-weakly safe if min{d_G(u,x_f), d_G(v,x_f)} is at most (2t^2+3t+1)r for any x_f∈ F such that d_G(x_f,γ) is at most (1+t)r, where u and v are the two end vertices of γ.
For an illustration, see Figure <ref>. If it is clear from the context, we omit the parameters (t,r). Note that a safe path γ is not necessarily weakly safe: there might be a failed vertex x_f such that d_G(x_f,γ) lies in [tr,(1+t)r), but d_G(u,x_f) and d_G(v,x_f) are both at least (2t^2+3t+1)r. Let 𝒩 be an r-net of G. Then the following lemmas hold.

Lemma (CaseEps). Assume that all edges of G have length at most r. Let γ be a (t,r)-weakly safe path between two vertices u and v in G-F with |γ|≥ (4t^2+8t+4)r. Then there is a (t,r)-safe path γ' between two net vertices z,z'∈𝒩 with d_G(u,z),d_G(v,z') ≤ (2t^2+4t+4)r and |γ'|≤ |γ|.

Proof. In this proof, we omit the explicit mention of the parameters (t,r) for clarity; we say safe and weakly safe to refer to (t,r)-safe and (t,r)-weakly safe, respectively. Recall that 𝒩 is an r-net of G, that d_G-F(p,q)≤ t|pq| for any p,q∈ V(G)-F with |pq|≤ L, and that all edges of G have length at most r. Let x be the last vertex from u on γ such that d_G(u,x) is at most (2t^2+4t+2)r, and let y be the next vertex after x from u along γ. We set z as the closest net vertex in 𝒩 to y. Similarly, we define x', y' and z' with respect to v. See Figure <ref>(b).

Since d_G(y,z), d_G(y',z'), |xy|, and |x'y'| are at most r, we have d_G(u,z) ≤ d_G(u,x)+d_G(x,y)+d_G(y,z) ≤ (2t^2+4t+4)r and d_G(v,z') ≤ d_G(v,x')+d_G(x',y')+d_G(y',z')≤ (2t^2+4t+4)r. We claim that π_G-F(z,y)·γ[y,y']·π_G-F(y',z') is a desired path connecting z and z'. Clearly, it is no longer than γ. Thus, it suffices to show that the concatenation is safe; that is, we show that all of the paths π_G-F(z,y), γ[y,y'], and π_G-F(y',z') are safe.

First, we show that γ[y,y'] is safe. Indeed, it has a stronger property: there is no failed vertex x_f with d_G(γ[y,y'], x_f)< (1+t)r. If such a failed vertex x_f exists, there is a vertex w on γ[y,y'] with d_G(w, x_f)< (1+t)r. Furthermore, by the choice of y and y', d_G(u,w)≥ (2t^2+4t+2)r. Then by the triangle inequality, d_G(u,x_f)≥ d_G(u,w)-d_G(w,x_f)> (2t^2+3t+1)r. Analogously, d_G(v,x_f)> (2t^2+3t+1)r. This contradicts the fact that γ is weakly safe.

Now we show that π_G-F(z,y) is safe. If this is not the case, there must be a vertex w' on π_G-F(z,y) and a failed vertex x_f with d_G(w',x_f)< tr. Note that d_G(y,z)≤ r. By the triangle inequality, d_G(y,x_f)≤ d_G(y,z)+d_G(w',x_f)< (1+t)r. This contradicts the fact that there is no failed vertex x_f with d_G(γ[y,y'],x_f) < (1+t)r, and thus π_G-F(z,y) is safe. We can show that π_G-F(y',z') is safe in an analogous way. Therefore, the path π_G-F(z,y)·γ[y,y']·π_G-F(y',z') is safe.

Lemma (CaseCase). Assume that all edges of G have length at most r. Let u and v be two vertices of G-F such that π_G-F(u,v) is neither (t,r)-safe nor (t,r)-weakly safe. Then there are a vertex y of π_G-F(u,v) and a net vertex z∈𝒩 such that
* π_G-F(u,y) ·π_G-F(y,z) is (t,r)-weakly safe,
* d_G-F(y,z)≤ (t^2+2t)r, and
* d_G-F(z,v)≤ d_G-F(u,v)-t^2r.

Proof. In this proof, we again omit the explicit mention of the parameters (t,r); we say safe and weakly safe to refer to (t,r)-safe and (t,r)-weakly safe, respectively. Recall that 𝒩 is an r-net of G, that d_G-F(p,q)≤ t|pq| for any p,q∈ V(G)-F with |pq|≤ L, and that all edges of G have length at most r.

Since π_G-F(u,v) is not weakly safe, we have a pair (y,x_f) such that y is a vertex of π_G-F(u,v) and x_f is a failed vertex with d_G(y,x_f)≤ (1+t)r and min{d_G(u,x_f), d_G(v,x_f)}≥ (2t^2+3t+1)r. Among such pairs, we choose a pair (y,x_f) so that y is the first vertex along π_G-F(u,v) from u. Let z be the closest net vertex in 𝒩 to x_f. Then the following inequalities hold: d_G-F(u,y) ≥ d_G(u,y) ≥ d_G(u,x_f)-d_G(y,x_f) ≥ (2t^2+2t)r and d_G-F(y,z)≤ t·d_G(y,z)≤ t·(d_G(y,x_f)+d_G(x_f,z))≤ t(t+2)r. The first inequality of the second chain holds by Lemma <ref> and the fact that d_G(y,z)≤ L. By the above inequalities and the triangle inequality, d_G-F(z,v) ≤ d_G-F(z,y)+d_G-F(y,v) = d_G-F(u,v)-d_G-F(u,y)+d_G-F(z,y) ≤ d_G-F(u,v)-t^2r. The equality in the second step holds since we chose y as a vertex on the shortest path π_G-F(u,v).

We prove that γ=π_G-F(u,y) ·π_G-F(y,z) is weakly safe. For this purpose, let x_f' be a failed vertex with d_G(γ,x_f')≤ (1+t)r, and we show that min{d_G(u,x_f'), d_G(z,x_f')}≤ (2t^2+3t+1)r. Note that γ is weakly safe if no such failed vertex exists. Let w be the vertex of γ closest to x_f'. If w lies on π_G-F(u,y), then min{d_G(u,x_f'), d_G(z,x_f')}≤ (2t^2+3t+1)r by the choice of y, and thus we are done. Otherwise, w lies on π_G-F(y,z), and the following holds: d_G(x_f',z) ≤ d_G(x_f',w)+d_G(w,z) ≤ d_G(x_f',w)+d_G-F(w,z) ≤ (t^2+3t+1)r. The last inequality holds by the fact that d_G-F(w,z)≤ d_G-F(y,z)≤ (t^2+2t)r. Therefore, for any failed vertex x_f' such that d_G(γ,x_f') is at most (1+t)r, we have min{d_G(u,x_f'), d_G(z,x_f')}≤ (2t^2+3t+1)r.
Therefore, γ is weakly safe. Here, γ·γ' denotes the concatenation of two paths γ and γ' having a common endpoint. Notice that π_G-F(u,y) is a subpath of π_G-F(u,v), which will be used in the proof of Lemma <ref>.

Organization. Our main ideas lie in the design of kernel oracles. Once a kernel oracle is given, we can answer approximate distance queries immediately. Also, with a path-preserving kernel oracle and an additional data structure, we can answer approximate shortest-path queries efficiently. A kernel oracle consists of substructures called FT-structures with different parameters. If two net vertices u and v are connected by a safe path γ of length at most 2t|uv|, we can find a path of length at most |γ| between them using an FT-structure. In the following, we first describe FT-structures, and then show how to use them to construct a kernel oracle. Finally, we describe an approximate distance oracle and an approximate shortest-path oracle. Recall that G is an L-partial f-fault-tolerant Euclidean t-spanner.

§ THE FT-DATA STRUCTURE
The FT-structure is defined with respect to a pair (u,v) of net vertices and a parameter W≤ L. We denote this data structure by FT(u,v;W); if W is clear from the context, we simply write FT(u,v). For a set F of at most f failed vertices in G with u,v∉ F, it allows us to compute a path in G-F between u and v of length at most |γ| efficiently, where γ is a (t,W)-safe path between u and v in G-F, if such a path exists. This structure is a modification of the one introduced in <cit.>. While the work in <cit.> deals with failed edges, we handle failed vertices; since the degree of a vertex can be large, the modification is not straightforward. Moreover, to reduce the space complexity of <cit.> near-linearly, we apply two tricks. While <cit.> constructs FT(u,v) for every pair (u,v) of vertices of G, we construct FT(u,v) only for pairs (u,v) of net vertices. In addition to this, we construct the data structure on the subgraph Ĝ(u,v) of G induced by the vertices p with max{|pu|,|pv|}≤ 2t|uv|. We will see that this is sufficient for our purpose; this is one of the main technical contributions of our paper.

§.§ Construction of FT(u,v;W)
The FT-structure for (u,v;W) is a tree such that each node α corresponds to a subgraph G_α of Ĝ(u,v) and stores the shortest path π_α between u and v in G_α. Initially, we let G_r = Ĝ(u,v) for the root node r. In each iteration, we pick a node α whose children are not yet constructed. We decompose π_α into segments with respect to vertices u_1,…,u_k of π_α such that u_i is the farthest vertex from u along π_α with |π_α[u,u_i]| ≤ i·tW/4, where π_α[u,u_i] is the subpath of π_α between u and u_i. Then we construct the children of α corresponding to the segments of π_α. Let η_α' be the segment of π_α corresponding to a child α' of α. For an illustration, see Figure <ref>. To construct G_α', we first remove all edges and vertices of η_α' except u and v from G_α. Also, we additionally remove all vertices p with d_G(u_α',p)≤ tW/4 for an arbitrary internal vertex u_α' of η_α'. In this way, we obtain G_α', and define π_α' as a shortest path between u and v in G_α'. If α' has level (f+1) in the tree, u and v are not connected in G_α', or π_α' is longer than 2t|uv|, we set α' as a leaf node of FT(u,v;W).
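The recursive construction can be summarized in the following sketch. It is a simplification under our assumptions, not the authors' implementation; shortest_path, ball, remove, and euclid are hypothetical helpers (Dijkstra with path recovery returning a vertex list and a length, a truncated Dijkstra ball of vertices within a given graph distance, vertex deletion on a copy of the graph, and Euclidean distance, respectively).

```python
def build_ft(graph, u, v, t, W, f, level=0):
    """Recursive sketch of the FT(u, v; W) tree.  Hypothetical helpers:
    shortest_path(graph, u, v) -> (vertex list or None, length);
    ball(graph, x, r) -> vertices within graph distance r of x;
    remove(graph, vertices) -> copy of graph without the given vertices;
    euclid(a, b) -> Euclidean distance |ab|."""
    path, length = shortest_path(graph, u, v)
    node = {"path": path, "children": []}
    if level == f + 1 or path is None or length > 2 * t * euclid(u, v):
        return node                                   # leaf node
    # Split pi_alpha into segments of length ~ tW/4 and recurse on each.
    seg, acc = [path[0]], 0.0
    for a, b in zip(path, path[1:]):
        acc += euclid(a, b)                           # edge weights are Euclidean
        seg.append(b)
        if acc >= t * W / 4 or b == path[-1]:
            interior = seg[1:-1] or seg[:1]           # pick an internal vertex
            banned = (set(seg) | set(ball(graph, interior[0], t * W / 4)))
            child_graph = remove(graph, banned - {u, v})
            node["children"].append(
                build_ft(child_graph, u, v, t, W, f, level + 1))
            seg, acc = [b], 0.0
    return node
```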
Observation. A node of FT(u,v;W) has at most 8t|uv|/W child nodes. Furthermore, FT(u,v;W) has at most (8t|uv|/W)^(f+1) nodes.

Proof. We first show that a node α has at most 8t|uv|/W child nodes. If α is a leaf node, this is trivial, so assume that α is not a leaf node. Then the path π_α has length at most 2t|uv|. By the definition of segments, there are k segments of π_α only if |π_α|≥ ktW/4. Thus, α has at most 8t|uv|/W segments and, consequently, at most that many child nodes. Note that the depth of FT(u,v;W) is at most (f+1) by construction. Thus, FT(u,v;W) has at most (8t|uv|/W)^(f+1) nodes by the first claim.

Note that each non-leaf node of FT(u,v;W) may have more than two children. To traverse the tree efficiently, we use a two-dimensional array so that given a vertex p in Ĝ(u,v) and a node α in FT(u,v;W), the child node α' of α with p ∈η_α' can be computed in constant time. We call this array the assistant array of FT(u,v;W).

§.§ Computation of the FT-Path
Given a query consisting of a set F of at most f failed vertices, we can find a node α^* of FT(u,v;W) such that π_α^* does not contain any vertex of F as follows. We traverse FT(u,v;W) starting from the root node r. Let α be the current node. If π_α contains at least one failed vertex, we visit one of its children α' such that η_α' contains a failed vertex, using the assistant array of FT(u,v;W). We repeat this process until we reach a node α^* such that either π_α^* contains no failed vertex, or α^* is a leaf node. If π_α^* contains no failed vertex, then we return π_α^* as output. Clearly, the returned path is in G-F. We call the returned path the FT-path of G-F. Otherwise, we reach a leaf node, and we do not return any path.

Lemma (FTPathFindingAlgorithm). Given a set F of at most f failed vertices, we can compute the node of FT(u,v;W) storing the FT-path of G-F in O(f^2) time, if it exists.

Proof. We visit at most f+1 nodes of FT(u,v;W). For each node α, we check if π_α contains a failed vertex in O(f) time using the assistant array of FT(u,v;W). If it contains a failed vertex, we can find a segment containing a failed vertex and move towards the child of α corresponding to this segment. Therefore, this algorithm takes O(f^2) time in total.

Lemma (FTFindEpsSafe). Let F be a set of at most f failed vertices and W be a parameter in [4D,L], where D is the longest edge length in Ĝ(u,v). For two vertices u,v of V-F, the FT-path obtained from FT(u,v;W) with respect to F exists if there is a (t,W)-safe path γ between u and v in G-F with |γ|≤ 2t|uv|. Moreover, the FT-path has length at most |γ|.

Proof. For clarity, we say safe to refer to (t,W)-safe in this proof; moreover, let FT(u,v) refer to FT(u,v;W) and let Ĝ=Ĝ(u,v). An edge in Ĝ has length at most W/4. This guarantees that any segment of a path in Ĝ has length at most tW/2.

We first show that γ is a path of G_α for every node α of FT(u,v) visited during the computation of the FT-path. This immediately implies that |π_α| is at most |γ|. The claim holds for the root node because the length of γ is at most 2t|uv|, so γ is a path of Ĝ-F = G_r-F. Assume to the contrary that the algorithm visits a node α of FT(u,v) such that G_α does not contain γ. Since α is not the root node r, the segment η_α contains a failed vertex x_f. For a vertex p of G, we use d_G(γ,p) to denote the minimum distance in G between p and a vertex of γ. The following two cases can occur: (1) η_α contains a vertex of γ, or (2) d_G(γ, u_α)≤ tW/4. Note that |η_α|≤ tW/2 since η_α is a segment in Ĝ. If case (1) holds, then d_G(γ, x_f) is at most |η_α| ≤ tW/2, contradicting the fact that γ is safe. If case (2) holds, then d_G(γ,x_f) ≤ d_G(γ,u_α)+d_G(u_α,x_f)≤ tW/4+tW/2< tW. The first inequality is a trivial extension of the triangle inequality. This also contradicts the fact that γ is safe. Therefore, G_α contains γ.

It remains to show that the algorithm successfully returns a path, and thus, the FT-path of G-F exists.
Let α be the last node of FT(u,v) we visit during the computation of the FT-path. The algorithm fails to return a path only when α is a leaf node. Note that α has level at most f. Thus, α is a leaf node only if u and v are not connected in G_α or π_α has length at least 2t|uv|. As shown in the previous paragraph, γ is a path of G_α and its length is at most 2t|uv|. Thus, α is not a leaf node, and the algorithm returns the FT-path.

§ KERNEL ORACLE FOR MODERATELY FAR VERTICES
For a value ε>0, we construct a kernel oracle that allows us to compute an (s,s',F;ε)-kernel of small complexity for moderately far vertices s and s' and a set F of failed vertices of G. In particular, we can compute a kernel of size O(t^8f^2) in O(t^8f^4) time, and a path-preserving kernel of size O(t^8f^2 log(tf/ε') log^2 n) in O(t^8f^4 log(tf/ε) log^2 n) time. Recall that m=|E| and n=|V|, and m∈ O(n). For an overview of the structure of a kernel oracle, see Figure <ref>.

The kernel oracle consists of several FT-structures with different parameters (u,v;W). For this purpose, we first choose several values for W such that for any two moderately far vertices s and s', there is at least one value W' with |ss'|∈ [W'/2,W'). Recall that for two moderately far vertices s,s' in G, their Euclidean distance lies in [L/m^2, L/t) ⊆ [L/(2m^6), L). We decompose the interval [L/(2m^6), L) into (6log m+1) intervals [W_i-1,W_i), where W_i = 2^i·W_0 for i ∈[1,6log m+1] with W_0=L/(2m^6). We say two vertices s and s' are well separated with respect to W if |ss'|∈[W/2,W). Note that two moderately far vertices are well separated with respect to W_i for at least one index i∈[1,6log m+1].

§.§ Data Structure
Lemma <ref> and Lemma <ref> hold only when all edges of G have length at most r. To satisfy this condition, we modify G by splitting long edges in a preprocessing step before constructing the FT-structures. First, we delete the edges of G of length at least 2L. Since we want to find an (1+ε)-approximate shortest path or distance between two moderately far vertices, such long edges never participate in a desired path. Next, for each edge e in G of length larger than (ε'L)/(4m^6) with ε'=ε/(500t^3(f+1)), we split e into subedges of length at most ε'W_j/4, where W_j is the smallest value such that |e|∈[ε'W_j/4, 4tW_j]. This process increases the number of edges by a factor of O(t/ε'). In the following, to avoid confusion, we use G_orig to denote the originally given graph, and we denote the numbers of vertices and edges of the modified graph G by n and m, respectively. By construction, it is sufficient to deal with queries of two vertices s, s' and a set F of failed vertices in G such that {s,s'}∪ F is a subset of V(G_orig) and |ss'|∈ [L/m^2,L). This is because [L/m_orig^2,L)⊆ [L/m^2,L), where m_orig is the number of edges in G_orig.

Notice that G is not always a Euclidean t-spanner because of the new vertices. However, it has a weaker property stated as follows. Let 𝒱(e) be the set of new vertices of G obtained from splitting an edge e of G_orig.

Lemma (DistancePreserved). Let u and v be two vertices of G-F with d_G(u,v)≤ L, neither of which is contained in the union of 𝒱(e) over all edges e adjacent to the vertices of F in G_orig. Then d_G-F(u,v)≤ t·d_G(u,v).

Proof. Recall that G_orig is an L-partial f-fault-tolerant Euclidean t-spanner. Let u' and v' be the vertices of G_orig closest to u and v, respectively, lying on π_G(u,v). Here, u=u' and v=v' if u∈ V(G_orig) and v∈ V(G_orig), respectively. By construction, neither u' nor v' is contained in F.
We construct (u,v; ε'W_j) for all indices j with j∈ [1,6log m+1] and all net vertex pairs (u,v) of an (ε'W_j)-net 𝒩_j with |uv|≤ (1+ε)tW_j. Here, the nets 𝒩_j we use must be aligned, that is, 𝒩_j⊆𝒩_j' for any two indices j and j' with j≥ j'. While the work in <cit.> constructs the -structure for all pairs of vertices, we construct the -structure only for pairs (u,v) of net vertices. In this way, we improve the space complexity to near-linear. However, it requires us to design a new algorithm to handle query vertices that are not net vertices.

Lemma (ComplexityOfFT). The space complexity of all -structures and their assistant arrays is (t/ε')^O(f)· n log n. Furthermore, we can compute all of them in (t/ε')^O(f)· n log^2 n time.

We fix an index i∈ [1,6log m+1], and analyze the total space complexity and computation time for the (u,v;W_i)'s and their assistant arrays with u,v∈𝒩_i and |uv|≤ (1+ε)tW_i. Precisely, we show that the space and computation time complexities are in (t/ε')^O(f) n and (t/ε')^O(f) m log n, respectively. For clarity, let W=W_i. Moreover, we refer to (u,v) as (u,v;W). For each pair (u,v) of net vertices with |uv|≤ (1+ε)tW, let Ĝ(u,v) be the subgraph of G induced by the vertices p with max{|pu|,|pv|}≤ 2(1+ε)t^2W. Recall that (u,v) is constructed on Ĝ(u,v), and has at most (t/ε')^O(f) nodes.

We first show that the total numbers of vertices and edges of the Ĝ(u,v)'s are O((t/ε')^4 n) and O((t/ε')^4 m), respectively, over all pairs (u,v) of net vertices with |uv|≤ (1+ε)tW. Observe that for a vertex p of Ĝ(u,v), u and v are contained in the Euclidean disk centered at p with radius 2(1+ε)t^2W. By Lemma <ref>, the number of such net vertex pairs (u,v) is O((t/ε')^4). Therefore, each vertex of G is contained in O((t/ε')^4) subgraphs in total. Analogously, each edge is contained in O((t/ε')^4) subgraphs in total. This means that the total number of vertices (and edges) of the Ĝ(u,v)'s is O((t/ε')^4 n) (and O((t/ε')^4 m)).

For the space complexity of the -structures and their assistant arrays, notice that each tree (u,v) has (t/ε')^O(f) nodes by Observation 1 and the fact that (u,v) has depth at most (f+1). Each node stores a path of length at most n̂, where n̂ denotes the number of vertices of Ĝ(u,v). Also, the assistant array takes (t/ε')^O(f)·n̂ space. Thus, (u,v) and its assistant array take (t/ε')^O(f)·n̂ space.

For the construction time, observe that we can compute π_α in O(m̂+n̂ log n̂) time for a node α of (u,v), where n̂ and m̂ denote the numbers of vertices and edges in Ĝ(u,v), respectively. Analogously to the space complexity, the construction time for (u,v) and its assistant array is in (t/ε')^O(f)·(m̂+n̂ log n̂). The total space and construction time complexities of the trees and their assistant arrays with respect to (tW,ε'W) are (t/ε')^O(f)· n and (t/ε')^O(f)· m log n, respectively.

To complete this proof, we show that m is at most f^O(1) n. To obtain G, we split edges of the original graph G_in as a preprocessing step. Thus, the number of edges added to G in the preprocessing step is linear in the number of added vertices.
Recall that m_in ∈ f^O(1)· n_in, where n_in and m_in are the numbers of vertices and edges in G_in, respectively. Thus, the number of edges in G is at most f^O(1) n.

§.§ Kernel Query

Now we describe how to compute an (s,s',F;ε)-kernel H of size O(t^8f^2) for a pair (s,s') of moderately far vertices and a set F of at most f failed vertices given as a query. Let i^* be the index such that s and s' are well separated with respect to W_i^*. Here, we look at the -structures constructed with respect to W_i^* only. For a point x in the plane, let n(x) be the net vertex in 𝒩_i^* closest to x. The kernel H is constructed as follows. The vertices of H are s, s', n(s), n(s'), and all net vertices u in 𝒩_i^* with d_G(u,F)≤ (4t^2+8t+5)ε'W_i^*, where d_G(u,F) is the minimum of d_G(u,x_f) over all x_f∈ F. For two vertices u and v in V(H), we add uv as an edge of H if the -path π exists for (u,v; ε'W_i^*) with respect to F, or d_G(u,v)≤ (4t^2+8t+5)ε'W_i^*. For the former case, we set w_H(uv)=|π|. For the latter case, we set w_H(uv)=40t^3ε'W_i^*.

Lemma (WellSeparatedDOTimeComplexity). An (s,s',F;ε)-kernel of size O(t^8f^2) can be computed in O(t^8f^4) time for two well separated vertices s, s' with respect to W_i for i∈[1,6log m+1] and a set F of at most f failed vertices given as a query.

Note that H has O(t^4f) vertices by Lemma <ref> since we add a vertex u to V(H) only if there exists x∈{s,s'}∪ F with d_G(u,x)≤ (4t^2+8t+5)ε'W. For a pair of vertices, we can check if there is an edge between them and determine its weight in O(f^2) time by Lemma <ref>. Thus, if H is an (s,s',F;ε)-kernel, then the lemma holds. We prove this statement in Section <ref>.

§.§ Path-Preserving Kernel Query

Now we describe how to compute a path-preserving (s,s',F;ε)-kernel H of small size for a pair (s,s') of moderately far vertices and a set F of at most f failed vertices given as a query. Let i^* be the index with |ss'|∈ [W_i^*/2,W_i^*). The kernel H_0 we constructed before is not necessarily path-preserving. For an edge uv added to H_0 due to its corresponding -path, we can compute the -path in time linear in its complexity using (u,v;ε'W_i^*). On the other hand, an edge uv added to H_0 because of its small length (i.e., d_G(u,v)≤ (4t^2+8t+5)ε'W_i^*) can violate the path-preserving property of H_0. For such an edge, by the spanner property of G, d_G-F(u,v) is at most 40t^3ε'W_i^*, and this is why we set the weight of uv to 40t^3ε'W_i^*. However, although there is a path of G-F of length at most 40t^3ε'W_i^*, we do not know how to compute it efficiently.

To obtain a path-preserving kernel, we first compute a (u,v,F;ε)-kernel for each violating edge uv of H_0, and take the union of these kernels together with H_0. Since d_G(u,v)≤ (4t^2+8t+5)ε'W_i^*, |uv| is significantly smaller than |ss'|, and thus the value W_i with |uv|∈ [W_i/2, W_i] is smaller. Although there might still exist violating edges u'v', the distance in G between u' and v' becomes smaller. Then we repeat this procedure until, for any violating edge uv, the distance in G between u and v becomes at most tL/m^6. In this case, uv no longer violates the path-preserving property due to the second condition for path-preserving kernels. Although this recursive description conveys our intuition effectively, it is more convenient to describe it in an integrated way as follows for formal proofs.
The construction of H works as follows. The vertices of H are s, s', and all net vertices u from 𝒩_j with d_G(u,F∪{s,s'}) ≤ (4t^2+8t+5)ε'W_j for some index j∈ [1,i^*]. The edge set of H comprises the edges from the (p,q,F;ε)-kernels constructed by the previous query algorithm for all indices j∈[1,i^*] and all net vertices p,q∈𝒩_j∩ V(H) that are well separated with respect to W_j.

Lemma (FTPathOrShort). If an edge uv lies on a shortest path in H, then either it corresponds to an -path, or d_G-F(u,v)≤ tL/m^6.

Suppose that uv does not correspond to an -path. It suffices to show that d_G(u,v)≤ L/m^6; in this case, d_G-F(u,v)≤ tL/m^6 by Lemma <ref>. To prove this, we use two facts: w_H(uv)≥ 2t·d_G(u,v) and d_G(u,v)< W_j' for some j'∈[1,i^*]. These hold because we added uv to a (·,·,F;ε)-kernel only if it corresponds to an -path with respect to W_j' or d_G(u,v)≤ 2t^2ε'W_j', where u and v are in 𝒩_j' (recall that ε'=ε/(500t^3(f+1))). Furthermore, in the latter case, its weight is 40t^3ε'W_j' ≥ 2t·d_G(u,v).

We prove d_G(u,v)≤ L/m^6 by contradiction. Assume that d_G(u,v)>L/m^6. Then there is an index j∈[1,i^*] with d_G(u,v)∈ [W_j/2,W_j). By Lemma <ref>, the following holds: d_G-F(u,v)≥ d_G(u,v)≥ W_j/2, and d_G-F(u,v)≤ t· d_G(u,v)< tW_j. Then the (u,v,F;ε)-kernel H' satisfies d_H'(u,v)≤ (1+ε)d_G-F(u,v) by Lemma <ref>. Since H is a supergraph of H' by construction, d_H(u,v) is also at most (1+ε)d_G-F(u,v). Recall that w_H(uv) is at least 2t·d_G(u,v). By Lemma <ref>, we have

w_H(uv)≥ 2t·d_G(u,v) > (1+ε)t·d_G(u,v)≥ (1+ε)d_G-F(u,v)≥ d_H(u,v).

This contradicts the assumption that uv lies on a shortest path in H.

Lemma <ref>, together with the fact that H is a supergraph of the (s,s',F;ε)-kernel H_0 we constructed before, implies that H is a path-preserving (s,s',F;ε)-kernel.

Lemma (PathSmall). For a query of two moderately far vertices s, s' and a set F of at most f failed vertices, a path-preserving (s,s',F;ε)-kernel of size O(t^8f^2 log(t/ε') log^2 n) can be computed in O(t^8f^4 log(t/ε') log^2 n) time.

Notice that for a vertex u in H, there exist x∈{s,s'}∪ F and j∈ [1,i^*] such that u∈𝒩_j and d_G(u,x)≤ (4t^2+8t+5)ε'W_j, where F is a set of at most f failed vertices given as a part of the query. By Lemma <ref>, the number of vertices of H is O(t^4f log n) since the index i^* is in [1, 6log m+1] and net vertices in 𝒩_j are at least ε'W_j apart from each other in G. Then we can find a shortest path π in O(t^8f^2 log^2 n) time.

We show that computing all edges of H takes O(t^8f^4 log^2 n log(t/ε')) time. Here, we compute the edges one by one as follows. We iterate through each index j∈[1,i^*] and every pair (u,v) of 𝒩_j∩ V(H). If (u,v;ε'W_j) returns an -path π, we add an edge uv and set the weight w_H(uv)=|π|. Otherwise, if d_G(u,v)≤ (4t^2+8t+5)ε'W_j, we add uv and set the weight w_H(uv)=40t^3ε'W_j. We always store the tuple (u,v,j) with the added edge together with its weight. For a pair (u,v) of distinct vertices of V(H), there are O(log(t/ε')) different indices j defining (u,v;ε'W_j) with j∈ [1,i^*]. This is because we construct (u,v;ε'W_j) only when |uv|≤ (1+ε)tW_j and u,v∈𝒩_j; in other words, |uv|∈ [ε'W_j,(1+ε)tW_j). The number of such indices is O(log(t/ε')) since W_j=2^j· W_0 with W_0=L/(2m^6) for j∈[1,6log m +1]. To check if uv is an edge of H, we check if an -path exists for each (u,v;ε'W_j) in O(f^2) time using Lemma <ref>. Therefore, we can compute all vertices and all edges in O(t^8f^4 log^2 n log(t/ε')) time in total. Note that ε'=ε/(500t^3(f+1)).
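As a summary of this edge computation, the following Python sketch enumerates the candidate pairs level by level. The oracles ft_path and dist_G, the coordinate table pos, and the helper path_length are assumptions standing in for the structures described above.

import itertools, math

def path_length(path):
    return sum(w for _, _, w in path)    # path given as (u, v, weight) triples

def kernel_edges(nets, i_star, W, eps_p, t, ft_path, dist_G, pos):
    edges = {}
    for j in range(1, i_star + 1):
        for u, v in itertools.combinations(nets[j], 2):
            if not (W[j] / 2 <= math.dist(pos[u], pos[v]) < W[j]):
                continue                 # keep well-separated pairs at this level
            p = ft_path(u, v, j)         # FT-path w.r.t. F for (u, v; eps' W_j)
            if p is not None:
                edges[(u, v)] = (path_length(p), j)
            elif dist_G(u, v) <= (4 * t**2 + 8 * t + 5) * eps_p * W[j]:
                edges[(u, v)] = (40 * t**3 * eps_p * W[j], j)
    return edges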
In the preprocessing step, we split long edges. This increases the numbers of vertices and edges by a factor of O(t/ε'), which is polynomial in t, f, and 1/ε. So far, we have used n and m to denote the numbers of vertices and edges of the resulting graph. To get the final results in Table <ref> (for kernel and path-preserving kernel oracles) with respect to the complexity of the original input graph, we simply replace both n and m with (t/ε')· m ∈ (tf/ε)^O(1)· n.

Let G be an L-partial f-fault-tolerant Euclidean t-spanner and let ε>0. There exists a kernel oracle which supports two query algorithms for a query of two moderately far vertices s,s' and at most f failed vertices F in G: computing an (s,s',F;ε)-kernel of G in O(t^8f^4) time, and computing an (s,s',F;ε)-path-preserving kernel of G in O(t^8f^4 log^3(tf/ε) log^2 n) time. Furthermore, we can construct such an oracle in 2^O(g(t,f,ε)) n log^2 n time using 2^O(g(t,f,ε)) n log n space, where g(t,f,ε)=f log(tf/ε).

§ CORRECTNESS OF THE KERNEL QUERY ALGORITHM

In this section, we complete the proof of Lemma <ref>. Precisely, we show that the graph H obtained in Section <ref> is a kernel. The construction time and size of H have been shown in Section <ref>.

Let W be a parameter with W<L, let 𝒩 be an ε'W-net of G (recall that ε'=ε/(500t^3(f+1))), and let F be a set of at most f failed vertices. Moreover, let s,s' be two well separated vertices in G with respect to W. Then the weighted graph H is defined as follows. The vertices of H are s, s', n(s), n(s'), and all net vertices u in 𝒩 with d_G(u,F)≤(4t^2+8t+5)ε'W. There is an edge uv in H if and only if one of the following holds:

* (u,v; W) returns the -path with respect to F, or
* d_G(u,v)≤ (4t^2+8t+5)ε'W,

where n(s) and n(s') are the net vertices in 𝒩 closest to s and s', respectively, in G. The edge weight is w_H(uv)=|π| if (u,v; W) returns the -path π; otherwise, w_H(uv)=40t^3ε'W, which is strictly larger than 2t·d_G(u,v).

We show that H is an (s,s',F;ε)-kernel of G. For every edge uv in H, there is a path in G-F of length at most w_H(uv) by construction. This means that the distance between two vertices in H is at least the distance in G-F between the same vertices. Hence, it is sufficient to show that d_H(s,s')≤ (1+ε)d_G-F(s,s'). First, we prove the simple case that s,s' are net vertices in 𝒩. Next, we show how to handle the case when s or s' is not in 𝒩.

§.§ Case for 𝒩⊃{s,s'}

In this section, we assume that s,s' are net vertices in 𝒩. Thus, n(s) and n(s') are s and s', respectively. Before we start the proof, we introduce some notions used in this proof. We say a simple path γ in G-F is a base path if γ is (t,ε'W)-safe, (t,ε'W)-weakly safe, or |γ|≤ (4t^2+8t+4)ε'W. In the following, we omit the parameters (t,ε'W) for clarity: we say safe and weakly safe to refer to (t,ε'W)-safe and (t,ε'W)-weakly safe, respectively.

The following claim shows that if there is a base path γ of small length between two vertices of 𝒩∩ V(H), then H approximates the length of γ up to a small additive error. Thus, if there is a shortest path π_G-F(s,s') which is a base path, then d_H(s,s') approximates d_G-F(s,s').

Claim (Basement). Let γ be a base path in G-F between two vertices u and v of 𝒩∩ V(H) with |γ|≤ (1+ε)tW. Then the following hold.

* If γ is safe, then d_H(u,v)≤ |γ|.
* If γ is weakly safe, then d_H(u,v)≤ |γ|+2· 40t^3ε'W.
* If |γ|≤ (4t^2+8t+4)ε'W, then d_H(u,v)≤ 40t^3ε'W.

The first claim follows from Lemma <ref>. The last claim holds since d_G(u,v)≤ |γ|≤ (4t^2+8t+4)ε'W.
Note that if two vertices in G are at most (4t^2+8t+5)ε'W apart, an edge between them is added to H, and its weight is set to 40t^3ε'W. We now show that the second claim holds. If the length of γ is at most (4t^2+8t+4)ε'W, then the second claim holds due to the third claim. Otherwise, there exist two net vertices z and z' in 𝒩 connected by a safe path γ' with |γ'|≤ |γ| such that d_G(u,z) and d_G(v,z') are both at most (2t^2+4t+4)ε'W by Lemma <ref>. Then there exist two edges uz and vz' in H whose weights are 40t^3ε'W, and the following holds by the triangle inequality:

d_H(u,v)≤ d_H(u,z) + d_H(z,z')+d_H(z',v)≤ d_H(u,z)+|γ'|+d_H(z',v)≤ |γ|+2· 40t^3ε'W.

The second inequality holds by the first claim, which we have already proved.

Let π_G-F(s,s') be a fixed shortest path between s and s' in G-F. We repeatedly apply Lemma <ref> so that we obtain a sequence of at most |F| base paths whose concatenation connects s and s' in G-F. For an index i, we use z_i-1 and z_i to denote the end vertices of the i-th base path of the desired sequence. Let z_0=s. If π_G-F(s,s') is a base path, then the sequence consists of the single path π_G-F(s,s') and z_1=s'. Precisely, we compute z_1,…,z_ℓ one by one as follows. At the i-th iteration, we assume that we have z_i-1. If π_G-F(z_i-1,s') is weakly safe, then we are done: we consider it as the last base path in the desired sequence, and we let z_i=s'. Otherwise, Lemma <ref> guarantees that there are a vertex y_i of π_G-F(z_i-1,s') and a net vertex z_i such that:

* π_G-F(z_i-1,y_i)·π_G-F(y_i,z_i) is weakly safe,
* d_G-F(y_i,z_i)≤ (t^2+2t)ε'W, and
* d_G-F(z_i,s')≤ d_G-F(z_i-1,s')-t^2ε'W.

We consider π_G-F(z_i-1,y_i)·π_G-F(y_i,z_i) as the i-th base path in the desired sequence. In the proof of Lemma <ref>, we choose z_i as n(x_f) for a failed vertex x_f in F. Recall that n(x) is the closest net vertex in 𝒩 to a vertex x in G, and d_G(x,n(x)) is at most ε'W. Thus, all of z_1,…,z_ℓ are in V(H).

Claim (NumBasePath). The desired sequence has at most |F| base paths.

Let ℓ be the number of base paths we computed. In the proof of Lemma <ref>, we choose z_i as n(x_f) for a failed vertex x_f in F with d_G(z_i-1,x_f)>(2t^2+3t+1)ε'W. Moreover, d_G-F(z_i,s') strictly decreases by at least t^2ε'W for i∈[0,ℓ]. These imply that two distinct z_i and z_i' are at least 2t^2ε'W apart in G-F. If there are two distinct vertices z and z' with d_G(x_f,z),d_G(x_f,z')≤ε'W for the same failed vertex x_f in F, then d_G(z,z')≤ 2ε'W by the triangle inequality. This implies that if z_i and z_i' lay near the same failed vertex x_f, then they would be close in G, and consequently also in G-F. Thus, the failed vertices x_f are distinct over all z_i. Therefore, ℓ is at most |F|.

By Claim <ref>, if every path π_G-F(z_i-1,y_i)·π_G-F(y_i,z_i) in the obtained sequence has length at most (1+ε)tW, then the distance d_H(z_i-1,z_i) is bounded. For this purpose, we prove the following claim.

Claim (InductiveSequence). The length |π_G-F(z_i-1,y_i)·π_G-F(y_i,z_i)| is at most (1+ε)tW if d_G-F(z_i-1,s')≤ (1+ε/2)tW.

The length of π_G-F(z_i-1,y_i)·π_G-F(y_i,z_i) is exactly d_G-F(z_i-1,y_i)+d_G-F(y_i,z_i). Since ε'=ε/(500t^3(f+1)), we have

d_G-F(z_i-1,y_i) + d_G-F(y_i,z_i) ≤ d_G-F(z_i-1,s')+d_G-F(y_i,z_i) ≤ (1+ε/2)tW+(t^2+2t)ε'W ≤ (1+ε)tW.

The first inequality holds because y_i is a vertex of π_G-F(z_i-1,s').

The distance d_G-F(z_0,s') is at most (1+ε/2)tW since z_0=s. Moreover, d_G-F(z_i,s') is strictly decreasing in i∈[0,ℓ]. Thus, every π_G-F(z_i-1,y_i)·π_G-F(y_i,z_i) has length at most (1+ε)tW by Claim <ref>.
Claim (InductivelyHolds). For any index i∈[0,ℓ), d_H(z_i,s')≤ d_G-F(z_i,s')+3·40(|F|-i+1)· t^3ε'W.

Recall that π_G-F(z_ℓ-1,s') is weakly safe. Thus, the claim holds for i=ℓ-1 by Claim <ref>. Then we use induction on i. We fix an index i∈[0,ℓ-1), and assume that the claim holds for i+1. Our goal is to prove that the claim holds for i. The induction hypothesis for i+1 can be restated as follows:

d_H(z_i+1,s') ≤ d_G-F(z_i+1,s') +3·40(|F|-(i+1)+1)t^3ε'W ≤ d_G-F(z_i+1,y_i+1) + d_G-F(y_i+1, s') +3·40(|F|-(i+1)+1)t^3ε'W.

We have the following by Claims <ref> and <ref>:

d_H(z_i,z_i+1)≤ d_G-F(z_i,y_i+1)+d_G-F(y_i+1,z_i+1) +2· 40t^3ε'W.

The distance d_G-F(y_i+1,z_i+1) is at most 20t^3ε'W by the construction of the sequence. Finally, we have the following:

d_H(z_i,s') ≤ d_H(z_i,z_i+1)+d_H(z_i+1,s') ≤ d_G-F(z_i,y_i+1)+ d_G-F(y_i+1,z_i+1)+2· 40t^3ε'W + d_G-F(z_i+1,y_i+1) + d_G-F(y_i+1, s') +3·40(|F|-(i+1)+1)t^3ε'W ≤ d_G-F(z_i,y_i+1)+d_G-F(y_i+1,s')+(3·40(|F|-i)+2· 40+2· 20)t^3ε'W = d_G-F(z_i,s')+3· 40(|F|-i+1)t^3ε'W.

Therefore, the claim holds.

By setting i=0 in Claim <ref>, d_H(s,s') approximates d_G-F(s,s') up to the additive error 120t^3(f+1)ε'W. Since d_G-F(s,s')≥ W/2 and ε'=ε/(500t^3(f+1)), the additive error is at most ε· d_G-F(s,s').

§.§ General Case

We now consider the general case, in which s and s' might not be net vertices in 𝒩. Thus, we use the net vertices n(s) and n(s') of 𝒩 with d_G(s,n(s)),d_G(s',n(s'))≤ε'W. We follow the strategies of the previous section, replacing s and s' by n(s) and n(s'), respectively. For clarity, let p=n(s) and q=n(s') in the following. Claims <ref>-<ref> still hold with respect to p and q if d_G-F(p,q)≤ (1+ε/2)tW, while p and q might not be well separated with respect to W. In such a case, we have

d_H(s,s') ≤ d_H(s,p)+d_H(p,q)+d_H(q,s') ≤ d_H(s,p)+d_H(q,s')+d_G-F(p,q) +3· 40(f+1)t^3ε'W ≤ d_H(s,p)+d_H(q,s')+d_G-F(s,p)+d_G-F(s,s')+d_G-F(s',q)+3· 40(f+1)t^3ε'W ≤ d_G-F(s,s')+(3· 40(f+1)+4·40)t^3ε'W ≤ (1+ε)d_G-F(s,s').

The fourth inequality holds because d_G-F(s,p), d_H(s,p), d_G-F(s',q), and d_H(s',q) are all at most 40t^3ε'W. The last inequality holds by the two facts d_G-F(s,s')≥ W/2 and ε'=ε/(500t^3(f+1)). Moreover, we can show d_G-F(p,q)≤ (1+ε/2)tW easily using the triangle inequality as follows:

d_G-F(p,q) ≤ d_G-F(s,p)+d_G-F(s,s')+d_G-F(q,s') ≤ (1+80t^2ε')tW < (1+ε/2)tW.

Recall that d_G-F(s,s')≤ tW and that d_G-F(s,p), d_G-F(s',q) are at most 40t^2ε'W.

This section is summarized by the following lemma.

Lemma (WellSeparatedPair). H is an (s,s',F;ε)-kernel.

Lemma <ref> guarantees that the query algorithm of the kernel oracle described in this paper is correct. Note that all query algorithms of the oracles in this paper are based on the kernel query algorithm.

§ DISTANCE AND SHORTEST PATH ORACLES FOR MODERATELY FAR VERTICES

In this section, we construct an approximate distance oracle and a shortest path oracle for moderately far vertex queries. Let G be an L-partial f-fault-tolerant Euclidean t-spanner. Here, G might have long edges, as the preprocessing step mentioned before applies only to the previous section. Let n and m denote the numbers of vertices and edges of G, respectively.

Distance oracle. For an approximate distance oracle, it suffices to construct a kernel oracle. Recall that the approximation factor ε must be given at the construction of the oracles. Given two moderately far vertices s and s' and a set F of failed vertices, we simply compute an (s,s',F;ε)-kernel H, and then compute the distance between s and s' in H using Dijkstra's algorithm.
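The distance query thus reduces to one kernel computation and one Dijkstra run. A short Python sketch follows; build_kernel is an assumed handle to the kernel oracle, returning an adjacency map of H.

import heapq

def approx_distance(s, s_prime, F, build_kernel):
    H = build_kernel(s, s_prime, F)      # {u: [(v, weight), ...]}
    dist = {s: 0.0}
    heap = [(0.0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == s_prime:
            return d                     # d_H(s, s'), the reported value
        if d > dist.get(u, float("inf")):
            continue
        for v, w in H.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")                  # s and s' are disconnected in H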
The value returned in this way is an approximate distance between s and s' by the definition of kernels. Therefore, the following theorem holds by Theorem <ref>.

Let G be an L-partial f-fault-tolerant Euclidean t-spanner and let ε>0. There exists an oracle which returns a (1+ε)-approximation of d_G-F(s,s') for a query of two moderately far vertices s,s' and at most f failed vertices F in G in O(t^8f^4) time. Furthermore, we can construct such an oracle in 2^O(g(t,f,ε)) n log^2 n time using 2^O(g(t,f,ε)) n log n space, where g(t,f,ε)=f log(tf/ε).

Shortest path oracle. For an approximate shortest-path oracle, we construct a path-preserving kernel oracle. Given two moderately far vertices s and s' and a set F of failed vertices, we simply compute a path-preserving (s,s',F;ε)-kernel H, and then compute a shortest path π between s and s' in H using Dijkstra's algorithm. Since π might contain an edge not in G-F, we replace each such edge of π with a corresponding path in G-F. More specifically, for an edge uv of π, either its length is at most tL/m^6, or there is an -path between u and v of length w_H(uv). In the former case, we replace uv with an arbitrary path of G-F consisting of edges of length at most tL/m^6. By the spanner property, u and v are connected in G-F, and moreover, a shortest path between them consists of edges of length at most tL/m^6. In the latter case, we simply replace uv with the -path. The correctness of the query algorithm is guaranteed by the following lemma.

Lemma (CorrGenPath). The returned path has length at most (1+2ε)d_G-F(s,s').

Since s and s' are moderately far, |ss'|∈[L/m^2,L/t). The total weight of all edges in G-F of length at most tL/m^6 is at most tL/m^5. For sufficiently large m with m∈Ω(t/ε), the following holds:

L/m^5 ≤ d_G-F(s,s')/m^3 ≤ (ε/t)· d_G-F(s,s').

The length of the returned path is at most d_H(s,s')+tL/m^5, and the distance d_H(s,s') is at most (1+ε)d_G-F(s,s') since H is an (s,s',F;ε)-kernel of G. Thus, the lemma holds by Inequality <ref>.

A remaining challenge is computing an arbitrary path of G-F consisting of edges of length at most tL/m^6. To handle this problem, we utilize the following lemma, which is proved in Section <ref>.

Lemma (ArbitraryPath). For any graph G, we can construct a data structure of size O(fm log n loglog n) in O(mn log n) time which answers connectivity queries in the presence of f failed vertices. This structure can process a set F of at most f failed vertices in O(f^4 log^2 n loglog n) time, and then it allows us to compute an arbitrary path π between any two vertices in G-F in O(f + e(π)) time, where e(π) is the number of edges of π.

We construct the data structure stated in Lemma <ref> on the subgraph of G induced by the edges of length at most tL/m^6. Then we obtain a shortest path oracle that performs as stated in Table <ref>.

Let G be an L-partial f-fault-tolerant Euclidean t-spanner and let ε>0. There exists an oracle which returns a (1+ε)-approximation of π_G-F(s,s') for a query of two moderately far vertices s,s' and at most f failed vertices F in G in O(f^4 log^2 n loglog n + κ) time, where κ is the number of edges in the returned path. Furthermore, we can construct such an oracle in 2^O(g(t,f,ε)) n^2 log^2 n time using 2^O(g(t,f,ε)) n log^2 n loglog n space, where g(t,f,ε)=f log(tf/ε).

The shortest path oracle of G is the union of the path-preserving kernel oracle of G and an arbitrary path oracle of a subgraph Ĝ of G.
Thus, we can obtain the construction time and space complexities of the approximate shortest path oracle by combining the performances described in Theorem <ref> and Lemma <ref>, respectively. The query algorithm for computing an approximate shortest path has three steps. The first step is computing a path-preserving kernel H and updating the arbitrary path oracle of Ĝ with respect to the failed vertices. Next, we compute the shortest path π in H. Finally, we translate π into a path in G-F. By Lemma <ref> and Lemma <ref>, computing H and updating the oracle of Ĝ takes O(f^4 log^2 n(t^8 log(tf/ε)+loglog n)) time. The number of vertices of H is at most O(t^4f log n). This implies that computing π takes O(t^8f^2 log^2 n) time, and the path has O(t^4f log n) edges. Moreover, translating π takes O(t^4f^3 log n+t^4f^2 log n + κ) time by Lemma <ref> and Lemma <ref>, where κ is the number of edges in the returned solution path. We may assume that n is sufficiently large so that loglog n∈Ω(t^8 log^3(tf/ε)). Then the total query time is O(f^4 log^2 n loglog n+κ). The correctness of the shortest path query is guaranteed by Lemma <ref>.

§ ARBITRARY PATH ORACLE

We construct the oracle described in Lemma <ref> by slightly modifying the oracle introduced in <cit.>. Duan and Pettie introduced a connectivity oracle of a general graph in the presence of failed vertices in <cit.>. Given a set of failed vertices and two query vertices, it allows us to check if the two query vertices are connected in the graph in the presence of the failed vertices. To check if two vertices are connected in the presence of failed vertices, they indeed compute an implicit representation of a path. In this case, given an implicit representation of a path, we can report it in time linear in its complexity. However, since this is not explicitly mentioned in <cit.>, we give a brief sketch of their approach here. In the rest of this section, we regard G as a general graph.

Summary of <cit.>. Imagine that a spanning tree T of G has maximum degree four. In this case, for any set F of f failed vertices, we can check the connectivity between any two vertices u and v efficiently as follows. After removing F from T, we have at most 4|F| subtrees. If u and v are contained in the same subtree, they are connected in G-F as well. If that is not the case, for each pair of subtrees, we check if there is an edge connecting the two subtrees. Assume that we can do this in T_1 time for all pairs of subtrees. We can represent the adjacency between the subtrees as the adjacency graph, where each vertex corresponds to a subtree. Then it suffices to check the connectivity between the two subtrees containing u and v in the adjacency graph. Since the adjacency graph has complexity O(f^2), we can check if u and v are connected in O(f^2+ T_1) time in total. Duan and Pettie showed how to check if two subtrees are connected by an edge in T_1=O(f^2 log^2 n) time in total.

To generalize this argument, they construct a hierarchy of components, say {𝒞_i}_i, and a set of Steiner forests, say {𝒯_i}_i, of maximum degree at most four. Here, the depth of the hierarchy is O(log n). For any two components in the hierarchy, either they are disjoint, or one of them is contained in the other. For each component γ in 𝒞_i, several vertices are marked as terminals, and they are contained in the same tree of 𝒯_i. Given a set F of f failed vertices, for each level i, at most f trees in 𝒯_i intersect F. Then after removing F, those trees are split into O(f) subtrees.
We call such subtrees the affected subtrees. For a technical reason, we also call the trees containing the query vertices u or v affected subtrees. A tree of 𝒯_i not intersecting F∪{u,v} is called an unaffected tree, for any index i. Over all levels, the total number of affected subtrees is O(f log n). For any two query vertices, if they are contained in the same affected subtree or the same unaffected tree, we can immediately conclude that they are connected in G-F. Another simple case is that, for some path π in G-F between u and v, all vertices of π are contained in the affected subtrees. As we did before, we construct the adjacency graph for all affected subtrees. A vertex of this graph represents an affected subtree, and two vertices are connected by an edge if their corresponding subtrees are connected by a single edge. Duan and Pettie showed how to compute the adjacency graph for these subtrees in O((f log n)^2(loglog n+f^2)) time in total. Then it is sufficient to check if an affected subtree containing u is connected to an affected subtree containing v in the adjacency graph.

However, it is possible that a path between u and v intersects a large number of unaffected trees of {𝒯_i}_i. We cannot afford to compute the adjacency graph for all trees of {𝒯_i}_i. To overcome this difficulty, they carefully add several artificial edges to G. An artificial edge defined from γ∈𝒞_i for an index i indeed corresponds to a path π in G such that the part of π excluding the last two edges is fully contained in γ. This artificial edge connects the two trees of {𝒯_i}_i containing the two end vertices of π. Let G^+ be the graph obtained from G by adding the artificial edges. Given a set F of failed vertices, they first remove from G^+ the artificial edges defined from components intersecting F∪{u,v}. Then they show that for any two vertices u and v connected in G-F, all vertices of a path between u and v in G^+-F are contained in the affected subtrees, and vice versa. Thus, we can handle this case as we did before. In summary, they construct a connectivity oracle of size O(fm log n loglog n) in O(mn log n) time. This structure can process a set F of at most f failed vertices in O(f^4 log^2 n loglog n) time, and then it allows us to check if any two vertices are connected in G-F in O(f) time.

Modification for arbitrary path queries. Using the connectivity oracle by Duan and Pettie, we can answer arbitrary path queries efficiently. Given a tree T, we can compute the path in T between any two vertices u and v in time linear in the complexity of the returned path. First, we compute the lowest common ancestor w of u and v in constant time, and then traverse the path from u to w. Similarly, we traverse the path from v to w. Moreover, we can do this for any affected subtree. For each component of {𝒞_i}_i, we also compute a spanning tree so that a path between any two vertices in each component can be computed efficiently; a sketch of this tree-path retrieval is given below.
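The following Python sketch shows the standard traversal; parent and depth arrays describe the rooted (sub)tree, and with a constant-time LCA structure the depth-equalizing loops below would be replaced by a single lookup.

def tree_path(u, v, parent, depth):
    up_u, up_v = [u], [v]
    while depth[up_u[-1]] > depth[up_v[-1]]:     # climb the deeper side first
        up_u.append(parent[up_u[-1]])
    while depth[up_v[-1]] > depth[up_u[-1]]:
        up_v.append(parent[up_v[-1]])
    while up_u[-1] != up_v[-1]:                  # climb in lockstep to the LCA
        up_u.append(parent[up_u[-1]])
        up_v.append(parent[up_v[-1]])
    return up_u + up_v[-2::-1]                   # the path u .. w .. v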
Using this observation, we can retrieve a path between u and v in G-F. The connectivity oracle contains the graph G^+ obtained from G by adding artificial edges. The query algorithm by Duan and Pettie first removes invalid artificial edges from G^+, and computes the adjacency graph between all affected subtrees. Each edge of the adjacency graph represents either an actual edge of G or a path whose inner vertices are fully contained in an unaffected component of {𝒞_i}_i. Let π=⟨ e_1, e_2,…,e_ℓ⟩ be a path in the adjacency graph between the affected subtrees containing u and v. If e_j represents a path whose inner vertices are fully contained in an unaffected component C of {𝒞_i}_i, we replace it with such a path. The two end edges are stored in the connectivity oracle, and we can compute the other part in time linear in its complexity using the spanning tree of C that we computed as a data structure. Then, for any two consecutive edges e_j and e_j+1 with common endpoint w, note that w is contained in an affected subtree. Also, the endpoints of the paths/edges represented by e_j and e_j+1 are contained in this subtree. Then we connect them by the path between them in the affected subtree. Therefore, we have Lemma <ref>.

§ PROOFS OF GENERALIZATION LEMMAS

In this section, we prove the generalization lemmas: Lemma <ref> and Lemma <ref>. Precisely, we describe approximate distance and shortest path oracles which support general queries by utilizing the oracles which support queries of two moderately far vertices. Let G denote a given f-fault-tolerant Euclidean t-spanner, and let n and m denote the numbers of vertices and edges of G, respectively. Recall that m∈ O(n). We basically follow the strategy of <cit.> and <cit.>, but we need to store additional information to handle multiple vertex failures. More specifically, we make use of the following lemma.

We can construct five sequences ℒ_1,…,ℒ_5 of real numbers in O(n log n) time so that given any two vertices p and q, an element L_i of ℒ_k with |pq|∈[L_i/m,L_i/t) can be found in constant time. Moreover, L_i≥ m^2 L_i-1 holds for any two consecutive elements L_i-1 and L_i of a single sequence ℒ_k=⟨ L_1,…,L_r⟩.

We construct a data structure with respect to each of the five sequences in Lemma <ref>. To answer a query of two vertices s, s' and failed vertices F, we use the data structure constructed for the sequence which contains an element L_i with |ss'|∈ [L_i/m, L_i/t). We assume that L_i∈ℒ_1. For j∈ [1,|ℒ_1|], let G_j be the subgraph of G induced by the edges whose lengths are at most L_j∈ℒ_1. Moreover, let E_j be the set of edges in G whose lengths are in [L_j-1,L_j), and let V_j be the set of end vertices of E_j in G.

§.§ Partial Spanner S_i

For an element L_i in ℒ_1, we construct a weighted graph S_i as follows. The vertex set of S_i is V_i-1∪ V_i∪ V_i+1. The set V_i-1∪ V_i∪ V_i+1 is decomposed into several components such that the vertices in a single component are connected in G_i-2. For each component U, we compute an f-fault-tolerant Euclidean (1+ε)t-spanner on (the point set) U using the algorithm of <cit.>. Their algorithm computes an O(f^2k)-sized f-fault-tolerant Euclidean spanner of k points in O(k log k+f^2k) time. The edge set of S_i is the union of E_i-1, E_i, E_i+1, and the edge sets of the spanners on U over all components U. Then S_i is a 4L_i-partial f-fault-tolerant Euclidean (1+ε)t-spanner by Lemma <ref>.

Lemma (ComplexityOfSi). We can construct S_i for all elements L_i in O(n log n+f^2n) time. Furthermore, the total complexity of the S_i's is O(f^2n).
Clearly, the construction time and space complexity for all V_i, E_i, and G_i are O(m) in total. Therefore, it is sufficient to analyze the construction time and the space complexity of S_i for each component U of V_i-1∪ V_i∪ V_i+1. To do this, we use the algorithm by Narasimhan et al. <cit.>, which computes an O(f^2k)-sized f-fault-tolerant Euclidean spanner of k points in O(k log k+f^2k) time. Thus, S_i has complexity O(f^2 n_i), and it can be built in O(n_i log n_i + f^2 n_i) time, where n_i=|V_i-1∪ V_i∪ V_i+1|. Note that the sum of all n_i is at most O(m), and O(m)=O(n). Therefore, the construction of all S_i takes O(n log n+f^2n) time, and the total complexity of the S_i's is O(f^2n).

Lemma (PropertiesOfSi). Let F be a set of at most f failed vertices. For two vertices u and v in V(S_i) with |uv|≤ 4L_i, d_S_i-F(u,v)≤ (1+ε)d_G-F(u,v). Moreover, if they are adjacent in S_i but not in G, then d_G(u,v)≤ L_i/m^3.

We add an edge uv to S_i that is not in G only if u and v are connected in G_i-2. Such a connecting path uses at most m edges, each of length at most L_i-2 ≤ L_i/m^4, and thus d_G(u,v)≤ L_i/m^3. As a base case, we assume that u and v are connected in G_i-2. This means they are in a single component U of V_i-1∪ V_i∪ V_i+1. Since S_i contains an f-fault-tolerant Euclidean (1+ε)-spanner on U, d_S_i-F(u,v) is at most (1+ε)|uv|. In this case, the lemma immediately holds. Hence, we assume that u and v are not connected in G_i-2. Let π be the shortest path between u and v in G-F. Since |uv|≤ 4L_i, π does not contain any edge whose length is longer than L_i+1, so π is a path in G_i+1-F. Consider a maximal subpath of π whose edges are in G_i-2. Let p and q be its endpoints, and let π[p,q] be the subpath of π between p and q. We replace π[p,q] with a path π̂_pq between p and q in S_i-F as follows. Note that p and q are connected in G_i-2, and both p and q are in V(S_i) by the maximality of π[p,q]. Then we have d_S_i-F(p,q)≤ (1+ε)|pq| by the base case, and there is a path π̂_pq between p and q in S_i-F whose length is at most (1+ε)|pq|. We can obtain a path π̂ between u and v in S_i-F by replacing every maximal subpath π[p,q] of π contained in G_i-2 with its corresponding path π̂_pq in S_i-F. Then the path π̂ has length at most (1+ε)|π|=(1+ε)d_G-F(u,v).

§.§ f-Fault-Tolerant Connection Tree

For a sequence ℒ_1, an f-fault-tolerant connection tree of ℒ_1 is a modification of the connection tree introduced in <cit.>. An f-fault-tolerant connection tree is a rooted tree such that each node c corresponds to a connected component of G_j for j∈[1,|ℒ_1|]. The tree allows us to find two moderately far vertices p and q in S_i such that neither p nor q is in F, and p and s (and q and s') are connected in G_i-2.

To construct an f-fault-tolerant connection tree T, we consider every connected component of G_j in increasing order of j∈ [0,|ℒ_1|]. Observe that every single vertex is a component of size one in G_0. For a single vertex v of G, we add a leaf node c corresponding to v, and set i(c)=0. For j≥ 1, a component in G_j is a union of components in G_j-1. For a component C of G_j which is not a component of G_j-1, we add a new node c which corresponds to C, and assign i(c)=j. For every node c' that has no parent yet and corresponds to a component C' with C'⊂ C, we set c to be the parent of c' by adding an edge cc'. Furthermore, we select (f+1) arbitrary vertices from (V_i(c)∪ V_i(c)+1)∩ C', and store them at the edge cc'. If the number of vertices in (V_i(c)∪ V_i(c)+1)∩ C' is less than (f+1), then we store all of them. We do this until j=|ℒ_1|, and then we have a single tree T.
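The construction admits a compact union-find implementation. The following Python sketch is our own modeling: edges_by_level[j] holds the edges of G_j not in G_j-1, V[j] is the vertex set V_j, and we assume the final graph is connected so that a single root remains.

import collections

class CTNode:
    def __init__(self, level):
        self.level = level        # i(c)
        self.children = []        # child nodes
        self.stored = {}          # child node -> up to f+1 vertices on edge cc'

def build_connection_tree(n, edges_by_level, V, f):
    root_of = list(range(n))                       # union-find over vertices
    def find(x):
        while root_of[x] != x:
            root_of[x] = root_of[root_of[x]]
            x = root_of[x]
        return x
    node = {v: CTNode(0) for v in range(n)}        # one leaf per vertex
    members = {v: {v} for v in range(n)}           # component contents
    for j in sorted(edges_by_level):
        merged = {}                                # new root -> old roots merged
        for u, w in edges_by_level[j]:
            ru, rw = find(u), find(w)
            if ru == rw:
                continue
            olds = merged.pop(ru, {ru}) | merged.pop(rw, {rw})
            root_of[rw] = ru
            merged[ru] = olds
        for r, olds in merged.items():
            c = CTNode(j)
            pool = V.get(j, set()) | V.get(j + 1, set())
            for old in olds:
                c.children.append(node[old])
                c.stored[node[old]] = list(members[old] & pool)[:f + 1]
                if old != r:
                    members[r] |= members[old]
            node[r] = c
    return node[find(0)]                           # the single root of T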
Note that each node of T does not store its corresponding component. Every leaf node of T corresponds to one vertex of G. By construction, for a node c in T, i(c) is the smallest index i such that the vertices corresponding to the leaf nodes in the subtree rooted at c are connected in G_i.

Lemma (ComplexityConnectionTree). We can construct an f-fault-tolerant connection tree of size O(fn) in O(n log n) time which supports LCA queries in constant time.

Lemma (ConnectionTreeIndex). For a query of two vertices s, s' with |ss'|∈ [L_i/m,L_i/t), the lowest common ancestor c of the leaf nodes in T which correspond to s and s' stores i(c)=i or i-1.

The lemma holds if s and s' are connected in G_i but not in G_i-2. More specifically, for such a pair (s,s') of vertices, if s and s' are connected in G_i-1, then the lowest common ancestor c of the leaf nodes s and s' in T stores i(c)=i-1; otherwise, it stores i(c)=i. Thus, in the following, we show that they are connected in G_i but not connected in G_i-2. First, s and s' are connected in G_i since G is a Euclidean t-spanner, and thus d_G(s,s') lies in [L_i/m,L_i); hence there is no edge on the shortest path in G between s and s' whose length is at least L_i. Second, they are not connected in G_i-2 because d_G(s,s') ≥ L_i/m. If that were the case, d_G(s,s') ≤ nL_i-2 < L_i-1 < L_i/m, which is a contradiction.

Lemma (ConnectionTree). We can compute p and q in S_i-F in O(f) time such that p and q are moderately far in S_i and s and p (and q and s') are connected in G_i-2.

The algorithm takes O(f) time by Lemma <ref>. We show that the algorithm always finds a vertex p connected to s in G_i-2; the claim for q can be proved analogously. Recall that c is the LCA of the two leaf nodes s and s' in T, and c' is the child node of c with s∈ L(c'). By Lemma <ref>, i(c) is either i or i-1. Then i(c')≤ i-1 and the following holds:

(V_i(c)∪ V_i(c)+1)∩ L(c') ⊆ (V_i-1∪ V_i∪ V_i+1)∩ L(c') = V(S_i)∩ L(c').

The vertices stored at the edge cc' are in S_i and connected with s in G_i(c'). We show that if i(c')≤ i-2, then there exists a vertex p stored at cc' but not in F. The other case is that i(c')=i-1, and then i(c'')≤ i-2, where c'' is the child node of c' with s∈ L(c''). Thus, this case can be proved analogously with respect to c'' instead of c'.

Now we assume that i(c')≤ i-2. Suppose that all the vertices stored at cc' are contained in F. In this case, cc' stores all vertices in (V_i(c)∪ V_i(c)+1)∩ L(c'). Recall that the number of vertices stored at cc' is exactly min{f+1, |(V_i(c)∪ V_i(c)+1)∩ L(c')|}. This means that s and s' are not connected in G_i-F because i(c)=i or i-1. This contradicts the fact that s and s' are connected in G_i-F. Thus, there is a vertex p stored at cc' but not in F. Furthermore, s and p are in L(c'), and they are connected in G_i-2.

We now show that p and q are moderately far in S_i. Recall that S_i is a 4L_i-partial f-fault-tolerant Euclidean (1+ε)t-spanner by Lemma <ref>. Note that d_G(s,p) and d_G(q,s') are both at most mL_i-2 since s and p (and q and s') are connected in G_i-2. By the triangle inequality, the following inequalities hold since m is strictly larger than t:

|pq| ≥ |ss'|-|ps|-|s'q| ≥ L_i/m -2mL_i-2 ≥ L_i(1/m-2/m^3) > 4L_i/m^2, and
|pq| ≤ |ss'|+|ps|+|s'q| ≤ L_i/t +2mL_i-2 < 2L_i/t.

The last term of the first inequality is strictly larger than 4L_i/m^2 for m≥ 5. Thus, p and q are moderately far.
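The selection of p in the proof above can be phrased as a two-step scan in Python; child_toward is an assumed O(1) helper returning the child of a node whose subtree contains the leaf of s.

def pick_guide(c, s, F, child_toward):
    # Scan the at most f+1 vertices stored on the edge toward s; by the
    # case analysis above, at most two edges ever need to be inspected.
    for _ in range(2):
        child = child_toward(c, s)
        for cand in c.stored[child]:     # O(f) candidates
            if cand not in F:
                return cand              # p: in S_i - F, connected to s in G_{i-2}
        c = child                        # case i(c') = i - 1: descend once more
    return None   # cannot happen when s and s' are connected in G_i - F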
§.§ Distance Oracle: Proof of Lemma <ref>

We construct a (1+ε)-approximate distance oracle for every partial spanner S_i; each oracle supports a query of two moderately far points and at most f failed vertices in S_i. For a query of s,s' and a set F of at most f failed vertices, our goal is to compute an approximate distance between s and s' in G-F. We compute an element L of a sequence ℒ among the sequences described in Lemma <ref> with |ss'|∈ [L/m,L/t). Note that we have a partial spanner S with respect to L and an f-fault-tolerant connection tree of ℒ. We compute two moderately far vertices p and q in S-F using Lemma <ref>, and compute d_pq, which is a (1+ε)-approximate distance between p and q in S-F. The query algorithm returns d_pq+2tL/m^2.

Lemma (EpsCorrectness). The query algorithm returns an approximate distance between s and s' in G-F in O(f+T) time, where T is the time for computing d_pq.

It is clear that the query time is in O(f+T). To prove this lemma, we show that d_pq+2tL/m^2 is a (1+8ε)-approximate distance between s and s' in G-F. For this purpose, we first show inequalities (<ref>) and (<ref>). Recall that |ss'|∈[L/m,L/t), and d_G-F(s,s')≥ L/m. Thus, we have

L/m^2 ≤ d_G-F(s,s')/m ≤ (ε/2t)· d_G-F(s,s').

By Lemma <ref>, d_G(s,p)≤ mL_i-2, where L_i=L, since G is an f-fault-tolerant Euclidean t-spanner. Then by inequality (<ref>), we prove d_G-F(s,p)≤ (ε/2)· d_G-F(s,s') as follows:

d_G-F(s,p) ≤ t·d_G(s,p) ≤ tm L_i-2 ≤ tL/(2m^2) ≤ (ε/2)· d_G-F(s,s').

Analogously, d_G-F(s',q)≤ (ε/2)· d_G-F(s,s'). Now we are ready to prove that the upper bound holds. We have d_pq≤ (1+ε)d_S-F(p,q), and then d_pq is at most (1+ε)^2 d_G-F(p,q) by Lemma <ref>. Thus, the following holds:

d_pq+2tL/m^2 ≤ (1+ε)^2 d_G-F(p,q)+ ε· d_G-F(s,s') ≤ (1+ε)^2(d_G-F(s,s')+d_G-F(p,s)+d_G-F(q,s'))+ ε· d_G-F(s,s') ≤ (1+ε)^3 d_G-F(s,s')+ ε· d_G-F(s,s') ≤ (1+8ε)d_G-F(s,s').

To show that the lower bound holds, we construct a walk between p and q in G-F whose length is at most d_pq+2tL/m^2. Let π_S-F(p,q) be a shortest path between p and q in S-F. For each edge uv in π_S-F(p,q) not appearing in G, we replace this edge with a shortest path π_G-F(u,v) between u and v. Note that |π_G-F(u,v)|≤ tmL_i-2, where L_i=L, since we add an edge to S_i=S only if the edge is in G or its two end vertices are connected in G_i-2. Therefore, this process increases the length of the path by at most tm^2 L_i-2 ≤ tL/m^2 in total. In other words, the obtained path is a path between p and q in G-F whose length is at most d_S-F(p,q)+tL/m^2. Note that d_pq is at least d_S-F(p,q). By the triangle inequality and inequality (<ref>),

d_G-F(s,s') ≤ d_G-F(p,q)+d_G-F(s,p)+d_G-F(q,s') ≤ d_S_i-F(p,q)+tL/m^2+tL/m^2 ≤ d_pq+2tL/m^2.

By combining Lemmas <ref>, <ref>, <ref>, and <ref>, we can obtain Lemma <ref>. By combining the lemma with Theorem <ref>, the following theorem holds.

Let G be an f-fault-tolerant Euclidean t-spanner and let ε>0. There exists an oracle which returns a (1+ε)-approximation of d_G-F(s,s') for a query of two vertices s,s' and at most f failed vertices F in G in O(t^8f^4) time. Furthermore, we can construct such an oracle in 2^O(g(t,f,ε)) n log^2 n time using 2^O(g(t,f,ε)) n log n space, where g(t,f,ε)=f log(tf/ε).

§.§ Shortest Path Oracle: Lemma <ref>

We construct a (1+ε)-approximate shortest path oracle for every partial spanner S_i; each oracle supports a query of two moderately far points and at most f failed vertices in S_i.
Then we can compute a path in S_i for a query. To report an actual path in G, we need additional data structures which allow us to transform a path in S_i into a path in G. Let Ĝ_i be the subgraph of G induced by the edges of length at most tL_i/m^3. We refer to Ĝ_i as the short-edge subgraph with respect to tL_i/m^3. We construct the arbitrary path oracle described in Lemma <ref> on Ĝ_i so that we can efficiently compute a path between two connected vertices in Ĝ_i-F for any set F of at most f failed vertices.

For a query of s,s' and a set F of at most f failed vertices, the query algorithm is similar to the distance query. We compute an element L with |ss'|∈[L/m,L/t) by Lemma <ref>, and two moderately far vertices p and q in a partial spanner S-F using Lemma <ref>. Note that we have a partial spanner S and a short-edge subgraph Ĝ with respect to tL/m^3. First, we update the arbitrary path oracle of Ĝ for the failed vertices F. Moreover, we get a path γ_pq which is a (1+ε)-approximate shortest path between p and q in S-F. We then transform γ_pq into an actual path between s and s' in G-F. First, we compute an arbitrary path γ_sp between s and p in Ĝ-F and an arbitrary path γ_qs' between s' and q in Ĝ-F. Then, we transform γ_pq into a path in G-F by replacing each edge uv of γ_pq not appearing in G-F with an arbitrary path in Ĝ-F between u and v. Finally, we return the path π̂(s,s') which is the concatenation of γ_sp, the transformed γ_pq, and γ_qs'.

Lemma (CorrGenPath_gen). The query algorithm returns an approximate shortest path between s and s' in G-F in O(f^4 log^2 n loglog n + T+ f· e(π̂)) time, where T is the time for computing γ_pq and e(π̂) is the number of edges in the returned path π̂(s,s').

The query time follows from Lemma <ref>. In this proof, we show that the length of the returned path is no larger than the distance returned by the distance query algorithm, which is an approximate distance by Lemma <ref>. The total weight of all edges in Ĝ is at most tL/m^2. Thus, |π̂(s,s')| is at most |γ_pq|+tL/m^2. Note that |γ_pq| is at most (1+ε)d_S-F(p,q)≤ (1+ε)^2 d_G-F(p,q) by Lemma <ref>. The length |π̂(s,s')| of π̂(s,s') is at most (1+8ε)d_G-F(s,s') by Inequalities <ref> and <ref>. Thus, the returned path is an approximate shortest path between s and s' in G-F.

By combining Lemmas <ref>, <ref>, <ref>, <ref>, and <ref>, we can obtain Lemma <ref>. By combining the lemma with Theorem <ref>, the following theorem holds.

Let G be an f-fault-tolerant Euclidean t-spanner and let ε>0. There exists an oracle which returns a (1+ε)-approximate shortest path π_G-F(s,s') for a query of two vertices s,s' and at most f failed vertices F in G in O(f^4 log^2 n loglog n +f·κ) time, where κ is the number of edges in the returned path. Furthermore, we can construct such an oracle in 2^O(g(t,f,ε)) n^2 log^2 n time using 2^O(g(t,f,ε)) n log^2 n loglog n space, where g(t,f,ε)=f log(tf/ε).

§ CONCLUSION

In this paper, we presented efficient approximate distance and shortest-path oracles for an f-fault-tolerant Euclidean t-spanner and a value ε>0. Although we state our results for the case that the underlying space is two-dimensional, we can extend our results to the d-dimensional Euclidean space. In this case, we can apply all strategies outlined in this paper while slightly increasing the performance bounds of the oracles stated in Table <ref>. This extension does not impact the dependency on n, while the dependency on {t,f,ε} increases.
More specifically, in d-dimensional space, for any r-net 𝒩 of a Euclidean graph G, there exist at most (2c+2)^d net vertices of 𝒩 within a ball of radius cr. As a result, the bound in Lemma <ref> becomes singly exponential in d. Consequently, the function h(t,f,ε) in Table <ref> becomes h(d,t,f,ε)= exp(O((d+f)log(dtf/ε))). Analogously, the kernel oracle returns a t^O(d)f^2-sized kernel in t^O(d)f^4 time for a query of two moderately far vertices and failed vertices, and the path-preserving kernel oracle returns a kernel of size t^O(d)f^2 log^3(dtf/ε) log^2 n in t^O(d)f^4 log^3(dtf/ε) log^2 n time. The query times stated in Table <ref> increase accordingly.

Although this is the first near-linear-sized approximate shortest-path oracle for graphs with vertex failures, one might think that it is still not practical because of large hidden constants in the performance guarantees. Although it seems hard to avoid the exponential dependency on t and f in the oracle sizes theoretically, we believe that the oracles can be made more efficient in practice by applying several optimization tricks. This is indeed one of the interesting directions for future work; our work is just a starting point. We hope that our work will be a stepping stone towards bridging the gap between theory and practice in the routing problem for dynamic networks.
http://arxiv.org/abs/2312.16397v1
{ "authors": [ "Kyungjin Cho", "Jihun Shin", "Eunjin Oh" ], "categories": [ "cs.CG", "cs.DS" ], "primary_category": "cs.CG", "published": "20231227040357", "title": "Approximate Distance and Shortest-Path Oracles for Fault-Tolerant Geometric Spanners" }
http://arxiv.org/abs/2312.16657v1
{ "authors": [ "Iaroslav V. Blagouchine", "Eric Moreau" ], "categories": [ "math.NT", "cs.NA", "math.CA", "math.NA" ], "primary_category": "math.NT", "published": "20231227180609", "title": "On a finite sum of cosecants appearing in various problems" }
Graph Context Transformation Learning for Progressive Correspondence Pruning

Most existing correspondence pruning methods concentrate only on gathering as much context information as possible while neglecting effective ways to utilize such information. In order to tackle this dilemma, in this paper we propose the Graph Context Transformation Network (GCT-Net), which enhances context information to conduct consensus guidance for progressive correspondence pruning. Specifically, we design the Graph Context Enhance Transformer, which first generates the graph network and then transforms it into multi-branch graph contexts. Moreover, it employs self-attention and cross-attention to magnify the characteristics of each graph context, emphasizing the unique as well as the shared essential information. To further apply the recalibrated graph contexts to the global domain, we propose the Graph Context Guidance Transformer. This module adopts a confidence-based sampling strategy to temporarily screen high-confidence vertices for guiding accurate classification by searching global consensus between the screened vertices and the remaining ones. Extensive experimental results on outlier removal and relative pose estimation clearly demonstrate the superior performance of GCT-Net compared to state-of-the-art methods across outdoor and indoor datasets. The source code will be available at: https://github.com/guobaoxiao/GCT-Net/.

§ INTRODUCTION

Two-view correspondence pruning methods strive to form robust correspondences between two sets of interest points to lay the foundation for many computer vision tasks, such as Structure from Motion (SfM) <cit.>, Simultaneous Localization and Mapping (SLAM) <cit.> and Image Fusion <cit.>. Correspondence pruning involves three steps: keypoint and descriptor extraction, establishment of the initial correspondence set, and outlier (i.e., false correspondence) removal. More specifically, we first employ established methods, such as SuperPoint <cit.> and SIFT <cit.>, to detect keypoints and compute their descriptors. Subsequently, the initial correspondence set is generated by applying a nearest-neighbor matching algorithm to the descriptors. However, the initial correspondence set often contains numerous outliers (shown in Fig. <ref>) due to the limitations of local descriptor representation and the presence of low-quality images. Therefore, the third step, identifying and eliminating outliers, is indispensable. As illustrated in Fig. <ref> and Fig. <ref>, outliers are removed and most inliers are preserved, enhancing the available insights for post-processing endeavors.

Correspondence pruning methods have mainly evolved into two distinct categories, i.e., traditional methods and learning-based methods. RANSAC <cit.> and its modifications <cit.> are representative of traditional methods. These methods adopt a sampling-verification loop to retain the correspondences adhering to a specific geometric model. However, in scenarios with a high proportion of outliers, the runtime of these methods rapidly increases and, simultaneously, the quality of the results significantly deteriorates. In our task, it is common for the initial correspondence set to have an outlier proportion exceeding 80%, rendering these methods inapplicable. Most learning-based advancements approach correspondence pruning as a binary classification problem and demonstrate remarkable potential.
But this treatment also poses formidable challenges: (1) unordered data should be handled appropriately to ensure permutation invariance; (2) local context and global context should be adequately mined to provide the basis for identifying outliers. For example, LFGC-Net <cit.> and OANet <cit.> employ a PointNet-like architecture <cit.> which embeds Context Normalization (CN) into Multi-Layer Perceptrons (MLPs) to deal with each correspondence individually while obtaining the global context. CLNet <cit.>, MS^2DG-Net <cit.> and NCM-Net <cit.> all construct the graph network via K-Nearest Neighbors (KNN) to build relationships among adjacent correspondences for searching and aggregating local context. All these methods aim to obtain adequate local and global context information while preserving the permutation invariance of the input data. However, their primary emphasis lies in acquiring abundant context information, overlooking the practical utilization of such context knowledge.

In this paper, we propose the Graph Context Enhance Transformer (GCET) block, which not only gathers multi-branch graph contexts but also thoroughly mines and emphasizes their respective and common significant context information via self-attention and cross-attention to boost the discriminating capability of the network. Specifically, we first employ KNN to construct the graph network, where each node represents a correspondence and each edge denotes a relationship between two correspondences. Next, we transform the graph network, filled with a wealth of context information, into two completely different types of graph context to receive various evidence. In one type, local context is aggregated by MLPs and maxpooling, which, although sacrificing a considerable amount of structure information, guarantees the reliability of the graph context. In the other type of graph context, local context is gathered by affinity-based convolution, which captures the vast majority of relationships in the graph structure but also retains contaminated information. It is feasible to immediately integrate the complementary graph contexts from the multiple branches, but this is not the optimal choice because there is untapped potential yet to be explored. Hence, we utilize self-attention to amplify the crucial parts and reliable dependencies among correspondences, emphasizing the distinctiveness of the graph context within each branch. In parallel, we employ cross-attention to uncover and enhance the shared importance between the different salient graph contexts. Finally, a discriminative fusion strategy is applied to absorb the salient components of the graph contexts while discarding the redundant portions.

Moreover, in order to further apply the fused graph context to the global domain, we present the Graph Context Guidance Transformer (GCGT) block, which adopts score-based sampling to select a set of candidates and utilizes a transformer to guide spatial consensus at the global level through the sampled candidates. Specifically, we first employ a linear layer to score the confidence values of each correspondence. Then, we select a set of correspondences with high scores as the global guiding source and regard the original correspondences as the guiding target. It is worth noting that, before the guiding procedure, we also perform a cluster operation on the guiding source and target to enhance their reliability and reduce computational overhead.
Finally, the guiding source and target are fed into the transformer to steer correspondences with high global consistency and shape long-range dependencies. Additionally, we also capture short-range spatial dependencies to complement the output of the transformer.

Our contributions are three-fold: (1) We propose the GCET block, which generates multi-branch graph contexts with respective characteristics and employs self-attention and cross-attention to emphasize their individual nature and shared crucial knowledge for better absorbing the advantages of each branch. (2) Drawing on the foundation of global consensus and guided by the principle that distinct inliers guide hidden inliers, we design the GCGT block, which samples credible inliers to direct the remaining inliers exhibiting high spatial consensus. (3) By combining the GCET block and the GCGT block, we develop an effective Graph Context Transformation Network (GCT-Net) for outlier removal and relative pose estimation, achieving state-of-the-art performance on both outdoor and indoor datasets.

§ RELATED WORK

§.§ Learning-Based Correspondence Pruning Methods

The advent of deep learning has provided many new inspirations for tackling outlier rejection. As a pioneer in this field, LFGC-Net <cit.>, driven by <cit.>, subdivides the correspondence pruning task into outlier/inlier labeling and essential matrix regression. Besides, it further designs a permutation-equivariant architecture, which integrates CN into MLPs, thereby dealing with unordered data while obtaining the global context. Most subsequent works adopt this de facto framework and incorporate or modify components to gain context information and enhance the network performance. For example, in order to overcome the disturbance of contaminated information caused by CN, ACNet <cit.> transforms CN into an attentive variant to treat different information discriminately. OANet <cit.> clusters correspondences with a differentiable pooling layer and recovers the original order of correspondences with a differentiable unpooling layer, exploiting potential local context as well as reducing computational overhead. CLNet <cit.> first constructs a local graph for each correspondence to gather local context and then connects all local graphs to generate a global one to acquire abundant global context. MS^2DG-Net <cit.> leverages the combination of maxpooling and self-attention to progressively update the local graphs for multi-level context. Diverging from the aforementioned approaches, which generate a solitary graph context and directly employ it in a superficial manner, our method generates multi-branch graph contexts with distinct characteristics and knowledge, and refines the graph contexts through both self-interactions and collaborative interactions.

§.§ Attention Mechanism in Correspondence Pruning

Currently, attention mechanisms have been extensively employed in many computer vision tasks, including semantic segmentation <cit.>, image fusion <cit.> and so on. For the field of correspondence pruning, the introduction of attention mechanisms is beneficial for focusing on inlier information and suppressing redundant information, but it still necessitates some appropriate modifications. For instance, SENet <cit.> is a simple yet efficient channel attention mechanism that emphasizes important knowledge in the channel dimension via the squeeze-and-excitation (SE) block. However, it prioritizes the global aspect and neglects the demand for local context in correspondence pruning.
Therefore, MSA <cit.> introduces multi-scale attention by remoulding the SE block, carrying out information recalibration from multiple perspectives for accurate inlier/outlier classification. Additionally, CA <cit.> and CBAM <cit.>, integrating spatial attention mechanisms, further consider the weight allocation in the spatial dimension, but their effectiveness is limited for correspondence pruning methods. The emergence of the vanilla Transformer brings new prospects for capturing long-range spatial dependencies, but simultaneously introduces various challenges. Firstly, considering that the number of correspondences N typically falls within the range of 1500 to 2000 in correspondence pruning, the computational complexity of Transformers, which is O(N^2· D), results in a substantial computational burden. Although some Transformer variants, like ViT <cit.> and Swin-Transformer <cit.>, reduce the computational load, their patch strategy can negatively impact the handling of unordered data. Secondly, in the initial correspondence set, outliers constitute the majority, and their presence can interfere with the similarity computation, significantly diminishing the reliability of the output attention map. Therefore, we propose the GCGT block, which mitigates the impact of contaminated information during the interaction process and reduces computational overhead via a scaling approach. This transformation renders the Transformer-like architecture well-suited for correspondence pruning.§ METHODOLOGY§.§ Problem Formulation Given an image pair (I,I^'), we first employ off-the-shelf keypoint extraction methods <cit.> to detect interest points and compute the corresponding descriptors. Afterwards, the initial correspondence set Q=[q_1, q_2, q_3,…, q_N] ∈ℝ^N×4, consisting of N correspondences, is generated by roughly matching keypoints through descriptor similarities. As the basic element of Q, q_i represents the i-th correspondence, which comprises the coordinates of two keypoints, one in each image, normalized by the camera intrinsics. However, the brute-force matching process contributes to an overwhelming proportion of outliers in the initial correspondence set. Consequently, an efficient correspondence pruning method should be developed to conduct more accurate correspondence classification and relative pose estimation. In pursuit of this objective, we propose the Graph Context Transformation Network (GCT-Net) and illustrate its architecture in Fig. <ref>. We adopt the progressive pruning strategy <cit.> in our network, which is capable of gradually screening outliers and thus mitigates the negative impact of contaminated information. Within each pruning module, the input data passes through 3 ResNet blocks, the Graph Context Enhance Transformer (GCET), another 3 ResNet blocks and the Graph Context Guidance Transformer (GCGT). The ResNet blocks are used to boost the representation ability of the network, GCET aims to enhance the converted graph context, and GCGT further leverages the enhanced context to guide the remaining inliers. In general, we first utilize two series-connected pruning modules to deal with the input correspondence set Q. The operations in these two modules can be respectively expressed as: (Q_1,o_1)=f_σ_1(Q) and (Q_2,o_2)=f_σ_2(Q_1), where σ_1 and σ_2 are their related parameters. Here, Q_1∈ℝ^N_1×4 and Q_2∈ℝ^N_2×4 denote the pruned correspondence sets (N>N_1>N_2), and o_1∈ℝ^N_1×1 and o_2∈ℝ^N_2×1 represent the output logit values. Based on the logit values, we sort the correspondences in descending order and preserve the top 50% of correspondences, pruning the lower 50%.
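A minimal sketch of this pruning step, assuming a per-correspondence logit vector is already available (`prune_by_logits` is a hypothetical helper, not the paper's code):

```python
import torch

def prune_by_logits(Q, logits, keep_ratio=0.5):
    """Keep the top-scoring fraction of correspondences and drop the rest."""
    k = int(keep_ratio * Q.size(0))
    idx = logits.squeeze(-1).topk(k).indices
    return Q[idx], idx

Q = torch.randn(2000, 4)        # 2000 putative correspondences (x, y, x', y')
logits = torch.randn(2000, 1)   # per-correspondence logits from a module
Q1, kept = prune_by_logits(Q, logits)   # 1000 survive the first module
```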
It is worth noting that the feature map of Q_2 is preserved and additionally passed through a linear layer to estimate the inlier weight set w. The next step (i.e., model estimation) regards w as supplementary input and combines Q_2 with w to execute the parametric model calculation (i.e., to estimate the essential matrix Ê). Finally, we leverage Ê combined with Q to carry out the full-size verification, which can retrieve inliers falsely removed during the sequential pruning process. In short, the model estimation and the full-size verification can be expressed as:Ê =H(Q_2,w),ED =V(Ê,Q),where H(·,·) denotes the weighted eight-point algorithm <cit.> and V(·,·) represents the full-size verification operation, which measures the epipolar distance set of all correspondences (i.e., ED=[ed_1, ed_2,…, ed_N] ∈ℝ^N×1). Each correspondence q_i corresponds to an epipolar distance ed_i, and we classify q_i as an inlier when ed_i is less than a preset threshold.§.§ Graph Context Enhance Transformer Collecting abundant local context is highly beneficial for accurate correspondence pruning. The graph network plays a significant role in establishing and exploring relationships among neighbors. <cit.> and <cit.> leverage the nature of the graph network to generate graph contexts with respective advantages. However, the converted graph contexts are not thoroughly explored and refined, resulting in a lost opportunity to substantially improve the effectiveness of subsequent tasks. As shown in Fig. <ref>, GCET first transforms the feature map of correspondences F={f_1,⋯,f_N} into the graph network 𝒢_i=(𝒱_i,ℰ_i), where 𝒢_i denotes the graph of the i-th correspondence, 𝒱_i=(v_i^1, ⋯, v_i^k) contains its k neighbors and ℰ_i=(e_i^1, ⋯, e_i^k) indicates the relationships between 𝒢_i and its neighbors. Here, we define e_i^j as [f_i‖ f_i-f_i^j], where [·‖·] denotes the concatenation operation. Then, the graph network is converted into two different types of graph context, the credible graph context (CGC) and the structure graph context (SGC), by maxpooling with MLPs and by convolution with p-neighborhood segmentation <cit.>, respectively, to gather diverse context information. This process can be expressed as follows:CGC =MLPs(Maxpooling(MLPs(ℰ_i))),SGC =Conv_2(Conv_1(ℰ_i)),where Conv_1 and Conv_2 are convolutional operations with kernel sizes of 1× p and 1×k/p, respectively. Although CGC discards a majority of the edge information, it retains the most credible neighbor relationships. Conversely, SGC captures most of the structure information among nodes but is susceptible to interference from contaminated information. Next, in order to amplify the strengths of these graph contexts, we employ self-attention to recalibrate each of them and leverage cross-attention in parallel to uncover their shared significant parts. However, both self-attention and cross-attention demand substantial computational resources, especially when dealing with a large N. Therefore, before the recalibration of the graph contexts, it is imperative to streamline them into {CGC^', SGC^'} through a clustering operation <cit.> that compacts vertices in a learnable manner. The detailed operation can be formulated as follows:CGC^', SGC^' =Cluster(CGC, SGC),CGC^',e =(SA(CGC^')⊕ CA(SGC^',CGC^')),SGC^',e =(SA(SGC^')⊕ CA(CGC^',SGC^')),where CGC^',e and SGC^',e denote the enhanced graph contexts in a clustered state, SA(·) represents the self-attention and CA(·,·) indicates the cross-attention, where the query is derived from the first input and the key-value pairs source from the second input. Additionally, ⊕ is the attentional fusion operation <cit.>, which discriminately treats the transitional graph contexts to generate a complete graph context with strong characteristics.
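A compact sketch of the graph construction feeding both branches — KNN in feature space and the edge features [f_i ‖ f_i - f_i^j] — together with the maxpooling aggregation used by the CGC branch (illustrative code with names of our choosing):

```python
import torch

def knn_edge_features(F, k=9):
    """Build the edge features [f_i || f_i - f_i^j] of a KNN graph in
    feature space; F is the (N, d) feature map of correspondences."""
    dist = torch.cdist(F, F)                                  # (N, N)
    idx = dist.topk(k + 1, largest=False).indices[:, 1:]      # drop self-match
    neigh = F[idx]                                            # (N, k, d)
    center = F.unsqueeze(1).expand_as(neigh)                  # (N, k, d)
    return torch.cat([center, center - neigh], dim=-1)        # (N, k, 2d)

F = torch.randn(2000, 128)
edges = knn_edge_features(F)
cgc = edges.max(dim=1).values   # maxpooling branch: keeps only the most credible relation
```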
Finally, the enhanced graph contexts are recovered to their original sizes to keep permutation invariance, and they go through another attentional fusion to combine their respective highlighted advantages. The process can be described as:CGC^e, SGC^e =Recover(CGC^',e, SGC^',e),GC^e =(CGC^e⊕ SGC^e),where GC^e is the output of GCET.§.§ Graph Context Guidance Transformer Global consensus serves as convincing evidence to assist in inlier/outlier discrimination. Nevertheless, due to the substantial presence and random distribution of outliers, excavating global consensus among inliers is a highly challenging task. To extend the application of the enhanced graph context to the global realm, we design GCGT to guide the inlier discrimination process by mining the consensus among inliers. The detailed guidance process is shown in Fig. <ref>. Specifically, we first subject the enhanced graph context to a linear layer, assigning confidence scores to each node to generate a score table (ST). Based on ST, we proceed to sort the confidence scores in descending order and sample the vertices with higher confidence scores to form a set of candidates. Notably, before delving into the consensus guidance procedure, we expand the candidate set to enhance its expressive capacity and mitigate the potential disruption caused by hidden outliers. Simultaneously, we perform a cluster operation on the enhanced graph context before the sampling phase, streamlining its representation and concurrently reducing the computational load during the guidance process. These preparations can be described as:ST =Linear Layer(GC^e),GS =Expand(Sample(Sort_dec(ST,sr))),GT =Cluster(GC^e),where Sort_dec implies sorting targets in descending order, sr indicates the sampling rate, and GS and GT represent the guiding source and the guiding target, respectively. Expand is the inverse operation of Cluster. Next, we employ the vanilla Transformer to conduct the consensus guidance, which seeks similarities between GS and GT to assign greater attention to inliers. Here, the query is a linear projection of GS and the key-value pairs source from GT. To prevent information loss during the guidance procedure, we apply a skip connection to GS. Besides, we also apply OAFilter <cit.> to the clustered graph context, which captures spatial-wise dependencies complementing the output of the Transformer. Finally, we recover the fused results of the OAFilter output and the consensus guidance, and further integrate GC^e to harmonize the balance between local context and global consensus. These operations can be formulated as follows:GR =((TF(GS,GT)+GS)⊕ OAFilter(GT)),GC_out =(Recover(GR)⊕ GC^e),where GR denotes the guiding results, TF indicates the vanilla Transformer and GC_out is the final graph context output by GCGT.§.§ Loss function Following <cit.>, we employ a hybrid loss function to optimize GCT-Net. The loss function is composed of two constituents:L =L_cls(o_i , y_i)+δ L_reg(Ê, E), where L_cls and L_reg denote the correspondence classification loss and the essential matrix regression loss, respectively, and δ is a parameter used to balance the two losses.
L_cls can be further formulated as:L_cls(o_i,y_i)=∑_i=1^Kℋ(η_i⊙ o_i,y_i), where ℋ represents the binary cross-entropy loss, o_i signifies the logit values derived from the i-th pruning module, and y_i denotes the ground-truth label set for the i-th pruning module, where labels are ascertained by a threshold of 10^-4. ⊙ is the Hadamard product. The parameter η_i is a dynamic temperature vector, strategically leveraged to mitigate the negative effects of label ambiguity <cit.>. K indicates the number of correspondence pruning modules. L_reg can be described as follows <cit.>:L_e(Ê,E) =(p^'𝖳Êp)^2/(Êp_[1]^2+Êp_[2]^2+Ê^Tp^'_[1]^2+Ê^Tp^'_[2]^2),where p and p^' denote the coordinate sets in the matched image pair, and q_[i] stands for the i-th element of vector q.
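A hedged PyTorch sketch of this hybrid objective, simplified to a single pruning module and omitting the dynamic temperature η_i (helper names are ours; `p`, `p2` are homogeneous normalized coordinates):

```python
import torch
import torch.nn.functional as F

def epipolar_residual(E_hat, p, p2):
    """Squared epipolar residual (p'^T E p)^2 normalized by the first two
    components of both epipolar lines; p, p2: (N, 3), E_hat: (3, 3)."""
    Ep = p @ E_hat.T                 # epipolar lines in the second image
    Etp2 = p2 @ E_hat                # epipolar lines in the first image
    num = (p2 * Ep).sum(dim=1) ** 2
    den = Ep[:, 0]**2 + Ep[:, 1]**2 + Etp2[:, 0]**2 + Etp2[:, 1]**2
    return num / den.clamp(min=1e-8)

def hybrid_loss(logits, labels, E_hat, p, p2, delta=0.5):
    cls = F.binary_cross_entropy_with_logits(logits, labels)  # classification
    reg = epipolar_residual(E_hat, p, p2).mean()              # geometric term
    return cls + delta * reg
```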
§ EXPERIMENTS §.§ Evaluation Protocols§.§.§ Datasets We conduct experiments on outdoor and indoor datasets (i.e., YFCC100M and SUN3D) to demonstrate the outlier removal capability of GCT-Net. The YFCC100M dataset contains 100 million publicly accessible travel images divided into 71 sequences. The SUN3D dataset, comprising a substantial collection of RGBD images, is categorized into 254 sequences. As in <cit.>, these sequences are further divided to generate a training set, a validation set, and a test set. Some images from the sequences used for training are retained to act as known-scene tests. §.§.§ Evaluation metrics We evaluate our proposed GCT-Net on both the inlier/outlier classification and relative pose estimation tasks. In the inlier/outlier classification task, the network is expected to remove outliers while preserving as many inliers as possible. Therefore, Precision (P), Recall (R) and F-score (F) are selected as our evaluation metrics. In the relative pose estimation task, the mean average precision (mAP) is adopted as our criterion, which measures the angular differences between the estimated rotation and translation vectors and the ground-truth ones.§.§ Implementation Details In the overall implementation of our network, following <cit.>, we utilize two consecutive pruning modules with a pruning rate of 0.5 each to achieve progressive selection. SIFT is employed to generate an initial set of N = 2000 correspondences, whose channel dimension d is extended to 128. In GCET, the neighbor number k in the KNN algorithm is set to 9 for constructing the graph network. In GCGT, we configure the sampling rate sr to be 0.2. As for the components common to GCET and GCGT, the channel reduction ratio r in the attentional fusion <cit.> and the head number h in the Transformer <cit.> are both set to 4. In alignment with the configuration of <cit.>, we utilize the Adam optimizer <cit.> with a batch size of 32 and a learning rate of 10^-3 to train our network. It is noteworthy that the training process spans a total of 500k iterations: for the initial 20k iterations, δ in Eq. <ref> is set to 0, and for the remaining 480k iterations, δ is fixed to 0.5. §.§ Correspondence Classification We perform a comprehensive comparison between GCT-Net and a selection of classic and cutting-edge works, spanning the traditional method <cit.> as well as learning-based methods <cit.>. Here, we utilize a ratio test with a threshold of 0.8 in RANSAC to proactively eliminate certain erroneous matches, preventing a sharp performance decline. Table <ref> showcases the comparative results on the correspondence classification task on YFCC100M and SUN3D. We can observe that our network achieves the best performance, except in terms of the Recall metric. The reason lies in our adoption of the progressive correspondence pruning strategy, which, while removing a mass of outliers, inevitably eliminates some hidden inliers as well. Consequently, our method and CLNet exhibit significant improvement in the Precision metric, while their Recall values are relatively lower compared to other methods. However, considering the overall metric (i.e., F-score), we still obtain the optimal results, surpassing the second-best method by 2.27% and 0.51% on the YFCC100M and SUN3D datasets, respectively. Fig. <ref> displays the visualized classification results, which further demonstrate the remarkable ability of our network to remove outliers. After correspondence classification, inliers are assigned weights to execute the relative pose estimation task. The corresponding experimental results are shown in Table <ref>. Here, we also evaluate the compatibility of the various feature matching methods with different feature extraction approaches. In contrast to the hand-crafted method SIFT <cit.>, we employ a learning-based feature extraction approach, SuperPoint <cit.>, for testing. In the experiments, we select mAP5^∘ and mAP20^∘ to comprehensively evaluate the performance of these methods under low-tolerance and high-tolerance scenarios. Besides, to assess the generalization capability of the models, we conduct experiments in both known and unknown scenes. From Table <ref>, it is apparent that GCT-Net outperforms all configurations under the SIFT-based condition. When compared to CLNet, which also employs the progressive pruning framework, our network demonstrates a significant lead in unknown scenes, surpassing CLNet by 13% in mAP5^∘ and 7.1% in mAP20^∘. We also achieve 9.18% and 6.98% improvements over ConvMatch in unknown and known scenes, respectively, under mAP5^∘. However, when adopting SuperPoint as the feature extraction method, our network only slightly surpasses ConvMatch in mAP20^∘, while trailing behind it in mAP5^∘. This discrepancy might be attributed to SuperPoint generating many high-quality correspondences from the outset. For our network, pruning such high-quality correspondences could result in the loss of crucial information and thus cause a decrease in estimation accuracy. In contrast, ConvMatch can leverage convolutions to capture additional information effectively.§.§ Ablation Studies We perform ablation experiments on GCT-Net to demonstrate the effectiveness of its individual components. Table <ref> displays the experimental results of integrating the network with various modules. IPS indicates a network composed of only ResNet blocks that adopts the iterative pruning strategy. GCET represents the application of the Graph Context Enhance Transformer. GCGT-P refers to the partial Graph Context Guidance Transformer, where we remove the injection of OAFilter to assess the effectiveness of the sampling-to-consensus-guidance process. GCGT-W signifies the whole Graph Context Guidance Transformer. From Table <ref>, it is evident that the integration of each component has a favorable impact on the network performance compared to the sole use of IPS. Specifically, the second row of the table, which incorporates GCET into IPS, obtains 13.97% and 10.27% improvements under mAP5^∘ and mAP20^∘. This demonstrates the significance of generating graph contexts and effectively leveraging them.
The third row (i.e., IPS + GCGT-P) validates the effectiveness of the sampling strategy and consensus guidance, which gains an 11.67% improvement under mAP5^∘. Compared to the partial GCGT, utilizing the complete GCGT (the fourth row) results in improvements of 1.3% and 0.84% under mAP5^∘ and mAP20^∘. This highlights the injection of OAFilter, which enhances the output of the Transformer in a complementary manner. By combining GCET and GCGT, the network achieves its optimal performance. We also perform ablation studies on different sampling rates. An excessively large sampling rate can lead to a substantial computational load; therefore, in our experiments we keep the sampling rate below 0.5. As shown in Fig. <ref>, opting for a low sampling rate (i.e., 0.05) can limit the expressive capacity of the network, whereas selecting an excessively high sampling rate (i.e., 0.5) makes it susceptible to disruption by outlier information. Therefore, it is necessary to choose an appropriate sampling rate (i.e., 0.2) to strike a balance between the two aspects. § CONCLUSION In this paper, we propose the effective Graph Context Transformation Network (GCT-Net) for progressive correspondence pruning. The graph network serves as an effective carrier of local context information. Therefore, we propose the Graph Context Enhance Transformer to convert the graph network into multi-branch graph contexts and to enhance both the individual characteristics and the shared significant information of these graph contexts. This allows the advantages of the different graph contexts to be effectively combined and fully utilized. To extend the enhanced graph context to the global domain, we further design the Graph Context Guidance Transformer. This module adopts a score-based sampling strategy to select candidates as the guiding source and regards the unsampled vertices as the guiding target for the execution of consensus guidance, which seeks hidden inliers through consensus similarities. Numerous experiments on correspondence classification and relative pose estimation demonstrate the superior ability of GCT-Net, surpassing the performance of state-of-the-art methods.§ ACKNOWLEDGMENT This work was supported by the National Natural Science Foundation of China under Grants 62072223, 62125201 and 62020106007.
http://arxiv.org/abs/2312.15971v1
{ "authors": [ "Junwen Guo", "Guobao Xiao", "Shiping Wang", "Jun Yu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226094330", "title": "Graph Context Transformation Learning for Progressive Correspondence Pruning" }
Petyt-Spriano-Zalloum recently developed the notion of a curtain model, which is a hyperbolic space associated to any CAT(0) space. It plays a role for CAT(0) spaces similar to the one curve graphs play for mapping class groups of finite-type surfaces. Those authors asked whether the curtain model is a quasi-isometry invariant, namely whether quasi-isometric CAT(0) spaces have quasi-isometric curtain models. In this short note, we provide an explicit example answering this question in the negative. § INTRODUCTION In <cit.>, Petyt-Spriano-Zalloum introduced a combinatorial tool called a curtain that serves as an analogue, in the CAT(0) setting, of a hyperplane from the theory of CAT(0) cube complexes. Building off of "hyperplane-separation" metrics introduced by Genevois <cit.>, the authors utilize curtains in a CAT(0) space X to build the curtain model — a hyperbolic space which effectively collapses the "flat" parts of the CAT(0) space. This "coning off" is by design, and gives rise to many similarities between a CAT(0) space and its curtain model that parallel the relationship between mapping class groups and their curve graphs (see <cit.>). Petyt-Spriano-Zalloum asked in <cit.> whether a quasi-isometry between CAT(0) spaces always induces a quasi-isometry between their corresponding curtain models. We answer this question in the negative. For a CAT(0) space X, we denote by X̂ its curtain model. There exists a CAT(0) space X and a self quasi-isometry ϕ: X⟶ X such that ϕ does not descend to a quasi-isometry of X̂. Further, there exist two quasi-isometric CAT(0) spaces W, Z whose curtain models Ŵ, Ẑ are not quasi-isometric. Our example is based on an example due to Cashen <cit.>, which he used to show that quasi-isometries of CAT(0) spaces need not induce homeomorphisms of their contracting boundaries when equipped with the Gromov product topology. Thus, it also follows that we get an analogous result for the curtain models of CAT(0) spaces. There exist quasi-isometric CAT(0) spaces W,Z whose curtain models have non-homeomorphic Gromov boundaries. Acknowledgments: Harry Petyt independently discovered this example. I would like to thank him for his useful comments on an earlier draft of this article and for encouraging me to write it up. Also, the warmest of thanks goes to Matthew Gentry Durham for his constructive feedback on an earlier draft of this paper. § BACKGROUND We now give a small summary of definitions imported from <cit.>.
For background on CAT(0) spaces, we refer the reader to <cit.>. The following is the background required to define the curtain model (Definition <ref>). We always assume X is a CAT(0) space. Let X be a CAT(0) space and let α:I→ X be a geodesic. For any number r such that [r-1/2, r+1/2] is in the interior of I, the curtain dual to α at r ish=h_α=h_α,r=π^-1_α(α[r-1/2, r+1/2]),where π_α is the closest point projection to α. We call the segment α[r-1/2, r+1/2] the pole of the curtain, which we denote by P when needed. A curtain h separates sets A,B ⊂ X if A ⊂ h^- and B ⊂ h^+. A set {h_i} is a chain if the h_i are pairwise disjoint and h_i separates h_i-1 and h_i+1 for all i. We say a chain {h_i} separates sets A,B ⊂ X if each h_i separates A and B. Let L ∈ℕ. Disjoint curtains h and h^' are said to be L-separated if every chain meeting both h and h^' has cardinality at most L. Two disjoint curtains are said to be separated if they are L-separated for some L. If c is a chain of curtains such that each pair is L-separated, then we refer to c as an L-chain. Denote by X_L the metric space (X, d_L), where d_L is the metric defined asd_L(x, y)=1+max{|c|: c is an L-chain separating x from y} with d_L(x,x) = 0. Note that, by Remark 2.16 in <cit.>, for any x,y ∈ X we have d_L(x,y) < 1 + d(x,y). Fix a sequence of numbers λ_L ∈ (0,1) such that∑_L=1^∞λ_L < ∑_L=1^∞ Lλ_L < ∑_L=1^∞ L^2λ_L < ∞.We consider the space (X, d̂), where the distance between two points x,y∈ X is defined by d̂(x,y) = ∑_L=1^∞λ_L d_L(x,y) and d_L is the L-metric defined in Definition <ref>. We call (X, d̂) the curtain model of X and denote it by X̂. Both of the following definitions will also help in the construction of the counterexample. Let X be a CAT(0) space and let α:[0, a] → X and α^':[0, a^'] → X be two geodesic paths issuing from the same point α(0)=α^'(0). Then the comparison angle ∠_𝔼(α(t), α^'(t^')) is a non-decreasing function of both t, t^'≥ 0, and the Alexandrov angle ∠(α, α^') is equal tolim _t, t^'→ 0∠_𝔼(α(t), α^'(t^'))=lim _t → 0∠_𝔼(α(t), α^'(t)).Hence, we define:∠(α, α^')=lim _t → 0 2 arcsin1/2 t d(α(t), α^'(t)). A geodesic α is D-strongly contracting if for any ball B disjoint from α we have diam(π_α(B)) ≤ D, where π_α is the closest point projection to α.
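As a concrete illustration (our choice of weights, not one made in the paper), λ_L = 2^{-L} satisfies the summability conditions above, and together with the bound d_L(x,y) < 1 + d(x,y) quoted from Remark 2.16 it gives a uniform comparison between d̂ and d:

```latex
% One admissible weight sequence for the curtain model (an illustration):
\[
  \lambda_L = 2^{-L}, \qquad
  \sum_{L=1}^{\infty} L^{2}\,2^{-L} = 6 < \infty .
\]
% Combined with $d_L(x,y) < 1 + d(x,y)$, this yields
\[
  \hat{d}(x,y) \;=\; \sum_{L=1}^{\infty} 2^{-L}\, d_L(x,y)
  \;<\; \sum_{L=1}^{\infty} 2^{-L}\bigl(1 + d(x,y)\bigr)
  \;=\; 1 + d(x,y),
\]
% so the curtain-model distance is always finite.
```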
§ THE COUNTEREXAMPLE The following counterexample was used in <cit.> to show that two quasi-isometric CAT(0) spaces can have contracting boundaries of different homeomorphism type when equipped with the Gromov product topology. We first introduce this space and its curtain model. 3.1 The Infinite Parking Lot and its Curtain Model. Let Y be ℝ^2 with a disc of radius one centered at the origin removed. Denote by X the universal cover of Y. We can view X in the following way: take X_i to be a quarter flat with the quarter disc centered at the origin removed. Then X = ⋃_i X_i/∼, where ∼ denotes gluing the y-axis of X_i to the x-axis of X_i+1 for all i∈ℤ. One informally calls X the "infinite parking lot", as it can be viewed as a collection of quarter flats glued together that spiral up and down, giving the "infinite levels" of a parking lot. See Figure <ref>. X is indeed a CAT(0) space, since it is a gluing of CAT(0) spaces along single geodesic lines. A consequence of this construction is that a half flat with a half disc of radius one removed at the origin can be isometrically embedded into each X_i∪ X_i+1/∼. In fact, we can spiral up by any amount θ and get the same isometric embedding of the half flat with a half disc removed at the origin. Parameterize X via its natural polar coordinates ℝ× [1,∞), and define the spiral to be the line ℝ×{1}. We now explain why X's curtain model X̂ is a quasi-line. Take any geodesic ray γ such that γ(0) is on the spiral and the Alexandrov angle between γ and the spiral is π/2. Up to an isometric rotation of X by some θ along the spiral, γ is the y-axis of some X_i. Since γ is the y-axis of some isometrically embedded half flat (with a half disc removed), all curtains dual to γ stay in its half flat, X_i∪ X_i+1/∼. As seen in Figure <ref>, if h_1, h_2 are two disjoint curtains dual to γ, then h_1, h_2 are two parallel, infinitely long strips of width one in X_i∪ X_i+1/∼. All curtains dual to the x-axis of X_i meet both h_1 and h_2, which means h_1 and h_2 are not L-separated for any L. The same is true for any two disjoint curtains dual to γ. Also, by Lemma 2.21 in <cit.>, the maximal L-chain that can cross γ is bounded above by 4L+10. Thus, the diameter of γ isdiam(γ) = ∑_L=1^∞λ_Ldiam_L(γ) ≤∑_L=1^∞λ_L(4L+10) < ∞.This is true for any geodesic ray that starts at the spiral and whose Alexandrov angle with the spiral is π/2. In particular, if we denote the spiral by α, then for any x ∈ X, d̂(x, π_α(x)) ≤∑_L=1^∞λ_L(4L+10). Now, fix a basepoint o ∈ X on α, and let α^+ denote the positive spiral direction and α^- the negative spiral direction emanating from o. Both directions are π-strongly contracting, as balls disjoint from the axis can only project to half of the circumference of one of the circles in the spiral. By <cit.>, there exists an infinite L-chain dual to α^+ for some L (similarly for α^-). Thus, in the curtain model X̂, the diameters of α^+ and α^- are both unbounded. By <cit.>, both α^+ and α^- are unparameterized quasi-geodesics in X̂. This shows that α is a quasi-line in X̂. Since d̂(x, π_α(x)) is uniformly bounded for all x ∈ X, it follows that X̂ is a quasi-line. 3.2 A Self Quasi-Isometry Does Not Induce a Quasi-Isometry of Curtain Models. With the basepoint o ∈ X on the spiral, denote the points of X by (θ, r), where θ is the angle traveled around the spiral starting at o, and r is the "radius" distance away from the spiral. Consider the points (i, 2^i) and (0, 2^i) for all i ∈ℕ. Through a variation of the logarithmic spiral quasi-isometry of the Euclidean planeϕ:X⟶ X,(t,r) ⟼ (t-log_2(r),r),we see that ϕ((i, 2^i)) = (0, 2^i). However, in the curtain model X̂, {(0,2^i)}_i represents a quasi-point, while {(i,2^i)}_i represents a quasi-line. This means that the self quasi-isometry ϕ does not descend to a quasi-isometry of X̂. 3.3 Upgrading to a Counterexample for Quasi-Isometric Invariance. Now, following the same vein as <cit.>, we construct two quasi-isometric CAT(0) spaces whose curtain models are not quasi-isometric. Construct the space W by gluing a geodesic ray γ_i to X at each point (i, 2^i). Similarly, construct the space Z by gluing a geodesic ray γ_i' to X at each point (0, 2^i). These spaces are quasi-isometric via the quasi-isometryϕ:W⟶ Z,(t,r) ⟼ (t-log_2(r),r),γ_i ⟼γ_i'.However, the curtain models are not quasi-isometric. See Figure <ref>. Indeed, as {(0,2^i)}_i is a quasi-point in Ẑ, each of the geodesic rays in {γ_i'}_i emanates from a point which is within bounded distance of o on the quasi-line X̂. Thus, Ẑ is quasi-isometric to an infinite wedge of rays. On the other hand, {(i,2^i)}_i represents a sub-quasi-line in X̂, so the geodesic rays {γ_i}_i have starting points at increasing distance from o in X̂ as i increases. So, Ŵ is quasi-isometric to ℝ with a ray attached to each positive integer.
These two spaces are not quasi-isometric. The same logic also applies to show that Ŵ and Ẑ have Gromov boundaries of different homeomorphism type. The sequence {γ_i}_i in the Gromov boundary of Ŵ converges to α^+, while no such converging sequence exists in Ẑ. This proves that the Gromov boundaries of Ŵ and Ẑ are not homeomorphic.
http://arxiv.org/abs/2312.16325v1
{ "authors": [ "Elliott Vest" ], "categories": [ "math.MG" ], "primary_category": "math.MG", "published": "20231226201722", "title": "The Curtain Model is Not a Quasi-Isometry Invariant of CAT(0) Spaces" }
Single-cell RNA sequencing (scRNA-seq) enables researchers to analyze gene expression at the single-cell level. One important task in scRNA-seq data analysis is unsupervised clustering, which helps identify distinct cell types, laying down the foundation for other downstream analysis tasks. In this paper, we propose a novel method called Cluster-aware Iterative Contrastive Learning (CICL in short) for scRNA-seq data clustering, which utilizes an iterative representation learning and clustering framework to progressively learn the clustering structure of scRNA-seq data with a cluster-aware contrastive loss. CICL consists of a Transformer encoder, a clustering head, a projection head and a contrastive loss module. First, CICL extracts the feature vectors of the original and augmented data with the Transformer encoder. Then, it computes the clustering centroids by K-means and employs the Student's t-distribution to assign pseudo-labels to all cells in the clustering head. The projection head uses a Multi-Layer Perceptron (MLP) to obtain projections of the augmented data. At last, both pseudo-labels and projections are used in the contrastive loss to guide the model training. This process iterates, so that the clustering result improves progressively. Extensive experiments on 25 real-world scRNA-seq datasets show that CICL outperforms the state-of-the-art (SOTA) methods. Concretely, CICL surpasses the existing methods by 14% to 280% and by 5% to 133% on average in terms of the performance metrics ARI and NMI, respectively. For the reference of astronomical science researchers and algorithm researchers, we have released a catalog, experimental code, the trained model, training data, and test data. Source code is available at https://github.com/Alunethy/CICL. § INTRODUCTION Each cell possesses unique characteristics and biological functions defined by its gene transcription activities. Conventional bulk RNA sequencing measures the average transcription levels of a multitude of cells, thereby obscuring the heterogeneity among individual cells. In the past decade, the rapid progress of single-cell RNA sequencing (scRNA-seq) technologies <cit.> has enabled transcriptome-wide gene expression measurement in individual cells, which greatly helps deepen our understanding of cellular heterogeneity and propels research on cell biology, immunology, and complex diseases <cit.>. Identifying cell types is a fundamental step in unraveling complex biological processes such as cellular differentiation, lineage commitment, and gene regulation <cit.>. As such, cell clustering becomes an important task in scRNA-seq analysis. However, the inherent high dimensionality, noise, and sparsity of scRNA-seq data present severe challenges for scRNA-seq clustering analysis <cit.>. Up to now, many models and algorithms have been developed for scRNA-seq data clustering. Early scRNA-seq clustering methods mainly rely on traditional dimensionality reduction and clustering techniques. For example, pcaReduce <cit.> combines PCA and K-means, iteratively merging cluster pairs based on a related probability density function. Recognizing the importance of similarity metrics in the clustering task, SIMLR <cit.> amalgamates multiple kernels to learn sample similarity and performs spectral clustering.
Seurat <cit.> employs a graph-based community detection algorithm, while Louvain <cit.> identifies cell types based on the shared nearest neighbor graph. In the past decade, with the rapid development of deep learning, deep neural networks (DNNs) have been extensively applied to scRNA-seq data clustering to address the limitations of conventional methods <cit.>. DEC <cit.> and IDEC <cit.>, based on autoencoders (AEs), use the KL divergence as the clustering loss, achieving simultaneous learning of feature representations and cluster assignments. To address the pervasive dropout events in scRNA-seq data, DCA <cit.> proposes a zero-inflated negative binomial (ZINB) model to better characterize the distribution of scRNA-seq data, and uses the negative likelihood as the reconstruction loss instead of the mean-square error (MSE) loss frequently used in autoencoders. scVI <cit.> is a deep generative model based on variational autoencoders, which can perform various scRNA-seq data analyses such as data imputation, clustering, and visualization. scDeepCluster <cit.> introduces a novel model-based deep learning clustering approach: by combining the ZINB model with the DEC algorithm, it is designed to capture the underlying cluster structure of scRNA-seq data. scDHA <cit.> exploits a stacked Bayesian self-learning network to learn compact and generalized representations of scRNA-seq data. To leverage the relationships between cells, some studies construct a cell-cell graph and apply Graph Neural Networks (GNNs) to learn the representations of cells. scDSC <cit.> formulates and aggregates cell-cell relationships with graph neural networks and learns latent gene expression patterns using a ZINB-model-based autoencoder. GraphSCC <cit.> integrates the structural relationships between cells into scRNA-seq clustering by employing a graph convolutional network; it also utilizes a dual self-supervised module to cluster cells and guide the training process. Furthermore, some other works have tried to train models using manual annotations as supervisory information or prior knowledge, as demonstrated in transfer learning and meta-learning methods <cit.>. While these methods can deliver excellent results on specific datasets, they also face the serious challenge of scalability. Contrastive learning (CL) has been widely used in computer vision and natural language processing <cit.>, and there have also been endeavors to incorporate contrastive learning into scRNA-seq data clustering. For instance, contrastive-sc <cit.> proposes a contrastive learning based method for scRNA-seq data that obtains augmented data by masking a certain proportion of data features. Similar to most practices in contrastive learning, this method designates augmented pairs as positive samples, while considering all other pairs as negatives. scNAME <cit.> improves the conventional contrastive loss by proposing a new neighborhood contrastive loss combined with an auxiliary mask estimation task, better characterizing feature correlation and pairwise cell similarity. CLEAR <cit.> employs multiple data augmentation methods to simulate different noise types, uses the infoNCE <cit.> loss as the contrastive loss, and generates feature representations for scRNA-seq data with a momentum-update strategy for the encoder. However, these methods mainly apply standard contrastive learning directly, failing to adapt the selection of positive and negative samples to the clustering task.
This paper aims to boost the performance of scRNA-seq data clustering by exploring new methods. Our contributions are two-fold. On the one hand, we propose a Cluster-aware Iterative Contrastive Learning (CICL) method for scRNA-seq data clustering. CICL employs an iterative representation learning and clustering framework with a cluster-aware contrastive loss; it can progressively improve the clustering result by comprehensively exploiting the hidden cluster structure for scRNA-seq data representation. On the other hand, we conduct extensive experiments on 25 real-world datasets, which show that our method outperforms the SOTA methods in most cases. § MATERIALS AND METHODS§.§ Datasets and Performance Metrics The proposed CICL method is evaluated on 25 real scRNA-seq datasets; each dataset contains cells whose labels are known a priori or were validated in previous studies. The 25 datasets were derived from 7 different sequencing platforms. The smallest dataset contains only 90 cells, while the largest dataset has 48,266 cells. The number of cell subtypes in these datasets ranges from 2 to 15. Statistics of these datasets are presented in Table <ref>. We preprocess the scRNA-seq data with the Python package SCANPY <cit.>, following the strategy in <cit.>. Specifically, given the raw read counts (i.e., the gene expression matrix), we first filter out cells and genes without counts. Then, we calculate the library size of each cell as the total number of read counts per cell, and obtain the size factor of each cell by dividing its library size by the median of all library sizes. Thirdly, we obtain the normalized read counts by dividing the raw read counts by the size factor of each cell, followed by a natural log transformation. Furthermore, we consider only the top-t highly variable genes according to their normalized dispersion values, and set t to 500 by default in this paper. Finally, we transform the normalized read counts into z-score data. Two widely used metrics, normalized mutual information (NMI) and adjusted Rand index (ARI), are used to evaluate clustering performance. NMI measures the similarity between the predicted labels and the real labels. Specifically, given the predicted labels U=[u_1, u_2, ..., u_N] ∈ℝ^N and the real labels V=[v_1, v_2, ..., v_N] ∈ℝ^N, where N denotes the number of cells, NMI is evaluated as follows:NMI=I(U,V)/max(H(U),H(V)),where I(U,V)=∑_u∑_v p(u, v)log(p(u, v)/(p(u)p(v))) is the mutual information between U and V, p(u, v) is the joint distribution of U and V, and p(u) and p(v) are the corresponding marginal distributions. H(U) = -∑_u p(u)log(p(u)) is the entropy of clustering U; similarly, H(V) = -∑_v p(v)log(p(v)). ARI also measures the similarity between the clustering result and the true categories; it corrects the Rand index (RI) for chance by accounting for the effect of random assignments. The value of ARI ranges from -1 to 1; the larger the value, the more similar the clustering result is to the real categories. Denoting by n_ij the number of cells in both cluster i of U and cluster j of V, by a_i the number of cells assigned to cluster i of U, and by b_j the number of cells assigned to cluster j of V, ARI is defined as ARI = \frac{\sum_{ij}\binom{n_{ij}}{2} - \left[\sum_i \binom{a_i}{2}\sum_j \binom{b_j}{2}\right]/\binom{N}{2}}{\frac{1}{2}\left[\sum_i \binom{a_i}{2} + \sum_j \binom{b_j}{2}\right] - \left[\sum_i \binom{a_i}{2}\sum_j \binom{b_j}{2}\right]/\binom{N}{2}}.
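Reference implementations of both metrics are available in scikit-learn (our illustration; the paper does not state which implementation it uses). Passing `average_method='max'` matches the max(H(U), H(V)) normalization above:

```python
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

true_labels = [0, 0, 1, 1, 2, 2]
pred_labels = [0, 0, 1, 2, 2, 2]

ari = adjusted_rand_score(true_labels, pred_labels)
nmi = normalized_mutual_info_score(true_labels, pred_labels,
                                   average_method='max')
print(ari, nmi)
```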
§.§ The CICL Method§.§.§ Overview CICL is a cluster-aware iterative contrastive learning method designed for clustering scRNA-seq data; its framework is illustrated in Fig. <ref>. Specifically, in the model training phase, we first generate two augmented views X_aug1 and X_aug2 of the raw data X by adding noise that is randomly sampled from the Gaussian N(0, 1) and mapped to the range [0, 1] via a linear transformation. Then, X, X_aug1 and X_aug2 are input into a Transformer encoder <cit.> to obtain their representations H, H_aug1 and H_aug2, respectively. Next, we perform K-means on H to get the centroid matrix C = [c_1, c_2, ..., c_K], where c_i is the centroid vector of cluster i. The number of centroids is equal to the number of cell subtypes (or clusters) in the training dataset. After that, H and C are fed to the clustering head, which generates a pseudo-label for each cell. Meanwhile, the projection head encodes H_aug1 and H_aug2 to obtain their projections Z_aug1 and Z_aug2. Finally, in addition to the traditional instance-wise contrastive loss, we propose a novel cluster-aware contrastive loss to align the positive pairs and contrast the negative pairs simultaneously, which takes the projections Z_aug1, Z_aug2 and the pseudo-labels as input. We construct the positive pairs in an instance-wise way and in a pseudo-label based way. In particular, an instance-wise positive pair consists of the representations of the two augmented copies of each cell, and a pseudo-label positive pair is formed by the representations of augmented copies of two cells with the same pseudo-label (i.e., belonging to the same cluster). This training process goes on iteratively. In the clustering phase, the input data X are preprocessed and encoded by the trained Transformer encoder to obtain the representation H. Then, H is clustered by K-means to generate the final clustering result. In the following sections, we present the major components of our method in detail.§.§.§ Transformer Encoder The raw scRNA-seq data is modeled as a matrix X ∈ℝ^N × G, where N indicates the number of cells and G denotes the number of genes. To begin with, we construct augmented data by adding Gaussian noise, generating two augmented copies (or views) X_aug1 and X_aug2 for X. Then, we encode X_aug1, X_aug2 and X with a Transformer encoder, which has four layers, each consisting of two networks: a multi-head self-attention network and a position-wise fully connected feed-forward network, each followed by a residual connection and layer normalization. For example, given the input X of the self-attention layer, the output is as follows: H_MultiHead = Concat(Att_1(XW_1^v), ..., Att_h(XW_h^v))W^O, where W_i^v∈ℝ^G × d and W^O∈ℝ^hd × G are learnable parameter matrices and h is the number of heads. Att_i is evaluated by Att_i = softmax(XW_i^q× (XW_i^k)^𝖳/√(d)) , i = 1, 2, ..., h, where W_i^q∈ℝ^G × d and W_i^k∈ℝ^G × d are learnable parameter matrices. Then, after the residual connection and layer normalization, we haveH_res = LayerNorm(X + H_MultiHead). The fully connected feed-forward network consists of two linear layers with a rectified linear activation function (ReLU), so we haveH_fc = ReLU(H_resW_1)W_2,where W_1 and W_2 are learnable parameter matrices. Finally, the output of the i-th layer of the Transformer encoder isH_i = LayerNorm(H_fc + H_res).
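A minimal sketch of the augmentation described above (our reading of the text; the exact scaling in the released code may differ):

```python
import torch

def augment(X):
    """Add Gaussian noise rescaled linearly into [0, 1]."""
    noise = torch.randn_like(X)
    noise = (noise - noise.min()) / (noise.max() - noise.min())
    return X + noise

X = torch.rand(256, 500)                 # 256 cells x 500 variable genes
X_aug1, X_aug2 = augment(X), augment(X)  # two views of the same batch
```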
§.§.§ Clustering-head and Pseudo-label Generation Traditional contrastive losses suffer from sampling bias <cit.>. For example, given a cell i, all the other cells are considered as its negative samples. However, these negative samples contain some cells of the same type as cell i, which will be undesirably pushed away from cell i in the representation space by standard contrastive learning. To address this problem, CICL employs a cluster-aware contrastive learning strategy. To this end, we cluster the training data H by K-means, and each cluster is characterized by its centroid c_i in the representation space, which is updated iteratively. Then, in the clustering head, we use the Student's t-distribution to compute the probability q_ij that cell i belongs to the j-th cluster,q_ij = (1+‖h_i - c_j‖_2^2 / α)^-(α + 1)/2/∑_k = 1^K(1+‖h_i - c_k‖_2^2 / α)^-(α + 1)/2,where h_i is the representation of cell i in H and α is the degree of freedom of the Student's t-distribution; we set α = 1 in this paper. Finally, we obtain the pseudo-label l_i of cell i from the probability vector q_i=(q_i1,q_i2,...,q_iK) as follows: l_i = label_assign(q_i),where label_assign is a function that returns the cluster index corresponding to the maximum q_ij (j ∈ 1, 2, ..., K). Thus, we obtain the pseudo-labels L=(l_1, l_2, ..., l_N) of all cells, which are used in the contrastive loss computation. Note that each cell and its two augmented copies share the same pseudo-label. We use the term "pseudo-label" because these are just intermediate (not final) cluster labels. §.§.§ Projection-head and Contrastive Learning Losses We project H_aug1 and H_aug2 to obtain the projections Z_aug1 and Z_aug2 via the projection head, which is composed of a two-layer perceptron. Formally,Z_aug1 = W_3ReLU(W_4H_aug1), Z_aug2 = W_3ReLU(W_4H_aug2),where W_3 and W_4 are learnable parameters and ReLU is the activation function. Let z_i and z^'_i be the i-th rows of Z_aug1 and Z_aug2 respectively, which correspond to the representations of cell i in the two augmented views. For z_i, we treat not only z_i and z^'_i, but also z_i and any other sample of the same cluster in terms of pseudo-label, as a positive pair, while z_i and any sample of the other clusters form a negative pair. Given the batch size B, we consider two losses as follows. Instance-wise contrastive loss. CICL computes the infoNCE loss <cit.> for each cell. For cell i with two views z_i and z_i^', its contrastive loss in terms of z_i isl_ins(z_i) = -log exp(sim(z_i, z_i^')/T)/(∑_m=1^B𝕀_i ≠ m exp(sim(z_i, z_m)/T) + ∑_m=1^B exp(sim(z_i, z_m^')/T)),where 𝕀_i ≠ m is an indicator function whose value is 1 if i ≠ m and 0 otherwise, and T is the temperature parameter, set to 0.5 in this paper. The similarity function sim(·,·) adopts the dot product or cosine similarity, i.e.,sim(z_i, z_i^') = z_i^𝖳z_i^'/(‖z_i‖‖z_i^'‖). The overall instance-wise contrastive loss isℒ_ins = 1/2B∑^B_i=1[l_ins(z_i) + l_ins(z^'_i)],where l_ins(z^'_i) is cell i's contrastive loss in terms of z^'_i. With this instance-wise contrastive loss, CICL can learn the representations well by pulling positive pairs together and pushing negative pairs apart in the cell representation space.
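The instance-wise term is the standard InfoNCE objective; a self-contained PyTorch sketch over one batch (an illustration, not the paper's code) is:

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, T=0.5):
    """Instance-wise InfoNCE: the two views of a cell form the positive
    pair, and every other view in the batch is a negative."""
    B = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # cosine similarity
    sim = z @ z.T / T                                    # (2B, 2B)
    sim.fill_diagonal_(float('-inf'))                    # exclude self-pairs
    targets = torch.cat([torch.arange(B) + B, torch.arange(B)])
    return F.cross_entropy(sim, targets)
```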
Cluster-aware contrastive loss. To mitigate the sampling bias, we propose a novel cluster-aware contrastive loss, which is evaluated with the pseudo-labels L, Z_aug1 and Z_aug2. We treat pairs of representations with the same pseudo-label as positive pairs, and the remaining pairs as negative pairs. For cell i, its cluster-aware contrastive loss in terms of z_i is as follows:l_clu(z_i) = -log (∑_j^B E_z_i, z^'_j∈ l_i· exp(sim(z_i, z^'_j)/T) + ∑_j^B E_z_i, z_j∈ l_i·𝕀_i ≠ j· exp(sim(z_i, z_j)/T))/(∑_j^B E_z_i, z^'_j∉ l_i· exp(sim(z_i, z^'_j)/T) + ∑_j^B E_z_i, z_j∉ l_i· exp(sim(z_i, z_j)/T)),where l_i is the pseudo-label of z_i, E_z_i, z^'_j∈ l_i is an indicator function whose value is 1 if the label of z^'_j is l_i and 0 otherwise, and E_z_i, z^'_j∉ l_i is an indicator function whose value is 0 if the label of z^'_j is l_i and 1 otherwise. The overall cluster-aware loss is as follows:ℒ_clu = 1/2B∑_i=1^B[l_clu(z_i) + l_clu(z^'_i)],where l_clu(z^'_i) is cell i's cluster-aware contrastive loss in terms of z^'_i. This loss is particularly effective because it tries to minimize the distance between cells of the same cluster and maximize the distance between cells of different clusters. Finally, by combining ℒ_ins and ℒ_clu with a hyperparameter λ, we have the whole loss function:ℒ = ℒ_ins + λℒ_clu. We set λ = 0.1 in our experiments; a small λ can limit the negative effect of K-means clustering errors. With this loss, CICL exploits the cluster structure underlying the data to achieve simultaneous optimization of data representation and cluster label assignment. Compared with traditional contrastive learning, ours is iterative contrastive learning, which iteratively learns the representations of cells in a direction favorable for clustering.§.§ Algorithm Here, we present the algorithm of our method in Alg. <ref>, which consists of two phases: the training phase and the clustering phase. In the training phase, in each epoch we first randomly split the training data X^train into n_B=⌈|X^train|/S_B⌉ mini-batches, where S_B is the batch size. For each mini-batch X^j, we generate two augmented views X^j_aug1 and X^j_aug2. Next, we obtain the representations H^j_aug1, H^j_aug2 and H^j of X^j_aug1, X^j_aug2 and X^j with the Transformer encoder. We perform K-means on H^j to get the centroid matrix C^j, and then the pseudo-labels L^j are obtained from H^j and C^j. Meanwhile, H^j_aug1 and H^j_aug2 are input into the projection head to obtain the projections Z^j_aug1 and Z^j_aug2. The whole loss ℒ consists of ℒ_ins and ℒ_clu, which are computed with Z^j_aug1, Z^j_aug2 and L^j by Equ. (<ref>) and Equ. (<ref>), respectively. In the clustering phase, we use K-means to cluster the representations H^test of the testing data X^test, encoded by the trained Transformer encoder, to generate the clustering result R^test.
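A condensed training loop mirroring this description; `augment`, `encoder`, `proj`, `kmeans_centroids`, `student_t_assign` and `cluster_aware_loss` are hypothetical helpers standing in for the components defined above, not the paper's released code:

```python
for epoch in range(epochs):
    for Xj in train_loader:                       # mini-batches of X_train
        X1, X2 = augment(Xj), augment(Xj)         # two augmented views
        H, H1, H2 = encoder(Xj), encoder(X1), encoder(X2)
        C = kmeans_centroids(H.detach(), K)       # centroids on raw-view codes
        labels = student_t_assign(H, C)           # pseudo-labels via q_ij
        Z1, Z2 = proj(H1), proj(H2)
        loss = info_nce(Z1, Z2) + lam * cluster_aware_loss(Z1, Z2, labels)
        optimizer.zero_grad(); loss.backward(); optimizer.step()
```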
§ EXPERIMENTS AND RESULTS §.§ Implementation Details and Experimental Setup Here, we present the implementation details of our method and the experimental setup in Table <ref>. CICL uses similar parameters on all of the datasets in Table <ref>, and all compared methods use the default parameters provided in their original papers. All the experiments in this paper are conducted on 4 NVIDIA RTX3090 GPUs.§.§ Compared Existing Methods We compare CICL with 8 existing scRNA-seq data clustering methods, including a graph-based method Seurat <cit.>, a multi-kernel learning method SIMLR <cit.>, a transfer learning method ItClust <cit.>, a contrastive learning method CLEAR <cit.>, a deep graph embedding based method GraphSCC <cit.>, and three deep embedding based methods scDeepCluster <cit.>, scDHA <cit.> and scVI <cit.>. More information on these methods is as follows: * Seurat <cit.> is a widely used pipeline for single-cell gene expression data analysis. It performs dimension reduction first, and then employs the Louvain method on the shared nearest neighbor graph.* SIMLR <cit.> combines multiple kernels to learn the similarity between samples and performs spectral clustering.* ItClust <cit.> trains a neural network to extract information from a well-labeled source dataset, then initializes the target network with parameters estimated from the training network. * CLEAR <cit.> is a self-supervised contrastive learning-based integrative scRNA-seq data analysis tool. It introduces a novel data augmentation method and performs contrastive learning with the InfoNCE loss.* GraphSCC <cit.> extracts the structural relationships between cells using a graph convolutional network, and optimizes the representations with a dual self-supervised module.* scDeepCluster <cit.> adds a ZINB distribution model simulating the distribution of scRNA-seq data to a denoising autoencoder, and learns feature representations and clusters by explicitly modeling the scRNA-seq data. * scDHA <cit.> first exploits a non-negative kernel autoencoder for dimension reduction and then projects the data onto a low-dimensional space with a self-learning network based on a variational autoencoder (VAE).* scVI <cit.> is a comprehensive tool for the analysis of scRNA-seq data. It models scRNA-seq data in a deep generative manner with the ZINB model and a variational autoencoder.§.§ Performance Comparison Table <ref> summarizes the clustering performance of CICL and the 8 existing methods on the 25 scRNA-seq datasets. CICL achieves the best ARI and NMI on 10 and 9 datasets, and the 2nd best ARI and NMI on 7 and 10 datasets, respectively. On average, our method obtains the best ARI (0.7757) and NMI (0.8057) over the 25 datasets. In particular, CICL surpasses scDHA by 13.87% and 4.96% on average in terms of ARI and NMI, which shows the outstanding clustering performance of our method. We can also see that CICL performs excellently on large datasets such as Bach (23,184 cells), hrvatin (48,266 cells), QX_Trachea (11,269 cells), QX_Spleen (9,552 cells) and Wang_Lung (9,519 cells). Furthermore, our method also achieves good clustering scores on datasets with around 10 or more subtypes of cells, such as muraro (10 subtypes), pollen (11 subtypes), QS_Lung (11 subtypes) and Young (11 subtypes). In summary, CICL surpasses the existing methods by 14% to 280% and by 5% to 133% on average in terms of the performance metrics ARI and NMI, respectively. Note that the latest contrastive learning-based method CLEAR does not show advantages over the other methods on the 25 datasets; however, our method achieves excellent results, thanks to the proposed cluster-aware iterative contrastive learning mechanism.§.§ Visualization with Low-dimensional Representations In cellular heterogeneity analysis, visualization is an intuitive and effective way to display different cell types. We use t-SNE <cit.> to project the representations of cells into a two-dimensional space and visualize them in Fig. <ref>. As we can see, CICL learns to embed cells of the same type within the same cluster while separating cells of different types into different clusters, producing clustering results similar to the ground-truth cell annotations. The clustering result of CICL is superior to that of the other methods on the hrvatin dataset.
Although the performance of scDHA and scVI is also good, they divide the oligodendrocyte cells into multiple clusters. Furthermore, CICL performs well on QS_Lung: the cells of different types are effectively separated in the embedding space, much better than with the other methods. As for the Wang_Lung dataset with two subtypes, CICL not only achieves the best ARI (see Table <ref>) but also exhibits the best clustering visualization: the data is grouped into two distinct clusters. Fig. <ref> illustrates the clustering process of our iterative contrastive learning on the muraro dataset. The upper and lower figures represent the clustering results of our method and the ground truth, respectively. We can see that the various types of cells are distributed chaotically in the early epochs (e.g., epoch = 0, 3, 6). However, as the iterative learning goes on, CICL splits the different types of cells with growing accuracy. At epoch 50, our method correctly clusters the data. In summary, CICL is able to gradually refine the clustering outcome and eventually makes the clustering result match the ground truth. This demonstrates that our model iteratively learns more and more accurate cell representations. Furthermore, we show how the clustering performance metrics ARI and NMI change along the iterative contrastive learning process on the muraro and pollen datasets in Fig. <ref>. We can see that in the early epochs (epoch < 60 on muraro and epoch < 50 on pollen), both metrics undergo a period of rapid increase and acute fluctuation. After that, the metrics enter a relatively stable period. Certainly, excessive training will also lead to overfitting and result in a slight degradation of model performance, as we can see on the pollen dataset. §.§ Ablation Study Here, we conduct ablation studies on the effect of cluster-aware contrastive learning. Cluster-aware contrastive loss. One of the major innovations of CICL is the cluster-aware contrastive learning mechanism, which incorporates cluster structure information into the contrastive loss, thereby enhancing the representations of cells. To validate the effectiveness of this mechanism, we conduct an ablation study. For comparison, we consider a variant without the cluster-aware loss (i.e., the 2nd term in Equ. (<ref>)). The results are presented in Fig. <ref>. Here, the vertical axis shows the results of our method, and the horizontal axis presents the results of the variant without the cluster-aware contrastive loss. Notably, in terms of both ARI and NMI, the majority of points lie above the line y = x, indicating that CICL outperforms the variant model, which affirms the efficacy of our new contrastive loss. Nevertheless, we also see that on some datasets (e.g., kolodziejczyk, Mammary_Gland and Tosches_turtle), CICL exhibits similar or even inferior performance. This is possibly caused by errors in the pseudo-labels. Effect of hyperparameter λ. Here, we investigate the effect of the hyperparameter λ on the performance of the model. We increase λ from 0 to 1.0, and report the performance in terms of ARI and NMI. The results are illustrated in Fig. <ref>. We can see that our method has the worst performance at λ = 0 (i.e., without the cluster-aware contrastive loss). As λ increases, the performance improves rapidly. When λ = 0.1, both ARI and NMI reach their highest points. After that, ARI and NMI decrease slightly and gradually stabilize as λ increases further.
These results indicate that the model benefits considerably from the cluster-aware contrastive loss. In our experiments, we set λ = 0.1 to avoid the potential negative impact of pseudo-label errors.

§ CONCLUSION
In this paper, to boost the performance of scRNA-seq data clustering analysis, we propose a novel approach called CICL. CICL adopts an iterative representation learning and clustering framework with an innovative cluster-aware contrastive loss. By comprehensively exploiting the underlying cluster structure of the training data, CICL learns better scRNA-seq data representations and thus progressively achieves better clustering performance. Extensive experiments on 25 real scRNA-seq datasets show that CICL outperforms the state-of-the-art methods in most cases and holds a clear advantage over the existing methods on average. Future work will focus on replacing K-means with more advanced clustering methods to generate more accurate pseudo-labels, and on extending our idea to other downstream scRNA-seq data analysis tasks.
http://arxiv.org/abs/2312.16600v1
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation
Bhushan Chaudhary, Anubha Pandey, Deepak Bhatt, Darshika Tiwari
====================================================================================================================
Addressing bias in trained machine learning systems often requires access to sensitive attributes. In practice, these attributes are not available, either due to legal and policy regulations or because they were not collected for a given demographic. Existing bias mitigation algorithms are limited in their applicability to real-world scenarios as they require access to sensitive attributes to achieve fairness. In this research work, we aim to address this bottleneck through our proposed unsupervised proxy-sensitive attribute label generation technique. Towards this end, we propose a two-stage approach of unsupervised embedding generation followed by clustering to obtain proxy-sensitive labels. The efficacy of our work relies on the assumption that bias propagates through non-sensitive attributes that are correlated with the sensitive attributes and that, when mapped to a high-dimensional latent space, these attributes produce clusters of the different demographic groups present in the data. Experimental results demonstrate that bias mitigation using existing algorithms such as Fair Mixup and Adversarial Debiasing yields comparable results on the derived proxy labels and on the true sensitive attributes.
§ INTRODUCTION
Machine learning has attained high success rates in practically every field, including healthcare, finance, and education, owing to the accuracy and efficiency of model outcomes <cit.>. However, these models can be biased and exhibit a propensity to favor one demographic group over another in various applications, including credit and loan approval, criminal justice, and resume-based candidate shortlisting <cit.>. The idea of fairness has therefore received much attention recently as a means to combat discrimination in the outcomes of ML models <cit.>.
The existing bias mitigation techniques <cit.> can be classified into three categories: pre-processing <cit.>, post-processing <cit.> and in-processing <cit.>. While pre-processing bias mitigation techniques attempt to transform the input before feeding it to the model for training, post-processing strategies filter the model output through certain transformations. In-processing strategies strive to learn bias-invariant models by imposing certain constraints during training. Nevertheless, most state-of-the-art algorithms require information about sensitive attributes to produce an unbiased model. In practice, however, these sensitive attributes are inaccessible due to difficulties in data collection, privacy concerns, and legal constraints imposed by governments, such as the General Data Protection Regulation (GDPR) introduced by the European Union in May 2018 and the Equal Credit Opportunity Act <cit.>.
Fairness is challenging to achieve in the absence of sensitive attributes due to the lack of supervision. While sensitive attributes are inaccessible in real-world settings, it has been found that some non-sensitive attributes have strong correlations with the sensitive features, which leads to bias propagating through AI models <cit.>. For instance, Hispanic and black populations have a higher proportion of younger people, resulting in a correlation between age and race <cit.>. Similarly, zip codes can be correlated with race (the short sketch below illustrates one way to quantify such an association).
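For illustration only (this check is not part of the proposed pipeline): the association between a candidate non-sensitive column and a sensitive one can be quantified with, for example, Cramér's V. The column names in the snippet are hypothetical.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def cramers_v(df, col_a, col_b):
    # Association strength between two categorical columns: 0 = none, 1 = perfect.
    table = pd.crosstab(df[col_a], df[col_b])
    chi2 = chi2_contingency(table)[0]
    n = table.to_numpy().sum()
    r, c = table.shape
    return np.sqrt(chi2 / (n * (min(r, c) - 1)))

# e.g., cramers_v(df, "zip_code", "race")  # hypothetical column names
```

A value well above zero for such a pair is precisely the leakage channel that the argument below builds on.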
Hence, the bias gets embedded in the non-sensitive attributes that are used in model training. Based on this hypothesis, a few initial efforts have been made to mitigate bias in the absence of protected attributes <cit.>. The most recent approach <cit.> identifies related features that are correlated with the sensitive attributes and then minimizes the correlation between these related features and the model's prediction to learn a classifier that is fair with respect to the sensitive attribute. However, identifying the related features requires domain knowledge and access to sensitive attributes to determine the correlations.
This research aims to provide proxy labels for sensitive attributes so as to make present bias mitigation approaches suitable for real-world applications where access to protected attributes during model training is constrained. Ideally, the likelihood of a positive outcome should be the same regardless of a person's protected group; in real life, this does not hold. In this paper, the group that is more likely to receive a positive outcome solely because of its protected attribute is referred to as the favorable group, and the group that is more likely to receive a negative outcome solely because of its protected attribute is referred to as the unfavorable group. We derive proxies for the favorable and unfavorable groups by leveraging the bias information embedded in the non-sensitive features available in the given dataset. These proxy-sensitive labels can then be passed as input to existing bias mitigation techniques. We thus address the bottleneck in the applicability of existing bias mitigation techniques to real-world applications. We propose a novel pipeline with two stages: (1) Stage-1: learn embeddings using self-supervised learning that capture inter-feature relationships and, consequently, latent bias information; (2) Stage-2: generate proxies for the demographic groups by clustering the samples based on the embeddings obtained from Stage-1. Experimental analysis further reveals that using the proxy labels in current bias mitigation techniques yields results comparable to using the genuine labels of the sensitive attributes.
§ RELATED WORK
A substantial amount of work has been done to address and mitigate bias in datasets and models <cit.>. Based on the point of intervention in the modeling stage, bias mitigation techniques broadly fall into three categories: pre-processing, in-processing, and post-processing. Pre-processing techniques intervene at the first stage of modeling and transform the training data so that the underlying discrimination is removed <cit.>. These techniques reduce or eliminate the correlation between sensitive attributes and other features, including the target labels. Unfortunately, because these techniques are blind to how the model makes inferences from the data, some level of bias can still creep into the model predictions. In-processing techniques modify learning algorithms to remove bias during the model training process. Most algorithms in this category solve a constrained optimization problem for different fairness objectives. To ensure independence between predictions and sensitive attributes, <cit.> regularizes the covariance between them. <cit.> minimizes the disparity between the sensitive groups by regularizing the decision boundary of the classifier. <cit.> proposed a data augmentation strategy for optimizing group fairness constraints such as equalized odds and demographic parity.
Another efficient algorithm <cit.> tries to maximize the predictor's ability to predict the ground truth while minimizing the adversary's ability to predict the sensitive attribute. Post-processing techniques treat the learned model as a black box and try to mitigate bias in its predictions <cit.>. Typically, post-processing algorithms select a subset of samples and adjust the predicted labels accordingly. An intriguing observation is that, because the fairness metrics are expectations, any sample can be altered to meet the requirements of group fairness. The papers <cit.> choose samples at random, whereas <cit.> choose the samples with the greatest degree of uncertainty, reflecting the human tendency to give unprivileged groups the benefit of the doubt.
Most current algorithms are restricted in their use in real-world scenarios since they need access to protected attributes for bias mitigation. Very recently, efforts have been made towards bias mitigation in the absence of sensitive attributes <cit.>. <cit.> introduced a framework based on Bayesian variational autoencoders that relies on knowledge of the causal graph to derive proxies. The algorithm estimates proxies in a multi-dimensional space and then uses the generated proxies to remove bias from the model. However, since the proxies are generated in a multi-dimensional space, they cannot be generalized to other bias mitigation algorithms. The paper <cit.> introduced a framework that performs debiasing only on the classification head. The algorithm neutralizes training samples that have the same ground-truth label but different sensitive attribute annotations. Proxies for the sensitive attributes are generated by training a bias-intensified model and then annotating samples based on its confidence level. However, the algorithm makes the strong assumption that, based on the obtained prediction scores, the bias-amplified model tends to assign the privileged group the more desired outcome while assigning the under-privileged group the less desired outcome. The most recent approach <cit.> identifies related features that are correlated with the sensitive attributes and minimizes the correlation between the related features and the model's prediction to learn a fair classifier with respect to the sensitive attribute. To identify the related features, however, this method needs access to sensitive attributes to determine the correlations.
§ METHODOLOGY
It is widely established that bias propagates to models even when protected attributes are not used during training <cit.>. This is attributed to the frequent incorporation of protected attribute information into other, correlated non-protected attributes. Zip codes, for instance, can be associated with the race attribute. Based on this hypothesis, we utilize the non-protected attributes to obtain proxy-sensitive labels. Assuming the availability of all variables except the protected attribute, our goal is to recover the latent information associated with the protected attribute that is embedded in the available non-protected features.
This section outlines our suggested method for generating a proxy for a sensitive protected attribute. We break the objective down into two stages. In the first stage, we utilize self-supervised learning to produce contextual embeddings of the input samples. Our goal is to learn an embedding with maximum information about the protected attribute.
In the second stage, we obtain proxy labels for the favorable and unfavorable groups using an unsupervised clustering approach on the embeddings obtained from the first stage. Finally, we pass the generated proxy through existing state-of-the-art bias mitigation algorithms to mitigate bias from any model. Figure <ref> outlines the proxy-generation pipeline.
§.§ Proxy Generation for Sensitive Attribute
Stage-1: In the first stage, as shown in Figure <ref>, we obtain contextual embeddings of the input samples. Towards this goal, we train neural network architectures in a self-supervised fashion to efficiently encode inter-feature relationships. In this paper, we experiment with two neural network architectures: (1) auto-encoders and (2) Transformers.
We train an auto-encoder on a reconstruction task to obtain embeddings containing the crucial details of the input data. An auto-encoder consists of encoder and decoder modules. In the encoding operation, the input feature vector is mapped to a lower-dimensional latent representation; in the decoding operation, the original input data is reconstructed from the latent representation. Input data X is passed through the encoder to obtain the latent representation h, which the decoder then reconstructs as X̂, as shown in Equations <ref> and <ref>. We train the network on the reconstruction loss Loss_AE shown in Equation <ref>, which penalizes the discrepancy between the input and its reconstruction; here n is the number of data points in a batch, f_1 and f_2 are activation functions, W_i and W_j are weight matrices, and b_i and b_j are biases.
h = f_1(W_i X + b_i)
X̂ = f_2(W_j h + b_j)
Loss_AE = 1/n Σ_{i=1}^{n} |X_i − X̂_i|
The latent embeddings obtained from the encoder module contain information about the protected attribute, as they are generated from features that are correlated with the protected attribute.
We experiment with another neural network architecture, the Transformer, with a similar goal. Transformers utilize a self-attention <cit.> mechanism to learn the embeddings. To compute self-attention, three vectors, Query (Q), Key (K), and Value (V), are first learned for each feature in the input, and the attention is computed as shown in Equation <ref>. The self-attended embeddings h are then obtained as shown in Equation <ref>.
Attention(Q, K, V) = softmax(QK^T/√(d_k))V
head_i = Attention(Q W^Q_i, K W^K_i, V W^V_i)
h = Concat(head_1, ..., head_h) W^O
p = Softmax(MLP(h))
Loss_T = -∑_{c=1}^{M} y_c log(p_c)
We train the Transformer on a self-supervised learning task called masked language modelling (MLM). Towards this, 15% of the input data fields are chosen randomly and replaced with a mask token. The Transformer then processes the samples to produce contextual row embeddings. The MLM head, made up of MLP layers, reconstructs the original fields from these row embeddings by predicting the class probabilities p as shown in Equation <ref>. The model is trained end-to-end by minimizing the cross-entropy loss of Equation <ref>, calculated only on the masked fields. The latent embeddings h obtained from the Transformer contain information about the protected attribute due to the Transformer's inherent ability to learn inter-feature relationships.
Further, to ensure that the generated embeddings do not correspond to the true labels of the downstream classification task, we train the above-described neural network models with an additional KL-divergence loss (a sketch of this stage is given below).
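As a concrete illustration of Stage-1, below is a minimal PyTorch sketch of the auto-encoder variant with the auxiliary MLP head. The layer sizes follow the implementation details reported later (one hidden layer, Tanh on the encoder and ReLU on the decoder output); the variable names and the uniform target distribution used for the KL term are our own assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProxyAutoEncoder(nn.Module):
    def __init__(self, in_dim, latent_dim=16, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, latent_dim), nn.Tanh())
        self.decoder = nn.Sequential(nn.Linear(latent_dim, in_dim), nn.ReLU())
        self.head = nn.Linear(latent_dim, n_classes)  # MLP head for the KL term

    def forward(self, x):
        h = self.encoder(x)          # latent representation (Eq. 1)
        x_hat = self.decoder(h)      # reconstruction (Eq. 2)
        return h, x_hat, self.head(h)

def training_step(model, x, kl_weight=0.1):
    h, x_hat, logits = model(x)
    recon = (x - x_hat).abs().mean()  # Loss_AE (Eq. 3)
    # KL term: push the head's predictive distribution toward uniform so the
    # latent space does not simply encode the downstream task label
    # (the uniform target is our assumption).
    log_p = F.log_softmax(logits, dim=1)
    uniform = torch.full_like(log_p, 1.0 / logits.size(1))
    kl = F.kl_div(log_p, uniform, reduction="batchmean")
    return recon + kl_weight * kl
```

Stage-2 then reduces to clustering the latent matrix, e.g. `KMeans(n_clusters=2).fit_predict(H)` with scikit-learn, which yields binary proxy labels for the favorable and unfavorable groups.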
KL-divergence loss has historically been used in classification tasks to encourage separation between different labels. It is based on the information-theoretic Kullback-Leibler (KL) divergence, which measures the difference between two probability distributions. Introducing the KL-divergence loss keeps the embeddings from aligning with the downstream task labels, leading to embeddings that contain information related to the protected attribute rather than the downstream task labels.
To implement the KL divergence in the proposed neural network architectures, a multi-layer perceptron (MLP) head is applied to the generated embedding vectors: in the auto-encoder, the MLP is applied on top of the latent vectors, while in the Transformer, it is fed the contextual vector h. The embedding vector is fed into the MLP, which generates a probability distribution; the KL divergence between this distribution and the target distribution is then calculated, and this KL loss is used to optimize the MLP weights and biases.
Stage-2: In the second stage, as shown in Figure <ref>, we use an unsupervised clustering algorithm to identify the different groups in the embeddings obtained from the previous stage. Clustering is a subjective statistical analysis, and many algorithms are suitable depending on the dataset and problem type. In this paper, we experiment with centroid-based and hierarchical clustering algorithms, specifically K-means, hierarchical clustering, and BIRCH, to obtain two clusters that serve as proxies for the favorable and unfavorable groups. We further evaluate the proxies generated by each clustering algorithm on bias mitigation.
§.§ Bias Mitigation Through the Generated Proxy Sensitive Attribute
Once the proxy labels corresponding to the favorable and unfavorable groups are obtained, we pass them as input to existing bias mitigation algorithms. In this paper, we experiment with two widely used benchmarks for bias mitigation: Adversarial Debiasing and Fair Mixup. Both algorithms require labels for the protected attribute as input; we instead pass the proxy for the protected attribute obtained from the proposed pipeline to de-bias the model. In the results section, we compare bias mitigation performance using the true labels against using the proxy labels for the protected attributes. For fairness evaluation, however, we use the true sensitive labels. Figure <ref> shows the pipeline for bias mitigation and fairness evaluation.
§ EXPERIMENTAL DETAILS
§.§ Dataset Description
We evaluate the proposed pipeline on the Adult Income dataset, generated from the 1994 US Census. The objective is to predict a person's income level from individual attributes. The target variable Y takes a binary value indicating salary ≤ 50K or salary > 50K. The dataset consists of 14 independent attributes, and the field 'Gender', which takes the two values 'Male' and 'Female', is considered the sensitive attribute in our case. The dataset is imbalanced: only 24% of the samples belong to class 1, and of these only 15.13% are females. The dataset consists of 48,842 independent rows. During the training of our model, we do not take into account the information provided by the 'Gender' attribute (a sketch of this setup is shown below).
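A minimal sketch of this setup: load the data, keep the true sensitive labels aside for evaluation only, and train on features that exclude the sensitive column. The file name and column names are assumptions and may differ in the actual release.

```python
import pandas as pd

df = pd.read_csv("adult.csv")                # hypothetical file name
y = (df["income"] == ">50K").astype(int)     # binary target
s = (df["gender"] == "Female").astype(int)   # true sensitive labels, evaluation only
X = df.drop(columns=["income", "gender"])    # the model never sees 'gender'
X = pd.get_dummies(X)                        # one-hot encode categorical fields
```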
§.§ Implementation Details
We implemented the proposed pipeline in the PyTorch framework. All experiments were performed on Ubuntu 16.04.7 with an Nvidia GeForce GTX 1080Ti GPU; 16 GB of RAM was utilized for the experiments on the Adult Income dataset.
In Stage-1, we experimented with two embedding-generator networks, auto-encoders and Transformers. The auto-encoders consist of one hidden layer; the hidden layer's output receives ReLU activation while its input receives Tanh activation. The model was trained for 200 epochs with a batch size of 32 and a learning rate of 0.001 using the Adam optimizer. The Transformer architecture contains only the encoder module: three encoder blocks with six attention heads each, where each encoder block contains a feed-forward network with 128 hidden units. We used the Transformer implementation provided in the Hugging Face library. In Stage-2, we experimented with the K-means, BIRCH, and hierarchical clustering algorithms to generate proxy labels for protected attributes, using the implementations in Python's scikit-learn library.
We use all data samples to train the proposed pipeline and obtain the proxy labels for the sensitive attributes. We then randomly split the dataset into an 80-20 train-test split and train the classification model with the bias mitigation algorithms on the train set. We employ existing bias mitigation algorithms, namely Adversarial Debiasing <cit.>, provided in the IBM AIF360 toolkit, and Fair Mixup <cit.>, an open-source solution accessible on GitHub. During training, we use the generated proxy instead of the actual labels of the protected attribute, and we assess performance with respect to the protected attribute's actual labels.
§.§ Fairness Metrics
Fairness in machine learning measures the degree of disparate treatment of different groups (e.g., female vs. male) or, in the case of individual fairness, emphasizes that similar individuals should be treated similarly. Various metrics exist in the literature to quantify fairness, each focusing on a different aspect of it. We use two popular metrics: Statistical Parity Difference (SPD) and Equalized Odds Difference (EOD) (a short sketch of how both can be computed is given below).
Statistical Parity Difference (SPD): A classifier is considered fair if the prediction Ŷ on input features X is independent of the protected attribute S. The underlying idea is that each demographic group should have the same chance of a positive outcome <cit.>.
SPD = |P(Ŷ = 1 | S = 0) − P(Ŷ = 1 | S = 1)|
Equalized Odds Difference (EOD): An algorithm is considered fair if the predictor Ŷ has equal false positive rates and false negative rates across the privileged and unprivileged groups. This constraint enforces that accuracy is equally high in all demographics, since the rates of positive and negative classification are equal across the groups. The notion of fairness here is that the chances of being correctly or incorrectly classified as positive should be equal for every group. Below, FPR and FNR denote the absolute differences in false positive and false negative rates between the groups:
FPR = |P(Ŷ = 1 | S = 1, Y = 0) − P(Ŷ = 1 | S = 0, Y = 0)|
FNR = |P(Ŷ = 0 | S = 1, Y = 1) − P(Ŷ = 0 | S = 0, Y = 1)|
EOD = (FPR + FNR) / 2
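To make the two metrics concrete, here is a small sketch of how they can be computed from binary predictions, ground truth, and sensitive labels (0/1 NumPy arrays; the function and variable names are ours):

```python
import numpy as np

def spd(y_hat, s):
    # |P(Ŷ=1 | S=0) − P(Ŷ=1 | S=1)|
    return abs(y_hat[s == 0].mean() - y_hat[s == 1].mean())

def eod(y_hat, y, s):
    # Mean of the FPR and FNR gaps between the two groups.
    fpr = lambda g: y_hat[(s == g) & (y == 0)].mean()      # P(Ŷ=1 | S=g, Y=0)
    fnr = lambda g: 1 - y_hat[(s == g) & (y == 1)].mean()  # P(Ŷ=0 | S=g, Y=1)
    return (abs(fpr(1) - fpr(0)) + abs(fnr(1) - fnr(0))) / 2
```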
§ RESULTS
In this section, we empirically assess the effectiveness of the proxy-sensitive labels obtained through the proposed pipeline. Towards this end, we pass the proxy-sensitive labels through the state-of-the-art bias mitigation methods Adversarial Debiasing and Fair Mixup and evaluate the fairness and classification performance on the public UCI Adult Income dataset. We report classification performance as Average Precision and fairness as Statistical Parity Difference (SPD) and Equalized Odds Difference (EOD). Both Fair Mixup and Adversarial Debiasing require protected attribute information to de-bias the models; to form the baseline, we pass the true labels of the protected attribute Gender through these bias mitigation algorithms. Fair Mixup has a trade-off parameter between fairness and accuracy, called lambda, which we set to 0.5 for SPD and 2.5 for EOD.
Table <ref> compares the classification and fairness performance of models trained with the bias mitigation algorithms (Fair Mixup and Adversarial Debiasing) against a classifier trained without any bias mitigation. From Table <ref>, we observe that the model trained without any bias mitigation algorithm produces an average precision of 0.8 with SPD and EOD values of 0.2 and 0.11, respectively. With model debiasing, we see an improvement in the SPD and EOD values, demonstrating the efficacy of the bias mitigation algorithms in achieving fairness.
In this paper, we concentrate on a more practical experimental setup in which the protected attributes are assumed to be unavailable during model training. Here, we use the proxy generated by our pipeline, rather than the true labels of the protected attribute, as input to the existing bias mitigation techniques discussed above, to test the efficacy of the generated proxy in model debiasing. With proxy-sensitive labels, we aim to achieve performance similar to the baselines shown in Table <ref>.
We experimented with several algorithms in both stages of proxy-sensitive label generation: Autoencoder and Transformer architectures for embedding generation in Stage-1, and the K-means, hierarchical, and BIRCH clustering algorithms in Stage-2. Figure <ref> shows the performance of all configurations when the Autoencoder is used for embedding generation, and Figure <ref> shows the performance when the Transformer is utilized. From Figure <ref>, we observe that the proxy generated by hierarchical clustering produces the best results with the Adversarial Debiasing algorithm: an absolute improvement of 0.14% in SPD with comparable average precision and EOD when proxy labels are used instead of the true labels of the sensitive attribute Gender. Figure <ref> shows that, with the Fair Mixup algorithm, the best-performing configuration with proxy-sensitive labels achieves an average precision of 0.77 with EOD and SPD values of 0.05 and 0.07, comparable to the model's performance with the true protected attribute. With the Adversarial Debiasing algorithm, on the other hand, the embeddings obtained from the Transformer lead to a 1% absolute lift in average precision while also improving the fairness metrics compared to the baseline model trained on the true sensitive labels.
Using the Transformer to learn embeddings in the proxy-generation phase thus produces a significant lift in fairness. The Transformer's inherent ability to learn inter-feature relationships enables it to generate informative embeddings for tabular data, as supported by the experimental results in Figures <ref> and <ref> on the Adult Income dataset. However, the choice of embedding architecture and clustering algorithm is dataset-dependent.
§.§ Learned Embedding Analysis
The performance evaluation discussed above indicates that proxy-sensitive labels can substitute for the true labels of protected attributes in existing bias mitigation algorithms. In this section, we analyze the quality of the embeddings learned in Stage-1 of the proposed pipeline through an auxiliary prediction task, similar to <cit.>. To this end, we train three linear classifiers, C_Proxy, C_True and C_Downstream, that take the embeddings as input and predict the proxy attribute, the true protected attribute, and the target class labels, respectively. We then compare the learned weight vector of C_Proxy with those of C_True and C_Downstream using cosine similarity.
The cosine similarity between the weight vectors of C_Proxy and C_True is 0.25, while that between C_Proxy and C_Downstream is 0.02. A high cosine similarity between the weights of C_Proxy and C_True indicates that the embedding contains a substantial amount of information about the true protected attribute. In contrast, the low cosine similarity between the weights of C_Proxy and C_Downstream indicates that the clusters formed over the embeddings are not aligned with the downstream prediction task.
§ CONCLUSION
Bias mitigation without access to sensitive attributes is a challenging problem that has received little attention in the literature. Numerous studies exist on fairness in AI, but most assume that protected attributes are accessible at training time, which limits their use in modeling scenarios where protected labels are unavailable. To reduce this dependency, we propose a novel pipeline that leverages the bias information inherent in the non-protected attributes to obtain proxy labels for the protected attributes. These proxies are then passed as input to current state-of-the-art bias mitigation algorithms in place of the true labels of the sensitive attribute. Experimental results demonstrate that models trained using the generated proxy labels achieve satisfactory bias metrics, such as SPD and EOD, with little or no reduction in detection rate. In the future, we will continue to advance this research by investigating more effective methods of incorporating additional bias information into the embedding to improve the proxy labels, and we will validate the compatibility of the proposed approach with bias mitigation algorithms beyond those studied in this work.
http://arxiv.org/abs/2312.15994v1
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }